[ { "msg_contents": "> We used Postgresql 7.1 under Linux and recently we have changed it to\n> Postgresql 8.1 under Windows XP. Our application uses ODBC and when we\n> try to get some information from the server throw a TCP connection,\nit's\n> very slow. We have also tried it using psql and pgAdmin III, and we\nget\n> the same results. If we try it locally, it runs much faster.\n> \n> We have been searching the mailing lists, we have found many people\nwith\n> the same problem, but we haven't found any final solution.\n> \n> How can we solve this? Any help will be appreciated.\n> \n> Thanks in advance.\n> \nby any chance are you working with large tuples/columns (long text,\nbytea, etc)?\n\nAlso please define slow.\n\nMerlin\n", "msg_date": "Fri, 2 Dec 2005 11:13:33 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Network permormance under windows" }, { "msg_contents": "\nDear Merlin,\n\nFor instance, we have this table (with 22900 tuples):\n\nCREATE TABLE tbl_empresa\n(\nid_empresa int4 NOT NULL DEFAULT nextval(('seq_empresa'::text)::regclass),\nref_poblacio int4 NOT NULL,\nnom varchar(50) NOT NULL,\nnif varchar(12),\ncarrer varchar(50),\ntelefon varchar(13),\nfax varchar(13),\nemail varchar(50),\nlab_materials int2 DEFAULT 0,\nweb varchar(50),\nref_empresa int4,\nref_classificacio_empresa int4,\nref_sector_empresa int4,\ncontrol int2,\norigen_volcat int2,\ndata_modificacio date,\nplantilla int4,\ntamany int2,\nautoritzacio_email int2,\nref_estat_empresa int2,\nCONSTRAINT tbl_clients_pkey PRIMARY KEY (id_empresa),\nCONSTRAINT fk_tbl_empresa_ref_classificacio_emp FOREIGN KEY \n(ref_classificacio_empresa)\nREFERENCES tbl_classificacio_empresa (id_classificacio_empresa) MATCH \nSIMPLE\nON UPDATE RESTRICT ON DELETE RESTRICT,\nCONSTRAINT fk_tbl_empresa_ref_empresa FOREIGN KEY (ref_empresa)\nREFERENCES tbl_empresa (id_empresa) MATCH SIMPLE\nON UPDATE RESTRICT ON DELETE RESTRICT,\nCONSTRAINT fk_tbl_empresa_ref_estat_emp FOREIGN KEY (ref_estat_empresa)\nREFERENCES tbl_estat_empresa (id_estat_empresa) MATCH SIMPLE\nON UPDATE RESTRICT ON DELETE RESTRICT,\nCONSTRAINT fk_tbl_empresa_ref_poblacio FOREIGN KEY (ref_poblacio)\nREFERENCES tbl_poblacions (id_poblacio) MATCH SIMPLE\nON UPDATE RESTRICT ON DELETE RESTRICT,\nCONSTRAINT fk_tbl_empresa_ref_sector_emp FOREIGN KEY (ref_sector_empresa)\nREFERENCES tbl_sector_empresa (id_sector_empresa) MATCH SIMPLE\nON UPDATE RESTRICT ON DELETE RESTRICT\n)\nWITH OIDS;\n\nWhen we select all data in local machine, we obtain results in 2-3 \nseconds aprox. In remote connections:\n\nPostgresql 7.1 usign pgAdminII:\nNetwork traffic generated with remote applications is about 77-80% in a \n10Mb connection.\n6 seconds aprox.\n\nPostgresql 8.1 usign pgAdminIII:\nNetwork traffic generated with remote applications is about 2-4% in a \n10Mb connection.\n12 seconds or more...\n\nI feel that is a problem with TCP_NODELAY of socket options... but I \ndon't know.\n\nJosep Maria\n\n\nEn/na Merlin Moncure ha escrit:\n\n>>We used Postgresql 7.1 under Linux and recently we have changed it to\n>>Postgresql 8.1 under Windows XP. Our application uses ODBC and when we\n>>try to get some information from the server throw a TCP connection,\n>> \n>>\n>it's\n> \n>\n>>very slow. We have also tried it using psql and pgAdmin III, and we\n>> \n>>\n>get\n> \n>\n>>the same results. 
If we try it locally, it runs much faster.\n>>\n>>We have been searching the mailing lists, we have found many people\n>> \n>>\n>with\n> \n>\n>>the same problem, but we haven't found any final solution.\n>>\n>>How can we solve this? Any help will be appreciated.\n>>\n>>Thanks in advance.\n>>\n>> \n>>\n>by any chance are you working with large tuples/columns (long text,\n>bytea, etc)?\n>\n>Also please define slow.\n>\n>Merlin\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \n\nJosep Maria Pinyol i Fontseca\nResponsable �rea de programaci�\n\nENDEPRO - Enginyeria de programari\nPasseig Anselm Clav�, 19 Bx. 08263 Call�s (Barcelona)\nTel. +34 936930018 - Mob. +34 600310755 - Fax. +34 938361994\[email protected] - http://www.endepro.com\n\n\nAquest missatge i els documents en el seu cas adjunts, \nes dirigeixen exclusivament al seu destinatari i poden contenir \ninformaci� reservada i/o CONFIDENCIAL, us del qual no est� \nautoritzat ni la divulgaci� del mateix, prohibit per la legislaci� \nvigent (Llei 32/2002 SSI-CE). Si ha rebut aquest missatge per error, \nli demanem que ens ho comuniqui immediatament per la mateixa via o \nb� per tel�fon (+34936930018) i procedeixi a la seva destrucci�. \nAquest e-mail no podr� considerar-se SPAM.\n\nEste mensaje, y los documentos en su caso anexos, \nse dirigen exclusivamente a su destinatario y pueden contener \ninformaci�n reservada y/o CONFIDENCIAL cuyo uso no \nautorizado o divulgaci�n est� prohibida por la legislaci�n \nvigente (Ley 32/2002 SSI-CE). Si ha recibido este mensaje por error, \nle rogamos que nos lo comunique inmediatamente por esta misma v�a o \npor tel�fono (+34936930018) y proceda a su destrucci�n. \nEste e-mail no podr� considerarse SPAM.\n\nThis message and the enclosed documents are directed exclusively \nto its receiver and can contain reserved and/or confidential \ninformation, from which use isn�t allowed its divulgation, forbidden \nby the current legislation (Law 32/2002 SSI-CE). If you have received \nthis message by mistake, we kindly ask you to communicate it to us \nright away by the same way or by phone (+34936930018) and destruct it. \nThis e-mail can�t be considered as SPAM. \n\n", "msg_date": "Fri, 02 Dec 2005 18:24:07 +0100", "msg_from": "Josep Maria Pinyol Fontseca <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network permormance under windows" }, { "msg_contents": "we experienced the same. had 2 win2003 servers - www and db connected to the \nsame router through 100mbit. the performance was quite bad. 
now we run the \ndb on the same machine as the web and everything runs smooth.\n\ncheers,\nthomas\n\n\n----- Original Message ----- \nFrom: \"Josep Maria Pinyol Fontseca\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Friday, December 02, 2005 6:24 PM\nSubject: Re: [PERFORM] Network permormance under windows\n\n\n>\n> Dear Merlin,\n>\n> For instance, we have this table (with 22900 tuples):\n>\n> CREATE TABLE tbl_empresa\n> (\n> id_empresa int4 NOT NULL DEFAULT nextval(('seq_empresa'::text)::regclass),\n> ref_poblacio int4 NOT NULL,\n> nom varchar(50) NOT NULL,\n> nif varchar(12),\n> carrer varchar(50),\n> telefon varchar(13),\n> fax varchar(13),\n> email varchar(50),\n> lab_materials int2 DEFAULT 0,\n> web varchar(50),\n> ref_empresa int4,\n> ref_classificacio_empresa int4,\n> ref_sector_empresa int4,\n> control int2,\n> origen_volcat int2,\n> data_modificacio date,\n> plantilla int4,\n> tamany int2,\n> autoritzacio_email int2,\n> ref_estat_empresa int2,\n> CONSTRAINT tbl_clients_pkey PRIMARY KEY (id_empresa),\n> CONSTRAINT fk_tbl_empresa_ref_classificacio_emp FOREIGN KEY \n> (ref_classificacio_empresa)\n> REFERENCES tbl_classificacio_empresa (id_classificacio_empresa) MATCH \n> SIMPLE\n> ON UPDATE RESTRICT ON DELETE RESTRICT,\n> CONSTRAINT fk_tbl_empresa_ref_empresa FOREIGN KEY (ref_empresa)\n> REFERENCES tbl_empresa (id_empresa) MATCH SIMPLE\n> ON UPDATE RESTRICT ON DELETE RESTRICT,\n> CONSTRAINT fk_tbl_empresa_ref_estat_emp FOREIGN KEY (ref_estat_empresa)\n> REFERENCES tbl_estat_empresa (id_estat_empresa) MATCH SIMPLE\n> ON UPDATE RESTRICT ON DELETE RESTRICT,\n> CONSTRAINT fk_tbl_empresa_ref_poblacio FOREIGN KEY (ref_poblacio)\n> REFERENCES tbl_poblacions (id_poblacio) MATCH SIMPLE\n> ON UPDATE RESTRICT ON DELETE RESTRICT,\n> CONSTRAINT fk_tbl_empresa_ref_sector_emp FOREIGN KEY (ref_sector_empresa)\n> REFERENCES tbl_sector_empresa (id_sector_empresa) MATCH SIMPLE\n> ON UPDATE RESTRICT ON DELETE RESTRICT\n> )\n> WITH OIDS;\n>\n> When we select all data in local machine, we obtain results in 2-3 seconds \n> aprox. In remote connections:\n>\n> Postgresql 7.1 usign pgAdminII:\n> Network traffic generated with remote applications is about 77-80% in a \n> 10Mb connection.\n> 6 seconds aprox.\n>\n> Postgresql 8.1 usign pgAdminIII:\n> Network traffic generated with remote applications is about 2-4% in a 10Mb \n> connection.\n> 12 seconds or more...\n>\n> I feel that is a problem with TCP_NODELAY of socket options... but I don't \n> know.\n>\n> Josep Maria\n>\n>\n> En/na Merlin Moncure ha escrit:\n>\n>>>We used Postgresql 7.1 under Linux and recently we have changed it to\n>>>Postgresql 8.1 under Windows XP. Our application uses ODBC and when we\n>>>try to get some information from the server throw a TCP connection,\n>>>\n>>it's\n>>\n>>>very slow. We have also tried it using psql and pgAdmin III, and we\n>>>\n>>get\n>>\n>>>the same results. If we try it locally, it runs much faster.\n>>>\n>>>We have been searching the mailing lists, we have found many people\n>>>\n>>with\n>>\n>>>the same problem, but we haven't found any final solution.\n>>>\n>>>How can we solve this? 
Any help will be appreciated.\n>>>\n>>>Thanks in advance.\n>>>\n>>>\n>>by any chance are you working with large tuples/columns (long text,\n>>bytea, etc)?\n>>\n>>Also please define slow.\n>>\n>>Merlin\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: Don't 'kill -9' the postmaster\n>>\n>\n>\n> -- \n>\n> Josep Maria Pinyol i Fontseca\n> Responsable �rea de programaci�\n>\n> ENDEPRO - Enginyeria de programari\n> Passeig Anselm Clav�, 19 Bx. 08263 Call�s (Barcelona)\n> Tel. +34 936930018 - Mob. +34 600310755 - Fax. +34 938361994\n> [email protected] - http://www.endepro.com\n>\n>\n> Aquest missatge i els documents en el seu cas adjunts, es dirigeixen \n> exclusivament al seu destinatari i poden contenir informaci� reservada i/o \n> CONFIDENCIAL, us del qual no est� autoritzat ni la divulgaci� del mateix, \n> prohibit per la legislaci� vigent (Llei 32/2002 SSI-CE). Si ha rebut \n> aquest missatge per error, li demanem que ens ho comuniqui immediatament \n> per la mateixa via o b� per tel�fon (+34936930018) i procedeixi a la seva \n> destrucci�. Aquest e-mail no podr� considerar-se SPAM.\n>\n> Este mensaje, y los documentos en su caso anexos, se dirigen \n> exclusivamente a su destinatario y pueden contener informaci�n reservada \n> y/o CONFIDENCIAL cuyo uso no autorizado o divulgaci�n est� prohibida por \n> la legislaci�n vigente (Ley 32/2002 SSI-CE). Si ha recibido este mensaje \n> por error, le rogamos que nos lo comunique inmediatamente por esta misma \n> v�a o por tel�fono (+34936930018) y proceda a su destrucci�n. Este e-mail \n> no podr� considerarse SPAM.\n>\n> This message and the enclosed documents are directed exclusively to its \n> receiver and can contain reserved and/or confidential information, from \n> which use isn�t allowed its divulgation, forbidden by the current \n> legislation (Law 32/2002 SSI-CE). If you have received this message by \n> mistake, we kindly ask you to communicate it to us right away by the same \n> way or by phone (+34936930018) and destruct it. This e-mail can�t be \n> considered as SPAM.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n", "msg_date": "Fri, 2 Dec 2005 21:26:49 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network permormance under windows" }, { "msg_contents": "\n\"Josep Maria Pinyol Fontseca\" <[email protected]> wrote\n>\n> When we select all data in local machine, we obtain results in 2-3 seconds \n> aprox. In remote connections:\n>\n> Postgresql 7.1 usign pgAdminII:\n> Network traffic generated with remote applications is about 77-80% in a \n> 10Mb connection.\n> 6 seconds aprox.\n>\n> Postgresql 8.1 usign pgAdminIII:\n> Network traffic generated with remote applications is about 2-4% in a 10Mb \n> connection.\n> 12 seconds or more...\n>\n\nHave you tried to use psql? And how you \"select all data\" - by \"select \ncount(*)\" or \"select *\"?\n\nRegards,\nQingqing \n\n\n", "msg_date": "Fri, 2 Dec 2005 15:43:34 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network permormance under windows" }, { "msg_contents": "\nYes, with psql, pgAdminIII and our application with ODBC I experiment \nthe same situation... 
the sentences that I execute are like \"select * \n...\" or similar like this.\n\n\nQingqing Zhou wrote:\n\n>\"Josep Maria Pinyol Fontseca\" <[email protected]> wrote\n> \n>\n>>When we select all data in local machine, we obtain results in 2-3 seconds \n>>aprox. In remote connections:\n>>\n>>Postgresql 7.1 usign pgAdminII:\n>>Network traffic generated with remote applications is about 77-80% in a \n>>10Mb connection.\n>>6 seconds aprox.\n>>\n>>Postgresql 8.1 usign pgAdminIII:\n>>Network traffic generated with remote applications is about 2-4% in a 10Mb \n>>connection.\n>>12 seconds or more...\n>>\n>> \n>>\n>\n>Have you tried to use psql? And how you \"select all data\" - by \"select \n>count(*)\" or \"select *\"?\n>\n>Regards,\n>Qingqing \n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \nJosep Maria Pinyol i Fontseca\nResponsable �rea de programaci�\n\nENDEPRO - Enginyeria de programari\nPasseig Anselm Clav�, 19 Bx. 08263 Call�s (Barcelona)\nTel. +34 936930018 - Mob. +34 600310755 - Fax. +34 938361994\[email protected] - http://www.endepro.com\n\n\nAquest missatge i els documents en el seu cas adjunts,\nes dirigeixen exclusivament al seu destinatari i poden contenir\ninformaci� reservada i/o CONFIDENCIAL, us del qual no est�\nautoritzat ni la divulgaci� del mateix, prohibit per la legislaci�\nvigent (Llei 32/2002 SSI-CE). Si ha rebut aquest missatge per error,\nli demanem que ens ho comuniqui immediatament per la mateixa via o\nb� per tel�fon (+34936930018) i procedeixi a la seva destrucci�.\nAquest e-mail no podr� considerar-se SPAM.\n\nEste mensaje, y los documentos en su caso anexos,\nse dirigen exclusivamente a su destinatario y pueden contener\ninformaci�n reservada y/o CONFIDENCIAL cuyo uso no\nautorizado o divulgaci�n est� prohibida por la legislaci�n\nvigente (Ley 32/2002 SSI-CE). Si ha recibido este mensaje por error,\nle rogamos que nos lo comunique inmediatamente por esta misma v�a o\npor tel�fono (+34936930018) y proceda a su destrucci�n.\nEste e-mail no podr� considerarse SPAM.\n\nThis message and the enclosed documents are directed exclusively\nto its receiver and can contain reserved and/or confidential\ninformation, from which use isn�t allowed its divulgation, forbidden\nby the current legislation (Law 32/2002 SSI-CE). If you have received\nthis message by mistake, we kindly ask you to communicate it to us\nright away by the same way or by phone (+34936930018) and destruct it.\nThis e-mail can�t be considered as SPAM.\n\n", "msg_date": "Fri, 02 Dec 2005 22:08:27 +0100", "msg_from": "Josep Maria Pinyol Fontseca <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network permormance under windows" } ]
[ { "msg_contents": "> \n> That was the command used to restore a database\n> \n> pg_restore.exe -i -h localhost -p 5432 -U postgres -d temp2 -v\n> \"D:\\d\\temp.bkp\"\n> \n> The database was created before using LATIN1 charset\n> \n> With 100 rows you can´t feel the test, then I decided send the whole\n> table.\n> \n> Very Thanks\n> \n> Franklin Haut\n\nHow are you dumping out your archive? I confirmed unreasonably slow dump with pg_dump -Z temp2 > temp2.bkp on windows 2000 server. I normally use bzip to compress my dumps.\n\nCan you measure time to dump uncompressed and also with bzip and compare?\n\nMerlin\n", "msg_date": "Fri, 2 Dec 2005 11:28:50 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump slow" } ]
[ { "msg_contents": "> How are you dumping out your archive? I confirmed unreasonably slow\ndump\n> with pg_dump -Z temp2 > temp2.bkp on windows 2000 server. I normally\nuse\n> bzip to compress my dumps.\n> \n> Can you measure time to dump uncompressed and also with bzip and\ncompare?\n> \n> Merlin\n\noops...cancel that. I was dumping the wrong database. Dumping your\ntable from localhost on a dual Opteron win2k server took a few seconds\nwith Z=0 and Z=9.\n\nMerlin\n", "msg_date": "Fri, 2 Dec 2005 11:33:18 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump slow" } ]
[ { "msg_contents": "I installed another drive in my linux pc in an attempt to improve\nperformance\n\non a large COPY to a table with a geometry index.\n\n \n\nBased on previous discussion, it seems there are three things competing for\nthe hard\n\ndrive:\n\n \n\n1) the input data file\n\n2) the pg table\n\n3) the WAL\n\n \n\nWhat is the best way to distribute these among two drives? From Tom's\ncomments\n\nI would think that the pg table and the WAL should be separate. Does it\nmatter where\n\nthe input data is?\n\n\n\n\n\n\n\n\n\n\nI installed another drive in my linux pc in an attempt to\nimprove performance\non a large COPY to a table with a geometry index.\n \nBased on previous discussion, it seems there are three\nthings competing for the hard\ndrive:\n \n1)      \nthe input data file\n2)      \nthe pg table\n3)      \nthe WAL\n \nWhat is the best way to distribute these among two\ndrives?  From Tom’s comments\nI would think that the pg table and the WAL should be\nseparate.  Does it matter where\nthe input data is?", "msg_date": "Fri, 2 Dec 2005 13:58:13 -0500", "msg_from": "\"Rick Schumeyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "two disks - best way to use them?" }, { "msg_contents": "At 01:58 PM 12/2/2005, Rick Schumeyer wrote:\n>I installed another drive in my linux pc in an attempt to improve performance\n>on a large COPY to a table with a geometry index.\n>\n>Based on previous discussion, it seems there are three things \n>competing for the hard drive:\n>\n>1) the input data file\n>2) the pg table\n>3) the WAL\n>\n>What is the best way to distribute these among two drives? From \n>Tom's comments\n>I would think that the pg table and the WAL should be \n>separate. Does it matter where the input data is?\n\nBest is to have 3 HD or HD sets, one for each of the above.\n\nWith only 2, and assuming the input file is too large to fit \ncompletely into RAM at once, I'd test to see whether:\na= input on one + pg table & WAL on the other, or\nb= WAL on one + pg table & input file on the other\nis best.\n\nIf the input file can be made 100% RAM resident, then use\nc= pg table on one + WAL and input file on the other.\n\nThe big goal here is to minimize HD head seeks.\n\nRon\n\n\n", "msg_date": "Fri, 02 Dec 2005 15:05:30 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "Rick Schumeyer wrote:\n> 1) the input data file\n> 2) the pg table\n> 3) the WAL\n\nAnd journal of file system, especially if you not set \"noatime\" mount \noption. WAL and file system journal like to make sync.\n\nIMHO: on first disk (raid mirror:)) I place /, pg_table and file system \njournal from second disk. On second /var and pg tables. Thus first disc \nis synced time to time, second not.\n-- \nOlleg Samoylov\n", "msg_date": "Mon, 05 Dec 2005 16:56:55 +0300", "msg_from": "Olleg Samoylov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "Ron wrote:\n\n> At 01:58 PM 12/2/2005, Rick Schumeyer wrote:\n> \n>> I installed another drive in my linux pc in an attempt to improve \n>> performance\n>> on a large COPY to a table with a geometry index.\n>>\n>> Based on previous discussion, it seems there are three things \n>> competing for the hard drive:\n>>\n>> 1) the input data file\n>> 2) the pg table\n>> 3) the WAL\n>>\n>> What is the best way to distribute these among two drives? 
From Tom's \n>> comments\n>> I would think that the pg table and the WAL should be separate. Does \n>> it matter where the input data is?\n> \n> \n> Best is to have 3 HD or HD sets, one for each of the above.\n> \n> With only 2, and assuming the input file is too large to fit completely \n> into RAM at once, I'd test to see whether:\n> a= input on one + pg table & WAL on the other, or\n> b= WAL on one + pg table & input file on the other\n> is best.\n> \n> If the input file can be made 100% RAM resident, then use\n> c= pg table on one + WAL and input file on the other.\n> \n> The big goal here is to minimize HD head seeks.\n\n(noob question incoming)\n\nSection 26.4 WAL Internals\nhttp://www.postgresql.org/docs/8.1/interactive/wal-internals.html\n\nThis seems to be the applicable chapter. They talk about creating a \nsymlink for the data/pg_xlog folder to point at another disk set.\n\nIf I have (2) RAID1 sets with LVM2, can I instead create a logical \nvolume on the 2nd disk set and just mount data/pg_xlog to point at the \nlogical volume on the 2nd disk set?\n\nFor example, I have an LVM on my primary mirror called 'pgsql'. And \nI've created a 2nd LVM on my secondary mirror called 'pgxlog'. These \nare mounted as:\n\n/dev/vgraida/pgsql on /var/lib/postgresql type ext3 (rw,noatime)\n\n/dev/vgraidb/pgxlog on /var/lib/postgresql/data/pg_xlog type ext3 \n(rw,noatime)\n\n From the application's P.O.V., it's the same thing, right? (It seems to \nbe working, I'm just trying to double-check that I'm not missing something.)\n\n\n", "msg_date": "Mon, 05 Dec 2005 10:48:24 -0500", "msg_from": "Thomas Harold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "On Mon, 5 Dec 2005, Thomas Harold wrote:\n\n> (noob question incoming)\n>\n> Section 26.4 WAL Internals\n> http://www.postgresql.org/docs/8.1/interactive/wal-internals.html\n>\n> This seems to be the applicable chapter. They talk about creating a symlink \n> for the data/pg_xlog folder to point at another disk set.\n>\n> If I have (2) RAID1 sets with LVM2, can I instead create a logical volume on \n> the 2nd disk set and just mount data/pg_xlog to point at the logical volume \n> on the 2nd disk set?\n>\n> For example, I have an LVM on my primary mirror called 'pgsql'. And I've \n> created a 2nd LVM on my secondary mirror called 'pgxlog'. These are mounted \n> as:\n>\n> /dev/vgraida/pgsql on /var/lib/postgresql type ext3 (rw,noatime)\n>\n> /dev/vgraidb/pgxlog on /var/lib/postgresql/data/pg_xlog type ext3 \n> (rw,noatime)\n>\n> From the application's P.O.V., it's the same thing, right? (It seems to be \n> working, I'm just trying to double-check that I'm not missing something.)\n>\n\nthe application can' tell the difference, but the reason for seperating \nthem isn't for the application, it's so that different pieces of hardware \ncan work on different things without having to bounce back and forth \nbetween them.\n\nuseing the same drives with LVM doesn't achieve this goal.\n\nthe problem is that the WAL is doing a LOT of writes, and postgres waits \nuntil each write is completed before going on to the next thing (for \nsafety), if a disk is dedicated to the WAL then the head doesn't move \nmuch. if the disk is used for other things as well then the heads have to \nmove across the disk surface between the WAL and where the data is. 
this \ndrasticly slows down the number of items that can go into the WAL, and \ntherefor slows down the entire system.\n\nthis slowdown isn't even something as simple as cutting your speed in half \n(half the time spent working on the WAL, half spent on the data itself), \nit's more like 10% spent on the WAL, 10% spent on the data, and 80% \nmoveing back and forth between them (I am probably wrong on the exact \nnumbers, but it is something similarly drastic)\n\nthis is also the reason why it's so good to have a filesystem journal on a \ndifferent drive.\n\nDavid Lang\n", "msg_date": "Mon, 5 Dec 2005 08:15:20 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "David Lang wrote:\n\n> the application can' tell the difference, but the reason for seperating \n> them isn't for the application, it's so that different pieces of \n> hardware can work on different things without having to bounce back and \n> forth between them.\n> \n> useing the same drives with LVM doesn't achieve this goal.\n> \n> the problem is that the WAL is doing a LOT of writes, and postgres waits \n> until each write is completed before going on to the next thing (for \n> safety), if a disk is dedicated to the WAL then the head doesn't move \n> much. if the disk is used for other things as well then the heads have \n> to move across the disk surface between the WAL and where the data is. \n> this drasticly slows down the number of items that can go into the WAL, \n> and therefor slows down the entire system.\n> \n> this slowdown isn't even something as simple as cutting your speed in \n> half (half the time spent working on the WAL, half spent on the data \n> itself), it's more like 10% spent on the WAL, 10% spent on the data, and \n> 80% moveing back and forth between them (I am probably wrong on the \n> exact numbers, but it is something similarly drastic)\n\nYeah, I don't think I was clear about the config. It's (4) disks setup \nas a pair of RAID1 sets. My original config was pgsql on the first RAID \nset (data and WAL). I'm now experimenting with putting the data/pg_xlog \nfolder on the 2nd set of disks.\n\nUnder the old setup (everything on the original RAID1 set, in a \ndedicated 32GB LVM volume), I was seeing 80-90% wait percentages in \n\"top\". My understanding is that this is an indicator of an overloaded / \nbottlenecked disk system. This was while doing massive inserts into a \ntest table (millions of narrow rows). I'm waiting to see what happens \nonce I have data/pg_xlog on the 2nd disk set.\n\nThanks for the input.\n", "msg_date": "Mon, 05 Dec 2005 11:45:21 -0500", "msg_from": "Thomas Harold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "On Mon, 5 Dec 2005, Thomas Harold wrote:\n\n> Yeah, I don't think I was clear about the config. It's (4) disks setup as a \n> pair of RAID1 sets. My original config was pgsql on the first RAID set (data \n> and WAL). I'm now experimenting with putting the data/pg_xlog folder on the \n> 2nd set of disks.\n>\n> Under the old setup (everything on the original RAID1 set, in a dedicated \n> 32GB LVM volume), I was seeing 80-90% wait percentages in \"top\". My \n> understanding is that this is an indicator of an overloaded / bottlenecked \n> disk system. This was while doing massive inserts into a test table \n> (millions of narrow rows). 
I'm waiting to see what happens once I have \n> data/pg_xlog on the 2nd disk set.\n\nin that case you logicly have two disks, so see the post from Ron earlier \nin this thread.\n\nDavid Lang\n", "msg_date": "Mon, 5 Dec 2005 19:54:25 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "David Lang wrote:\n\n> in that case you logicly have two disks, so see the post from Ron \n> earlier in this thread.\n\nAnd it's a very nice performance gain. Percent spent waiting according \nto \"top\" is down around 10-20% instead of 80-90%. While I'm not \nprepared to benchmark, database performance is way up. The client \nmachines that are writing the data are running closer to 100% CPU \n(before they were well below 50% CPU utilization).\n\n\n", "msg_date": "Tue, 06 Dec 2005 00:52:59 -0500", "msg_from": "Thomas Harold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "At 12:52 AM 12/6/2005, Thomas Harold wrote:\n>David Lang wrote:\n>\n>>in that case you logicly have two disks, so see the post from Ron \n>>earlier in this thread.\n>\n>And it's a very nice performance gain. Percent spent waiting \n>according to \"top\" is down around 10-20% instead of 80-90%. While \n>I'm not prepared to benchmark, database performance is way up. The \n>client machines that are writing the data are running closer to 100% \n>CPU (before they were well below 50% CPU utilization).\nFor accuracy's sake, which exact config did you finally use?\n\nHow did you choose the config you finally used? Did you test the \nthree options or just pick one?\n\nRon\n\n\n", "msg_date": "Tue, 06 Dec 2005 03:12:31 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "Ron wrote:\n\n> For accuracy's sake, which exact config did you finally use?\n> \n> How did you choose the config you finally used? Did you test the three \n> options or just pick one?\n\n(Note: I'm not the original poster.)\n\nI just picked the option of putting the data/pg_xlog directory (WAL) on \na 2nd set of spindles. That was the easiest thing for me to change on \nthis test box.\n\nThe test server is simply a Gentoo box running software RAID and LVM2. \nThe primary disk set is 2x7200RPM 300GB drives and the secondary disk \nset is 2x5400RPM 300GB drives. Brand new install of PGSQL 8.1, with \nmostly default settings (I changed FSM pages to be a higher value, \nmax_fsm_pages = 150000). PGSQL was given it's own ext3 32GB LVM volume \non the primary disk set (2x7200RPM). Originally, all files were on the \nprimary disk.\n\nThe task at hand was inserting large quantity of ~45 byte rows \n(according to \"vacuum verbose\"), on the order of millions of records per \ntable. There was an unique key and a unique index. Test clients were \naccessing the database via ODBC / ADO and doing the inserts in a fairly \nbrute-force mode (attempt the insert, calling .CancelUpdate if it fails).\n\nWhen the tables were under 2 million rows, performance was okay. At one \npoint, I had a 1.8Ghz P4, dual Opteron 246, and Opteron 148 CPUs running \nat nearly 100% CPU processing and doing inserts into the database. So I \nhad 4 clients running, performing inserts to 4 separate tables in the \nsame database. 
The P4 ran at about half the throughput as the Opterons \n(client-bound due to the code that generated row data prior to the \ninsert), so I'd score my throughput as roughly 3.3-3.4. Where 1.0 would \nbe full utilization of the Opteron 148 box.\n\nHowever, once the tables started getting above ~2 million rows, \nperformance took a nose dive. CPU utilizations on the 4 client CPUs \ndropped into the basement (5-20% CPU) and I had to back off on the \nnumber of clients. So throughput had dropped down to around 0.25 or so. \n The linux box was spending nearly all of its time waiting on the \nprimary disks.\n\nMoving the data/pg_xlog (WAL) to the 2nd set of disks (2x5400RPM) in the \ntest server made a dramatic difference for this mass insert. I'm \nrunning the P4 (100% CPU) and the Opteron 148 (~80% CPU) at the moment. \n While it's not up to full speed, a throughput of ~1.3 is a lot better \nthen the ~0.25 that I was getting prior. (The two tables currently \nbeing written have over 5 million rows each. One table has ~16 million \nrows.) Wait percentage in \"top\" is only running 20-30% (dipping as low \nas 10%). I haven't pushed this new setup hard enough to determine where \nthe upper limit for throughput is.\n\nIt's very much a niche test (millions of inserts of narrow rows into \nmultiple tables using fairly brain-dead code). But it gives me data \npoints on which to base purchasing of the production box. The original \nplan was a simple RAID1 setup (2 spindles), but this tells me it's \nbetter to order 4 spindles and set it up as a pair of RAID1 sets.\n\nWhether 4 spindles is better as two separate RAID1 arrays, or configured \nas a single RAID1+0 array... dunno. Our application is typically more \nlimited by insert speed then read speed (so I'm leaning towards separate \nRAID arrays).\n\nI'm sure there's also more tuning that could be done to the PGSQL \ndatabase (in the configuration file). Also, the code is throwaway code \nthat isn't the most elegant.\n", "msg_date": "Tue, 06 Dec 2005 11:00:26 -0500", "msg_from": "Thomas Harold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" }, { "msg_contents": "On Tue, 6 Dec 2005, Thomas Harold wrote:\n\n> Ron wrote:\n>\n>> For accuracy's sake, which exact config did you finally use?\n>> \n>> How did you choose the config you finally used? Did you test the three \n>> options or just pick one?\n>\n> (Note: I'm not the original poster.)\n>\n> I just picked the option of putting the data/pg_xlog directory (WAL) on a 2nd \n> set of spindles. That was the easiest thing for me to change on this test \n> box.\n>\n> The test server is simply a Gentoo box running software RAID and LVM2. The \n> primary disk set is 2x7200RPM 300GB drives and the secondary disk set is \n> 2x5400RPM 300GB drives. Brand new install of PGSQL 8.1, with mostly default \n> settings (I changed FSM pages to be a higher value, max_fsm_pages = 150000). \n> PGSQL was given it's own ext3 32GB LVM volume on the primary disk set \n> (2x7200RPM). Originally, all files were on the primary disk.\n\nthe WAL is more sensitive to drive speeds then the data is, so you may \npick up a little more performance by switching the WAL to the 7200 rpm \ndrives instead of the 5400 rpm drives.\n\nif you see a noticable difference with this, consider buying a pair of \nsmaller, but faster drives (10k or 15k rpm drives, or a solid-state \ndrive). 
you can test this (with significant data risk) by putting the WAL \non a ramdisk and see what your performance looks like.\n\nDavid Lang\n", "msg_date": "Tue, 6 Dec 2005 13:57:12 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two disks - best way to use them?" } ]
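The symlink trick from the WAL-internals chapter quoted above is still the way to move pg_xlog itself, but for spreading the data across the two RAID1 sets, 8.0+ tablespaces avoid hand-made symlinks. A sketch with a made-up mount point and a made-up table; the directory must already exist and be owned by the postgres user.

CREATE TABLESPACE raidb LOCATION '/mnt/raidb/pgdata';   -- hypothetical mount point on the second RAID1 set

CREATE TABLE narrow_rows (
    id  bigint PRIMARY KEY,
    val integer
) TABLESPACE raidb;

-- an existing table can be relocated later with:
-- ALTER TABLE narrow_rows SET TABLESPACE raidb;

This keeps the WAL and the busiest tables on separate spindles without symlinking anything except pg_xlog, which tablespaces do not cover.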
[ { "msg_contents": "Hi,\n\nthanks for your comments so far - I appreciate it. I'd like to narrow \ndown my problem a bit:\n\nAs I said in the other thread, I estimate that only 20% of the 15,000 \ntables are accessed regularly. So I don't think that vacuuming or the \nnumber of file handles is a problem. Have a look at this:\n\ncontent2=# select relpages, relname from pg_class order by relpages desc \nlimit 20;\n relpages | relname\n----------+---------------------------------\n 11867 | pg_attribute\n 10893 | pg_attribute_relid_attnam_index\n 3719 | pg_class_relname_nsp_index\n 3310 | wsobjects_types\n 3103 | pg_class\n 2933 | wsobjects_types_fields\n 2903 | wsod_133143\n 2719 | pg_attribute_relid_attnum_index\n 2712 | wsod_109727\n 2666 | pg_toast_98845\n 2601 | pg_toast_9139566\n 1876 | wsod_32168\n 1837 | pg_toast_138780\n 1678 | pg_toast_101427\n 1409 | wsobjects_types_fields_idx\n 1088 | wso_log\n 943 | pg_depend\n 797 | pg_depend_depender_index\n 737 | wsod_3100\n 716 | wp_hp_zen\n\nI don't think that postgres was designed for a situation like this, \nwhere a system table that should be fairly small (pg_attribute) is this \nlarge.\n", "msg_date": "Sat, 03 Dec 2005 00:01:55 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "15,000 tables - next step" }, { "msg_contents": "On 12/2/2005 6:01 PM, Michael Riess wrote:\n\n> Hi,\n> \n> thanks for your comments so far - I appreciate it. I'd like to narrow \n> down my problem a bit:\n> \n> As I said in the other thread, I estimate that only 20% of the 15,000 \n> tables are accessed regularly. So I don't think that vacuuming or the \n> number of file handles is a problem. Have a look at this:\n\nWhat makes you think that? Have you at least tried to adjust your shared \nbuffers, freespace map settings and background writer options to values \nthat match your DB? How does increasing the kernel file desctriptor \nlimit (try the current limit times 5 or 10) affect your performance?\n\n\nJan\n\n\n\n\n> \n> content2=# select relpages, relname from pg_class order by relpages desc \n> limit 20;\n> relpages | relname\n> ----------+---------------------------------\n> 11867 | pg_attribute\n> 10893 | pg_attribute_relid_attnam_index\n> 3719 | pg_class_relname_nsp_index\n> 3310 | wsobjects_types\n> 3103 | pg_class\n> 2933 | wsobjects_types_fields\n> 2903 | wsod_133143\n> 2719 | pg_attribute_relid_attnum_index\n> 2712 | wsod_109727\n> 2666 | pg_toast_98845\n> 2601 | pg_toast_9139566\n> 1876 | wsod_32168\n> 1837 | pg_toast_138780\n> 1678 | pg_toast_101427\n> 1409 | wsobjects_types_fields_idx\n> 1088 | wso_log\n> 943 | pg_depend\n> 797 | pg_depend_depender_index\n> 737 | wsod_3100\n> 716 | wp_hp_zen\n> \n> I don't think that postgres was designed for a situation like this, \n> where a system table that should be fairly small (pg_attribute) is this \n> large.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n", "msg_date": "Sat, 03 Dec 2005 10:51:43 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "Jan Wieck schrieb:\n> On 12/2/2005 6:01 PM, Michael Riess wrote:\n> \n>> Hi,\n>>\n>> thanks for your comments so far - I appreciate it. I'd like to narrow \n>> down my problem a bit:\n>>\n>> As I said in the other thread, I estimate that only 20% of the 15,000 \n>> tables are accessed regularly. So I don't think that vacuuming or the \n>> number of file handles is a problem. Have a look at this:\n> \n> What makes you think that? Have you at least tried to adjust your shared \n> buffers, freespace map settings and background writer options to values \n> that match your DB? How does increasing the kernel file desctriptor \n> limit (try the current limit times 5 or 10) affect your performance?\n> \n> \n\nOf course I tried to tune these settings. You should take into account \nthat the majority of the tables are rarely ever modified, therefore I \ndon't think that I need a gigantic freespace map. And the background \nwriter never complained.\n\nShared memory ... I currently use 1500 buffers for 50 connections, and \nperformance really suffered when I used 3000 buffers. The problem is \nthat it is a 1GB machine, and Apache + Tomcat need about 400MB.\n\nBut thanks for your suggestions! I guess that I'll have to find a way to \nreduce the number of tables. Unfortunately my application needs them, so \nI'll have to find a way to delete rarely used tables and create them on \nthe fly when they're accessed again. But this will really make my \napplication much more complex and error-prone, and I had hoped that the \ndatabase system could take care of that. I still think that a database \nsystem's performance should not suffer from the mere presence of unused \ntables.\n\nMike\n", "msg_date": "Sat, 03 Dec 2005 17:20:21 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "Michael Riess wrote:\n\n> Shared memory ... I currently use 1500 buffers for 50 connections, and \n> performance really suffered when I used 3000 buffers. The problem is \n> that it is a 1GB machine, and Apache + Tomcat need about 400MB.\n\nWell, I'd think that's were your problem is. Not only you have a\n(relatively speaking) small server -- you also share it with other\nvery-memory-hungry services! That's not a situation I'd like to be in.\nTry putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\nto Postgres. With 1500 shared buffers you are not really going\nanywhere -- you should have ten times that at the very least.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 3 Dec 2005 13:26:42 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "Alvaro Herrera schrieb:\n> Michael Riess wrote:\n> \n>> Shared memory ... I currently use 1500 buffers for 50 connections, and \n>> performance really suffered when I used 3000 buffers. The problem is \n>> that it is a 1GB machine, and Apache + Tomcat need about 400MB.\n> \n> Well, I'd think that's were your problem is. Not only you have a\n> (relatively speaking) small server -- you also share it with other\n> very-memory-hungry services! 
That's not a situation I'd like to be in.\n> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\n> to Postgres. \n\nNo can do. I can try to switch to a 2GB machine, but I will not use \nseveral machines. Not for a 5GB database. ;-)\n\n> With 1500 shared buffers you are not really going\n> anywhere -- you should have ten times that at the very least.\n> \n\nLike I said - I tried to double the buffers and the performance did not \nimprove in the least. And I also tried this on a 2GB machine, and \nswapping was not a problem. If I used 10x more buffers, I would in \nessence remove the OS buffers.\n", "msg_date": "Sat, 03 Dec 2005 17:41:42 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "On 12/3/05, Michael Riess <[email protected]> wrote:\n> Alvaro Herrera schrieb:\n> > Michael Riess wrote:\n> >\n> >> Shared memory ... I currently use 1500 buffers for 50 connections, and\n> >> performance really suffered when I used 3000 buffers. The problem is\n> >> that it is a 1GB machine, and Apache + Tomcat need about 400MB.\n> >\n> > Well, I'd think that's were your problem is. Not only you have a\n> > (relatively speaking) small server -- you also share it with other\n> > very-memory-hungry services! That's not a situation I'd like to be in.\n> > Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\n> > to Postgres.\n>\n> No can do. I can try to switch to a 2GB machine, but I will not use\n> several machines. Not for a 5GB database. ;-)\n>\n\nNo for a 5GB database but because of the other services you have running\n\n> > With 1500 shared buffers you are not really going\n> > anywhere -- you should have ten times that at the very least.\n> >\n>\n> Like I said - I tried to double the buffers and the performance did not\n> improve in the least. And I also tried this on a 2GB machine, and\n> swapping was not a problem. If I used 10x more buffers, I would in\n> essence remove the OS buffers.\n>\n\nHow many disks do you have? (i wonder if you say 1)\n- in most cases is good idea to have the WAL file in another disk...\n\nWhat type of disks (ide, scsi, etc)?\nHow many processors?\n\nWhat other services (or applications) do you have in that machine?\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Sat, 3 Dec 2005 13:45:06 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "On 12/3/2005 11:41 AM, Michael Riess wrote:\n\n> Alvaro Herrera schrieb:\n>> Michael Riess wrote:\n>> \n>>> Shared memory ... I currently use 1500 buffers for 50 connections, and \n>>> performance really suffered when I used 3000 buffers. The problem is \n>>> that it is a 1GB machine, and Apache + Tomcat need about 400MB.\n>> \n>> Well, I'd think that's were your problem is. Not only you have a\n>> (relatively speaking) small server -- you also share it with other\n>> very-memory-hungry services! That's not a situation I'd like to be in.\n>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\n>> to Postgres. \n> \n> No can do. I can try to switch to a 2GB machine, but I will not use \n> several machines. Not for a 5GB database. ;-)\n\nWhat version of PostgreSQL are we talking about? If it is anything older \nthan 8.0, you should upgrade at least to that. With 8.0 or better try \n20000 shared buffers or more. 
It is well possible that going from 1500 \nto 3000 buffers made things worse. Your buffer cache can't even hold the \nsystem catalog in shared memory. If those 50 backends serve all those \n500 apps at the same time, they suffer from constant catalog cache \nmisses and don't find the entries in the shared buffers either.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Sat, 03 Dec 2005 14:32:21 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "Michael Riess wrote:\n>> Well, I'd think that's were your problem is. Not only you have a\n>> (relatively speaking) small server -- you also share it with other\n>> very-memory-hungry services! That's not a situation I'd like to be in.\n>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\n>> to Postgres. \n> \n> \n> No can do. I can try to switch to a 2GB machine, but I will not use \n> several machines. Not for a 5GB database. ;-)\n> \n>> With 1500 shared buffers you are not really going\n>> anywhere -- you should have ten times that at the very least.\n>>\n> \n> Like I said - I tried to double the buffers and the performance did not \n> improve in the least. And I also tried this on a 2GB machine, and \n> swapping was not a problem. If I used 10x more buffers, I would in \n> essence remove the OS buffers.\n\nIncreasing buffers do improve performance -- if you have enough memory. \nYou just don't have enough memory to play with. My servers run w/ 10K \nbuffers (128MB on 64-bit FC4) and it definitely runs better w/ it at 10K \nversus 1500.\n\nWith that many tables, your system catalogs are probably huge. To keep \nyour system catalog from continually cycling in-out of buffers/OS \ncache/disk, you need a lot more memory. Ordinarily, I'd say the 500MB \nyou have available for Postgres to cache 5GB is a workable ratio. My \nservers all have similar ratios of ~1:10 and they perform pretty good -- \n*except* when the system catalogs bloated due to lack of vacuuming on \nsystem tables. My app regularly creates & drops thousands of temporary \ntables leaving a lot of dead rows in the system catalogs. (Nearly the \nsame situation as you -- instead of 15K live tables, I had 200 live \ntables and tens of thousands of dead table records.) Even with almost \n8GB of RAM dedicated to postgres, performance on every single query -- \nnot matter how small the table was -- took forever because the query \nplanner had to spend a significant period of time scanning through my \nhuge system catalogs to build the execution plan.\n\nWhile my situtation was fixable by scheduling a nightly vacuum/analyze \non the system catalogs to get rid of the bazillion dead table/index \ninfo, you have no choice but to get more memory so you can stuff your \nentire system catalog into buffers/os cache. Personally, w/ 1GB of ECC \nRAM at ~$85, it's a no brainer. Get as much memory as your server can \nsupport.\n", "msg_date": "Sun, 04 Dec 2005 01:21:42 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "William Yu schrieb:\n > Michael Riess wrote:\n >>> Well, I'd think that's were your problem is. 
Not only you have a\n >>> (relatively speaking) small server -- you also share it with other\n >>> very-memory-hungry services! That's not a situation I'd like to be in.\n >>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB\n >>> to Postgres.\n >>\n >>\n >> No can do. I can try to switch to a 2GB machine, but I will not use \nseveral machines. Not for a 5GB database. ;-)\n >>\n >>> With 1500 shared buffers you are not really going\n >>> anywhere -- you should have ten times that at the very least.\n >>>\n >>\n >> Like I said - I tried to double the buffers and the performance did \nnot improve in the least. And I also tried this on a 2GB machine, and \nswapping was not a problem. If I used 10x more buffers, I would in \nessence remove the OS buffers.\n >\n > Increasing buffers do improve performance -- if you have enough \nmemory. You just don't have enough memory to play with. My servers run \nw/ 10K buffers (128MB on 64-bit FC4) and it definitely runs better w/ it \nat 10K versus 1500.\n >\n > With that many tables, your system catalogs are probably huge.\n\n\ncontent2=# select sum(relpages) from pg_class where relname like 'pg_%';\n sum\n-------\n 64088\n(1 row)\n\n:-)\n\n\n > While my situtation was fixable by scheduling a nightly \nvacuum/analyze on the system catalogs to get rid of the bazillion dead \ntable/index info, you have no choice but to get more memory so you can \nstuff your entire system catalog into buffers/os cache. Personally, w/ \n1GB of ECC RAM at ~$85, it's a no brainer. Get as much memory as your \nserver can support.\n\nThe problem is that we use pre-built hardware which isn't configurable. \nWe can only switch to a bigger server with 2GB, but that's tops.\n\nI will do the following:\n\n- switch to 10k buffers on a 1GB machine, 20k buffers on a 2GB machine\n- try to optimize my connection polls to remember which apps (groups of \n30 tables) were accessed, so that there is a better chance of using caches\n- \"swap out\" tables which are rarely used: export the content, drop the \ntable, and re-create it on the fly upon access.\n\nThanks for your comments!\n", "msg_date": "Sun, 04 Dec 2005 10:33:47 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 15,000 tables - next step" }, { "msg_contents": "On 12/4/2005 4:33 AM, Michael Riess wrote:\n> I will do the following:\n> \n> - switch to 10k buffers on a 1GB machine, 20k buffers on a 2GB machine\n> - try to optimize my connection polls to remember which apps (groups of \n> 30 tables) were accessed, so that there is a better chance of using caches\n> - \"swap out\" tables which are rarely used: export the content, drop the \n> table, and re-create it on the fly upon access.\n\nI hacked pgbench a little and did some tests (finally had to figure out \nfor myself if there is much of an impact with hundreds or thousands of \ntables).\n\nThe changes done to pgbench:\n\n - Use the [-s n] value allways, instead of determining the\n scaling from the DB.\n\n - Lower the number of accounts per scaling factor to 10,000.\n\n - Add another scaling type. Option [-a n] splits up the test\n into n schemas, each containing [-s n] branches.\n\nThe tests were performed on a 667 MHz P3, 640MB Ram with a single IDE \ndisk. All tests were IO bound. In all tests the number of clients was 5 \ndefault transaction and 50 readonly (option -S). 
The FreeBSD kernel of \nthe system is configured to handle up to 50,000 open files, fully cache \ndirectories in virtual memory and to lock all shared memory into \nphysical ram.\n\nThe different scalings used were\n\n init -a1 -s3000\n run -a1 -s300\n\nand\n\n init -a3000 -s1\n run -a300 -s1\n\nThe latter creates a database of 12,000 tables with 1,200 of them \nactually in use during the test. Both databases are about 4 GB in size.\n\nThe performance loss for going from -s3000 to -a3000 is about 10-15%.\n\nThe performance gain for going from 1,000 shared_buffers to 48,000 is \nroughly 70% (-a3000 test case) and 100% (-s3000 test case).\n\nConclusion: The right shared memory configuration easily outperforms the \nloss from increase in number of tables, given that the kernel is \nconfigured to be up to the task of dealing with thousands of files \naccessed by that number of backends too.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 05 Dec 2005 22:07:20 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 15,000 tables - next step" } ]
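A couple of queries make the sizing discussion above concrete: how many 8 kB pages the system catalogs occupy versus how many shared buffers are configured, plus the catalog vacuum William describes. A sketch; on 8.1 shared_buffers is reported as a plain page count.

SELECT sum(relpages) AS catalog_pages         -- one page = 8 kB
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog';

SHOW shared_buffers;                          -- number of 8 kB buffers on 8.1

-- keep the catalogs from bloating when tables are created and dropped constantly:
VACUUM ANALYZE pg_class;
VACUUM ANALYZE pg_attribute;

If catalog_pages alone dwarfs shared_buffers, every query plan has to fight the user data for cache, which matches the behaviour described in the thread.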
[ { "msg_contents": "I am in the process of designing a new system.\nThere will be a long list of words such as\n\n-word table\nword_id integer\nword varchar\nspecial boolean\n\nSome \"special\" words are used to determine if some work is to be done and \nwill be what we care the most for one type of operation. \n\nWill it be more effective to have a partial index 'where is special' or to \ncopy those special emails to their own table?\n\nThe projected number of non special words is in the millions while the \nspecial ones will be in the thousands at most (under 10K for sure).\n\nMy personal view is that performance should be pretty much equal, but one of \n my co-worker's believes that the smaller table would likely get cached by \nthe OS since it would be used so frequently and would perform better.\n\nIn both instances we would be hitting an index of exactly the same size.\n\nThe searches will be 'where word = <variable> and is special' \n\n", "msg_date": "Fri, 02 Dec 2005 18:28:09 -0500", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Small table or partial index?" }, { "msg_contents": "On Fri, Dec 02, 2005 at 06:28:09PM -0500, Francisco Reyes wrote:\n> I am in the process of designing a new system.\n> There will be a long list of words such as\n> \n> -word table\n> word_id integer\n> word varchar\n> special boolean\n> \n> Some \"special\" words are used to determine if some work is to be done and \n> will be what we care the most for one type of operation. \n\nTough call. The key here is the amount of time required to do a join. It\nalso depends on if you need all the special words or not. Your best bet\nis to try and benchmark both ways.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 12 Dec 2005 16:24:33 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small table or partial index?" }, { "msg_contents": "Jim C. Nasby writes:\n\n> On Fri, Dec 02, 2005 at 06:28:09PM -0500, Francisco Reyes wrote:\n>> I am in the process of designing a new system.\n>> There will be a long list of words such as\n>> \n>> -word table\n>> word_id integer\n>> word varchar\n>> special boolean\n>> \n>> Some \"special\" words are used to determine if some work is to be done and \n>> will be what we care the most for one type of operation. \n> \n> Tough call. The key here is the amount of time required to do a join. It\n> also depends on if you need all the special words or not. Your best bet\n> is to try and benchmark both ways.\n\n\nIn your opinion do you think performance will be comparable?\nI am hoping I will have time to test, but not sure if will have time and the \ntables will be pretty large. :-(\n", "msg_date": "Tue, 13 Dec 2005 11:08:55 -0500", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Small table or partial index?" } ]
[ { "msg_contents": "And how do we compose the binary data on the client? Do we trust that the client encoding conversion logic is identical to the backend's? If there is a difference, what happens if the same file loaded from different client machines has different results? Key conflicts when loading a restore from one machine and not from another?\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: Mitch Skinner <[email protected]>\r\nTo: Luke Lonergan <[email protected]>\r\nCC: Stephen Frost <[email protected]>; David Lang <[email protected]>; Steve Oualline <[email protected]>; [email protected] <[email protected]>\r\nSent: Fri Dec 02 22:26:06 2005\r\nSubject: Re: [PERFORM] Database restore speed\r\n\r\nOn Fri, 2005-12-02 at 13:24 -0800, Luke Lonergan wrote:\r\n> It's a matter of safety and generality - in general you\r\n> can't be sure that client machines / OS'es will render the same conversions\r\n> that the backend does in all cases IMO.\r\n\r\nCan't binary values can safely be sent cross-platform in DataRow\r\nmessages? At least from my ignorant, cursory look at printtup.c,\r\nthere's a binary format code path. float4send in utils/adt/float.c uses\r\npq_sendfloat4. I obviously haven't followed the entire rabbit trail,\r\nbut it seems like it happens.\r\n\r\nIOW, why isn't there a cross-platform issue when sending binary data\r\nfrom the backend to the client in query results? And if there isn't a\r\nproblem there, why can't binary data be sent from the client to the\r\nbackend?\r\n\r\nMitch\r\n\r\n", "msg_date": "Fri, 2 Dec 2005 23:03:57 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database restore speed" }, { "msg_contents": "On Fri, 2 Dec 2005, Luke Lonergan wrote:\n\n> And how do we compose the binary data on the client? Do we trust that \n> the client encoding conversion logic is identical to the backend's? If \n> there is a difference, what happens if the same file loaded from \n> different client machines has different results? Key conflicts when \n> loading a restore from one machine and not from another? - Luke\n\nthe same way you deal with text data that could be in different encodings, \nyou tag your message with the format version you are useing and throw an \nerror if you get a format you don't understand how to deal with.\n\nif a client claims to be useing one format, but is instead doing something \ndifferent you will be in deep trouble anyway.\n\nremember, we aren't talking about random application code here, we are \ntalking about postgres client code and libraries, if the library is \nincorrect then it's a bug, parsing bugs could happen in the server as \nwelll. (in fact, the server could parse things to the intermediate format \nand then convert them, this sounds expensive, but given the high clock \nmultipliers in use, it may not end up being measurable)\n\nDavid Lang\n", "msg_date": "Sat, 3 Dec 2005 01:38:52 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database restore speed" }, { "msg_contents": "On Fri, 2005-12-02 at 23:03 -0500, Luke Lonergan wrote:\n> And how do we compose the binary data on the client? Do we trust that the client encoding conversion logic is identical to the backend's?\n\nWell, my newbieness is undoubtedly showing already, so I might as well\ncontinue with my line of dumb questions. 
I did a little mail archive\nsearching, but had a hard time coming up with unique query terms.\n\nThis is a slight digression, but my question about binary format query\nresults wasn't rhetorical. Do I have to worry about different platforms\nwhen I'm getting binary RowData(s) back from the server? Or when I'm\nsending binary bind messages?\n\nRegarding whether or not the client has identical encoding/conversion\nlogic, how about a fast path that starts out by checking for\ncompatibility? In addition to a BOM, you could add a \"float format\nmark\" that was an array of things like +0.0, -0.0, min, max, +Inf, -Inf,\nNaN, etc.\n\nIt looks like XDR specifies byte order for floats and otherwise punts to\nIEEE. I have no experience with SQL*Loader, but a quick read of the\ndocs appears to divide data types into \"portable\" and \"nonportable\"\ngroups, where loading nonportable data types requires extra care.\n\nThis may be overkill, but have you looked at HDF5? Only one hit came up\nin the mail archives.\nhttp://hdf.ncsa.uiuc.edu/HDF5/doc/H5.format.html\nFor (e.g.) floats, the format includes metadata that specifies byte\norder, padding, normalization, the location of the sign, exponent, and\nmantissa, and the size of the exponent and mantissa. The format appears\nnot to require length information on a per-datum basis. A cursory look\nat the data format page gives me the impression that there's a useful\nstreamable subset. The license of the implementation is BSD-style (no\nadvertising clause), and it appears to support a large variety of\nplatforms. Currently, the format spec only mentions ASCII, but since\nthe library doesn't do any actual string manipulation (just storage and\nretrieval, AFAICS) it may be UTF-8 clean.\n\nMitch\n", "msg_date": "Sat, 03 Dec 2005 15:29:15 -0800", "msg_from": "Mitch Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database restore speed" } ]
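As a point of reference alongside the discussion above, the server already exposes both a text and a binary load path through COPY; a minimal sketch follows (the file and table names are made up, and the binary file must already be in PostgreSQL's own binary COPY format):

-- Text format: portable across platforms, parsed and converted by the backend
COPY mytable FROM '/tmp/mytable.copy';

-- Binary format: avoids text parsing, but ties the file to the server's
-- internal representation of each column type
COPY mytable FROM '/tmp/mytable.copy.bin' WITH BINARY;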
[ { "msg_contents": "Imagine a table named Person with \"first_name\" and \"age\".\n\nNow let's make it fancy and put a \"mother\" and \"father\" field that is\na reference to the own table (Person). And to get even fuzzier, let's\ndrop in some siblings:\n\nCREATE TABLE person(\n id bigint PRIMARY KEY,\n first_name TEXT,\n age INT,\n mother bigint REFERENCES person,\n father biging REFERENCES person,\n siblings array of bigints (don't remember the syntax, but you get the point)\n);\n\nWell, this is ok, but imagine a search for \"brothers of person id\n34\". We would have to search inside the record's 'siblings' array. Is\nthis a bad design? is this going to be slow?\n\nWhat would be a better design to have these kind of relationships?\n(where you need several references to rows inside the table we are).\n\nThanks for any help,\nRodrigo\n", "msg_date": "Sat, 3 Dec 2005 23:00:21 +0000", "msg_from": "Rodrigo Madera <[email protected]>", "msg_from_op": true, "msg_subject": "Faster db architecture for a twisted table." }, { "msg_contents": "Rodrigo Madera wrote:\n\n>Imagine a table named Person with \"first_name\" and \"age\".\n>\n>Now let's make it fancy and put a \"mother\" and \"father\" field that is\n>a reference to the own table (Person). And to get even fuzzier, let's\n>drop in some siblings:\n>\n>CREATE TABLE person(\n> id bigint PRIMARY KEY,\n> first_name TEXT,\n> age INT,\n> mother bigint REFERENCES person,\n> father biging REFERENCES person,\n> siblings array of bigints (don't remember the syntax, but you get the point)\n>);\n>\n>Well, this is ok, but imagine a search for \"brothers of person id\n>34\". We would have to search inside the record's 'siblings' array. Is\n>this a bad design? is this going to be slow?\n>\n>What would be a better design to have these kind of relationships?\n>(where you need several references to rows inside the table we are).\n> \n>\n\nCreate a table \"sibling\" with parent_id, sibling_id and appropriate FKs, \nallowing the model to reflect the relation. At the same time, you can \ndrop \"mother\" and \"father\", because this relation is covered too.\n\nRegards,\nAndreas\n\n", "msg_date": "Sun, 04 Dec 2005 00:57:07 +0100", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster db architecture for a twisted table." }, { "msg_contents": "On Sat, 2005-12-03 at 23:00 +0000, Rodrigo Madera wrote:\n> CREATE TABLE person(\n> id bigint PRIMARY KEY,\n> first_name TEXT,\n> age INT,\n> mother bigint REFERENCES person,\n> father biging REFERENCES person,\n> siblings array of bigints (don't remember the syntax, but you get the point)\n> );\n> \n> Well, this is ok, but imagine a search for \"brothers of person id\n> 34\". We would have to search inside the record's 'siblings' array. Is\n> this a bad design? is this going to be slow?\n\nWell, I don't know how close this example is to your actual problem, but\nthe siblings array is redundant, AFAICS. 
If you got rid of it, you\ncould query for full sibling brothers with something like (not tested):\n\nselect bro.* from\n person p inner join person bro\n on (p.mother = bro.mother)\n AND (p.father = bro.father)\nwhere\n bro.sex='M' and p.id=34\n\n...assuming you added a \"sex\" field, which you would need in any case to\nquery for brothers.\n\nYou could query for half-siblings by changing the AND into an OR, I\nthink.\n\nMitch\n\n", "msg_date": "Sat, 03 Dec 2005 16:02:58 -0800", "msg_from": "Mitchell Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster db architecture for a twisted table." }, { "msg_contents": "\n----- Original Message ----- \nFrom: \"Andreas Pflug\" <[email protected]>\n\n> Create a table \"sibling\" with parent_id, sibling_id and appropriate FKs, \n> allowing the model to reflect the relation. At the same time, you can drop \n> \"mother\" and \"father\", because this relation is covered too\n\n\nSomething like a table describing relationships and a table reflecting \nrelationships from both sides, I guess:\n\n\ncreate table relationship_type\n(\nrelationship_type_id serial,\nrelationship_type_description varchar(20)\n)\n\npopulated with values such as:\n1 Child_of\n2 Father_of\n3 Brother_of\n4 Sister_of\n...\n\n\nAnd then\n\n\ncreate table person_relationships\n(\nsource_person_id int4,\nrelationship_type_id int4,\ntarget_person_id int4\n)\n\npopulated with values such as:\n1 1 2 (person 1 is child of person 2)\n2 2 1 (person 2 is father of person 1)\n...\n\n\nIt requires a careful maintenance, as almost all (I'd stick with ALL) \nrelationships will require a person to appear twice (as source and as \ntarget), but flexible and easy to query.\n\n\nHelder M. Vieira\n\n\n\n\n\n", "msg_date": "Sun, 4 Dec 2005 01:33:13 -0000", "msg_from": "=?iso-8859-1?Q?H=E9lder_M._Vieira?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster db architecture for a twisted table." }, { "msg_contents": "Hélder M. Vieira wrote:\n> \n> ----- Original Message ----- From: \"Andreas Pflug\" \n> <[email protected]>\n> \n>> Create a table \"sibling\" with parent_id, sibling_id and appropriate \n>> FKs, allowing the model to reflect the relation. At the same time, you \n>> can drop \"mother\" and \"father\", because this relation is covered too\n> \n> \n> \n> Something like a table describing relationships and a table reflecting \n> relationships from both sides, I guess:\n> \n> \n> create table relationship_type\n> (\n> relationship_type_id serial,\n> relationship_type_description varchar(20)\n> )\n> \n> populated with values such as:\n> 1 Child_of\n> 2 Father_of\n> 3 Brother_of\n> 4 Sister_of\n> ...\n> \n> \n> And then\n> \n> \n> create table person_relationships\n> (\n> source_person_id int4,\n> relationship_type_id int4,\n> target_person_id int4\n> )\n> \n> populated with values such as:\n> 1 1 2 (person 1 is child of person 2)\n> 2 2 1 (person 2 is father of person 1)\n> \n\nThis is an extended version, that could describe general person \nrelations, not only family relations. Still, your \nrelationship_types are not precise. 
Since a two way relation is \ndescribed, only the two Child_of and Brother/Sister are needed; the \ngender should be taken from the person themselves (to avoid data \ninconsistencies as \"Mary is a brother of Lucy\").\nBut this isn't pgsql-performances stuff any more.\n\n\nRegards,\nAndreas\n", "msg_date": "Sun, 04 Dec 2005 11:56:53 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster db architecture for a twisted table." }, { "msg_contents": "On Sat, 3 Dec 2005 23:00:21 +0000, Rodrigo Madera <[email protected]> wrote:\n> Imagine a table named Person with \"first_name\" and \"age\".\n> \n> Now let's make it fancy and put a \"mother\" and \"father\" field that is\n> a reference to the own table (Person). And to get even fuzzier, let's\n> drop in some siblings:\n> \n> CREATE TABLE person(\n> id bigint PRIMARY KEY,\n> first_name TEXT,\n> age INT,\n> mother bigint REFERENCES person,\n> father biging REFERENCES person,\n> siblings array of bigints (don't remember the syntax, but you get the point)\n> );\n> \n> Well, this is ok, but imagine a search for \"brothers of person id\n> 34\". We would have to search inside the record's 'siblings' array. Is\n> this a bad design? is this going to be slow?\n\nDo you need the array at all?\n\nalter table person add column gender;\n\nselect id \n>from person\nwhere gender = 'male' \nand (mother = (select mother from person where id = 34)\n OR father = (select father from person where id = 34))\n\nYou can change the OR depending if you want half brothers or not\n\n> What would be a better design to have these kind of relationships?\n> (where you need several references to rows inside the table we are).\n\nWe use that structure (without the sibiling array) for our systems. \nSiblings are calculated from parents (in our case, livestock, there can\nbe hundreds). You have to be prepared to use recursive functions and\nmake sure that a person doesnt appear anywhere higher in their family\ntree.\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Mon, 05 Dec 2005 09:24:51 +1100", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster db architecture for a twisted table." } ]
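Pulling the suggestions in this thread together, a rough sketch of the normalized design (the gender column and all names are illustrative): siblings are derived from shared parents instead of being stored in an array.

CREATE TABLE person (
    id      bigint PRIMARY KEY,
    name    text,
    gender  char(1),
    mother  bigint REFERENCES person (id),
    father  bigint REFERENCES person (id)
);

-- Brothers of person 34: male, same mother and father, excluding person 34 itself
SELECT bro.*
FROM person p
JOIN person bro ON bro.mother = p.mother AND bro.father = p.father
WHERE p.id = 34
  AND bro.id <> p.id
  AND bro.gender = 'M';

Changing the AND between the parent conditions to OR broadens the result to half-brothers, as noted above.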
[ { "msg_contents": "Hi!\n\n> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]] \n> Gesendet: Donnerstag, 1. Dezember 2005 17:26\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Queries taking ages in PG 8.1, have \n> been much faster in PG<=8.0 \n \n> It looks like \"set enable_nestloop = 0\" might be a workable \n> hack for the immediate need. \n>\n> Once you're not under deadline, \n> I'd like to investigate more closely to find out why 8.1 does \n> worse than 8.0 here.\n\n\nI've just set up a PostgreSQL 8.0.3 installation ...\n\nselect version();\n version\n--------------------------------------------------------------------------------------------\n PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n(1 row)\n\n...and restored a dump there; here's the explain analyze of the query for 8.0.3:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5193.63..5193.63 rows=3 width=16) (actual time=7365.107..7365.110 rows=3 loops=1)\n Sort Key: source.\"position\"\n -> HashAggregate (cost=5193.59..5193.60 rows=3 width=16) (actual time=7365.034..7365.041 rows=3 loops=1)\n -> Nested Loop (cost=0.00..5193.57 rows=3 width=16) (actual time=3190.642..7300.820 rows=11086 loops=1)\n -> Nested Loop (cost=0.00..3602.44 rows=4 width=20) (actual time=3169.968..5875.153 rows=11087 loops=1)\n -> Nested Loop (cost=0.00..1077.95 rows=750 width=16) (actual time=36.599..2778.129 rows=158288 loops=1)\n -> Seq Scan on handy_java source (cost=0.00..1.03 rows=3 width=14) (actual time=6.503..6.514 rows=3 loops=1)\n -> Index Scan using idx02_performance on answer (cost=0.00..355.85 rows=250 width=8) (actual time=10.071..732.746 rows=52763 loops=3)\n Index Cond: ((answer.question_id = 16) AND (answer.value = \"outer\".id))\n -> Index Scan using pk_participant on participant (cost=0.00..3.35 rows=1 width=4) (actual time=0.016..0.016 rows=0 loops=158288)\n Index Cond: (participant.session_id = \"outer\".session_id)\n Filter: ((status = 1) AND (date_trunc('month'::text, created) = date_trunc('month'::text, (now() - '2 mons'::interval))))\n -> Index Scan using idx_answer_session_id on answer (cost=0.00..397.77 rows=1 width=4) (actual time=0.080..0.122 rows=1 loops=11087)\n Index Cond: (\"outer\".session_id = answer.session_id)\n Filter: ((question_id = 6) AND (value = 1))\n Total runtime: 7365.461 ms\n(16 rows)\n\nDoes this tell you anything useful? It's not on the same machine, mind you, but configuration for PostgreSQL is absolutely identical (apart from the autovacuum-lines which 8.0.3 doesn't like).\n\nKind regards\n\n Markus\n\n\n\n\n\nRE: [PERFORM] Queries taking ages in PG 8.1, have been much faster in PG<=8.0 \n\n\n\nHi!\n\n> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]]\n> Gesendet: Donnerstag, 1. Dezember 2005 17:26\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Queries taking ages in PG 8.1, have\n> been much faster in PG<=8.0\n\n> It looks like \"set enable_nestloop = 0\" might be a workable\n> hack for the immediate need. 
\n>\n> Once you're not under deadline,\n> I'd like to investigate more closely to find out why 8.1 does\n> worse than 8.0 here.\n\n\nI've just set up a PostgreSQL 8.0.3 installation ...\n\nselect version();\n                                          version\n--------------------------------------------------------------------------------------------\n PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n(1 row)\n\n...and restored a dump there; here's the explain analyze of the query for 8.0.3:\n\n                                                                            QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=5193.63..5193.63 rows=3 width=16) (actual time=7365.107..7365.110 rows=3 loops=1)\n   Sort Key: source.\"position\"\n   ->  HashAggregate  (cost=5193.59..5193.60 rows=3 width=16) (actual time=7365.034..7365.041 rows=3 loops=1)\n         ->  Nested Loop  (cost=0.00..5193.57 rows=3 width=16) (actual time=3190.642..7300.820 rows=11086 loops=1)\n               ->  Nested Loop  (cost=0.00..3602.44 rows=4 width=20) (actual time=3169.968..5875.153 rows=11087 loops=1)\n                     ->  Nested Loop  (cost=0.00..1077.95 rows=750 width=16) (actual time=36.599..2778.129 rows=158288 loops=1)\n                           ->  Seq Scan on handy_java source  (cost=0.00..1.03 rows=3 width=14) (actual time=6.503..6.514 rows=3 loops=1)\n                           ->  Index Scan using idx02_performance on answer  (cost=0.00..355.85 rows=250 width=8) (actual time=10.071..732.746 rows=52763 loops=3)\n                                 Index Cond: ((answer.question_id = 16) AND (answer.value = \"outer\".id))\n                     ->  Index Scan using pk_participant on participant  (cost=0.00..3.35 rows=1 width=4) (actual time=0.016..0.016 rows=0 loops=158288)\n                           Index Cond: (participant.session_id = \"outer\".session_id)\n                           Filter: ((status = 1) AND (date_trunc('month'::text, created) = date_trunc('month'::text, (now() - '2 mons'::interval))))\n               ->  Index Scan using idx_answer_session_id on answer  (cost=0.00..397.77 rows=1 width=4) (actual time=0.080..0.122 rows=1 loops=11087)\n                     Index Cond: (\"outer\".session_id = answer.session_id)\n                     Filter: ((question_id = 6) AND (value = 1))\n Total runtime: 7365.461 ms\n(16 rows)\n\nDoes this tell you anything useful? It's not on the same machine, mind you, but configuration for PostgreSQL is absolutely identical (apart from the autovacuum-lines which 8.0.3 doesn't like).\n\nKind regards\n\n   Markus", "msg_date": "Sun, 4 Dec 2005 14:24:37 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " }, { "msg_contents": "\"Markus Wollny\" <[email protected]> writes:\n>> Once you're not under deadline,\n>> I'd like to investigate more closely to find out why 8.1 does\n>> worse than 8.0 here.\n\n> Does this tell you anything useful? It's not on the same machine, mind\n> you, but configuration for PostgreSQL is absolutely identical (apart\n> from the autovacuum-lines which 8.0.3 doesn't like).\n\nThe data is not quite the same, right? I notice different numbers of\nrows being returned. 
But anyway, it seems the problem is with the upper\nscan on \"answers\", which 8.0 does like this:\n\n -> Index Scan using idx_answer_session_id on answer (cost=0.00..397.77 rows=1 width=4) (actual time=0.080..0.122 rows=1 loops=11087)\n Index Cond: (\"outer\".session_id = answer.session_id)\n Filter: ((question_id = 6) AND (value = 1))\n\nand 8.1 does like this:\n\n -> Bitmap Heap Scan on answer (cost=185.85..187.26 rows=1 width=4) (actual time=197.490..197.494 rows=1 loops=9806)\n Recheck Cond: ((\"outer\".session_id = answer.session_id) AND (answer.question_id = 6) AND (answer.value = 1))\n -> BitmapAnd (cost=185.85..185.85 rows=1 width=0) (actual time=197.421..197.421 rows=0 loops=9806)\n -> Bitmap Index Scan on idx_answer_session_id (cost=0.00..2.83 rows=236 width=0) (actual time=0.109..0.109 rows=49 loops=9806)\n Index Cond: (\"outer\".session_id = answer.session_id)\n -> Bitmap Index Scan on idx02_performance (cost=0.00..182.77 rows=20629 width=0) (actual time=195.742..195.742 rows=165697 loops=9806)\n Index Cond: ((question_id = 6) AND (value = 1))\n\nIt seems that checking question_id/value via the index, rather than\ndirectly on the fetched tuple, is a net loss here. It looks like 8.1\nwould have made the right plan choice if it had made a better estimate\nof the combined selectivity of the question_id and value conditions,\nso ultimately this is another manifestation of the lack of cross-column\nstatistics. What I find interesting though is that the plain index scan\nin 8.0 is so enormously cheaper than it's estimated to be. Perhaps the\nanswer table in your 8.0 installation is almost perfectly ordered by\nsession_id?\n\nAre you using default values for the planner cost parameters? It looks\nlike reducing random_page_cost would help bring the planner estimates\ninto line with reality on your machines.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Dec 2005 13:31:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " } ]
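A sketch of the session-level workaround quoted at the top of this thread, plus one speculative follow-up that is not from the thread itself: raising the per-column statistics targets on the skewed columns before re-testing, in case better estimates let 8.1 pick the plain index scan on its own (the target value 200 is only an example):

-- Workaround for the immediate need
SET enable_nestloop = off;

-- Speculative: give the planner more detail on the answer columns involved
ALTER TABLE answer ALTER COLUMN session_id SET STATISTICS 200;
ALTER TABLE answer ALTER COLUMN question_id SET STATISTICS 200;
ANALYZE answer;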
[ { "msg_contents": "Hi. We have a server provided for a test of a web application with the\nfollowing specifications:\n\n 1 Dual core 1.8GHz Opteron chip\n 6 GB RAM\n approx 250GB of RAID10 storage (LSI card + BBU, 4 x 15000 RPM,16MB\n Cache SCSI disks)\n\nThe database itself is very unlikely to use up more than 50GB of storage\n-- however we are going to be storing pictures and movies etc etc on the\nserver.\n\nI understand that it is better to run pg_xlog on a separate spindle but\nwe don't have that option available at present.\n\nNormally I use ext3. I wondered if I should make my normal partitions\nand then make a +/- 200GB LVM VG and then slice that initially into a\n100GB ext3 data directory and a 50GB xfs postgres data area, giving\n100GB to use between these as they grow. I haven't used LVM with xfs\nbefore, however.\n\nAdvice gratefully received.\nRory\n\n", "msg_date": "Sun, 4 Dec 2005 23:49:34 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Dividing up a single 250GB RAID10 server for postgres" } ]
[ { "msg_contents": "Hi ...\n\nI am trying to run a query that selects 26 million rows from a\ntable with 68 byte rows.\n\nWhen run on the Server via psql the following error occurs:\n\ncalloc : Cannot allocate memory\n\nWhen run via ODBC from Cognos Framework Manager only works\nif we limit the retrieval to 3 million rows.\n\nI notice that the memory used by the query when run on the Server\nincreases\nto about 2.4 GB before the query fails.\n\nPostgres version is 7.3.4\n\nRunning on Linux Redhat 7.2\n\n4 GB memory\n\n7 Processor 2.5 Ghz\n\nShmmax set to 2 GB\n\nConfiguration Parameters\n\nShared Buffers\t\t\t12 288\nMax Connections\t\t16\nWal buffers\t\t\t\t24\nSort Mem\t\t\t\t40960\nVacuum Mem\t\t\t80192\nCheckpoint Timeout\t\t600\nEnable Seqscan\t\tfalse\nEffective Cache Size\t200000\n\n\nResults of explain analyze and expain analyze verbose:\n\nexplain analyze select * from flash_by_branches;\n QUERY\nPLAN \n------------------------------------------------------------------------\n----------------------------------------------------------------------\n Seq Scan on flash_by_branches (cost=100000000.00..100567542.06\nrows=26854106 width=68) (actual time=12.14..103936.35 rows=26854106\nloops=1)\n Total runtime: 122510.02 msec\n(2 rows)\n\nexplain analyze verbose:\n\n{ SEQSCAN\n :startup_cost 100000000.00\n :total_cost 100567542.06\n :rows 26854106\n :width 68\n :qptargetlist (\n { TARGETENTRY\n :resdom\n { RESDOM\n :resno 1\n :restype 1043\n :restypmod 8\n :resname br_code\n :reskey 0\n :reskeyop 0\n :ressortgroupref 0\n :resjunk false\n }\n\n :expr\n { VAR\n :varno 1\n :varattno 1\n :vartype 1043\n :vartypmod 8\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n }\n\n { TARGETENTRY\n :resdom\n { RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname fty_code\n :reskey 0\n :reskeyop 0\n :ressortgroupref 0\n :resjunk false\n }\n\n :expr\n { VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n\n { TARGETENTRY\n :resdom\n { RESDOM\n :resno 3\n :restype 1082\n :restypmod -1\n :resname period\n :reskey 0\n :reskeyop 0\n :ressortgroupref 0\n :resjunk false\n }\n\n :expr\n { VAR\n :varno 1\n :varattno 3\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n\n { TARGETENTRY\n :resdom\n { RESDOM\n :resno 4\n :restype 1700\n :restypmod 786436\n :resname value\n :reskey 0\n :reskeyop 0\n :ressortgroupref 0\n :resjunk false\n }\n\n :expr\n { VAR\n :varno 1\n :varattno 4\n :vartype 1700\n :vartypmod 786436\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n\n { TARGETENTRY\n :resdom\n { RESDOM\n :resno 7\n :restype 1700\n :restypmod 786438\n :resname value1\n :reskey 0\n :reskeyop 0\n :ressortgroupref 0\n :resjunk false\n }\n\n :expr\n { VAR\n :varno 1\n :varattno 7\n :vartype 1700\n :vartypmod 786438\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n }\n )\n\n :qpqual <>\n :lefttree <>\n :righttree <>\n :extprm ()\n\n :locprm ()\n\n :initplan <>\n :nprm 0\n :scanrelid 1\n }\n\n Seq Scan on flash_by_branches (cost=100000000.00..100567542.06\nrows=26854106 width=68) (actual time=6.59..82501.15 rows=2685\n4106 loops=1)\n Total runtime: 102089.00 msec\n(196 rows)\n\n\n\nPlease assist.\n\nThanks,\nHoward Oblowitz\n\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.859 / Virus Database: 585 - Release Date: 14/02/2005\n \n\n\n\n\n\nQuery Fails with error calloc - Cannot alocate memory\n\n\n\nHi …\nI am trying to run a query that 
selects 26 million rows from a\ntable with 68 byte rows.\nWhen run on the Server via psql the following error occurs:\ncalloc : Cannot allocate memory\nWhen run via ODBC from Cognos Framework Manager only works\nif we limit the retrieval to 3 million rows.\nI notice that the memory used by the query when run on the Server increases\nto about 2.4 GB before the query fails.\nPostgres version is 7.3.4\nRunning on Linux Redhat 7.2\n4 GB memory\n7 Processor 2.5 Ghz\nShmmax set to 2 GB\nConfiguration Parameters\nShared Buffers                  12 288\nMax Connections         16\nWal buffers                             24\nSort Mem                                40960\nVacuum Mem                      80192\nCheckpoint Timeout              600\nEnable Seqscan          false\nEffective Cache Size    200000\n\nResults of explain analyze and expain analyze verbose:\nexplain analyze select * from flash_by_branches;\n                                                                  QUERY PLAN                                                  \n----------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on flash_by_branches  (cost=100000000.00..100567542.06 rows=26854106 width=68) (actual time=12.14..103936.35 rows=26854106 loops=1)\n Total runtime: 122510.02 msec\n(2 rows)\nexplain analyze verbose:\n{ SEQSCAN\n    :startup_cost 100000000.00\n    :total_cost 100567542.06\n    :rows 26854106\n    :width 68\n    :qptargetlist (\n       { TARGETENTRY\n       :resdom\n          { RESDOM\n          :resno 1\n          :restype 1043\n          :restypmod 8\n          :resname br_code\n          :reskey 0\n          :reskeyop 0\n          :ressortgroupref 0\n          :resjunk false\n          }\n       :expr\n          { VAR\n          :varno 1\n          :varattno 1\n          :vartype 1043\n          :vartypmod 8\n          :varlevelsup 0\n          :varnoold 1\n          :varoattno 1\n          }\n       }\n       { TARGETENTRY\n       :resdom\n          { RESDOM\n          :resno 2\n          :restype 23\n          :restypmod -1\n          :resname fty_code\n          :reskey 0\n          :reskeyop 0\n          :ressortgroupref 0\n          :resjunk false\n          }\n       :expr\n          { VAR\n          :varno 1\n          :varattno 2\n          :vartype 23\n          :vartypmod -1\n          :varlevelsup 0\n          :varnoold 1\n          :varoattno 2\n        }\n       }\n       { TARGETENTRY\n       :resdom\n          { RESDOM\n          :resno 3\n          :restype 1082\n          :restypmod -1\n          :resname period\n          :reskey 0\n          :reskeyop 0\n          :ressortgroupref 0\n          :resjunk false\n          }\n       :expr\n          { VAR\n          :varno 1\n          :varattno 3\n          :vartype 1082\n          :vartypmod -1\n          :varlevelsup 0\n          :varnoold 1\n          :varoattno 3\n          }\n       }\n       { TARGETENTRY\n       :resdom\n          { RESDOM\n          :resno 4\n          :restype 1700\n          :restypmod 786436\n          :resname value\n          :reskey 0\n          :reskeyop 0\n          :ressortgroupref 0\n          :resjunk false\n          }\n       :expr\n          { VAR\n          :varno 1\n          :varattno 4\n          :vartype 1700\n          :vartypmod 786436\n          :varlevelsup 0\n          :varnoold 1\n          :varoattno 4\n          }\n       }\n       { TARGETENTRY\n       :resdom\n      { RESDOM\n          :resno 
7\n          :restype 1700\n          :restypmod 786438\n          :resname value1\n          :reskey 0\n          :reskeyop 0\n          :ressortgroupref 0\n          :resjunk false\n          }\n       :expr\n          { VAR\n          :varno 1\n          :varattno 7\n          :vartype 1700\n          :vartypmod 786438\n          :varlevelsup 0\n          :varnoold 1\n          :varoattno 7\n          }\n       }\n    )\n    :qpqual <>\n    :lefttree <>\n    :righttree <>\n    :extprm ()\n    :locprm ()\n    :initplan <>\n    :nprm 0\n    :scanrelid 1\n    }\n Seq Scan on flash_by_branches  (cost=100000000.00..100567542.06 rows=26854106 width=68) (actual time=6.59..82501.15 rows=2685\n4106 loops=1)\n Total runtime: 102089.00 msec\n(196 rows)\n\n\nPlease assist.\nThanks,\nHoward Oblowitz\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.859 / Virus Database: 585 - Release Date: 14/02/2005", "msg_date": "Mon, 5 Dec 2005 09:42:43 +0200", "msg_from": "\"Howard Oblowitz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Fails with error calloc - Cannot alocate memory" }, { "msg_contents": "If you're trying to retrieve 26 million rows into RAM in one go of \ncourse it'll be trouble.\n\nJust use a cursor. (DECLARE/FETCH/MOVE)\n\nChris\n\n\nHoward Oblowitz wrote:\n> Hi �\n> \n> I am trying to run a query that selects 26 million rows from a\n> \n> table with 68 byte rows.\n> \n> When run on the Server via psql the following error occurs:\n> \n> calloc : Cannot allocate memory\n> \n> When run via ODBC from Cognos Framework Manager only works\n> \n> if we limit the retrieval to 3 million rows.\n> \n> I notice that the memory used by the query when run on the Server increases\n> \n> to about 2.4 GB before the query fails.\n> \n> Postgres version is 7.3.4\n> \n> Running on Linux Redhat 7.2\n> \n> 4 GB memory\n> \n> 7 Processor 2.5 Ghz\n> \n> Shmmax set to 2 GB\n> \n> Configuration Parameters\n> \n> Shared Buffers 12 288\n> \n> Max Connections 16\n> \n> Wal buffers 24\n> \n> Sort Mem 40960\n> \n> Vacuum Mem 80192\n> \n> Checkpoint Timeout 600\n> \n> Enable Seqscan false\n> \n> Effective Cache Size 200000\n> \n> \n> Results of explain analyze and expain analyze verbose:\n> \n> explain analyze select * from flash_by_branches;\n> \n> QUERY \n> PLAN \n> \n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> \n> Seq Scan on flash_by_branches (cost=100000000.00..100567542.06 \n> rows=26854106 width=68) (actual time=12.14..103936.35 rows=26854106 loops=1)\n> \n> Total runtime: 122510.02 msec\n> \n> (2 rows)\n> \n> explain analyze verbose:\n> \n> { SEQSCAN\n> \n> :startup_cost 100000000.00\n> \n> :total_cost 100567542.06\n> \n> :rows 26854106\n> \n> :width 68\n> \n> :qptargetlist (\n> \n> { TARGETENTRY\n> \n> :resdom\n> \n> { RESDOM\n> \n> :resno 1\n> \n> :restype 1043\n> \n> :restypmod 8\n> \n> :resname br_code\n> \n> :reskey 0\n> \n> :reskeyop 0\n> \n> :ressortgroupref 0\n> \n> :resjunk false\n> \n> }\n> \n> :expr\n> \n> { VAR\n> \n> :varno 1\n> \n> :varattno 1\n> \n> :vartype 1043\n> \n> :vartypmod 8\n> \n> :varlevelsup 0\n> \n> :varnoold 1\n> \n> :varoattno 1\n> \n> }\n> \n> }\n> \n> { TARGETENTRY\n> \n> :resdom\n> \n> { RESDOM\n> \n> :resno 2\n> \n> :restype 23\n> \n> :restypmod -1\n> \n> :resname fty_code\n> \n> :reskey 0\n> \n> :reskeyop 0\n> \n> :ressortgroupref 0\n> \n> :resjunk false\n> \n> }\n> \n> :expr\n> \n> { 
VAR\n> \n> :varno 1\n> \n> :varattno 2\n> \n> :vartype 23\n> \n> :vartypmod -1\n> \n> :varlevelsup 0\n> \n> :varnoold 1\n> \n> :varoattno 2\n> \n> }\n> \n> }\n> \n> { TARGETENTRY\n> \n> :resdom\n> \n> { RESDOM\n> \n> :resno 3\n> \n> :restype 1082\n> \n> :restypmod -1\n> \n> :resname period\n> \n> :reskey 0\n> \n> :reskeyop 0\n> \n> :ressortgroupref 0\n> \n> :resjunk false\n> \n> }\n> \n> :expr\n> \n> { VAR\n> \n> :varno 1\n> \n> :varattno 3\n> \n> :vartype 1082\n> \n> :vartypmod -1\n> \n> :varlevelsup 0\n> \n> :varnoold 1\n> \n> :varoattno 3\n> \n> }\n> \n> }\n> \n> { TARGETENTRY\n> \n> :resdom\n> \n> { RESDOM\n> \n> :resno 4\n> \n> :restype 1700\n> \n> :restypmod 786436\n> \n> :resname value\n> \n> :reskey 0\n> \n> :reskeyop 0\n> \n> :ressortgroupref 0\n> \n> :resjunk false\n> \n> }\n> \n> :expr\n> \n> { VAR\n> \n> :varno 1\n> \n> :varattno 4\n> \n> :vartype 1700\n> \n> :vartypmod 786436\n> \n> :varlevelsup 0\n> \n> :varnoold 1\n> \n> :varoattno 4\n> \n> }\n> \n> }\n> \n> { TARGETENTRY\n> \n> :resdom\n> \n> { RESDOM\n> \n> :resno 7\n> \n> :restype 1700\n> \n> :restypmod 786438\n> \n> :resname value1\n> \n> :reskey 0\n> \n> :reskeyop 0\n> \n> :ressortgroupref 0\n> \n> :resjunk false\n> \n> }\n> \n> :expr\n> \n> { VAR\n> \n> :varno 1\n> \n> :varattno 7\n> \n> :vartype 1700\n> \n> :vartypmod 786438\n> \n> :varlevelsup 0\n> \n> :varnoold 1\n> \n> :varoattno 7\n> \n> }\n> \n> }\n> \n> )\n> \n> :qpqual <>\n> \n> :lefttree <>\n> \n> :righttree <>\n> \n> :extprm ()\n> \n> :locprm ()\n> \n> :initplan <>\n> \n> :nprm 0\n> \n> :scanrelid 1\n> \n> }\n> \n> Seq Scan on flash_by_branches (cost=100000000.00..100567542.06 \n> rows=26854106 width=68) (actual time=6.59..82501.15 rows=2685\n> \n> 4106 loops=1)\n> \n> Total runtime: 102089.00 msec\n> \n> (196 rows)\n> \n> \n> \n> Please assist.\n> \n> Thanks,\n> \n> Howard Oblowitz\n> \n> \n> \n> ---\n> Outgoing mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.859 / Virus Database: 585 - Release Date: 14/02/2005\n> \n> \n\n", "msg_date": "Wed, 07 Dec 2005 12:18:24 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Fails with error calloc - Cannot alocate memory" }, { "msg_contents": "On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:\n> I am trying to run a query that selects 26 million rows from a\n> table with 68 byte rows.\n> \n> When run on the Server via psql the following error occurs:\n> \n> calloc : Cannot allocate memory\n\nThat's precisely what I'd expect: the backend will process the query and\nbegin sending back the entire result set to the client. The client will\nattempt to allocate a local buffer to hold the entire result set, which\nobviously fails in this case.\n\nYou probably want to explicitly create and manipulate a cursor via\nDECLARE, FETCH, and the like -- Postgres will not attempt to do this\nautomatically (for good reason).\n\n> Postgres version is 7.3.4\n\nYou should consider upgrading, 7.3 is quite old. At the very least, you\nshould probably be using the most recent 7.3.x release, 7.3.11.\n\n-Neil\n\n\n", "msg_date": "Wed, 07 Dec 2005 01:31:37 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Fails with error calloc - Cannot alocate memory" } ]
[ { "msg_contents": "Hi! \n\n> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]] \n> Gesendet: Sonntag, 4. Dezember 2005 19:32\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Queries taking ages in PG 8.1, have \n> been much faster in PG<=8.0 \n\n> The data is not quite the same, right? I notice different \n> numbers of rows being returned. \n\nNo, you're right, I didn't manage to restore the 8.1 dump into the 8.0.3 cluster, so I took the quick route and restored the last dump from my 8.0 installation. The numbers should be roughly within the same range, though:\n\nTable answer has got 8,646,320 rows (counted and estimated, as this db is not live, obviously), table participant has got 173,998 rows; for comparison:\nThe production db had an estimated 8,872,130, counted 8,876,648 rows for table answer, and estimated 178,165, counted 178,248 rows for participant. As the numbers are a mere 2% apart, I should think that this wouldn't make that much difference.\n\n> It seems that checking question_id/value via the index, \n> rather than directly on the fetched tuple, is a net loss \n> here. It looks like 8.1 would have made the right plan \n> choice if it had made a better estimate of the combined \n> selectivity of the question_id and value conditions, so \n> ultimately this is another manifestation of the lack of \n> cross-column statistics. What I find interesting though is \n> that the plain index scan in 8.0 is so enormously cheaper \n> than it's estimated to be. Perhaps the answer table in your \n> 8.0 installation is almost perfectly ordered by session_id?\n\nNot quite - there may be several concurrent sessions at any one time, but ordinarily the answers for one session-id would be quite close together, in a lot of cases even in perfect sequence, so \"almost perfectly\" might be a fair description, depending on the exact definition of \"almost\" :)\n\n> Are you using default values for the planner cost parameters? \n\nI have to admit that I did tune the random_page_cost and effective_cache_size settings ages ago (7.1-ish) to a value that seemed to work best then - and didn't touch it ever since, although my data pool has grown quite a bit over time. cpu_tuple_cost, cpu_index_tuple_cost and cpu_operator_cost are using default values.\n\n> It looks like reducing random_page_cost would help bring the \n> planner estimates into line with reality on your machines.\n\nI had set random_page_cost to 1.4 already, so I doubt that it would do much good to further reduce the value - reading the docs and the suggestions for tuning I would have thought that I should actually consider increasing this value a bit, as not all of my data will fit in memory any more. Do you nevertheless want me to try what happens if I reduce random_page_cost even further?\n\nKind regards\n\n Markus\n", "msg_date": "Mon, 5 Dec 2005 11:28:54 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " }, { "msg_contents": "\"Markus Wollny\" <[email protected]> writes:\n>> ... What I find interesting though is \n>> that the plain index scan in 8.0 is so enormously cheaper \n>> than it's estimated to be. 
Perhaps the answer table in your \n>> 8.0 installation is almost perfectly ordered by session_id?\n\n> Not quite - there may be several concurrent sessions at any one time, but ordinarily the answers for one session-id would be quite close together, in a lot of cases even in perfect sequence, so \"almost perfectly\" might be a fair description, depending on the exact definition of \"almost\" :)\n\nCould we see the pg_stats row for answer.session_id in both 8.0 and 8.1?\n\n> I had set random_page_cost to 1.4 already, so I doubt that it would do much good to further reduce the value - reading the docs and the suggestions for tuning I would have thought that I should actually consider increasing this value a bit, as not all of my data will fit in memory any more. Do you nevertheless want me to try what happens if I reduce random_page_cost even further?\n\nNo, that's probably quite low enough already ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Dec 2005 09:32:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " } ]
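If the cheap 8.0 index scans really do come from answer being almost perfectly ordered by session_id, the same physical ordering could be recreated on the 8.1 box. A hedged sketch using the CLUSTER syntax of that era (index and table names as given in the thread); note that this rewrites and locks the table, so it is not something to run casually on a production system:

CLUSTER idx_answer_session_id ON answer;
ANALYZE answer;  -- refresh the correlation statistic afterwards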
[ { "msg_contents": "src/include/pg_config_manual.h define BLCKSZ 8196 (8kb).\n\nSomewhere I readed BLCKSZ must be equal to memory page of operational \nsystem. And default BLCKSZ 8kb because first OS where postgres was build \nhas memory page size 8kb.\n\nI try to test this. Linux, memory page 4kb, disk page 4kb. I set BLCKSZ \nto 4kb. I get some performance improve, but not big, may be because I \nhave 4Gb on test server (amd64).\n\nCan anyone test it also? May be better move BLCKSZ from \npg_config_manual.h to pg_config.h?\n\n-- \nOlleg Samoylov\n", "msg_date": "Mon, 05 Dec 2005 16:47:31 +0300", "msg_from": "Olleg Samoylov <[email protected]>", "msg_from_op": true, "msg_subject": "BLCKSZ" }, { "msg_contents": "Olleg Samoylov <[email protected]> writes:\n> I try to test this. Linux, memory page 4kb, disk page 4kb. I set BLCKSZ \n> to 4kb. I get some performance improve, but not big, may be because I \n> have 4Gb on test server (amd64).\n\nIt's highly unlikely that reducing BLCKSZ is a good idea. There are bad\nside-effects on the maximum index entry size, maximum number of tuple\nfields, etc. In any case, when you didn't say *what* you tested, it's\nimpossible to judge the usefulness of the change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Dec 2005 10:02:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ " }, { "msg_contents": "Tom Lane wrote:\n> Olleg Samoylov <[email protected]> writes:\n> \n>>I try to test this. Linux, memory page 4kb, disk page 4kb. I set BLCKSZ \n>>to 4kb. I get some performance improve, but not big, may be because I \n>>have 4Gb on test server (amd64).\n> \n> It's highly unlikely that reducing BLCKSZ is a good idea. There are bad\n> side-effects on the maximum index entry size, maximum number of tuple\n> fields, etc. \n\nYes, when I set BLCKSZ=512, database dont' work. With BLCKSZ=1024 \ndatabase very slow. (This was surprise me. I expect increase performance \nin 8 times with 1024 BLCKSZ. :) ) As I already see in this maillist, \nincrease of BLCKSZ reduce performace too. May be exist optimum value? \nTheoretically BLCKSZ equal memory/disk page/block size may reduce \ndefragmentation drawback of memory and disk.\n\n> In any case, when you didn't say *what* you tested, it's\n> impossible to judge the usefulness of the change.\n> \t\t\tregards, tom lane\n\nI test performace on database test server. This is copy of working \nbilling system to test new features and experiments. Test task was one \nday traffic log. Average time of a one test was 260 minutes. Postgresql \n7.4.8. Server dual Opteron 240, 4Gb RAM.\n\n--\nOlleg\n", "msg_date": "Tue, 06 Dec 2005 00:32:03 +0300", "msg_from": "Olleg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" }, { "msg_contents": "Olleg wrote:\n\n> I test performace on database test server. This is copy of working \n> billing system to test new features and experiments. Test task was one \n> day traffic log. Average time of a one test was 260 minutes. Postgresql \n> 7.4.8. Server dual Opteron 240, 4Gb RAM.\n\nDid you execute queries from the log, one after another? 
That may not\nbe a representative test -- try sending multiple queries in parallel, to\nsee how the server would perform in the real world.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 5 Dec 2005 21:07:52 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" }, { "msg_contents": "At 04:32 PM 12/5/2005, Olleg wrote:\n>Tom Lane wrote:\n>>Olleg Samoylov <[email protected]> writes:\n>>\n>>>I try to test this. Linux, memory page 4kb, disk page 4kb. I set \n>>>BLCKSZ to 4kb. I get some performance improve, but not big, may be \n>>>because I have 4Gb on test server (amd64).\n>>It's highly unlikely that reducing BLCKSZ is a good idea. There \n>>are bad side-effects on the maximum index entry size, maximum \n>>number of tuple fields, etc.\n>\n>Yes, when I set BLCKSZ=512, database dont' work. With BLCKSZ=1024 \n>database very slow. (This was surprise me. I expect increase \n>performance in 8 times with 1024 BLCKSZ. :) )\n\nNo wonder pg did not work or was very slow BLCKSZ= 512 or 1024 means \n512 or 1024 *Bytes* respectively. That's 1/16 and 1/8 the default 8KB BLCKSZ.\n\n\n> As I already see in this maillist, increase of BLCKSZ reduce \n> performace too.\n\nWhere? BLCKSZ as large as 64KB has been shown to improve \nperformance. If running a RAID, BLCKSZ of ~1/2 the RAID stripe size \nseems to be a good value.\n\n\n>May be exist optimum value? Theoretically BLCKSZ equal memory/disk \n>page/block size may reduce defragmentation drawback of memory and disk.\nOf course there's an optimal value... ...and of course it is \ndependent on your HW, OS, and DB application.\n\nIn general, and in a very fuzzy sense, \"bigger is better\". pg files \nare laid down in 1GB chunks, so there's probably one limitation.\nGiven the HW you have mentioned, I'd try BLCKSZ= 65536 (you may have \nto recompile your kernel) and a RAID stripe of 128KB or 256KB as a first guess.\n\n\n>>In any case, when you didn't say *what* you tested, it's\n>>impossible to judge the usefulness of the change.\n>> regards, tom lane\n>\n>I test performace on database test server. This is copy of working \n>billing system to test new features and experiments. Test task was \n>one day traffic log. Average time of a one test was 260 minutes.\n\nHow large is a record in your billing system? You want it to be an \ninteger divisor of BLCKSZ (so for instance odd sizes in Bytes are BAD),\nBeyond that, you application domain matters. OLTP like systems need \nlow latency access for frequent small transactions. Data mining like \nsystems need to do IO in as big a chunk as the HW and OS will \nallow. Probably a good idea for BLCKSZ to be _at least_ max(8KB, 2x \nrecord size)\n\n\n> Postgresql 7.4.8. Server dual Opteron 240, 4Gb RAM.\n\n_Especially_ with that HW, upgrade to at least 8.0.x ASAP. It's a \ngood idea to not be running pg 7.x anymore anyway, but it's \nparticularly so if you are running 64b SMP boxes.\n\nRon\n\n\n", "msg_date": "Mon, 05 Dec 2005 20:21:21 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" }, { "msg_contents": "Ron <[email protected]> writes:\n> Where? 
BLCKSZ as large as 64KB has been shown to improve \n> performance.\n\nNot in the Postgres context, because you can't set BLCKSZ higher than\n32K without doing extensive surgery on the page item pointer layout.\nIf anyone's actually gone to that much trouble, they sure didn't\npublicize their results ...\n\n>> Postgresql 7.4.8. Server dual Opteron 240, 4Gb RAM.\n\n> _Especially_ with that HW, upgrade to at least 8.0.x ASAP. It's a \n> good idea to not be running pg 7.x anymore anyway, but it's \n> particularly so if you are running 64b SMP boxes.\n\nI agree with this bit --- 8.1 is a significant improvement on any prior\nversion for SMP boxes. It's likely that 8.2 will be better yet,\nbecause this is an area we just recently started paying serious\nattention to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 00:51:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ " }, { "msg_contents": "Ron wrote:\n> In general, and in a very fuzzy sense, \"bigger is better\". pg files are \n> laid down in 1GB chunks, so there's probably one limitation.\n\nHm, expect result of tests on other platforms, but if there theoretical \ndispute...\nI can't undestand why \"bigger is better\". For instance in search by \nindex. Index point to page and I need load page to get one row. Thus I \nload 8kb from disk for every raw. And keep it then in cache. You \nrecommend 64kb. With your recomendation I'll get 8 times more IO \nthroughput, 8 time more head seek on disk, 8 time more memory cache (OS \ncache and postgresql) become busy. I have small row in often loaded \ntable, 32 bytes. Table is not clustered, used several indices. And you \nrecommend load 64Kb when I need only 32b, isn't it?\n--\nOlleg\n", "msg_date": "Tue, 06 Dec 2005 13:40:47 +0300", "msg_from": "Olleg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" }, { "msg_contents": "On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:\n> I can't undestand why \"bigger is better\". For instance in search by \n> index. Index point to page and I need load page to get one row. Thus I \n> load 8kb from disk for every raw. And keep it then in cache. You \n> recommend 64kb. With your recomendation I'll get 8 times more IO \n> throughput, 8 time more head seek on disk, 8 time more memory cache (OS \n> cache and postgresql) become busy.\n\nHopefully, you won't have eight times the seeking; a single block ought to be\nin one chunk on disk. You're of course at your filesystem's mercy, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 6 Dec 2005 11:59:35 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" }, { "msg_contents": "On Tue, 6 Dec 2005, Steinar H. Gunderson wrote:\n\n> On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:\n>> I can't undestand why \"bigger is better\". For instance in search by\n>> index. Index point to page and I need load page to get one row. Thus I\n>> load 8kb from disk for every raw. And keep it then in cache. You\n>> recommend 64kb. With your recomendation I'll get 8 times more IO\n>> throughput, 8 time more head seek on disk, 8 time more memory cache (OS\n>> cache and postgresql) become busy.\n>\n> Hopefully, you won't have eight times the seeking; a single block ought to be\n> in one chunk on disk. 
You're of course at your filesystem's mercy, though.\n\nin fact useually it would mean 1/8 as many seeks, since the 64k chunk \nwould be created all at once it's probably going to be one chunk on disk \nas Steiner points out and that means that you do one seek per 64k instead \nof one seek per 8k.\n\nWith current disks it's getting to the point where it's the same cost to \nread 8k as it is to read 64k (i.e. almost free, you could read \nsubstantially more then 64k and not notice it in I/O speed), it's the \nseeks that are expensive.\n\nyes it will eat up more ram, but assuming that you are likly to need other \nthings nearby it's likly to be a win.\n\nas processor speed keeps climing compared to memory and disk speed true \nrandom access is really not the correct way to think about I/O anymore. \nIt's frequently more appropriate to think of your memory and disks as if \nthey were tape drives (seek then read, repeat)\n\neven for memory access what you really do is seek to the beginning of a \nblock (expensive) then read that block into cache (cheap, you get the \nentire cacheline of 64-128 bytes no matter if you need it or not) and then \nyou can then access that block fairly quickly. with memory on SMP machines \nit's a constant cost to seek anywhere in memory, with NUMA machines \n(including multi-socket Opterons) the cost to do the seek and fetch \ndepends on where in memory you are seeking to and what cpu you are running \non. it also becomes very expensive for multiple CPU's to write to memory \naddresses that are in the same block (cacheline) of memory.\n\nfor disks it's even more dramatic, the seek is incredibly expensive \ncompared to the read/write, and the cost of the seek varies based on how \nfar you need to seek, but once you are on a track you can read the entire \ntrack in for about the same cost as a single block (in fact the drive \nuseually does read the entire track before sending the one block on to \nyou). Raid complicates this becouse you have a block size per drive and \nreading larger then that block size involves multiple drives.\n\nmost of the work in dealing with these issues and optimizing for them is \nthe job of the OS, some other databases work very hard to take over this \nwork from the OS, Postgres instead tries to let the OS do this work, but \nwe still need to keep it in mind when configuring things becouse it's \npossible to make it much easier or much harder for the OS optimize things.\n\nDavid Lang\n", "msg_date": "Tue, 6 Dec 2005 03:39:41 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BLCKSZ" } ]
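As a small aside to the thread above: the BLCKSZ a server was built with can be checked from SQL on reasonably recent releases, which is handy when comparing test builds; changing it still means recompiling, either by editing pg_config_manual.h as described above or, in later releases, via a configure-time block size switch.

SHOW block_size;       -- page size, in bytes, that the binaries were compiled with
SHOW shared_buffers;   -- buffer setting worth revisiting after any block size change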
[ { "msg_contents": " \n\n> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]] \n> Gesendet: Montag, 5. Dezember 2005 15:33\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: AW: [PERFORM] Queries taking ages in PG 8.1, \n> have been much faster in PG<=8.0 \n \n> Could we see the pg_stats row for answer.session_id in both \n> 8.0 and 8.1?\n\nHere you are:\n\nselect null_frac\n\t, avg_width\n\t, n_distinct\n\t, most_common_vals\n\t, most_common_freqs\n\t, histogram_bounds\n\t, Correlation\nfrom pg_stats\nwhere schemaname = 'survey'\nand tablename = 'answer'\nand attname = 'session_id';\n\n8.1:\nnull_frac\t\t0\navg_width\t\t4\nn_distinct\t\t33513\nmost_common_vals\t{1013854,1017890,1021551,1098817,764249,766938,776353,780954,782232,785985}\nmost_common_freqs\t{0.001,0.001,0.001,0.001,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667}\nhistogram_bounds\t{757532,819803,874935,938170,1014421,1081507,1164659,1237281,1288267,1331016,1368939}\nCorrelation\t\t-0.0736492\n\n8.0.3:\nnull_frac\t\t0\navg_width\t\t4\nn_distinct\t\t29287\nmost_common_vals\t{765411,931762,983933,1180453,1181959,1229963,1280249,1288736,1314970,764901}\nmost_common_freqs\t{0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.000666667}\nhistogram_bounds\t{757339,822949,875834,939085,1004782,1065251,1140682,1218336,1270024,1312170,1353082}\nCorrelation\t\t-0.237136\n\nKind regards\n\n Markus\n", "msg_date": "Mon, 5 Dec 2005 15:44:54 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " }, { "msg_contents": "\"Markus Wollny\" <[email protected]> writes:\n>> Could we see the pg_stats row for answer.session_id in both \n>> 8.0 and 8.1?\n\n> Here you are:\n\n> 8.1:\n> Correlation\t\t-0.0736492\n\n> 8.0.3:\n> Correlation\t\t-0.237136\n\nInteresting --- if the 8.1 database is a dump and restore of the 8.0,\nyou'd expect the physical ordering to be similar. Why is 8.1 showing\na significantly lower correlation? That has considerable impact on the\nestimated cost of an indexscan (plain not bitmap), and so it might\nexplain why 8.1 is mistakenly avoiding the indexscan ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Dec 2005 10:11:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " }, { "msg_contents": "On Mon, 05 Dec 2005 10:11:41 -0500, Tom Lane <[email protected]>\nwrote:\n>> Correlation\t\t-0.0736492\n>> Correlation\t\t-0.237136\n\n>That has considerable impact on the\n>estimated cost of an indexscan\n\nThe cost estimator uses correlationsquared. So all correlations\nbetween -0.3 and +0.3 can be considered equal under the assumption\nthat estimation errors of up to 10% are acceptable.\nServus\n Manfred\n", "msg_date": "Sat, 10 Dec 2005 18:14:00 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " } ]
[ { "msg_contents": "> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]] \n> Gesendet: Montag, 5. Dezember 2005 16:12\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: AW: AW: [PERFORM] Queries taking ages in PG 8.1, \n> have been much faster in PG<=8.0 \n> \n> \"Markus Wollny\" <[email protected]> writes:\n> >> Could we see the pg_stats row for answer.session_id in \n> both 8.0 and \n> >> 8.1?\n> \n> > Here you are:\n> \n> > 8.1:\n> > Correlation\t\t-0.0736492\n> \n> > 8.0.3:\n> > Correlation\t\t-0.237136\n> \n> Interesting --- if the 8.1 database is a dump and restore of \n> the 8.0, you'd expect the physical ordering to be similar. \n\nI dumped the data from my 8.0.1 cluster on 2005-11-18 00:23 using pg_dumpall with no further options; the dump was passed through iconv to clear up some UTF-8 encoding issues, then restored into a fresh 8.1 cluster where it went productive; I used the very same dump to restore the 8.0.3 cluster. So there is a difference between the two datasets, an additional 230.328 rows in the answers-table.\n\n> Why is 8.1 showing a significantly lower correlation? That \n> has considerable impact on the estimated cost of an indexscan \n> (plain not bitmap), and so it might explain why 8.1 is \n> mistakenly avoiding the indexscan ...\n\nI just ran a vacuum analyze on the table, just to make sure that the stats are up to date (forgot that on the previous run, thanks to pg_autovacuum...), and the current correlation on the 8.1 installation is now calculated as -0.158921. That's still more than twice the value as for the 8.0-db. I don't know whether that is significant, though.\n\nKind regards\n\n Markus\n", "msg_date": "Mon, 5 Dec 2005 16:40:38 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " } ]
[ { "msg_contents": " \n> -----Ursprüngliche Nachricht-----\n> Von: [email protected] \n> [mailto:[email protected]] Im Auftrag \n> von Markus Wollny\n> Gesendet: Montag, 5. Dezember 2005 16:41\n> An: Tom Lane\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Queries taking ages in PG 8.1, have \n> been much faster in PG<=8.0 \n\n> an additional 230.328 rows in the answers-table.\n\nThat was supposed to read 230,328 rows, sorry.\n", "msg_date": "Mon, 5 Dec 2005 16:45:25 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 " } ]
[ { "msg_contents": "Hi,\n \nI'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n \nMy application updates counters in DB. I left a test over the night that\nincreased counter of specific record. After night running (several\nhundreds of thousands updates), I found out that the time spent on\nUPDATE increased to be more than 1.5 second (at the beginning it was\nless than 10ms)! Issuing VACUUM ANALYZE and even reboot didn't seemed to\nsolve the problem.\n \nI succeeded to re-produce this with a simple test:\n \nI created a very simple table that looks like that:\nCREATE TABLE test1\n(\n id int8 NOT NULL,\n counter int8 NOT NULL DEFAULT 0,\n CONSTRAINT \"Test1_pkey\" PRIMARY KEY (id)\n) ;\n \nI've inserted 15 entries and wrote a script that increase the counter of\nspecific record over and over. The SQL command looks like this:\nUPDATE test1 SET counter=number WHERE id=10;\n \nAt the beginning the UPDATE time was around 15ms. After ~90000 updates,\nthe execution time increased to be more than 120ms.\n \n1. What is the reason for this phenomena?\n2. Is there anything that can be done in order to improve this?\n \nThanks,\nAssaf\n\n\n\n\n\nHi,\n \nI'm using PostgreSQL \n8.0.3 on Linux RedHat WS 3.0.\n \nMy application \nupdates counters in DB. I left a test over the night that increased counter of \nspecific record. After night running (several hundreds of thousands updates), I \nfound out that the time spent on UPDATE increased to be more than 1.5 second (at \nthe beginning it was less than 10ms)! Issuing VACUUM ANALYZE and even reboot \ndidn't seemed to solve the problem.\n \nI succeeded to \nre-produce this with a simple test:\n \nI created a very \nsimple table that looks like that:\nCREATE TABLE \ntest1(  id int8 NOT NULL,  counter int8 NOT NULL DEFAULT \n0,  CONSTRAINT \"Test1_pkey\" PRIMARY KEY (id)) ;\n \nI've inserted 15 \nentries and wrote a script that increase the counter of specific \nrecord over and over. The SQL command looks like this:\nUPDATE test1 SET \ncounter=number WHERE id=10;\n \nAt the beginning the \nUPDATE time was around 15ms. After ~90000 updates, the execution time increased \nto be more than 120ms.\n \n1. What is the \nreason for this phenomena?\n2. Is there anything \nthat can be done in order to improve this?\n \nThanks,\nAssaf", "msg_date": "Mon, 5 Dec 2005 19:05:01 +0200", "msg_from": "\"Assaf Yaari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance degradation after successive UPDATE's" }, { "msg_contents": "On Mon, Dec 05, 2005 at 19:05:01 +0200,\n Assaf Yaari <[email protected]> wrote:\n> Hi,\n> \n> I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n> \n> My application updates counters in DB. I left a test over the night that\n> increased counter of specific record. After night running (several\n> hundreds of thousands updates), I found out that the time spent on\n> UPDATE increased to be more than 1.5 second (at the beginning it was\n> less than 10ms)! Issuing VACUUM ANALYZE and even reboot didn't seemed to\n> solve the problem.\n\nYou need to be running vacuum more often to get rid of the deleted rows\n(update is essentially insert + delete). Once you get too many, plain\nvacuum won't be able to clean them up without raising the value you use for\nFSM. 
By now the table is really bloated and you probably want to use\nvacuum full on it.\n", "msg_date": "Mon, 5 Dec 2005 14:36:04 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degradation after successive UPDATE's" }, { "msg_contents": "Hi,\n\nYou might try these steps\n\n1. Do a vacuum full analyze\n2. Reindex the index on id column\n3. Cluster the table based on this index\n\nOn 12/5/05, Assaf Yaari <[email protected]> wrote:\n>\n> Hi,\n>\n> I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n>\n> My application updates counters in DB. I left a test over the night that\n> increased counter of specific record. After night running (several hundreds\n> of thousands updates), I found out that the time spent on UPDATE increased\n> to be more than 1.5 second (at the beginning it was less than 10ms)! Issuing\n> VACUUM ANALYZE and even reboot didn't seemed to solve the problem.\n>\n> I succeeded to re-produce this with a simple test:\n>\n> I created a very simple table that looks like that:\n> CREATE TABLE test1\n> (\n> id int8 NOT NULL,\n> counter int8 NOT NULL DEFAULT 0,\n> CONSTRAINT \"Test1_pkey\" PRIMARY KEY (id)\n> ) ;\n>\n> I've inserted 15 entries and wrote a script that increase the counter of\n> specific record over and over. The SQL command looks like this:\n> UPDATE test1 SET counter=number WHERE id=10;\n>\n> At the beginning the UPDATE time was around 15ms. After ~90000 updates, the\n> execution time increased to be more than 120ms.\n>\n> 1. What is the reason for this phenomena?\n> 2. Is there anything that can be done in order to improve this?\n>\n> Thanks,\n> Assaf\n\n\n--\nRegards\nPandu\n", "msg_date": "Tue, 6 Dec 2005 15:22:01 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degradation after successive UPDATE's" } ]
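For reference, the recovery that Bruno and Pandurangan describe comes down to something like the following; the table name is taken from the test case above, and how often to run the routine VACUUM is only an illustration -- the right interval depends on the update rate and the FSM settings:

  -- one-time cleanup of the already-bloated table
  VACUUM FULL VERBOSE test1;
  REINDEX TABLE test1;

  -- optional: re-order the heap along the primary key (8.0-era syntax)
  CLUSTER "Test1_pkey" ON test1;

  -- from then on, a plain VACUUM every few thousand updates keeps the
  -- chain of dead row versions, and hence the UPDATE time, bounded
  VACUUM ANALYZE test1;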
[ { "msg_contents": "\nI have a case where an outer join's taking 10X more time than\na non-outer join; and it looks to me like the outer join could\nhave taken advantage of the same indexes that the non-outer join did.\n\n\nIn both cases, the outermost thing is a nested loop. The\ntop subplan gets all \"point features\" whre featureid=120.\nThe outer join did not use an index for this. \nThe non-outer join did use an index for this.\n\nAny reason it couldn't have use the index there?\n\n\nAlso - in both cases the second part of the nested loop\nis using the same multi-column index on the table \"facets\".\nThe non-outer-join uses both columns of this multi-column index.\nThe outer-join only uses one of the columns and is much slower.\n\nAny reason it couldn't have use both columns of the index there?\n\n\nAttached below are explain analyze for the slow outer join\nand the fast non-outer join. This is using 8.1.0.\n\n Thanks in advance,\n Ron\n\n===============================================================================\n== The outer join - slow\n===============================================================================\nfli=# explain analyze select * from userfeatures.point_features upf left join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=2.11..90317.33 rows=1207 width=505) (actual time=8.985..734.761 rows=917 loops=1)\n -> Seq Scan on point_features upf (cost=0.00..265.85 rows=948 width=80) (actual time=8.792..14.270 rows=917 loops=1)\n Filter: (featureid = 120)\n -> Bitmap Heap Scan on facets b (cost=2.11..94.60 rows=31 width=425) (actual time=0.101..0.770 rows=1 loops=917)\n Recheck Cond: (b.entity_id = \"outer\".entity_id)\n Filter: (fac_id = 261)\n -> Bitmap Index Scan on \"fac_val(entity_id,fac_id)\" (cost=0.00..2.11 rows=31 width=0) (actual time=0.067..0.067 rows=32 loops=917)\n Index Cond: (b.entity_id = \"outer\".entity_id)\n Total runtime: 736.444 ms\n(9 rows)\n\n\n\n===============================================================================\n== The non-outer join - fast\n===============================================================================\nfli=# explain analyze select * from userfeatures.point_features upf join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=23.32..4942.48 rows=1207 width=505) (actual time=0.571..55.867 rows=917 loops=1)\n -> Bitmap Heap Scan on point_features upf (cost=23.32..172.17 rows=948 width=80) (actual time=0.468..2.226 rows=917 loops=1)\n Recheck Cond: (featureid = 120)\n -> Bitmap Index Scan on point_features__featureid (cost=0.00..23.32 rows=948 width=0) (actual time=0.413..0.413 rows=917 loops=1)\n Index Cond: (featureid = 120)\n -> Index Scan using \"fac_val(entity_id,fac_id)\" on facets b (cost=0.00..5.02 rows=1 width=425) (actual time=0.051..0.053 rows=1 loops=917)\n Index Cond: ((b.entity_id = \"outer\".entity_id) AND (b.fac_id = 261))\n Total runtime: 56.892 ms\n(8 rows)\n\n\n\n\n\n===============================================================================\n== The tables involved.\n===============================================================================\n\nfli=# \\d 
facets\n Table \"facet.facets\"\n Column | Type | Modifiers\n-----------+---------+-----------\n entity_id | integer |\n nam_hash | integer |\n val_hash | integer |\n fac_id | integer |\n dis_id | integer |\n fac_val | text |\n fac_ival | integer |\n fac_tval | text |\n fac_nval | numeric |\n fac_raval | real[] |\n fac_bval | bytea |\nIndexes:\n \"fac_val(entity_id,fac_id)\" btree (entity_id, fac_id)\n \"facets__dis_id\" btree (dis_id)\n \"facets__ent_id\" btree (entity_id)\n \"facets__fac_id\" btree (fac_id)\n \"facets__id_value\" btree (fac_id, fac_val) CLUSTER\nForeign-key constraints:\n \"facets_entity_id_fkey\" FOREIGN KEY (entity_id) REFERENCES entity(entity_id) ON DELETE CASCADE\n \"facets_fac_id_fkey\" FOREIGN KEY (fac_id) REFERENCES facet_lookup(fac_id) ON DELETE CASCADE\n\nfli=# \\d point_features \n Table \"userfeatures.point_features\"\n Column | Type | Modifiers\n-----------+----------+------------------------------------------------------------------\n pointid | integer | not null default nextval('point_features_pointid_seq'::regclass)\n entity_id | integer |\n featureid | integer |\n sessionid | integer |\n userid | integer |\n extid | text |\n label | text |\n iconid | integer |\n the_geom | geometry |\nIndexes:\n \"point_features__featureid\" btree (featureid)\n \"point_features__postgis\" gist (the_geom)\nCheck constraints:\n \"enforce_dims_the_geom\" CHECK (ndims(the_geom) = 2)\n \"enforce_geotype_the_geom\" CHECK (geometrytype(the_geom) = 'POINT'::text OR the_geom IS NULL)\n \"enforce_srid_the_geom\" CHECK (srid(the_geom) = -1)\n\n\n\n\n===============================================================================\n== version info\n===============================================================================\n\nfli=# select version();\n version\n-------------------------------------------------------------------------------------\n PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE Linux)\n(1 row)\n", "msg_date": "Mon, 5 Dec 2005 14:13:02 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Missed index opportunity for outer join?" }, { "msg_contents": "[email protected] writes:\n> In both cases, the outermost thing is a nested loop. The\n> top subplan gets all \"point features\" whre featureid=120.\n> The outer join did not use an index for this. \n> The non-outer join did use an index for this.\n\nHm, I can't duplicate this in a simple test (see below). 
There were\nsome changes in this area between 8.1.0 and branch tip, but a quick\nlook at the CVS logs doesn't suggest that any of them would be related\n(AFAICS the intentions of the patches were to change behavior only for\nOR clauses, and you haven't got any here).\n\nCan you try updating to 8.1 branch tip and see if the problem goes away?\nOr if not, generate a self-contained test case that shows the problem\nstarting from an empty database?\n\nActually, a quick and dirty thing would be to try my would-be test case\nbelow, and see if you get a seqscan on your copy.\n\n\t\t\tregards, tom lane\n\nregression=# create table point_features(entity_id int, featureid int);\nCREATE TABLE\nregression=# create index point_features__featureid on point_features(featureid);\nCREATE INDEX\nregression=# create table facets(entity_id int, fac_id int);\nCREATE TABLE\nregression=# create index \"fac_val(entity_id,fac_id)\" on facets(entity_id,fac_id);\nCREATE INDEX\nregression=# set enable_hashjoin TO 0;\nSET\nregression=# set enable_mergejoin TO 0;\nSET\nregression=# explain select * from point_features upf join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=1.03..59.90 rows=1 width=16)\n -> Bitmap Heap Scan on point_features upf (cost=1.03..11.50 rows=10 width=8)\n Recheck Cond: (featureid = 120)\n -> Bitmap Index Scan on point_features__featureid (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (featureid = 120)\n -> Index Scan using \"fac_val(entity_id,fac_id)\" on facets b (cost=0.00..4.83 rows=1 width=8)\n Index Cond: ((b.entity_id = \"outer\".entity_id) AND (b.fac_id = 261))\n(7 rows)\n\nregression=# explain select * from point_features upf left join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=2.07..127.70 rows=10 width=16)\n -> Bitmap Heap Scan on point_features upf (cost=1.03..11.50 rows=10 width=8)\n Recheck Cond: (featureid = 120)\n -> Bitmap Index Scan on point_features__featureid (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (featureid = 120)\n -> Bitmap Heap Scan on facets b (cost=1.03..11.50 rows=10 width=8)\n Recheck Cond: (b.entity_id = \"outer\".entity_id)\n Filter: (fac_id = 261)\n -> Bitmap Index Scan on \"fac_val(entity_id,fac_id)\" (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (b.entity_id = \"outer\".entity_id)\n(10 rows)\n\n(Note to self: it is a bit odd that fac_id=261 is pushed down to become\nan indexqual in one case but not the other ...)\n", "msg_date": "Mon, 05 Dec 2005 17:38:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join? 
" }, { "msg_contents": "On Mon, 5 Dec 2005, Tom Lane wrote:\n> \n> Hm, I can't duplicate this in a simple test...\n> Can you try updating to 8.1 branch tip ...\n> Actually, a quick and dirty thing would be to try my would-be test case\n> below, and see if you get a seqscan on your copy.\n\nWith your simple test-case I did not get the seqscan on 8.1.0.\nOutput shown below that looks just like yours.\n\nI'll try upgrading a devel machine too - but will only be \nable to try on smalller test databases in the near term.\n\n> (Note to self: it is a bit odd that fac_id=261 is pushed down to become\n> an indexqual in one case but not the other ...)\n\nI speculate that the seq_scan wasn't really the slow part\ncompared to not using using both parts of the index in the \nsecond part of the plan. The table point_features is tens of\nthousands of rows, while the table facets is tens of millions.\n\n Thanks,\n Ron\n\n===============================================================================\n=== Output of Tom's test case showing the same results he got.\n===============================================================================\n\ngreenie /home/pg2> createdb foo\nCREATE DATABASE\ngreenie /home/pg2> psql foo\n[...]\nfoo=# create table point_features(entity_id int, featureid int);\nCREATE TABLE\nfoo=# create index point_features__featureid on point_features(featureid);\nCREATE INDEX\nfoo=# create table facets(entity_id int, fac_id int);\nCREATE TABLE\nfoo=# create index \"fac_val(entity_id,fac_id)\" on facets(entity_id,fac_id);\nCREATE INDEX\nfoo=# set enable_hashjoin TO 0;\nSET\nfoo=# set enable_mergejoin TO 0;\nSET\nfoo=# explain select * from point_features upf join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=1.03..49.15 rows=1 width=16)\n -> Bitmap Heap Scan on point_features upf (cost=1.03..10.27 rows=10 width=8)\n Recheck Cond: (featureid = 120)\n -> Bitmap Index Scan on point_features__featureid (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (featureid = 120)\n -> Index Scan using \"fac_val(entity_id,fac_id)\" on facets b (cost=0.00..3.88 rows=1 width=8)\n Index Cond: ((b.entity_id = \"outer\".entity_id) AND (b.fac_id = 261))\n(7 rows)\n\nfoo=# explain select * from point_features upf left join facets b on (b.entity_id = upf.entity_id and b.fac_id=261) where featureid in (120);\n QUERY PLAN \n-------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=2.07..114.25 rows=10 width=16)\n -> Bitmap Heap Scan on point_features upf (cost=1.03..10.27 rows=10 width=8)\n Recheck Cond: (featureid = 120)\n -> Bitmap Index Scan on point_features__featureid (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (featureid = 120)\n -> Bitmap Heap Scan on facets b (cost=1.03..10.27 rows=10 width=8)\n Recheck Cond: (b.entity_id = \"outer\".entity_id)\n Filter: (fac_id = 261)\n -> Bitmap Index Scan on \"fac_val(entity_id,fac_id)\" (cost=0.00..1.03 rows=10 width=0)\n Index Cond: (b.entity_id = \"outer\".entity_id)\n(10 rows)\n\nfoo=# \n\n\n", "msg_date": "Mon, 5 Dec 2005 15:05:04 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Missed index opportunity for outer join?" 
}, { "msg_contents": "[email protected] writes:\n> On Mon, 5 Dec 2005, Tom Lane wrote:\n>> (Note to self: it is a bit odd that fac_id=261 is pushed down to become\n>> an indexqual in one case but not the other ...)\n\n> I speculate that the seq_scan wasn't really the slow part\n> compared to not using using both parts of the index in the \n> second part of the plan. The table point_features is tens of\n> thousands of rows, while the table facets is tens of millions.\n\nAgreed, but it's still odd that it would use a seqscan in one case and\nnot the other.\n\nI found the reason why the fac_id=261 clause isn't getting used as an\nindex qual; it's a bit of excessive paranoia that goes back to 2002.\nI've fixed that for 8.1.1, but am still wondering about the seqscan\non the other side of the join.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 11:57:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join? " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] writes:\n>>On Mon, 5 Dec 2005, Tom Lane wrote:\n> \n>>I speculate that the seq_scan wasn't really the slow part\n>>compared to not using using both parts of the index in the \n>>second part of the plan. The table point_features is tens of\n>>thousands of rows, while the table facets is tens of millions.\n> \n> Agreed, but it's still odd that it would use a seqscan in one case and\n> not the other.\n\nHmm. Unfortunately that was happening on a production system\nand the amount of data in the tables has changed - and now I'm\nno longer getting a seq_scan when I try to reproduce it. That\nsystem is still using 8.1.0.\n\nThe \"point_features\" table is pretty dynamic and it's possible\nthat the data changed between my 'explain analyze' statement in\nthe first post in this thread. However since both of them\nshow an estimate of \"rows=948\" and returned an actual of 917 I\ndon't think that happened.\n\n> I found the reason why the fac_id=261 clause isn't getting used as an\n> index qual; it's a bit of excessive paranoia that goes back to 2002.\n> I've fixed that for 8.1.1, but am still wondering about the seqscan\n> on the other side of the join.\n\nI now have a development system with cvs-tip; but have not yet\nreproduced the seq scan on it either. I'm using the same data\nthat was in \"point_features\" with \"featureid=120\" - but don't have\nany good way of knowing what other data may have been in the table\nat the time. If desired, I could set up a cron job to periodically\nexplain analyze that query and see if it recurs.\n", "msg_date": "Tue, 06 Dec 2005 12:17:55 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join?" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> The \"point_features\" table is pretty dynamic and it's possible\n> that the data changed between my 'explain analyze' statement in\n> the first post in this thread. However since both of them\n> show an estimate of \"rows=948\" and returned an actual of 917 I\n> don't think that happened.\n\nYeah, I had considered the same explanation and rejected it for the same\nreason. Also, the difference in estimated cost is significant (265.85\nfor the seqscan vs 172.17 for the bitmap scan) so it's hard to think\nthat a small change in stats --- so small as to not reflect in estimated\nrow count --- would change the estimate by that much.\n\n[ thinks some more... 
] Of course, what we have to remember is that the\nplanner is actually going to choose based on the ultimate join cost, not\non the subplan costs. The reason the seqscan survived initial\ncomparisons at all is that it has a cheaper startup cost (less time to\nreturn the first tuple) than the bitmap scan, and this will be reflected\ninto a cheaper startup cost for the overall nestloop. The extra hundred\nunits of total cost would only reflect into the nestloop total cost ---\nand there, they would be considered \"down in the noise\" compared to a\n90k total estimate. So probably what happened is that the planner\npreferred this plan on the basis that the total costs are the same to\nwithin estimation error while the startup cost is definitely less.\n\nIn this explanation, the reason for the change in plans over time could\nbe a change in the statistics for the other table. Is \"facets\" more\ndynamic than \"point_features\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 15:33:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join? " }, { "msg_contents": "Tom Lane wrote:\n> ...planner is actually going to choose based on the ultimate join cost, \n> not on the subplan costs...\n> \n> In this explanation, the reason for the change in plans over time could\n> be a change in the statistics for the other table. Is \"facets\" more\n> dynamic than \"point_features\"?\n\nIn total rows changing it's more dynamic, but percentage-wise, it's\nless dynamic (point_features probably turns round 50% of it's rows\nin a day -- while facets turns over about 3% per day -- but facets\nis 1000X larger).\n\nFacets is a big table with rather odd distributions of values.\nMany of the values in the indexed columns show up only\nonce, others show up hundreds-of-thousands of times. Perhaps\nan analyze ran and just randomly sampled differently creating\ndifferent stats on that table?\n", "msg_date": "Tue, 06 Dec 2005 13:15:13 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join?" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Tom Lane wrote:\n>> In this explanation, the reason for the change in plans over time could\n>> be a change in the statistics for the other table. Is \"facets\" more\n>> dynamic than \"point_features\"?\n\n> Facets is a big table with rather odd distributions of values.\n> Many of the values in the indexed columns show up only\n> once, others show up hundreds-of-thousands of times. Perhaps\n> an analyze ran and just randomly sampled differently creating\n> different stats on that table?\n\nIf you have background tasks doing ANALYZEs then this explanation seems\nplausible enough. I'm willing to accept it anyway ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 16:24:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join? " }, { "msg_contents": "Tom Lane wrote:\n> If you have background tasks doing ANALYZEs then this explanation seems\n> plausible enough. I'm willing to accept it anyway ...\n\nYup, there are such tasks. 
I could dig through logs to try to confirm\nor reject it; but I think it's reasonably likely that this happened.\nBasically, data gets added to that table as it becomes ready from other\nsystems, and after each batch a vacuum analyze is run.\n", "msg_date": "Tue, 06 Dec 2005 13:42:09 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed index opportunity for outer join?" } ]
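For anyone trying to catch a plan flip like this in the act, the two access paths can be compared by hand without waiting for the statistics to drift back; this is purely a diagnostic sketch using the tables from the thread, not something to leave set in production:

  BEGIN;
  SET LOCAL enable_seqscan = off;   -- price the bitmap/index path instead
  EXPLAIN ANALYZE
  SELECT *
  FROM userfeatures.point_features upf
  LEFT JOIN facets b
    ON (b.entity_id = upf.entity_id AND b.fac_id = 261)
  WHERE featureid IN (120);
  ROLLBACK;                         -- SET LOCAL vanishes with the transaction

Re-running that before and after the background ANALYZE jobs makes it easy to see whether the seqscan choice tracks a change in the facets statistics.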
[ { "msg_contents": "I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been\ndealing with Psql for over than 2 years now, but I've never had this case\nbefore.\n\nI have a table that has about 20 rows in it.\n\n Table \"public.s_apotik\"\n Column | Type | Modifiers\n-------------------+------------------------------+------------------\nobat_id | character varying(10) | not null\nstock | numeric | not null\ns_min | numeric | not null\ns_jual | numeric | \ns_r_jual | numeric | \ns_order | numeric | \ns_r_order | numeric | \ns_bs | numeric | \nlast_receive | timestamp without time zone |\nIndexes:\n \"s_apotik_pkey\" PRIMARY KEY, btree(obat_id)\n \nWhen I try to UPDATE one of the row, nothing happens for a very long time.\nFirst, I run it on PgAdminIII, I can see the miliseconds are growing as I\nwaited. Then I stop the query, because the time needed for it is unbelievably\nwrong.\n\nThen I try to run the query from the psql shell. For example, the table has\nobat_id : A, B, C, D.\ndb=# UPDATE s_apotik SET stock = 100 WHERE obat_id='A';\n(.... nothing happens.. I press the Ctrl-C to stop it. This is what comes out\n:)\nCancel request sent\nERROR: canceling query due to user request\n\n(If I try another obat_id)\ndb=# UPDATE s_apotik SET stock = 100 WHERE obat_id='B';\n(Less than a second, this is what comes out :)\nUPDATE 1\n\nI can't do anything to that row. I can't DELETE it. Can't DROP the table. \nI want this data out of my database.\nWhat should I do? It's like there's a falsely pointed index here.\nAny help would be very much appreciated.\n\n\nRegards,\nJenny Tania\n\n\n\t\t\n__________________________________________ \nYahoo! DSL � Something to write home about. \nJust $16.99/mo. or less. \ndsl.yahoo.com \n\n", "msg_date": "Tue, 6 Dec 2005 00:38:38 -0800 (PST)", "msg_from": "Jenny <[email protected]>", "msg_from_op": true, "msg_subject": "need help" }, { "msg_contents": "Jenny schrieb:\n> I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been\n> dealing with Psql for over than 2 years now, but I've never had this case\n> before.\n> \n> I have a table that has about 20 rows in it.\n> \n> Table \"public.s_apotik\"\n> Column | Type | Modifiers\n> -------------------+------------------------------+------------------\n> obat_id | character varying(10) | not null\n> stock | numeric | not null\n> s_min | numeric | not null\n> s_jual | numeric | \n> s_r_jual | numeric | \n> s_order | numeric | \n> s_r_order | numeric | \n> s_bs | numeric | \n> last_receive | timestamp without time zone |\n> Indexes:\n> \"s_apotik_pkey\" PRIMARY KEY, btree(obat_id)\n> \n> When I try to UPDATE one of the row, nothing happens for a very long time.\n> First, I run it on PgAdminIII, I can see the miliseconds are growing as I\n> waited. Then I stop the query, because the time needed for it is unbelievably\n> wrong.\n> \n> Then I try to run the query from the psql shell. For example, the table has\n> obat_id : A, B, C, D.\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='A';\n> (.... nothing happens.. I press the Ctrl-C to stop it. This is what comes out\n> :)\n> Cancel request sent\n> ERROR: canceling query due to user request\n> \n> (If I try another obat_id)\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='B';\n> (Less than a second, this is what comes out :)\n> UPDATE 1\n> \n> I can't do anything to that row. I can't DELETE it. Can't DROP the table. \n> I want this data out of my database.\n> What should I do? 
It's like there's a falsely pointed index here.\n> Any help would be very much appreciated.\n> \n\n1) lets hope you do regulary backups - and actually tested restore.\n1a) if not, do it right now\n2) reindex the table\n3) try again to modify\n\nQ: are there any foreign keys involved? If so, reindex those\ntables too, just in case.\n\ndid you vacuum regulary?\n\nHTH\nTino\n", "msg_date": "Tue, 06 Dec 2005 09:54:40 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help" }, { "msg_contents": "Jenny wrote:\n> I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been\n> dealing with Psql for over than 2 years now, but I've never had this case\n> before.\n\n> Then I try to run the query from the psql shell. For example, the table has\n> obat_id : A, B, C, D.\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='A';\n> (.... nothing happens.. I press the Ctrl-C to stop it. This is what comes out\n> :)\n> Cancel request sent\n> ERROR: canceling query due to user request\n> \n> (If I try another obat_id)\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='B';\n> (Less than a second, this is what comes out :)\n> UPDATE 1\n\nIt could well be another client has a lock on that record, for example \nby doing a SELECT FOR UPDATE w/o a NOWAIT.\n\nYou can verify by querying pg_locks. IIRC you can also see what query \ncaused the lock by joining against some other system table, but the \ndetails escape me atm (check the archives, I learned that by following \nthis list).\n\nIf it's indeed a locked record, the process causing the lock is listed. \nEither kill it or call it's owner back from his/her coffee break ;)\n\nI doubt it's anything serious.\n\n-- \nAlban Hertroys\[email protected]\n\nmagproductions b.v.\n\nT: ++31(0)534346874\nF: ++31(0)534346876\nM:\nI: www.magproductions.nl\nA: Postbus 416\n 7500 AK Enschede\n\n//Showing your Vision to the World//\n", "msg_date": "Tue, 06 Dec 2005 11:45:48 +0100", "msg_from": "Alban Hertroys <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help" }, { "msg_contents": "\nTry to execute your query (in psql) with prefixing by EXPLAIN ANALYZE and\nsend us the result\n db=# EXPLAIN ANALYZE UPDATE s_apotik SET stock = 100 WHERE obat_id='A';\n\nregards\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tino Wildenhain\nSent: mardi 6 décembre 2005 09:55\nTo: Jenny\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [PERFORM] [GENERAL] need help\n\nJenny schrieb:\n> I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). \n> I've been dealing with Psql for over than 2 years now, but I've never \n> had this case before.\n> \n> I have a table that has about 20 rows in it.\n> \n> Table \"public.s_apotik\"\n> Column | Type | Modifiers\n> -------------------+------------------------------+------------------\n> obat_id | character varying(10) | not null\n> stock | numeric | not null\n> s_min | numeric | not null\n> s_jual | numeric | \n> s_r_jual | numeric | \n> s_order | numeric | \n> s_r_order | numeric | \n> s_bs | numeric | \n> last_receive | timestamp without time zone |\n> Indexes:\n> \"s_apotik_pkey\" PRIMARY KEY, btree(obat_id)\n> \n> When I try to UPDATE one of the row, nothing happens for a very long time.\n> First, I run it on PgAdminIII, I can see the miliseconds are growing \n> as I waited. 
Then I stop the query, because the time needed for it is \n> unbelievably wrong.\n> \n> Then I try to run the query from the psql shell. For example, the \n> table has obat_id : A, B, C, D.\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='A'; (.... nothing \n> happens.. I press the Ctrl-C to stop it. This is what comes out\n> :)\n> Cancel request sent\n> ERROR: canceling query due to user request\n> \n> (If I try another obat_id)\n> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='B'; (Less than a \n> second, this is what comes out :) UPDATE 1\n> \n> I can't do anything to that row. I can't DELETE it. Can't DROP the table. \n> I want this data out of my database.\n> What should I do? It's like there's a falsely pointed index here.\n> Any help would be very much appreciated.\n> \n\n1) lets hope you do regulary backups - and actually tested restore.\n1a) if not, do it right now\n2) reindex the table\n3) try again to modify\n\nQ: are there any foreign keys involved? If so, reindex those tables too,\njust in case.\n\ndid you vacuum regulary?\n\nHTH\nTino\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 21 Dec 2005 12:10:33 +0100", "msg_from": "\"Alban Medici \\(NetCentrex\\)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] need help" } ]
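Alban's pointer at pg_locks can be made concrete; the sketch below is written for 8.0 (the join column is procpid on that release) and current_query is only populated when stats_command_string is enabled:

  SELECT l.pid, l.mode, l.granted,
         a.usename, a.current_query, a.query_start
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE l.relation = 's_apotik'::regclass;

Any row with granted = false is a backend stuck waiting on the table; the pid holding a granted lock on the same relation (or on the row, via its transaction) is the one to chase down.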
[ { "msg_contents": "Thanks Bruno,\n\nIssuing VACUUM FULL seems not to have influence on the time.\nI've added to my script VACUUM ANALYZE every 100 UPDATE's and run the\ntest again (on different record) and the time still increase.\n\nAny other ideas?\n\nThanks,\nAssaf. \n\n> -----Original Message-----\n> From: Bruno Wolff III [mailto:[email protected]] \n> Sent: Monday, December 05, 2005 10:36 PM\n> To: Assaf Yaari\n> Cc: [email protected]\n> Subject: Re: Performance degradation after successive UPDATE's\n> \n> On Mon, Dec 05, 2005 at 19:05:01 +0200,\n> Assaf Yaari <[email protected]> wrote:\n> > Hi,\n> > \n> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n> > \n> > My application updates counters in DB. I left a test over the night \n> > that increased counter of specific record. After night running \n> > (several hundreds of thousands updates), I found out that the time \n> > spent on UPDATE increased to be more than 1.5 second (at \n> the beginning \n> > it was less than 10ms)! Issuing VACUUM ANALYZE and even \n> reboot didn't \n> > seemed to solve the problem.\n> \n> You need to be running vacuum more often to get rid of the \n> deleted rows (update is essentially insert + delete). Once \n> you get too many, plain vacuum won't be able to clean them up \n> without raising the value you use for FSM. By now the table \n> is really bloated and you probably want to use vacuum full on it.\n> \n", "msg_date": "Tue, 6 Dec 2005 11:08:07 +0200", "msg_from": "\"Assaf Yaari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance degradation after successive UPDATE's" }, { "msg_contents": "On 12/6/2005 4:08 AM, Assaf Yaari wrote:\n> Thanks Bruno,\n> \n> Issuing VACUUM FULL seems not to have influence on the time.\n> I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the\n> test again (on different record) and the time still increase.\n\nI think he meant\n\n - run VACUUM FULL once,\n - adjust FSM settings to database size and turnover ratio\n - run VACUUM ANALYZE more frequent from there on.\n\n\nJan\n\n> \n> Any other ideas?\n> \n> Thanks,\n> Assaf. \n> \n>> -----Original Message-----\n>> From: Bruno Wolff III [mailto:[email protected]] \n>> Sent: Monday, December 05, 2005 10:36 PM\n>> To: Assaf Yaari\n>> Cc: [email protected]\n>> Subject: Re: Performance degradation after successive UPDATE's\n>> \n>> On Mon, Dec 05, 2005 at 19:05:01 +0200,\n>> Assaf Yaari <[email protected]> wrote:\n>> > Hi,\n>> > \n>> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n>> > \n>> > My application updates counters in DB. I left a test over the night \n>> > that increased counter of specific record. After night running \n>> > (several hundreds of thousands updates), I found out that the time \n>> > spent on UPDATE increased to be more than 1.5 second (at \n>> the beginning \n>> > it was less than 10ms)! Issuing VACUUM ANALYZE and even \n>> reboot didn't \n>> > seemed to solve the problem.\n>> \n>> You need to be running vacuum more often to get rid of the \n>> deleted rows (update is essentially insert + delete). Once \n>> you get too many, plain vacuum won't be able to clean them up \n>> without raising the value you use for FSM. 
By now the table \n>> is really bloated and you probably want to use vacuum full on it.\n>> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Tue, 06 Dec 2005 07:34:39 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degradation after successive UPDATE's" }, { "msg_contents": "On Tue, Dec 06, 2005 at 11:08:07 +0200,\n Assaf Yaari <[email protected]> wrote:\n> Thanks Bruno,\n> \n> Issuing VACUUM FULL seems not to have influence on the time.\nThat was just to get the table size back down to something reasonable.\n\n> I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the\n> test again (on different record) and the time still increase.\n\nVacuuming every 100 updates should put an upperbound on how slow things\nget. I doubt you need to analyze every 100 updates, but that doesn't\ncost much more on top of a vacuum. However, if there is another transaction\nopen while you are doing the updates, that would prevent clearing out\nthe deleted rows, since they are potentially visible to it. This is something\nyou want to rule out.\n\n> Any other ideas?\n\nDo you have any triggers on this table? Are you updating any other tables\nat the same time? In particular ones that are referred to by the problem table.\n", "msg_date": "Tue, 6 Dec 2005 14:44:33 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degradation after successive UPDATE's" } ]
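Bruno's question about a concurrently open transaction is worth checking directly, because a single forgotten idle-in-transaction session keeps every dead row version visible and quietly defeats plain VACUUM. A rough way to check on 8.0, assuming stats_command_string is on, plus the FSM knobs Jan refers to:

  -- sessions holding a transaction open
  SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction';

  -- free space map sizing
  SHOW max_fsm_pages;
  SHOW max_fsm_relations;

A database-wide VACUUM VERBOSE also prints a summary of free space map usage at the end, which shows whether those two settings are large enough for the table's turnover.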
[ { "msg_contents": "Hi,\n\nI setup a database server using the following configuration.\n\nRedhat 9.0\nPostgresql 8.0.3\n\nThen, I setup a client workstation to access this database server with\nthe following configuration.\n\nRedhat 9.0\nunixODBC 2.2.11\npsqlodbc-08.01.0101\n\nand write a C++ program to run database query.\n\nIn this program, it will access this database server using simple and\ncomplex (joining tables) SQL Select statement and retrieve the matched\nrows. For each access, it will connect the database and disconnect it.\n\nI found that the memory of the databaser server nearly used up (total 2G RAM).\n\nAfter I stop the program, the used memory did not free.\n\nIs there any configuration in postgresql.conf I should set? Currently,\nI just set the following in postgresql.conf\n\n listen_addresses = '*'\n max_stack_depth = 8100 (when I run \"ulimit -s\" the max. value that\nkernel supports = 8192)\n stats_row_level = true\n\nAnd, I run pg_autovacuum as background job.\n\n--\nKathy Lo\n", "msg_date": "Tue, 6 Dec 2005 17:22:30 +0800", "msg_from": "Kathy Lo <[email protected]>", "msg_from_op": true, "msg_subject": "Memory Leakage Problem" }, { "msg_contents": "Kathy Lo <[email protected]> writes:\n> I found that the memory of the databaser server nearly used up (total 2G RAM).\n> After I stop the program, the used memory did not free.\n\nI see no particular reason to believe that you are describing an actual\nmemory leak. More likely, you are just seeing the kernel's normal\nbehavior of eating up unused memory for disk cache space.\n\nRepeat after me: zero free memory is the normal and desirable condition\non Unix-like systems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 09:45:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": "On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:\n> Hi,\n\n> \n> In this program, it will access this database server using simple and\n> complex (joining tables) SQL Select statement and retrieve the matched\n> rows. For each access, it will connect the database and disconnect it.\n> \n> I found that the memory of the databaser server nearly used up (total 2G RAM).\n> \n> After I stop the program, the used memory did not free.\n\nUmmmm. What exactly do you mean? Can we see the output of top and / or\nfree? I'm guessing that what Tom said is right, you're just seeing a\nnormal state of how unix does things.\n\nIf your output of free looks like this:\n\n-bash-2.05b$ free\n total used free shared buffers cached\nMem:6096912 6069588 27324 0 260728 5547264\n-/+ buffers/cache: 261596 5835316\nSwap: 4192880 16320 4176560\n\nThen that's normal.\n\nThat's the output of free on a machine with 6 gigs that runs a reporting\ndatabase. Note that while it shows almost ALL the memory as used, it is\nbeing used by the kernel, which is a good thing. Note that 5547264 or\nabout 90% of memory is being used as kernel cache. That's a good thing.\n\nNote you can also get yourself in trouble with top. It's not uncommon\nfor someone to see a bunch of postgres processes each eating up 50 or\nmore megs of ram, and panic and think that they're running out of\nmemory, when, in fact, 44 meg for each of those processes is shared, and\nthe real usage per backend is 6 megs or less.\n\nDefinitely grab yourself a good unix / linux sysadmin guide. The \"in a\nnutshell\" books from O'Reilley (sp?) 
are a good starting point.\n", "msg_date": "Tue, 06 Dec 2005 11:48:08 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "Please keep replies on list, this may help others in the future, and\nalso, don't top post (i.e. put your responses after my responses...\nThanks)\n\nOn Tue, 2005-12-06 at 20:16, Kathy Lo wrote:\n> For a back-end database server running Postgresql 8.0.3, it's OK. But,\n> this problem seriously affects the performance of my application\n> server.\n> \n> I upgraded my application server from\n> \n> Redhat 7.3\n> unixODBC 2.2.4\n> Postgresql 7.2.1 with ODBC driver\n> \n> to\n> \n> Redhat 9.0\n> unixODBC 2.2.11\n> Postgresql 8.0.3\n> psqlodbc-08.01.0101\n> pg_autovacuum runs as background job\n> \n> Before upgrading, the application server runs perfectly. After\n> upgrade, this problem appears.\n> \n> When the application server receives the request from a client, it\n> will access the back-end database server using both simple and complex\n> query. Then, it will create a database locally to store the matched\n> rows for data processing. After some data processing, it will return\n> the result to the requested client. If the client finishes browsing\n> the result, it will drop the local database.\n\nOK, there could be a lot of problems here. Are you actually doing\n\"create database ...\" for each of these things? I'm not sure that's a\nreal good idea. Even create schema, which would be better, strikes me\nas not the best way to handle this.\n\n> At the same time, this application server can serve many many clients\n> so the application server has many many local databases at the same\n> time.\n\nAre you sure that you're better off with databases on your application\nserver? You might be better off with either running these temp dbs on\nthe backend server in the same cluster, or creating a cluster just for\nthese jobs that is somewhat more conservative in its memory usage. I\nwould lean towards doing this all on the backend server in one database\nusing multiple schemas.\n\n> After running the application server for a few days, the memory of the\n> application server nearly used up and start to use the swap memory\n> and, as a result, the application server runs very very slow and the\n> users complain.\n\nCould you provide us with your evidence that the memory is \"used up?\" \nWhat is the problem, and what you perceive as the problem, may not be\nthe same thing. Is it the output of top / free, and if so, could we see\nit, or whatever output is convincing you you're running out of memory?\n\n> I tested the application server without accessing the local database\n> (not store matched rows). The testing program running in the\n> application server just retrieved rows from the back-end database\n> server and then returned to the requested client directly. The memory\n> usage of the application server becomes normally and it can run for a\n> long time.\n\nAgain, what you think is normal, and what normal really are may not be\nthe same thing. Evidence. 
Please show us the output of top / free or\nwhatever that is showing this.\n\n> I found this problem after I upgrading the application server.\n> \n> On 12/7/05, Scott Marlowe <[email protected]> wrote:\n> > On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:\n> > > Hi,\n> >\n> > >\n> > > In this program, it will access this database server using simple and\n> > > complex (joining tables) SQL Select statement and retrieve the matched\n> > > rows. For each access, it will connect the database and disconnect it.\n> > >\n> > > I found that the memory of the databaser server nearly used up (total 2G\n> > RAM).\n> > >\n> > > After I stop the program, the used memory did not free.\n> >\n> > Ummmm. What exactly do you mean? Can we see the output of top and / or\n> > free? I'm guessing that what Tom said is right, you're just seeing a\n> > normal state of how unix does things.\n> >\n> > If your output of free looks like this:\n> >\n> > -bash-2.05b$ free\n> > total used free shared buffers cached\n> > Mem:6096912 6069588 27324 0 260728 5547264\n> > -/+ buffers/cache: 261596 5835316\n> > Swap: 4192880 16320 4176560\n> >\n> > Then that's normal.\n> >\n> > That's the output of free on a machine with 6 gigs that runs a reporting\n> > database. Note that while it shows almost ALL the memory as used, it is\n> > being used by the kernel, which is a good thing. Note that 5547264 or\n> > about 90% of memory is being used as kernel cache. That's a good thing.\n> >\n> > Note you can also get yourself in trouble with top. It's not uncommon\n> > for someone to see a bunch of postgres processes each eating up 50 or\n> > more megs of ram, and panic and think that they're running out of\n> > memory, when, in fact, 44 meg for each of those processes is shared, and\n> > the real usage per backend is 6 megs or less.\n> >\n> > Definitely grab yourself a good unix / linux sysadmin guide. The \"in a\n> > nutshell\" books from O'Reilley (sp?) are a good starting point.\n> >\n> \n> \n> --\n> Kathy Lo\n", "msg_date": "Wed, 07 Dec 2005 15:33:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "On 12/8/05, Scott Marlowe <[email protected]> wrote:\n> Please keep replies on list, this may help others in the future, and\n> also, don't top post (i.e. put your responses after my responses...\n> Thanks)\n>\n> On Tue, 2005-12-06 at 20:16, Kathy Lo wrote:\n> > For a back-end database server running Postgresql 8.0.3, it's OK. But,\n> > this problem seriously affects the performance of my application\n> > server.\n> >\n> > I upgraded my application server from\n> >\n> > Redhat 7.3\n> > unixODBC 2.2.4\n> > Postgresql 7.2.1 with ODBC driver\n> >\n> > to\n> >\n> > Redhat 9.0\n> > unixODBC 2.2.11\n> > Postgresql 8.0.3\n> > psqlodbc-08.01.0101\n> > pg_autovacuum runs as background job\n> >\n> > Before upgrading, the application server runs perfectly. After\n> > upgrade, this problem appears.\n> >\n> > When the application server receives the request from a client, it\n> > will access the back-end database server using both simple and complex\n> > query. Then, it will create a database locally to store the matched\n> > rows for data processing. After some data processing, it will return\n> > the result to the requested client. If the client finishes browsing\n> > the result, it will drop the local database.\n>\n> OK, there could be a lot of problems here. Are you actually doing\n> \"create database ...\" for each of these things? 
I'm not sure that's a\n> real good idea. Even create schema, which would be better, strikes me\n> as not the best way to handle this.\n>\nActually, my program is written using C++ so I use \"create database\"\nSQL to create database. If not the best way, please tell me another\nmethod to create database in C++ program.\n\n> > At the same time, this application server can serve many many clients\n> > so the application server has many many local databases at the same\n> > time.\n>\n> Are you sure that you're better off with databases on your application\n> server? You might be better off with either running these temp dbs on\n> the backend server in the same cluster, or creating a cluster just for\n> these jobs that is somewhat more conservative in its memory usage. I\n> would lean towards doing this all on the backend server in one database\n> using multiple schemas.\n>\nBecause the data are distributed in many back-end database servers\n(physically, in different hardware machines), I need to use\nApplication server to temporarily store the data retrieved from\ndifferent machines and then do the data processing. And, for security\nreason, all the users cannot directly access the back-end database\nservers. So, I use the database in application server to keep the\nresult of data processing.\n\n> > After running the application server for a few days, the memory of the\n> > application server nearly used up and start to use the swap memory\n> > and, as a result, the application server runs very very slow and the\n> > users complain.\n>\n> Could you provide us with your evidence that the memory is \"used up?\"\n> What is the problem, and what you perceive as the problem, may not be\n> the same thing. Is it the output of top / free, and if so, could we see\n> it, or whatever output is convincing you you're running out of memory?\n>\nWhen the user complains the system becomes very slow, I use top to\nview the memory statistics.\nIn top, I cannot find any processes that use so many memory. I just\nfound that all the memory was used up and the Swap memory nearly used\nup.\n\nI said it is the problem because, before upgrading the application\nserver, no memory problem even running the application server for 1\nmonth. After upgrading the application server, this problem appears\njust after running the application server for 1 week. Why having this\nBIG difference between postgresql 7.2.1 on Redhat 7.3 and postgresql\n8.0.3 on Redhat 9.0? I only upgrade the OS, postgresql, unixODBC and\npostgresql ODBC driver. The program I written IS THE SAME.\n\n> > I tested the application server without accessing the local database\n> > (not store matched rows). The testing program running in the\n> > application server just retrieved rows from the back-end database\n> > server and then returned to the requested client directly. The memory\n> > usage of the application server becomes normally and it can run for a\n> > long time.\n>\n> Again, what you think is normal, and what normal really are may not be\n> the same thing. Evidence. Please show us the output of top / free or\n> whatever that is showing this.\n>\nAfter I received the user's complain, I just use top to view the\nmemory statistic. I forgot to save the output. But, I am running a\ntest to get back the problem. 
So, after running the test, I will give\nyou the output of the top/free.\n\n> > I found this problem after I upgrading the application server.\n> >\n> > On 12/7/05, Scott Marlowe <[email protected]> wrote:\n> > > On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:\n> > > > Hi,\n> > >\n> > > >\n> > > > In this program, it will access this database server using simple and\n> > > > complex (joining tables) SQL Select statement and retrieve the matched\n> > > > rows. For each access, it will connect the database and disconnect it.\n> > > >\n> > > > I found that the memory of the databaser server nearly used up (total\n> 2G\n> > > RAM).\n> > > >\n> > > > After I stop the program, the used memory did not free.\n> > >\n> > > Ummmm. What exactly do you mean? Can we see the output of top and / or\n> > > free? I'm guessing that what Tom said is right, you're just seeing a\n> > > normal state of how unix does things.\n> > >\n> > > If your output of free looks like this:\n> > >\n> > > -bash-2.05b$ free\n> > > total used free shared buffers cached\n> > > Mem:6096912 6069588 27324 0 260728 5547264\n> > > -/+ buffers/cache: 261596 5835316\n> > > Swap: 4192880 16320 4176560\n> > >\n> > > Then that's normal.\n> > >\n> > > That's the output of free on a machine with 6 gigs that runs a reporting\n> > > database. Note that while it shows almost ALL the memory as used, it is\n> > > being used by the kernel, which is a good thing. Note that 5547264 or\n> > > about 90% of memory is being used as kernel cache. That's a good thing.\n> > >\n> > > Note you can also get yourself in trouble with top. It's not uncommon\n> > > for someone to see a bunch of postgres processes each eating up 50 or\n> > > more megs of ram, and panic and think that they're running out of\n> > > memory, when, in fact, 44 meg for each of those processes is shared, and\n> > > the real usage per backend is 6 megs or less.\n> > >\n> > > Definitely grab yourself a good unix / linux sysadmin guide. The \"in a\n> > > nutshell\" books from O'Reilley (sp?) are a good starting point.\n> > >\n> >\n> >\n> > --\n> > Kathy Lo\n>\n\n\n--\nKathy Lo\n", "msg_date": "Thu, 8 Dec 2005 10:25:07 +0800", "msg_from": "Kathy Lo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "On 12/8/05, Kathy Lo <[email protected]> wrote:\n[snip]\n\n> When the user complains the system becomes very slow, I use top to\n> view the memory statistics.\n> In top, I cannot find any processes that use so many memory. I just\n> found that all the memory was used up and the Swap memory nearly used\n> up.\n\nNot to add fuel to the fire, but I'm seeing something similar to this\non my 4xOpteron with 32GB of RAM running Pg 8.1RC1 on Linux (kernel\n2.6.12). I don't see this happening on a similar box with 16GB of RAM\nrunning Pg 8.0.3. This is a lightly used box (until it goes into\nproduction), so it's not \"out of memory\", but the memory usage is\nclimbing without any obvious culprit. 
To cut to the chase, here are\nsome numbers for everyone to digest:\n\n total gnu ps resident size\n# ps ax -o rss|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n5810492\n\n total gnu ps virual size\n# ps ax -o vsz|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n10585400\n\n total gnu ps \"if all pages were dirtied and swapped\" size\n# ps ax -o size|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n1970952\n\n ipcs -m\n# ipcs -m\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x0052e2c1 1802240 postgres 600 176054272 26\n\n(that's the entire ipcs -m output)\n\n and the odd man out, free\n# free\n total used free shared buffers cached\nMem: 32752268 22498448 10253820 0 329776 8289360\n-/+ buffers/cache: 13879312 18872956\nSwap: 31248712 136 31248576\n\nI guess dstat is getting it's info from the same source as free, because:\n\n# dstat -m 1\n------memory-usage-----\n_used _buff _cach _free\n 13G 322M 8095M 9.8G\n\nNow, I'm not blaming Pg for the apparent discrepancy in calculated vs.\nreported-by-free memory usage, but I only noticed this after upgrading\nto 8.1. I'll collect any more info that anyone would like to see,\njust let me know.\n\nIf anyone has any ideas on what is actually happening here I'd love to\nhear them!\n\n--\nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Thu, 8 Dec 2005 03:46:02 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "Mike Rylander <[email protected]> writes:\n> To cut to the chase, here are\n> some numbers for everyone to digest:\n> total gnu ps resident size\n> # ps ax -o rss|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> 5810492\n> total gnu ps virual size\n> # ps ax -o vsz|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> 10585400\n> total gnu ps \"if all pages were dirtied and swapped\" size\n> # ps ax -o size|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> 1970952\n\nI wouldn't put any faith in those numbers at all, because you'll be\ncounting the PG shared memory multiple times.\n\nOn the Linux versions I've used lately, ps and top report a process'\nmemory size as including all its private memory, plus all the pages\nof shared memory that it has touched since it started. 
So if you run\nsay a seqscan over a large table in a freshly-started backend, the\nreported memory usage will ramp up from a couple meg to the size of\nyour shared_buffer arena plus a couple meg --- but in reality the\nspace used by the process is staying constant at a couple meg.\n\nNow, multiply that effect by N backends doing this at once, and you'll\nhave a very skewed view of what's happening in your system.\n\nI'd trust the totals reported by free and dstat a lot more than summing\nper-process numbers from ps or top.\n\n> Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.\n> reported-by-free memory usage, but I only noticed this after upgrading\n> to 8.1.\n\nI don't know of any reason to think that 8.1 would act differently from\nolder PG versions in this respect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Dec 2005 23:38:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": "On 12/8/05, Tom Lane <[email protected]> wrote:\n> Mike Rylander <[email protected]> writes:\n> > To cut to the chase, here are\n> > some numbers for everyone to digest:\n> > total gnu ps resident size\n> > # ps ax -o rss|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> > 5810492\n> > total gnu ps virual size\n> > # ps ax -o vsz|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> > 10585400\n> > total gnu ps \"if all pages were dirtied and swapped\" size\n> > # ps ax -o size|perl -e '$x += $_ for (<>);print \"$x\\n\";'\n> > 1970952\n>\n> I wouldn't put any faith in those numbers at all, because you'll be\n> counting the PG shared memory multiple times.\n>\n> On the Linux versions I've used lately, ps and top report a process'\n> memory size as including all its private memory, plus all the pages\n> of shared memory that it has touched since it started. So if you run\n> say a seqscan over a large table in a freshly-started backend, the\n> reported memory usage will ramp up from a couple meg to the size of\n> your shared_buffer arena plus a couple meg --- but in reality the\n> space used by the process is staying constant at a couple meg.\n\nRight, I can definitely see that happening. Some backends are upwards\nof 200M, some are just a few since they haven't been touched yet.\n\n>\n> Now, multiply that effect by N backends doing this at once, and you'll\n> have a very skewed view of what's happening in your system.\n\nAbsolutely ...\n>\n> I'd trust the totals reported by free and dstat a lot more than summing\n> per-process numbers from ps or top.\n>\n\nAnd there's the part that's confusing me: the numbers for used memory\nproduced by free and dstat, after subtracting the buffers/cache\namounts, are /larger/ than those that ps and top report. (top says the\nsame thing as ps, on the whole.)\n\n\n> > Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.\n> > reported-by-free memory usage, but I only noticed this after upgrading\n> > to 8.1.\n>\n> I don't know of any reason to think that 8.1 would act differently from\n> older PG versions in this respect.\n>\n\nNeither can I, which is why I don't blame it. 
;) I'm just reporting\nwhen/where I noticed the issue.\n\n> regards, tom lane\n>\n\n\n--\nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Thu, 8 Dec 2005 14:00:06 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "Mike Rylander wrote:\n\n>Right, I can definitely see that happening. Some backends are upwards\n>of 200M, some are just a few since they haven't been touched yet.\n>\n>\n>>Now, multiply that effect by N backends doing this at once, and you'll\n>>have a very skewed view of what's happening in your system.\n>>\n>\n>Absolutely ...\n>\n>>I'd trust the totals reported by free and dstat a lot more than summing\n>>per-process numbers from ps or top.\n>>\n>\n>And there's the part that's confusing me: the numbers for used memory\n>produced by free and dstat, after subtracting the buffers/cache\n>amounts, are /larger/ than those that ps and top report. (top says the\n>same thing as ps, on the whole.)\n>\n\nI'm seeing the same thing on one of our 8.1 servers. Summing RSS from \n`ps` or RES from `top` accounts for about 1 GB, but `free` says:\n\n total used free shared buffers cached\nMem: 4060968 3870328 190640 0 14788 432048\n-/+ buffers/cache: 3423492 637476\nSwap: 2097144 175680 1921464\n\nThat's 3.4 GB/170 MB in RAM/swap, up from 2.7 GB/0 last Thursday, 2.2 \nGB/0 last Monday, or 1.9 GB after a reboot ten days ago. Stopping \nPostgres brings down the number, but not all the way -- it drops to \nabout 2.7 GB, even though the next most memory-intensive process is \n`ntpd` at 5 MB. (Before Postgres starts, there's less than 30 MB of \nstuff running.) The only way I've found to get this box back to normal \nis to reboot it.\n\n>>>Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.\n>>>reported-by-free memory usage, but I only noticed this after upgrading\n>>>to 8.1.\n>>>\n>>I don't know of any reason to think that 8.1 would act differently from\n>>older PG versions in this respect.\n>>\n>\n>Neither can I, which is why I don't blame it. ;) I'm just reporting\n>when/where I noticed the issue.\n>\nI can't offer any explanation for why this server is starting to swap -- \nwhere'd the memory go? -- but I know it started after upgrading to \nPostgreSQL 8.1. I'm not saying it's something in the PostgreSQL code, \nbut this server definitely didn't do this in the months under 7.4.\n\nMike: is your system AMD64, by any chance? The above system is, as is \nanother similar story I heard.\n\n--Will Glynn\nFreedom Healthcare\n", "msg_date": "Mon, 12 Dec 2005 15:19:21 -0500", "msg_from": "Will Glynn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "On 12/12/05, Will Glynn <[email protected]> wrote:\n> Mike Rylander wrote:\n>\n> >Right, I can definitely see that happening. 
Some backends are upwards\n> >of 200M, some are just a few since they haven't been touched yet.\n> >\n> >\n> >>Now, multiply that effect by N backends doing this at once, and you'll\n> >>have a very skewed view of what's happening in your system.\n> >>\n> >\n> >Absolutely ...\n> >\n> >>I'd trust the totals reported by free and dstat a lot more than summing\n> >>per-process numbers from ps or top.\n> >>\n> >\n> >And there's the part that's confusing me: the numbers for used memory\n> >produced by free and dstat, after subtracting the buffers/cache\n> >amounts, are /larger/ than those that ps and top report. (top says the\n> >same thing as ps, on the whole.)\n> >\n>\n> I'm seeing the same thing on one of our 8.1 servers. Summing RSS from\n> `ps` or RES from `top` accounts for about 1 GB, but `free` says:\n>\n> total used free shared buffers cached\n> Mem: 4060968 3870328 190640 0 14788 432048\n> -/+ buffers/cache: 3423492 637476\n> Swap: 2097144 175680 1921464\n>\n> That's 3.4 GB/170 MB in RAM/swap, up from 2.7 GB/0 last Thursday, 2.2\n> GB/0 last Monday, or 1.9 GB after a reboot ten days ago. Stopping\n> Postgres brings down the number, but not all the way -- it drops to\n> about 2.7 GB, even though the next most memory-intensive process is\n> `ntpd` at 5 MB. (Before Postgres starts, there's less than 30 MB of\n> stuff running.) The only way I've found to get this box back to normal\n> is to reboot it.\n>\n> >>>Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.\n> >>>reported-by-free memory usage, but I only noticed this after upgrading\n> >>>to 8.1.\n> >>>\n> >>I don't know of any reason to think that 8.1 would act differently from\n> >>older PG versions in this respect.\n> >>\n> >\n> >Neither can I, which is why I don't blame it. ;) I'm just reporting\n> >when/where I noticed the issue.\n> >\n> I can't offer any explanation for why this server is starting to swap --\n> where'd the memory go? -- but I know it started after upgrading to\n> PostgreSQL 8.1. I'm not saying it's something in the PostgreSQL code,\n> but this server definitely didn't do this in the months under 7.4.\n>\n> Mike: is your system AMD64, by any chance? The above system is, as is\n> another similar story I heard.\n>\n\nIt sure is. Gentoo with kernel version 2.6.12, built for x86_64. \nLooks like we have a contender for the common factor. :)\n\n> --Will Glynn\n> Freedom Healthcare\n>\n\n\n--\nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Mon, 12 Dec 2005 20:48:48 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "Mike Rylander <[email protected]> writes:\n> On 12/12/05, Will Glynn <[email protected]> wrote:\n>> Mike: is your system AMD64, by any chance? The above system is, as is\n>> another similar story I heard.\n\n> It sure is. Gentoo with kernel version 2.6.12, built for x86_64. \n> Looks like we have a contender for the common factor. :)\n\nPlease tell me you're *not* running a production database on Gentoo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Dec 2005 23:17:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": ">\n>> It sure is. Gentoo with kernel version 2.6.12, built for x86_64. \n>> Looks like we have a contender for the common factor. 
:)\n>> \n>\n> Please tell me you're *not* running a production database on Gentoo.\n>\n> \n> \t\t\tregards, tom lane\n> \nYou don't even want to know how many companies I know that are doing \nthis very thing and no, it was not my suggestion.\n\nJoshua D. Drake\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Mon, 12 Dec 2005 20:31:52 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "We're seeing memory problems on one of our postgres databases. We're \nusing 7.4.6, and I suspect the kernel version is a key factor with this \nproblem.\n\nOne running under Redhat Linux 2.4.18-14smp #1 SMP and the other Debian \nLinux 2.6.8.1-4-686-smp #1 SMP\n\nThe second Debian server is a replicated slave using Slony.\n\nWe NEVER see any problems on the \"older\" Redhat (our master) DB, whereas \nthe Debian slave database requires slony and postgres to be stopped \nevery 2-3 weeks.\n\nThis server just consumes more and more memory until it goes swap crazy \nand the load averages start jumping through the roof.\n\nStopping the two services restores the server to some sort of normality \n- the load averages drop dramatically and remain low. But the memory is \nonly fully recovered by a server reboot.\n\nOver time memory gets used up, until you get to the point where those \nservices require another stop and start.\n\nJust my 2 cents...\n\nJohn\n\nWill Glynn wrote:\n> Mike Rylander wrote:\n> \n>> Right, I can definitely see that happening. Some backends are upwards\n>> of 200M, some are just a few since they haven't been touched yet.\n>>\n>>\n>>> Now, multiply that effect by N backends doing this at once, and you'll\n>>> have a very skewed view of what's happening in your system.\n>>>\n>>\n>> Absolutely ...\n>>\n>>> I'd trust the totals reported by free and dstat a lot more than summing\n>>> per-process numbers from ps or top.\n>>>\n>>\n>> And there's the part that's confusing me: the numbers for used memory\n>> produced by free and dstat, after subtracting the buffers/cache\n>> amounts, are /larger/ than those that ps and top report. (top says the\n>> same thing as ps, on the whole.)\n>>\n> \n> I'm seeing the same thing on one of our 8.1 servers. Summing RSS from \n> `ps` or RES from `top` accounts for about 1 GB, but `free` says:\n> \n> total used free shared buffers cached\n> Mem: 4060968 3870328 190640 0 14788 432048\n> -/+ buffers/cache: 3423492 637476\n> Swap: 2097144 175680 1921464\n> \n> That's 3.4 GB/170 MB in RAM/swap, up from 2.7 GB/0 last Thursday, 2.2 \n> GB/0 last Monday, or 1.9 GB after a reboot ten days ago. Stopping \n> Postgres brings down the number, but not all the way -- it drops to \n> about 2.7 GB, even though the next most memory-intensive process is \n> `ntpd` at 5 MB. (Before Postgres starts, there's less than 30 MB of \n> stuff running.) The only way I've found to get this box back to normal \n> is to reboot it.\n> \n>>>> Now, I'm not blaming Pg for the apparent discrepancy in calculated vs.\n>>>> reported-by-free memory usage, but I only noticed this after upgrading\n>>>> to 8.1.\n>>>>\n>>> I don't know of any reason to think that 8.1 would act differently from\n>>> older PG versions in this respect.\n>>>\n>>\n>> Neither can I, which is why I don't blame it. 
;) I'm just reporting\n>> when/where I noticed the issue.\n>>\n> I can't offer any explanation for why this server is starting to swap -- \n> where'd the memory go? -- but I know it started after upgrading to \n> PostgreSQL 8.1. I'm not saying it's something in the PostgreSQL code, \n> but this server definitely didn't do this in the months under 7.4.\n> \n> Mike: is your system AMD64, by any chance? The above system is, as is \n> another similar story I heard.\n> \n> --Will Glynn\n> Freedom Healthcare\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Tue, 13 Dec 2005 07:42:50 +0000", "msg_from": "John Sidney-Woollett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "John Sidney-Woollett <[email protected]> writes:\n> This server just consumes more and more memory until it goes swap crazy \n> and the load averages start jumping through the roof.\n\n*What* is consuming memory, exactly --- which processes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Dec 2005 02:47:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": "Sorry but I don't know how to determine that.\n\nWe stopped and started postgres yesterday so the server is behaving well \nat the moment.\n\ntop shows\n\ntop - 07:51:48 up 34 days, 6 min, 1 user, load average: 0.00, 0.02, 0.00\nTasks: 85 total, 1 running, 84 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.6% us, 0.2% sy, 0.0% ni, 99.1% id, 0.2% wa, 0.0% hi, 0.0% si\nMem: 1035612k total, 1030380k used, 5232k free, 48256k buffers\nSwap: 497972k total, 122388k used, 375584k free, 32716k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n27852 postgres 16 0 17020 11m 14m S 1.0 1.2 18:00.34 postmaster\n27821 postgres 15 0 16236 6120 14m S 0.3 0.6 1:30.68 postmaster\n 4367 root 16 0 2040 1036 1820 R 0.3 0.1 0:00.05 top\n 1 root 16 0 1492 148 1340 S 0.0 0.0 0:04.75 init\n 2 root RT 0 0 0 0 S 0.0 0.0 0:02.00 migration/0\n 3 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0\n 4 root RT 0 0 0 0 S 0.0 0.0 0:04.78 migration/1\n 5 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/1\n 6 root RT 0 0 0 0 S 0.0 0.0 0:04.58 migration/2\n 7 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2\n 8 root RT 0 0 0 0 S 0.0 0.0 0:21.28 migration/3\n 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3\n 10 root 5 -10 0 0 0 S 0.0 0.0 0:00.14 events/0\n 11 root 5 -10 0 0 0 S 0.0 0.0 0:00.04 events/1\n 12 root 5 -10 0 0 0 S 0.0 0.0 0:00.01 events/2\n 13 root 5 -10 0 0 0 S 0.0 0.0 0:00.00 events/3\n 14 root 8 -10 0 0 0 S 0.0 0.0 0:00.00 khelper\n\n\nThis server only has postgres and slon running on it. 
There is also \npostfix but it is only used to relay emails from the root account to \nanother server - it isn't really doing anything (I hope).\n\nps shows\n\nUID PID PPID C STIME TIME CMD\nroot 1 0 0 Nov09 00:00:04 init [2]\nroot 2 1 0 Nov09 00:00:02 [migration/0]\nroot 3 1 0 Nov09 00:00:00 [ksoftirqd/0]\nroot 4 1 0 Nov09 00:00:04 [migration/1]\nroot 5 1 0 Nov09 00:00:00 [ksoftirqd/1]\nroot 6 1 0 Nov09 00:00:04 [migration/2]\nroot 7 1 0 Nov09 00:00:00 [ksoftirqd/2]\nroot 8 1 0 Nov09 00:00:21 [migration/3]\nroot 9 1 0 Nov09 00:00:00 [ksoftirqd/3]\nroot 10 1 0 Nov09 00:00:00 [events/0]\nroot 11 1 0 Nov09 00:00:00 [events/1]\nroot 12 1 0 Nov09 00:00:00 [events/2]\nroot 13 1 0 Nov09 00:00:00 [events/3]\nroot 14 11 0 Nov09 00:00:00 [khelper]\nroot 15 10 0 Nov09 00:00:00 [kacpid]\nroot 67 11 0 Nov09 00:17:10 [kblockd/0]\nroot 68 10 0 Nov09 00:00:52 [kblockd/1]\nroot 69 11 0 Nov09 00:00:07 [kblockd/2]\nroot 70 10 0 Nov09 00:00:09 [kblockd/3]\nroot 82 1 1 Nov09 09:08:14 [kswapd0]\nroot 83 11 0 Nov09 00:00:00 [aio/0]\nroot 84 10 0 Nov09 00:00:00 [aio/1]\nroot 85 11 0 Nov09 00:00:00 [aio/2]\nroot 86 10 0 Nov09 00:00:00 [aio/3]\nroot 222 1 0 Nov09 00:00:00 [kseriod]\nroot 245 1 0 Nov09 00:00:00 [scsi_eh_0]\nroot 278 1 0 Nov09 00:00:37 [kjournald]\nroot 359 1 0 Nov09 00:00:00 udevd\nroot 1226 1 0 Nov09 00:00:00 [kjournald]\nroot 1229 10 0 Nov09 00:00:16 [reiserfs/0]\nroot 1230 11 0 Nov09 00:00:08 [reiserfs/1]\nroot 1231 10 0 Nov09 00:00:00 [reiserfs/2]\nroot 1232 11 0 Nov09 00:00:00 [reiserfs/3]\nroot 1233 1 0 Nov09 00:00:00 [kjournald]\nroot 1234 1 0 Nov09 00:00:13 [kjournald]\nroot 1235 1 0 Nov09 00:00:24 [kjournald]\nroot 1583 1 0 Nov09 00:00:00 [pciehpd_event]\nroot 1598 1 0 Nov09 00:00:00 [shpchpd_event]\nroot 1669 1 0 Nov09 00:00:00 [khubd]\ndaemon 2461 1 0 Nov09 00:00:00 /sbin/portmap\nroot 2726 1 0 Nov09 00:00:10 /sbin/syslogd\nroot 2737 1 0 Nov09 00:00:00 /sbin/klogd\nmessage 2768 1 0 Nov09 00:00:00 /usr/bin/dbus-daemon-1 --system\nroot 2802 1 0 Nov09 00:04:38 [nfsd]\nroot 2804 1 0 Nov09 00:03:32 [nfsd]\nroot 2803 1 0 Nov09 00:04:58 [nfsd]\nroot 2806 1 0 Nov09 00:04:40 [nfsd]\nroot 2807 1 0 Nov09 00:04:41 [nfsd]\nroot 2805 1 0 Nov09 00:03:51 [nfsd]\nroot 2808 1 0 Nov09 00:04:36 [nfsd]\nroot 2809 1 0 Nov09 00:03:20 [nfsd]\nroot 2811 1 0 Nov09 00:00:00 [lockd]\nroot 2812 1 0 Nov09 00:00:00 [rpciod]\nroot 2815 1 0 Nov09 00:00:00 /usr/sbin/rpc.mountd\nroot 2933 1 0 Nov09 00:00:17 /usr/lib/postfix/master\npostfix 2938 2933 0 Nov09 00:00:11 qmgr -l -t fifo -u -c\nroot 2951 1 0 Nov09 00:00:09 /usr/sbin/sshd\nroot 2968 1 0 Nov09 00:00:00 /sbin/rpc.statd\nroot 2969 1 0 Nov09 00:01:41 /usr/sbin/xinetd -pidfile /var/r\nroot 2980 1 0 Nov09 00:00:07 /usr/sbin/ntpd -p /var/run/ntpd.\nroot 2991 1 0 Nov09 00:00:01 /sbin/mdadm -F -m root -s\ndaemon 3002 1 0 Nov09 00:00:00 /usr/sbin/atd\nroot 3013 1 0 Nov09 00:00:03 /usr/sbin/cron\nroot 3029 1 0 Nov09 00:00:00 /sbin/getty 38400 tty1\nroot 3031 1 0 Nov09 00:00:00 /sbin/getty 38400 tty2\nroot 3032 1 0 Nov09 00:00:00 /sbin/getty 38400 tty3\nroot 3033 1 0 Nov09 00:00:00 /sbin/getty 38400 tty4\nroot 3034 1 0 Nov09 00:00:00 /sbin/getty 38400 tty5\nroot 3035 1 0 Nov09 00:00:00 /sbin/getty 38400 tty6\npostgres 27806 1 0 Dec12 00:00:00 /usr/local/pgsql/bin/postmaster\npostgres 27809 27806 0 Dec12 00:00:00 postgres: stats buffer process\npostgres 27810 27809 0 Dec12 00:00:00 postgres: stats collector proces\npostgres 27821 27806 0 Dec12 00:01:30 postgres: postgres bp_live\npostgres 27842 1 0 Dec12 00:00:00 /usr/local/pgsql/bin/slon -d 1 b\npostgres 27844 27842 0 Dec12 
00:00:00 /usr/local/pgsql/bin/slon -d 1 b\npostgres 27847 27806 0 Dec12 00:00:50 postgres: postgres bp_live\npostgres 27852 27806 1 Dec12 00:18:00 postgres: postgres bp_live\npostgres 27853 27806 0 Dec12 00:00:33 postgres: postgres bp_live\npostgres 27854 27806 0 Dec12 00:00:18 postgres: postgres bp_live\nroot 32735 10 0 05:35 00:00:00 [pdflush]\npostfix 2894 2933 0 07:04 00:00:00 pickup -l -t fifo -u -c\nroot 3853 10 0 07:37 00:00:00 [pdflush]\n\n\nAll I know is that stopping postgres brings the server back to \nnormality. Stopping slon on its own is not enough.\n\nJohn\n\nTom Lane wrote:\n> John Sidney-Woollett <[email protected]> writes:\n> \n>>This server just consumes more and more memory until it goes swap crazy \n>>and the load averages start jumping through the roof.\n> \n> \n> *What* is consuming memory, exactly --- which processes?\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Dec 2005 08:05:05 +0000", "msg_from": "John Sidney-Woollett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "On Mon, Dec 12, 2005 at 08:31:52PM -0800, Joshua D. Drake wrote:\n> >\n> >>It sure is. Gentoo with kernel version 2.6.12, built for x86_64. \n> >>Looks like we have a contender for the common factor. :)\n> >> \n> >\n> >Please tell me you're *not* running a production database on Gentoo.\n> >\n> > \n> >\t\t\tregards, tom lane\n> > \n> You don't even want to know how many companies I know that are doing \n> this very thing and no, it was not my suggestion.\n\n\"Like the annoying teenager next door with a 90hp import sporting a 6\nfoot tall bolt-on wing, Gentoo users are proof that society is best\nserved by roving gangs of armed vigilantes, dishing out swift, cold\njustice with baseball bats...\"\nhttp://funroll-loops.org/\n", "msg_date": "Tue, 13 Dec 2005 02:22:34 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "John Sidney-Woollett <[email protected]> writes:\n> Tom Lane wrote:\n>> *What* is consuming memory, exactly --- which processes?\n\n> Sorry but I don't know how to determine that.\n\nTry \"ps auxw\", or some other incantation if you prefer, so long as it\nincludes some statistics about process memory use. What you showed us\nis certainly not helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Dec 2005 10:13:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": "Tom Lane said:\n> John Sidney-Woollett <[email protected]> writes:\n>> Tom Lane wrote:\n>>> *What* is consuming memory, exactly --- which processes?\n>\n>> Sorry but I don't know how to determine that.\n>\n> Try \"ps auxw\", or some other incantation if you prefer, so long as it\n> includes some statistics about process memory use. What you showed us\n> is certainly not helpful.\n\nAt the moment not one process's VSZ is over 16Mb with the exception of one\nof the slon processes which is at 66Mb.\n\nI'll run this over the next few days and especially as the server starts\nbogging down to see if it identifies the culprit.\n\nIs it possible to grab memory outsize of a processes space? 
Or would a\nleak always show up by an ever increasing VSZ amount?\n\nThanks\n\nJohn\n", "msg_date": "Tue, 13 Dec 2005 16:37:42 -0000 (GMT)", "msg_from": "\"John Sidney-Woollett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "On Tue, 2005-12-13 at 09:13, Tom Lane wrote:\n> John Sidney-Woollett <[email protected]> writes:\n> > Tom Lane wrote:\n> >> *What* is consuming memory, exactly --- which processes?\n> \n> > Sorry but I don't know how to determine that.\n> \n> Try \"ps auxw\", or some other incantation if you prefer, so long as it\n> includes some statistics about process memory use. What you showed us\n> is certainly not helpful.\n\nOr run top and hit M while it's running, and it'll sort according to\nwhat uses the most memory.\n", "msg_date": "Tue, 13 Dec 2005 11:40:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "\"John Sidney-Woollett\" <[email protected]> writes:\n> Is it possible to grab memory outsize of a processes space?\n\nNot unless there's a kernel bug involved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Dec 2005 12:58:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem " }, { "msg_contents": "On Tue, Dec 13, 2005 at 04:37:42PM -0000, John Sidney-Woollett wrote:\n> I'll run this over the next few days and especially as the server starts\n> bogging down to see if it identifies the culprit.\n> \n> Is it possible to grab memory outsize of a processes space? Or would a\n> leak always show up by an ever increasing VSZ amount?\n\nThe only way to know what a process can access is by looking in\n/proc/<pid>/maps. This lists all the memory ranges a process can\naccess. The thing about postgres is that each backend dies when the\nconnection closes, so only a handful of processes are going to be\naround long enough to cause a problem.\n\nThe ones you need to look at are the number of mappings with a\nzero-inode excluding the shared memory segment. A diff between two days\nmight tell you which segments are growing. Must be for exactly the same\nprocess to be meaningful.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Wed, 14 Dec 2005 10:21:41 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" }, { "msg_contents": "Martijn\n\nThanks for the tip.\n\nSince the connections on this server are from slon, I'm hoping that they \nhand around for a *long* time, and long enough to take a look to see \nwhat is going on.\n\nJohn\n\nMartijn van Oosterhout wrote:\n> On Tue, Dec 13, 2005 at 04:37:42PM -0000, John Sidney-Woollett wrote:\n> \n>>I'll run this over the next few days and especially as the server starts\n>>bogging down to see if it identifies the culprit.\n>>\n>>Is it possible to grab memory outsize of a processes space? Or would a\n>>leak always show up by an ever increasing VSZ amount?\n> \n> \n> The only way to know what a process can access is by looking in\n> /proc/<pid>/maps. This lists all the memory ranges a process can\n> access. 
The thing about postgres is that each backend dies when the\n> connection closes, so only a handful of processes are going to be\n> around long enough to cause a problem.\n> \n> The ones you need to look at are the number of mappings with a\n> zero-inode excluding the shared memory segment. A diff between two days\n> might tell you which segments are growing. Must be for exactly the same\n> process to be meaningful.\n> \n> Have a nice day,\n", "msg_date": "Wed, 14 Dec 2005 13:52:00 +0000", "msg_from": "John Sidney-Woollett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Leakage Problem" } ]
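
For anyone following Martijn's /proc/<pid>/maps suggestion, here is a rough sketch of the kind of snapshot that can be diffed between days. The script name and file names are only illustrative; on Linux the inode is the fifth field of each maps line, anonymous mappings carry inode 0, and the SysV shared memory segment shows up under a /SYSV... name with a non-zero inode, so it drops out of the listing by itself.

  #!/bin/sh
  # anon-maps.sh (hypothetical name): list the anonymous (zero-inode)
  # mappings of one long-lived backend so that snapshots taken on
  # different days can be compared.
  # Usage: sh anon-maps.sh <backend-pid> > maps.$(date +%F)
  pid=$1
  awk '$5 == 0 { print $1, $2 }' "/proc/$pid/maps"
  # later: diff maps.2005-12-14 maps.2005-12-15
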
[ { "msg_contents": "I need to slice up a web server's disk space to provide space for\npostgres and storing binaries such as images and sound files. I'm\nthinking of using logical volume management (LVM) to help manage the\namount of space I use between postgres and the data volumes.\n\nThe server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am\nthinking of slicing up in the following way (Linux 2.6 kernel):\n\n / : ext3 : 47GB (root, home etc)\n /boot : ext3 : 1GB\n /tmp : ext2 : 2GB\n /usr : ext3 : 4GB\n /var : ext3 : 6GB\n -----------------------\n 60GB\n\n VG : 190GB approx\n -----------------------\n Initially divided so: \n /data : ext3 : 90GB\n /postgres : xfs : 40GB\n \nThis gives me left over space of roughly 60GB to extend into on the\nvolume group, which I can balance between the /data and /postgres\nlogical volumes as needed.\n\nAre there any major pitfalls to this approach?\n\nThanks,\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n", "msg_date": "Tue, 6 Dec 2005 09:38:18 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "LVM and Postgres" }, { "msg_contents": "Rory Campbell-Lange wrote:\n> The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am\n> thinking of slicing up in the following way (Linux 2.6 kernel):\n> \n> / : ext3 : 47GB (root, home etc)\n> /boot : ext3 : 1GB\n> /tmp : ext2 : 2GB\n> /usr : ext3 : 4GB\n> /var : ext3 : 6GB\n> -----------------------\n> 60GB\n> \n> VG : 190GB approx\n> -----------------------\n> Initially divided so: \n> /data : ext3 : 90GB\n> /postgres : xfs : 40GB\n> \n> This gives me left over space of roughly 60GB to extend into on the\n> volume group, which I can balance between the /data and /postgres\n> logical volumes as needed.\n> \n> Are there any major pitfalls to this approach?\n> \n> Thanks,\n> Rory\n> \n\nIt looks like you are using fast disks and xfs for filesystem on the \n/postgresql partition. That's nice.\n\nHow many disks in the array?\n\nOne thing you miss is sticking a bunch of sequential log writes on a \nseparate spindle as far as I can see with this? WAL / XFS (i think) both \nhave this pattern. If you've got a fast disk and can do BBU write \ncaching your WAL writes will hustle.\n\nOthers can probably speak a bit better on any potential speedups.\n\n- August\n\n", "msg_date": "Tue, 06 Dec 2005 12:12:22 -0800", "msg_from": "August Zajonc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "Hi August. Thanks very much for your mail.\n\nOn 06/12/05, August Zajonc ([email protected]) wrote:\n> Rory Campbell-Lange wrote:\n> >The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am\n> >thinking of slicing up in the following way (Linux 2.6 kernel):\n> >\n> > / : ext3 : 47GB (root, home etc)\n> > /boot : ext3 : 1GB\n> > /tmp : ext2 : 2GB\n> > /usr : ext3 : 4GB\n> > /var : ext3 : 6GB\n> > -----------------------\n> > 60GB\n> >\n> > VG : 190GB approx\n> > -----------------------\n> > Initially divided so: \n> > /data : ext3 : 90GB\n> > /postgres : xfs : 40GB\n> > \n> >This gives me left over space of roughly 60GB to extend into on the\n> >volume group, which I can balance between the /data and /postgres\n> >logical volumes as needed.\n> \n> It looks like you are using fast disks and xfs for filesystem on the \n> /postgresql partition. 
That's nice.\n> \n> How many disks in the array?\n\nFour.\n\n> One thing you miss is sticking a bunch of sequential log writes on a \n> separate spindle as far as I can see with this? WAL / XFS (i think) both \n> have this pattern. If you've got a fast disk and can do BBU write \n> caching your WAL writes will hustle.\n\nYes, we don't have any spare disks unfortunately. We have enabled the\nBBU write, so we are hoping for good performance. I'd be grateful for\nsome advice on dd/bonnie++ tests for checking this.\n\n> Others can probably speak a bit better on any potential speedups.\n\nI'd better test extending the Logical Volumes too!\n\nMany thanks\nRory\n\n\n", "msg_date": "Tue, 6 Dec 2005 21:36:23 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "On Tue, Dec 06, 2005 at 09:36:23PM +0000, Rory Campbell-Lange wrote:\n>Yes, we don't have any spare disks unfortunately. We have enabled the\n>BBU write, so we are hoping for good performance.\n\nEven if you don't use seperate disks you'll probably get better\nperformance by putting the WAL on a seperate ext2 partition. xfs gives\ngood performance for the table data, but is not particularly good for\nthe WAL. \n\nMike Stone\n", "msg_date": "Tue, 06 Dec 2005 19:25:37 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "I would argue that almost certainly won't by doing that as you will\ncreate a new place even further away for the disk head to seek to\ninstead of just another file on the same FS that is probably closer to\nthe current head position.\n\nAlex\n\nOn 12/6/05, Michael Stone <[email protected]> wrote:\n> On Tue, Dec 06, 2005 at 09:36:23PM +0000, Rory Campbell-Lange wrote:\n> >Yes, we don't have any spare disks unfortunately. We have enabled the\n> >BBU write, so we are hoping for good performance.\n>\n> Even if you don't use seperate disks you'll probably get better\n> performance by putting the WAL on a seperate ext2 partition. xfs gives\n> good performance for the table data, but is not particularly good for\n> the WAL.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Tue, 6 Dec 2005 19:52:25 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:\n>I would argue that almost certainly won't by doing that as you will\n>create a new place even further away for the disk head to seek to\n>instead of just another file on the same FS that is probably closer to\n>the current head position.\n\nI would argue that you should benchmark it instead of speculating. You\nare perhaps underestimating the effect of the xfs log. (Ordinarily xfs\nhas great performance, but it seems to be fairly lousy at\nfsync/osync/etc operations in my benchmarks; my wild speculation is that\nthe sync forces a log flush.) At any rate you're going to have a lot of\nhead movement on any reasonably sized filesystem anyway, and I'm not\nconvinced that hoping that your data will happen to land close to your log is\na valid, repeatable optimization technique. 
Note that the WAL will\nwander around the disk as files are created and deleted, whereas tables\nare basically updated in place.\n\nMike Stone\n", "msg_date": "Tue, 06 Dec 2005 20:31:07 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "Michael Stone wrote:\n> Note that the WAL will\n> wander around the disk as files are created and deleted, whereas tables\n> are basically updated in place.\n\nHuh? I was rather under the impression that the WAL files (in\npg_xlog, right?) were reused once they'd been created, so their\nlocations on the disk should remain the same, as should their data\nblocks (roughly, depending on the implementation of the filesystem, of\ncourse).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Tue, 6 Dec 2005 20:14:45 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LVM and Postgres" }, { "msg_contents": "On 06/12/05, Michael Stone ([email protected]) wrote:\n> On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:\n> >I would argue that almost certainly won't by doing that as you will\n> >create a new place even further away for the disk head to seek to\n> >instead of just another file on the same FS that is probably closer to\n> >the current head position.\n> \n> I would argue that you should benchmark it instead of speculating. \n\nIs there a good way of benchmarking? We don't have much in the way of\ntest data at present.\n\nRegards,\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n", "msg_date": "Thu, 8 Dec 2005 11:28:41 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LVM and Postgres" } ]
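
Since the thread ends with Rory asking how to benchmark the new layout, here is a minimal sketch of the usual dd and bonnie++ runs. The paths and sizes are assumptions rather than figures from the thread; the test file should be comfortably larger than RAM so the OS cache does not flatter the numbers.

  # sequential write, then read back, on the xfs volume (example path /postgres)
  dd if=/dev/zero of=/postgres/ddtest bs=8k count=1000000   # about 8 GB written
  dd if=/postgres/ddtest of=/dev/null bs=8k                 # sequential read
  rm /postgres/ddtest

  # roughly the same ground with bonnie++, run as an unprivileged user
  bonnie++ -d /postgres -s 8192 -u postgres                 # -s is the file size in MB
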
[ { "msg_contents": "I run the VACUUM as you suggested, but still no response from the server. So, I\ndecided to DROP the database. I got a message that the database is being used.\nI closed every application that accessing it. But, the message remains.\n\nI checked the server processes (ps -ax). There were lots of 'UPDATE is waiting\n...' on the list. I killed them all. I backuped current database and DROP the\ndatabase, restore to the backup file I just made. \n\nDon't really know why this happened, but thankfully now, everything's normal.\nThank you, guys.\n\nRegards,\nJenny Tania\n\n\n\t\t\n__________________________________________ \nYahoo! DSL � Something to write home about. \nJust $16.99/mo. or less. \ndsl.yahoo.com \n\n", "msg_date": "Tue, 6 Dec 2005 01:41:15 -0800 (PST)", "msg_from": "Jenny <[email protected]>", "msg_from_op": true, "msg_subject": "need help (not anymore)" } ]
[ { "msg_contents": "Hi,\n\nIs it possible to get this query run faster than it does now, by adding\nindexes, changing the query?\n\nSELECT customers.objectid FROM prototype.customers, prototype.addresses\nWHERE\ncustomers.contactaddress = addresses.objectid\nORDER BY zipCode asc, housenumber asc\nLIMIT 1 OFFSET 283745\n\nExplain:\n\nLimit (cost=90956.71..90956.71 rows=1 width=55)\n -> Sort (cost=90247.34..91169.63 rows=368915 width=55)\n Sort Key: addresses.zipcode, addresses.housenumber\n -> Hash Join (cost=14598.44..56135.75 rows=368915 width=55)\n Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n -> Seq Scan on customers (cost=0.00..31392.15\nrows=368915 width=80)\n -> Hash (cost=13675.15..13675.15 rows=369315 width=55)\n -> Seq Scan on addresses (cost=0.00..13675.15\nrows=369315 width=55)\n\nThe customers table has an index on contactaddress and objectid.\nThe addresses table has an index on zipcode+housenumber and objectid.\n\nTIA\n\n--\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n\n", "msg_date": "Tue, 06 Dec 2005 10:43:49 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Can this query go faster???" }, { "msg_contents": "> Hi,\n> \n> Is it possible to get this query run faster than it does now, by adding\n> indexes, changing the query?\n> \n> SELECT customers.objectid FROM prototype.customers, prototype.addresses\n> WHERE\n> customers.contactaddress = addresses.objectid\n> ORDER BY zipCode asc, housenumber asc\n> LIMIT 1 OFFSET 283745\n> \n> Explain:\n> \n> Limit (cost=90956.71..90956.71 rows=1 width=55)\n> -> Sort (cost=90247.34..91169.63 rows=368915 width=55)\n> Sort Key: addresses.zipcode, addresses.housenumber\n> -> Hash Join (cost=14598.44..56135.75 rows=368915 width=55)\n> Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> -> Seq Scan on customers (cost=0.00..31392.15\n> rows=368915 width=80)\n> -> Hash (cost=13675.15..13675.15 rows=369315 width=55)\n> -> Seq Scan on addresses (cost=0.00..13675.15\n> rows=369315 width=55)\n> \n> The customers table has an index on contactaddress and objectid.\n> The addresses table has an index on zipcode+housenumber and objectid.\n\nWhen the resulting relation contains all the info from both tables, \nindexes won't help, seq scan is inevitable.\n", "msg_date": "Tue, 06 Dec 2005 10:51:25 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "Joost,\n\nWhy do you use an offset here ? I guess you're traversing the table\nsomehow, in this case it would be better to remember the last zipcode +\nhousenumber and put an additional condition to get the next bigger than\nthe last one you've got... that would go for the index on\nzipcode+housenumber and be very fast. 
The big offset forces postgres to\ntraverse that many entries until it's able to pick the one row for the\nresult...\n\nOn Tue, 2005-12-06 at 10:43, Joost Kraaijeveld wrote:\n> Hi,\n> \n> Is it possible to get this query run faster than it does now, by adding\n> indexes, changing the query?\n> \n> SELECT customers.objectid FROM prototype.customers, prototype.addresses\n> WHERE\n> customers.contactaddress = addresses.objectid\n> ORDER BY zipCode asc, housenumber asc\n> LIMIT 1 OFFSET 283745\n> \n> Explain:\n> \n> Limit (cost=90956.71..90956.71 rows=1 width=55)\n> -> Sort (cost=90247.34..91169.63 rows=368915 width=55)\n> Sort Key: addresses.zipcode, addresses.housenumber\n> -> Hash Join (cost=14598.44..56135.75 rows=368915 width=55)\n> Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> -> Seq Scan on customers (cost=0.00..31392.15\n> rows=368915 width=80)\n> -> Hash (cost=13675.15..13675.15 rows=369315 width=55)\n> -> Seq Scan on addresses (cost=0.00..13675.15\n> rows=369315 width=55)\n> \n> The customers table has an index on contactaddress and objectid.\n> The addresses table has an index on zipcode+housenumber and objectid.\n> \n> TIA\n> \n> --\n> Groeten,\n> \n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> e-mail: [email protected]\n> web: www.askesis.nl\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Tue, 06 Dec 2005 10:52:57 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:\n> Joost,\n> \n> Why do you use an offset here ? I guess you're traversing the table\n> somehow, in this case it would be better to remember the last zipcode +\n> housenumber and put an additional condition to get the next bigger than\n> the last one you've got... that would go for the index on\n> zipcode+housenumber and be very fast. The big offset forces postgres to\n> traverse that many entries until it's able to pick the one row for the\nI am forced to translate a sorting dependent record number to a record\nin the database. The GUI (a Java JTable) works with record /row numbers,\nwhich is handy if one has an ISAM database, but not if one uses\nPostgreSQL.\n\nI wonder if using a forward scrolling cursor would be faster.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 06 Dec 2005 11:21:00 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "Joost Kraaijeveld schrieb:\n> On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:\n> \n>>Joost,\n>>\n>>Why do you use an offset here ? I guess you're traversing the table\n>>somehow, in this case it would be better to remember the last zipcode +\n>>housenumber and put an additional condition to get the next bigger than\n>>the last one you've got... that would go for the index on\n>>zipcode+housenumber and be very fast. The big offset forces postgres to\n>>traverse that many entries until it's able to pick the one row for the\n> \n> I am forced to translate a sorting dependent record number to a record\n> in the database. 
The GUI (a Java JTable) works with record /row numbers,\n> which is handy if one has an ISAM database, but not if one uses\n> PostgreSQL.\n\nYou can have a row number in postgres easily too. For example if you\njust include a serial for the row number.\n\nCursor would work too but you would need to have a persistent connection.\n\nRegards\nTino\n", "msg_date": "Tue, 06 Dec 2005 11:32:36 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "Hi Tino,\n\nOn Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:\n> You can have a row number in postgres easily too. For example if you\n> just include a serial for the row number.\nNot if the order of things is determined runtime and not at insert time...\n\n> Cursor would work too but you would need to have a persistent connection.\nI just tried it: a cursor is not faster (what does not surprise me at\nall, as the amount of work looks the same to me)\n\nI guess there is no solution.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 06 Dec 2005 12:21:24 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "Joost Kraaijeveld schrieb:\n> Hi Tino,\n> \n..\n> \n>>Cursor would work too but you would need to have a persistent connection.\n> \n> I just tried it: a cursor is not faster (what does not surprise me at\n> all, as the amount of work looks the same to me)\n\nActually no, if you scroll forward, you just ask the database for the\nnext rows to materialize. So if you are ahead in your database and\nask for next rows, it should be faster then working w/ an offset\nfrom start each time.\n\n\n", "msg_date": "Tue, 06 Dec 2005 12:36:50 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "On Tue, 2005-12-06 at 12:36 +0100, Tino Wildenhain wrote:\n> > \n> > I just tried it: a cursor is not faster (what does not surprise me at\n> > all, as the amount of work looks the same to me)\n> \n> Actually no, if you scroll forward, you just ask the database for the\n> next rows to materialize. So if you are ahead in your database and\n> ask for next rows, it should be faster then working w/ an offset\n> from start each time.\nAh, a misunderstanding: I only need to calculate an index if the user\nwants a record that is not in or adjacent to the cache (in which case I\ncan do a \"select values > last value in the cache\". So I must always\nmaterialize all rows below the wanted index.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Tue, 06 Dec 2005 13:20:45 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "Joost Kraaijeveld schrieb:\n> On Tue, 2005-12-06 at 12:36 +0100, Tino Wildenhain wrote:\n> \n>>>I just tried it: a cursor is not faster (what does not surprise me at\n>>>all, as the amount of work looks the same to me)\n>>\n>>Actually no, if you scroll forward, you just ask the database for the\n>>next rows to materialize. 
So if you are ahead in your database and\n>>ask for next rows, it should be faster then working w/ an offset\n>>from start each time.\n> \n> Ah, a misunderstanding: I only need to calculate an index if the user\n> wants a record that is not in or adjacent to the cache (in which case I\n> can do a \"select values > last value in the cache\". So I must always\n> materialize all rows below the wanted index.\n> \nYes, but still advancing a few blocks from where the cursor is\nshould be faster then re-issuing the query and scroll thru\nthe whole resultset to where you want to go.\n\n\n", "msg_date": "Tue, 06 Dec 2005 13:30:25 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "On Tue, 2005-12-06 at 13:20, Joost Kraaijeveld wrote:\n[snip]\n> Ah, a misunderstanding: I only need to calculate an index if the user\n> wants a record that is not in or adjacent to the cache (in which case I\n> can do a \"select values > last value in the cache\". So I must always\n> materialize all rows below the wanted index.\n\nIn this case the query will very likely not work faster. It must always\nvisit all the records till the required offset. If the plan should be\nfaster using the index, then you probably need to analyze (I don't\nrecall from your former posts if you did it recently or not), in any\ncase you could check an \"explain analyze\" to see if the planner is\nmistaken or not - you might already know this.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Tue, 06 Dec 2005 13:32:57 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "At 04:43 AM 12/6/2005, Joost Kraaijeveld wrote:\n>Hi,\n>\n>Is it possible to get this query run faster than it does now, by adding\n>indexes, changing the query?\n>\n>SELECT customers.objectid FROM prototype.customers, prototype.addresses\n>WHERE\n>customers.contactaddress = addresses.objectid\n>ORDER BY zipCode asc, housenumber asc\n>LIMIT 1 OFFSET 283745\n>\n>Explain:\n>\n>Limit (cost=90956.71..90956.71 rows=1 width=55)\n> -> Sort (cost=90247.34..91169.63 rows=368915 width=55)\n> Sort Key: addresses.zipcode, addresses.housenumber\n> -> Hash Join (cost=14598.44..56135.75 rows=368915 width=55)\n> Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> -> Seq Scan on customers (cost=0.00..31392.15\n>rows=368915 width=80)\n> -> Hash (cost=13675.15..13675.15 rows=369315 width=55)\n> -> Seq Scan on addresses (cost=0.00..13675.15\n>rows=369315 width=55)\n>\n>The customers table has an index on contactaddress and objectid.\n>The addresses table has an index on zipcode+housenumber and objectid.\n>\n>TIA\ncustomer names, customers.objectid, addresses, and addresses.objectid \nshould all be static (addresses do not change, just the customers \nassociated with them; and once a customer has been assigned an id \nthat better never change...).\n\nTo me, this sounds like the addresses and customers tables should be \nduplicated and then physically laid out in sorted order by \n<tablename>.objectid in one set and by the \"human friendly\" \nassociated string in the other set.\nThen a finding a specific <tablename>.objectid or it's associated \nstring can be done in at worse O(lgn) time assuming binary search \ninstead of O(n) time for a sequential scan. 
If pg is clever enough, \nit might be able to do better than that.\n\nIOW, I'd try duplicating the addresses and customers tables and using \nthe appropriate CLUSTERed Index on each.\n\nI know this breaks Normal Form. OTOH, this kind of thing is common \npractice for data mining problems on static or almost static data.\n\nHope this is helpful,\nRon\n\n\n", "msg_date": "Tue, 06 Dec 2005 08:41:11 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" }, { "msg_contents": "On Tue, Dec 06, 2005 at 10:52:57 +0100,\n Csaba Nagy <[email protected]> wrote:\n> Joost,\n> \n> Why do you use an offset here ? I guess you're traversing the table\n> somehow, in this case it would be better to remember the last zipcode +\n> housenumber and put an additional condition to get the next bigger than\n> the last one you've got... that would go for the index on\n> zipcode+housenumber and be very fast. The big offset forces postgres to\n> traverse that many entries until it's able to pick the one row for the\n> result...\n\nThe other problem with saving an offset, is unless the data isn't changing\nor you are doing all of the searches in one serialized transaction, the\nfixed offset might not put you back where you left off.\nUsing the last key, instead of counting records is normally a better way\nto do this.\n", "msg_date": "Tue, 6 Dec 2005 14:50:48 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can this query go faster???" } ]
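
To make the "remember the last key" advice from Csaba and Bruno concrete, here is a sketch against the tables from the start of the thread. :last_zip and :last_housenumber are placeholders for the values of the last row already shown (not real data), and if the sort key can repeat, a tie-breaker such as addresses.objectid should be added to both the ORDER BY and the WHERE clause.

  SELECT customers.objectid, addresses.zipcode, addresses.housenumber
  FROM prototype.customers
  JOIN prototype.addresses ON customers.contactaddress = addresses.objectid
  WHERE addresses.zipcode > :last_zip
     OR (addresses.zipcode = :last_zip
         AND addresses.housenumber > :last_housenumber)
  ORDER BY addresses.zipcode ASC, addresses.housenumber ASC
  LIMIT 100;
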
[ { "msg_contents": "Hi,\n\n> -----Ursprüngliche Nachricht-----\n> Von: [email protected] \n> [mailto:[email protected]] Im Auftrag \n> von Joost Kraaijeveld\n> Gesendet: Dienstag, 6. Dezember 2005 10:44\n> An: Pgsql-Performance\n> Betreff: [PERFORM] Can this query go faster???\n \n> SELECT customers.objectid FROM prototype.customers, \n> prototype.addresses WHERE customers.contactaddress = \n> addresses.objectid ORDER BY zipCode asc, housenumber asc \n> LIMIT 1 OFFSET 283745\n> \n> Explain:\n> \n> Limit (cost=90956.71..90956.71 rows=1 width=55)\n> -> Sort (cost=90247.34..91169.63 rows=368915 width=55)\n> Sort Key: addresses.zipcode, addresses.housenumber\n> -> Hash Join (cost=14598.44..56135.75 rows=368915 width=55)\n> Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> -> Seq Scan on customers (cost=0.00..31392.15\n> rows=368915 width=80)\n> -> Hash (cost=13675.15..13675.15 rows=369315 width=55)\n> -> Seq Scan on addresses (cost=0.00..13675.15\n> rows=369315 width=55)\n> \n> The customers table has an index on contactaddress and objectid.\n> The addresses table has an index on zipcode+housenumber and objectid.\n\nThe planner chooses sequential scans on customers.contactaddress and addresses.objectid instead of using the indices. In order to determine whether this is a sane decision, you should run EXPLAIN ANALYZE on this query, once with SET ENABLE_SEQSCAN = on; and once with SET ENABLE_SEQSCAN = off;. If the query is significantly faster with SEQSCAN off, then something is amiss - either you haven't run analyze often enough so the stats are out of date or you have random_page_cost set too high (look for the setting in postgresql.conf) - these two are the \"usual suspects\".\n\nKind regards\n\n Markus\n", "msg_date": "Tue, 6 Dec 2005 10:54:58 +0100", "msg_from": "\"Markus Wollny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can this query go faster???" } ]
[ { "msg_contents": "> On Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:\n> > You can have a row number in postgres easily too. For example if you\n> > just include a serial for the row number.\n> Not if the order of things is determined runtime and not at insert\ntime...\n> \n> > Cursor would work too but you would need to have a persistent\n> connection.\n> I just tried it: a cursor is not faster (what does not surprise me at\n> all, as the amount of work looks the same to me)\n> \n> I guess there is no solution.\n> \n\nsure there is. This begs the question: 'why do you want to read exactly\n283745 rows ahead of row 'x'?) :)\n\nIf you are scrolling forwards in a set, just pull in, say, 100-1000 rows\nat a time, ordered, and grab the next 1000 based on the highest value\nread previously.\n\nYou can do this on server side (cursor) or client side (parameterized\nquery). There are advantages and disadvantages to each. If you are\nlooping over this set and doing processing, a cursor would be ideal (try\nout pl/pgsql).\n\nWelcome to PostgreSQL! :) \n\nMerlin\n", "msg_date": "Tue, 6 Dec 2005 08:58:31 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can this query go faster???" } ]
[ { "msg_contents": "\n\nHello,\n\nI have a question on postgres's performance tuning, in particular, the\nvacuum and reindex commands. Currently I do a vacuum (without full) on all\nof my tables. However, its noted in the docs (e.g.\nhttp://developer.postgresql.org/docs/postgres/routine-reindex.html)\nand on the lists here that indexes may still bloat after a while and hence\nreindex is necessary. How often do people reindex their tables out\nthere? I guess I'd have to update my cron scripts to do reindexing too\nalong with vacuuming but most probably at a much lower frequency than\nvacuum.\n\nBut these scripts do these maintenance tasks at a fixed time (every few\nhours, days, weeks, etc.) What I would like is to do these tasks on a need\nbasis. So for vacuuming, by \"need\" I mean every few updates or some such\nmetric that characterizes my workload. Similarly, \"need\" for the reindex\ncommand might mean every few updates or degree of bloat, etc.\n\nI came across the pg_autovacuum daemon, which seems to do exactly what I\nneed for vacuums. However, it'd be great if there was a similar automatic\nreindex utility, like say, a pg_autoreindex daemon. Are there any plans\nfor this feature? If not, then would cron scripts be the next best\nchoice?\n\nThanks,\nAmeet\n", "msg_date": "Tue, 6 Dec 2005 10:14:09 -0600 (CST)", "msg_from": "Ameet Kini <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql performance tuning" }, { "msg_contents": "\nOn Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:\n\n> need for vacuums. However, it'd be great if there was a similar \n> automatic\n> reindex utility, like say, a pg_autoreindex daemon. Are there any \n> plans\n> for this feature? If not, then would cron scripts be the next best\n\nwhat evidence do you have that you are suffering index bloat? or are \nyou just looking for solutions to problems that don't exist as an \nacademic exercise? :-)\n\n", "msg_date": "Tue, 6 Dec 2005 15:07:03 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "Vivek Khera wrote:\n>\n> On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:\n>\n>> need for vacuums. However, it'd be great if there was a similar \n>> automatic\n>> reindex utility, like say, a pg_autoreindex daemon. Are there any plans\n>> for this feature? If not, then would cron scripts be the next best\n>\n> what evidence do you have that you are suffering index bloat? or are \n> you just looking for solutions to problems that don't exist as an \n> academic exercise? :-) \n\nThe files for the two indices on a single table used 7.8GB of space \nbefore a reindex, and 4.4GB after. The table had been reindexed over \nthe weekend and a vacuum was completed on the table about 2 hours ago.\n\nThe two indices are now 3.4GB smaller. I don't think this counts as \nbloat, because of our use case. Even so, we reindex our whole database \nevery weekend.\n\n-- Alan\n\n", "msg_date": "Tue, 06 Dec 2005 15:48:20 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "Alan Stange <[email protected]> writes:\n> Vivek Khera wrote:\n>> what evidence do you have that you are suffering index bloat?\n\n> The files for the two indices on a single table used 7.8GB of space \n> before a reindex, and 4.4GB after.\n\nThat's not bloat ... 
that's pretty nearly in line with the normal\nexpectation for a btree index, which is about 2/3rds fill factor.\nIf the compacted index were 10X smaller then I'd agree that you have\na bloat problem.\n\nPeriodic reindexing on this scale is not doing a lot for you except\nthrashing your disks --- you're just giving space back to the OS that\nwill shortly be sucked up again by the same index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 16:20:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning " }, { "msg_contents": "Tom Lane wrote:\n> Alan Stange <[email protected]> writes:\n> \n>> Vivek Khera wrote:\n>> \n>>> what evidence do you have that you are suffering index bloat?\n>>> \n>\n> \n>> The files for the two indices on a single table used 7.8GB of space \n>> before a reindex, and 4.4GB after.\n>> \n>\n> That's not bloat ... that's pretty nearly in line with the normal\n> expectation for a btree index, which is about 2/3rds fill factor.\n> If the compacted index were 10X smaller then I'd agree that you have\n> a bloat problem.\n> \nI wrote \"I don't think this counts as bloat...\". I still don't.\n\n-- Alan\n", "msg_date": "Tue, 06 Dec 2005 16:39:59 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "\n\n> what evidence do you have that you are suffering index bloat? or are\n> you just looking for solutions to problems that don't exist as an\n> academic exercise? :-)\n\nWell, firstly, its not an academic exercise - Its very much of a real\nproblem that needs a real solution :)\n\nI'm running postgresql v8.0 and my problem is that running vacuum on my\nindices are blazing fast (upto 10x faster) AFTER running reindex. For a\ntable with only 1 index, the time to do a vacuum (without full) went down\nfrom 45 minutes to under 3 minutes. Maybe thats not bloat but thats\nsurely surprising. And this was after running vacuum periodically.\n\nAmeet\n\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nAmeet\n", "msg_date": "Tue, 6 Dec 2005 16:03:22 -0600 (CST)", "msg_from": "Ameet Kini <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "On Tue, Dec 06, 2005 at 04:03:22PM -0600, Ameet Kini wrote:\n>I'm running postgresql v8.0 and my problem is that running vacuum on my\n>indices are blazing fast (upto 10x faster) AFTER running reindex. For a\n>table with only 1 index, the time to do a vacuum (without full) went down\n>from 45 minutes to under 3 minutes.\n\nI've also noticed a fairly large increase in vacuum speed after a\nreindex. (To the point where the reindex + vacuum was faster than just a\nvacuum.)\n\nMike Stone\n", "msg_date": "Tue, 06 Dec 2005 19:23:07 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "\nOn Dec 6, 2005, at 5:03 PM, Ameet Kini wrote:\n\n> table with only 1 index, the time to do a vacuum (without full) \n> went down\n> from 45 minutes to under 3 minutes. Maybe thats not bloat but thats\n> surely surprising. 
And this was after running vacuum periodically.\n\nI'll bet either your FSM settings are too low and/or you don't vacuum \noften enough for your data churn rate.\n\nWithout more data, it is hard to solve the right problem.\n\n", "msg_date": "Wed, 7 Dec 2005 10:50:31 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "Hi everybody!\n\nMy system is 2xXEON 3 GHz, 4GB RAM, RAID-10 (4 SCSI HDDs), running Postgres 8.1.0, taken from CVS REL8_1_STABLE, compiled with gcc-3.4 with options \"-march=nocona -O2 -mfpmath=sse -msse3\". Hyperthreading is disabled.\n\nThere are about 300,000 - 500,000 transactions per day. Database size is about 14 Gigabytes.\n\nThe problem is that all queries run pretty good except transaction COMMITs.\nSometimes it takes about 300-500 ms to commit a transaction and it is unacceptebly slow for my application.\nI had this problem before, on 8.0.x and 7.4.x, but since 8.1 upgrade all queries began to work very fast except commit.\n\nBTW, I ran my own performance test, a multithreaded typical application user emulator, on 7.4 and 8.1. 8.1 performance was 8x times faster than 7.4, on the same machine and with the same config file settings.\n\nSome settings from my postgresql.conf:\n\nshared_buffers = 32768\ntemp_buffers = 32768\nwork_mem = 12228\nbg_writer_delay = 400\nwal_buffers = 128\ncommit_delay = 30000\ncheckpoint_segments = 8\neffective_cache_size = 262144 # postgres is the one and the only application on this machine\ndefault_statistics_target = 250\n\nall statistic collection enabled\nautovacuum runs every 120 seconds. vacuum is run after 2000 updates, analyze is run after 1000 updates.\n\nI've run vmstat to monitor hard disk activity. It was 50-500 Kb/sec for reading and 200-1500 Kb/sec for writing. There are some peak hdd reads and writes (10-20Mb/s) but commit time does not always depend upon them.\n\nWhat parameters should I tune?\n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n", "msg_date": "Thu, 8 Dec 2005 20:10:29 +0300", "msg_from": "Evgeny Gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "slow COMMITs" } ]
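For the reindex question above, one quick way to check whether an index really is bloated before putting REINDEX into cron, assuming the pg_relation_size()/pg_size_pretty() functions that ship with 8.1; the index name is a made-up example.

    SELECT pg_size_pretty(pg_relation_size('my_table_pkey')) AS size_before;
    REINDEX INDEX my_table_pkey;
    SELECT pg_size_pretty(pg_relation_size('my_table_pkey')) AS size_after;

Per Tom's 2/3 fill-factor note, a before size only 1.5-2x larger than the after size is normal for a btree; only a difference of roughly an order of magnitude points to bloat worth scheduling a REINDEX for.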
[ { "msg_contents": "\nGreetings all,\n\nI'm going to do a performance comparison with DocMgr and PG81/TSearch2 on \none end, and Apache Lucene on the other end.\n\nIn order to do this, I'm going to create a derivative of the \ndocmgr-autoimport script so that I can specify one file to import at a \ntime. I'll then create a Perl script which logs all details (such as \ntiming, etc.) as the test progresses.\n\nAs test data, I have approximately 9,000 text files from Project Gutenberg \nranging in size from a few hundred bytes to 4.5M.\n\nI plan to test the speed of import of each file. Then, I plan to write a \nweb-robot in Perl that will test the speed and number of results returned.\n\nCan anyone think of a validation of this test, or how I should configure \nPG to maximise import and search speed? Can I maximise search speed and \nimport speed, or are those things mutually exclusive? (Note that this \nwill be run on limited hardware - 900MHz Athlon with 512M of ram)\n\nHas anyone ever compared TSearch2 to Lucene, as far as performance is \nconcerned?\n\nThanks,\n-Josh\n", "msg_date": "Tue, 6 Dec 2005 11:47:44 -0500 (EST)", "msg_from": "Joshua Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "TSearch2 vs. Apache Lucene" }, { "msg_contents": "\n> Has anyone ever compared TSearch2 to Lucene, as far as performance is \n> concerned?\n\nI'll stay away from TSearch2 until it is fully integrated in the \npostgres core (like \"create index foo_text on foo (texta, textb) USING \nTSearch2\"). Because a full integration is unlikely to happen in the near \nfuture (as far as I know), I'll stick to Lucene.\n\nMike\n", "msg_date": "Tue, 06 Dec 2005 17:59:14 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "On 6 Dec 2005, at 16:47, Joshua Kramer wrote:\n> Has anyone ever compared TSearch2 to Lucene, as far as performance \n> is concerned?\n\nIn our experience (small often-updated documents) Lucene leaves \ntsearch2 in the dust. This probably has a lot to do with our usage \npattern though. For our usage it's very beneficial to have the index \non a separate machine to the data, however in many cases this won't \nmake sense. Lucene is also a lot easier to \"cluster\" than Postgres \n(it's simply a matter of NFS-mounting the index).\n\nRuss Garrett\[email protected]\n", "msg_date": "Tue, 6 Dec 2005 17:00:21 +0000", "msg_from": "Russell Garrett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "Folks,\n\ntsearch2 and Lucene are very different search engines, so it'd be unfair\ncomparison. If you need full access to metadata and instant indexing\nyou, probably, find tsearch2 is more suitable then Lucene. But, if \nyou could live without that features and need to search read only\narchives you need Lucene.\n\nTsearch2 integration into pgsql would be cool, but, I see no problem to \nuse tsearch2 as an official extension module. 
After completing our\ntodo, which we hope will likely happens for 8.2 release, you could\nforget about Lucene and other engines :) We'll be available for developing\nin spring and we estimate about three months for our todo, so, it's\nreally doable.\n\n \tOleg\n\nOn Tue, 6 Dec 2005, Michael Riess wrote:\n\n>\n>> Has anyone ever compared TSearch2 to Lucene, as far as performance is \n>> concerned?\n>\n> I'll stay away from TSearch2 until it is fully integrated in the postgres \n> core (like \"create index foo_text on foo (texta, textb) USING TSearch2\"). \n> Because a full integration is unlikely to happen in the near future (as far \n> as I know), I'll stick to Lucene.\n>\n> Mike\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 6 Dec 2005 20:14:14 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "Oleg Bartunov wrote:\n> Folks,\n> \n> tsearch2 and Lucene are very different search engines, so it'd be unfair\n> comparison. If you need full access to metadata and instant indexing\n> you, probably, find tsearch2 is more suitable then Lucene. But, if \n> you could live without that features and need to search read only\n> archives you need Lucene.\n> \n> Tsearch2 integration into pgsql would be cool, but, I see no problem to \n> use tsearch2 as an official extension module. After completing our\n> todo, which we hope will likely happens for 8.2 release, you could\n> forget about Lucene and other engines :) We'll be available for developing\n> in spring and we estimate about three months for our todo, so, it's\n> really doable.\n\nAgreed. There isn't anything magical about a plug-in vs something\nintegrated, as least in PostgreSQL. In other database, plug-ins can't\nfully function as integrated, but in PostgreSQL, everything is really a\nplug-in because it is all abstracted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 6 Dec 2005 12:27:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Oleg Bartunov wrote:\n>> Tsearch2 integration into pgsql would be cool, but, I see no problem to \n>> use tsearch2 as an official extension module.\n\n> Agreed. There isn't anything magical about a plug-in vs something\n> integrated, as least in PostgreSQL.\n\nThe quality gap between contrib and the main system is a lot smaller\nthan it used to be, at least for those contrib modules that have\nregression tests. 
Main and contrib get equal levels of testing from\nthe buildfarm, so they're about on par as far as portability goes.\nWe could never say that before 8.1 ...\n\n(Having said that, I think that tsearch2 will eventually become part\nof core, but probably not for awhile yet.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 13:02:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene " }, { "msg_contents": "Bruce Momjian schrieb:\n> Oleg Bartunov wrote:\n>> Folks,\n>>\n>> tsearch2 and Lucene are very different search engines, so it'd be unfair\n>> comparison. If you need full access to metadata and instant indexing\n>> you, probably, find tsearch2 is more suitable then Lucene. But, if \n>> you could live without that features and need to search read only\n>> archives you need Lucene.\n>>\n>> Tsearch2 integration into pgsql would be cool, but, I see no problem to \n>> use tsearch2 as an official extension module. After completing our\n>> todo, which we hope will likely happens for 8.2 release, you could\n>> forget about Lucene and other engines :) We'll be available for developing\n>> in spring and we estimate about three months for our todo, so, it's\n>> really doable.\n> \n> Agreed. There isn't anything magical about a plug-in vs something\n> integrated, as least in PostgreSQL. In other database, plug-ins can't\n> fully function as integrated, but in PostgreSQL, everything is really a\n> plug-in because it is all abstracted.\n\n\nI only remember evaluating TSearch2 about a year ago, and when I read \nstatements like \"Vacuum and/or database dump/restore work differently \nwhen using TSearch2, sql scripts need to be executed etc.\" I knew that I \nwould not want to go there.\n\nBut I don't doubt that it works, and that it is a sane concept.\n", "msg_date": "Tue, 06 Dec 2005 19:28:44 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "Michael Riess wrote:\n> Bruce Momjian schrieb:\n> > Oleg Bartunov wrote:\n> >> Folks,\n> >>\n> >> tsearch2 and Lucene are very different search engines, so it'd be unfair\n> >> comparison. If you need full access to metadata and instant indexing\n> >> you, probably, find tsearch2 is more suitable then Lucene. But, if \n> >> you could live without that features and need to search read only\n> >> archives you need Lucene.\n> >>\n> >> Tsearch2 integration into pgsql would be cool, but, I see no problem to \n> >> use tsearch2 as an official extension module. After completing our\n> >> todo, which we hope will likely happens for 8.2 release, you could\n> >> forget about Lucene and other engines :) We'll be available for developing\n> >> in spring and we estimate about three months for our todo, so, it's\n> >> really doable.\n> > \n> > Agreed. There isn't anything magical about a plug-in vs something\n> > integrated, as least in PostgreSQL. In other database, plug-ins can't\n> > fully function as integrated, but in PostgreSQL, everything is really a\n> > plug-in because it is all abstracted.\n> \n> \n> I only remember evaluating TSearch2 about a year ago, and when I read \n> statements like \"Vacuum and/or database dump/restore work differently \n> when using TSearch2, sql scripts need to be executed etc.\" I knew that I \n> would not want to go there.\n> \n> But I don't doubt that it works, and that it is a sane concept.\n\nGood point. 
I think we had some problems at that point because the API\nwas improved between versions. Even if it had been integrated, we might\nhave had the same problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 6 Dec 2005 13:32:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "...\n\nSo you'll avoid a non-core product and instead only use another non-core \nproduct...?\n\nChris\n\nMichael Riess wrote:\n> \n>> Has anyone ever compared TSearch2 to Lucene, as far as performance is \n>> concerned?\n> \n> \n> I'll stay away from TSearch2 until it is fully integrated in the \n> postgres core (like \"create index foo_text on foo (texta, textb) USING \n> TSearch2\"). Because a full integration is unlikely to happen in the near \n> future (as far as I know), I'll stick to Lucene.\n> \n> Mike\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 07 Dec 2005 09:40:43 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "No, my problem is that using TSearch2 interferes with other core \ncomponents of postgres like (auto)vacuum or dump/restore.\n\n\n> ...\n> \n> So you'll avoid a non-core product and instead only use another non-core \n> product...?\n> \n> Chris\n> \n> Michael Riess wrote:\n>>\n>>> Has anyone ever compared TSearch2 to Lucene, as far as performance is \n>>> concerned?\n>>\n>>\n>> I'll stay away from TSearch2 until it is fully integrated in the \n>> postgres core (like \"create index foo_text on foo (texta, textb) USING \n>> TSearch2\"). Because a full integration is unlikely to happen in the \n>> near future (as far as I know), I'll stick to Lucene.\n>>\n>> Mike\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Wed, 07 Dec 2005 08:56:18 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" }, { "msg_contents": "> No, my problem is that using TSearch2 interferes with other core \n> components of postgres like (auto)vacuum or dump/restore.\n\nThat's nonsense...seriously.\n\nThe only trick with dump/restore is that you have to install the \ntsearch2 shared library before restoring. That's the same as all \ncontribs though.\n\nChris\n\n", "msg_date": "Wed, 07 Dec 2005 16:20:01 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. 
Apache Lucene" }, { "msg_contents": "Christopher Kings-Lynne schrieb:\n>> No, my problem is that using TSearch2 interferes with other core \n>> components of postgres like (auto)vacuum or dump/restore.\n> \n> That's nonsense...seriously.\n> \n> The only trick with dump/restore is that you have to install the \n> tsearch2 shared library before restoring. That's the same as all \n> contribs though.\n\nWell, then it changed since I last read the documentation. That was \nabout a year ago, and since then we are using Lucene ... and as it works \nquite nicely, I see no reason to switch to TSearch2. Including it with \nthe pgsql core would make it much more attractive to me, as it seems to \nme that once included into the core, features seem to be more stable. \nCall me paranoid, if you must ... ;-)\n\n\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Wed, 07 Dec 2005 09:28:52 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TSearch2 vs. Apache Lucene" } ]
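For readers comparing the two engines, a minimal contrib/tsearch2 setup of the kind being discussed; the docs table and its columns are illustrative only, and the function names follow the tsearch2 module as installed from contrib (the thread's point being that no tighter core integration exists yet).

    -- illustrative table: docs(id serial primary key, body text)
    ALTER TABLE docs ADD COLUMN idx_fti tsvector;
    UPDATE docs SET idx_fti = to_tsvector('default', body);
    CREATE INDEX docs_fti_idx ON docs USING gist (idx_fti);
    VACUUM ANALYZE docs;

    -- keep the tsvector current as rows change
    CREATE TRIGGER docs_fti_update BEFORE INSERT OR UPDATE ON docs
        FOR EACH ROW EXECUTE PROCEDURE tsearch2(idx_fti, body);

    -- ranked query, same shape as the searches elsewhere in this archive
    SELECT id FROM docs
    WHERE idx_fti @@ to_tsquery('default', 'linux & kernel')
    ORDER BY rank_cd(idx_fti, to_tsquery('default', 'linux & kernel')) DESC;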
[ { "msg_contents": "\n\nThis didn't get through the first time around, so resending it again.\nSorry for any duplicate entries.\n\nHello,\n\nI have a question on postgres's performance tuning, in particular, the\nvacuum and reindex commands. Currently I do a vacuum (without full) on all\nof my tables. However, its noted in the docs (e.g.\nhttp://developer.postgresql.org/docs/postgres/routine-reindex.html)\nand on the lists here that indexes may still bloat after a while and hence\nreindex is necessary. How often do people reindex their tables out\nthere? I guess I'd have to update my cron scripts to do reindexing too\nalong with vacuuming but most probably at a much lower frequency than\nvacuum.\n\nBut these scripts do these maintenance tasks at a fixed time (every few\nhours, days, weeks, etc.) What I would like is to do these tasks on a need\nbasis. So for vacuuming, by \"need\" I mean every few updates or some such\nmetric that characterizes my workload. Similarly, \"need\" for the reindex\ncommand might mean every few updates or degree of bloat, etc.\n\nI came across the pg_autovacuum daemon, which seems to do exactly what I\nneed for vacuums. However, it'd be great if there was a similar automatic\nreindex utility, like say, a pg_autoreindex daemon. Are there any plans\nfor this feature? If not, then would cron scripts be the next best\nchoice?\n\nThanks,\nAmeet\n", "msg_date": "Tue, 6 Dec 2005 11:44:03 -0600 (CST)", "msg_from": "Ameet Kini <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql performance tuning" }, { "msg_contents": "Ameet Kini <[email protected]> writes:\n> I have a question on postgres's performance tuning, in particular, the\n> vacuum and reindex commands. Currently I do a vacuum (without full) on all\n> of my tables. However, its noted in the docs (e.g.\n> http://developer.postgresql.org/docs/postgres/routine-reindex.html)\n> and on the lists here that indexes may still bloat after a while and hence\n> reindex is necessary. How often do people reindex their tables out\n> there?\n\nNever, unless you have actual evidence that your indexes are bloating.\nIt's only very specific use-patterns that have problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 13:03:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning " }, { "msg_contents": "\nOn Dec 6, 2005, at 12:44 PM, Ameet Kini wrote:\n\n> I have a question on postgres's performance tuning, in particular, the\n> vacuum and reindex commands. Currently I do a vacuum (without full) \n> on all\n> of my tables. However, its noted in the docs (e.g.\n> http://developer.postgresql.org/docs/postgres/routine-reindex.html)\n> and on the lists here that indexes may still bloat after a while \n> and hence\n> reindex is necessary. How often do people reindex their tables out\n\nWhy would you be running a version older than 7.4? Index bloat is \nmostly a non-issue in recent releases of pg.\n\n", "msg_date": "Tue, 6 Dec 2005 13:33:37 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" }, { "msg_contents": "Ameet Kini schrieb:\n> \n> This didn't get through the first time around, so resending it again.\n> Sorry for any duplicate entries.\n> \n> Hello,\n> \n> I have a question on postgres's performance tuning, in particular, the\n> vacuum and reindex commands. Currently I do a vacuum (without full) on all\n> of my tables. \n\nI'm curious ... 
why no full vacuum? I bet that the full vacuum will \ncompact your (index) tables as much as a reindex would.\n\nI guess the best advice is to increase FSM and to use autovacuum.\n", "msg_date": "Tue, 06 Dec 2005 22:02:05 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance tuning" } ]
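The FSM settings Michael mentions live in postgresql.conf; the values below are placeholders to show the shape of the change, not a recommendation for any particular machine.

    # postgresql.conf (illustrative values only)
    max_fsm_pages = 200000       # must cover the pages freed between vacuums
    max_fsm_relations = 1000     # one slot per table/index holding free space

A database-wide VACUUM VERBOSE reports at the end of its output how many free-space-map page slots are actually in use, which is the usual way to size these two settings before relying on plain VACUUM plus autovacuum instead of VACUUM FULL.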
[ { "msg_contents": "I ran a bit exhaustive pgbench on 2 test machines I have (quad dual core\nIntel and Opteron). Ofcourse the Opteron was much faster, but\ninterestingly, it was experiencing 3x more context switches than the\nIntel box (upto 100k, versus ~30k avg on Dell). Both are RH4.0\n64bit/PG8.1 64bit.\n\nSun (v40z):\n-bash-3.00$ time pgbench -c 1000 -t 30 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1000\nnumber of transactions per client: 30\nnumber of transactions actually processed: 30000/30000\ntps = 45.871234 (including connections establishing)\ntps = 46.092629 (excluding connections establishing)\n\nreal 10m54.240s\nuser 0m34.894s\nsys 3m9.470s\n\n\nDell (6850):\n-bash-3.00$ time pgbench -c 1000 -t 30 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1000\nnumber of transactions per client: 30\nnumber of transactions actually processed: 30000/30000\ntps = 22.088214 (including connections establishing)\ntps = 22.162454 (excluding connections establishing)\n\nreal 22m38.301s\nuser 0m43.520s\nsys 5m42.108s\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, November 22, 2005 2:42 PM\nTo: Anjan Dave\nCc: Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring \n\n\"Anjan Dave\" <[email protected]> writes:\n> Would this problem change it's nature in any way on the recent\nDual-Core\n> Intel XEON MP machines?\n\nProbably not much.\n\nThere's some evidence that Opterons have less of a problem than Xeons\nin multi-chip configurations, but we've seen CS thrashing on Opterons\ntoo. I think the issue is probably there to some extent in any modern\nSMP architecture.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 6 Dec 2005 14:04:04 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "\nOn Dec 6, 2005, at 2:04 PM, Anjan Dave wrote:\n\n> interestingly, it was experiencing 3x more context switches than the\n> Intel box (upto 100k, versus ~30k avg on Dell). Both are RH4.0\n\nI'll assume that's context switches per second... so for the opteron \nthat's 65400000 cs's and for the Dell that's 40740000 switches during \nthe duration of the test. Not so much a difference...\n\nYou see, the opteron was context switching more because it was doing \nmore work :-)\n\n\n", "msg_date": "Tue, 6 Dec 2005 14:25:56 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1000\n> number of transactions per client: 30\n> number of transactions actually processed: 30000/30000\n> tps = 45.871234 (including connections establishing)\n> tps = 46.092629 (excluding connections establishing)\n\nI can hardly think of a worse way to run pgbench :-(. These numbers are\nabout meaningless, for two reasons:\n\n1. You don't want number of clients (-c) much higher than scaling factor\n(-s in the initialization step). 
The number of rows in the \"branches\"\ntable will equal -s, and since every transaction updates one\nrandomly-chosen \"branches\" row, you will be measuring mostly row-update\ncontention overhead if there's more concurrent transactions than there\nare rows. In the case -s 1, which is what you've got here, there is no\nactual concurrency at all --- all the transactions stack up on the\nsingle branches row.\n\n2. Running a small number of transactions per client means that\nstartup/shutdown transients overwhelm the steady-state data. You should\nprobably run at least a thousand transactions per client if you want\nrepeatable numbers.\n\nTry something like \"-s 10 -c 10 -t 3000\" to get numbers reflecting test\nconditions more like what the TPC council had in mind when they designed\nthis benchmark. I tend to repeat such a test 3 times to see if the\nnumbers are repeatable, and quote the middle TPS number as long as\nthey're not too far apart.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 18:45:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "Tom Lane wrote:\n> \"Anjan Dave\" <[email protected]> writes:\n> > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 1\n> > number of clients: 1000\n> > number of transactions per client: 30\n> > number of transactions actually processed: 30000/30000\n> > tps = 45.871234 (including connections establishing)\n> > tps = 46.092629 (excluding connections establishing)\n> \n> I can hardly think of a worse way to run pgbench :-(. These numbers are\n> about meaningless, for two reasons:\n> \n> 1. You don't want number of clients (-c) much higher than scaling factor\n> (-s in the initialization step). The number of rows in the \"branches\"\n> table will equal -s, and since every transaction updates one\n\nShould we throw a warning when someone runs the test this way?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 6 Dec 2005 23:34:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> 1. You don't want number of clients (-c) much higher than scaling factor\n>> (-s in the initialization step).\n\n> Should we throw a warning when someone runs the test this way?\n\nNot a bad idea (though of course only for the \"standard\" scripts).\nTatsuo, what do you think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 23:49:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> 1. You don't want number of clients (-c) much higher than scaling factor\n> >> (-s in the initialization step).\n> \n> > Should we throw a warning when someone runs the test this way?\n> \n> Not a bad idea (though of course only for the \"standard\" scripts).\n> Tatsuo, what do you think?\n\nThat would be annoying since almost every users will get the kind of\nwarnings. What about improving the README?\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\n", "msg_date": "Wed, 07 Dec 2005 16:26:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "On Tue, 2005-12-06 at 22:49, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> 1. You don't want number of clients (-c) much higher than scaling factor\n> >> (-s in the initialization step).\n> \n> > Should we throw a warning when someone runs the test this way?\n> \n> Not a bad idea (though of course only for the \"standard\" scripts).\n> Tatsuo, what do you think?\n\nJust to clarify, I think the pgbench program should throw the warning,\nnot postgresql itself. Not sure if that's what you were meaning or\nnot. Maybe even have it require a switch to run in such a mode, like a\n--yes-i-want-to-run-a-meaningless-test switch or something.\n", "msg_date": "Wed, 07 Dec 2005 10:24:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring" } ]
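Tom's suggested run, spelled out as shell commands; the database name pgbench and the numbers are taken from the thread itself.

    # initialize with scaling factor 10 (creates 10 rows in "branches")
    pgbench -i -s 10 pgbench

    # 10 clients, 3000 transactions each; repeat about three times
    # and quote the middle TPS figure
    pgbench -c 10 -t 3000 pgbench
    pgbench -c 10 -t 3000 pgbench
    pgbench -c 10 -t 3000 pgbench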
[ { "msg_contents": "Hi,\n\nI�m trying to optimize some selects between 2 tables and the best way I \nfound was\nalter the first table and add the fields of the 2nd table. I adjusted \nthe contents and\nnow a have only one table with all info that I need. Now resides my \nproblem, because\nof legacy queries I decided to make a Rule that replace the 2nd table.\n\nUntil now all worked well, but I found when I make a join between de result\ntable and de Rule, even tought is the same row in the same table, the \noptimizer\ngenerete two access for the same row:\ncta_pag is the table and ctapag_adm is the rule.\n\nCREATE OR REPLACE RULE \"_RETURN\" AS\n ON SELECT TO ctapag_adm DO INSTEAD SELECT cta_pag.nrlancto, \ncta_pag.codconta, cta_pag.frequencia, cta_pag.nrlanctopai\n FROM cta_pag\n WHERE cta_pag.origem = 'A'::bpchar;\n\nThis is one of the legacy queries:\n\nselect * from cta_pag p , ctapag_adm a where a.nrlancto= p.nrlancto and \np.nrlancto = 21861;\n\nEXPLAIN:\nNested Loop (cost=0.00..11.49 rows=1 width=443) (actual \ntime=0.081..0.088 rows=1 loops=1)\n -> Index Scan using cta_pag_pk on cta_pag p (cost=0.00..5.74 rows=1 \nwidth=408) (actual time=0.044..0.046 rows=1 loops=1)\n Index Cond: (nrlancto = 21861::numeric)\n -> Index Scan using cta_pag_pk on cta_pag (cost=0.00..5.74 rows=1 \nwidth=35) (actual time=0.023..0.025 rows=1 loops=1)\n Index Cond: (21861::numeric = nrlancto)\n Filter: (origem = 'A'::bpchar)\nTotal runtime: 0.341 ms\n\n\n Resulting in twice the time for accessing.\n\nAcessing just on time the same row:\n\nselect * from cta_pag p where p.nrlancto = 21861\n\nEXPLAIN:\nIndex Scan using cta_pag_pk on cta_pag p (cost=0.00..5.74 rows=1 \nwidth=408) (actual time=0.044..0.047 rows=1 loops=1)\n Index Cond: (nrlancto = 21861::numeric)\nTotal runtime: 0.161 ms\n\n\n Is there a way to force the optimizer to understand that is the same \nrow?\n\n Thanks,\n Edison\n\n\n--\nEdison Azzi\n<edisonazzi (at ) terra ( dot ) com ( dot ) br>\n\n", "msg_date": "Tue, 06 Dec 2005 18:22:47 -0200", "msg_from": "Edison Azzi <[email protected]>", "msg_from_op": true, "msg_subject": "Join the same row" }, { "msg_contents": "Edison Azzi wrote:\n> Hi,\n> \n> I´m trying to optimize some selects between 2 tables and the best way I \n> found was\n> alter the first table and add the fields of the 2nd table. I adjusted \n> the contents and\n> now a have only one table with all info that I need. Now resides my \n> problem, because\n> of legacy queries I decided to make a Rule that replace the 2nd table.\n> \n> Until now all worked well, but I found when I make a join between de result\n> table and de Rule, even tought is the same row in the same table, the \n> optimizer\n> generete two access for the same row:\n> cta_pag is the table and ctapag_adm is the rule.\n> \n> CREATE OR REPLACE RULE \"_RETURN\" AS\n> ON SELECT TO ctapag_adm DO INSTEAD SELECT cta_pag.nrlancto, \n> cta_pag.codconta, cta_pag.frequencia, cta_pag.nrlanctopai\n> FROM cta_pag\n> WHERE cta_pag.origem = 'A'::bpchar;\n> \n> This is one of the legacy queries:\n> \n> select * from cta_pag p , ctapag_adm a where a.nrlancto= p.nrlancto and \n> p.nrlancto = 21861;\n\nOK - and you get a self-join (which is what you asked for, but you'd \nlike the planner to notice that it might not be necessary).\n\n> Resulting in twice the time for accessing.\n> \n> Acessing just on time the same row:\n> \n> select * from cta_pag p where p.nrlancto = 21861\n\nThis isn't the same query though. Your rule has an additional condition \norigem='A'. 
This means it wouldn't be correct to eliminate the self-join \neven if the planner could.\n\n> Is there a way to force the optimizer to understand that is the same \n> row?\n\nHowever, even if you removed the condition on origem, I don't think the \nplanner will notice that it can eliminate the join. It's just too \nunusual a case for the planner to have a rule for it.\n\nI might be wrong about the planner - I'm just another user. One of the \ndevelopers may correct me.\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Wed, 07 Dec 2005 09:27:33 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join the same row" }, { "msg_contents": "Richard Huxton escreveu:\n\n> Edison Azzi wrote:\n>\n>> Hi,\n>>\n>> I�m trying to optimize some selects between 2 tables and the best way \n>> I found was\n>> alter the first table and add the fields of the 2nd table. I adjusted \n>> the contents and\n>> now a have only one table with all info that I need. Now resides my \n>> problem, because\n>> of legacy queries I decided to make a Rule that replace the 2nd table.\n>>\n>> Until now all worked well, but I found when I make a join between de \n>> result\n>> table and de Rule, even tought is the same row in the same table, the \n>> optimizer\n>> generete two access for the same row:\n>> cta_pag is the table and ctapag_adm is the rule.\n>>\n>> CREATE OR REPLACE RULE \"_RETURN\" AS\n>> ON SELECT TO ctapag_adm DO INSTEAD SELECT cta_pag.nrlancto, \n>> cta_pag.codconta, cta_pag.frequencia, cta_pag.nrlanctopai\n>> FROM cta_pag\n>> WHERE cta_pag.origem = 'A'::bpchar;\n>>\n>> This is one of the legacy queries:\n>>\n>> select * from cta_pag p , ctapag_adm a where a.nrlancto= p.nrlancto \n>> and p.nrlancto = 21861;\n>\n>\n> OK - and you get a self-join (which is what you asked for, but you'd \n> like the planner to notice that it might not be necessary).\n>\n>> Resulting in twice the time for accessing.\n>>\n>> Acessing just on time the same row:\n>>\n>> select * from cta_pag p where p.nrlancto = 21861\n>\n>\n> This isn't the same query though. Your rule has an additional \n> condition origem='A'. This means it wouldn't be correct to eliminate \n> the self-join even if the planner could.\n>\n>> Is there a way to force the optimizer to understand that is the \n>> same row?\n>\n>\n> However, even if you removed the condition on origem, I don't think \n> the planner will notice that it can eliminate the join. It's just too \n> unusual a case for the planner to have a rule for it.\n>\n> I might be wrong about the planner - I'm just another user. One of the \n> developers may correct me.\n\n\nYou are rigth, the planner will not eliminate the join, see:\n\nselect * from cta_pag a, cta_pag p where a.nrlancto=p.nrlancto and \np.nrlancto = 21861;\n\nEXPLAIN:\nNested Loop (cost=0.00..11.48 rows=1 width=816)\n -> Index Scan using cta_pag_pk on cta_pag a (cost=0.00..5.74 rows=1 \nwidth=408)\n Index Cond: (21861::numeric = nrlancto)\n -> Index Scan using cta_pag_pk on cta_pag p (cost=0.00..5.74 rows=1 \nwidth=408)\n Index Cond: (nrlancto = 21861::numeric)\n\n\nI know that this is too unusual case, but I hoped that the planner could \ndeal\nwith this condition. I�m trying to speed up without have to rewrite a \nbunch of\nqueries. 
Now I'll have to think another way to work around this issue.\n\nThanks,\n\n Edison.\n\n\n\n", "msg_date": "Wed, 07 Dec 2005 16:45:12 -0200", "msg_from": "Edison Azzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join the same row" }, { "msg_contents": "Edison Azzi wrote:\n> Richard Huxton escreveu:\n>> However, even if you removed the condition on origem, I don't think \n>> the planner will notice that it can eliminate the join. It's just too \n>> unusual a case for the planner to have a rule for it.\n>>\n>> I might be wrong about the planner - I'm just another user. One of the \n>> developers may correct me.\n> \n> \n> You are rigth, the planner will not eliminate the join, see:\n> \n> select * from cta_pag a, cta_pag p where a.nrlancto=p.nrlancto and \n> p.nrlancto = 21861;\n> \n> EXPLAIN:\n> Nested Loop (cost=0.00..11.48 rows=1 width=816)\n> -> Index Scan using cta_pag_pk on cta_pag a (cost=0.00..5.74 rows=1 \n> width=408)\n> Index Cond: (21861::numeric = nrlancto)\n> -> Index Scan using cta_pag_pk on cta_pag p (cost=0.00..5.74 rows=1 \n> width=408)\n> Index Cond: (nrlancto = 21861::numeric)\n> \n> \n> I know that this is too unusual case, but I hoped that the planner could \n> deal\n> with this condition. I´m trying to speed up without have to rewrite a \n> bunch of\n> queries. Now I'll have to think another way to work around this issue.\n\nIs the performance really so bad? All the data is guaranteed to be \ncached for the second index-scan.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Wed, 07 Dec 2005 18:55:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join the same row" }, { "msg_contents": "Edison Azzi <[email protected]> writes:\n> You are rigth, the planner will not eliminate the join, see:\n\n> select * from cta_pag a, cta_pag p where a.nrlancto=p.nrlancto and \n> p.nrlancto = 21861;\n\n> EXPLAIN:\n> Nested Loop (cost=0.00..11.48 rows=1 width=816)\n> -> Index Scan using cta_pag_pk on cta_pag a (cost=0.00..5.74 rows=1 \n> width=408)\n> Index Cond: (21861::numeric = nrlancto)\n> -> Index Scan using cta_pag_pk on cta_pag p (cost=0.00..5.74 rows=1 \n> width=408)\n> Index Cond: (nrlancto = 21861::numeric)\n\nBut do you care? That second fetch of the same row isn't going to cost\nmuch of anything, since everything it needs to touch will have been\nsucked into cache already. I don't really see the case for adding logic\nto the planner to detect this particular flavor of badly-written query.\n\nNotice that the planner *is* managing to propagate the constant\ncomparison to both relations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Dec 2005 15:36:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join the same row " } ]
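Because ctapag_adm is simply cta_pag restricted to origem = 'A', one possible rewrite of the legacy query avoids the self-join altogether while keeping the rule's filter; the adm_ aliases are only there to preserve the second table's column list and are not from the thread.

    SELECT p.*,
           p.nrlancto    AS adm_nrlancto,
           p.codconta    AS adm_codconta,
           p.frequencia  AS adm_frequencia,
           p.nrlanctopai AS adm_nrlanctopai
    FROM cta_pag p
    WHERE p.nrlancto = 21861
      AND p.origem = 'A';

As Tom notes, the duplicate index scan is nearly free once the page is cached, so a rewrite like this only matters if the query is hot enough for the extra scan to show up.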
[ { "msg_contents": "We're running a dual Xeon machine with hyperthreading enabled and\nPostgreSQL 8.0.3. Below is the type of CPUs:\n\n processor : 3\n vendor_id : GenuineIntel\n cpu family : 15\n model : 4\n model name : Intel(R) Xeon(TM) CPU 3.20GHz\n stepping : 1\n cpu MHz : 3200.274\n cache size : 1024 KB\n ...\n\nWe've been tuning the kernel (2.4 SMP flavor) and have improved\nperformance quite a bit. I'm now wondering if turning off HT will\nimprove performance even more. Based on the vmstat output below, is\nthe context switching typical or too high? And what is the latest on\nthe state of PostgreSQL running on Xeon processors with HT turned on?\nI searched the archives, but couldn't discern anything definitive.\n\n r b swpd free buff cache si so bi bo in cs us sy wa id\n 1 0 135944 64612 17136 3756816 0 0 0 210 154 178 2 0 4 94\n 1 0 135940 46600 17204 3754496 0 0 1 1231 442 3658 7 3 10 80\n 1 3 135940 51228 17240 3754680 0 0 0 1268 255 2659 4 1 14 81\n 1 0 135940 58512 17300 3754684 0 0 0 1818 335 1526 2 1 32 65\n 1 1 135940 18104 17328 3806516 0 0 17670 476 1314 1962 2 2 41 56\n 0 1 135940 17776 17232 3811620 0 0 23193 394 1600 2097 2 2 53 44\n 0 1 135940 17944 17188 3809636 0 0 25459 349 1547 2013 2 2 50 46\n 0 3 135940 18816 15184 3798312 0 0 24284 1328 1529 4730 6 5 53 36\n 0 6 135940 23536 6060 3817088 0 0 27376 1332 1350 2628 2 3 56 39\n 0 5 135940 18008 6036 3827132 0 0 18806 1539 1410 1416 1 2 61 36\n 0 5 135940 18492 5708 3826660 0 0 3540 10354 736 955 2 2 76 20\n 0 3 135940 18940 5788 3829864 0 0 2308 7506 707 519 2 1 81 15\n 1 4 135940 18980 5820 3828836 0 0 138 3503 556 261 1 0 74 24\n 0 10 135940 39332 5896 3777724 0 0 579 2805 621 4104 7 4 54 35\n 0 4 135936 37816 5952 3791404 0 0 260 1887 384 1574 2 1 40 57\n 0 5 135936 29552 5996 3802260 0 0 290 1642 434 1944 3 1 38 58\n\n\n-- \nBrandon\n", "msg_date": "Tue, 6 Dec 2005 15:01:02 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Context switching and Xeon processors" }, { "msg_contents": "\"Brandon Metcalf\" <[email protected]> writes:\n> We've been tuning the kernel (2.4 SMP flavor) and have improved\n> performance quite a bit. I'm now wondering if turning off HT will\n> improve performance even more. Based on the vmstat output below, is\n> the context switching typical or too high?\n\nGiven that your CPU usage is hovering around 2%, it's highly unlikely\nthat you'll be able to measure any change at all by fiddling with HT.\nWhat you need to be working on is disk I/O --- the \"80% wait\" number\nis what should be getting your attention, not the CS number.\n\n(FWIW, on the sort of hardware you're talking about, I wouldn't worry\nabout CS rates lower than maybe 10000/sec --- the hardware can sustain\nwell over 10x that.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Dec 2005 16:36:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Context switching and Xeon processors " }, { "msg_contents": "t == [email protected] writes:\n\n t> \"Brandon Metcalf\" <[email protected]> writes:\n t> > We've been tuning the kernel (2.4 SMP flavor) and have improved\n t> > performance quite a bit. I'm now wondering if turning off HT will\n t> > improve performance even more. 
Based on the vmstat output below, is\n t> > the context switching typical or too high?\n\n t> Given that your CPU usage is hovering around 2%, it's highly unlikely\n t> that you'll be able to measure any change at all by fiddling with HT.\n t> What you need to be working on is disk I/O --- the \"80% wait\" number\n t> is what should be getting your attention, not the CS number.\n\n t> (FWIW, on the sort of hardware you're talking about, I wouldn't worry\n t> about CS rates lower than maybe 10000/sec --- the hardware can sustain\n t> well over 10x that.)\n\n\nYes, I agree the disk I/O is an issue and that's what we've been\naddressing with the tuning we've been doing and have been able to\nimprove. I think that we really need to go to a RAID 10 array to\naddress the I/O issue, but thought I would investigate the context\nswitching issue.\n\nThanks for the information.\n\n-- \nBrandon\n", "msg_date": "Tue, 6 Dec 2005 15:45:04 -0600 (CST)", "msg_from": "\"Brandon Metcalf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Context switching and Xeon processors " }, { "msg_contents": "On Tue, Dec 06, 2005 at 03:01:02PM -0600, Brandon Metcalf wrote:\n> We're running a dual Xeon machine with hyperthreading enabled and\n> PostgreSQL 8.0.3.\n\nThe two single most important things that will help you with high rates of\ncontext switching:\n\n - Turn off hyperthreading.\n - Upgrade to 8.1.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 6 Dec 2005 22:52:22 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Context switching and Xeon processors" } ]
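Since Tom points at the 80% wait column rather than the context-switch rate, a quick way to watch the disks directly (assuming the sysstat iostat utility is available alongside vmstat):

    # extended per-device statistics every 5 seconds; sustained high
    # await / %util on the database volume confirms a disk I/O bottleneck
    iostat -x 5

    # the wa column here is the same I/O-wait percentage shown in the thread
    vmstat 5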
[ { "msg_contents": "Based on a suggestion on the postgis list, I partitioned my 80 million (for\nnow) record table into\n\nsubtables of about 230k records (the amount of data collected in five\nminutes). At the moment\n\nI have 350 subtables.\n\n \n\nEverything seems to be great.COPY time is ok, building a geometric index on\n\"only\" 230k records\n\nis ok, query performance is ok.\n\n \n\nI'm a little concerned about having so many subtables. 350 tables is not\nbad, but what happens if\n\nthe number of subtables grows into the thousands? Is there a practical\nlimit to the effectiveness\n\npartitioning?\n", "msg_date": "Tue, 6 Dec 2005 22:06:52 -0500", "msg_from": "\"Rick Schumeyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "table partitioning: effects of many sub-tables (was COPY too slow...)" } ]
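A sketch of what one five-minute subtable could look like if the partitions carry CHECK constraints, so that 8.1's constraint_exclusion can skip irrelevant children as the count grows into the thousands; the readings table, its columns, and the PostGIS geometry column are hypothetical.

    -- hypothetical parent: readings(t timestamptz, geom geometry, val float8)
    CREATE TABLE readings_20051206_2205 (
        CHECK (t >= '2005-12-06 22:05' AND t < '2005-12-06 22:10')
    ) INHERITS (readings);

    -- the per-partition geometric index mentioned in the post (needs PostGIS)
    CREATE INDEX readings_20051206_2205_gix
        ON readings_20051206_2205 USING gist (geom);

    -- let the planner prune children whose CHECK cannot match the WHERE clause
    SET constraint_exclusion = on;
    SELECT count(*) FROM readings
    WHERE t >= '2005-12-06 22:05' AND t < '2005-12-06 22:10';

Planning time still grows with the number of children the planner has to look at, so the practical limit tends to be set by plan time and by how many partitions each query can actually be excluded from.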
[ { "msg_contents": "Hi everybody!\n\nThis is my first posting to this list and I'm quite a PostgreSQL\nnewbie. My question is:\n\nThe first time I execute a query, it is very slow, but subsequent\nqueries are as fast as expected. I would be very glad if somebody\ncould explain why the first query is so slow and what I could do to\nspeed it up.\n\nThe query operates on a tsearch2 indexed column, but I experienced\nthe same issue on other tables as well, so I don't think it's a\ntsearch2 issue.\n\nTo get a better overview of the queries and EXPLAIN outputs, I've\nput them on a temporary website, together with the table definition\nand my postgresql.conf:\n\n<http://dblp.dyndns.org:8080/dblptest/explain.jsp>\n\nI'm running PostgreSQL 8.1 on Windows XP SP2, Athlon64 3000+, 2 GB\nRAM, 400 GB SATA HDD, 120 GB ATA HDD. The data reside on the first\nHDD, the indexes in an index tablespace on the second HDD.\n\nIn the example below, the first query is still quite fast compared\nto others. Sometimes the first query takes up to 9000 ms (see\nwebsite). I've run VACUUM FULL, but it didn't seem to solve the problem.\n\nThanks very much in advance,\n\n- Stephan\n\n\n--------------------------------------------------------\nQuery:\n--------------------------------------------------------\nSELECT keyword, overview\nFROM publications\nWHERE idx_fti @@ to_tsquery('default', 'linux & kernel')\nORDER BY rank_cd(idx_fti, 'linux & kernel') DESC;\n\n\n--------------------------------------------------------\nEXPLAIN for first query:\n--------------------------------------------------------\nSort (cost=859.89..860.48 rows=237 width=299) (actual\ntime=1817.962..1817.971 rows=10 loops=1)\n Sort Key: rank_cd(idx_fti, '''linux'' & ''kernel'''::tsquery)\n -> Bitmap Heap Scan on publications (cost=3.83..850.54 rows=237\nwidth=299) (actual time=1817.839..1817.914 rows=10 loops=1)\n Filter: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n -> Bitmap Index Scan on idx_fti_idx (cost=0.00..3.83\nrows=237 width=0) (actual time=1817.792..1817.792 rows=10 loops=1)\n Index Cond: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\nTotal runtime: 1818.068 ms\n\n\n--------------------------------------------------------\nEXPLAIN for second query:\n--------------------------------------------------------\nSort (cost=859.89..860.48 rows=237 width=299) (actual\ntime=4.817..4.826 rows=10 loops=1)\n Sort Key: rank_cd(idx_fti, '''linux'' & ''kernel'''::tsquery)\n -> Bitmap Heap Scan on publications (cost=3.83..850.54 rows=237\nwidth=299) (actual time=4.727..4.769 rows=10 loops=1)\n Filter: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n -> Bitmap Index Scan on idx_fti_idx (cost=0.00..3.83\nrows=237 width=0) (actual time=4.675..4.675 rows=10 loops=1)\n Index Cond: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\nTotal runtime: 4.914 ms\n", "msg_date": "Wed, 07 Dec 2005 11:46:19 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "First query is slow, subsequent queries fast" }, { "msg_contents": "Stephan,\n\nyou cache is too low :) Try to increase shared_buffers, for example,\nfor 2Gb I'd set it to 100,000\n\nOn Wed, 7 Dec 2005, Stephan Vollmer wrote:\n\n> Hi everybody!\n>\n> This is my first posting to this list and I'm quite a PostgreSQL\n> newbie. My question is:\n>\n> The first time I execute a query, it is very slow, but subsequent\n> queries are as fast as expected. 
I would be very glad if somebody\n> could explain why the first query is so slow and what I could do to\n> speed it up.\n>\n> The query operates on a tsearch2 indexed column, but I experienced\n> the same issue on other tables as well, so I don't think it's a\n> tsearch2 issue.\n>\n> To get a better overview of the queries and EXPLAIN outputs, I've\n> put them on a temporary website, together with the table definition\n> and my postgresql.conf:\n>\n> <http://dblp.dyndns.org:8080/dblptest/explain.jsp>\n>\n> I'm running PostgreSQL 8.1 on Windows XP SP2, Athlon64 3000+, 2 GB\n> RAM, 400 GB SATA HDD, 120 GB ATA HDD. The data reside on the first\n> HDD, the indexes in an index tablespace on the second HDD.\n>\n> In the example below, the first query is still quite fast compared\n> to others. Sometimes the first query takes up to 9000 ms (see\n> website). I've run VACUUM FULL, but it didn't seem to solve the problem.\n>\n> Thanks very much in advance,\n>\n> - Stephan\n>\n>\n> --------------------------------------------------------\n> Query:\n> --------------------------------------------------------\n> SELECT keyword, overview\n> FROM publications\n> WHERE idx_fti @@ to_tsquery('default', 'linux & kernel')\n> ORDER BY rank_cd(idx_fti, 'linux & kernel') DESC;\n>\n>\n> --------------------------------------------------------\n> EXPLAIN for first query:\n> --------------------------------------------------------\n> Sort (cost=859.89..860.48 rows=237 width=299) (actual\n> time=1817.962..1817.971 rows=10 loops=1)\n> Sort Key: rank_cd(idx_fti, '''linux'' & ''kernel'''::tsquery)\n> -> Bitmap Heap Scan on publications (cost=3.83..850.54 rows=237\n> width=299) (actual time=1817.839..1817.914 rows=10 loops=1)\n> Filter: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n> -> Bitmap Index Scan on idx_fti_idx (cost=0.00..3.83\n> rows=237 width=0) (actual time=1817.792..1817.792 rows=10 loops=1)\n> Index Cond: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n> Total runtime: 1818.068 ms\n>\n>\n> --------------------------------------------------------\n> EXPLAIN for second query:\n> --------------------------------------------------------\n> Sort (cost=859.89..860.48 rows=237 width=299) (actual\n> time=4.817..4.826 rows=10 loops=1)\n> Sort Key: rank_cd(idx_fti, '''linux'' & ''kernel'''::tsquery)\n> -> Bitmap Heap Scan on publications (cost=3.83..850.54 rows=237\n> width=299) (actual time=4.727..4.769 rows=10 loops=1)\n> Filter: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n> -> Bitmap Index Scan on idx_fti_idx (cost=0.00..3.83\n> rows=237 width=0) (actual time=4.675..4.675 rows=10 loops=1)\n> Index Cond: (idx_fti @@ '''linux'' & ''kernel'''::tsquery)\n> Total runtime: 4.914 ms\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 7 Dec 2005 13:55:36 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First query is slow, subsequent queries fast" }, { "msg_contents": "Hi Oleg, thanks for your quick reply!\n\nOleg Bartunov wrote:\n\n> you cache is too low :) Try to increase shared_buffers, for 
example,\n> for 2Gb I'd set it to 100,000\n\nOk, I set shared_buffers to 100000 and indeed it makes a big\ndifference. Other queries than the ones I mentioned are faster, too.\n\nThanks very much for your help,\n\n- Stephan\n", "msg_date": "Wed, 07 Dec 2005 19:33:37 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First query is slow, subsequent queries fast" } ]
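Oleg's suggestion as it would appear in postgresql.conf; 100,000 buffers is roughly 800 MB at the default 8 kB block size, and the effective_cache_size line is an illustrative addition, not something from the thread.

    # postgresql.conf (8.1)
    shared_buffers = 100000         # ~800 MB at 8 kB per buffer
    effective_cache_size = 131072   # illustrative: ~1 GB of expected OS cache

Even with a large buffer cache, the very first execution after a restart still has to pull the index and heap pages from disk, so some gap between the first and subsequent runs is expected; the larger cache mainly keeps later queries from falling back to the slow cold-cache case.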
[ { "msg_contents": "Hi Jan,\n\nAs I'm novice with PostgreSQL, can you elaborate the term FSM and\nsettings recommendations?\nBTW: I'm issuing VACUUM ANALYZE every 15 minutes (using cron) and also\nchanges the setting of fsync to false in postgresql.conf but still time\nseems to be growing.\nAlso no other transactions are open.\n\nThanks,\nAssaf.\n\n> -----Original Message-----\n> From: Jan Wieck [mailto:[email protected]] \n> Sent: Tuesday, December 06, 2005 2:35 PM\n> To: Assaf Yaari\n> Cc: Bruno Wolff III; [email protected]\n> Subject: Re: [PERFORM] Performance degradation after \n> successive UPDATE's\n> \n> On 12/6/2005 4:08 AM, Assaf Yaari wrote:\n> > Thanks Bruno,\n> > \n> > Issuing VACUUM FULL seems not to have influence on the time.\n> > I've added to my script VACUUM ANALYZE every 100 UPDATE's \n> and run the \n> > test again (on different record) and the time still increase.\n> \n> I think he meant\n> \n> - run VACUUM FULL once,\n> - adjust FSM settings to database size and turnover ratio\n> - run VACUUM ANALYZE more frequent from there on.\n> \n> \n> Jan\n> \n> > \n> > Any other ideas?\n> > \n> > Thanks,\n> > Assaf. \n> > \n> >> -----Original Message-----\n> >> From: Bruno Wolff III [mailto:[email protected]]\n> >> Sent: Monday, December 05, 2005 10:36 PM\n> >> To: Assaf Yaari\n> >> Cc: [email protected]\n> >> Subject: Re: Performance degradation after successive UPDATE's\n> >> \n> >> On Mon, Dec 05, 2005 at 19:05:01 +0200,\n> >> Assaf Yaari <[email protected]> wrote:\n> >> > Hi,\n> >> > \n> >> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n> >> > \n> >> > My application updates counters in DB. I left a test \n> over the night \n> >> > that increased counter of specific record. After night running \n> >> > (several hundreds of thousands updates), I found out \n> that the time \n> >> > spent on UPDATE increased to be more than 1.5 second (at\n> >> the beginning\n> >> > it was less than 10ms)! Issuing VACUUM ANALYZE and even\n> >> reboot didn't\n> >> > seemed to solve the problem.\n> >> \n> >> You need to be running vacuum more often to get rid of the deleted \n> >> rows (update is essentially insert + delete). Once you get \n> too many, \n> >> plain vacuum won't be able to clean them up without \n> raising the value \n> >> you use for FSM. By now the table is really bloated and \n> you probably \n> >> want to use vacuum full on it.\n> >> \n> > \n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's \n> datatypes do not\n> > match\n> \n> \n> --\n> #=============================================================\n> =========#\n> # It's easier to get forgiveness for being wrong than for \n> being right. #\n> # Let's break this rule - forgive me. 
\n> #\n> #================================================== \n> [email protected] #\n> \n", "msg_date": "Wed, 7 Dec 2005 14:14:31 +0200", "msg_from": "\"Assaf Yaari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance degradation after successive UPDATE's" }, { "msg_contents": "On Wed, Dec 07, 2005 at 14:14:31 +0200,\n Assaf Yaari <[email protected]> wrote:\n> Hi Jan,\n> \n> As I'm novice with PostgreSQL, can you elaborate the term FSM and\n> settings recommendations?\nhttp://developer.postgresql.org/docs/postgres/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM\n\n> BTW: I'm issuing VACUUM ANALYZE every 15 minutes (using cron) and also\n> changes the setting of fsync to false in postgresql.conf but still time\n> seems to be growing.\n\nYou generally don't want fsync set to false.\n\n> Also no other transactions are open.\n\nHave you given us explain analyse samples yet?\n\n> \n> Thanks,\n> Assaf.\n> \n> > -----Original Message-----\n> > From: Jan Wieck [mailto:[email protected]] \n> > Sent: Tuesday, December 06, 2005 2:35 PM\n> > To: Assaf Yaari\n> > Cc: Bruno Wolff III; [email protected]\n> > Subject: Re: [PERFORM] Performance degradation after \n> > successive UPDATE's\n> > \n> > On 12/6/2005 4:08 AM, Assaf Yaari wrote:\n> > > Thanks Bruno,\n> > > \n> > > Issuing VACUUM FULL seems not to have influence on the time.\n> > > I've added to my script VACUUM ANALYZE every 100 UPDATE's \n> > and run the \n> > > test again (on different record) and the time still increase.\n> > \n> > I think he meant\n> > \n> > - run VACUUM FULL once,\n> > - adjust FSM settings to database size and turnover ratio\n> > - run VACUUM ANALYZE more frequent from there on.\n> > \n> > \n> > Jan\n> > \n> > > \n> > > Any other ideas?\n> > > \n> > > Thanks,\n> > > Assaf. \n> > > \n> > >> -----Original Message-----\n> > >> From: Bruno Wolff III [mailto:[email protected]]\n> > >> Sent: Monday, December 05, 2005 10:36 PM\n> > >> To: Assaf Yaari\n> > >> Cc: [email protected]\n> > >> Subject: Re: Performance degradation after successive UPDATE's\n> > >> \n> > >> On Mon, Dec 05, 2005 at 19:05:01 +0200,\n> > >> Assaf Yaari <[email protected]> wrote:\n> > >> > Hi,\n> > >> > \n> > >> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n> > >> > \n> > >> > My application updates counters in DB. I left a test \n> > over the night \n> > >> > that increased counter of specific record. After night running \n> > >> > (several hundreds of thousands updates), I found out \n> > that the time \n> > >> > spent on UPDATE increased to be more than 1.5 second (at\n> > >> the beginning\n> > >> > it was less than 10ms)! Issuing VACUUM ANALYZE and even\n> > >> reboot didn't\n> > >> > seemed to solve the problem.\n> > >> \n> > >> You need to be running vacuum more often to get rid of the deleted \n> > >> rows (update is essentially insert + delete). Once you get \n> > too many, \n> > >> plain vacuum won't be able to clean them up without \n> > raising the value \n> > >> you use for FSM. 
By now the table is really bloated and \n> > you probably \n> > >> want to use vacuum full on it.\n> > >> \n> > > \n> > > ---------------------------(end of \n> > > broadcast)---------------------------\n> > > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > > choose an index scan if your joining column's \n> > datatypes do not\n> > > match\n> > \n> > \n> > --\n> > #=============================================================\n> > =========#\n> > # It's easier to get forgiveness for being wrong than for \n> > being right. #\n> > # Let's break this rule - forgive me. \n> > #\n> > #================================================== \n> > [email protected] #\n> > \n", "msg_date": "Wed, 7 Dec 2005 14:04:34 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degradation after successive UPDATE's" } ]
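Putting Jan's and Bruno's advice together, the recovery sequence for a table bloated by a long run of UPDATEs would look roughly like the sketch below. The table name counters and the FSM figures are placeholders to be sized against the real update rate, and fsync is left on because turning it off only risks the data without addressing the bloat:

VACUUM FULL ANALYZE counters;    -- one-off: reclaims the accumulated dead space, takes an exclusive lock

# postgresql.conf: size the free space map so it can remember the pages freed
# between vacuums, otherwise plain VACUUM cannot keep the table from growing
max_fsm_pages = 200000
max_fsm_relations = 1000
fsync = on

-- then, from cron, often enough to keep up with the update rate:
VACUUM ANALYZE counters;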
[ { "msg_contents": "Thanks for your inputs, Tom. I was going after high concurrent clients,\nbut should have read this carefully - \n\n-s scaling_factor\n this should be used with -i (initialize) option.\n number of tuples generated will be multiple of the\n scaling factor. For example, -s 100 will imply 10M\n (10,000,000) tuples in the accounts table.\n default is 1. NOTE: scaling factor should be at least\n as large as the largest number of clients you intend\n to test; else you'll mostly be measuring update\ncontention.\n\nI'll rerun the tests.\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, December 06, 2005 6:45 PM\nTo: Anjan Dave\nCc: Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring \n\n\"Anjan Dave\" <[email protected]> writes:\n> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1000\n> number of transactions per client: 30\n> number of transactions actually processed: 30000/30000\n> tps = 45.871234 (including connections establishing)\n> tps = 46.092629 (excluding connections establishing)\n\nI can hardly think of a worse way to run pgbench :-(. These numbers are\nabout meaningless, for two reasons:\n\n1. You don't want number of clients (-c) much higher than scaling factor\n(-s in the initialization step). The number of rows in the \"branches\"\ntable will equal -s, and since every transaction updates one\nrandomly-chosen \"branches\" row, you will be measuring mostly row-update\ncontention overhead if there's more concurrent transactions than there\nare rows. In the case -s 1, which is what you've got here, there is no\nactual concurrency at all --- all the transactions stack up on the\nsingle branches row.\n\n2. Running a small number of transactions per client means that\nstartup/shutdown transients overwhelm the steady-state data. You should\nprobably run at least a thousand transactions per client if you want\nrepeatable numbers.\n\nTry something like \"-s 10 -c 10 -t 3000\" to get numbers reflecting test\nconditions more like what the TPC council had in mind when they designed\nthis benchmark. I tend to repeat such a test 3 times to see if the\nnumbers are repeatable, and quote the middle TPS number as long as\nthey're not too far apart.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 7 Dec 2005 10:54:41 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High context switches occurring " } ]
[ { "msg_contents": "Hi All,\n\nI am working on an application that uses PostgreSQL. One of the \nfunctions of the application is to generate reports. In order to keep \nthe code in the application simple we create a view of the required data \nin the database and then simply execute a SELECT * FROM \nview_of_the_data; All of the manipulation and most of the time even the \nordering is handled in the view.\n\nMy question is how much if any performance degradation is there in \ncreating a view of a view?\n\nIOW if I have a view that ties together a couple of tables and \nmanipulates some data what will perform better; a view that filters, \nmanipulates, and orders the data from the first view or a view that \nperforms all the necessary calculations on the original tables?\n\n-- \nKind Regards,\nKeith\n", "msg_date": "Wed, 07 Dec 2005 21:47:28 -0500", "msg_from": "Keith Worthington <[email protected]>", "msg_from_op": true, "msg_subject": "view of view" }, { "msg_contents": "Keith Worthington wrote:\n> Hi All,\n> \n> I am working on an application that uses PostgreSQL. One of the \n> functions of the application is to generate reports. In order to keep \n> the code in the application simple we create a view of the required data \n> in the database and then simply execute a SELECT * FROM \n> view_of_the_data; All of the manipulation and most of the time even the \n> ordering is handled in the view.\n> \n> My question is how much if any performance degradation is there in \n> creating a view of a view?\n> \n> IOW if I have a view that ties together a couple of tables and \n> manipulates some data what will perform better; a view that filters, \n> manipulates, and orders the data from the first view or a view that \n> performs all the necessary calculations on the original tables?\n\nfrom personal experience, if the inner views contain outer joins performance\nisn't that great.\n\n-- \n\n - Rich Doughty\n", "msg_date": "Thu, 08 Dec 2005 13:46:45 +0000", "msg_from": "Rich Doughty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view of view" } ]
[ { "msg_contents": "Hi,\n\nI am breaking up huge texts (between 25K and 250K words) into single words using PgPlsql.\n\nFor this I am using a temp table in the first step :\n\n\tLOOP\t\n\n\t\tvLeft\t:= vRight;\n\t\tvTmp\t:= vLeft;\n\t\t\n\t\tLOOP\n\t\t\tvChr := SUBSTRING ( pText FROM vTmp FOR 1);\n\t\t\tvTmp := vTmp + 1;\n\t\t\tEXIT WHEN (vChr = ' ' OR vChr IS NULL OR vTmp = cBorder);\n\t\tEND LOOP;\n\t\t\t\n\t\tvRight\t:= vTmp;\n\t\t\n\t\tvLit\t:= SUBSTRING(pText FROM vLeft FOR (vRight - vLeft - 1));\n\n\t\tIF (LENGTH(vLit) > 0) THEN\n\t\t\tWRDCNT := WRDCNT +1;\n\t\t\tINSERT INTO DEX_TEMPDOC(TMP_DOO_ID\n\t\t\t\t\t\t,\tTMP_SEQ_ID\n\t\t\t\t\t\t,\tTMP_RAWTEXT)\n\t\t\tVALUES\t\t (pDOO_ID\n\t\t\t\t\t\t,\tI\n\t\t\t\t\t\t,\tvLIT\n\t\t\t\t\t\t );\t\n\t\tEND IF;\n\t\t\n\t\tI := I + 1;\n\t\tvTmp := LENGTH(vLIT);\n\n\t\t\n\t\tIF ((WRDCNT % 100) = 0) THEN\n\t\t\tPROGRESS = ROUND((100 * I) / DOCLEN,0); \n\t\t\tRAISE NOTICE '[PROC] % WORDS -- LAST LIT % (Len %) [% PCT / % of %]', WRDCNT, vLIT, vTMP, PROGRESS, I, DOCLEN;\n\t\tEND IF;\n\t\t\t\n\t\t\n\t\tEXIT WHEN vRight >= cBorder;\n\tEND LOOP;\n\n\nThe doc is preprocessed, between each word only a single blank can be.\n\nMy problem is : The first 25K words are quite quick, but the insert become slower and slower. starting with 1K words per sec I end up with 100 words in 10 sec (when I reach 80K-100K words)\n\nthe only (nonunique index) on tempdoc is on RAWTEXT.\n\nWhat can I do ? Should I drop the index ?\n\nHere is my config:\n\nshared_buffers = 2000 # min 16, at least max_connections*2, 8KB each\nwork_mem = 32768 # min 64, size in KB\nmaintenance_work_mem = 16384 # min 1024, size in KB\nmax_stack_depth = 8192 # min 100, size in KB\n\nenable_hashagg = true\nenable_hashjoin = true\nenable_indexscan = true\nenable_mergejoin = true\nenable_nestloop = true\nenable_seqscan = false\n\nThe machine is a XEON 3GHz, 1GB RAM, SATA RAID 1 Array running 8.0.4 i686 precompiled\n\n\nThanks !\n\n\n\nMit freundlichen Grüßen \nDipl.Inform.Marcus Noerder-Tuitje\nEntwickler\n\nsoftware technology AG\nKortumstraße 16 \n44787 Bochum\nTel: 0234 / 52 99 6 26\nFax: 0234 / 52 99 6 22\nE-Mail: [email protected] \nInternet: www.technology.de \n\n\n\n\n\n\n\nINSERTs becoming slower and slower\n\n\n\n\n\nHi,\n\nI am breaking up huge texts (between 25K and 250K words) into single words using PgPlsql.\n\nFor this I am using a temp table in the first step :\n\n        LOOP    \n\n                vLeft   := vRight;\n                vTmp    := vLeft;\n                \n\n                LOOP\n                        vChr := SUBSTRING ( pText FROM vTmp FOR 1);\n                        vTmp := vTmp + 1;\n                        EXIT WHEN (vChr = ' ' OR vChr IS NULL OR vTmp = cBorder);\n                END LOOP;\n                        \n\n                vRight  := vTmp;\n                \n\n                vLit    := SUBSTRING(pText FROM vLeft FOR (vRight - vLeft - 1));\n\n                IF (LENGTH(vLit) > 0) THEN\n                        WRDCNT := WRDCNT +1;\n                        INSERT INTO DEX_TEMPDOC(TMP_DOO_ID\n                                                ,       TMP_SEQ_ID\n                                                ,       TMP_RAWTEXT)\n                        VALUES             (pDOO_ID\n                                                ,       I\n                                                ,       vLIT\n                                                    );  \n                END IF;\n                \n\n                I := I + 1;\n                vTmp := 
LENGTH(vLIT);\n\n                \n\n                IF ((WRDCNT % 100) = 0) THEN\n                        PROGRESS = ROUND((100 * I) / DOCLEN,0); \n                        RAISE NOTICE '[PROC] % WORDS -- LAST LIT % (Len %) [% PCT / % of %]', WRDCNT, vLIT, vTMP, PROGRESS, I, DOCLEN;\n                END IF;\n                        \n\n                \n\n                EXIT WHEN vRight >= cBorder;\n        END LOOP;\n\n\nThe doc is preprocessed, between each word only a single blank can be.\n\nMy problem is : The first 25K words are quite quick, but  the insert become slower and slower. starting with 1K words per sec I end up with 100 words in 10 sec (when I reach 80K-100K words)\nthe only (nonunique index) on tempdoc is on RAWTEXT.\n\nWhat can I do ? Should I drop the index ?\n\nHere is my config:\n\nshared_buffers = 2000           # min 16, at least max_connections*2, 8KB each\nwork_mem = 32768                # min 64, size in KB\nmaintenance_work_mem = 16384    # min 1024, size in KB\nmax_stack_depth = 8192          # min 100, size in KB\n\nenable_hashagg = true\nenable_hashjoin = true\nenable_indexscan = true\nenable_mergejoin = true\nenable_nestloop = true\nenable_seqscan = false\n\nThe machine is a XEON 3GHz, 1GB RAM, SATA RAID 1 Array running 8.0.4 i686 precompiled\n\n\nThanks !\n\n\n\nMit freundlichen Grüßen\nDipl.Inform.Marcus Noerder-Tuitje\nEntwickler\n\nsoftware technology AG\nKortumstraße 16   \n44787 Bochum\nTel:  0234 / 52 99 6 26\nFax: 0234 / 52 99 6 22\nE-Mail:   [email protected]  \nInternet: www.technology.de", "msg_date": "Thu, 8 Dec 2005 09:36:43 +0100", "msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "INSERTs becoming slower and slower" }, { "msg_contents": "You might find it faster to install contrib/tsearch2 for text indexing \nsort of purposes...\n\nN�rder-Tuitje wrote:\n> \n> \n> Hi,\n> \n> I am breaking up huge texts (between 25K and 250K words) into single \n> words using PgPlsql.\n> \n> For this I am using a temp table in the first step :\n> \n> LOOP \n> \n> vLeft := vRight;\n> vTmp := vLeft;\n> \n> LOOP\n> vChr := SUBSTRING ( pText FROM vTmp FOR 1);\n> vTmp := vTmp + 1;\n> EXIT WHEN (vChr = ' ' OR vChr IS NULL OR vTmp = \n> cBorder);\n> END LOOP;\n> \n> vRight := vTmp;\n> \n> vLit := SUBSTRING(pText FROM vLeft FOR (vRight - \n> vLeft - 1));\n> \n> IF (LENGTH(vLit) > 0) THEN\n> WRDCNT := WRDCNT +1;\n> INSERT INTO DEX_TEMPDOC(TMP_DOO_ID\n> , TMP_SEQ_ID\n> , TMP_RAWTEXT)\n> VALUES (pDOO_ID\n> , I\n> , vLIT\n> ); \n> END IF;\n> \n> I := I + 1;\n> vTmp := LENGTH(vLIT);\n> \n> \n> IF ((WRDCNT % 100) = 0) THEN\n> PROGRESS = ROUND((100 * I) / DOCLEN,0);\n> RAISE NOTICE '[PROC] % WORDS -- LAST LIT % (Len \n> %) [% PCT / % of %]', WRDCNT, vLIT, vTMP, PROGRESS, I, DOCLEN;\n> \n> END IF;\n> \n> \n> EXIT WHEN vRight >= cBorder;\n> END LOOP;\n> \n> \n> The doc is preprocessed, between each word only a single blank can be.\n> \n> My problem is : The first 25K words are quite quick, but the insert \n> become slower and slower. starting with 1K words per sec I end up with \n> 100 words in 10 sec (when I reach 80K-100K words)\n> \n> the only (nonunique index) on tempdoc is on RAWTEXT.\n> \n> What can I do ? 
Should I drop the index ?\n> \n> Here is my config:\n> \n> shared_buffers = 2000 # min 16, at least max_connections*2, \n> 8KB each\n> work_mem = 32768 # min 64, size in KB\n> maintenance_work_mem = 16384 # min 1024, size in KB\n> max_stack_depth = 8192 # min 100, size in KB\n> \n> enable_hashagg = true\n> enable_hashjoin = true\n> enable_indexscan = true\n> enable_mergejoin = true\n> enable_nestloop = true\n> enable_seqscan = false\n> \n> The machine is a XEON 3GHz, 1GB RAM, SATA RAID 1 Array running 8.0.4 \n> i686 precompiled\n> \n> \n> Thanks !\n> \n> \n> \n> Mit freundlichen Gr��en\n> *Dipl.Inform.Marcus Noerder-Tuitje\n> **Entwickler\n> *\n> software technology AG\n> *Kortumstra�e 16 *\n> *44787 Bochum*\n> *Tel: 0234 / 52 99 6 26*\n> *Fax: 0234 / 52 99 6 22*\n> *E-Mail: [email protected] *\n> *Internet: www.technology.de *\n> \n> \n\n", "msg_date": "Thu, 08 Dec 2005 16:44:03 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERTs becoming slower and slower" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> You might find it faster to install contrib/tsearch2 for text indexing \n> sort of purposes...\n> \n> N�rder-Tuitje wrote:\n>> Here is my config:\n>>\n>> shared_buffers = 2000 # min 16, at least max_connections*2, \n>> 8KB each\n>> work_mem = 32768 # min 64, size in KB\n>> maintenance_work_mem = 16384 # min 1024, size in KB\n>> max_stack_depth = 8192 # min 100, size in KB\n>>\n>> enable_hashagg = true\n>> enable_hashjoin = true\n>> enable_indexscan = true\n>> enable_mergejoin = true\n>> enable_nestloop = true\n>> enable_seqscan = false\n>>\n>> The machine is a XEON 3GHz, 1GB RAM, SATA RAID 1 Array running 8.0.4 \n>> i686 precompiled\n\nAlso, shared_buffers (server-wide) are low, compared to a high work_mem \n(32M for each sort operation, but this also depends on your concurrency \nlevel).\n\nAnd disabling sequential scans in your postgresql.conf would probabily \nlead to sub-optimal plans in many queries.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Thu, 08 Dec 2005 11:59:49 +0100", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERTs becoming slower and slower" } ]
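Beyond the configuration points raised above (shared_buffers very low, work_mem high, enable_seqscan off), one thing worth testing for this kind of bulk load is taking the non-unique index on TMP_RAWTEXT out of the picture while the PL/pgSQL splitter runs and rebuilding it afterwards, since maintaining a text index row by row is a plausible cause of the gradual slowdown. The index name below is assumed; the thread only says which column it is on. Whether this or the suggested move to contrib/tsearch2 is the better route would have to be measured on the 8.0.4 installation described above:

DROP INDEX idx_tempdoc_rawtext;        -- assumed name of the index on TMP_RAWTEXT

-- ... run the word-splitting function / bulk INSERTs ...

CREATE INDEX idx_tempdoc_rawtext ON dex_tempdoc (tmp_rawtext);
ANALYZE dex_tempdoc;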
[ { "msg_contents": "I'm currently benchmarking several RDBMSs with respect to analytical query performance on medium-sized multidimensional data sets. The data set contains 30,000,000 fact rows evenly distributed in a multidimensional space of 9 hierarchical dimensions. Each dimension has 8000 members.\n\n \n\nThe test query selects about one half of the members from each dimension, and calculates fact sums grouped by 5 high-level members from each dimensional hierarchy. (There are actually some additional complications that makes the query end up listing 20 table aliases in the from-clause, 18 of which are aliases for 2 physical tables.)\n\n \n\nOn Oracle the query runs in less than 3 seconds. All steps have been taken to ensure that Oracle will apply star schema optimization to the query (e.g. having loads of single-column bitmap indexes). The query plan reveals that a bitmap merge takes place before fact lookup.\n\n \n\nThere's a lot of RAM available, and series of queries have been run in advance to make sure the required data resides in the cache. This is confirmed by a very high CPU utilization and virtually no I/O during the query execution.\n\n \n\nI have established similar conditions for the query in PostgreSQL, and it runs in about 30 seconds. Again the CPU utilization is high with no noticable I/O. The query plan is of course very different from that of Oracle, since PostgreSQL lacks the bitmap index merge operation. It narrows down the result one dimension at a time, using the single-column indexes provided. It is not an option for us to provide multi-column indexes tailored to the specific query, since we want full freedom as to which dimensions each query will use.\n\n \n\nAre these the results we should expect when comparing PostgreSQL to Oracle for such queries, or are there special optimization options for PostgreSQL that we may have overlooked? (I wouldn't be suprised if there are, since I spent at least 2 full days trying to trigger the star optimization magic in my Oracle installation.)\n\n\n\n\n\n\n\n\n\n\nI'm currently benchmarking several RDBMSs with respect to\nanalytical query performance on medium-sized multidimensional data sets. The\ndata set contains 30,000,000 fact rows evenly distributed in a multidimensional\nspace of 9 hierarchical dimensions. Each dimension has 8000 members.\n \nThe test query selects about one half of the members from\neach dimension, and calculates fact sums grouped by 5 high-level members from\neach dimensional hierarchy. (There are actually some additional complications\nthat makes the query end up listing 20 table aliases in the from-clause, 18 of\nwhich are aliases for 2 physical tables.)\n \nOn Oracle the query runs in less than 3 seconds. All steps\nhave been taken to ensure that Oracle will apply star schema optimization to\nthe query (e.g. having loads of single-column bitmap indexes). The query plan\nreveals that a bitmap merge takes place before fact lookup.\n \nThere's a lot of RAM available, and series of queries have\nbeen run in advance to make sure the required data resides in the cache. This\nis confirmed by a very high CPU utilization and virtually no I/O during the\nquery execution.\n \nI have established similar conditions for the query in\nPostgreSQL, and it runs in about 30 seconds. Again the CPU utilization is high\nwith no noticable I/O. The query plan is of course very different from that of\nOracle, since PostgreSQL lacks the bitmap index merge operation. 
It narrows\ndown the result one dimension at a time, using the single-column indexes\nprovided. It is not an option for us to provide multi-column indexes tailored\nto the specific query, since we want full freedom as to which dimensions each\nquery will use.\n \nAre these the results we should expect when comparing\nPostgreSQL to Oracle for such queries, or are there special optimization options\nfor PostgreSQL that we may have overlooked? (I wouldn't be suprised if there\nare, since I spent at least 2 full days trying to trigger the star optimization\nmagic in my Oracle installation.)", "msg_date": "Thu, 8 Dec 2005 12:26:55 +0100", "msg_from": "=?iso-8859-1?Q?P=E5l_Stenslet?= <[email protected]>", "msg_from_op": true, "msg_subject": "Should Oracle outperform PostgreSQL on a complex multidimensional\n\tquery?" }, { "msg_contents": "=?iso-8859-1?Q?P=E5l_Stenslet?= <[email protected]> writes:\n> I have established similar conditions for the query in PostgreSQL, and =\n> it runs in about 30 seconds. Again the CPU utilization is high with no =\n> noticable I/O. The query plan is of course very different from that of =\n> Oracle, since PostgreSQL lacks the bitmap index merge operation.\n\nPerhaps you should be trying this on PG 8.1? In any case, without\nspecific details of your schema or a look at EXPLAIN ANALYZE results,\nit's unlikely that anyone is going to have any useful comments for you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Dec 2005 15:38:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex multidimensional\n\tquery?" }, { "msg_contents": "On Thu, 2005-12-08 at 12:26 +0100, Pål Stenslet wrote:\n> I'm currently benchmarking several RDBMSs with respect to analytical\n> query performance on medium-sized multidimensional data sets. The data\n> set contains 30,000,000 fact rows evenly distributed in a\n> multidimensional space of 9 hierarchical dimensions. Each dimension\n> has 8000 members.\n\n> I have established similar conditions for the query in PostgreSQL, and\n> it runs in about 30 seconds. Again the CPU utilization is high with no\n> noticable I/O. The query plan is of course very different from that of\n> Oracle, since PostgreSQL lacks the bitmap index merge operation. It\n> narrows down the result one dimension at a time, using the\n> single-column indexes provided. It is not an option for us to provide\n> multi-column indexes tailored to the specific query, since we want\n> full freedom as to which dimensions each query will use.\n\n> Are these the results we should expect when comparing PostgreSQL to\n> Oracle for such queries, or are there special optimization options for\n> PostgreSQL that we may have overlooked? (I wouldn't be suprised if\n> there are, since I spent at least 2 full days trying to trigger the\n> star optimization magic in my Oracle installation.)\n\nYes, I'd expect something like this right now in 8.1; the numbers stack\nup to PostgreSQL doing equivalent join speeds, but w/o star join.\n\nYou've confused the issue here since:\n- Oracle performs star joins using a bit map index transform. It is the\nstar join that is the important bit here, not the just the bitmap part.\n- PostgreSQL does actually provide bitmap index merge, but not star join\n(YET!)\n\n[I've looked into this, but there seem to be multiple patent claims\ncovering various aspects of this technique, yet at least other 3 vendors\nmanage to achieve this. 
So far I've not dug too deeply, but I understand\nthe optimizations we'd need to perform in PostgreSQL to do this.]\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 13 Dec 2005 23:28:49 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "\nHow are star joins different from what we do now?\n\n---------------------------------------------------------------------------\n\nSimon Riggs wrote:\n> On Thu, 2005-12-08 at 12:26 +0100, P?l Stenslet wrote:\n> > I'm currently benchmarking several RDBMSs with respect to analytical\n> > query performance on medium-sized multidimensional data sets. The data\n> > set contains 30,000,000 fact rows evenly distributed in a\n> > multidimensional space of 9 hierarchical dimensions. Each dimension\n> > has 8000 members.\n> \n> > I have established similar conditions for the query in PostgreSQL, and\n> > it runs in about 30 seconds. Again the CPU utilization is high with no\n> > noticable I/O. The query plan is of course very different from that of\n> > Oracle, since PostgreSQL lacks the bitmap index merge operation. It\n> > narrows down the result one dimension at a time, using the\n> > single-column indexes provided. It is not an option for us to provide\n> > multi-column indexes tailored to the specific query, since we want\n> > full freedom as to which dimensions each query will use.\n> \n> > Are these the results we should expect when comparing PostgreSQL to\n> > Oracle for such queries, or are there special optimization options for\n> > PostgreSQL that we may have overlooked? (I wouldn't be suprised if\n> > there are, since I spent at least 2 full days trying to trigger the\n> > star optimization magic in my Oracle installation.)\n> \n> Yes, I'd expect something like this right now in 8.1; the numbers stack\n> up to PostgreSQL doing equivalent join speeds, but w/o star join.\n> \n> You've confused the issue here since:\n> - Oracle performs star joins using a bit map index transform. It is the\n> star join that is the important bit here, not the just the bitmap part.\n> - PostgreSQL does actually provide bitmap index merge, but not star join\n> (YET!)\n> \n> [I've looked into this, but there seem to be multiple patent claims\n> covering various aspects of this technique, yet at least other 3 vendors\n> manage to achieve this. So far I've not dug too deeply, but I understand\n> the optimizations we'd need to perform in PostgreSQL to do this.]\n> \n> Best Regards, Simon Riggs\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 16 Dec 2005 23:28:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Bruce Momjian wrote:\n> How are star joins different from what we do now?\n> \n> ---------------------------------------------------------------------------\n> \n\nRecall that a \"star\" query with n tables means a query where there are \n(n - 1) supposedly small tables (dimensions) and 1 large table (fact) - \nwhich has foreign keys to each dimension.\n\nAs I understand it, the classic \"tar join\" is planned like this:\n\n1) The result of the restriction clauses on each of the (small) \ndimension tables is computed.\n2) The cartesian product of all the results of 1) is formed.\n3) The fact (big) table is joined to the pseudo relation formed by 2).\n\n From what I have seen most vendor implementations do not (always) \nperform the full cartesion product of the dimensions, but do some of \nthem, join to the fact, then join to the remaining dimensions afterwards.\n\nThere is another join strategy called the \"star transformation\" where \nsome of the dimension joins get rewritten as subqueries, then the above \nmethod is used again! This tends to be useful when the cartesion \nproducts would be stupidly large (e.g. \"sparse\" facts, or few \nrestriction clauses)\n\nregards\n\nMark\n\nP.s : Luke or Simon might have a better definition... but thought I'd \nchuck in my 2c... :-)\n\n", "msg_date": "Sat, 17 Dec 2005 18:21:50 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n> How are star joins different from what we do now?\n\nVarious ways of doing them, but all use plans that you wouldn't have\ncome up with via normal join planning.\n\nMethods:\n1. join all N small tables together in a cartesian product, then join to\nmain Large table once (rather than N times)\n2. transform joins into subselects, then return subselect rows via an\nindex bitmap. Joins are performed via a bitmap addition process.\n\nYou can fake (1) yourself with a temporary table, and the basics for (2)\nare now in place also.\n\nThe characteristics of these joins make them suitable for large Data\nWarehouses with Fact-Dimension style designs.\n\nMany RDBMS have this, but we need to be careful of patent claims. I'm\nsure there's a way through that, but I'm not looking for it yet. Anybody\nelse wishing to assist with a detailed analysis would be much\nappreciated.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sat, 17 Dec 2005 09:15:53 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "\nOK, so while our bitmap scan allows multiple indexes to be joined to get\nto heap rows, a star joins allows multiple dimensions _tables_ to be\njoined to index into a larger main fact table --- interesting.\n\nAdded to TODO:\n\n* Allow star join optimizations\n\n While our bitmap scan allows multiple indexes to be joined to get\n to heap rows, a star joins allows multiple dimension _tables_ to\n be joined to index into a larger main fact table. 
The join is\n usually performed by either creating a cartesian product of all\n the dimmension tables and doing a single join on that product or\n using subselects to create bitmaps of each dimmension table match\n and merge the bitmaps to perform the join on the fact table.\n\n\n---------------------------------------------------------------------------\n\nSimon Riggs wrote:\n> On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n> > How are star joins different from what we do now?\n> \n> Various ways of doing them, but all use plans that you wouldn't have\n> come up with via normal join planning.\n> \n> Methods:\n> 1. join all N small tables together in a cartesian product, then join to\n> main Large table once (rather than N times)\n> 2. transform joins into subselects, then return subselect rows via an\n> index bitmap. Joins are performed via a bitmap addition process.\n> \n> You can fake (1) yourself with a temporary table, and the basics for (2)\n> are now in place also.\n> \n> The characteristics of these joins make them suitable for large Data\n> Warehouses with Fact-Dimension style designs.\n> \n> Many RDBMS have this, but we need to be careful of patent claims. I'm\n> sure there's a way through that, but I'm not looking for it yet. Anybody\n> else wishing to assist with a detailed analysis would be much\n> appreciated.\n> \n> Best Regards, Simon Riggs\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 17 Dec 2005 11:43:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n>> How are star joins different from what we do now?\n\n> Methods:\n> 1. join all N small tables together in a cartesian product, then join to\n> main Large table once (rather than N times)\n\nOf course, the reason the current planner does not think of this is that\nit does not consider clauseless joins unless there is no alternative.\n\nHowever, I submit that it wouldn't pick such a plan anyway, and should\nnot, because the idea is utterly stupid. The plan you currently get for\nthis sort of scenario is typically a nest of hash joins:\n\n QUERY PLAN \n------------------------------------------------------------------------\n Hash Join (cost=2.25..4652.25 rows=102400 width=16)\n Hash Cond: (\"outer\".f1 = \"inner\".f1)\n -> Hash Join (cost=1.12..3115.12 rows=102400 width=12)\n Hash Cond: (\"outer\".f2 = \"inner\".f1)\n -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n -> Hash (cost=1.10..1.10 rows=10 width=4)\n -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n -> Hash (cost=1.10..1.10 rows=10 width=4)\n -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n(9 rows)\n\nThis involves only one scan of the fact table. As each row is pulled up\nthrough the nest of hash joins, we hash one dimension key and join to\none small table at each level. This is at worst the same amount of work\nas hashing all the keys at once and probing a single cartesian-product\nhashtable, probably less work (fewer wasted key-comparisons). And\ndefinitely less memory. 
You do have to keep your eye on the ball that\nyou don't waste a lot of overhead propagating the row up through\nmultiple join levels, but we got rid of most of the problem there in\n8.1 via the introduction of \"virtual tuple slots\". If this isn't fast\nenough yet, it'd make more sense to invest effort in further cutting the\nexecutor's join overhead (which of course benefits *all* plan types)\nthan in trying to make the planner choose a star join.\n\n> 2. transform joins into subselects, then return subselect rows via an\n> index bitmap. Joins are performed via a bitmap addition process.\n\nThis one might be interesting but it's not clear what you are talking\nabout. \"Bitmap addition\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Dec 2005 13:13:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO:\n> * Allow star join optimizations\n\nSee my response to Simon for reasons why this doesn't seem like a\nparticularly good TODO item.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Dec 2005 13:35:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex " }, { "msg_contents": "I wrote:\n> However, I submit that it wouldn't pick such a plan anyway, and should\n> not, because the idea is utterly stupid.\n\nBTW, some experimentation suggests that in fact a star join is already\nslower than the \"regular\" plan in 8.1. You can force a star-join plan\nto be generated like this:\n\nregression=# set join_collapse_limit TO 1;\nSET\nregression=# explain select * from fact,d1 cross join d2 where fact.f1=d1.f1 and fact.f2=d2.f1;\n QUERY PLAN \n---------------------------------------------------------------------------\n Hash Join (cost=4.71..8238.71 rows=102400 width=16)\n Hash Cond: ((\"outer\".f1 = \"inner\".f1) AND (\"outer\".f2 = \"inner\".f1))\n -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n -> Hash (cost=4.21..4.21 rows=100 width=8)\n -> Nested Loop (cost=1.11..4.21 rows=100 width=8)\n -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n -> Materialize (cost=1.11..1.21 rows=10 width=4)\n -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n(8 rows)\n\nand at least in the one test case I tried, this runs slower than the\nnested-hash plan. EXPLAIN ANALYZE misleadingly makes it look faster,\nbut that's just because of the excessive per-plan-node ANALYZE\noverhead. Try doing something like\n\n\t\\timing\n\tselect count(*) from fact, ...\n\nto get realistic numbers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Dec 2005 13:47:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Added to TODO:\n> > * Allow star join optimizations\n> \n> See my response to Simon for reasons why this doesn't seem like a\n> particularly good TODO item.\n\nYes, TODO removed. I thought we were waiting for bitmap joins before\ntrying star joins. I did not realize they might never be a win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 17 Dec 2005 14:03:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Tom,\n\nOn 12/17/05 10:47 AM, \"Tom Lane\" <[email protected]> wrote:\n\n> BTW, some experimentation suggests that in fact a star join is already\n> slower than the \"regular\" plan in 8.1. You can force a star-join plan\n> to be generated like this:\n\nCool!\n\nWe've got Paal's test case in the queue to run, it's taking us some time to\nget to it, possibly by next week we should be able to run some of these\ncases:\n1) 8.1.1 btree with bitmap scan\n2) 8.1.1 on-disk bitmap with direct AND operations\n3) (2) with forced star transformation (materialize)\n\nWe'll also be trying the same things with the CVS tip of Bizgres MPP,\nprobably over X-mas.\n\nWe should be able to handily beat Oracle's 3 second number.\n\n- Luke \n\n\n", "msg_date": "Sat, 17 Dec 2005 11:05:27 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> \n>>On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n>>\n>>>How are star joins different from what we do now?\n> \n> \n>>Methods:\n>>1. join all N small tables together in a cartesian product, then join to\n>>main Large table once (rather than N times)\n> \n> \n> Of course, the reason the current planner does not think of this is that\n> it does not consider clauseless joins unless there is no alternative.\n> \n> However, I submit that it wouldn't pick such a plan anyway, and should\n> not, because the idea is utterly stupid. The plan you currently get for\n> this sort of scenario is typically a nest of hash joins:\n> \n> QUERY PLAN \n> ------------------------------------------------------------------------\n> Hash Join (cost=2.25..4652.25 rows=102400 width=16)\n> Hash Cond: (\"outer\".f1 = \"inner\".f1)\n> -> Hash Join (cost=1.12..3115.12 rows=102400 width=12)\n> Hash Cond: (\"outer\".f2 = \"inner\".f1)\n> -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n> (9 rows)\n> \n> This involves only one scan of the fact table. As each row is pulled up\n> through the nest of hash joins, we hash one dimension key and join to\n> one small table at each level. This is at worst the same amount of work\n> as hashing all the keys at once and probing a single cartesian-product\n> hashtable, probably less work (fewer wasted key-comparisons). And\n> definitely less memory. You do have to keep your eye on the ball that\n> you don't waste a lot of overhead propagating the row up through\n> multiple join levels, but we got rid of most of the problem there in\n> 8.1 via the introduction of \"virtual tuple slots\". If this isn't fast\n> enough yet, it'd make more sense to invest effort in further cutting the\n> executor's join overhead (which of course benefits *all* plan types)\n> than in trying to make the planner choose a star join.\n> \n> \n>>2. transform joins into subselects, then return subselect rows via an\n>>index bitmap. 
Joins are performed via a bitmap addition process.\n> \n> \n> This one might be interesting but it's not clear what you are talking\n> about. \"Bitmap addition\"?\n\nYeah - the quoted method of \"make a cartesian product of the dimensions \nand then join to the fact all at once\" is not actually used (as written) \nin many implementations - probably for the reasons you are pointing out. \nI found these two papers whilst browsing:\n\n\nhttp://www.cs.brown.edu/courses/cs227/Papers/Indexing/O'NeilGraefe.pdf\nhttp://www.dama.upc.edu/downloads/jaguilar-2005-4.pdf\n\n\nThey seem to be describing a more subtle method making use of join \nindexes and bitmapped indexes.\n\nIf I understand it correctly, the idea is to successively build up a \nlist (hash / bitmap) of fact RIDS that will satisfy the query, and when \ncomplete actually perform the join and construct tuples. The goal being \n similar in intent to the star join method (i.e. access the fact table \nas little and as \"late\" as possible), but avoiding the cost of actually \nconstructing the dimension cartesian product.\n\ncheers\n\nMark\n", "msg_date": "Sun, 18 Dec 2005 15:02:48 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Tom Lane wrote:\n\n>>2. transform joins into subselects, then return subselect rows via an\n>>index bitmap. Joins are performed via a bitmap addition process.\n\nLooks like 8.1 pretty much does this right now:\n\nFirst the basic star:\n\nEXPLAIN ANALYZE\nSELECT\n d0.dmth,\n d1.dat,\n count(f.fval )\nFROM\n dim0 AS d0,\n dim1 AS d1,\n fact0 AS f\nWHERE d0.d0key = f.d0key\nAND d1.d1key = f.d1key\nAND d0.dyr BETWEEN 2010 AND 2015\nAND d1.dattyp BETWEEN '10th measure type' AND '14th measure type'\nGROUP BY\n d0.dmth,\n d1.dat\n;\n \nQUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=334842.41..334846.53 rows=329 width=37) (actual \ntime=144317.960..144318.814 rows=120 loops=1)\n -> Hash Join (cost=145.72..334636.91 rows=27400 width=37) (actual \ntime=1586.363..142831.025 rows=201600 loops=1)\n Hash Cond: (\"outer\".d0key = \"inner\".d0key)\n -> Hash Join (cost=89.72..333279.41 rows=137001 width=37) \n(actual time=1467.322..135585.317 rows=1000000 loops=1)\n Hash Cond: (\"outer\".d1key = \"inner\".d1key)\n -> Seq Scan on fact0 f (cost=0.00..281819.45 \nrows=10000045 width=12) (actual time=120.881..70364.473 rows=10000000 \nloops=1)\n -> Hash (cost=89.38..89.38 rows=137 width=33) (actual \ntime=24.822..24.822 rows=660 loops=1)\n -> Index Scan using dim1_dattyp on dim1 d1 \n(cost=0.00..89.38 rows=137 width=33) (actual time=0.502..19.374 rows=660 \nloops=1)\n Index Cond: (((dattyp)::text >= '10th \nmeasure type'::text) AND ((dattyp)::text <= '14th measure type'::text))\n -> Hash (cost=51.00..51.00 rows=2000 width=8) (actual \ntime=31.620..31.620 rows=2016 loops=1)\n -> Index Scan using dim0_dyr on dim0 d0 \n(cost=0.00..51.00 rows=2000 width=8) (actual time=0.379..17.377 \nrows=2016 loops=1)\n Index Cond: ((dyr >= 2010) AND (dyr <= 2015))\n Total runtime: 144320.588 ms\n(13 rows)\n\n\nNow apply the star transformation:\n\nEXPLAIN ANALYZE\nSELECT\n d0.dmth,\n d1.dat,\n count(f.fval )\nFROM\n dim0 AS d0,\n dim1 AS d1,\n fact0 AS f\nWHERE d0.d0key = f.d0key\nAND d1.d1key = f.d1key\nAND d0.dyr BETWEEN 2010 AND 2015\nAND d1.dattyp BETWEEN '10th measure type' AND '14th measure 
type'\nAND f.d0key IN (SELECT cd0.d0key FROM dim0 cd0\n WHERE cd0.dyr BETWEEN 2010 AND 2015)\nAND f.d1key IN (SELECT cd1.d1key FROM dim1 cd1\n WHERE cd1.dattyp BETWEEN '10th measure type'\n AND '14th measure type')\nGROUP BY\n d0.dmth,\n d1.dat\n;\n \n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=129230.89..129231.83 rows=75 width=37) (actual \ntime=39798.192..39799.015 rows=120 loops=1)\n -> Nested Loop IN Join (cost=149.44..129230.33 rows=75 width=37) \n(actual time=269.919..38125.520 rows=201600 loops=1)\n -> Hash Join (cost=147.43..128171.03 rows=375 width=45) \n(actual time=269.516..27342.866 rows=201600 loops=1)\n Hash Cond: (\"outer\".d0key = \"inner\".d0key)\n -> Nested Loop (cost=91.43..128096.03 rows=2000 \nwidth=37) (actual time=152.084..19869.365 rows=1000000 loops=1)\n -> Hash Join (cost=91.43..181.52 rows=2 \nwidth=37) (actual time=29.931..46.339 rows=660 loops=1)\n Hash Cond: (\"outer\".d1key = \"inner\".d1key)\n -> Index Scan using dim1_dattyp on dim1 d1 \n (cost=0.00..89.38 rows=137 width=33) (actual time=0.516..7.683 \nrows=660 loops=1)\n Index Cond: (((dattyp)::text >= '10th \nmeasure type'::text) AND ((dattyp)::text <= '14th measure type'::text))\n -> Hash (cost=91.09..91.09 rows=137 \nwidth=4) (actual time=29.238..29.238 rows=660 loops=1)\n -> HashAggregate (cost=89.72..91.09 \nrows=137 width=4) (actual time=20.940..24.900 rows=660 loops=1)\n -> Index Scan using dim1_dattyp \non dim1 cd1 (cost=0.00..89.38 rows=137 width=4) (actual \ntime=0.042..14.841 rows=660 loops=1)\n Index Cond: \n(((dattyp)::text >= '10th measure type'::text) AND ((dattyp)::text <= \n'14th measure type'::text))\n -> Index Scan using fact0_d1key on fact0 f \n(cost=0.00..62707.26 rows=100000 width=12) (actual time=0.205..12.691 \nrows=1515 loops=660)\n Index Cond: (\"outer\".d1key = f.d1key)\n -> Hash (cost=51.00..51.00 rows=2000 width=8) (actual \ntime=31.264..31.264 rows=2016 loops=1)\n -> Index Scan using dim0_dyr on dim0 d0 \n(cost=0.00..51.00 rows=2000 width=8) (actual time=0.339..16.885 \nrows=2016 loops=1)\n Index Cond: ((dyr >= 2010) AND (dyr <= 2015))\n -> Bitmap Heap Scan on dim0 cd0 (cost=2.00..2.81 rows=1 \nwidth=4) (actual time=0.031..0.031 rows=1 loops=201600)\n Recheck Cond: (\"outer\".d0key = cd0.d0key)\n Filter: ((dyr >= 2010) AND (dyr <= 2015))\n -> Bitmap Index Scan on dim0_d0key (cost=0.00..2.00 \nrows=1 width=0) (actual time=0.015..0.015 rows=1 loops=201600)\n Index Cond: (\"outer\".d0key = cd0.d0key)\n Total runtime: 39800.294 ms\n(24 rows)\n\n\nThe real run times are more like 24s and 9s, but you get the idea.\n\nCheers\n\nMark\n", "msg_date": "Sun, 18 Dec 2005 17:07:41 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Simon Riggs <[email protected]> writes:\n> > On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n> >> How are star joins different from what we do now?\n> \n> > Methods:\n> > 1. 
join all N small tables together in a cartesian product, then join to\n> > main Large table once (rather than N times)\n> \n> Of course, the reason the current planner does not think of this is that\n> it does not consider clauseless joins unless there is no alternative.\n> \n> However, I submit that it wouldn't pick such a plan anyway, and should\n> not, because the idea is utterly stupid. The plan you currently get for\n> this sort of scenario is typically a nest of hash joins:\n> \n> QUERY PLAN \n> ------------------------------------------------------------------------\n> Hash Join (cost=2.25..4652.25 rows=102400 width=16)\n> Hash Cond: (\"outer\".f1 = \"inner\".f1)\n> -> Hash Join (cost=1.12..3115.12 rows=102400 width=12)\n> Hash Cond: (\"outer\".f2 = \"inner\".f1)\n> -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n> (9 rows)\n\nI realize DSS systems often expect to run queries using sequential scans but\nperhaps the point of this particular plan is to exploit indexes? (I think\nparticularly bitmap indexes but ...)\n\nSo in this case, you would expect an index scan of d1 to pull out just the\nrecords that d1 says should be included, and an index scan of d2 to pull out\njust the records that d2 says should be included, then finally a nested loop\nindex lookup of f1 for the primary keys that show up in both the d1 scan and\nthe d2 scan.\n\n\nSo in the following it would be nice if the index scan on f didn't have to\nappear until *after* all the hashes were checked for the dimenions, not after\nonly one of them. This would be even nicer if instead of hashes a bitmap data\nstructure could be built and bitmap operations used to do the joins, since no\nother columns from these dimension tables need to be preserved to be included\nin the select list.\n\nIt would be even better if there were an on-disk representation of these\nbitmap data structures but I don't see how to do that with MVCC at all.\n\n\nslo=> explain select * from fact as f where fact_id in (select fact_id from d d1 where dim_id = 4) and fact_id in (select fact_id from d d2 where dim_id = 29) and fact_id in (select fact_id from d d3 where dim_id = 57);\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Hash IN Join (cost=15.77..21.86 rows=1 width=110)\n Hash Cond: (\"outer\".fact_id = \"inner\".fact_id)\n -> Hash IN Join (cost=10.51..16.59 rows=1 width=118)\n Hash Cond: (\"outer\".fact_id = \"inner\".fact_id)\n -> Nested Loop (cost=5.26..11.31 rows=2 width=114)\n -> HashAggregate (cost=5.26..5.26 rows=2 width=4)\n -> Index Scan using di on d d2 (cost=0.00..5.25 rows=3 width=4)\n Index Cond: (dim_id = 29)\n -> Index Scan using fact_pkey on fact f (cost=0.00..3.01 rows=1 width=110)\n Index Cond: (f.fact_id = \"outer\".fact_id)\n -> Hash (cost=5.25..5.25 rows=3 width=4)\n -> Index Scan using di on d d1 (cost=0.00..5.25 rows=3 width=4)\n Index Cond: (dim_id = 4)\n -> Hash (cost=5.25..5.25 rows=3 width=4)\n -> Index Scan using di on d d3 (cost=0.00..5.25 rows=3 width=4)\n Index Cond: (dim_id = 57)\n(16 rows)\n\n\n\n> > 2. transform joins into subselects, then return subselect rows via an\n> > index bitmap. Joins are performed via a bitmap addition process.\n> \n> This one might be interesting but it's not clear what you are talking\n> about. 
\"Bitmap addition\"?\n\nWell \"transform joins into subselects\" is a red herring. Joins and subselects\nare two ways of spelling special cases of the same thing and internally they\nought to go through the same codepaths. They don't in Postgres but judging by\nthe plans it produces I believe they do at least in a lot of cases in Oracle.\n\nThat's sort of the whole point of the phrase \"star join\". What the user really\nwants is a single table joined to a bunch of small tables. There's no way to\nwrite that in SQL due to the limitations of the language but a bunch of\nsubqueries expresses precisely the same concept (albeit with another set of\nlanguage limitations which luckily don't impact this particular application).\n\n-- \ngreg\n\n", "msg_date": "18 Dec 2005 01:50:33 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Sun, 2005-12-18 at 15:02 +1300, Mark Kirkwood wrote:\n\n> Yeah - the quoted method of \"make a cartesian product of the dimensions \n> and then join to the fact all at once\" is not actually used (as written) \n> in many implementations \n\nBut it is used in some, which is why I mentioned it.\n\nI gave two implementations, that is just (1)\n\n> - probably for the reasons you are pointing out. \n> I found these two papers whilst browsing:\n> \n> \n> http://www.cs.brown.edu/courses/cs227/Papers/Indexing/O'NeilGraefe.pdf\n> http://www.dama.upc.edu/downloads/jaguilar-2005-4.pdf\n> \n> \n> They seem to be describing a more subtle method making use of join \n> indexes and bitmapped indexes.\n\nWhich is the option (2) I described.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 18 Dec 2005 10:27:50 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Sat, 2005-12-17 at 13:13 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n> >> How are star joins different from what we do now?\n> \n> > Methods:\n> > 1. join all N small tables together in a cartesian product, then join to\n> > main Large table once (rather than N times)\n> \n> Of course, the reason the current planner does not think of this is that\n> it does not consider clauseless joins unless there is no alternative.\n\nUnderstood\n\n> The plan you currently get for\n> this sort of scenario is typically a nest of hash joins:\n> \n> QUERY PLAN \n> ------------------------------------------------------------------------\n> Hash Join (cost=2.25..4652.25 rows=102400 width=16)\n> Hash Cond: (\"outer\".f1 = \"inner\".f1)\n> -> Hash Join (cost=1.12..3115.12 rows=102400 width=12)\n> Hash Cond: (\"outer\".f2 = \"inner\".f1)\n> -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n> -> Hash (cost=1.10..1.10 rows=10 width=4)\n> -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n> (9 rows)\n> \n> This involves only one scan of the fact table. As each row is pulled up\n> through the nest of hash joins, we hash one dimension key and join to\n> one small table at each level. \n\nUnderstood\n\n> This is at worst the same amount of work\n> as hashing all the keys at once and probing a single cartesian-product\n> hashtable, probably less work (fewer wasted key-comparisons). And\n> definitely less memory. 
You do have to keep your eye on the ball that\n> you don't waste a lot of overhead propagating the row up through\n> multiple join levels, but we got rid of most of the problem there in\n> 8.1 via the introduction of \"virtual tuple slots\". If this isn't fast\n> enough yet, it'd make more sense to invest effort in further cutting the\n> executor's join overhead (which of course benefits *all* plan types)\n> than in trying to make the planner choose a star join.\n\nThat join type is used when an index-organised table is available, so\nthat a SeqScan of the larger table can be avoided.\n\nI'd say the plan would make sense if the columns of the cartesian\nproduct match a multi-column index on the larger table that would not\never be used unless sufficient columns are restricted in each lookup.\nThat way you are able to avoid the SeqScan that occurs for the multiple\nnested Hash Join case. (Clearly, normal selectivity rules apply on the\nuse of the index in this way).\n\nSo I think that plan type still can be effective in some circumstances.\nMind you: building an N-way index on a large table isn't such a good\nidea, unless you can partition the tables and still use a join. Which is\nwhy I've not centred on this case as being important before now.\n\nMy understanding: Teradata and DB2 use this.\n\nThis may be covered by patents.\n\n> > 2. transform joins into subselects, then return subselect rows via an\n> > index bitmap. Joins are performed via a bitmap addition process.\n> \n> This one might be interesting but it's not clear what you are talking\n> about. \"Bitmap addition\"?\n\nRef: \"Re: [HACKERS] slow IN() clause for many cases\"\n\nRequired Transforms: join -> IN (subselect) -> = ANY(ARRAY(subselect))\nending with the ability to use an bitmap index scan\n(which clearly requires a run-time, not a plan-time evaluation - though\nthe distinction is minor if you go straight from plan->execute as is the\ncase with most Data Warehouse queries).\n\nIf you do this for all joins, you can then solve the problem with a\nBitmap And step, which is what I meant by \"addition\".\n\nIf you have need columns in the result set from the smaller tables you\ncan get them by joining the result set back to the smaller tables again.\n\nMy understanding: Oracle uses this.\n\nThis may be covered by patents.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 18 Dec 2005 13:48:52 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Sun, 2005-12-18 at 17:07 +1300, Mark Kirkwood wrote:\n> Tom Lane wrote:\n> \n> >>2. transform joins into subselects, then return subselect rows via an\n> >>index bitmap. Joins are performed via a bitmap addition process.\n> \n> Looks like 8.1 pretty much does this right now:\n\nGood analysis.\n\n8.1 doesn't do:\n- the transforms sufficiently well (you just performed them manually)\n- doesn't AND together multiple bitmaps to assist with N-way joins\n\nThose aren't criticisms, just observations. Pal's original example was a\n9-dimension join, so I think PostgreSQL does very well on making that\nrun in 30 seconds. That's a complex example and I think upholds just how\ngood things are right now. \n\nAnyway, back to the starting point: IMHO there is an additional\noptimisation that can be performed to somehow speed up Single large\ntable-many small table joins. 
And we have some clues as to how we might\ndo that.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 18 Dec 2005 17:53:27 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Simon Riggs wrote:\n> On Sun, 2005-12-18 at 17:07 +1300, Mark Kirkwood wrote:\n> \n>>Tom Lane wrote:\n>>\n>>\n>>>>2. transform joins into subselects, then return subselect rows via an\n>>>>index bitmap. Joins are performed via a bitmap addition process.\n>>\n>>Looks like 8.1 pretty much does this right now:\n> \n> \n> Good analysis.\n> \n> 8.1 doesn't do:\n> - the transforms sufficiently well (you just performed them manually)\n\nAbsolutely - I was intending to note that very point, but it got lost \nsomewhere between brain and fingers :-)\n\n> - doesn't AND together multiple bitmaps to assist with N-way joins\n> \n\nAh yes - I had overlooked that, good point!\n\nCheers\n\nMark\n\n\n", "msg_date": "Mon, 19 Dec 2005 11:04:37 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Simon Riggs wrote:\n> On Sun, 2005-12-18 at 15:02 +1300, Mark Kirkwood wrote:\n> \n> \n>>Yeah - the quoted method of \"make a cartesian product of the dimensions \n>>and then join to the fact all at once\" is not actually used (as written) \n>>in many implementations \n> \n> \n> But it is used in some, which is why I mentioned it.\n> \n> I gave two implementations, that is just (1)\n> \n> \n\nSorry Simon, didn't mean to imply you shouldn't have mentioned it - was \nmerely opining about its effectiveness....\n\n>>- probably for the reasons you are pointing out. \n>>I found these two papers whilst browsing:\n>>\n>>\n>>http://www.cs.brown.edu/courses/cs227/Papers/Indexing/O'NeilGraefe.pdf\n>>http://www.dama.upc.edu/downloads/jaguilar-2005-4.pdf\n>>\n>>\n>>They seem to be describing a more subtle method making use of join \n>>indexes and bitmapped indexes.\n> \n> \n> Which is the option (2) I described.\n> \n\nOk - I misunderstood you on this one, and thought you were describing \nthe \"star transformation\" - upon re-reading, I see that yes, it's more \nor less a description of the O'Neil Graefe method.\n\nbest wishes\n\nMark\n", "msg_date": "Mon, 19 Dec 2005 11:10:01 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "Simon Riggs wrote:\n> On Sat, 2005-12-17 at 13:13 -0500, Tom Lane wrote:\n> \n>>Simon Riggs <[email protected]> writes:\n>>\n>>>On Fri, 2005-12-16 at 23:28 -0500, Bruce Momjian wrote:\n>>>\n>>>>How are star joins different from what we do now?\n>>\n>>>Methods:\n>>>1. 
join all N small tables together in a cartesian product, then join to\n>>>main Large table once (rather than N times)\n>>\n>>Of course, the reason the current planner does not think of this is that\n>>it does not consider clauseless joins unless there is no alternative.\n> \n> \n> Understood\n> \n> \n>> The plan you currently get for\n>>this sort of scenario is typically a nest of hash joins:\n>>\n>> QUERY PLAN \n>>------------------------------------------------------------------------\n>> Hash Join (cost=2.25..4652.25 rows=102400 width=16)\n>> Hash Cond: (\"outer\".f1 = \"inner\".f1)\n>> -> Hash Join (cost=1.12..3115.12 rows=102400 width=12)\n>> Hash Cond: (\"outer\".f2 = \"inner\".f1)\n>> -> Seq Scan on fact (cost=0.00..1578.00 rows=102400 width=8)\n>> -> Hash (cost=1.10..1.10 rows=10 width=4)\n>> -> Seq Scan on d2 (cost=0.00..1.10 rows=10 width=4)\n>> -> Hash (cost=1.10..1.10 rows=10 width=4)\n>> -> Seq Scan on d1 (cost=0.00..1.10 rows=10 width=4)\n>>(9 rows)\n>>\n>>This involves only one scan of the fact table. As each row is pulled up\n>>through the nest of hash joins, we hash one dimension key and join to\n>>one small table at each level. \n> \n> \n> Understood\n> \n> \n>>This is at worst the same amount of work\n>>as hashing all the keys at once and probing a single cartesian-product\n>>hashtable, probably less work (fewer wasted key-comparisons). And\n>>definitely less memory. You do have to keep your eye on the ball that\n>>you don't waste a lot of overhead propagating the row up through\n>>multiple join levels, but we got rid of most of the problem there in\n>>8.1 via the introduction of \"virtual tuple slots\". If this isn't fast\n>>enough yet, it'd make more sense to invest effort in further cutting the\n>>executor's join overhead (which of course benefits *all* plan types)\n>>than in trying to make the planner choose a star join.\n> \n> \n> That join type is used when an index-organised table is available, so\n> that a SeqScan of the larger table can be avoided.\n> \n> I'd say the plan would make sense if the columns of the cartesian\n> product match a multi-column index on the larger table that would not\n> ever be used unless sufficient columns are restricted in each lookup.\n> That way you are able to avoid the SeqScan that occurs for the multiple\n> nested Hash Join case. (Clearly, normal selectivity rules apply on the\n> use of the index in this way).\n> \n> So I think that plan type still can be effective in some circumstances.\n> Mind you: building an N-way index on a large table isn't such a good\n> idea, unless you can partition the tables and still use a join. 
Which is\n> why I've not centred on this case as being important before now.\n> \n> My understanding: Teradata and DB2 use this.\n> \n\nFWIW - I think DB2 uses the successive fact RID buildup (i.e method 2), \nunfortunately I haven't got a working copy of DB2 in front of me to test.\n\nCheers\n\nMark\n", "msg_date": "Mon, 19 Dec 2005 11:13:55 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Mon, 2005-12-19 at 11:10 +1300, Mark Kirkwood wrote:\n> >>I found these two papers whilst browsing:\n> >>\n> >>\n> >>http://www.cs.brown.edu/courses/cs227/Papers/Indexing/O'NeilGraefe.pdf\n> >>http://www.dama.upc.edu/downloads/jaguilar-2005-4.pdf\n> >>\n> >>\n> >>They seem to be describing a more subtle method making use of join \n> >>indexes and bitmapped indexes.\n> > \n> > \n> > Which is the option (2) I described.\n> > \n> \n> Ok - I misunderstood you on this one, and thought you were describing \n> the \"star transformation\" - upon re-reading, I see that yes, it's more \n> or less a description of the O'Neil Graefe method.\n\nPapers look interesting; I'd not seen them. My knowledge of this is\nmostly practical.\n\nO'Neil and Graefe seem to be talking about using join indexes, which is\nprobably method (3)... oh lordy. \n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 18 Dec 2005 22:21:04 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" }, { "msg_contents": "On Mon, 2005-12-19 at 11:13 +1300, Mark Kirkwood wrote:\n> > My understanding: Teradata and DB2 use this.\n> \n> FWIW - I think DB2 uses the successive fact RID buildup (i.e method 2), \n> unfortunately \n\nI think you're right; I was thinking about that point too because DB2\ndoesn't have index-organised tables (well, sort of: MDC).\n\nI was confused because IBM seem to have a patent on (1), even though it\nseems exactly like the original NCR/Teradata implementation, which\npredates the patent filing by many years. Wierd.\n\nIt's a minefield of patents....\n\n> I haven't got a working copy of DB2 in front of me to test.\n\nTrue, not all copies work :-)\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 18 Dec 2005 22:28:37 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" } ]
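A minimal sketch of the bitmap idea discussed in this thread, with invented names (a fact table carrying single-column indexes on d1_key and d2_key plus small dim1/dim2 tables -- none of these objects come from the thread). 8.1 will already AND together bitmaps from several indexes on one table for plain scalar restrictions; the transform Simon describes is about getting the dimension joins into that same shape, and whether the planner actually picks bitmap scans for the subselect form depends on version and selectivity:

-- plain scalar restrictions: 8.1 can answer this with a BitmapAnd of the two
-- single-column fact indexes before touching the heap
EXPLAIN ANALYZE
SELECT count(*) FROM fact WHERE d1_key = 42 AND d2_key = 7;

-- the rewrite discussed above pushes the dimension lookups into the same form
EXPLAIN ANALYZE
SELECT count(*) FROM fact
WHERE d1_key = ANY (ARRAY(SELECT id FROM dim1 WHERE region = 'EU'))
  AND d2_key = ANY (ARRAY(SELECT id FROM dim2 WHERE quarter = 'Q4'));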
[ { "msg_contents": "We are testing disk I/O on our new server (referred to in my recent\nquestions about LVM and XFS on this list) and have run bonnie++ on the\nxfs partition destined for postgres; results noted below.\n\nI haven't been able to find many benchmarks showing desirable IO stats.\nAs far as I can tell the sequential input (around 110000K/sec) looks\ngood while the sequential output (around 50000K/sec) looks fairly\naverage.\n\nAdvice and comments gratefully received.\nSuggested Parameters for running pg_bench would be great!\n\nThanks,\nRory\n\nThe server has a dual core AMD Opteron 270 chip (2000MHz), 6GB of RAM\nand an LSI 320-1 card running 4x147GB disks running in a RAID10\nconfiguration. The server has a freshly compiled 2.6.14.3 linux kernel.\n\npartial df output:\n Filesystem Size Used Avail Use% Mounted on\n ...\n /dev/mapper/masvg-masdata\n 99G 33M 94G 1% /masdata\n /dev/mapper/masvg-postgres\n 40G 92M 40G 1% /postgres\n\npartial fstab config:\n ...\n /dev/mapper/masvg-masdata /masdata ext3 defaults 0 2\n /dev/mapper/masvg-postgres /postgres xfs defaults 0 2\n\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nmas 13000M 52655 99 52951 10 32246 7 41441 72 113438 12 590.0 0\nmas 13000M 49306 83 51967 9 32269 7 42442 70 115427 12 590.5 1\nmas 13000M 53449 89 51982 10 32625 7 42819 71 111829 11 638.3 0\nmas 13000M 51818 88 51639 9 33127 7 42377 70 108585 11 556.5 0\nmas 13000M 48930 90 51750 9 32854 7 41220 71 109813 11 566.2 0\nmas 13000M 52148 88 47393 9 35343 7 42871 70 109775 12 582.0 0\nmas 13000M 52427 88 53040 10 32315 7 42813 71 112045 12 596.7 0\nmas 13000M 51967 87 54004 10 30429 7 46180 76 110973 11 625.1 0\nmas 13000M 48690 89 46416 9 35678 7 41429 72 111612 11 627.2 0\nmas 13000M 52641 88 52807 10 31115 7 43476 72 110694 11 568.2 0\nmas 13000M 52186 88 47385 9 35341 7 42959 72 110963 11 558.7 0\nmas 13000M 52092 87 53111 10 32135 7 42636 69 110560 11 562.0 1\nmas 13000M 49445 90 47378 9 34410 7 41191 72 110736 11 610.3 0\nmas 13000M 51704 88 47699 9 35436 7 42413 69 110446 11 612.0 0\nmas 13000M 52434 88 53331 10 32479 7 43229 71 109385 11 620.6 0\nmas 13000M 52074 89 53291 10 32095 7 43593 72 109541 11 628.0 0\nmas 13000M 48084 88 52624 10 32301 7 40975 72 110548 11 594.0 0\nmas 13000M 53019 90 52441 10 32411 7 42574 68 111321 11 578.0 0\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\nmas 16 7970 36 +++++ +++ 7534 34 7918 36 +++++ +++ 4482 22\nmas 16 7945 33 +++++ +++ 7483 30 7918 42 +++++ +++ 4438 20\nmas 16 8481 48 +++++ +++ 7468 31 7870 39 +++++ +++ 4385 23\nmas 16 7915 36 +++++ +++ 7498 33 7930 41 +++++ +++ 4619 23\nmas 16 8553 35 +++++ +++ 8312 38 8613 37 +++++ +++ 4513 24\nmas 16 8498 40 +++++ +++ 8215 33 8570 43 +++++ +++ 4858 26\nmas 16 7892 39 +++++ +++ 7624 30 5341 28 +++++ +++ 4762 22\nmas 16 5408 27 +++++ +++ 9378 37 8573 41 +++++ +++ 4385 21\nmas 16 5063 27 +++++ +++ 8656 38 5159 27 +++++ +++ 4705 24\nmas 16 4917 25 +++++ +++ 8682 39 5282 28 +++++ +++ 4723 22\nmas 16 5027 28 +++++ +++ 8538 36 5173 29 +++++ +++ 4719 23\nmas 16 5449 27 +++++ +++ 8630 36 5266 28 +++++ +++ 5463 27\nmas 16 5373 27 +++++ +++ 8658 37 5264 26 +++++ +++ 4731 22\nmas 16 4959 24 +++++ +++ 9126 46 5160 26 +++++ +++ 4717 24\nmas 16 5379 27 +++++ +++ 8620 40 5014 27 +++++ 
+++ 4701 21\nmas 16 5312 29 +++++ +++ 8642 36 7862 42 +++++ +++ 4869 24\nmas 16 5057 26 +++++ +++ 8566 36 5120 28 +++++ +++ 4681 21\nmas 16 5225 27 +++++ +++ 8740 37 5205 28 +++++ +++ 4744 21\n\n\n\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n", "msg_date": "Thu, 8 Dec 2005 12:12:59 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Disk tests for a new database server" }, { "msg_contents": "Rory,\n\nWhile I don't have my specific stats with my from my tests with XFS and\nbonnie for our company's db server, I do recall vividly that seq. output\ndid not increase dramatically until I had 8+ discs in a RAID10\nconfiguration on an LSI card. I was not using LVM. If I had less than 8\ndiscs, seq. output was about equal regardless of file system being uses\n(EXT3,JFS,or XFS). \n\nSteve\n\n\nOn Thu, 2005-12-08 at 12:12 +0000, Rory Campbell-Lange wrote:\n> We are testing disk I/O on our new server (referred to in my recent\n> questions about LVM and XFS on this list) and have run bonnie++ on the\n> xfs partition destined for postgres; results noted below.\n> \n> I haven't been able to find many benchmarks showing desirable IO stats.\n> As far as I can tell the sequential input (around 110000K/sec) looks\n> good while the sequential output (around 50000K/sec) looks fairly\n> average.\n> \n> Advice and comments gratefully received.\n> Suggested Parameters for running pg_bench would be great!\n> \n> Thanks,\n> Rory\n> \n> The server has a dual core AMD Opteron 270 chip (2000MHz), 6GB of RAM\n> and an LSI 320-1 card running 4x147GB disks running in a RAID10\n> configuration. The server has a freshly compiled 2.6.14.3 linux kernel.\n> \n> partial df output:\n> Filesystem Size Used Avail Use% Mounted on\n> ...\n> /dev/mapper/masvg-masdata\n> 99G 33M 94G 1% /masdata\n> /dev/mapper/masvg-postgres\n> 40G 92M 40G 1% /postgres\n> \n> partial fstab config:\n> ...\n> /dev/mapper/masvg-masdata /masdata ext3 defaults 0 2\n> /dev/mapper/masvg-postgres /postgres xfs defaults 0 2\n> \n> \n> Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> mas 13000M 52655 99 52951 10 32246 7 41441 72 113438 12 590.0 0\n> mas 13000M 49306 83 51967 9 32269 7 42442 70 115427 12 590.5 1\n> mas 13000M 53449 89 51982 10 32625 7 42819 71 111829 11 638.3 0\n> mas 13000M 51818 88 51639 9 33127 7 42377 70 108585 11 556.5 0\n> mas 13000M 48930 90 51750 9 32854 7 41220 71 109813 11 566.2 0\n> mas 13000M 52148 88 47393 9 35343 7 42871 70 109775 12 582.0 0\n> mas 13000M 52427 88 53040 10 32315 7 42813 71 112045 12 596.7 0\n> mas 13000M 51967 87 54004 10 30429 7 46180 76 110973 11 625.1 0\n> mas 13000M 48690 89 46416 9 35678 7 41429 72 111612 11 627.2 0\n> mas 13000M 52641 88 52807 10 31115 7 43476 72 110694 11 568.2 0\n> mas 13000M 52186 88 47385 9 35341 7 42959 72 110963 11 558.7 0\n> mas 13000M 52092 87 53111 10 32135 7 42636 69 110560 11 562.0 1\n> mas 13000M 49445 90 47378 9 34410 7 41191 72 110736 11 610.3 0\n> mas 13000M 51704 88 47699 9 35436 7 42413 69 110446 11 612.0 0\n> mas 13000M 52434 88 53331 10 32479 7 43229 71 109385 11 620.6 0\n> mas 13000M 52074 89 53291 10 32095 7 43593 72 109541 11 628.0 0\n> mas 13000M 48084 88 52624 10 32301 7 40975 72 110548 11 594.0 0\n> mas 13000M 53019 90 52441 10 32411 7 42574 68 111321 11 578.0 0\n> ------Sequential Create------ --------Random Create--------\n> 
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> mas 16 7970 36 +++++ +++ 7534 34 7918 36 +++++ +++ 4482 22\n> mas 16 7945 33 +++++ +++ 7483 30 7918 42 +++++ +++ 4438 20\n> mas 16 8481 48 +++++ +++ 7468 31 7870 39 +++++ +++ 4385 23\n> mas 16 7915 36 +++++ +++ 7498 33 7930 41 +++++ +++ 4619 23\n> mas 16 8553 35 +++++ +++ 8312 38 8613 37 +++++ +++ 4513 24\n> mas 16 8498 40 +++++ +++ 8215 33 8570 43 +++++ +++ 4858 26\n> mas 16 7892 39 +++++ +++ 7624 30 5341 28 +++++ +++ 4762 22\n> mas 16 5408 27 +++++ +++ 9378 37 8573 41 +++++ +++ 4385 21\n> mas 16 5063 27 +++++ +++ 8656 38 5159 27 +++++ +++ 4705 24\n> mas 16 4917 25 +++++ +++ 8682 39 5282 28 +++++ +++ 4723 22\n> mas 16 5027 28 +++++ +++ 8538 36 5173 29 +++++ +++ 4719 23\n> mas 16 5449 27 +++++ +++ 8630 36 5266 28 +++++ +++ 5463 27\n> mas 16 5373 27 +++++ +++ 8658 37 5264 26 +++++ +++ 4731 22\n> mas 16 4959 24 +++++ +++ 9126 46 5160 26 +++++ +++ 4717 24\n> mas 16 5379 27 +++++ +++ 8620 40 5014 27 +++++ +++ 4701 21\n> mas 16 5312 29 +++++ +++ 8642 36 7862 42 +++++ +++ 4869 24\n> mas 16 5057 26 +++++ +++ 8566 36 5120 28 +++++ +++ 4681 21\n> mas 16 5225 27 +++++ +++ 8740 37 5205 28 +++++ +++ 4744 21\n> \n> \n> \n> \n\n", "msg_date": "Thu, 08 Dec 2005 07:06:31 -0800", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk tests for a new database server" }, { "msg_contents": "Hi Steve\n\nOn 08/12/05, Steve Poe ([email protected]) wrote:\n> Rory,\n> \n> While I don't have my specific stats with my from my tests with XFS and\n> bonnie for our company's db server, I do recall vividly that seq. output\n> did not increase dramatically until I had 8+ discs in a RAID10\n> configuration on an LSI card. I was not using LVM. If I had less than 8\n> discs, seq. output was about equal regardless of file system being uses\n> (EXT3,JFS,or XFS). \n\nThanks for the information. 
I certainly had not appreciated this fact.\n\nRegards,\nRory\n\n> On Thu, 2005-12-08 at 12:12 +0000, Rory Campbell-Lange wrote:\n> > We are testing disk I/O on our new server (referred to in my recent\n> > questions about LVM and XFS on this list) and have run bonnie++ on the\n> > xfs partition destined for postgres; results noted below.\n> > \n> > I haven't been able to find many benchmarks showing desirable IO stats.\n> > As far as I can tell the sequential input (around 110000K/sec) looks\n> > good while the sequential output (around 50000K/sec) looks fairly\n> > average.\n> > \n> > Advice and comments gratefully received.\n> > Suggested Parameters for running pg_bench would be great!\n> > \n> > Thanks,\n> > Rory\n...\n> > Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> > mas 13000M 52655 99 52951 10 32246 7 41441 72 113438 12 590.0 0\n> > mas 13000M 49306 83 51967 9 32269 7 42442 70 115427 12 590.5 1\n> > mas 13000M 53449 89 51982 10 32625 7 42819 71 111829 11 638.3 0\n> > mas 13000M 51818 88 51639 9 33127 7 42377 70 108585 11 556.5 0\n...\n> > ------Sequential Create------ --------Random Create--------\n> > -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> > files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> > mas 16 7970 36 +++++ +++ 7534 34 7918 36 +++++ +++ 4482 22\n> > mas 16 7945 33 +++++ +++ 7483 30 7918 42 +++++ +++ 4438 20\n> > mas 16 8481 48 +++++ +++ 7468 31 7870 39 +++++ +++ 4385 23\n> > mas 16 7915 36 +++++ +++ 7498 33 7930 41 +++++ +++ 4619 23\n> > mas 16 8553 35 +++++ +++ 8312 38 8613 37 +++++ +++ 4513 24\n> > mas 16 8498 40 +++++ +++ 8215 33 8570 43 +++++ +++ 4858 26\n", "msg_date": "Thu, 8 Dec 2005 22:56:20 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk tests for a new database server" } ]
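On the pg_bench question above: the usual knobs are the scale factor at initialisation (-i -s N, sized so the generated tables comfortably exceed the 6GB of RAM) and the client/transaction counts (-c/-t) on the test runs themselves. A much cruder in-database cross-check of bonnie's ~110MB/s sequential-input figure is to time a full scan of a table known to be larger than RAM; some_big_table below is only a placeholder:

\timing
SELECT count(*) FROM some_big_table;             -- elapsed time for one full scan
SELECT relpages::bigint * 8192 AS bytes_on_disk  -- size actually scanned
FROM pg_class WHERE relname = 'some_big_table';
-- bytes_on_disk / elapsed seconds gives the effective sequential read rate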
[ { "msg_contents": "> Hi All,\n> \n> I am working on an application that uses PostgreSQL. One of the\n> functions of the application is to generate reports. In order to keep\n> the code in the application simple we create a view of the required\ndata\n> in the database and then simply execute a SELECT * FROM\n> view_of_the_data; All of the manipulation and most of the time even\nthe\n> ordering is handled in the view.\n> \n> My question is how much if any performance degradation is there in\n> creating a view of a view?\n> \n> IOW if I have a view that ties together a couple of tables and\n> manipulates some data what will perform better; a view that filters,\n> manipulates, and orders the data from the first view or a view that\n> performs all the necessary calculations on the original tables?\n\nvery little, or a lot :). Clear as mud? \n\nViews in pg are built with the rule system which basically just expands\nthem into the source queries when it is time to execute them. In my\nexperience, the time to expand the rule and generate the plan is trivial\nnext to actually running the query.\n\nWhat you have to watch out for is if your plan is such that the lower\nview has to be fully materialized in order for the lower query to\nexecute. For example if you do some string processing on a key\nexpression, it obviously can no longer by used in an index expression.\n\nA real simple way to do the materialization test is to do a select *\nlimit 1 from your view-on-view. If it runs quickly, you have no\nproblems.\n\nBy the way, I consider views on views to be a good indicator of a good\ndesign :).\n\nMerlin\n\n", "msg_date": "Thu, 8 Dec 2005 08:13:01 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: view of view" } ]
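A small sketch of the view layering and the LIMIT 1 test described above, with invented object names (orders, customers and the two views are illustrative only):

-- hypothetical base view and a reporting view built on top of it
CREATE VIEW v_sales AS
  SELECT o.id, o.amount, o.created, c.region
  FROM   orders o JOIN customers c ON c.id = o.customer_id;

CREATE VIEW v_sales_recent AS
  SELECT id, amount, region
  FROM   v_sales
  WHERE  created > now() - interval '30 days';

-- Merlin's materialization test: if this comes back immediately, the outer
-- view's conditions are being pushed into the expanded query rather than
-- forcing the inner view to be computed in full
SELECT * FROM v_sales_recent LIMIT 1;
EXPLAIN SELECT * FROM v_sales_recent;  -- confirm index use on the base tables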
[ { "msg_contents": "I hope that this will demonstrate the problem and will give the needed\ninformation (global_content_id=90 is the record that was all the time\nupdated):\n\nV-Mark=# UPDATE active_content_t SET ac_counter_mm4_outbound=100 WHERE\nglobal_content_id=90;\nUPDATE 1\nTime: 396.089 ms\nV-Mark=# UPDATE active_content_t SET ac_counter_mm4_outbound=100 WHERE\nglobal_content_id=80;\nUPDATE 1\nTime: 1.320 ms\nV-Mark=# EXPLAIN UPDATE active_content_t SET\nac_counter_mm4_outbound=100 WHERE global_content_id=90;\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------\n Index Scan using active_content_t_pkey on active_content_t\n(cost=0.00..5.50 rows=1 width=236)\n Index Cond: (global_content_id = 90)\n(2 rows)\n\nTime: 9.092 ms\nV-Mark=# EXPLAIN UPDATE active_content_t SET\nac_counter_mm4_outbound=100 WHERE global_content_id=80;\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------\n Index Scan using active_content_t_pkey on active_content_t\n(cost=0.00..5.50 rows=1 width=236)\n Index Cond: (global_content_id = 80)\n(2 rows)\n\nTime: 0.666 ms \n\n> -----Original Message-----\n> From: Bruno Wolff III [mailto:[email protected]] \n> Sent: Wednesday, December 07, 2005 10:05 PM\n> To: Assaf Yaari\n> Cc: Jan Wieck; [email protected]\n> Subject: Re: [PERFORM] Performance degradation after \n> successive UPDATE's\n> \n> On Wed, Dec 07, 2005 at 14:14:31 +0200,\n> Assaf Yaari <[email protected]> wrote:\n> > Hi Jan,\n> > \n> > As I'm novice with PostgreSQL, can you elaborate the term FSM and \n> > settings recommendations?\n> http://developer.postgresql.org/docs/postgres/runtime-config-r\n> esource.html#RUNTIME-CONFIG-RESOURCE-FSM\n> \n> > BTW: I'm issuing VACUUM ANALYZE every 15 minutes (using \n> cron) and also \n> > changes the setting of fsync to false in postgresql.conf but still \n> > time seems to be growing.\n> \n> You generally don't want fsync set to false.\n> \n> > Also no other transactions are open.\n> \n> Have you given us explain analyse samples yet?\n> \n> > \n> > Thanks,\n> > Assaf.\n> > \n> > > -----Original Message-----\n> > > From: Jan Wieck [mailto:[email protected]]\n> > > Sent: Tuesday, December 06, 2005 2:35 PM\n> > > To: Assaf Yaari\n> > > Cc: Bruno Wolff III; [email protected]\n> > > Subject: Re: [PERFORM] Performance degradation after successive \n> > > UPDATE's\n> > > \n> > > On 12/6/2005 4:08 AM, Assaf Yaari wrote:\n> > > > Thanks Bruno,\n> > > > \n> > > > Issuing VACUUM FULL seems not to have influence on the time.\n> > > > I've added to my script VACUUM ANALYZE every 100 UPDATE's\n> > > and run the\n> > > > test again (on different record) and the time still increase.\n> > > \n> > > I think he meant\n> > > \n> > > - run VACUUM FULL once,\n> > > - adjust FSM settings to database size and turnover ratio\n> > > - run VACUUM ANALYZE more frequent from there on.\n> > > \n> > > \n> > > Jan\n> > > \n> > > > \n> > > > Any other ideas?\n> > > > \n> > > > Thanks,\n> > > > Assaf. 
\n> > > > \n> > > >> -----Original Message-----\n> > > >> From: Bruno Wolff III [mailto:[email protected]]\n> > > >> Sent: Monday, December 05, 2005 10:36 PM\n> > > >> To: Assaf Yaari\n> > > >> Cc: [email protected]\n> > > >> Subject: Re: Performance degradation after successive UPDATE's\n> > > >> \n> > > >> On Mon, Dec 05, 2005 at 19:05:01 +0200,\n> > > >> Assaf Yaari <[email protected]> wrote:\n> > > >> > Hi,\n> > > >> > \n> > > >> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.\n> > > >> > \n> > > >> > My application updates counters in DB. I left a test\n> > > over the night\n> > > >> > that increased counter of specific record. After \n> night running \n> > > >> > (several hundreds of thousands updates), I found out\n> > > that the time\n> > > >> > spent on UPDATE increased to be more than 1.5 second (at\n> > > >> the beginning\n> > > >> > it was less than 10ms)! Issuing VACUUM ANALYZE and even\n> > > >> reboot didn't\n> > > >> > seemed to solve the problem.\n> > > >> \n> > > >> You need to be running vacuum more often to get rid of the \n> > > >> deleted rows (update is essentially insert + delete). Once you \n> > > >> get\n> > > too many,\n> > > >> plain vacuum won't be able to clean them up without\n> > > raising the value\n> > > >> you use for FSM. By now the table is really bloated and\n> > > you probably\n> > > >> want to use vacuum full on it.\n> > > >> \n> > > > \n> > > > ---------------------------(end of\n> > > > broadcast)---------------------------\n> > > > TIP 9: In versions below 8.0, the planner will ignore \n> your desire to\n> > > > choose an index scan if your joining column's\n> > > datatypes do not\n> > > > match\n> > > \n> > > \n> > > --\n> > > #=============================================================\n> > > =========#\n> > > # It's easier to get forgiveness for being wrong than for being \n> > > right. #\n> > > # Let's break this rule - forgive me. \n> > > #\n> > > #==================================================\n> > > [email protected] #\n> > > \n> \n", "msg_date": "Thu, 8 Dec 2005 16:29:14 +0200", "msg_from": "\"Assaf Yaari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance degradation after successive UPDATE's" } ]
[ { "msg_contents": "I have a choice to make on a RAID enclosure:\n\n14x 36GB 15kRPM ultra 320 SCSI drives\n\nOR\n\n12x 72GB 10kRPM ultra 320 SCSI drives\n\nboth would be configured into RAID 10 over two SCSI channels using a \nmegaraid 320-2x card.\n\nMy goal is speed. Either would provide more disk space than I would \nneed over the next two years.\n\nThe database does a good number of write transactions, and a decent \nnumber of sequential scans over the whole DB (about 60GB including \nindexes) for large reports.\n\nMy only concern is the 10kRPM vs 15kRPM. The advantage of the 10k \ndisks is that it would come from the same vendor as the systems to \nwhich it will be connected, making procurement easier.\n\n\n", "msg_date": "Thu, 8 Dec 2005 11:52:17 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "opinion on disk speed" }, { "msg_contents": "On Thu, 2005-12-08 at 10:52, Vivek Khera wrote:\n> I have a choice to make on a RAID enclosure:\n> \n> 14x 36GB 15kRPM ultra 320 SCSI drives\n> \n> OR\n> \n> 12x 72GB 10kRPM ultra 320 SCSI drives\n> \n> both would be configured into RAID 10 over two SCSI channels using a \n> megaraid 320-2x card.\n> \n> My goal is speed. Either would provide more disk space than I would \n> need over the next two years.\n> \n> The database does a good number of write transactions, and a decent \n> number of sequential scans over the whole DB (about 60GB including \n> indexes) for large reports.\n> \n> My only concern is the 10kRPM vs 15kRPM. The advantage of the 10k \n> disks is that it would come from the same vendor as the systems to \n> which it will be connected, making procurement easier.\n\nI would say that the RAID controller and the amount of battery backed\ncache will have a greater impact than the difference in seek times on\nthose two drives. \n\nAlso, having two more drives in the 15k category is likely to play to\nits advantage more so than the speed of the drive spindles and seek\ntimes. If you're worried about higher failures due to heat etc... you\ncould always make a couple of the drives spares.\n\nLooking at the datasheet for the seagate 10k and 15k drives, it would\nappear there is another difference, The 10k drives list a sustained\nxfer rate of 39 to 80 MBytes / second, while the 15k drives list one of\n58 to 96. That's quite a bit faster. So, sequential scans should be\nfaster as well.\n\nPower consumption isn't much differnt, about a watt more for the 15ks,\nso that's no big deal. I'd do a bit of googling to see if there are a\nlot more horror stories with 15k drives than with the 10k ones.\n", "msg_date": "Thu, 08 Dec 2005 11:50:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Thu, 2005-12-08 at 11:52 -0500, Vivek Khera wrote:\n> I have a choice to make on a RAID enclosure:\n> \n> 14x 36GB 15kRPM ultra 320 SCSI drives\n> \n> OR\n> \n> 12x 72GB 10kRPM ultra 320 SCSI drives\n> \n> both would be configured into RAID 10 over two SCSI channels using a \n> megaraid 320-2x card.\n> \n> My goal is speed. Either would provide more disk space than I would \n> need over the next two years.\n> \n> The database does a good number of write transactions, and a decent \n> number of sequential scans over the whole DB (about 60GB including \n> indexes) for large reports.\n\nThe STR of 15k is quite a bit higher than 10k. 
I'd be inclined toward\nthe 15k if it doesn't impact the budget.\n\nFor the write transactions, the speed and size of the DIMM on that LSI\ncard will matter the most. I believe the max memory on that adapter is\n512MB. These cost so little that it wouldn't make sense to go with\nanything smaller.\n\nWhen comparing the two disks, don't forget to check for supported SCSI\nfeatures. In the past I've been surprised that some 10k disks don't\nsupport packetization, QAS, and so forth. All 15k disks seem to support\nthese.\n\nDon't forget to post some benchmarks when your vendor delivers ;)\n\n-jwb\n", "msg_date": "Thu, 08 Dec 2005 11:21:48 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Thu, 8 Dec 2005, Vivek Khera wrote:\n\n> I have a choice to make on a RAID enclosure:\n>\n> 14x 36GB 15kRPM ultra 320 SCSI drives\n>\n> OR\n>\n> 12x 72GB 10kRPM ultra 320 SCSI drives\n>\n> both would be configured into RAID 10 over two SCSI channels using a megaraid \n> 320-2x card.\n>\n> My goal is speed. Either would provide more disk space than I would need \n> over the next two years.\n>\n> The database does a good number of write transactions, and a decent number of \n> sequential scans over the whole DB (about 60GB including indexes) for large \n> reports.\n>\n> My only concern is the 10kRPM vs 15kRPM. The advantage of the 10k disks is \n> that it would come from the same vendor as the systems to which it will be \n> connected, making procurement easier.\n\nif space isn't an issue then you fall back to the old standby rules of \nthumb\n\nmore spindles are better (more disk heads that can move around \nindependantly)\n\nfaster drives are better (less time to read or write a track)\n\nso the 15k drive option is better\n\none other note, you probably don't want to use all the disks in a raid10 \narray, you probably want to split a pair of them off into a seperate raid1 \narray and put your WAL on it.\n\nDavid Lang\n\n", "msg_date": "Thu, 8 Dec 2005 15:43:32 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Thu, 08 Dec 2005 11:50:33 -0600\nScott Marlowe <[email protected]> wrote:\n\n> Power consumption isn't much differnt, about a watt more for the 15ks,\n> so that's no big deal. I'd do a bit of googling to see if there are a\n> lot more horror stories with 15k drives than with the 10k ones.\n\n Just an FYI, but I've run both 10k and 15k rpm drives in PostgreSQL\n servers and haven't experienced any \"horror stories\". They do run\n hotter, but this shouldn't be a big issue in a decent case in a \n typical server room environment. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 9 Dec 2005 09:50:20 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "\nOn Dec 8, 2005, at 2:21 PM, Jeffrey W. Baker wrote:\n\n> For the write transactions, the speed and size of the DIMM on that LSI\n> card will matter the most. I believe the max memory on that \n> adapter is\n> 512MB. These cost so little that it wouldn't make sense to go with\n> anything smaller.\n\n From where did you get LSI MegaRAID controller with 512MB? 
The \n320-2X doesn't seem to come with more than 128 from the factory.\n\nCan you just swap out the DIMM card for higher capacity?\n\n", "msg_date": "Mon, 12 Dec 2005 16:59:09 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "\nOn Dec 12, 2005, at 1:59 PM, Vivek Khera wrote:\n> From where did you get LSI MegaRAID controller with 512MB? The \n> 320-2X doesn't seem to come with more than 128 from the factory.\n>\n> Can you just swap out the DIMM card for higher capacity?\n\n\nWe've swapped out the DIMMs on MegaRAID controllers. Given the cost \nof a standard low-end DIMM these days (which is what the LSI \ncontrollers use last I checked), it is a very cheap upgrade.\n\nAdmittedly I've never actually run benchmarks to see if it made a \nsignificant difference in practice, but it certainly seems like it \nshould in theory and the upgrade cost is below the noise floor for \nmost database servers.\n\nJ. Andrew Rogers\n\n", "msg_date": "Mon, 12 Dec 2005 14:16:47 -0800", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "\nOn Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:\n\n> We've swapped out the DIMMs on MegaRAID controllers. Given the \n> cost of a standard low-end DIMM these days (which is what the LSI \n> controllers use last I checked), it is a very cheap upgrade.\n\nWhat's the max you can put into one of these cards? I haven't been \nable to find docs on which specific DIMM type they use...\n\nThanks!\n\n", "msg_date": "Mon, 12 Dec 2005 17:19:39 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Mon, 2005-12-12 at 16:19, Vivek Khera wrote:\n> On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:\n> \n> > We've swapped out the DIMMs on MegaRAID controllers. Given the \n> > cost of a standard low-end DIMM these days (which is what the LSI \n> > controllers use last I checked), it is a very cheap upgrade.\n> \n> What's the max you can put into one of these cards? I haven't been \n> able to find docs on which specific DIMM type they use...\n\nI found the manual for the 4 port U320 SCSI controller, and it listed\n256 Meg for single data rate DIMM, and 512 Meg for DDR DIMM. This was\non the lsi at:\n\nhttp://www.lsilogic.com/files/docs/techdocs/storage_stand_prod/RAIDpage/mr_320_ug.pdf\n\nI believe.\n\nThey've got a new one coming out, that's SAS, like SCSI on SATA or\nsomething. It comes with 256Meg, removeable, but doesn't yet say what\nthe max size it. I'd love to have one of these that could hold a couple\nof gigs for a TPC type test.\n\n", "msg_date": "Mon, 12 Dec 2005 17:12:01 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "\nOn Dec 12, 2005, at 2:19 PM, Vivek Khera wrote:\n> On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:\n>\n>> We've swapped out the DIMMs on MegaRAID controllers. Given the \n>> cost of a standard low-end DIMM these days (which is what the LSI \n>> controllers use last I checked), it is a very cheap upgrade.\n>\n> What's the max you can put into one of these cards? I haven't been \n> able to find docs on which specific DIMM type they use...\n\n\nTable 3.7 in the MegaRAID Adapter User's Guide has the specs and \nlimits for various controllers. 
For the 320-2x, the limit is 512MB \nof PC100 ECC RAM.\n\n\nJ. Andrew Rogers\n\n", "msg_date": "Mon, 12 Dec 2005 15:17:50 -0800", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" } ]
[ { "msg_contents": "Hi all,\n\nFirst of all, please pardon if the question is dumb! Is it even feasible or\nnormal to do such a thing ! This query is needed by a webpage so needs to be\nlightning fast. Anything beyond 2-3 seconds is unacceptable performance.\n\nI have two tables\n\nCREATE TABLE runresult\n(\n id_runresult int8 NOT NULL,\n rundefinition_id_rundefinition int4 NOT NULL,\n measure_id_measure int4 NOT NULL,\n value float4 NOT NULL,\n \"sequence\" varchar(20) NOT NULL,\n CONSTRAINT pk_runresult_ars PRIMARY KEY (id_runresult),\n) \n\n\nCREATE TABLE runresult_has_catalogtable\n(\n runresult_id_runresult int8 NOT NULL,\n catalogtable_id_catalogtable int4 NOT NULL,\n value int4 NOT NULL,\n CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY\n(runresult_id_runresult, catalogtable_id_catalogtable, value)\n CONSTRAINT fk_temp FOREIGN KEY (runresult_id_runresult) REFERENCES\nrunresult(id_runresult) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \n\nEach table has around 300 million records (will grow to probably billions).\nBelow is the query and the explain analyze --\n\nexplain analyze SELECT measure.description, runresult.value\nFROM ((((rundefinition INNER JOIN runresult ON\nrundefinition.id_rundefinition = runresult.rundefinition_id_rundefinition) \nINNER JOIN runresult_has_catalogtable ON runresult.id_runresult =\nrunresult_has_catalogtable.runresult_id_runresult) \nINNER JOIN runresult_has_catalogtable AS runresult_has_catalogtable_1 ON\nrunresult.id_runresult =\nrunresult_has_catalogtable_1.runresult_id_runresult) \nINNER JOIN runresult_has_catalogtable AS runresult_has_catalogtable_2 ON\nrunresult.id_runresult =\nrunresult_has_catalogtable_2.runresult_id_runresult) \nINNER JOIN measure ON runresult.measure_id_measure = measure.id_measure\nWHERE (((runresult_has_catalogtable.catalogtable_id_catalogtable)=52) \nAND ((runresult_has_catalogtable_1.catalogtable_id_catalogtable)=54) \nAND ((runresult_has_catalogtable_2.catalogtable_id_catalogtable)=55) \nAND ((runresult_has_catalogtable.value)=15806) \nAND ((runresult_has_catalogtable_1.value)=1) \nAND ((runresult_has_catalogtable_2.value) In (21,22,23,24)) \nAND ((rundefinition.id_rundefinition)=10106));\n\n'Nested Loop (cost=0.00..622582.70 rows=1 width=28) (actual\ntime=25.221..150.563 rows=22 loops=1)'\n' -> Nested Loop (cost=0.00..622422.24 rows=2 width=52) (actual\ntime=25.201..150.177 rows=22 loops=1)'\n' -> Nested Loop (cost=0.00..622415.97 rows=2 width=32) (actual\ntime=25.106..149.768 rows=22 loops=1)'\n' -> Nested Loop (cost=0.00..621258.54 rows=15 width=24)\n(actual time=24.582..149.061 rows=30 loops=1)'\n' -> Index Scan using pk_rundefinition on rundefinition\n(cost=0.00..3.86 rows=1 width=4) (actual time=0.125..0.147 rows=1 loops=1)'\n' Index Cond: (id_rundefinition = 10106)'\n' -> Nested Loop (cost=0.00..621254.54 rows=15\nwidth=28) (actual time=24.443..148.784 rows=30 loops=1)'\n' -> Index Scan using\nrunresult_has_catalogtable_value on runresult_has_catalogtable\n(cost=0.00..575069.35 rows=14437 width=8) (actual time=0.791..33.036\nrows=10402 loops=1)'\n' Index Cond: (value = 15806)'\n' Filter: (catalogtable_id_catalogtable =\n52)'\n' -> Index Scan using pk_runresult_ars on\nrunresult (cost=0.00..3.19 rows=1 width=20) (actual time=0.007..0.007\nrows=0 loops=10402)'\n' Index Cond: (runresult.id_runresult =\n\"outer\".runresult_id_runresult)'\n' Filter: (10106 =\nrundefinition_id_rundefinition)'\n' -> Index Scan using runresult_has_catalogtable_id_runresult\non runresult_has_catalogtable 
runresult_has_catalogtable_1\n(cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 rows=1\nloops=30)'\n' Index Cond:\n(runresult_has_catalogtable_1.runresult_id_runresult =\n\"outer\".runresult_id_runresult)'\n' Filter: ((catalogtable_id_catalogtable = 54) AND (value\n= 1))'\n' -> Index Scan using pk_measure on measure (cost=0.00..3.12 rows=1\nwidth=28) (actual time=0.008..0.010 rows=1 loops=22)'\n' Index Cond: (\"outer\".measure_id_measure =\nmeasure.id_measure)'\n' -> Index Scan using runresult_has_catalogtable_id_runresult on\nrunresult_has_catalogtable runresult_has_catalogtable_2 (cost=0.00..79.42\nrows=65 width=8) (actual time=0.007..0.010 rows=1 loops=22)'\n' Index Cond: (runresult_has_catalogtable_2.runresult_id_runresult =\n\"outer\".runresult_id_runresult)'\n' Filter: ((catalogtable_id_catalogtable = 55) AND ((value = 21) OR\n(value = 22) OR (value = 23) OR (value = 24)))'\n'Total runtime: 150.863 ms'\n\n\n", "msg_date": "Thu, 8 Dec 2005 11:59:24 -0500 ", "msg_from": "Amit V Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Joining 2 tables with 300 million rows" }, { "msg_contents": "On Thu, 8 Dec 2005 11:59:24 -0500 , Amit V Shah <[email protected]>\nwrote:\n> CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY\n>(runresult_id_runresult, catalogtable_id_catalogtable, value)\n\n>' -> Index Scan using runresult_has_catalogtable_id_runresult\n>on runresult_has_catalogtable runresult_has_catalogtable_1\n>(cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 rows=1\n>loops=30)'\n>' Index Cond:\n>(runresult_has_catalogtable_1.runresult_id_runresult =\n>\"outer\".runresult_id_runresult)'\n>' Filter: ((catalogtable_id_catalogtable = 54) AND (value\n>= 1))'\n\nIf I were the planner, I'd use the primary key index. You seem to\nhave a redundant(?) index on\nrunresult_has_catalogtable(runresult_id_runresult). Dropping it might\nhelp, or it might make things much worse. But at this stage this is\npure speculation.\n\nGive us more information first. Show us the complete definition\n(including *all* indices) of all tables occurring in your query. What\nPostgres version is this? And please post EXPLAIN ANALYSE output of a\n*slow* query.\nServus\n Manfred\n", "msg_date": "Mon, 12 Dec 2005 23:34:12 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joining 2 tables with 300 million rows" } ]
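One concrete way to act on the advice above; the table is named in the thread, so only the queries themselves are new:

-- list every index on the table: the primary key already leads with
-- runresult_id_runresult, so a separate index on just that column is likely
-- redundant and mostly adds write overhead on a 300-million-row table
SELECT indexname, indexdef
FROM   pg_indexes
WHERE  tablename = 'runresult_has_catalogtable';

-- and, as requested, capture EXPLAIN ANALYZE for a parameter set that is
-- actually slow on a cold cache, not one that has just been run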
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Vivek Khera\n> Sent: 08 December 2005 16:52\n> To: Postgresql Performance\n> Subject: [PERFORM] opinion on disk speed\n> \n> I have a choice to make on a RAID enclosure:\n> \n> 14x 36GB 15kRPM ultra 320 SCSI drives\n> \n> OR\n> \n> 12x 72GB 10kRPM ultra 320 SCSI drives\n> \n> both would be configured into RAID 10 over two SCSI channels using a \n> megaraid 320-2x card.\n> \n> My goal is speed. Either would provide more disk space than I would \n> need over the next two years.\n> \n> The database does a good number of write transactions, and a decent \n> number of sequential scans over the whole DB (about 60GB including \n> indexes) for large reports.\n> \n> My only concern is the 10kRPM vs 15kRPM. The advantage of the 10k \n> disks is that it would come from the same vendor as the systems to \n> which it will be connected, making procurement easier.\n\n15K drives (well, the Seagate Cheetah X15's that I have a lot of at\nleast) can run very hot compared to the 10K's. Might be worth bearing\n(no pun intended) in mind.\n\nOther than that, without knowing the full specs of the drives, you've\ngot 2 extra spindles and a probably-lower-seek time if you go for the\nX15's so that would seem likely to be the faster option.\n\nRegards, Dave\n\n", "msg_date": "Thu, 8 Dec 2005 17:03:27 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Thu, 8 Dec 2005 17:03:27 -0000\n\"Dave Page\" <[email protected]> wrote:\n\n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf Of \n> > Vivek Khera\n>\n> > I have a choice to make on a RAID enclosure:\n> > \n> > 14x 36GB 15kRPM ultra 320 SCSI drives\n> > \n> > OR\n> > \n> > 12x 72GB 10kRPM ultra 320 SCSI drives\n> > \n> > both would be configured into RAID 10 over two SCSI channels using\n> > a megaraid 320-2x card.\n>\n> 15K drives (well, the Seagate Cheetah X15's that I have a lot of at\n> least) can run very hot compared to the 10K's. Might be worth bearing\n> (no pun intended) in mind.\n> \n> Other than that, without knowing the full specs of the drives, you've\n> got 2 extra spindles and a probably-lower-seek time if you go for the\n> X15's so that would seem likely to be the faster option.\n\n I agree, the extra spindles and lower seek times are better if all\n you are concerned about is raw speed. \n\n However, that has to be balanced, from an overall perspective, with\n the nice single point of ordering/contact/support/warranty of the\n one vendor. It's a tough call. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 9 Dec 2005 09:47:18 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "Frank Wiles wrote:\n\n> \n> \n> I agree, the extra spindles and lower seek times are better if all\n> you are concerned about is raw speed. \n> \n> However, that has to be balanced, from an overall perspective, with\n> the nice single point of ordering/contact/support/warranty of the\n> one vendor. It's a tough call. \n\nWell, if your favourite dealer can't supply you with such common \nequipment as 15k drives you should consider changing the dealer. 
They \ndon't seem to be aware of db hardware reqirements.\n\nRegards,\nAndreas\n\n", "msg_date": "Fri, 09 Dec 2005 15:50:16 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "\nOn Dec 9, 2005, at 10:50 AM, Andreas Pflug wrote:\n\n> Well, if your favourite dealer can't supply you with such common \n> equipment as 15k drives you should consider changing the dealer. \n> They don't seem to be aware of db hardware reqirements.\n\nThanks to all for your opinions. I'm definitely sticking with 15k \ndrives like I've done in the past for all my other servers.\n\nThe reason I considered the 10k was because of the simplicity of \nordering from the same vendor. They do offer 15k drives, but at \ndouble the capacity I needed (73GB drives) which would make the cost \nway high and overkill for what I need.\n\n", "msg_date": "Mon, 12 Dec 2005 13:42:43 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" } ]
[ { "msg_contents": "What's the problem? You are joining two 300 million row tables in 0.15\nof a second - seems reasonable.\n\nDmitri\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Amit V Shah\n> Sent: Thursday, December 08, 2005 11:59 AM\n> To: '[email protected]'\n> Subject: [PERFORM] Joining 2 tables with 300 million rows\n> \n> \n> Hi all,\n> \n> First of all, please pardon if the question is dumb! Is it \n> even feasible or normal to do such a thing ! This query is \n> needed by a webpage so needs to be lightning fast. Anything \n> beyond 2-3 seconds is unacceptable performance.\n> \n> I have two tables\n> \n> CREATE TABLE runresult\n> (\n> id_runresult int8 NOT NULL,\n> rundefinition_id_rundefinition int4 NOT NULL,\n> measure_id_measure int4 NOT NULL,\n> value float4 NOT NULL,\n> \"sequence\" varchar(20) NOT NULL,\n> CONSTRAINT pk_runresult_ars PRIMARY KEY (id_runresult),\n> ) \n> \n> \n> CREATE TABLE runresult_has_catalogtable\n> (\n> runresult_id_runresult int8 NOT NULL,\n> catalogtable_id_catalogtable int4 NOT NULL,\n> value int4 NOT NULL,\n> CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY \n> (runresult_id_runresult, catalogtable_id_catalogtable, value)\n> CONSTRAINT fk_temp FOREIGN KEY (runresult_id_runresult) REFERENCES\n> runresult(id_runresult) ON UPDATE RESTRICT ON DELETE RESTRICT\n> ) \n> \n> Each table has around 300 million records (will grow to \n> probably billions). Below is the query and the explain analyze --\n> \n> explain analyze SELECT measure.description, runresult.value \n> FROM ((((rundefinition INNER JOIN runresult ON \n> rundefinition.id_rundefinition = \n> runresult.rundefinition_id_rundefinition) \n> INNER JOIN runresult_has_catalogtable ON runresult.id_runresult =\n> runresult_has_catalogtable.runresult_id_runresult) \n> INNER JOIN runresult_has_catalogtable AS \n> runresult_has_catalogtable_1 ON runresult.id_runresult =\n> runresult_has_catalogtable_1.runresult_id_runresult) \n> INNER JOIN runresult_has_catalogtable AS \n> runresult_has_catalogtable_2 ON runresult.id_runresult =\n> runresult_has_catalogtable_2.runresult_id_runresult) \n> INNER JOIN measure ON runresult.measure_id_measure = \n> measure.id_measure WHERE \n> (((runresult_has_catalogtable.catalogtable_id_catalogtable)=52) \n> AND ((runresult_has_catalogtable_1.catalogtable_id_catalogtable)=54) \n> AND ((runresult_has_catalogtable_2.catalogtable_id_catalogtable)=55) \n> AND ((runresult_has_catalogtable.value)=15806) \n> AND ((runresult_has_catalogtable_1.value)=1) \n> AND ((runresult_has_catalogtable_2.value) In (21,22,23,24)) \n> AND ((rundefinition.id_rundefinition)=10106));\n> \n> 'Nested Loop (cost=0.00..622582.70 rows=1 width=28) (actual \n> time=25.221..150.563 rows=22 loops=1)' ' -> Nested Loop \n> (cost=0.00..622422.24 rows=2 width=52) (actual \n> time=25.201..150.177 rows=22 loops=1)'\n> ' -> Nested Loop (cost=0.00..622415.97 rows=2 \n> width=32) (actual\n> time=25.106..149.768 rows=22 loops=1)'\n> ' -> Nested Loop (cost=0.00..621258.54 rows=15 \n> width=24)\n> (actual time=24.582..149.061 rows=30 loops=1)'\n> ' -> Index Scan using pk_rundefinition on \n> rundefinition\n> (cost=0.00..3.86 rows=1 width=4) (actual time=0.125..0.147 \n> rows=1 loops=1)'\n> ' Index Cond: (id_rundefinition = 10106)'\n> ' -> Nested Loop (cost=0.00..621254.54 rows=15\n> width=28) (actual time=24.443..148.784 rows=30 loops=1)'\n> ' -> Index Scan using\n> runresult_has_catalogtable_value on \n> runresult_has_catalogtable (cost=0.00..575069.35 
rows=14437 \n> width=8) (actual time=0.791..33.036 rows=10402 loops=1)'\n> ' Index Cond: (value = 15806)'\n> ' Filter: \n> (catalogtable_id_catalogtable =\n> 52)'\n> ' -> Index Scan using pk_runresult_ars on\n> runresult (cost=0.00..3.19 rows=1 width=20) (actual \n> time=0.007..0.007 rows=0 loops=10402)'\n> ' Index Cond: (runresult.id_runresult =\n> \"outer\".runresult_id_runresult)'\n> ' Filter: (10106 =\n> rundefinition_id_rundefinition)'\n> ' -> Index Scan using \n> runresult_has_catalogtable_id_runresult\n> on runresult_has_catalogtable runresult_has_catalogtable_1 \n> (cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 \n> rows=1 loops=30)'\n> ' Index Cond:\n> (runresult_has_catalogtable_1.runresult_id_runresult = \n> \"outer\".runresult_id_runresult)'\n> ' Filter: ((catalogtable_id_catalogtable = \n> 54) AND (value\n> = 1))'\n> ' -> Index Scan using pk_measure on measure \n> (cost=0.00..3.12 rows=1\n> width=28) (actual time=0.008..0.010 rows=1 loops=22)'\n> ' Index Cond: (\"outer\".measure_id_measure =\n> measure.id_measure)'\n> ' -> Index Scan using \n> runresult_has_catalogtable_id_runresult on \n> runresult_has_catalogtable runresult_has_catalogtable_2 \n> (cost=0.00..79.42 rows=65 width=8) (actual time=0.007..0.010 \n> rows=1 loops=22)'\n> ' Index Cond: \n> (runresult_has_catalogtable_2.runresult_id_runresult =\n> \"outer\".runresult_id_runresult)'\n> ' Filter: ((catalogtable_id_catalogtable = 55) AND \n> ((value = 21) OR\n> (value = 22) OR (value = 23) OR (value = 24)))'\n> 'Total runtime: 150.863 ms'\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \nThe information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer\n", "msg_date": "Thu, 8 Dec 2005 13:47:13 -0500", "msg_from": "\"Dmitri Bichko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joining 2 tables with 300 million rows" } ]
[ { "msg_contents": "Hi, \n\nThe thing is, although it shows 0.15 seconds, when I run the actual query,\nit takes around 40-45 seconds (sorry I forgot to mention that). And then\nsometimes it depends on data. Some parameters have very less number of\nrecords, and others have lot more. I dont know how to read the \"explan\"\nresults very well, but looked like there were no sequential scans and it\nonly used indexes. \n\nAlso, another problem is, the second time I run this query, it returns it\nfrom cache I believe. So the second time I run it, it returns in like 2\nseconds or \nso !\n\nThats why I was worrying if joining 2 tables like that is even advisable at\nall ...\n\nThanks,\nAmit\n\n-----Original Message-----\nFrom: Dmitri Bichko [mailto:[email protected]]\nSent: Thursday, December 08, 2005 1:47 PM\nTo: Amit V Shah; [email protected]\nSubject: Re: [PERFORM] Joining 2 tables with 300 million rows\n\n\nWhat's the problem? You are joining two 300 million row tables in 0.15\nof a second - seems reasonable.\n\nDmitri\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Amit V Shah\n> Sent: Thursday, December 08, 2005 11:59 AM\n> To: '[email protected]'\n> Subject: [PERFORM] Joining 2 tables with 300 million rows\n> \n> \n> Hi all,\n> \n> First of all, please pardon if the question is dumb! Is it \n> even feasible or normal to do such a thing ! This query is \n> needed by a webpage so needs to be lightning fast. Anything \n> beyond 2-3 seconds is unacceptable performance.\n> \n> I have two tables\n> \n> CREATE TABLE runresult\n> (\n> id_runresult int8 NOT NULL,\n> rundefinition_id_rundefinition int4 NOT NULL,\n> measure_id_measure int4 NOT NULL,\n> value float4 NOT NULL,\n> \"sequence\" varchar(20) NOT NULL,\n> CONSTRAINT pk_runresult_ars PRIMARY KEY (id_runresult),\n> ) \n> \n> \n> CREATE TABLE runresult_has_catalogtable\n> (\n> runresult_id_runresult int8 NOT NULL,\n> catalogtable_id_catalogtable int4 NOT NULL,\n> value int4 NOT NULL,\n> CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY \n> (runresult_id_runresult, catalogtable_id_catalogtable, value)\n> CONSTRAINT fk_temp FOREIGN KEY (runresult_id_runresult) REFERENCES\n> runresult(id_runresult) ON UPDATE RESTRICT ON DELETE RESTRICT\n> ) \n> \n> Each table has around 300 million records (will grow to \n> probably billions). 
Below is the query and the explain analyze --\n> \n> explain analyze SELECT measure.description, runresult.value \n> FROM ((((rundefinition INNER JOIN runresult ON \n> rundefinition.id_rundefinition = \n> runresult.rundefinition_id_rundefinition) \n> INNER JOIN runresult_has_catalogtable ON runresult.id_runresult =\n> runresult_has_catalogtable.runresult_id_runresult) \n> INNER JOIN runresult_has_catalogtable AS \n> runresult_has_catalogtable_1 ON runresult.id_runresult =\n> runresult_has_catalogtable_1.runresult_id_runresult) \n> INNER JOIN runresult_has_catalogtable AS \n> runresult_has_catalogtable_2 ON runresult.id_runresult =\n> runresult_has_catalogtable_2.runresult_id_runresult) \n> INNER JOIN measure ON runresult.measure_id_measure = \n> measure.id_measure WHERE \n> (((runresult_has_catalogtable.catalogtable_id_catalogtable)=52) \n> AND ((runresult_has_catalogtable_1.catalogtable_id_catalogtable)=54) \n> AND ((runresult_has_catalogtable_2.catalogtable_id_catalogtable)=55) \n> AND ((runresult_has_catalogtable.value)=15806) \n> AND ((runresult_has_catalogtable_1.value)=1) \n> AND ((runresult_has_catalogtable_2.value) In (21,22,23,24)) \n> AND ((rundefinition.id_rundefinition)=10106));\n> \n> 'Nested Loop (cost=0.00..622582.70 rows=1 width=28) (actual \n> time=25.221..150.563 rows=22 loops=1)' ' -> Nested Loop \n> (cost=0.00..622422.24 rows=2 width=52) (actual \n> time=25.201..150.177 rows=22 loops=1)'\n> ' -> Nested Loop (cost=0.00..622415.97 rows=2 \n> width=32) (actual\n> time=25.106..149.768 rows=22 loops=1)'\n> ' -> Nested Loop (cost=0.00..621258.54 rows=15 \n> width=24)\n> (actual time=24.582..149.061 rows=30 loops=1)'\n> ' -> Index Scan using pk_rundefinition on \n> rundefinition\n> (cost=0.00..3.86 rows=1 width=4) (actual time=0.125..0.147 \n> rows=1 loops=1)'\n> ' Index Cond: (id_rundefinition = 10106)'\n> ' -> Nested Loop (cost=0.00..621254.54 rows=15\n> width=28) (actual time=24.443..148.784 rows=30 loops=1)'\n> ' -> Index Scan using\n> runresult_has_catalogtable_value on \n> runresult_has_catalogtable (cost=0.00..575069.35 rows=14437 \n> width=8) (actual time=0.791..33.036 rows=10402 loops=1)'\n> ' Index Cond: (value = 15806)'\n> ' Filter: \n> (catalogtable_id_catalogtable =\n> 52)'\n> ' -> Index Scan using pk_runresult_ars on\n> runresult (cost=0.00..3.19 rows=1 width=20) (actual \n> time=0.007..0.007 rows=0 loops=10402)'\n> ' Index Cond: (runresult.id_runresult =\n> \"outer\".runresult_id_runresult)'\n> ' Filter: (10106 =\n> rundefinition_id_rundefinition)'\n> ' -> Index Scan using \n> runresult_has_catalogtable_id_runresult\n> on runresult_has_catalogtable runresult_has_catalogtable_1 \n> (cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 \n> rows=1 loops=30)'\n> ' Index Cond:\n> (runresult_has_catalogtable_1.runresult_id_runresult = \n> \"outer\".runresult_id_runresult)'\n> ' Filter: ((catalogtable_id_catalogtable = \n> 54) AND (value\n> = 1))'\n> ' -> Index Scan using pk_measure on measure \n> (cost=0.00..3.12 rows=1\n> width=28) (actual time=0.008..0.010 rows=1 loops=22)'\n> ' Index Cond: (\"outer\".measure_id_measure =\n> measure.id_measure)'\n> ' -> Index Scan using \n> runresult_has_catalogtable_id_runresult on \n> runresult_has_catalogtable runresult_has_catalogtable_2 \n> (cost=0.00..79.42 rows=65 width=8) (actual time=0.007..0.010 \n> rows=1 loops=22)'\n> ' Index Cond: \n> (runresult_has_catalogtable_2.runresult_id_runresult =\n> \"outer\".runresult_id_runresult)'\n> ' Filter: ((catalogtable_id_catalogtable = 55) AND \n> ((value = 21) OR\n> (value = 
22) OR (value = 23) OR (value = 24)))'\n> 'Total runtime: 150.863 ms'\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Thu, 8 Dec 2005 17:01:01 -0500 ", "msg_from": "Amit V Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joining 2 tables with 300 million rows" }, { "msg_contents": "\nOn Dec 8, 2005, at 5:01 PM, Amit V Shah wrote:\n\n> Hi,\n>\n> The thing is, although it shows 0.15 seconds, when I run the actual \n> query,\n> it takes around 40-45 seconds (sorry I forgot to mention that). And \n> then\n> sometimes it depends on data. Some parameters have very less number of\n> records, and others have lot more. I dont know how to read the \n> \"explan\"\n> results very well, but looked like there were no sequential scans \n> and it\n> only used indexes.\n>\n\nThe planner will look at the data you used and it may decide to \nswitch the plan if it realizes your're quering a very frequent value.\n\nAnother thing that may be a factor is the network - when doing \nexplain analyze it doesn't have to transfer the dataset to the client.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 9 Dec 2005 10:32:12 -0500", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joining 2 tables with 300 million rows" } ]
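One way to separate Jeff's two explanations (a different plan for other parameter values versus time spent shipping the result set) is to time a server-side-only run with psql's \timing on. This is only a diagnostic sketch; it reuses the table and column names quoted in the thread but cuts the five-way join down to a single catalogtable join for brevity:

EXPLAIN ANALYZE
SELECT count(*)
FROM runresult r
JOIN runresult_has_catalogtable rhc
  ON rhc.runresult_id_runresult = r.id_runresult
WHERE r.rundefinition_id_rundefinition = 10106
  AND rhc.catalogtable_id_catalogtable = 52
  AND rhc.value = 15806;

-- the same statement executed for real, but reduced to one output row so
-- that network transfer of the result set is taken out of the picture
SELECT count(*) FROM (
    SELECT r.id_runresult
    FROM runresult r
    JOIN runresult_has_catalogtable rhc
      ON rhc.runresult_id_runresult = r.id_runresult
    WHERE r.rundefinition_id_rundefinition = 10106
      AND rhc.catalogtable_id_catalogtable = 52
      AND rhc.value = 15806
) AS sub;

If both of these stay fast while the application's full query does not, the missing time is going into result transfer or into a different plan chosen for more frequent parameter values.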
[ { "msg_contents": "Hi all,\n\nI have a problem with a query which doeson't want to use indexes. I \ntried to create different indexes but nothing help. Can anyone suggest \nwhat index I need.\nThis query is executed 1.5Milion times per day and I need it to be veri \nfast. I made my test on 8.0.0 beta but the production database is still \n7.4.6 so i need suggestions for 7.4.6.\nI will post the table with the indexes and the query plans.\niplog=# \\d croute\n Table \"public.croute\"\n Column | Type | Modifiers\n-----------------+--------------------------+-----------\n confid | integer |\n network | cidr |\n comment | text |\n router | text |\n port | text |\nvalid_at | timestamp with time zone |\n archived_at | timestamp with time zone |\nIndexes:\n \"croute_netwo\" btree (network) WHERE confid > 0 AND archived_at IS NULL\n \"croute_netwokr_valid_at\" btree (network, valid_at)\n \"croute_network\" btree (network) WHERE archived_at IS NULL\n \"croute_network_all\" btree (network)\n\n\niplog=# select version();\n version\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.0.0beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) \n3.3.2 (Mandrake Linux 10.0 3.3.2-6mdk)\n(1 row)\n\n!!!!!!!!!!!!THIS IS THE QUERY!!!!!!!!!!!!!!!!!\ncustomer=> explain analyze SELECT *\ncustomer-> FROM croute\ncustomer-> WHERE '193.68.0.8/32' <<= \nnetwork AND\ncustomer-> (archived_at is NULL \nOR archived_at > '17-11-2005') AND\ncustomer-> valid_at < \n'1-12-2005'::date AND\ncustomer-> confid > 0;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on croute (cost=0.00..441.62 rows=413 width=102) (actual \ntime=14.131..37.515 rows=1 loops=1)\n Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS \nNULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time \nzone)) AND (valid_at < ('2005-12-01'::date)::timestamp with time zone) \nAND (confid > 0))\n Total runtime: 37.931 ms\n(3 rows)\n\ncustomer=> select count(*) from croute;\n count\n-------\n 10066\n(1 row)\nThis is the result of the query:\nconfid | network | comment | router | port | \nvalid_at | archived_at |\n-------+---------------+---------+------+----+-------------------------+-----------+\n 19971 | xx.xx.xx.xx/32 | xxxxx | ? | ? | 2005-03-11 \n00:00:00+02 | |\n(1 row)\nAnd last I try to stop the sequance scan but it doesn't help. 
I suppose \nI don't have the right index.\niplog=# set enable_seqscan = off;\nSET\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.8/32' <<= \nnetwork AND\niplog-# (archived_at is NULL OR \narchived_at > '17-11-2005') AND\niplog-# valid_at < \n'1-12-2005'::date AND\niplog-# confid > 0;\n \nQUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on croute (cost=100000000.00..100000780.64 rows=1030 \nwidth=103) (actual time=29.593..29.819 rows=1 loops=1)\n Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS \nNULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time \nzone)) AND (valid_at < '2005-12-01'::date) AND (confid > 0))\n Total runtime: 29.931 ms\n(3 rows)\n\nI try creating one last index on all fields but it doesn't help.\niplog=# CREATE INDEX croute_all on \ncroute(network,archived_at,valid_at,confid);\nCREATE INDEX\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.8/32' <<= \nnetwork AND\niplog-# (archived_at is NULL OR \narchived_at > '17-11-2005') AND\niplog-# valid_at < \n'1-12-2005'::date AND\niplog-# confid > 0;\n \nQUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on croute (cost=100000000.00..100000780.64 rows=1030 \nwidth=103) (actual time=29.626..29.879 rows=1 loops=1)\n Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS \nNULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time \nzone)) AND (valid_at < '2005-12-01'::date) AND (confid > 0))\n Total runtime: 30.060 ms\n(3 rows)\n\n\nThanks in advance to all.\n\nKaloyan Iliev\n\n", "msg_date": "Fri, 09 Dec 2005 11:36:10 +0200", "msg_from": "Kaloyan Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Query not using index" }, { "msg_contents": "On 12/9/05, Kaloyan Iliev <[email protected]> wrote:\n> Hi all,\n>\n> I have a problem with a query which doeson't want to use indexes. I\n> tried to create different indexes but nothing help. Can anyone suggest\n> what index I need.\n> This query is executed 1.5Milion times per day and I need it to be veri\n> fast. 
I made my test on 8.0.0 beta but the production database is still\n> 7.4.6 so i need suggestions for 7.4.6.\n> I will post the table with the indexes and the query plans.\n> iplog=# \\d croute\n> Table \"public.croute\"\n> Column | Type | Modifiers\n> -----------------+--------------------------+-----------\n> confid | integer |\n> network | cidr |\n> comment | text |\n> router | text |\n> port | text |\n> valid_at | timestamp with time zone |\n> archived_at | timestamp with time zone |\n> Indexes:\n> \"croute_netwo\" btree (network) WHERE confid > 0 AND archived_at IS NULL\n> \"croute_netwokr_valid_at\" btree (network, valid_at)\n> \"croute_network\" btree (network) WHERE archived_at IS NULL\n> \"croute_network_all\" btree (network)\n>\n>\n> iplog=# select version();\n> version\n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.0.0beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n> 3.3.2 (Mandrake Linux 10.0 3.3.2-6mdk)\n> (1 row)\n>\n> !!!!!!!!!!!!THIS IS THE QUERY!!!!!!!!!!!!!!!!!\n> customer=> explain analyze SELECT *\n> customer-> FROM croute\n> customer-> WHERE '193.68.0.8/32' <<=\n> network AND\n> customer-> (archived_at is NULL\n> OR archived_at > '17-11-2005') AND\n> customer-> valid_at <\n> '1-12-2005'::date AND\n> customer-> confid > 0;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on croute (cost=0.00..441.62 rows=413 width=102) (actual\n> time=14.131..37.515 rows=1 loops=1)\n> Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS\n> NULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time\n> zone)) AND (valid_at < ('2005-12-01'::date)::timestamp with time zone)\n> AND (confid > 0))\n> Total runtime: 37.931 ms\n> (3 rows)\n>\n> customer=> select count(*) from croute;\n> count\n> -------\n> 10066\n> (1 row)\n> This is the result of the query:\n> confid | network | comment | router | port |\n> valid_at | archived_at |\n> -------+---------------+---------+------+----+-------------------------+-----------+\n> 19971 | xx.xx.xx.xx/32 | xxxxx | ? | ? | 2005-03-11\n> 00:00:00+02 | |\n> (1 row)\n> And last I try to stop the sequance scan but it doesn't help. 
I suppose\n> I don't have the right index.\n> iplog=# set enable_seqscan = off;\n> SET\n> iplog=# explain analyze SELECT *\n> iplog-# FROM croute\n> iplog-# WHERE '193.68.0.8/32' <<=\n> network AND\n> iplog-# (archived_at is NULL OR\n> archived_at > '17-11-2005') AND\n> iplog-# valid_at <\n> '1-12-2005'::date AND\n> iplog-# confid > 0;\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on croute (cost=100000000.00..100000780.64 rows=1030\n> width=103) (actual time=29.593..29.819 rows=1 loops=1)\n> Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS\n> NULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time\n> zone)) AND (valid_at < '2005-12-01'::date) AND (confid > 0))\n> Total runtime: 29.931 ms\n> (3 rows)\n>\n> I try creating one last index on all fields but it doesn't help.\n> iplog=# CREATE INDEX croute_all on\n> croute(network,archived_at,valid_at,confid);\n> CREATE INDEX\n> iplog=# explain analyze SELECT *\n> iplog-# FROM croute\n> iplog-# WHERE '193.68.0.8/32' <<=\n> network AND\n> iplog-# (archived_at is NULL OR\n> archived_at > '17-11-2005') AND\n> iplog-# valid_at <\n> '1-12-2005'::date AND\n> iplog-# confid > 0;\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on croute (cost=100000000.00..100000780.64 rows=1030\n> width=103) (actual time=29.626..29.879 rows=1 loops=1)\n> Filter: (('193.68.0.8/32'::cidr <<= network) AND ((archived_at IS\n> NULL) OR (archived_at > '2005-11-17 00:00:00+02'::timestamp with time\n> zone)) AND (valid_at < '2005-12-01'::date) AND (confid > 0))\n> Total runtime: 30.060 ms\n> (3 rows)\n>\n>\n> Thanks in advance to all.\n>\n> Kaloyan Iliev\n>\n>\n\nIn oracle you can use this instead...\n\nSELECT * FROM croute\nWHERE '193.68.0.8/32' <<= network\n AND archived_at is NULL\n AND valid_at < '1-12-2005'::date\n AND confid > 0;\nUNION\nSELECT * FROM croute\nWHERE '193.68.0.8/32' <<= network\n AND archived_at > '17-11-2005'::date\n AND valid_at < '1-12-2005'::date\n AND confid > 0;\n\n\nalthough i think that your query can make use of bitmap index in 8.1\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Fri, 9 Dec 2005 10:06:07 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using index" }, { "msg_contents": "Hi all,\nThanks for the reply. I made some more test and find out that the \nproblem is with the <<= operator for the network type. Can I create \nindex which to work with <<=. Because if I use = the index is used. 
But \nnot for <<=.\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.10/32' <<= \nnetwork;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Seq Scan on croute (cost=0.00..707.27 rows=4891 width=103) (actual \ntime=10.313..29.621 rows=2 loops=1)\n Filter: ('193.68.0.10/32'::cidr <<= network)\n Total runtime: 29.729 ms\n(3 rows)\n\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.10/32' = network;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using croute_network_all on croute (cost=0.00..17.99 rows=4 \nwidth=103) (actual time=0.053..0.059 rows=1 loops=1)\n Index Cond: ('193.68.0.10/32'::cidr = network)\n Total runtime: 0.167 ms\n(3 rows)\n\nWaiting for replies.\n\nThanks to all in advance.\n\nKaloyan Iliev\n", "msg_date": "Fri, 09 Dec 2005 18:38:42 +0200", "msg_from": "Kaloyan Iliev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using index" }, { "msg_contents": "Hi all,\nThanks for the reply. I made some more test and find out that the\nproblem is with the <<= operator for the network type. Can I create\nindex which to work with <<=. Because if I use = the index is used. But\nnot for <<=.\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.10/32' <<=\nnetwork;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\nSeq Scan on croute (cost=0.00..707.27 rows=4891 width=103) (actual\ntime=10.313..29.621 rows=2 loops=1)\n Filter: ('193.68.0.10/32'::cidr <<= network)\nTotal runtime: 29.729 ms\n(3 rows)\n\niplog=# explain analyze SELECT *\niplog-# FROM croute\niplog-# WHERE '193.68.0.10/32' = network;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using croute_network_all on croute (cost=0.00..17.99 rows=4\nwidth=103) (actual time=0.053..0.059 rows=1 loops=1)\n Index Cond: ('193.68.0.10/32'::cidr = network)\nTotal runtime: 0.167 ms\n(3 rows)\n\nWaiting for replies.\n\nThanks to all in advance.\n\nKaloyan Iliev\n\n", "msg_date": "Fri, 09 Dec 2005 18:39:09 +0200", "msg_from": "Kaloyan Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query not using index" } ]
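The btree operator classes behind these indexes only know about =, <, <=, >, >= on cidr, so <<= can never use them directly. One workaround, sketched against the croute schema above and assuming generate_series() is available (8.0 or later; on 7.4 the 33 candidate prefixes would have to be spelled out or produced by a small function), is to enumerate every prefix that could contain the address and probe the index with plain equality:

SELECT c.*
FROM croute c,
     (SELECT network(set_masklen('193.68.0.8'::inet, i))::cidr AS supernet
      FROM generate_series(0, 32) AS g(i)) AS s
WHERE c.network = s.supernet
  AND (c.archived_at IS NULL OR c.archived_at > '2005-11-17')
  AND c.valid_at < '2005-12-01'
  AND c.confid > 0;

Each of the 33 candidates is a single index lookup, so this should stay fast regardless of table size. A GiST-based address type such as the ip4r contrib module is another option if installing extra modules is acceptable.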
[ { "msg_contents": "> one other note, you probably don't want to use all the disks in a raid10 \n> array, you probably want to split a pair of them off into a seperate\n> raid1 array and put your WAL on it.\n\nIs a RAID 1 array of two disks sufficient for WAL? What's a typical\nsetup for a high performance PostgreSQL installation? RAID 1 for WAL\nand RAID 10 for data? \n\nI've read that splitting the WAL and data offers huge performance\nbenefits. How much additional benefit is gained by moving indexes to\nanother RAID array? Would you typically set the indexes RAID array up\nas RAID 1 or 10? \n\n\n", "msg_date": "Fri, 09 Dec 2005 09:15:25 -0500", "msg_from": "\"Jeremy Haile\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: opinion on disk speed" }, { "msg_contents": "On Fri, 09 Dec 2005 09:15:25 -0500\n\"Jeremy Haile\" <[email protected]> wrote:\n\n> > one other note, you probably don't want to use all the disks in a\n> > raid10 array, you probably want to split a pair of them off into a\n> > seperate raid1 array and put your WAL on it.\n> \n> Is a RAID 1 array of two disks sufficient for WAL? What's a typical\n> setup for a high performance PostgreSQL installation? RAID 1 for WAL\n> and RAID 10 for data? \n> \n> I've read that splitting the WAL and data offers huge performance\n> benefits. How much additional benefit is gained by moving indexes to\n> another RAID array? Would you typically set the indexes RAID array up\n> as RAID 1 or 10? \n\n Yes most people put the WAL on a RAID 1 and use all the remaining\n disks in RAID 10 for data. \n\n Whether or not moving your indexes onto a different RAID array is\n worthwhile is harder to judge. If your indexes are small enough\n that they will usually be in ram, but your data is to large to\n fit then having the extra spindles available on the data partition\n is probably better. \n\n As always, it is probably best to test both configurations to see\n which is optimal for your particular application and setup. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 9 Dec 2005 09:58:36 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on disk speed" } ]
[ { "msg_contents": "Hello,\n \n I would like to know which is the best configuration to use 4 scsi drives with a pg 8.1 server.\n \n Configuring them as a RAID10 set seems a good choice but now I�m figuring another configuration:\nSCSI drive 1: operational system\nSCSI drive 2: pg_xlog\nSCSI drive 3: data\nSCSI drive 4: index\n \n I know the difference between them when you analyze risks of loosing data but how about performance?\n \n What should be better?\n \n Obs.: Our system uses always an index for every access.. (enable_seqscan(false))\n \n Thanks in advance!\n \n Benkendorf\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\nHello,   I would like to know which is the best configuration to use 4 scsi drives with a pg 8.1 server.   Configuring them as a RAID10 set seems a good choice but now I�m figuring another configuration:SCSI drive 1: operational systemSCSI drive 2: pg_xlogSCSI drive 3: dataSCSI drive 4: index   I know the difference between them when you analyze risks of loosing data but how about performance?   What should be better?   Obs.: Our system uses always an index for every access.. (enable_seqscan(false))   Thanks in advance!   Benkendorf\n \nYahoo! doce lar. Fa�a do Yahoo! sua homepage.", "msg_date": "Sat, 10 Dec 2005 11:34:23 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Is RAID10 the best choice?" }, { "msg_contents": "Personaly I would split into two RAID 1s. One for pg_xlog, one for\nthe rest. This gives probably the best performance/reliability\ncombination.\n\nAlex.\n\nOn 12/10/05, Carlos Benkendorf <[email protected]> wrote:\n> Hello,\n>\n> I would like to know which is the best configuration to use 4 scsi drives\n> with a pg 8.1 server.\n>\n> Configuring them as a RAID10 set seems a good choice but now I´m figuring\n> another configuration:\n> SCSI drive 1: operational system\n> SCSI drive 2: pg_xlog\n> SCSI drive 3: data\n> SCSI drive 4: index\n>\n> I know the difference between them when you analyze risks of loosing data\n> but how about performance?\n>\n> What should be better?\n>\n> Obs.: Our system uses always an index for every access..\n> (enable_seqscan(false))\n>\n> Thanks in advance!\n>\n> Benkendorf\n>\n> ________________________________\n> Yahoo! doce lar. Faça do Yahoo! sua homepage.\n>\n>\n", "msg_date": "Sun, 11 Dec 2005 23:45:51 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is RAID10 the best choice?" } ]
[ { "msg_contents": "Hi,\n \n Sometime ago I worked in an implantation project that uses postgresql and I remember than the software house recommended us to use seqscan off...\nI was not very sure, but I thought the best way should be set seqscan on and let postgresql choose the best access plan (index or seqscan). Even against the other team members I changed the configuration to seqscan on and the system didn�t worked anymore.\n \n Studying better the reasons I verified that applications were expecting data in primary index order but with seqscan ON sometimes postgresql didn�t use an index and naturally data came without order.\n \n I suggested changing the application and including a order by clause... but\nthe software house didn�t make it because they said the system was originally designed for oracle and they did not need to use the ORDER BY clause with Oracle and even so the data were always retrieved in primary index order.\n \n I�m thinking with myself ... what kind of problems will they have in the future?\n \n I think this kind of configuration is very dependent of clustered tables... Am I right?\n \n Best regards!\n\n Engelmann.\n\n__________________________________________________\nFa�a liga��es para outros computadores com o novo Yahoo! Messenger \nhttp://br.beta.messenger.yahoo.com/ \nHi,   Sometime ago I worked in an implantation project that uses postgresql and I remember than the software house recommended us to use seqscan off...I was not very sure, but I thought the best way should be set seqscan on and let postgresql choose the best access plan (index or seqscan). Even against the other team members I changed the configuration to seqscan on and the system didn�t worked anymore.   Studying better the reasons I verified that applications were expecting data in primary index order but with seqscan ON sometimes postgresql didn�t use an index and naturally data came without order.   I suggested changing the application and including  a order by clause... butthe software house didn�t make it because they said the system was originally designed for oracle and they did not need to use the ORDER BY clause with Oracle and even so the data\n were\n always retrieved in primary index order.   I�m thinking with myself ... what kind of problems will they have in the future?   I think this kind of configuration is very dependent of clustered tables... Am I right?   Best regards! Engelmann.__________________________________________________Fa�a liga��es para outros computadores com o novo Yahoo! Messenger http://br.beta.messenger.yahoo.com/", "msg_date": "Sat, 10 Dec 2005 15:29:37 +0000 (GMT)", "msg_from": "Henrique Engelmann <[email protected]>", "msg_from_op": true, "msg_subject": "Clustered tables and seqscan disabled" }, { "msg_contents": "Henrique Engelmann <[email protected]> writes:\n> I suggested changing the application and including a order by clause... but\n> the software house didn�t make it because they said the system was originally designed for oracle and they did not need to use the ORDER BY clause with Oracle and even so the data were always retrieved in primary index order.\n \n> I�m thinking with myself ... what kind of problems will they have in the future?\n\nIf you aren't working with these people any more, be glad. They are\nobviously utterly incompetent. The SQL standard is perfectly clear\nabout the matter: without ORDER BY, there is no guarantee about the\norder in which rows are retrieved. 
The fact that one specific\nimplementation might have chanced to produce the rows in desired order\n(under all the conditions they had bothered to test, which I bet wasn't\na lot) does not make their code correct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Dec 2005 11:53:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered tables and seqscan disabled " } ]
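A two-minute demonstration of Tom's point, for anyone who has to convince a vendor; only ORDER BY pins the output order, and a single UPDATE is enough to change the "natural" order a seqscan returns:

CREATE TABLE demo (id integer PRIMARY KEY, txt text);
INSERT INTO demo VALUES (2, 'second');
INSERT INTO demo VALUES (1, 'first');

SELECT * FROM demo;              -- order depends on the plan and physical layout
UPDATE demo SET txt = 'moved' WHERE id = 2;
SELECT * FROM demo;              -- under a seqscan the updated row typically comes back last
SELECT * FROM demo ORDER BY id;  -- the only guaranteed ordering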
[ { "msg_contents": "Hi,\n\nIs it possible to run a shell script, passing values of fields to it, in \na Postgres function ?\n\nYves Vindevogel", "msg_date": "Sat, 10 Dec 2005 16:55:56 +0100", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Executing a shell command from a PG function" }, { "msg_contents": "On 12/10/05, Yves Vindevogel <[email protected]> wrote:\n> Hi,\n>\n> Is it possible to run a shell script, passing values of fields to it, in\n> a Postgres function ?\n>\n> Yves Vindevogel\n>\n\nsearch for the pl/sh language\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Sat, 10 Dec 2005 11:14:14 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Executing a shell command from a PG function" }, { "msg_contents": "On Sat, Dec 10, 2005 at 04:55:56PM +0100, Yves Vindevogel wrote:\n> Is it possible to run a shell script, passing values of fields to it, in \n> a Postgres function ?\n\nNot directly from SQL or PL/pgSQL functions, but you can execute\nshell commands with the untrusted versions of PL/Perl, PL/Tcl,\nPL/Python, etc. There's even a PL/sh:\n\nhttp://pgfoundry.org/projects/plsh/\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 10 Dec 2005 09:22:15 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Executing a shell command from a PG function" }, { "msg_contents": "Thanks Michael and Jaime. The pg/sh thing is probably what I was \nlooking for.\nTnx\n\n\nMichael Fuhr wrote:\n\n>On Sat, Dec 10, 2005 at 04:55:56PM +0100, Yves Vindevogel wrote:\n> \n>\n>>Is it possible to run a shell script, passing values of fields to it, in \n>>a Postgres function ?\n>> \n>>\n>\n>Not directly from SQL or PL/pgSQL functions, but you can execute\n>shell commands with the untrusted versions of PL/Perl, PL/Tcl,\n>PL/Python, etc. There's even a PL/sh:\n>\n>http://pgfoundry.org/projects/plsh/\n>\n> \n>", "msg_date": "Sat, 10 Dec 2005 17:53:39 +0100", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Executing a shell command from a PG function" } ]
[ { "msg_contents": "Can indexes be used for bit-filtering queries? For example:\n\ncreate table tt (\n flags integer not null default 0,\n str varchar\n);\n\nselect * from tt where (flags & 16) != 0;\n\nI suspected radix trees could be used for this but it seems it doesn't \nwork that way.\n\nIf not, is there a way of quickly filtering by such \"elements of a set\" \nthat doesn't involve creating 32 boolean fields (which would also need \nto be pretty uselessly indexed separately)?\n\nWould strings and regular expressions work?\n", "msg_date": "Sat, 10 Dec 2005 18:12:58 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmasks" }, { "msg_contents": "Ivan Voras <[email protected]> writes:\n\n> select * from tt where (flags & 16) != 0;\n> \n> I suspected radix trees could be used for this but it seems it doesn't work\n> that way.\n\nYou would need a gist index method to make this work. I actually worked on one\nfor a while and had it working. But it wasn't really finished. If there's\ninterest I could finish it up and put it up somewhere like pgfoundry.\n\n> If not, is there a way of quickly filtering by such \"elements of a set\" that\n> doesn't involve creating 32 boolean fields (which would also need to be pretty\n> uselessly indexed separately)?\n\nYou could create 32 partial indexes on some other column which wouldn't really\ntake much more space than a single index on that other column. But that won't\nlet you combine them effectively as a gist index would.\n\n-- \ngreg\n\n", "msg_date": "11 Dec 2005 01:12:08 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmasks" } ]
[ { "msg_contents": "Hi,\n \n I would like to use autovacuum but is not too much expensive collecting row level statistics?\n \n Are there some numbers that I could use?\n \n Thanks in advance!\n \n Benkendorf\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\nHi,   I would like to use autovacuum but is not too much expensive collecting row level statistics?   Are there some numbers that I could use?   Thanks in advance!   Benkendorf\n \nYahoo! doce lar. Fa�a do Yahoo! sua homepage.", "msg_date": "Sun, 11 Dec 2005 11:53:36 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "How much expensive are row level statistics?" }, { "msg_contents": "On Sun, Dec 11, 2005 at 11:53:36AM +0000, Carlos Benkendorf wrote:\n> I would like to use autovacuum but is not too much expensive\n> collecting row level statistics?\n\nThe cost depends on your usage patterns. I did tests with one of\nmy applications and saw no significant performance difference for\nsimple selects, but a series of insert/update/delete operations ran\nabout 30% slower when block- and row-level statistics were enabled\nversus when the statistics collector was disabled.\n\n-- \nMichael Fuhr\n", "msg_date": "Sun, 11 Dec 2005 12:44:43 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "Michael Fuhr wrote:\n> On Sun, Dec 11, 2005 at 11:53:36AM +0000, Carlos Benkendorf wrote:\n> > I would like to use autovacuum but is not too much expensive\n> > collecting row level statistics?\n> \n> The cost depends on your usage patterns. I did tests with one of\n> my applications and saw no significant performance difference for\n> simple selects, but a series of insert/update/delete operations ran\n> about 30% slower when block- and row-level statistics were enabled\n> versus when the statistics collector was disabled.\n\nThis series of i/u/d operations ran with no sleep in between, right?\nI wouldn't expect a normal OLTP operation to be like this. (If it is\nyou have a serious shortage of hardware ...)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 12 Dec 2005 10:23:42 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "On Mon, Dec 12, 2005 at 10:23:42AM -0300, Alvaro Herrera wrote:\n> Michael Fuhr wrote:\n> > The cost depends on your usage patterns. I did tests with one of\n> > my applications and saw no significant performance difference for\n> > simple selects, but a series of insert/update/delete operations ran\n> > about 30% slower when block- and row-level statistics were enabled\n> > versus when the statistics collector was disabled.\n> \n> This series of i/u/d operations ran with no sleep in between, right?\n> I wouldn't expect a normal OLTP operation to be like this. (If it is\n> you have a serious shortage of hardware ...)\n\nThere's no sleeping but there is some client-side processing between\ngroups of i/u/d operations. As I mentioned in another message, the\napplication reads a chunk of data from a stream, does a few i/u/d\noperations to update the database, and repeats several thousand times.\n\nThe hardware is old but it's adequate for this application. 
What\nkind of overhead would you expect?\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 12 Dec 2005 13:37:38 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" } ]
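For reference, what row-level statistics buy besides driving autovacuum is the per-table activity counters in the pg_stat views, which make questions like "which tables churn the most" answerable directly:

SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
LIMIT 10;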
[ { "msg_contents": "Paal, \n\n> I'm currently benchmarking several RDBMSs with respect to \n> analytical query performance on medium-sized multidimensional \n> data sets. The data set contains 30,000,000 fact rows evenly \n> distributed in a multidimensional space of 9 hierarchical \n> dimensions. Each dimension has 8000 members.\n\nCan you provide the schema and queries here please?\n\n> On Oracle the query runs in less than 3 seconds. All steps \n> have been taken to ensure that Oracle will apply star schema \n> optimization to the query (e.g. having loads of single-column \n> bitmap indexes). The query plan reveals that a bitmap merge \n> takes place before fact lookup.\n\nPostgres currently lacks a bitmap index, though 8.1 has a bitmap \"predicate merge\" in 8.1\n\nWe have recently completed an Oracle-like bitmap index that we will contribute shortly to Postgres and it performs very similarly to the \"other commercial databases\" version.\n\n> I have established similar conditions for the query in \n> PostgreSQL, and it runs in about 30 seconds. Again the CPU \n> utilization is high with no noticable I/O. The query plan is \n> of course very different from that of Oracle, since \n> PostgreSQL lacks the bitmap index merge operation. It narrows \n> down the result one dimension at a time, using the \n> single-column indexes provided. It is not an option for us to \n> provide multi-column indexes tailored to the specific query, \n> since we want full freedom as to which dimensions each query will use.\n\nThis sounds like a very good case for bitmap index, please forward the schema and queries.\n\n> Are these the results we should expect when comparing \n> PostgreSQL to Oracle for such queries, or are there special \n> optimization options for PostgreSQL that we may have \n> overlooked? (I wouldn't be suprised if there are, since I \n> spent at least 2 full days trying to trigger the star \n> optimization magic in my Oracle installation.)\n\nSee above.\n\n- Luke\n\n", "msg_date": "Sun, 11 Dec 2005 16:15:14 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" } ]
[ { "msg_contents": "Hello,\n\nI've got a table with ~60 Million rows and am having performance problems\nquerying it. Disks are setup as 4x10K SCSI 76GB, RAID 1+0. The table is\nbeing inserted into multiple times every second of the day, with no updates\nand every 2nd day we delete 1/60th of the data (as it becomes old). Vacuum\nanalyze is scheduled to run 3 times a day.\n\nQuery:\n\nselect sum(TOTAL_FROM) as TOTAL_IN, sum(TOTAL_TO) as TOTAL_OUT, SOURCE_MAC\nfrom PC_TRAFFIC where FK_DEVICE = 996 and TRAFFIC_DATE >= '2005-10-14\n00:00:00' and TRAFFIC_DATE <= '2005-11-13 23:59:59' group by SOURCE_MAC\norder by 1 desc\n\nTable:\n\nCREATE TABLE PC_TRAFFIC (\n PK_PC_TRAFFIC INTEGER NOT NULL,\n TRAFFIC_DATE TIMESTAMP NOT NULL,\n SOURCE_MAC CHAR(20) NOT NULL,\n DEST_IP CHAR(15),\n DEST_PORT INTEGER,\n TOTAL_TO DOUBLE PRECISION,\n TOTAL_FROM DOUBLE PRECISION,\n FK_DEVICE SMALLINT,\n PROTOCOL_TYPE SMALLINT\n);\n\nCREATE INDEX pc_traffic_pkidx ON pc_traffic (pk_pc_traffic);\nCREATE INDEX pc_traffic_idx3 ON pc_traffic (fk_device, traffic_date);\n\nPlan:\nSort (cost=76650.58..76650.58 rows=2 width=40)\n Sort Key: sum(total_from)\n -> HashAggregate (cost=76650.54..76650.57 rows=2 width=40)\n -> Bitmap Heap Scan on pc_traffic \n(cost=534.64..76327.03rows=43134 width=40)\n Recheck Cond: ((fk_device = 996) AND (traffic_date >=\n'2005-10-01 00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))\n -> Bitmap Index Scan on pc_traffic_idx3 \n(cost=0.00..534.64rows=43134 width=0)\n Index Cond: ((fk_device = 996) AND (traffic_date >=\n'2005-10-01 00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))\n(7 rows)\n\nCLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance:\n\nSort (cost=39886.65..39886.66 rows=2 width=40)\n Sort Key: sum(total_from)\n -> HashAggregate (cost=39886.61..39886.64 rows=2 width=40)\n -> Index Scan using pc_traffic_idx3 on pc_traffic (cost=\n0.00..39551.26 rows=44714 width=40)\n Index Cond: ((fk_device = 996) AND (traffic_date >=\n'2005-10-01 00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))\n(5 rows)\n\nHowever the clustering is only effective on the first shot. Because of the\nconstant usage of the table we can't perform a vacuum full nor any exclusive\nlock function.\n\nWould table partitioning/partial indexes help much? Partitioning on date\nrange doesn't make much sense for this setup, where a typical 1-month query\nspans both tables (as the billing month for the customer might start midway\nthrough a calendar month).\n\nNoting that the index scan was quicker than the bitmap, I'm trying to make\nthe indexes smaller/more likely to index scan. I have tried partitioning\nagainst fk_device, with 10 child tables. I'm using fk_device % 10 = 1,\nfk_device % 10 = 2, fk_device % 10 = 3, etc... 
as the check constraint.\n\nCREATE TABLE pc_traffic_0 (CHECK(FK_DEVICE % 10 = 0)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_1 (CHECK(FK_DEVICE % 10 = 1)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_2 (CHECK(FK_DEVICE % 10 = 2)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_3 (CHECK(FK_DEVICE % 10 = 3)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_4 (CHECK(FK_DEVICE % 10 = 4)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_5 (CHECK(FK_DEVICE % 10 = 5)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_6 (CHECK(FK_DEVICE % 10 = 6)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_7 (CHECK(FK_DEVICE % 10 = 7)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_8 (CHECK(FK_DEVICE % 10 = 8)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_9 (CHECK(FK_DEVICE % 10 = 9)) INHERITS (pc_traffic);\n\n... indexes now look like:\nCREATE INDEX pc_traffic_6_idx3 ON pc_traffic_6 (fk_device, traffic_date);\n\nTo take advantage of the query my SQL now has to include the mod operation\n(so the query planner picks up the correct child tables):\n\nselect sum(TOTAL_FROM) as TOTAL_IN, sum(TOTAL_TO) as TOTAL_OUT, SOURCE_MAC\nfrom PC_TRAFFIC where FK_DEVICE = 996 and FK_DEVICE % 10 = 6 and\nTRAFFIC_DATE >= '2005-10-14 00:00:00' and TRAFFIC_DATE <= '2005-11-13\n23:59:59' group by SOURCE_MAC order by 1 desc\n\nSorry I would show the plan but I'm rebuilding the dev database atm. It was\nfaster though and did pick up the correct child table. It was also a bitmap\nscan on the index IIRC.\n\nWould I be better off creating many partial indexes instead of multiple\ntables AND multiple indexes?\nAm I using a horrid method for partitioning the data? (% 10)\nShould there be that big of an improvement for multiple tables given that\nall the data is still stored on the same filesystem?\nAny advice on table splitting much appreciated.\n\nCheers,\n\nMike C.\n\nHello,I've got a table with ~60 Million rows and am having\nperformance problems querying it. Disks are setup as 4x10K SCSI 76GB,\nRAID 1+0. The table is being inserted into multiple times every second\nof the day, with no updates and every 2nd day we delete 1/60th of the\ndata (as it becomes old). 
Vacuum analyze is scheduled to run 3 times a\nday.Query:select sum(TOTAL_FROM) as TOTAL_IN,\nsum(TOTAL_TO) as TOTAL_OUT, SOURCE_MAC from PC_TRAFFIC where FK_DEVICE\n= 996 and TRAFFIC_DATE >= '2005-10-14 00:00:00' and TRAFFIC_DATE\n<= '2005-11-13 23:59:59' group by SOURCE_MAC order by 1 descTable:CREATE TABLE PC_TRAFFIC (     PK_PC_TRAFFIC  INTEGER NOT NULL,     TRAFFIC_DATE   TIMESTAMP NOT NULL,     SOURCE_MAC     CHAR(20) NOT NULL,\n     DEST_IP        CHAR(15),     DEST_PORT      INTEGER,     TOTAL_TO       DOUBLE PRECISION,     TOTAL_FROM     DOUBLE PRECISION,     FK_DEVICE      SMALLINT,     PROTOCOL_TYPE  SMALLINT);\nCREATE INDEX pc_traffic_pkidx ON pc_traffic (pk_pc_traffic);CREATE INDEX pc_traffic_idx3 ON pc_traffic (fk_device, traffic_date);Plan:Sort  (cost=76650.58..76650.58 rows=2 width=40)   Sort Key: sum(total_from)\n   ->  HashAggregate  (cost=76650.54..76650.57 rows=2 width=40)        \n->  Bitmap Heap Scan on\npc_traffic  (cost=534.64..76327.03 rows=43134 width=40)              \nRecheck Cond: ((fk_device = 996) AND (traffic_date >= '2005-10-01\n00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))              \n->  Bitmap Index Scan on\npc_traffic_idx3  (cost=0.00..534.64 rows=43134 width=0)                    \nIndex Cond: ((fk_device = 996) AND (traffic_date >= '2005-10-01\n00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))(7 rows)CLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance: Sort  (cost=39886.65..39886.66 rows=2 width=40)   Sort Key: sum(total_from)\n   ->  HashAggregate  (cost=39886.61..39886.64 rows=2 width=40)        \n->  Index Scan using pc_traffic_idx3 on\npc_traffic  (cost=0.00..39551.26 rows=44714 width=40)              \nIndex Cond: ((fk_device = 996) AND (traffic_date >= '2005-10-01\n00:00:00'::timestamp without time zone) AND (traffic_date <=\n'2005-10-31 23:59:59'::timestamp without time zone))(5 rows)However\nthe clustering is only effective on the first shot. Because of the\nconstant usage of the table we can't perform a vacuum full nor any\nexclusive lock function.Would table partitioning/partial\nindexes help much? Partitioning on date range doesn't make much sense\nfor this setup, where a typical 1-month query spans both tables (as the\nbilling month for the customer might start midway through a calendar\nmonth).Noting that the index scan was quicker than the bitmap,\nI'm trying to make the indexes smaller/more likely to index scan. I\nhave tried partitioning against fk_device, with 10 child tables. I'm\nusing fk_device % 10 = 1, fk_device % 10 = 2, fk_device % 10 = 3,\netc... as the check constraint.CREATE TABLE pc_traffic_0 (CHECK(FK_DEVICE % 10 = 0)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_1 (CHECK(FK_DEVICE % 10 = 1)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_2 (CHECK(FK_DEVICE % 10 = 2)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_3 (CHECK(FK_DEVICE % 10 = 3)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_4 (CHECK(FK_DEVICE % 10 = 4)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_5 (CHECK(FK_DEVICE % 10 = 5)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_6 (CHECK(FK_DEVICE % 10 = 6)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_7 (CHECK(FK_DEVICE % 10 = 7)) INHERITS (pc_traffic);CREATE TABLE pc_traffic_8 (CHECK(FK_DEVICE % 10 = 8)) INHERITS (pc_traffic);\nCREATE TABLE pc_traffic_9 (CHECK(FK_DEVICE % 10 = 9)) INHERITS (pc_traffic);... 
indexes now look like:CREATE INDEX pc_traffic_6_idx3 ON pc_traffic_6 (fk_device, traffic_date);To\ntake advantage of the query my SQL now has to include the mod operation\n(so the query planner picks up the correct child tables):select\nsum(TOTAL_FROM) as TOTAL_IN, sum(TOTAL_TO) as TOTAL_OUT, SOURCE_MAC\nfrom PC_TRAFFIC where FK_DEVICE = 996 and FK_DEVICE % 10 = 6 and\nTRAFFIC_DATE >= '2005-10-14 00:00:00' and TRAFFIC_DATE <=\n'2005-11-13 23:59:59' group by SOURCE_MAC order by 1 descSorry\nI would show the plan but I'm rebuilding the dev database atm. It was\nfaster though and did pick up the correct child table. It was also a\nbitmap scan on the index IIRC.Would I be better off creating many partial indexes instead of multiple tables AND multiple indexes?Am I using a horrid method for partitioning the data? (% 10) \nShould there be that big of an improvement for multiple tables given that all the data is still stored on the same filesystem?\nAny advice on table splitting much appreciated. Cheers,Mike C.", "msg_date": "Mon, 12 Dec 2005 15:07:59 +1300", "msg_from": "Mike C <[email protected]>", "msg_from_op": true, "msg_subject": "Table Partitions / Partial Indexes" }, { "msg_contents": "Mike C <[email protected]> writes:\n> CLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance:\n\nHow can you tell? Neither of these are EXPLAIN ANALYZE output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Dec 2005 21:39:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Partitions / Partial Indexes " }, { "msg_contents": "On 12/12/05, Tom Lane <[email protected]> wrote:\n>\n> Mike C <[email protected]> writes:\n> > CLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance:\n>\n> How can you tell? Neither of these are EXPLAIN ANALYZE output.\n>\n> regards, tom lane\n\n\n\nSorry that's a result of my bad record keeping. I've been keeping records of\nthe explain but not the analyze. IIRC the times dropped from ~25 seconds\ndown to ~8 seconds (using analyze).\n\nRegards,\n\nMike\n\nOn 12/12/05, Tom Lane <[email protected]> wrote:\nMike C <[email protected]> writes:> CLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance:How can you tell?  Neither of these are EXPLAIN ANALYZE output.\n                        regards,\ntom lane\n\nSorry that's a result of my bad record keeping. I've been keeping\nrecords of the explain but not the analyze. IIRC the times dropped from\n~25 seconds down to ~8 seconds (using analyze).\n\nRegards,\n\nMike", "msg_date": "Mon, 12 Dec 2005 15:49:10 +1300", "msg_from": "Mike C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table Partitions / Partial Indexes" }, { "msg_contents": "On Mon, 2005-12-12 at 15:07 +1300, Mike C wrote:\n\n> Partitioning on date range doesn't make much sense for this setup,\n> where a typical 1-month query spans both tables (as the billing month\n> for the customer might start midway through a calendar month).\n\nMaybe not for queries, but if you use a date range then you never need\nto run a DELETE and never need to VACUUM.\n\nYou could split the data into two-day chunks.\n\n> Am I using a horrid method for partitioning the data? (% 10) \n\nNo, but what benefit do you think it provides. 
I'm not sure I see...\n\n> Should there be that big of an improvement for multiple tables given\n> that all the data is still stored on the same filesystem?\n\nYou could store partitions in separate tablespaces/filesystems.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 13 Dec 2005 23:17:05 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Partitions / Partial Indexes" }, { "msg_contents": "On 12/14/05, Simon Riggs <[email protected]> wrote:\n>\n> Maybe not for queries, but if you use a date range then you never need\n> to run a DELETE and never need to VACUUM.\n>\n> You could split the data into two-day chunks.\n\n\nThat's an interesting idea, thanks.\n\n> Am I using a horrid method for partitioning the data? (% 10)\n>\n> No, but what benefit do you think it provides. I'm not sure I see...\n\n\nI was trying to get both the indexes to be smaller without loosing\nselectivity, and make any table scans/index scans faster from having to read\nless data.\n\n> Should there be that big of an improvement for multiple tables given\n> > that all the data is still stored on the same filesystem?\n>\n> You could store partitions in separate tablespaces/filesystems.\n>\n\nIdeally that's what I would do, but to make the most of that I would have to\nhave a dedicated RAID setup for each partition right? (Which is a bit pricey\nfor the budget).\n\nCheers,\n\nMike\n\nOn 12/14/05, Simon Riggs <[email protected]> wrote:\nMaybe not for queries, but if you use a date range then you never needto run a DELETE and never need to VACUUM.You could split the data into two-day chunks.\nThat's an interesting idea, thanks.\n> Am I using a horrid method for partitioning the data? (% 10)No, but what benefit do you think it provides. I'm not sure I see...\n\nI was trying to get both the indexes to be smaller without loosing\nselectivity, and make any table scans/index scans faster from having to\nread less data.\n\n> Should there be that big of an improvement for multiple tables given> that all the data is still stored on the same filesystem?\nYou could store partitions in separate tablespaces/filesystems.\nIdeally that's what I would do, but to make the most of that I would\nhave to have a dedicated RAID setup for each partition right? (Which is\na bit pricey for the budget).\n\nCheers,\n\nMike", "msg_date": "Wed, 14 Dec 2005 12:54:08 +1300", "msg_from": "Mike C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table Partitions / Partial Indexes" } ]
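Following Simon's suggestion, a sketch of two-day date-range children for the pc_traffic schema from the start of the thread; with 8.1's constraint_exclusion enabled the planner skips children that cannot match the queried range, and retiring old data becomes DROP TABLE instead of DELETE plus VACUUM. Dates and names here are illustrative only:

CREATE TABLE pc_traffic_20051212 (
    CHECK (traffic_date >= '2005-12-12' AND traffic_date < '2005-12-14')
) INHERITS (pc_traffic);

CREATE INDEX pc_traffic_20051212_idx3
    ON pc_traffic_20051212 (fk_device, traffic_date);

SET constraint_exclusion = on;

-- a billing-month query against the parent now only touches the children
-- whose CHECK range overlaps the requested dates
SELECT sum(total_from) AS total_in, sum(total_to) AS total_out, source_mac
FROM pc_traffic
WHERE fk_device = 996
  AND traffic_date >= '2005-10-14 00:00:00'
  AND traffic_date <  '2005-11-14 00:00:00'
GROUP BY source_mac;

-- expiring old data is then a metadata-only operation:
-- DROP TABLE pc_traffic_20051012;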
[ { "msg_contents": "Hi,\n\nI am ready to install ver. 8.1 to our db server, but I have some \nquestions about it.\n When I use autovacuum (8.1) is it required to use \"vacuum analyze\" for \nmaintenance or autovacuum is enough?\n We have 2 processors (hyperthread) and is it needed to configure the \npsql to use it or is it enough to configure the kernel base only?\n Is 8.1 stable?\n \nThanks\nSzabek\n\n\n\n\n", "msg_date": "Mon, 12 Dec 2005 09:54:30 +0100", "msg_from": "Szabolcs BALLA <[email protected]>", "msg_from_op": true, "msg_subject": "7.4.7 vs. 8.1" }, { "msg_contents": "On Mon, Dec 12, 2005 at 09:54:30AM +0100, Szabolcs BALLA wrote:\n> When I use autovacuum (8.1) is it required to use \"vacuum analyze\" for \n> maintenance or autovacuum is enough?\n\nautovacuum should be enough.\n\n> We have 2 processors (hyperthread) and is it needed to configure the \n> psql to use it or is it enough to configure the kernel base only?\n\nTurn _off_ hyperthreading; it's more likely to do harm than good.\n\n> Is 8.1 stable?\n\nYes. Note that 8.1.1 should be out quite soon.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 12 Dec 2005 13:04:54 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4.7 vs. 8.1" } ]
[ { "msg_contents": "Hello,\n\nClearly, I shouldn't actually use these transactions unless I have to, and\nin cases where I do use it, I'd expect the completion of the transaction to\ndepend on the speed of all participating databases in the transaction, but\nare there any additional overheads which might come with a significant time\npenalty that might commonly be overlooked by someone like me with no\nprevious experience with two-phase commit (2PC)?\n\n---\n\nThe application:\n\nI'm evaluating a design for a database scheme where the nation is\npartitioned into small areas (e.g. 2km squares), each area serviced solely\nby its own dedicated database.\n\nAll queries are locally pinpointed, with a limited search radius, and the\ndatabase enclosing the centre is responsible for executing the query.\n\nThe only issue is to ensure that a query near a boundary between two\nadjacent areas behaves as though there was no partitioning. To do this, I'm\nlooking into using 8.1's new 2PC to allow me to selectively copy data\ninserted near a boundary into the adjacent neighbouring databases, so that\nthis data will appear in boundary searches carried out by the neighbours.\nThe percentage of inserts which are copied into neighbours is intended to be\nroughly 25%, most of which involve just a single copy.\n\nMy scheme intends to ensure that all the databases are able to fit entirely\nin RAM, and in addition, the amount of data in each database will be\nrelatively small (and therefore quick to sift through). Inserted data is\n'small', and most of the time, each database is servicing read requests\nrather than writing, updating or inserting.\n\nA single nationwide database would be logically simpler, but in my case, the\napplication is a website, and I want a hardware solution that is cheap to\nstart with, easily extensible, allows a close coupling between the apache\nserver responsible for a region and the database it hits.\n\nAny insights gratefully received!\n\nAndy Ballingall \n\n", "msg_date": "Mon, 12 Dec 2005 09:13:55 -0000", "msg_from": "\"Andy Ballingall\" <[email protected]>", "msg_from_op": true, "msg_subject": "2 phase commit: performance implications?" }, { "msg_contents": ">The only issue is to ensure that a query near a boundary between two\n>adjacent areas behaves as though there was no partitioning. To do this, I'm\n>looking into using 8.1's new 2PC to allow me to selectively copy data\n>inserted near a boundary into the adjacent neighbouring databases, so that\n>this data will appear in boundary searches carried out by the neighbours.\n>\nWhy not just query adjacent databases, rather than copying the data around?\n\nIf you really wanted to do this, do you need 2pc? Once data has been \nuploaded to the database for region A, then asynchronously copy the data \nto B, C, D and E later, using a queue. If you try to commit to all at \nonce, then if one fails, then none has the data.\n\nAll depends on what type of data you are dealing with, how important is \nconsistency, i.e. will it cost you money if the data is inconsistent \nbetween nodes.\n\nGenerally queuing is your friend. You can use 2pc to ensure your queues \nwork correctly if you like.\n\nDavid\n\n\n\n\n\n\n\n\n\nThe only issue is to ensure that a query near a boundary between two\nadjacent areas behaves as though there was no partitioning. 
To do this, I'm\nlooking into using 8.1's new 2PC to allow me to selectively copy data\ninserted near a boundary into the adjacent neighbouring databases, so that\nthis data will appear in boundary searches carried out by the neighbours.\n\nWhy not just query adjacent databases, rather than copying the data\naround?\n\nIf you really wanted to do this, do you need 2pc?  Once data has been\nuploaded to the database for region A, then asynchronously copy the\ndata to B, C, D and E later, using a queue.  If you try to commit to\nall at once, then if one fails, then none has the data.\n\nAll depends on what type of data you are dealing with, how important is\nconsistency, i.e. will it cost you money if the data is inconsistent\nbetween nodes.\n\nGenerally queuing is your friend.  You can use 2pc to ensure your\nqueues work correctly if you like.\n\nDavid", "msg_date": "Tue, 20 Dec 2005 15:00:30 +0000", "msg_from": "David Roussel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2 phase commit: performance implications?" }, { "msg_contents": "\n\n\n>Why not just query adjacent databases, rather than copying the data around?\n\nThe reasons I didn't choose this way were:\n1) I didn't think there's a way to write a query that can act on the data in\ntwo\nDatabases as though it was all in one, and I didn't want to get into merging\nmultiple database query results on the Application side. I'd rather just\nhave all the needed data sitting in a single database so that I can perform\nwhatever query I like without complication.\n2) Most database accesses are reads, and I didn't want a big network\noverhead for these, especially since I'm aiming for each database to be\nentirely RAM resident.\n\n>If you really wanted to do this, do you need 2pc?  Once data has been\nuploaded to the database for region A, then asynchronously copy the data to\nB, C, D and E later, using a queue.  \n\nI've always assumed that my data needed to be consistent. I guess there are\nsome circumstances where it isn't really a problem, but each would need to\nbe carefully evaluated. The easy answer is to say 'yes, it must be\nconsistent'.\n\n>If you try to commit to all at once, then if one fails, then none has the\ndata.\n\nYes, I'd prefer things to be that way in any event.\n\nRegards,\nAndy\n\n", "msg_date": "Wed, 21 Dec 2005 12:10:53 -0000", "msg_from": "\"Andy Ballingall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2 phase commit: performance implications?" } ]
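For reference, a sketch of the 8.1 two-phase commands such a cross-database copy would be built on; the table, values and transaction identifier are invented, and max_prepared_transactions must be large enough on every participating server:

BEGIN;
-- hypothetical copy of a row that falls near a region boundary
INSERT INTO boundary_copies (src_region, item_id, payload)
    VALUES (17, 42, 'copied row');
PREPARE TRANSACTION 'boundary-copy-42';

-- once every participating database has prepared successfully:
COMMIT PREPARED 'boundary-copy-42';

-- if any participant failed to prepare, undo the ones that did:
-- ROLLBACK PREPARED 'boundary-copy-42';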
[ { "msg_contents": "Paal,\n\n\nOn 12/12/05 2:10 AM, \"Pål Stenslet\" <[email protected]> wrote:\n\n> Here are the schema details, but first a little introduction:\n\nTerrific, very helpful and thanks for both.\n\nI wonder why the bitmap scan isn't selected in this query, Tom might have\nsome opinion and suggestions about it.\n\nI'd like to run your case with Bizgres and the new bitmap index to see if we\ncan increase the selectivity on the query and knock down the times being\nspent joining. I think the AND bitmap operations should do just that.\n\nCan you provide one more thing - either the smalltalk code to generate the\ncsv files or I can provide a web server to upload to (or yours).\n\nThanks!\n\n- Luke\n\n\n", "msg_date": "Mon, 12 Dec 2005 10:30:03 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" } ]
[ { "msg_contents": "> \n> On Sun, Dec 11, 2005 at 11:53:36AM +0000, Carlos Benkendorf wrote:\n> > I would like to use autovacuum but is not too much expensive\n> > collecting row level statistics?\n> \n> The cost depends on your usage patterns. I did tests with one of\n> my applications and saw no significant performance difference for\n> simple selects, but a series of insert/update/delete operations ran\n> about 30% slower when block- and row-level statistics were enabled\n> versus when the statistics collector was disabled.\n\nThat approximately confirms my results, except that the penalty may even\nbe a little bit higher in the worst-case scenario. Row level stats hit\nthe hardest if you are doing 1 row at a time operations over a\npersistent connection. Since my apps inherited this behavior from their\nCOBOL legacy, I keep them off. If your app follows the monolithic query\napproach to problem solving (pull lots of rows in, edit them on the\nclient, and send them back), penalty is basically zero. \n\nMerlin\n\n", "msg_date": "Mon, 12 Dec 2005 13:33:27 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "On Mon, Dec 12, 2005 at 01:33:27PM -0500, Merlin Moncure wrote:\n> > The cost depends on your usage patterns. I did tests with one of\n> > my applications and saw no significant performance difference for\n> > simple selects, but a series of insert/update/delete operations ran\n> > about 30% slower when block- and row-level statistics were enabled\n> > versus when the statistics collector was disabled.\n> \n> That approximately confirms my results, except that the penalty may even\n> be a little bit higher in the worst-case scenario. Row level stats hit\n> the hardest if you are doing 1 row at a time operations over a\n> persistent connection.\n\nThat's basically how the application I tested works: it receives\ndata from a stream and performs whatever insert/update/delete\nstatements are necessary to update the database for each chunk of\ndata. Repeat a few thousand times.\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 12 Dec 2005 11:50:16 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n>> The cost depends on your usage patterns. I did tests with one of\n>> my applications and saw no significant performance difference for\n>> simple selects, but a series of insert/update/delete operations ran\n>> about 30% slower when block- and row-level statistics were enabled\n>> versus when the statistics collector was disabled.\n\n> That approximately confirms my results, except that the penalty may even\n> be a little bit higher in the worst-case scenario. Row level stats hit\n> the hardest if you are doing 1 row at a time operations over a\n> persistent connection.\n\nIIRC, the only significant cost from enabling stats is the cost of\ntransmitting the counts to the stats collector, which is a cost\nbasically paid once at each transaction commit. So short transactions\nwill definitely have more overhead than longer ones. 
Even for a really\nsimple transaction, though, 30% seems high --- the stats code is\ndesigned deliberately to minimize the penalty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Dec 2005 18:01:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics? " }, { "msg_contents": "On Mon, Dec 12, 2005 at 06:01:01PM -0500, Tom Lane wrote:\n> IIRC, the only significant cost from enabling stats is the cost of\n> transmitting the counts to the stats collector, which is a cost\n> basically paid once at each transaction commit. So short transactions\n> will definitely have more overhead than longer ones. Even for a really\n> simple transaction, though, 30% seems high --- the stats code is\n> designed deliberately to minimize the penalty.\n\nNow there goes Tom with his skeptical eye again, and here comes me\nsaying \"oops\" again. Further tests show that for this application\nthe killer is stats_command_string, not stats_block_level or\nstats_row_level. Here are timings for the same set of operations\n(thousands of insert, update, and delete statements in one transaction)\nrun under various settings:\n\nstats_command_string = off\nstats_block_level = off\nstats_row_level = off\ntime: 2:09.46\n\nstats_command_string = off\nstats_block_level = on\nstats_row_level = off\ntime: 2:12.28\n\nstats_command_string = off\nstats_block_level = on\nstats_row_level = on\ntime: 2:14.38\n\nstats_command_string = on\nstats_block_level = off\nstats_row_level = off\ntime: 2:50.58\n\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\ntime: 2:53.76\n\n[Wanders off, swearing that he ran these tests before and saw higher\npenalties for block- and row-level statistics.]\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 12 Dec 2005 18:07:51 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> Further tests show that for this application\n> the killer is stats_command_string, not stats_block_level or\n> stats_row_level.\n\nI tried it with pgbench -c 10, and got these results:\n\t41% reduction in TPS rate for stats_command_string\n\t9% reduction in TPS rate for stats_block/row_level (any combination)\n\nstrace'ing a backend confirms my belief that stats_block/row_level send\njust one stats message per transaction (at least for the relatively\nsmall number of tables touched per transaction by pgbench). However\nstats_command_string sends 14(!) --- there are seven commands per\npgbench transaction and each results in sending a <command> message and\nlater an <IDLE> message.\n\nGiven the rather lackadaisical way in which the stats collector makes\nthe data available, it seems like the backends are being much too\nenthusiastic about posting their stats_command_string status\nimmediately. Might be worth thinking about how to cut back the\noverhead by suppressing some of these messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Dec 2005 22:20:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics? 
" }, { "msg_contents": "On Mon, Dec 12, 2005 at 10:20:45PM -0500, Tom Lane wrote:\n> Given the rather lackadaisical way in which the stats collector makes\n> the data available, it seems like the backends are being much too\n> enthusiastic about posting their stats_command_string status\n> immediately. Might be worth thinking about how to cut back the\n> overhead by suppressing some of these messages.\n\nWould a GUC setting akin to log_min_duration_statement be feasible?\nDoes the backend support, or could it be easily modified to support,\na mechanism that would post the command string after a configurable\namount of time had expired, and then continue processing the query?\nThat way admins could avoid the overhead of posting messages for\nshort-lived queries that nobody's likely to see in pg_stat_activity\nanyway.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 15 Dec 2005 16:44:48 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> Does the backend support, or could it be easily modified to support,\n> a mechanism that would post the command string after a configurable\n> amount of time had expired, and then continue processing the query?\n\nNot really, unless you want to add the overhead of setting a timer\ninterrupt for every query. Which is sort of counterproductive when\nthe motivation is to reduce overhead ...\n\n(It might be more or less free if you have statement_timeout set, since\nthere would be a setitimer call anyway. But I don't think that's the\nnorm.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Dec 2005 19:06:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics? " }, { "msg_contents": "Tom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > Does the backend support, or could it be easily modified to support,\n> > a mechanism that would post the command string after a configurable\n> > amount of time had expired, and then continue processing the query?\n> \n> Not really, unless you want to add the overhead of setting a timer\n> interrupt for every query. Which is sort of counterproductive when\n> the motivation is to reduce overhead ...\n> \n> (It might be more or less free if you have statement_timeout set, since\n> there would be a setitimer call anyway. But I don't think that's the\n> norm.)\n\nActually, it's probably not necessary to set the timer at the\nbeginning of every query. It's probably sufficient to just have it go\noff periodically, e.g. once every second, and thus set it when the\ntimer goes off. And the running command wouldn't need to be re-posted\nif it's the same as last time around. Turn off the timer if the\nconnection is idle now and was idle last time around (or not, if\nthere's no harm in having the timer running all the time), turn it on\nagain at the start of the next transaction.\n\nIn essence, the backend would be \"polling\" itself every second or so\nand recording its state at that time, rather than on every\ntransaction.\n\nAssuming that doing all that wouldn't screw something else up...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 15 Dec 2005 21:44:58 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" 
}, { "msg_contents": "On Thu, 2005-12-15 at 19:06 -0500, Tom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > Does the backend support, or could it be easily modified to support,\n> > a mechanism that would post the command string after a configurable\n> > amount of time had expired, and then continue processing the query?\n> \n> Not really, unless you want to add the overhead of setting a timer\n> interrupt for every query. Which is sort of counterproductive when\n> the motivation is to reduce overhead ...\n> \n> (It might be more or less free if you have statement_timeout set, since\n> there would be a setitimer call anyway. But I don't think that's the\n> norm.)\n\nWe could do the deferred send fairly easily. You need only set a timer\nwhen stats_command_string = on, so we'd only do that when requested by\nthe admin. Overall, that would be a cheaper way of doing it than now.\n\nHowever, I'm more inclined to the idea of a set of functions that allow\nan administrator to retrieve the full SQL text executing in a backend,\nwith an option to return an EXPLAIN of the currently executing plan.\nRight now, stats only gives you the first 1000 chars, so you're always\nstuck if its a big query. Plus we don't yet have a way of getting the\nexact EXPLAIN of a running query (you can get close, but it could\ndiffer).\n\nPull is better than push. Asking specific backends what they're doing\nwhen you need to know will be efficient; asking them to send their\ncommand strings, all of the time, deferred or not will always be more\nwasteful. Plus if you forgot to turn on stats_command_string before\nexecution, then you've no way of knowing anyhow.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Fri, 16 Dec 2005 13:17:25 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "Tom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > Further tests show that for this application\n> > the killer is stats_command_string, not stats_block_level or\n> > stats_row_level.\n> \n> I tried it with pgbench -c 10, and got these results:\n> \t41% reduction in TPS rate for stats_command_string\n\nWoh, 41%. That's just off the charts! What are we doing internally\nthat would cause that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 16 Dec 2005 21:44:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much expensive are row level statistics?" }, { "msg_contents": "Tom Lane wrote:\n> Michael Fuhr <[email protected]> writes:\n> > Further tests show that for this application\n> > the killer is stats_command_string, not stats_block_level or\n> > stats_row_level.\n> \n> I tried it with pgbench -c 10, and got these results:\n> \t41% reduction in TPS rate for stats_command_string\n> \t9% reduction in TPS rate for stats_block/row_level (any combination)\n> \n> strace'ing a backend confirms my belief that stats_block/row_level send\n> just one stats message per transaction (at least for the relatively\n> small number of tables touched per transaction by pgbench). However\n> stats_command_string sends 14(!) 
--- there are seven commands per\n> pgbench transaction and each results in sending a <command> message and\n> later an <IDLE> message.\n> \n> Given the rather lackadaisical way in which the stats collector makes\n> the data available, it seems like the backends are being much too\n> enthusiastic about posting their stats_command_string status\n> immediately. Might be worth thinking about how to cut back the\n> overhead by suppressing some of these messages.\n\nI did some research on this because the numbers Tom quotes indicate there\nis something wrong in the way we process stats_command_string\nstatistics.\n\nI made a small test script:\n\t\n\tif [ ! -f /tmp/pgstat.sql ]\n\tthen\ti=0\n\t\twhile [ $i -lt 10000 ]\n\t\tdo\n\t\t\ti=`expr $i + 1`\n\t\t\techo \"SELECT 1;\"\n\t\tdone > /tmp/pgstat.sql\n\tfi\n\t\n\ttime sql test </tmp/pgstat.sql >/dev/null\n\nThis sends 10,000 \"SELECT 1\" queries to the backend, and reports the\nexecution time. I found that without stats_command_string defined, it\nran in 3.5 seconds. With stats_command_string defined, it took 5.5\nseconds, meaning the command string is causing a 57% slowdown. That is\nway too much considering that the SELECT 1 has to be send from psql to\nthe backend, parsed, optimized, and executed, and the result returned to\nthe psql, while stats_command_string only has to send a string to a\nbackend collector. There is _no_ way that collector should take 57% of\nthe time it takes to run the actual query.\n\nWith the test program, I tried various options. The basic code we have\nsends a UDP packet to a statistics buffer process, which recv()'s the\npacket, puts it into a memory queue buffer, and writes it to a pipe()\nthat is read by the statistics collector process which processes the\npacket.\n\nI tried various ways of speeding up the buffer and collector processes. \nI found if I put a pg_usleep(100) in the buffer process the backend\nspeed was good, but packets were lost. What I found worked well was to\ndo multiple recv() calls in a loop. The previous code did a select(),\nthen perhaps a recv() and pipe write() based on the results of the\nselect(). This caused many small packets to be written to the pipe and\nthe pipe write overhead seems fairly large. The best fix I found was to\nloop over the recv() call at most 25 times, collecting a group of\npackets that can then be sent to the collector in one pipe write. The\nrecv() socket is non-blocking, so a zero return indicates there are no\nmore packets available. Patch attached.\n\nThis change reduced the stats_command_string time from 5.5 to 3.9, which\nis closer to the 3.5 seconds with stats_command_string off.\n\nA second improvement I discovered is that the statistics collector is\ncalling gettimeofday() for every packet received, so it can determine\nthe timeout for the select() call to write the flat file. I removed\nthat behavior and instead used setitimer() to issue a SIGINT every\n500ms, which was the original behavior. This eliminates the\ngettimeofday() call and makes the code cleaner. Second patch attached.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/postmaster/pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.116\ndiff -c -c -r1.116 pgstat.c\n*** src/backend/postmaster/pgstat.c\t2 Jan 2006 00:58:00 -0000\t1.116\n--- src/backend/postmaster/pgstat.c\t2 Jan 2006 18:36:43 -0000\n***************\n*** 1911,1916 ****\n--- 1911,1918 ----\n \t */\n \tfor (;;)\n \t{\n+ loop_again:\n+ \n \t\tFD_ZERO(&rfds);\n \t\tFD_ZERO(&wfds);\n \t\tmaxfd = -1;\n***************\n*** 1970,2014 ****\n \t\t */\n \t\tif (FD_ISSET(pgStatSock, &rfds))\n \t\t{\n! \t\t\tlen = recv(pgStatSock, (char *) &input_buffer,\n! \t\t\t\t\t sizeof(PgStat_Msg), 0);\n! \t\t\tif (len < 0)\n! \t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t(errcode_for_socket_access(),\n! \t\t\t\t\t\t errmsg(\"could not read statistics message: %m\")));\n! \n! \t\t\t/*\n! \t\t\t * We ignore messages that are smaller than our common header\n! \t\t\t */\n! \t\t\tif (len < sizeof(PgStat_MsgHdr))\n! \t\t\t\tcontinue;\n! \n! \t\t\t/*\n! \t\t\t * The received length must match the length in the header\n! \t\t\t */\n! \t\t\tif (input_buffer.msg_hdr.m_size != len)\n! \t\t\t\tcontinue;\n! \n \t\t\t/*\n! \t\t\t * O.K. - we accept this message. Copy it to the circular\n! \t\t\t * msgbuffer.\n \t\t\t */\n! \t\t\tfrm = 0;\n! \t\t\twhile (len > 0)\n \t\t\t{\n! \t\t\t\txfr = PGSTAT_RECVBUFFERSZ - msg_recv;\n! \t\t\t\tif (xfr > len)\n! \t\t\t\t\txfr = len;\n! \t\t\t\tAssert(xfr > 0);\n! \t\t\t\tmemcpy(msgbuffer + msg_recv,\n! \t\t\t\t\t ((char *) &input_buffer) + frm,\n! \t\t\t\t\t xfr);\n! \t\t\t\tmsg_recv += xfr;\n! \t\t\t\tif (msg_recv == PGSTAT_RECVBUFFERSZ)\n! \t\t\t\t\tmsg_recv = 0;\n! \t\t\t\tmsg_have += xfr;\n! \t\t\t\tfrm += xfr;\n! \t\t\t\tlen -= xfr;\n \t\t\t}\n \t\t}\n \n--- 1972,2033 ----\n \t\t */\n \t\tif (FD_ISSET(pgStatSock, &rfds))\n \t\t{\n! \t\t\tint loops = 0;\n! \t\t\t\n \t\t\t/*\n! \t\t\t *\tWhile pipewrite() can send multiple data packets, recv() pulls\n! \t\t\t *\tonly a single packet per call. For busy systems, doing\n! \t\t\t *\tmultiple recv() calls and then one pipewrite() can improve\n! \t\t\t *\tquery speed by 40%. 25 was chosen because 25 packets should\n! \t\t\t *\teasily fit in a single pipewrite() call. recv()'s socket is\n! \t\t\t *\tnon-blocking.\n \t\t\t */\n! \t\t\twhile (++loops < 25 &&\n! \t\t\t\t (len = recv(pgStatSock, (char *) &input_buffer,\n! \t\t\t\t\t\t\t sizeof(PgStat_Msg), 0)) != 0)\n \t\t\t{\n! \t\t\t\tif (len < 0)\n! \t\t\t\t{\n! \t\t\t\t\tif (errno == EAGAIN)\n! \t\t\t\t\t\tcontinue;\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t\t(errcode_for_socket_access(),\n! \t\t\t\t\t\t\t errmsg(\"could not read statistics message: %m\")));\n! \t\t\t\t}\n! \t\n! \t\t\t\t/*\n! \t\t\t\t * We ignore messages that are smaller than our common header\n! \t\t\t\t */\n! \t\t\t\tif (len < sizeof(PgStat_MsgHdr))\n! \t\t\t\t\tgoto loop_again;\n! \t\n! \t\t\t\t/*\n! \t\t\t\t * The received length must match the length in the header\n! \t\t\t\t */\n! \t\t\t\tif (input_buffer.msg_hdr.m_size != len)\n! \t\t\t\t\tgoto loop_again;\n! \t\n! \t\t\t\t/*\n! \t\t\t\t * O.K. - we accept this message. Copy it to the circular\n! \t\t\t\t * msgbuffer.\n! \t\t\t\t */\n! \t\t\t\tfrm = 0;\n! \t\t\t\twhile (len > 0)\n! \t\t\t\t{\n! \t\t\t\t\txfr = PGSTAT_RECVBUFFERSZ - msg_recv;\n! \t\t\t\t\tif (xfr > len)\n! \t\t\t\t\t\txfr = len;\n! \t\t\t\t\tAssert(xfr > 0);\n! \t\t\t\t\tmemcpy(msgbuffer + msg_recv,\n! 
\t\t\t\t\t\t ((char *) &input_buffer) + frm,\n! \t\t\t\t\t\t xfr);\n! \t\t\t\t\tmsg_recv += xfr;\n! \t\t\t\t\tif (msg_recv == PGSTAT_RECVBUFFERSZ)\n! \t\t\t\t\t\tmsg_recv = 0;\n! \t\t\t\t\tmsg_have += xfr;\n! \t\t\t\t\tfrm += xfr;\n! \t\t\t\t\tlen -= xfr;\n! \t\t\t\t}\n \t\t\t}\n \t\t}\n \n***************\n*** 2023,2029 ****\n \t\t * caught up, or because more data arrives so that we have more than\n \t\t * PIPE_BUF bytes buffered). This is not good, but is there any way\n \t\t * around it? We have no way to tell when the collector has caught\n! \t\t * up...\n \t\t */\n \t\tif (FD_ISSET(writePipe, &wfds))\n \t\t{\n--- 2042,2048 ----\n \t\t * caught up, or because more data arrives so that we have more than\n \t\t * PIPE_BUF bytes buffered). This is not good, but is there any way\n \t\t * around it? We have no way to tell when the collector has caught\n! \t\t * up. Followup, the pipe rarely fills up.\n \t\t */\n \t\tif (FD_ISSET(writePipe, &wfds))\n \t\t{\n\nIndex: src/backend/postmaster/pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.116\ndiff -c -c -r1.116 pgstat.c\n*** src/backend/postmaster/pgstat.c\t2 Jan 2006 00:58:00 -0000\t1.116\n--- src/backend/postmaster/pgstat.c\t2 Jan 2006 18:21:28 -0000\n***************\n*** 145,150 ****\n--- 145,151 ----\n static PgStat_StatBeEntry *pgStatBeTable = NULL;\n static int\tpgStatNumBackends = 0;\n \n+ static volatile bool\tneed_statwrite;\n \n /* ----------\n * Local function forward declarations\n***************\n*** 164,169 ****\n--- 165,171 ----\n \n NON_EXEC_STATIC void PgstatBufferMain(int argc, char *argv[]);\n NON_EXEC_STATIC void PgstatCollectorMain(int argc, char *argv[]);\n+ static void force_statwrite(SIGNAL_ARGS);\n static void pgstat_recvbuffer(void);\n static void pgstat_exit(SIGNAL_ARGS);\n static void pgstat_die(SIGNAL_ARGS);\n***************\n*** 1548,1560 ****\n \tPgStat_Msg\tmsg;\n \tfd_set\t\trfds;\n \tint\t\t\treadPipe;\n- \tint\t\t\tnready;\n \tint\t\t\tlen = 0;\n! \tstruct timeval timeout;\n! \tstruct timeval next_statwrite;\n! \tbool\t\tneed_statwrite;\n \tHASHCTL\t\thash_ctl;\n! \n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n \t/*\n--- 1550,1560 ----\n \tPgStat_Msg\tmsg;\n \tfd_set\t\trfds;\n \tint\t\t\treadPipe;\n \tint\t\t\tlen = 0;\n! \tstruct itimerval timeval;\n \tHASHCTL\t\thash_ctl;\n! \tbool\t\tneed_timer = false;\n! \t\n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n \t/*\n***************\n*** 1572,1578 ****\n \t/* kluge to allow buffer process to kill collector; FIXME */\n \tpqsignal(SIGQUIT, pgstat_exit);\n #endif\n! \tpqsignal(SIGALRM, SIG_IGN);\n \tpqsignal(SIGPIPE, SIG_IGN);\n \tpqsignal(SIGUSR1, SIG_IGN);\n \tpqsignal(SIGUSR2, SIG_IGN);\n--- 1572,1578 ----\n \t/* kluge to allow buffer process to kill collector; FIXME */\n \tpqsignal(SIGQUIT, pgstat_exit);\n #endif\n! 
\tpqsignal(SIGALRM, force_statwrite);\n \tpqsignal(SIGPIPE, SIG_IGN);\n \tpqsignal(SIGUSR1, SIG_IGN);\n \tpqsignal(SIGUSR2, SIG_IGN);\n***************\n*** 1597,1608 ****\n \tinit_ps_display(\"stats collector process\", \"\", \"\");\n \tset_ps_display(\"\");\n \n- \t/*\n- \t * Arrange to write the initial status file right away\n- \t */\n- \tgettimeofday(&next_statwrite, NULL);\n \tneed_statwrite = TRUE;\n \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n \t * zero.\n--- 1597,1608 ----\n \tinit_ps_display(\"stats collector process\", \"\", \"\");\n \tset_ps_display(\"\");\n \n \tneed_statwrite = TRUE;\n \n+ \tMemSet(&timeval, 0, sizeof(struct itimerval));\n+ \ttimeval.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n+ \ttimeval.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n+ \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n \t * zero.\n***************\n*** 1634,1667 ****\n \t */\n \tfor (;;)\n \t{\n- \t\t/*\n- \t\t * If we need to write the status file again (there have been changes\n- \t\t * in the statistics since we wrote it last) calculate the timeout\n- \t\t * until we have to do so.\n- \t\t */\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\tstruct timeval now;\n! \n! \t\t\tgettimeofday(&now, NULL);\n! \t\t\t/* avoid assuming that tv_sec is signed */\n! \t\t\tif (now.tv_sec > next_statwrite.tv_sec ||\n! \t\t\t\t(now.tv_sec == next_statwrite.tv_sec &&\n! \t\t\t\t now.tv_usec >= next_statwrite.tv_usec))\n! \t\t\t{\n! \t\t\t\ttimeout.tv_sec = 0;\n! \t\t\t\ttimeout.tv_usec = 0;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t{\n! \t\t\t\ttimeout.tv_sec = next_statwrite.tv_sec - now.tv_sec;\n! \t\t\t\ttimeout.tv_usec = next_statwrite.tv_usec - now.tv_usec;\n! \t\t\t\tif (timeout.tv_usec < 0)\n! \t\t\t\t{\n! \t\t\t\t\ttimeout.tv_sec--;\n! \t\t\t\t\ttimeout.tv_usec += 1000000;\n! \t\t\t\t}\n! \t\t\t}\n \t\t}\n \n \t\t/*\n--- 1634,1644 ----\n \t */\n \tfor (;;)\n \t{\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\tpgstat_write_statsfile();\n! \t\t\tneed_statwrite = false;\n! \t\t\tneed_timer = true;\n \t\t}\n \n \t\t/*\n***************\n*** 1673,1681 ****\n \t\t/*\n \t\t * Now wait for something to do.\n \t\t */\n! \t\tnready = select(readPipe + 1, &rfds, NULL, NULL,\n! \t\t\t\t\t\t(need_statwrite) ? &timeout : NULL);\n! \t\tif (nready < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n--- 1650,1656 ----\n \t\t/*\n \t\t * Now wait for something to do.\n \t\t */\n! \t\tif (select(readPipe + 1, &rfds, NULL, NULL, NULL) < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n***************\n*** 1685,1702 ****\n \t\t}\n \n \t\t/*\n- \t\t * If there are no descriptors ready, our timeout for writing the\n- \t\t * stats file happened.\n- \t\t */\n- \t\tif (nready == 0)\n- \t\t{\n- \t\t\tpgstat_write_statsfile();\n- \t\t\tneed_statwrite = FALSE;\n- \n- \t\t\tcontinue;\n- \t\t}\n- \n- \t\t/*\n \t\t * Check if there is a new statistics message to collect.\n \t\t */\n \t\tif (FD_ISSET(readPipe, &rfds))\n--- 1660,1665 ----\n***************\n*** 1813,1829 ****\n \t\t\t */\n \t\t\tpgStatNumMessages++;\n \n! \t\t\t/*\n! \t\t\t * If this is the first message after we wrote the stats file the\n! \t\t\t * last time, setup the timeout that it'd be written.\n! \t\t\t */\n! \t\t\tif (!need_statwrite)\n \t\t\t{\n! \t\t\t\tgettimeofday(&next_statwrite, NULL);\n! \t\t\t\tnext_statwrite.tv_usec += ((PGSTAT_STAT_INTERVAL) * 1000);\n! \t\t\t\tnext_statwrite.tv_sec += (next_statwrite.tv_usec / 1000000);\n! \t\t\t\tnext_statwrite.tv_usec %= 1000000;\n! 
\t\t\t\tneed_statwrite = TRUE;\n \t\t\t}\n \t\t}\n \n--- 1776,1787 ----\n \t\t\t */\n \t\t\tpgStatNumMessages++;\n \n! \t\t\tif (need_timer)\n \t\t\t{\n! \t\t\t\tif (setitimer(ITIMER_REAL, &timeval, NULL))\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t (errmsg(\"unable to set statistics collector timer: %m\")));\n! \t\t\t\tneed_timer = false;\n \t\t\t}\n \t\t}\n \n***************\n*** 1848,1853 ****\n--- 1806,1818 ----\n }\n \n \n+ static void\n+ force_statwrite(SIGNAL_ARGS)\n+ {\n+ \tneed_statwrite = true;\n+ }\n+ \n+ \n /* ----------\n * pgstat_recvbuffer() -\n *", "msg_date": "Mon, 2 Jan 2006 13:40:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I found if I put a pg_usleep(100) in the buffer process the backend\n> speed was good, but packets were lost. What I found worked well was to\n> do multiple recv() calls in a loop. The previous code did a select(),\n> then perhaps a recv() and pipe write() based on the results of the\n> select(). This caused many small packets to be written to the pipe and\n> the pipe write overhead seems fairly large. The best fix I found was to\n> loop over the recv() call at most 25 times, collecting a group of\n> packets that can then be sent to the collector in one pipe write. The\n> recv() socket is non-blocking, so a zero return indicates there are no\n> more packets available. Patch attached.\n\nThis seems incredibly OS-specific. How many platforms did you test it\non?\n\nA more serious objection is that it will cause the stats machinery to\nwork very poorly if there isn't a steady stream of incoming messages.\nYou can't just sit on 24 messages until the 25th one arrives next week.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jan 2006 13:45:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I found if I put a pg_usleep(100) in the buffer process the backend\n> > speed was good, but packets were lost. What I found worked well was to\n> > do multiple recv() calls in a loop. The previous code did a select(),\n> > then perhaps a recv() and pipe write() based on the results of the\n> > select(). This caused many small packets to be written to the pipe and\n> > the pipe write overhead seems fairly large. The best fix I found was to\n> > loop over the recv() call at most 25 times, collecting a group of\n> > packets that can then be sent to the collector in one pipe write. The\n> > recv() socket is non-blocking, so a zero return indicates there are no\n> > more packets available. Patch attached.\n> \n> This seems incredibly OS-specific. How many platforms did you test it\n> on?\n\nOnly mine. I am posting the patch so others can test it, of course.\n\n> A more serious objection is that it will cause the stats machinery to\n> work very poorly if there isn't a steady stream of incoming messages.\n> You can't just sit on 24 messages until the 25th one arrives next week.\n\nYou wouldn't. It exits out of the loop on a not found, checks the pipe\nwrite descriptor, and writes on it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Jan 2006 14:13:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "[ moving to -hackers ]\n\nBruce Momjian <[email protected]> writes:\n> I did some research on this because the numbers Tom quotes indicate there\n> is something wrong in the way we process stats_command_string\n> statistics.\n> [ ... proposed patch that seems pretty klugy to me ... ]\n\nI wonder whether we shouldn't consider something more drastic, like\ngetting rid of the intermediate stats buffer process entirely.\n\nThe original design for the stats communication code was based on the\npremise that it's better to drop data than to make backends wait on\nthe stats collector. However, as things have turned out I think this\nnotion is a flop: the people who are using stats at all want the stats\nto be reliable. We've certainly seen plenty of gripes from people who\nare unhappy that backend-exit messages got dropped, and anyone who's\nusing autovacuum would really like the tuple update counts to be pretty\nsolid too.\n\nIf we abandoned the unreliable-communication approach, could we build\nsomething with less overhead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jan 2006 15:20:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement " }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> wrote\n>\n> I wonder whether we shouldn't consider something more drastic, like\n> getting rid of the intermediate stats buffer process entirely.\n>\n> The original design for the stats communication code was based on the\n> premise that it's better to drop data than to make backends wait on\n> the stats collector. However, as things have turned out I think this\n> notion is a flop: the people who are using stats at all want the stats\n> to be reliable. We've certainly seen plenty of gripes from people who\n> are unhappy that backend-exit messages got dropped, and anyone who's\n> using autovacuum would really like the tuple update counts to be pretty\n> solid too.\n>\n\nAFAICS if we can maintain the stats counts solid, then it may hurt \nperformance dramatically. Think if we maintain \npgstat_count_heap_insert()/pgstat_count_heap_delete() pretty well, then we \nget a replacement of count(*). To do so, I believe that will add another \nlock contention on the target table stats.\n\nRegards,\nQingqing \n\n\n", "msg_date": "Mon, 2 Jan 2006 16:03:20 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "Ühel kenal päeval, E, 2006-01-02 kell 15:20, kirjutas Tom Lane:\n> [ moving to -hackers ]\n> \n> Bruce Momjian <[email protected]> writes:\n> > I did some research on this because the numbers Tom quotes indicate there\n> > is something wrong in the way we process stats_command_string\n> > statistics.\n> > [ ... proposed patch that seems pretty klugy to me ... ]\n> \n> I wonder whether we shouldn't consider something more drastic, like\n> getting rid of the intermediate stats buffer process entirely.\n> \n> The original design for the stats communication code was based on the\n> premise that it's better to drop data than to make backends wait on\n> the stats collector. 
However, as things have turned out I think this\n> notion is a flop: the people who are using stats at all want the stats\n> to be reliable. We've certainly seen plenty of gripes from people who\n> are unhappy that backend-exit messages got dropped, and anyone who's\n> using autovacuum would really like the tuple update counts to be pretty\n> solid too.\n> \n> If we abandoned the unreliable-communication approach, could we build\n> something with less overhead?\n\nWeell, at least it should be non-WAL, and probably non-fsync, at least\noptionally . Maybe also inserts inserts + offline aggregator (instead of\nupdates) to avoid lock contention. Something that collects data in\nblocks of local or per-backend shared memory in each backend and then\ngives complete blocks to aggregator process. Maybe use 2 alternating\nblocks per backend - 1 for ongoing stats collection and another given to\naggregator. this has a little time shift, but will deliver accurate\nstarts in the end. Things that need up-to-date stats (like\npg_stat_activity), should look (and lock) also the ongoing satas\ncollection blocks if needed (how do we know know the *if*) and delay\neach backend process momentaryly by looking.\n\n-----------------\nHannu\n\n\n", "msg_date": "Mon, 02 Jan 2006 23:48:15 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> AFAICS if we can maintain the stats counts solid, then it may hurt \n> performance dramatically. Think if we maintain \n> pgstat_count_heap_insert()/pgstat_count_heap_delete() pretty well, then we \n> get a replacement of count(*).\n\nNot at all. For one thing, the stats don't attempt to maintain\nper-transaction state, so they don't have the MVCC issues of count(*).\nI'm not suggesting any fundamental changes in what is counted or when.\n\nThe two compromises that were made in the original stats design to make\nit fast were (1) stats updates lag behind reality, and (2) some updates\nmay be missed entirely. Now that we have a couple of years' field\nexperience with the code, it seems that (1) is acceptable for real usage\nbut (2) not so much. And it's not even clear that we are buying any\nperformance gain from (2), considering that it's adding the overhead of\npassing the data through an extra process.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jan 2006 16:48:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement " }, { "msg_contents": "On 1/2/2006 3:20 PM, Tom Lane wrote:\n\n> [ moving to -hackers ]\n> \n> Bruce Momjian <[email protected]> writes:\n>> I did some research on this because the numbers Tom quotes indicate there\n>> is something wrong in the way we process stats_command_string\n>> statistics.\n>> [ ... proposed patch that seems pretty klugy to me ... ]\n> \n> I wonder whether we shouldn't consider something more drastic, like\n> getting rid of the intermediate stats buffer process entirely.\n> \n> The original design for the stats communication code was based on the\n> premise that it's better to drop data than to make backends wait on\n\nThe original design was geared towards searching for useless/missing \nindexes and tuning activity like that. This never happened, but instead \npeople tried to use it as a reliable debugging or access statistics aid \n... 
which is fine but not what it originally was intended for.\n\nSo yes, I think looking at what it usually is used for, a message \npassing system like SysV message queues (puke) or similar would do a \nbetter job.\n\n\nJan\n\n> the stats collector. However, as things have turned out I think this\n> notion is a flop: the people who are using stats at all want the stats\n> to be reliable. We've certainly seen plenty of gripes from people who\n> are unhappy that backend-exit messages got dropped, and anyone who's\n> using autovacuum would really like the tuple update counts to be pretty\n> solid too.\n> \n> If we abandoned the unreliable-communication approach, could we build\n> something with less overhead?\n> \n> \t\t\tregards, tom lane\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 02 Jan 2006 23:06:57 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "On Mon, 2006-01-02 at 16:48 -0500, Tom Lane wrote:\n\n> The two compromises that were made in the original stats design to make\n> it fast were (1) stats updates lag behind reality, and (2) some updates\n> may be missed entirely. Now that we have a couple of years' field\n> experience with the code, it seems that (1) is acceptable for real usage\n> but (2) not so much. \n\nWe decided that the stats update had to occur during execution, in case\nthe statement aborted and row versions were not notified. That means we\nmust notify things as they happen, yet could use a reliable queuing\nsystem that could suffer a delay in the stats becoming available.\n\nBut how often do we lose a backend? Could we simply buffer that a little\nbetter? i.e. don't send message to stats unless we have altered at least\n10 rows? So we would buffer based upon the importance of the message,\nnot the actual size of the message. That way singleton-statements won't\ngenerate the same stats traffic, but we risk losing a buffers worth of\nrow changes should we crash - everything would still work if we lost a\nfew small row change notifications.\n\nWe can also save lots of cycles on the current statement overhead, which\nis currently the worst part of the stats, performance-wise. That\ndefinitely needs redesign. AFAICS we only ever need to know the SQL\nstatement via the stats system if the statement has been running for\nmore than a few minutes - the main use case is for an admin to be able\nto diagnose a rogue or hung statement. Pushing the statement to stats\nevery time is just a big overhead. 
That suggests we should either have a\npull or a deferred push (longer-than-X-secs) approach.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 03 Jan 2006 09:40:53 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "On Mon, 2006-01-02 at 13:40 -0500, Bruce Momjian wrote:\n\n> This change reduced the stats_command_string time from 5.5 to 3.9, which\n> is closer to the 3.5 seconds with stats_command_string off.\n\nExcellent work, port specific or not.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 03 Jan 2006 09:54:57 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "On Tue, Jan 03, 2006 at 09:40:53AM +0000, Simon Riggs wrote:\n> On Mon, 2006-01-02 at 16:48 -0500, Tom Lane wrote:\n> We can also save lots of cycles on the current statement overhead, which\n> is currently the worst part of the stats, performance-wise. That\n> definitely needs redesign. AFAICS we only ever need to know the SQL\n> statement via the stats system if the statement has been running for\n> more than a few minutes - the main use case is for an admin to be able\n> to diagnose a rogue or hung statement. Pushing the statement to stats\n> every time is just a big overhead. That suggests we should either have a\n> pull or a deferred push (longer-than-X-secs) approach.\n\nI would argue that minutes is too long, but of course this could be\nuser-adjustable. I suspect that even waiting just a second could be a\nhuge win, since this only matters if you're executing a lot of\nstatements and you won't be doing that if those statements are taking\nmore than a second or two to execute.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 3 Jan 2006 10:35:56 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> A second improvement I discovered is that the statistics collector is\n> calling gettimeofday() for every packet received, so it can determine\n> the timeout for the select() call to write the flat file. I removed\n> that behavior and instead used setitimer() to issue a SIGINT every\n> 500ms, which was the original behavior. This eliminates the\n> gettimeofday() call and makes the code cleaner. Second patch attached.\n\nI have applied this second patch, with a few small stylistic\nimprovements.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/postmaster/pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.116\ndiff -c -c -r1.116 pgstat.c\n*** src/backend/postmaster/pgstat.c\t2 Jan 2006 00:58:00 -0000\t1.116\n--- src/backend/postmaster/pgstat.c\t3 Jan 2006 16:26:04 -0000\n***************\n*** 117,123 ****\n \n static long pgStatNumMessages = 0;\n \n! 
static bool pgStatRunningInCollector = FALSE;\n \n /*\n * Place where backends store per-table info to be sent to the collector.\n--- 117,123 ----\n \n static long pgStatNumMessages = 0;\n \n! static bool pgStatRunningInCollector = false;\n \n /*\n * Place where backends store per-table info to be sent to the collector.\n***************\n*** 145,150 ****\n--- 145,151 ----\n static PgStat_StatBeEntry *pgStatBeTable = NULL;\n static int\tpgStatNumBackends = 0;\n \n+ static volatile bool\tneed_statwrite;\n \n /* ----------\n * Local function forward declarations\n***************\n*** 164,169 ****\n--- 165,171 ----\n \n NON_EXEC_STATIC void PgstatBufferMain(int argc, char *argv[]);\n NON_EXEC_STATIC void PgstatCollectorMain(int argc, char *argv[]);\n+ static void force_statwrite(SIGNAL_ARGS);\n static void pgstat_recvbuffer(void);\n static void pgstat_exit(SIGNAL_ARGS);\n static void pgstat_die(SIGNAL_ARGS);\n***************\n*** 1548,1560 ****\n \tPgStat_Msg\tmsg;\n \tfd_set\t\trfds;\n \tint\t\t\treadPipe;\n- \tint\t\t\tnready;\n \tint\t\t\tlen = 0;\n! \tstruct timeval timeout;\n! \tstruct timeval next_statwrite;\n! \tbool\t\tneed_statwrite;\n \tHASHCTL\t\thash_ctl;\n! \n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n \t/*\n--- 1550,1560 ----\n \tPgStat_Msg\tmsg;\n \tfd_set\t\trfds;\n \tint\t\t\treadPipe;\n \tint\t\t\tlen = 0;\n! \tstruct itimerval timeval;\n \tHASHCTL\t\thash_ctl;\n! \tbool\t\tneed_timer = false;\n! \t\n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n \t/*\n***************\n*** 1572,1578 ****\n \t/* kluge to allow buffer process to kill collector; FIXME */\n \tpqsignal(SIGQUIT, pgstat_exit);\n #endif\n! \tpqsignal(SIGALRM, SIG_IGN);\n \tpqsignal(SIGPIPE, SIG_IGN);\n \tpqsignal(SIGUSR1, SIG_IGN);\n \tpqsignal(SIGUSR2, SIG_IGN);\n--- 1572,1578 ----\n \t/* kluge to allow buffer process to kill collector; FIXME */\n \tpqsignal(SIGQUIT, pgstat_exit);\n #endif\n! \tpqsignal(SIGALRM, force_statwrite);\n \tpqsignal(SIGPIPE, SIG_IGN);\n \tpqsignal(SIGUSR1, SIG_IGN);\n \tpqsignal(SIGUSR2, SIG_IGN);\n***************\n*** 1597,1613 ****\n \tinit_ps_display(\"stats collector process\", \"\", \"\");\n \tset_ps_display(\"\");\n \n! \t/*\n! \t * Arrange to write the initial status file right away\n! \t */\n! \tgettimeofday(&next_statwrite, NULL);\n! \tneed_statwrite = TRUE;\n \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n \t * zero.\n \t */\n! \tpgStatRunningInCollector = TRUE;\n \tpgstat_read_statsfile(&pgStatDBHash, InvalidOid, NULL, NULL);\n \n \t/*\n--- 1597,1613 ----\n \tinit_ps_display(\"stats collector process\", \"\", \"\");\n \tset_ps_display(\"\");\n \n! \tneed_statwrite = true;\n! \n! \tMemSet(&timeval, 0, sizeof(struct itimerval));\n! \ttimeval.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n! \ttimeval.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n \t * zero.\n \t */\n! \tpgStatRunningInCollector = true;\n \tpgstat_read_statsfile(&pgStatDBHash, InvalidOid, NULL, NULL);\n \n \t/*\n***************\n*** 1634,1667 ****\n \t */\n \tfor (;;)\n \t{\n- \t\t/*\n- \t\t * If we need to write the status file again (there have been changes\n- \t\t * in the statistics since we wrote it last) calculate the timeout\n- \t\t * until we have to do so.\n- \t\t */\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\tstruct timeval now;\n! \n! \t\t\tgettimeofday(&now, NULL);\n! \t\t\t/* avoid assuming that tv_sec is signed */\n! 
\t\t\tif (now.tv_sec > next_statwrite.tv_sec ||\n! \t\t\t\t(now.tv_sec == next_statwrite.tv_sec &&\n! \t\t\t\t now.tv_usec >= next_statwrite.tv_usec))\n! \t\t\t{\n! \t\t\t\ttimeout.tv_sec = 0;\n! \t\t\t\ttimeout.tv_usec = 0;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t{\n! \t\t\t\ttimeout.tv_sec = next_statwrite.tv_sec - now.tv_sec;\n! \t\t\t\ttimeout.tv_usec = next_statwrite.tv_usec - now.tv_usec;\n! \t\t\t\tif (timeout.tv_usec < 0)\n! \t\t\t\t{\n! \t\t\t\t\ttimeout.tv_sec--;\n! \t\t\t\t\ttimeout.tv_usec += 1000000;\n! \t\t\t\t}\n! \t\t\t}\n \t\t}\n \n \t\t/*\n--- 1634,1644 ----\n \t */\n \tfor (;;)\n \t{\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\tpgstat_write_statsfile();\n! \t\t\tneed_statwrite = false;\n! \t\t\tneed_timer = true;\n \t\t}\n \n \t\t/*\n***************\n*** 1673,1681 ****\n \t\t/*\n \t\t * Now wait for something to do.\n \t\t */\n! \t\tnready = select(readPipe + 1, &rfds, NULL, NULL,\n! \t\t\t\t\t\t(need_statwrite) ? &timeout : NULL);\n! \t\tif (nready < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n--- 1650,1656 ----\n \t\t/*\n \t\t * Now wait for something to do.\n \t\t */\n! \t\tif (select(readPipe + 1, &rfds, NULL, NULL, NULL) < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n***************\n*** 1685,1702 ****\n \t\t}\n \n \t\t/*\n- \t\t * If there are no descriptors ready, our timeout for writing the\n- \t\t * stats file happened.\n- \t\t */\n- \t\tif (nready == 0)\n- \t\t{\n- \t\t\tpgstat_write_statsfile();\n- \t\t\tneed_statwrite = FALSE;\n- \n- \t\t\tcontinue;\n- \t\t}\n- \n- \t\t/*\n \t\t * Check if there is a new statistics message to collect.\n \t\t */\n \t\tif (FD_ISSET(readPipe, &rfds))\n--- 1660,1665 ----\n***************\n*** 1813,1829 ****\n \t\t\t */\n \t\t\tpgStatNumMessages++;\n \n! \t\t\t/*\n! \t\t\t * If this is the first message after we wrote the stats file the\n! \t\t\t * last time, setup the timeout that it'd be written.\n! \t\t\t */\n! \t\t\tif (!need_statwrite)\n \t\t\t{\n! \t\t\t\tgettimeofday(&next_statwrite, NULL);\n! \t\t\t\tnext_statwrite.tv_usec += ((PGSTAT_STAT_INTERVAL) * 1000);\n! \t\t\t\tnext_statwrite.tv_sec += (next_statwrite.tv_usec / 1000000);\n! \t\t\t\tnext_statwrite.tv_usec %= 1000000;\n! \t\t\t\tneed_statwrite = TRUE;\n \t\t\t}\n \t\t}\n \n--- 1776,1787 ----\n \t\t\t */\n \t\t\tpgStatNumMessages++;\n \n! \t\t\tif (need_timer)\n \t\t\t{\n! \t\t\t\tif (setitimer(ITIMER_REAL, &timeval, NULL))\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t (errmsg(\"unable to set statistics collector timer: %m\")));\n! \t\t\t\tneed_timer = false;\n \t\t\t}\n \t\t}\n \n***************\n*** 1848,1853 ****\n--- 1806,1818 ----\n }\n \n \n+ static void\n+ force_statwrite(SIGNAL_ARGS)\n+ {\n+ \tneed_statwrite = true;\n+ }\n+ \n+ \n /* ----------\n * pgstat_recvbuffer() -\n *\n***************\n*** 1865,1871 ****\n \tstruct timeval timeout;\n \tint\t\t\twritePipe = pgStatPipe[1];\n \tint\t\t\tmaxfd;\n- \tint\t\t\tnready;\n \tint\t\t\tlen;\n \tint\t\t\txfr;\n \tint\t\t\tfrm;\n--- 1830,1835 ----\n***************\n*** 1907,1912 ****\n--- 1871,1884 ----\n \tmsgbuffer = (char *) palloc(PGSTAT_RECVBUFFERSZ);\n \n \t/*\n+ \t * Wait for some work to do; but not for more than 10 seconds. (This\n+ \t * determines how quickly we will shut down after an ungraceful\n+ \t * postmaster termination; so it needn't be very fast.)\n+ \t */\n+ \ttimeout.tv_sec = 10;\n+ \ttimeout.tv_usec = 0;\n+ \n+ \t/*\n \t * Loop forever\n \t */\n \tfor (;;)\n***************\n*** 1946,1961 ****\n \t\t\t\tmaxfd = writePipe;\n \t\t}\n \n! \t\t/*\n! 
\t\t * Wait for some work to do; but not for more than 10 seconds. (This\n! \t\t * determines how quickly we will shut down after an ungraceful\n! \t\t * postmaster termination; so it needn't be very fast.)\n! \t\t */\n! \t\ttimeout.tv_sec = 10;\n! \t\ttimeout.tv_usec = 0;\n! \n! \t\tnready = select(maxfd + 1, &rfds, &wfds, NULL, &timeout);\n! \t\tif (nready < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n--- 1918,1924 ----\n \t\t\t\tmaxfd = writePipe;\n \t\t}\n \n! \t\tif (select(maxfd + 1, &rfds, &wfds, NULL, &timeout) < 0)\n \t\t{\n \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;", "msg_date": "Tue, 3 Jan 2006 11:43:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "Ühel kenal päeval, T, 2006-01-03 kell 09:40, kirjutas Simon Riggs:\n\n> We can also save lots of cycles on the current statement overhead, which\n> is currently the worst part of the stats, performance-wise. That\n> definitely needs redesign. AFAICS we only ever need to know the SQL\n> statement via the stats system if the statement has been running for\n> more than a few minutes - the main use case is for an admin to be able\n> to diagnose a rogue or hung statement. \n\nInterestingly I use pg_stat_activity view to watch for stuck backends,\n\"stuck\" in the sense that they have not noticed when client want away\nand are now waitin the TCP timeout to happen. I query for backends which\nhave been in \"<IDLE>\" state for longer than XX seconds. I guess that at\nleast some kind of indication for this should be available.\n\nOf course this would be much less of a problem if there was a\npossibility for sime kind of keepalive system to detect when\nclient/frontend goes away.\n\n> Pushing the statement to stats\n> every time is just a big overhead. That suggests we should either have a\n> pull \n\nI could live with \"push\", where pg_stat_activity would actually ask each\nlive backend for its \"current query\". This surely happens less often\nthan queries are performed (up to few thousand per sec)\n\n-------------\nHannu\n\n\n", "msg_date": "Tue, 03 Jan 2006 23:42:53 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "\n\"Jim C. Nasby\" <[email protected]> writes:\n\n> I would argue that minutes is too long, but of course this could be\n> user-adjustable. I suspect that even waiting just a second could be a\n> huge win, since this only matters if you're executing a lot of\n> statements and you won't be doing that if those statements are taking\n> more than a second or two to execute.\n\nThat's not necessarily true at all. You could just as easily have a\nperformance problem caused by a quick statement that is being executed many\ntimes as a slow statement that is being executed few times.\n\nThat is, you could be executing dozens of queries that take seconds or minutes\nonce a second but none of those might be the problem. 
The problem might be the\nquery that's taking only 300ms that you're executing hundreds of of times a\nminute.\n\nMoreover, if you're not gathering stats for queries that are fast then how\nwill you know whether they're performing properly when you look at them when\nthey do show up?\n\n-- \ngreg\n\n", "msg_date": "03 Jan 2006 18:28:34 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> I did some research on this because the numbers Tom quotes indicate there\n> is something wrong in the way we process stats_command_string\n> statistics.\n> \n...\n> This sends 10,000 \"SELECT 1\" queries to the backend, and reports the\n> execution time. I found that without stats_command_string defined, it\n> ran in 3.5 seconds. With stats_command_string defined, it took 5.5\n> seconds, meaning the command string is causing a 57% slowdown. That is\n> way too much considering that the SELECT 1 has to be send from psql to\n> the backend, parsed, optimized, and executed, and the result returned to\n> the psql, while stats_command_string only has to send a string to a\n> backend collector. There is _no_ way that collector should take 57% of\n> the time it takes to run the actual query.\n\nI have updated information on this performance issue. It seems it is\nthe blocking activity of recv() that is slowing down the buffer process\nand hence the backends. Basically, I found if I use select() or recv()\nto block until data arrives, I see the huge performance loss reported\nabove. If I loop over the recv() call in non-blocking mode, I see\nalmost no performance hit from stats_command_string (no backend\nslowdown), but of course that consumes all the CPU (bad). What I found\nworked perfectly was to do a non-blocking recv(), and if no data was\nreturned, change the socket to blocking mode and loop back over the\nrecv(). This allowed for no performance loss, and prevented infinite\nlooping over the recv() call.\n\nMy theory is that the kernel blocking logic of select() or recv() is\nsomehow locking up the socket for a small amount of time, therefore\nslowing down the backend. With the on/off blocking, the packets arrive\nin groups, we get a few packets then block when nothing is available. \n\nThe test program:\n\n\tTMPFILE=/tmp/pgstat.sql\n\texport TMPFILE\n\t\n\tif [ ! -f $TMPFILE ]\n\tthen\ti=0\n\t\twhile [ $i -lt 10000 ]\n\t\tdo\n\t\t\ti=`expr $i + 1`\n\t\t\techo \"SELECT 1;\"\n\t\tdone > $TMPFILE\n\tfi\n\t\n\ttime psql test < $TMPFILE >/dev/null\n\nis basically sending 30k packets of roughly 26 bytes each, or roughly\n800k in 3.5 seconds, meaning there is a packet every 0.0001 seconds. I\nwouldn't have thought that was too much volume for a dual Xeon BSD\nmachine, but it seems it might be. Tom seeing 44% slowdown from pgbench\nmeans Linux might have an issue too.\n\nTwo patches are attached. The first patch shows the use of the on/off\nblocking method to have almost zero overhead for reading from the\nsocket. (The packets are discarded.) The second patch removes the\nbuffer process entirely and uses the on/off buffering to process the\nincoming packets. I tried running two test scripts simultaneously and\nsaw almost no packet loss. 
Also keep in mind we are writing the stat\nfile twice a second, which might need to be pushed into a separate\nprocess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/postmaster/pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.118\ndiff -c -c -r1.118 pgstat.c\n*** src/backend/postmaster/pgstat.c\t3 Jan 2006 19:54:08 -0000\t1.118\n--- src/backend/postmaster/pgstat.c\t4 Jan 2006 23:22:44 -0000\n***************\n*** 1839,1845 ****\n \tint\t\t\tmsg_recv = 0;\t/* next receive index */\n \tint\t\t\tmsg_have = 0;\t/* number of bytes stored */\n \tbool\t\toverflow = false;\n! \n \t/*\n \t * Identify myself via ps\n \t */\n--- 1839,1847 ----\n \tint\t\t\tmsg_recv = 0;\t/* next receive index */\n \tint\t\t\tmsg_have = 0;\t/* number of bytes stored */\n \tbool\t\toverflow = false;\n! \tbool\t\tis_block_mode = false;\n! \tint\t\t\tcnt = 0, bloops = 0, nbloops = 0;\n! \t\n \t/*\n \t * Identify myself via ps\n \t */\n***************\n*** 1870,1875 ****\n--- 1872,1921 ----\n \t */\n \tmsgbuffer = (char *) palloc(PGSTAT_RECVBUFFERSZ);\n \n+ \n+ \twhile (1)\n+ \t{\n+ #if 0\n+ \t\t FD_ZERO(&rfds);\n+ \t\t FD_ZERO(&wfds);\n+ \t\t maxfd = -1;\n+ \t\t\t FD_SET(pgStatSock, &rfds);\n+ \t\t\t maxfd = pgStatSock;\n+ \t\n+ \t\t timeout.tv_sec = 0;\n+ \t\t timeout.tv_usec = 0;\n+ \t\n+ \t\t select(maxfd + 1, &rfds, &wfds, NULL, &timeout);\n+ #endif\n+ \t\n+ \t\t\t if (is_block_mode)\n+ \t\t\t\t bloops++;\n+ \t\t\t else\n+ \t\t\t\t nbloops++;\n+ \t\n+ \t\t\t len = recv(pgStatSock, (char *) &input_buffer,\n+ \t\t\t\t\t\t sizeof(PgStat_Msg), 0);\n+ \t\t\t if (len > 0)\n+ \t\t\t\t cnt += len;\n+ \t\n+ //fprintf(stderr, \"len = %d, errno = %d\\n\", len, errno);\n+ \t\n+ \t\t\t if (len > 0 && is_block_mode)\n+ \t\t\t {\n+ \t\t\t\t pg_set_noblock(pgStatSock);\n+ \t\t\t\t is_block_mode = false;\n+ \t\t\t }\n+ \t\t\t else if (len < 0 && errno == EAGAIN && !is_block_mode)\n+ \t\t\t {\n+ \t\t\t\t pg_set_block(pgStatSock);\n+ \t\t\t\t is_block_mode = true;\n+ \t\t\t }\n+ //\t\t\t if ((bloops + nbloops) % 1000 == 0)\n+ //\t\t\t\t fprintf(stderr, \"cnt = %d, len = %d, bloops = %d, nbloops = %d\\n\", cnt, len, bloops, nbloops);\n+ \t}\n+ \t\n+ \texit(1);\n+ \n \t/*\n \t * Loop forever\n \t */\n\nIndex: src/backend/postmaster/pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.118\ndiff -c -c -r1.118 pgstat.c\n*** src/backend/postmaster/pgstat.c\t3 Jan 2006 19:54:08 -0000\t1.118\n--- src/backend/postmaster/pgstat.c\t4 Jan 2006 23:06:26 -0000\n***************\n*** 109,117 ****\n * ----------\n */\n NON_EXEC_STATIC int pgStatSock = -1;\n- NON_EXEC_STATIC int pgStatPipe[2] = {-1, -1};\n static struct sockaddr_storage pgStatAddr;\n- static pid_t pgStatCollectorPid = 0;\n \n static time_t last_pgstat_start_time;\n \n--- 109,115 ----\n***************\n*** 166,172 ****\n NON_EXEC_STATIC void PgstatBufferMain(int argc, char *argv[]);\n NON_EXEC_STATIC void PgstatCollectorMain(int argc, char *argv[]);\n static void force_statwrite(SIGNAL_ARGS);\n- static void pgstat_recvbuffer(void);\n static void pgstat_exit(SIGNAL_ARGS);\n static void pgstat_die(SIGNAL_ARGS);\n static void pgstat_beshutdown_hook(int code, Datum arg);\n--- 164,169 
----\n***************\n*** 1491,1536 ****\n \tpgstat_parseArgs(argc, argv);\n #endif\n \n- \t/*\n- \t * Start a buffering process to read from the socket, so we have a little\n- \t * more time to process incoming messages.\n- \t *\n- \t * NOTE: the process structure is: postmaster is parent of buffer process\n- \t * is parent of collector process.\tThis way, the buffer can detect\n- \t * collector failure via SIGCHLD, whereas otherwise it wouldn't notice\n- \t * collector failure until it tried to write on the pipe. That would mean\n- \t * that after the postmaster started a new collector, we'd have two buffer\n- \t * processes competing to read from the UDP socket --- not good.\n- \t */\n- \tif (pgpipe(pgStatPipe) < 0)\n- \t\tereport(ERROR,\n- \t\t\t\t(errcode_for_socket_access(),\n- \t\t\t\t errmsg(\"could not create pipe for statistics buffer: %m\")));\n- \n \t/* child becomes collector process */\n! #ifdef EXEC_BACKEND\n! \tpgStatCollectorPid = pgstat_forkexec(STAT_PROC_COLLECTOR);\n! #else\n! \tpgStatCollectorPid = fork();\n! #endif\n! \tswitch (pgStatCollectorPid)\n! \t{\n! \t\tcase -1:\n! \t\t\tereport(ERROR,\n! \t\t\t\t\t(errmsg(\"could not fork statistics collector: %m\")));\n! \n! #ifndef EXEC_BACKEND\n! \t\tcase 0:\n! \t\t\t/* child becomes collector process */\n! \t\t\tPgstatCollectorMain(0, NULL);\n! \t\t\tbreak;\n! #endif\n! \n! \t\tdefault:\n! \t\t\t/* parent becomes buffer process */\n! \t\t\tclosesocket(pgStatPipe[0]);\n! \t\t\tpgstat_recvbuffer();\n! \t}\n \texit(0);\n }\n \n--- 1488,1495 ----\n \tpgstat_parseArgs(argc, argv);\n #endif\n \n \t/* child becomes collector process */\n! \tPgstatCollectorMain(0, NULL);\n \texit(0);\n }\n \n***************\n*** 1548,1559 ****\n PgstatCollectorMain(int argc, char *argv[])\n {\n \tPgStat_Msg\tmsg;\n- \tfd_set\t\trfds;\n- \tint\t\t\treadPipe;\n- \tint\t\t\tlen = 0;\n- \tstruct itimerval timeval;\n \tHASHCTL\t\thash_ctl;\n \tbool\t\tneed_timer = false;\n \t\n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n--- 1507,1517 ----\n PgstatCollectorMain(int argc, char *argv[])\n {\n \tPgStat_Msg\tmsg;\n \tHASHCTL\t\thash_ctl;\n \tbool\t\tneed_timer = false;\n+ \tstruct itimerval timeval;\n+ \tbool\t\tis_block_mode = false;\n+ \tint\t\t\tloops = 0;\n \t\n \tMyProcPid = getpid();\t\t/* reset MyProcPid */\n \n***************\n*** 1587,1596 ****\n \tpgstat_parseArgs(argc, argv);\n #endif\n \n- \t/* Close unwanted files */\n- \tclosesocket(pgStatPipe[1]);\n- \tclosesocket(pgStatSock);\n- \n \t/*\n \t * Identify myself via ps\n \t */\n--- 1545,1550 ----\n***************\n*** 1626,1791 ****\n \tpgStatBeTable = (PgStat_StatBeEntry *)\n \t\tpalloc0(sizeof(PgStat_StatBeEntry) * MaxBackends);\n \n- \treadPipe = pgStatPipe[0];\n- \n \t/*\n \t * Process incoming messages and handle all the reporting stuff until\n \t * there are no more messages.\n \t */\n \tfor (;;)\n \t{\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\tpgstat_write_statsfile();\n \t\t\tneed_statwrite = false;\n \t\t\tneed_timer = true;\n \t\t}\n \n! \t\t/*\n! \t\t * Setup the descriptor set for select(2)\n! \t\t */\n! \t\tFD_ZERO(&rfds);\n! \t\tFD_SET(readPipe, &rfds);\n! \n! \t\t/*\n! \t\t * Now wait for something to do.\n! \t\t */\n! \t\tif (select(readPipe + 1, &rfds, NULL, NULL, NULL) < 0)\n \t\t{\n! \t\t\tif (errno == EINTR)\n! \t\t\t\tcontinue;\n! \t\t\tereport(ERROR,\n! \t\t\t\t\t(errcode_for_socket_access(),\n! \t\t\t\t\t errmsg(\"select() failed in statistics collector: %m\")));\n \t\t}\n \n! \t\t/*\n! \t\t * Check if there is a new statistics message to collect.\n! 
\t\t */\n! \t\tif (FD_ISSET(readPipe, &rfds))\n! \t\t{\n! \t\t\t/*\n! \t\t\t * We may need to issue multiple read calls in case the buffer\n! \t\t\t * process didn't write the message in a single write, which is\n! \t\t\t * possible since it dumps its buffer bytewise. In any case, we'd\n! \t\t\t * need two reads since we don't know the message length\n! \t\t\t * initially.\n! \t\t\t */\n! \t\t\tint\t\t\tnread = 0;\n! \t\t\tint\t\t\ttargetlen = sizeof(PgStat_MsgHdr);\t\t/* initial */\n! \t\t\tbool\t\tpipeEOF = false;\n \n! \t\t\twhile (nread < targetlen)\n \t\t\t{\n! \t\t\t\tlen = piperead(readPipe, ((char *) &msg) + nread,\n! \t\t\t\t\t\t\t targetlen - nread);\n! \t\t\t\tif (len < 0)\n! \t\t\t\t{\n! \t\t\t\t\tif (errno == EINTR)\n! \t\t\t\t\t\tcontinue;\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t\t(errcode_for_socket_access(),\n! \t\t\t\t\t\t\t errmsg(\"could not read from statistics collector pipe: %m\")));\n! \t\t\t\t}\n! \t\t\t\tif (len == 0)\t/* EOF on the pipe! */\n \t\t\t\t{\n! \t\t\t\t\tpipeEOF = true;\n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t\tnread += len;\n! \t\t\t\tif (nread == sizeof(PgStat_MsgHdr))\n! \t\t\t\t{\n! \t\t\t\t\t/* we have the header, compute actual msg length */\n! \t\t\t\t\ttargetlen = msg.msg_hdr.m_size;\n! \t\t\t\t\tif (targetlen < (int) sizeof(PgStat_MsgHdr) ||\n! \t\t\t\t\t\ttargetlen > (int) sizeof(msg))\n! \t\t\t\t\t{\n! \t\t\t\t\t\t/*\n! \t\t\t\t\t\t * Bogus message length implies that we got out of\n! \t\t\t\t\t\t * sync with the buffer process somehow. Abort so that\n! \t\t\t\t\t\t * we can restart both processes.\n! \t\t\t\t\t\t */\n! \t\t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t\t (errmsg(\"invalid statistics message length\")));\n! \t\t\t\t\t}\n \t\t\t\t}\n \t\t\t}\n! \n! \t\t\t/*\n! \t\t\t * EOF on the pipe implies that the buffer process exited. Fall\n! \t\t\t * out of outer loop.\n! \t\t\t */\n! \t\t\tif (pipeEOF)\n! \t\t\t\tbreak;\n! \n! \t\t\t/*\n! \t\t\t * Distribute the message to the specific function handling it.\n! \t\t\t */\n! \t\t\tswitch (msg.msg_hdr.m_type)\n \t\t\t{\n! \t\t\t\tcase PGSTAT_MTYPE_DUMMY:\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_BESTART:\n! \t\t\t\t\tpgstat_recv_bestart((PgStat_MsgBestart *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_BETERM:\n! \t\t\t\t\tpgstat_recv_beterm((PgStat_MsgBeterm *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_TABSTAT:\n! \t\t\t\t\tpgstat_recv_tabstat((PgStat_MsgTabstat *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_TABPURGE:\n! \t\t\t\t\tpgstat_recv_tabpurge((PgStat_MsgTabpurge *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_ACTIVITY:\n! \t\t\t\t\tpgstat_recv_activity((PgStat_MsgActivity *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_DROPDB:\n! \t\t\t\t\tpgstat_recv_dropdb((PgStat_MsgDropdb *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_RESETCOUNTER:\n! \t\t\t\t\tpgstat_recv_resetcounter((PgStat_MsgResetcounter *) &msg,\n! \t\t\t\t\t\t\t\t\t\t\t nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_AUTOVAC_START:\n! \t\t\t\t\tpgstat_recv_autovac((PgStat_MsgAutovacStart *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_VACUUM:\n! \t\t\t\t\tpgstat_recv_vacuum((PgStat_MsgVacuum *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tcase PGSTAT_MTYPE_ANALYZE:\n! \t\t\t\t\tpgstat_recv_analyze((PgStat_MsgAnalyze *) &msg, nread);\n! \t\t\t\t\tbreak;\n \n! \t\t\t\tdefault:\n! \t\t\t\t\tbreak;\n! \t\t\t}\n \n! \t\t\t/*\n! 
\t\t\t * Globally count messages.\n! \t\t\t */\n! \t\t\tpgStatNumMessages++;\n \n! \t\t\tif (need_timer)\n! \t\t\t{\n! \t\t\t\tif (setitimer(ITIMER_REAL, &timeval, NULL))\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t (errmsg(\"unable to set statistics collector timer: %m\")));\n! \t\t\t\tneed_timer = false;\n! \t\t\t}\n \t\t}\n \n \t\t/*\n \t\t * Note that we do NOT check for postmaster exit inside the loop; only\n \t\t * EOF on the buffer pipe causes us to fall out. This ensures we\n \t\t * don't exit prematurely if there are still a few messages in the\n--- 1580,1704 ----\n \tpgStatBeTable = (PgStat_StatBeEntry *)\n \t\tpalloc0(sizeof(PgStat_StatBeEntry) * MaxBackends);\n \n \t/*\n \t * Process incoming messages and handle all the reporting stuff until\n \t * there are no more messages.\n \t */\n \tfor (;;)\n \t{\n+ \t\tint nread;\n+ \t\t\n \t\tif (need_statwrite)\n \t\t{\n! \t\t\t//pgstat_write_statsfile();\n \t\t\tneed_statwrite = false;\n \t\t\tneed_timer = true;\n \t\t}\n \n! \t\tif (need_timer)\n \t\t{\n! \t\t\tif (setitimer(ITIMER_REAL, &timeval, NULL))\n! \t\t\t\tereport(ERROR,\n! \t\t\t\t\t (errmsg(\"unable to set statistics collector timer: %m\")));\n! \t\t\tneed_timer = false;\n \t\t}\n \n! \t\tnread = recv(pgStatSock, (char *) &msg,\n! \t\t\t\t sizeof(PgStat_Msg), 0);\n \n! \t\tif (nread > 0 && is_block_mode)\t/* got data */\n! \t\t{\n! \t\t\tpg_set_noblock(pgStatSock);\n! \t\t\tis_block_mode = false;\n! \t\t}\n! \t\telse if (nread < 0)\n! \t\t{\n! \t\t\tif (errno == EAGAIN)\n \t\t\t{\n! \t\t\t\tif (!is_block_mode)\n \t\t\t\t{\n! \t\t\t\t\t/* no data, block mode */\n! \t\t\t\t\tpg_set_block(pgStatSock);\n! \t\t\t\t\tis_block_mode = true;\n \t\t\t\t}\n+ \t\t\t\tcontinue;\n \t\t\t}\n! \t\t\telse if (errno == EINTR)\n \t\t\t{\n! \t\t\t\tif (!PostmasterIsAlive(true))\n! \t\t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t\t(errmsg(\"stats collector exited: %m\")));\n! \t\t\t\tcontinue;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t\tereport(ERROR,\n! \t\t\t\t\t\t(errmsg(\"stats collector exited: %m\")));\n! \t\t}\n \n! //fprintf(stderr, \"nread = %d, type = %d\\n\", nread, msg.msg_hdr.m_type);\n! if (++loops % 1000 == 0)\n! \tfprintf(stderr, \"loops = %d\\n\", loops);\n \n! \t\t/*\n! \t\t * Distribute the message to the specific function handling it.\n! \t\t */\n! \t\tswitch (msg.msg_hdr.m_type)\n! \t\t{\n! \t\t\tcase PGSTAT_MTYPE_DUMMY:\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_BESTART:\n! \t\t\t\tpgstat_recv_bestart((PgStat_MsgBestart *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_BETERM:\n! \t\t\t\tpgstat_recv_beterm((PgStat_MsgBeterm *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_TABSTAT:\n! \t\t\t\tpgstat_recv_tabstat((PgStat_MsgTabstat *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_TABPURGE:\n! \t\t\t\tpgstat_recv_tabpurge((PgStat_MsgTabpurge *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_ACTIVITY:\n! \t\t\t\tpgstat_recv_activity((PgStat_MsgActivity *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_DROPDB:\n! \t\t\t\tpgstat_recv_dropdb((PgStat_MsgDropdb *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_RESETCOUNTER:\n! \t\t\t\tpgstat_recv_resetcounter((PgStat_MsgResetcounter *) &msg,\n! \t\t\t\t\t\t\t\t\t\t nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_AUTOVAC_START:\n! \t\t\t\tpgstat_recv_autovac((PgStat_MsgAutovacStart *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tcase PGSTAT_MTYPE_VACUUM:\n! \t\t\t\tpgstat_recv_vacuum((PgStat_MsgVacuum *) &msg, nread);\n! \t\t\t\tbreak;\n \n! 
\t\t\tcase PGSTAT_MTYPE_ANALYZE:\n! \t\t\t\tpgstat_recv_analyze((PgStat_MsgAnalyze *) &msg, nread);\n! \t\t\t\tbreak;\n \n! \t\t\tdefault:\n! \t\t\t\tbreak;\n \t\t}\n \n \t\t/*\n+ \t\t * Globally count messages.\n+ \t\t */\n+ \t\tpgStatNumMessages++;\n+ \n+ \n+ \t\t/*\n \t\t * Note that we do NOT check for postmaster exit inside the loop; only\n \t\t * EOF on the buffer pipe causes us to fall out. This ensures we\n \t\t * don't exit prematurely if there are still a few messages in the\n***************\n*** 1813,2032 ****\n }\n \n \n- /* ----------\n- * pgstat_recvbuffer() -\n- *\n- *\tThis is the body of the separate buffering process. Its only\n- *\tpurpose is to receive messages from the UDP socket as fast as\n- *\tpossible and forward them over a pipe into the collector itself.\n- *\tIf the collector is slow to absorb messages, they are buffered here.\n- * ----------\n- */\n- static void\n- pgstat_recvbuffer(void)\n- {\n- \tfd_set\t\trfds;\n- \tfd_set\t\twfds;\n- \tstruct timeval timeout;\n- \tint\t\t\twritePipe = pgStatPipe[1];\n- \tint\t\t\tmaxfd;\n- \tint\t\t\tlen;\n- \tint\t\t\txfr;\n- \tint\t\t\tfrm;\n- \tPgStat_Msg\tinput_buffer;\n- \tchar\t *msgbuffer;\n- \tint\t\t\tmsg_send = 0;\t/* next send index in buffer */\n- \tint\t\t\tmsg_recv = 0;\t/* next receive index */\n- \tint\t\t\tmsg_have = 0;\t/* number of bytes stored */\n- \tbool\t\toverflow = false;\n- \n- \t/*\n- \t * Identify myself via ps\n- \t */\n- \tinit_ps_display(\"stats buffer process\", \"\", \"\");\n- \tset_ps_display(\"\");\n- \n- \t/*\n- \t * We want to die if our child collector process does.\tThere are two ways\n- \t * we might notice that it has died: receive SIGCHLD, or get a write\n- \t * failure on the pipe leading to the child. We can set SIGPIPE to kill\n- \t * us here. Our SIGCHLD handler was already set up before we forked (must\n- \t * do it that way, else it's a race condition).\n- \t */\n- \tpqsignal(SIGPIPE, SIG_DFL);\n- \tPG_SETMASK(&UnBlockSig);\n- \n- \t/*\n- \t * Set the write pipe to nonblock mode, so that we cannot block when the\n- \t * collector falls behind.\n- \t */\n- \tif (!pg_set_noblock(writePipe))\n- \t\tereport(ERROR,\n- \t\t\t\t(errcode_for_socket_access(),\n- \t\t\t\t errmsg(\"could not set statistics collector pipe to nonblocking mode: %m\")));\n- \n- \t/*\n- \t * Allocate the message buffer\n- \t */\n- \tmsgbuffer = (char *) palloc(PGSTAT_RECVBUFFERSZ);\n- \n- \t/*\n- \t * Loop forever\n- \t */\n- \tfor (;;)\n- \t{\n- \t\tFD_ZERO(&rfds);\n- \t\tFD_ZERO(&wfds);\n- \t\tmaxfd = -1;\n- \n- \t\t/*\n- \t\t * As long as we have buffer space we add the socket to the read\n- \t\t * descriptor set.\n- \t\t */\n- \t\tif (msg_have <= (int) (PGSTAT_RECVBUFFERSZ - sizeof(PgStat_Msg)))\n- \t\t{\n- \t\t\tFD_SET(pgStatSock, &rfds);\n- \t\t\tmaxfd = pgStatSock;\n- \t\t\toverflow = false;\n- \t\t}\n- \t\telse\n- \t\t{\n- \t\t\tif (!overflow)\n- \t\t\t{\n- \t\t\t\tereport(LOG,\n- \t\t\t\t\t\t(errmsg(\"statistics buffer is full\")));\n- \t\t\t\toverflow = true;\n- \t\t\t}\n- \t\t}\n- \n- \t\t/*\n- \t\t * If we have messages to write out, we add the pipe to the write\n- \t\t * descriptor set.\n- \t\t */\n- \t\tif (msg_have > 0)\n- \t\t{\n- \t\t\tFD_SET(writePipe, &wfds);\n- \t\t\tif (writePipe > maxfd)\n- \t\t\t\tmaxfd = writePipe;\n- \t\t}\n- \n- \t\t/*\n- \t\t * Wait for some work to do; but not for more than 10 seconds. (This\n- \t\t * determines how quickly we will shut down after an ungraceful\n- \t\t * postmaster termination; so it needn't be very fast.) 
struct timeout\n- \t\t * is modified by some operating systems.\n- \t\t */\n- \t\ttimeout.tv_sec = 10;\n- \t\ttimeout.tv_usec = 0;\n- \n- \t\tif (select(maxfd + 1, &rfds, &wfds, NULL, &timeout) < 0)\n- \t\t{\n- \t\t\tif (errno == EINTR)\n- \t\t\t\tcontinue;\n- \t\t\tereport(ERROR,\n- \t\t\t\t\t(errcode_for_socket_access(),\n- \t\t\t\t\t errmsg(\"select() failed in statistics buffer: %m\")));\n- \t\t}\n- \n- \t\t/*\n- \t\t * If there is a message on the socket, read it and check for\n- \t\t * validity.\n- \t\t */\n- \t\tif (FD_ISSET(pgStatSock, &rfds))\n- \t\t{\n- \t\t\tlen = recv(pgStatSock, (char *) &input_buffer,\n- \t\t\t\t\t sizeof(PgStat_Msg), 0);\n- \t\t\tif (len < 0)\n- \t\t\t\tereport(ERROR,\n- \t\t\t\t\t\t(errcode_for_socket_access(),\n- \t\t\t\t\t\t errmsg(\"could not read statistics message: %m\")));\n- \n- \t\t\t/*\n- \t\t\t * We ignore messages that are smaller than our common header\n- \t\t\t */\n- \t\t\tif (len < sizeof(PgStat_MsgHdr))\n- \t\t\t\tcontinue;\n- \n- \t\t\t/*\n- \t\t\t * The received length must match the length in the header\n- \t\t\t */\n- \t\t\tif (input_buffer.msg_hdr.m_size != len)\n- \t\t\t\tcontinue;\n- \n- \t\t\t/*\n- \t\t\t * O.K. - we accept this message. Copy it to the circular\n- \t\t\t * msgbuffer.\n- \t\t\t */\n- \t\t\tfrm = 0;\n- \t\t\twhile (len > 0)\n- \t\t\t{\n- \t\t\t\txfr = PGSTAT_RECVBUFFERSZ - msg_recv;\n- \t\t\t\tif (xfr > len)\n- \t\t\t\t\txfr = len;\n- \t\t\t\tAssert(xfr > 0);\n- \t\t\t\tmemcpy(msgbuffer + msg_recv,\n- \t\t\t\t\t ((char *) &input_buffer) + frm,\n- \t\t\t\t\t xfr);\n- \t\t\t\tmsg_recv += xfr;\n- \t\t\t\tif (msg_recv == PGSTAT_RECVBUFFERSZ)\n- \t\t\t\t\tmsg_recv = 0;\n- \t\t\t\tmsg_have += xfr;\n- \t\t\t\tfrm += xfr;\n- \t\t\t\tlen -= xfr;\n- \t\t\t}\n- \t\t}\n- \n- \t\t/*\n- \t\t * If the collector is ready to receive, write some data into his\n- \t\t * pipe. We may or may not be able to write all that we have.\n- \t\t *\n- \t\t * NOTE: if what we have is less than PIPE_BUF bytes but more than the\n- \t\t * space available in the pipe buffer, most kernels will refuse to\n- \t\t * write any of it, and will return EAGAIN. This means we will\n- \t\t * busy-loop until the situation changes (either because the collector\n- \t\t * caught up, or because more data arrives so that we have more than\n- \t\t * PIPE_BUF bytes buffered). This is not good, but is there any way\n- \t\t * around it? We have no way to tell when the collector has caught\n- \t\t * up...\n- \t\t */\n- \t\tif (FD_ISSET(writePipe, &wfds))\n- \t\t{\n- \t\t\txfr = PGSTAT_RECVBUFFERSZ - msg_send;\n- \t\t\tif (xfr > msg_have)\n- \t\t\t\txfr = msg_have;\n- \t\t\tAssert(xfr > 0);\n- \t\t\tlen = pipewrite(writePipe, msgbuffer + msg_send, xfr);\n- \t\t\tif (len < 0)\n- \t\t\t{\n- \t\t\t\tif (errno == EINTR || errno == EAGAIN)\n- \t\t\t\t\tcontinue;\t/* not enough space in pipe */\n- \t\t\t\tereport(ERROR,\n- \t\t\t\t\t\t(errcode_for_socket_access(),\n- \t\t\t\terrmsg(\"could not write to statistics collector pipe: %m\")));\n- \t\t\t}\n- \t\t\t/* NB: len < xfr is okay */\n- \t\t\tmsg_send += len;\n- \t\t\tif (msg_send == PGSTAT_RECVBUFFERSZ)\n- \t\t\t\tmsg_send = 0;\n- \t\t\tmsg_have -= len;\n- \t\t}\n- \n- \t\t/*\n- \t\t * Make sure we forwarded all messages before we check for postmaster\n- \t\t * termination.\n- \t\t */\n- \t\tif (msg_have != 0 || FD_ISSET(pgStatSock, &rfds))\n- \t\t\tcontinue;\n- \n- \t\t/*\n- \t\t * If the postmaster has terminated, we die too. 
(This is no longer\n- \t\t * the normal exit path, however.)\n- \t\t */\n- \t\tif (!PostmasterIsAlive(true))\n- \t\t\texit(0);\n- \t}\n- }\n- \n /* SIGQUIT signal handler for buffer process */\n static void\n pgstat_exit(SIGNAL_ARGS)\n--- 1726,1731 ----\n***************\n*** 2049,2054 ****\n--- 1748,1754 ----\n \texit(0);\n }\n \n+ \n /* SIGCHLD signal handler for buffer process */\n static void\n pgstat_die(SIGNAL_ARGS)", "msg_date": "Wed, 4 Jan 2006 19:39:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "\nHannu Krosing <[email protected]> writes:\n\n> Interestingly I use pg_stat_activity view to watch for stuck backends,\n> \"stuck\" in the sense that they have not noticed when client want away\n> and are now waitin the TCP timeout to happen. I query for backends which\n> have been in \"<IDLE>\" state for longer than XX seconds. I guess that at\n> least some kind of indication for this should be available.\n\nYou mean like the tcp_keepalives_idle option?\n\nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE\n\n-- \ngreg\n\n", "msg_date": "08 Jan 2006 11:49:12 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "Ühel kenal päeval, P, 2006-01-08 kell 11:49, kirjutas Greg Stark:\n> Hannu Krosing <[email protected]> writes:\n> \n> > Interestingly I use pg_stat_activity view to watch for stuck backends,\n> > \"stuck\" in the sense that they have not noticed when client want away\n> > and are now waitin the TCP timeout to happen. I query for backends which\n> > have been in \"<IDLE>\" state for longer than XX seconds. I guess that at\n> > least some kind of indication for this should be available.\n> \n> You mean like the tcp_keepalives_idle option?\n> \n> http://www.postgresql.org/docs/8.1/interactive/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE\n> \n\nKind of, only I'd like to be able to set timeouts less than 120 minutes.\n\nfrom:\nhttp://developer.apple.com/documentation/mac/NetworkingOT/NetworkingWOT-390.html#HEADING390-0\n\nkp_timeout\n Set the requested timeout value, in minutes. Specify a value of\n T_UNSPEC to use the default value. You may specify any positive\n value for this field of 120 minutes or greater. The timeout\n value is not an absolute requirement; if you specify a value\n less than 120 minutes, TCP will renegotiate a timeout of 120\n minutes.\n \n-----------\nHannu\n\n\n", "msg_date": "Mon, 09 Jan 2006 17:48:21 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stats collector performance improvement" }, { "msg_contents": "\nWould some people please run the attached test procedure and report back\nthe results? I basically need to know the patch is an improvement on\nmore platforms than just my own. Thanks\n\n---------------------------------------------------------------------------\n\nRun this script and record the time reported:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.script\n\nModify postgresql.conf:\n\n\tstats_command_string = on\n\nand reload the server. Do \"SELECT * FROM pg_stat_activity;\" to verify\nthe command string is enabled. 
You should see your query in the\n\"current query\" column.\n\nRerun the stat.script again and record the time.\n\nApply this patch to CVS HEAD:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.nobuffer\n\nRun the stat.script again and record the time.\n\nReport via email your three times and your platform.\n\nIf the patch worked, the first and third times will be similar, and\nthe second time will be high.\n\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Tom Lane wrote:\n> > Michael Fuhr <[email protected]> writes:\n> > > Further tests show that for this application\n> > > the killer is stats_command_string, not stats_block_level or\n> > > stats_row_level.\n> > \n> > I tried it with pgbench -c 10, and got these results:\n> > \t41% reduction in TPS rate for stats_command_string\n> > \t9% reduction in TPS rate for stats_block/row_level (any combination)\n> > \n> > strace'ing a backend confirms my belief that stats_block/row_level send\n> > just one stats message per transaction (at least for the relatively\n> > small number of tables touched per transaction by pgbench). However\n> > stats_command_string sends 14(!) --- there are seven commands per\n> > pgbench transaction and each results in sending a <command> message and\n> > later an <IDLE> message.\n> > \n> > Given the rather lackadaisical way in which the stats collector makes\n> > the data available, it seems like the backends are being much too\n> > enthusiastic about posting their stats_command_string status\n> > immediately. Might be worth thinking about how to cut back the\n> > overhead by suppressing some of these messages.\n> \n> I did some research on this because the numbers Tom quotes indicate there\n> is something wrong in the way we process stats_command_string\n> statistics.\n> \n> I made a small test script:\n> \t\n> \tif [ ! -f /tmp/pgstat.sql ]\n> \tthen\ti=0\n> \t\twhile [ $i -lt 10000 ]\n> \t\tdo\n> \t\t\ti=`expr $i + 1`\n> \t\t\techo \"SELECT 1;\"\n> \t\tdone > /tmp/pgstat.sql\n> \tfi\n> \t\n> \ttime psql test </tmp/pgstat.sql >/dev/null\n> \n> This sends 10,000 \"SELECT 1\" queries to the backend, and reports the\n> execution time. I found that without stats_command_string defined, it\n> ran in 3.5 seconds. With stats_command_string defined, it took 5.5\n> seconds, meaning the command string is causing a 57% slowdown. That is\n> way too much considering that the SELECT 1 has to be send from psql to\n> the backend, parsed, optimized, and executed, and the result returned to\n> the psql, while stats_command_string only has to send a string to a\n> backend collector. There is _no_ way that collector should take 57% of\n> the time it takes to run the actual query.\n> \n> With the test program, I tried various options. The basic code we have\n> sends a UDP packet to a statistics buffer process, which recv()'s the\n> packet, puts it into a memory queue buffer, and writes it to a pipe()\n> that is read by the statistics collector process which processes the\n> packet.\n> \n> I tried various ways of speeding up the buffer and collector processes. \n> I found if I put a pg_usleep(100) in the buffer process the backend\n> speed was good, but packets were lost. What I found worked well was to\n> do multiple recv() calls in a loop. The previous code did a select(),\n> then perhaps a recv() and pipe write() based on the results of the\n> select(). This caused many small packets to be written to the pipe and\n> the pipe write overhead seems fairly large. 
The best fix I found was to\n> loop over the recv() call at most 25 times, collecting a group of\n> packets that can then be sent to the collector in one pipe write. The\n> recv() socket is non-blocking, so a zero return indicates there are no\n> more packets available. Patch attached.\n> \n> This change reduced the stats_command_string time from 5.5 to 3.9, which\n> is closer to the 3.5 seconds with stats_command_string off.\n> \n> A second improvement I discovered is that the statistics collector is\n> calling gettimeofday() for every packet received, so it can determine\n> the timeout for the select() call to write the flat file. I removed\n> that behavior and instead used setitimer() to issue a SIGINT every\n> 500ms, which was the original behavior. This eliminates the\n> gettimeofday() call and makes the code cleaner. Second patch attached.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 00:05:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Test request for Stats collector performance improvement" }, { "msg_contents": "\n\"Bruce Momjian\" <[email protected]> wrote\n>\n> Would some people please run the attached test procedure and report back\n> the results? I basically need to know the patch is an improvement on\n> more platforms than just my own. Thanks\n>\n\nObviously it matches your expectation.\n\nuname: Linux amd64 2.6.9-5.13smp #1 SMP Wed Aug 10 10:55:44 CST 2005 x86_64\nx86_64 x86_64 GNU/Linux\ncompiler: gcc (GCC) 3.4.3 20041212\nconfigure: '--prefix=/home/qqzhou/pginstall'\n\n--Before patch --\nreal 0m1.149s\nuser 0m0.182s\nsys 0m0.122s\n\nreal 0m1.121s\nuser 0m0.173s\nsys 0m0.103s\n\nreal 0m1.128s\nuser 0m0.116s\nsys 0m0.092s\n\n-- After patch --\n\nreal 0m1.275s\nuser 0m0.097s\nsys 0m0.160s\n\nreal 0m4.063s\nuser 0m0.663s\nsys 0m0.377s\n\nreal 0m1.259s\nuser 0m0.073s\nsys 0m0.160s\n\n\n", "msg_date": "Thu, 15 Jun 2006 14:09:43 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> Would some people please run the attached test procedure and report\n> back the results? I basically need to know the patch is an\n> improvement on more platforms than just my own. Thanks \n> \n>\n---------------------------------------------------------------------------\n> \n[snip]\n FreeBSD thebighonker.lerctr.org 6.1-STABLE FreeBSD 6.1-STABLE #60: Mon Jun\n12 16:55:31 CDT 2006\[email protected]:/usr/obj/usr/src/sys/THEBIGHONKER amd64\n$\nwith all stats on, except command string, cvs HEAD, no other patch:\n \n$ sh stat.script\n 1.92 real 0.35 user 0.42 sys\n$\n# same as above, with command_string on.\n \n$ sh stat.script\n 2.51 real 0.34 user 0.45 sys\n$\n#with patch and command_string ON.\n$ sh stat.script\n 2.37 real 0.35 user 0.34 sys\n$ \nThe above uname is for a very current RELENG_6 FreeBSD. This was done \n on a dual-xeon in 64-bit mode. 
HTT *IS* enabled.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 512-248-2683 E-Mail: [email protected]\nUS Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US\n\n", "msg_date": "Thu, 15 Jun 2006 03:00:27 -0500", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> Obviously it matches your expectation.\n\nHm? I don't see any improvement there:\n\n> --Before patch --\n> real 0m1.149s\n> real 0m1.121s\n> real 0m1.128s\n\n> -- After patch --\n> real 0m1.275s\n> real 0m4.063s\n> real 0m1.259s\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 10:27:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement " }, { "msg_contents": "Tom Lane wrote:\n> \"Qingqing Zhou\" <[email protected]> writes:\n> > Obviously it matches your expectation.\n> \n> Hm? I don't see any improvement there:\n> \n> > --Before patch --\n> > real 0m1.149s\n> > real 0m1.121s\n> > real 0m1.128s\n> \n> > -- After patch --\n> > real 0m1.275s\n> > real 0m4.063s\n> > real 0m1.259s\n\nThe report is incomplete. I need three outputs:\n\n\tstats off\n\tstats on\n\tstats on, patched\n\nHe only reported two sets of results.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 11:57:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> Would some people please run the attached test procedure and report back\n> the results? I basically need to know the patch is an improvement on\n> more platforms than just my own. Thanks\n\n\nOpenBSD 3.9-current/x86:\n\nwithout stats:\n 0m6.79s real 0m1.56s user 0m1.12s system\n\n-HEAD + stats:\n 0m10.44s real 0m2.26s user 0m1.22s system\n\n-HEAD + stats + patch:\n 0m10.68s real 0m2.16s user 0m1.36s system\n\n\nStefan\n", "msg_date": "Thu, 15 Jun 2006 21:58:27 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> Would some people please run the attached test procedure and report back\n> the results? I basically need to know the patch is an improvement on\n> more platforms than just my own. 
Thanks\n\n\nDebian Sarge/AMD64 Kernel 2.6.16.16 (all tests done multiple times with\nvariation of less then 10%):\n\n-HEAD:\n\nreal 0m0.486s\nuser 0m0.064s\nsys 0m0.048s\n\n-HEAD with 100000 \"SELECT 1;\" queries:\n\nreal 0m4.763s\nuser 0m0.896s\nsys 0m1.232s\n\n-HEAD + stats:\n\n\nreal 0m0.720s\nuser 0m0.128s\nsys 0m0.096s\n\n\n-HEAD + stats (100k):\n\n\nreal 0m7.204s\nuser 0m1.504s\nsys 0m1.028s\n\n\n-HEAD + stats + patch:\n\nthere is something weird going on here - I get either runtimes like:\n\nreal 0m0.729s\nuser 0m0.092s\nsys 0m0.100s\n\nand occasionally:\n\n\nreal 0m3.926s\nuser 0m0.144s\nsys 0m0.140s\n\n\n(always ~0,7 vs ~4 seconds - same variation as Qingqing Zhou seems to see)\n\n\n-HEAD + stats + patch(100k):\n\nsimiliar variation with:\n\nreal 0m7.955s\nuser 0m1.124s\nsys 0m1.164s\n\nand\n\nreal 0m11.836s\nuser 0m1.368s\nsys 0m1.156s\n\n(ie 7-8 seconds vs 11-12 seconds)\n\n\nlooks like this patch is actually a loss on that box.\n\n\nStefan\n", "msg_date": "Thu, 15 Jun 2006 22:29:36 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce,\n\n> The report is incomplete. I need three outputs:\n>\n> \tstats off\n> \tstats on\n> \tstats on, patched\n>\n> He only reported two sets of results.\n\nYou need stats off, patched too.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 15 Jun 2006 14:38:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> You need stats off, patched too.\n\nShouldn't really be necessary, as the code being patched won't be\nexecuted if stats aren't being collected...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 17:42:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement " }, { "msg_contents": "Josh Berkus wrote:\n> Bruce,\n> \n> > The report is incomplete. I need three outputs:\n> >\n> > \tstats off\n> > \tstats on\n> > \tstats on, patched\n> >\n> > He only reported two sets of results.\n> \n> You need stats off, patched too.\n\nNo need --- stats off, patched too, should be the same as stats off, no\npatch.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 17:46:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> wrote\n>\n> Hm? 
I don't see any improvement there:\n>\n\nI was referening this sentence, though I am not sure why that's the\nexpectation:\n>\n> \"Bruce Momjian\" <[email protected]> wrote\n> If the patch worked, the first and third times will be similar, and\n> the second time will be high.\n>\n\n-- After patch --\n\nreal 0m1.275s\nuser 0m0.097s\nsys 0m0.160s\n\nreal 0m4.063s\nuser 0m0.663s\nsys 0m0.377s\n\nreal 0m1.259s\nuser 0m0.073s\nsys 0m0.160s\n\n\n\n\n", "msg_date": "Fri, 16 Jun 2006 09:34:12 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> \"Tom Lane\" <[email protected]> wrote\n>> Hm? I don't see any improvement there:\n\n> I was referening this sentence, though I am not sure why that's the\n> expectation:\n>> \"Bruce Momjian\" <[email protected]> wrote\n>> If the patch worked, the first and third times will be similar, and\n>> the second time will be high.\n\nYou need to label your results more clearly then. I thought you were\nshowing us three repeats of the same test, and I gather Bruce thought\nso too...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jun 2006 21:56:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement " }, { "msg_contents": "Qingqing Zhou wrote:\n> \n> \"Tom Lane\" <[email protected]> wrote\n> >\n> > Hm? I don't see any improvement there:\n> >\n> \n> I was referening this sentence, though I am not sure why that's the\n> expectation:\n> >\n> > \"Bruce Momjian\" <[email protected]> wrote\n> > If the patch worked, the first and third times will be similar, and\n> > the second time will be high.\n\nI meant that the non-stats and the patched stats should be the similar,\nand the stats without the patch (the second test) should be high.\n\n> -- After patch --\n> \n> real 0m1.275s\n> user 0m0.097s\n> sys 0m0.160s\n> \n> real 0m4.063s\n> user 0m0.663s\n> sys 0m0.377s\n> \n> real 0m1.259s\n> user 0m0.073s\n> sys 0m0.160s\n\nI assume the above is just running the same test three times, right?\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 23:14:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\n\"Bruce Momjian\" <[email protected]> wrote\n>\n> > -- After patch --\n> >\n> > real 0m1.275s\n> > user 0m0.097s\n> > sys 0m0.160s\n> >\n> > real 0m4.063s\n> > user 0m0.663s\n> > sys 0m0.377s\n> >\n> > real 0m1.259s\n> > user 0m0.073s\n> > sys 0m0.160s\n>\n> I assume the above is just running the same test three times, right?\n>\n\nRight -- it is the result of the patched CVS tip runing three times with\nstats_command_string = on. 
And the tests marked \"--Before patch--\" is the\nresult of CVS tip running three times with stats_command_string = on.\n\nRegards,\nQingqing\n\n\n", "msg_date": "Fri, 16 Jun 2006 11:27:50 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Qingqing Zhou wrote:\n> \n> \"Bruce Momjian\" <[email protected]> wrote\n> >\n> > > -- After patch --\n> > >\n> > > real 0m1.275s\n> > > user 0m0.097s\n> > > sys 0m0.160s\n> > >\n> > > real 0m4.063s\n> > > user 0m0.663s\n> > > sys 0m0.377s\n> > >\n> > > real 0m1.259s\n> > > user 0m0.073s\n> > > sys 0m0.160s\n> >\n> > I assume the above is just running the same test three times, right?\n> >\n> \n> Right -- it is the result of the patched CVS tip runing three times with\n> stats_command_string = on. And the tests marked \"--Before patch--\" is the\n> result of CVS tip running three times with stats_command_string = on.\n\nAny idea why there is such a variance in the result? The second run\nlooks quite slow.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Jun 2006 23:57:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\n\"Bruce Momjian\" <[email protected]> wrote\n>\n> Any idea why there is such a variance in the result? The second run\n> looks quite slow.\n>\n\nNo luck so far. It is quite repeatble in my machine -- runing times which\nshow a long execution time: 2, 11, 14, 21 ... But when I do strace, the\nweiredness disappered totally. Have we seen any strange things like this\nbefore?\n\nRegards,\nQingqing\n\n\n", "msg_date": "Fri, 16 Jun 2006 12:48:27 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "\nOK, based on reports I have seen, generally stats_query_string adds 50%\nto the total runtime of a \"SELECT 1\" query, and the patch reduces the\noverhead to 25%.\n\nHowever, that 25% is still much too large. Consider that \"SELECT 1\" has\nto travel from psql to the server, go through the\nparser/optimizer/executor, and then return, it is clearly wrong that the\nstats_query_string performance hit should be measurable.\n\nI am actually surprised that so few people in the community are\nconcerned about this. While we have lots of people studying large\nqueries, these small queries should also get attention from a\nperformance perspective.\n\nI have created a new test that also turns off writing of the stats file.\nThis will not pass regression tests, but it will show the stats write\noverhead.\n\nUpdated test to be run:\n\n---------------------------------------------------------------------------\n\n1) Run this script and record the time reported:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.script\n\n It should take only a few seconds. \n\n2) Modify postgresql.conf:\n\n\tstats_command_string = on\n\n and reload the server. Do \"SELECT * FROM pg_stat_activity;\" to verify\n the command string is enabled. 
You should see your query in the\n \"current query\" column.\n\n3) Rerun the stat.script again and record the time.\n\n4) Apply this patch to CVS HEAD:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.nobuffer\n\n5) Run the stat.script again and record the time.\n\n6) Revert the patch and apply this patch to CVS HEAD:\n\n\tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.nobuffer_nowrite\n\n7) Run the stat.script again and record the time.\n\n8) Report the four results and your platform via email to\n [email protected]. Label times:\n\n\tstats_command_string = off\n\tstats_command_string = on\n\tstat.nobuffer patch\n\tstat.nobuffer_nowrite patch\n\n\n---------------------------------------------------------------------------\n\nQingqing Zhou wrote:\n> \n> \"Bruce Momjian\" <[email protected]> wrote\n> >\n> > Any idea why there is such a variance in the result? The second run\n> > looks quite slow.\n> >\n> \n> No luck so far. It is quite repeatble in my machine -- runing times which\n> show a long execution time: 2, 11, 14, 21 ... But when I do strace, the\n> weiredness disappered totally. Have we seen any strange things like this\n> before?\n> \n> Regards,\n> Qingqing\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 16 Jun 2006 12:03:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> OK, based on reports I have seen, generally stats_query_string adds 50%\n> to the total runtime of a \"SELECT 1\" query, and the patch reduces the\n> overhead to 25%.\n\nthat is actually not true for both of the platforms(a slow OpenBSD\n3.9/x86 and a very fast Linux/x86_64) I tested on. Both of them show\nvirtually no improvement with the patch and even worst it causes\nconsiderable (negative) variance on at least the Linux box.\n\n> \n> However, that 25% is still much too large. Consider that \"SELECT 1\" has\n> to travel from psql to the server, go through the\n> parser/optimizer/executor, and then return, it is clearly wrong that the\n> stats_query_string performance hit should be measurable.\n> \n> I am actually surprised that so few people in the community are\n> concerned about this. While we have lots of people studying large\n> queries, these small queries should also get attention from a\n> performance perspective.\n> \n> I have created a new test that also turns off writing of the stats file.\n> This will not pass regression tests, but it will show the stats write\n> overhead.\n\nwill try to run those too in a few.\n\n\nStefan\n", "msg_date": "Fri, 16 Jun 2006 18:12:50 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Stefan Kaltenbrunner wrote:\n> Bruce Momjian wrote:\n> > OK, based on reports I have seen, generally stats_query_string adds 50%\n> > to the total runtime of a \"SELECT 1\" query, and the patch reduces the\n> > overhead to 25%.\n> \n> that is actually not true for both of the platforms(a slow OpenBSD\n> 3.9/x86 and a very fast Linux/x86_64) I tested on. 
Both of them show\n> virtually no improvement with the patch and even worst it causes\n> considerable (negative) variance on at least the Linux box.\n\nI see the results I suggested on OpenBSD that you reported.\n\n> OpenBSD 3.9-current/x86:\n> \n> without stats:\n> 0m6.79s real 0m1.56s user 0m1.12s system\n> \n> -HEAD + stats:\n> 0m10.44s real 0m2.26s user 0m1.22s system\n> \n> -HEAD + stats + patch:\n> 0m10.68s real 0m2.16s user 0m1.36s system\n\nand I got similar results reported from a Debian:\n\n\tLinux 2.6.16 on a single processor HT 2.8Ghz Pentium compiled\n\twith gcc 4.0.4.\n\n\t> > real 0m3.306s\n\t> > real 0m4.905s\n\t> > real 0m4.448s\n\nI am unclear on the cuase for the widely varying results you saw in\nDebian.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 16 Jun 2006 12:24:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> Stefan Kaltenbrunner wrote:\n>> Bruce Momjian wrote:\n>>> OK, based on reports I have seen, generally stats_query_string adds 50%\n>>> to the total runtime of a \"SELECT 1\" query, and the patch reduces the\n>>> overhead to 25%.\n>> that is actually not true for both of the platforms(a slow OpenBSD\n>> 3.9/x86 and a very fast Linux/x86_64) I tested on. Both of them show\n>> virtually no improvement with the patch and even worst it causes\n>> considerable (negative) variance on at least the Linux box.\n> \n> I see the results I suggested on OpenBSD that you reported.\n> \n>> OpenBSD 3.9-current/x86:\n>>\n>> without stats:\n>> 0m6.79s real 0m1.56s user 0m1.12s system\n>>\n>> -HEAD + stats:\n>> 0m10.44s real 0m2.26s user 0m1.22s system\n>>\n>> -HEAD + stats + patch:\n>> 0m10.68s real 0m2.16s user 0m1.36s system\n\nyep those are very stable even over a large number of runs\n\n> \n> and I got similar results reported from a Debian:\n> \n> \tLinux 2.6.16 on a single processor HT 2.8Ghz Pentium compiled\n> \twith gcc 4.0.4.\n> \n> \t> > real 0m3.306s\n> \t> > real 0m4.905s\n> \t> > real 0m4.448s\n> \n> I am unclear on the cuase for the widely varying results you saw in\n> Debian.\n> \n\nI can reproduce the widely varying results on a number of x86 and x86_64\nbased Linux boxes here (Debian,Fedora and CentOS) though I cannot\nreproduce it on a Fedora core 5/ppc box.\nAll the x86 boxes are SMP - while the ppc one is not - that might have\nsome influence on the results.\n\nStefan\n", "msg_date": "Fri, 16 Jun 2006 20:14:47 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> 1) Run this script and record the time reported:\n> \tftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.script\n\nOne thing you neglected to specify is that the test must be done on a\nNON ASSERT CHECKING build of CVS HEAD (or recent head, at least).\nOn these trivial \"SELECT 1\" commands, an assert-checking backend is\ngoing to spend over 50% of its time doing end-of-transaction assert\nchecks. 
I was reminded of this upon trying to do oprofile:\n\nCPU: P4 / Xeon with 2 hyper-threads, speed 2793.03 MHz (estimated)\nCounted GLOBAL_POWER_EVENTS events (time during which processor is not stopped)\nwith a unit mask of 0x01 (mandatory) count 240000\nsamples % symbol name\n129870 37.0714 AtEOXact_CatCache\n67112 19.1571 AllocSetCheck\n16611 4.7416 AtEOXact_Buffers\n10054 2.8699 base_yyparse\n7499 2.1406 hash_seq_search\n7037 2.0087 AllocSetAlloc\n4267 1.2180 hash_search\n4060 1.1589 AtEOXact_RelationCache\n2537 0.7242 base_yylex\n1984 0.5663 grouping_planner\n1873 0.5346 LWLockAcquire\n1837 0.5244 AllocSetFree\n1808 0.5161 exec_simple_query\n1763 0.5032 ExecutorStart\n1527 0.4359 PostgresMain\n1464 0.4179 MemoryContextAllocZeroAligned\n\nLet's be sure we're all measuring the same thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Jun 2006 13:43:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test request for Stats collector performance improvement " } ]
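The core idea running through the patches in this thread -- read the statistics socket in non-blocking mode while packets keep arriving, and drop back to a blocking recv() only once the queue is empty -- is easier to see outside the diff format. The following is a minimal standalone sketch of just that receive strategy, not the pgstat.c code itself: the helper names are invented, the socket is assumed to be an already-bound UDP socket, and error handling and real message processing are left out.

    /*
     * Sketch of the adaptive non-blocking/blocking receive loop.
     */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    static void
    set_blocking(int sock, bool block)
    {
        int     flags = fcntl(sock, F_GETFL, 0);

        if (block)
            fcntl(sock, F_SETFL, flags & ~O_NONBLOCK);
        else
            fcntl(sock, F_SETFL, flags | O_NONBLOCK);
    }

    void
    drain_socket(int sock)
    {
        char    buf[1024];
        bool    blocking = false;

        set_blocking(sock, false);

        for (;;)
        {
            ssize_t len = recv(sock, buf, sizeof(buf), 0);

            if (len > 0)
            {
                /* Got a packet; make sure the next read does not block. */
                if (blocking)
                {
                    set_blocking(sock, false);
                    blocking = false;
                }
                /* ... process the packet here ... */
            }
            else if (len < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            {
                /* Queue drained: sleep in the kernel until data arrives. */
                if (!blocking)
                {
                    set_blocking(sock, true);
                    blocking = true;
                }
            }
            else if (len < 0 && errno != EINTR)
                break;          /* unexpected error: give up */
        }
    }

The second patch above layers the timer-driven stats file write and the postmaster-liveness check on top of this same loop.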
[ { "msg_contents": "Hi,\n\nI've been working on trying to partition a big table (I've never partitioned a \ntable in any other database till now).\nEverything went ok, except one query that didn't work afterwards.\n\nI've put the partition description, indexes, etc ..., and the explain plan \nattached.\n\nThe query is extremely fast without partition (index scan backards on the \nprimary key)\n\nThe query is : \"select * from logs order by id desc limit 100;\"\nid is the primary key.\n\nIt is indexed on all partitions.\n\nBut the explain plan does full table scan on all partitions.\n\nWhile I think I understand why it is doing this plan right now, is there \nsomething that could be done to optimize this case ? Or put a warning in the \ndocs about this kind of behaviour. I guess normally someone would partition \nto get faster queries :)\n\nAnyway, I thought I should mention this, as it has been quite a surprise.", "msg_date": "Tue, 13 Dec 2005 09:20:47 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning" }, { "msg_contents": "Did you set constraint_exclusion = true in postgresql.conf file?\n\nOn 12/13/05, Marc Cousin <[email protected]> wrote:\n> Hi,\n>\n> I've been working on trying to partition a big table (I've never partitioned a\n> table in any other database till now).\n> Everything went ok, except one query that didn't work afterwards.\n>\n> I've put the partition description, indexes, etc ..., and the explain plan\n> attached.\n>\n> The query is extremely fast without partition (index scan backards on the\n> primary key)\n>\n> The query is : \"select * from logs order by id desc limit 100;\"\n> id is the primary key.\n>\n> It is indexed on all partitions.\n>\n> But the explain plan does full table scan on all partitions.\n>\n> While I think I understand why it is doing this plan right now, is there\n> something that could be done to optimize this case ? Or put a warning in the\n> docs about this kind of behaviour. 
I guess normally someone would partition\n> to get faster queries :)\n>\n> Anyway, I thought I should mention this, as it has been quite a surprise.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n>\n\n\n--\nRegards\nPandu\n", "msg_date": "Tue, 13 Dec 2005 17:10:11 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning" }, { "msg_contents": "I just saw that there is no where clause in the query, that you had\nfed to explain plan.\nyou need to include a where clause based on id_machine column to see the effect.\n\nOn 12/13/05, Pandurangan R S <[email protected]> wrote:\n> Did you set constraint_exclusion = true in postgresql.conf file?\n>\n> On 12/13/05, Marc Cousin <[email protected]> wrote:\n> > Hi,\n> >\n> > I've been working on trying to partition a big table (I've never partitioned a\n> > table in any other database till now).\n> > Everything went ok, except one query that didn't work afterwards.\n> >\n> > I've put the partition description, indexes, etc ..., and the explain plan\n> > attached.\n> >\n> > The query is extremely fast without partition (index scan backards on the\n> > primary key)\n> >\n> > The query is : \"select * from logs order by id desc limit 100;\"\n> > id is the primary key.\n> >\n> > It is indexed on all partitions.\n> >\n> > But the explain plan does full table scan on all partitions.\n> >\n> > While I think I understand why it is doing this plan right now, is there\n> > something that could be done to optimize this case ? Or put a warning in the\n> > docs about this kind of behaviour. I guess normally someone would partition\n> > to get faster queries :)\n> >\n> > Anyway, I thought I should mention this, as it has been quite a surprise.\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> >\n> >\n> >\n>\n>\n> --\n> Regards\n> Pandu\n>\n", "msg_date": "Tue, 13 Dec 2005 17:20:07 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning" }, { "msg_contents": "Yes, that's how I solved it... and I totally agree that it's hard for the \nplanner to guess what to do on the partitions. But maybe there should be \nsomething in the docs explaining the limitations ...\n\nI'm only asking for the biggest 100 ids from the table, so I thought maybe the \nplanner would take the 100 biggest from all partitions or something like that \nand return me the 100 biggest from those results. 
It didn't and that's quite \nlogical.\n\nWhat I meant is that I understand why the planner chooses this plan, but maybe \nit should be written somewhere in the docs that some plans will be worse \nafter partitioning.\n\nOn Tuesday 13 December 2005 12:50, you wrote:\n> I just saw that there is no where clause in the query, that you had\n> fed to explain plan.\n> you need to include a where clause based on id_machine column to see the\n> effect.\n>\n> On 12/13/05, Pandurangan R S <[email protected]> wrote:\n> > Did you set constraint_exclusion = true in postgresql.conf file?\n> >\n> > On 12/13/05, Marc Cousin <[email protected]> wrote:\n> > > Hi,\n> > >\n> > > I've been working on trying to partition a big table (I've never\n> > > partitioned a table in any other database till now).\n> > > Everything went ok, except one query that didn't work afterwards.\n> > >\n> > > I've put the partition description, indexes, etc ..., and the explain\n> > > plan attached.\n> > >\n> > > The query is extremely fast without partition (index scan backards on\n> > > the primary key)\n> > >\n> > > The query is : \"select * from logs order by id desc limit 100;\"\n> > > id is the primary key.\n> > >\n> > > It is indexed on all partitions.\n> > >\n> > > But the explain plan does full table scan on all partitions.\n> > >\n> > > While I think I understand why it is doing this plan right now, is\n> > > there something that could be done to optimize this case ? Or put a\n> > > warning in the docs about this kind of behaviour. I guess normally\n> > > someone would partition to get faster queries :)\n> > >\n> > > Anyway, I thought I should mention this, as it has been quite a\n> > > surprise.\n> > >\n> > >\n> > >\n> > > ---------------------------(end of\n> > > broadcast)--------------------------- TIP 1: if posting/reading through\n> > > Usenet, please send an appropriate subscribe-nomail command to\n> > > [email protected] so that your message can get through to the\n> > > mailing list cleanly\n> >\n> > --\n> > Regards\n> > Pandu\n", "msg_date": "Tue, 13 Dec 2005 14:11:24 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning" } ]
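For anyone running into the same ORDER BY ... LIMIT behaviour on 8.1, one manual workaround is to push the LIMIT into each child table and re-sort the small combined result. The child table names below are invented (the real schema was only posted as an attachment); an index on id in every child is assumed.

    -- Pull the top 100 rows from each child by its own index, then
    -- re-sort only the combined candidate set.
    SELECT *
    FROM (
        (SELECT * FROM logs_part_1 ORDER BY id DESC LIMIT 100)
        UNION ALL
        (SELECT * FROM logs_part_2 ORDER BY id DESC LIMIT 100)
        -- ... one branch per child table ...
    ) AS candidates
    ORDER BY id DESC
    LIMIT 100;

Each parenthesised branch can use its own partition's index on id, so the outer sort only has to order a few hundred candidate rows rather than every partition's contents.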
[ { "msg_contents": "Hi.\n\ncreate table device(id int);\n\ninsert into device values(1);\ninsert into device values(2);\n.....\ninsert into device values(250);\n\ncreate table base (\n\t\t\t\t\tid int,\n\t\t\t\t\tdata float,\n\t\t\t\t\tdatatime timestamp,\n\t\t\t\t\tmode int,\n\t\t\t\t\tstatus int);\n\ncreate table base_1 (\n \t\t\t\tcheck ( id = 1 and datatime >= DATE '2005-01-01' \n\t\t\t\t\tand datatime < DATE '2006-01-01' )\n\t\t\t\t\t) INHERITS (base);\n\ncreate table base_2 (\n check ( id = 2 and datatime >= DATE '2005-01-01'\n and datatime < DATE '2006-01-01' )\n ) INHERITS (base);\n....\ncreate table base_250\n\n\nAnd\nselect * from base \n\twhere id in (1,2) and datatime between '2005-05-15' and '2005-05-17';\n10 seconds\n\nselect * from base\n\twhere id in (select id from device where id = 1 or id = 2) and\n\tdatatime between '2005-05-15' and '2005-05-17';\n10 minits\n\nWhy?\n\n-- \nmailto: [email protected]\n", "msg_date": "Tue, 13 Dec 2005 18:18:19 +0300", "msg_from": "=?utf-8?B?0JrQu9GO0YfQvdC40LrQvtCyINCQLtChLg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "query from partitions" }, { "msg_contents": "Ключников А.С. wrote:\n> And\n> select * from base \n> \twhere id in (1,2) and datatime between '2005-05-15' and '2005-05-17';\n> 10 seconds\n> \n> select * from base\n> \twhere id in (select id from device where id = 1 or id = 2) and\n> \tdatatime between '2005-05-15' and '2005-05-17';\n> 10 minits\n> \n> Why?\n\nRun EXPLAIN ANALYSE on both queries to see how the plan has changed.\n\nMy guess for why the plans are different is that in the first case your \nquery ends up as ...where (id=1 or id=2)...\n\nIn the second case, the planner doesn't know what it's going to get back \nfrom the subquery until it's executed it, so can't tell it just needs to \nscan base_1,base_2. Result: you'll scan all child tables of base.\n\nI think the planner will occasionally evaluate constants before \nplanning, but I don't think it will ever execute a subquery and then \nre-plan the outer query based on those results. Of course, someone might \npop up and tell me I'm wrong now...\n\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Tue, 13 Dec 2005 15:59:11 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query from partitions" }, { "msg_contents": "On Tue, Dec 13, 2005 at 06:18:19PM +0300, Ключников А.С. wrote:\n> select * from base\n> \twhere id in (select id from device where id = 1 or id = 2) and\n> \tdatatime between '2005-05-15' and '2005-05-17';\n> 10 minits\n\nThat's a really odd way of saying \"1 or 2\". It probably has to go through all\nthe records in device, not realizing it can just scan for two of them (using\ntwo index scans). I'd guess an EXPLAIN ANALYZE would confirm something like\nthis happening (you'd want to run that and post the results here anyhow).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 13 Dec 2005 17:08:52 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query from partitions" }, { "msg_contents": "* Richard Huxton <[email protected]> [2005-12-13 15:59:11 +0000]:\n\n> Ключников А.С. 
wrote:\n> >And\n> >select * from base \n> >\twhere id in (1,2) and datatime between '2005-05-15' and '2005-05-17';\n> >10 seconds\n> >\n> >select * from base\n> >\twhere id in (select id from device where id = 1 or id = 2) and\n> >\tdatatime between '2005-05-15' and '2005-05-17';\n> >10 minits\n> >\n> >Why?\n> \n> Run EXPLAIN ANALYSE on both queries to see how the plan has changed.\nexplain select distinct on(id) * from base where id in (1,2) and\ndata_type=2 and datatime < '2005-11-02' order by id, datatime desc;\n\nUnique (cost=10461.14..10527.30 rows=2342 width=38)\n -> Sort (cost=10461.14..10494.22 rows=13232 width=38)\n Sort Key: public.base.id, public.base.datatime\n -> Result (cost=0.00..9555.29 rows=13232 width=38)\n -> Append (cost=0.00..9555.29 rows=13232 width=38)\n -> Seq Scan on base (cost=0.00..32.60 rows=1\nwidth=38)\n Filter: (((id = 1) OR (id = 2)) AND (data_type =\n2) AND (datatime < '2005-11-02 00:00:00'::timestamp without time zone))\n -> Seq Scan on base_batch base (cost=0.00..32.60\nrows=1 width=38)\n.......................\n\n-> Seq Scan on base_1_2004 base (cost=0.00..32.60 rows=1 width=38)\n Filter: (((id = 1) OR (id = 2)) AND (data_type =\n2) AND (datatime < '2005-11-02 00:00:00'::timestamp without time zone))\n(записей: 34)\n\n\nand\nexplain select distinct on(id) * from base where id in (select id from\ndevice where id = 1 or id = 2) and data_type=2 and datatime < '2005-11-02'\norder by id, datatime desc;\n\nUnique (cost=369861.89..369872.52 rows=2126 width=38)\n -> Sort (cost=369861.89..369867.21 rows=2126 width=38)\n Sort Key: public.base.id, public.base.datatime\n -> Hash IN Join (cost=5.88..369744.39 rows=2126 width=38)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=0.00..368654.47 rows=212554 width=38)\n -> Seq Scan on base (cost=0.00..26.95 rows=2\nwidth=38)\n Filter: ((data_type = 2) AND (datatime <\n'2005-11-02 00:00:00'::timestamp without time zone))\n -> Seq Scan on base_batch base (cost=0.00..26.95\nrows=2 width=38)\n Filter: ((data_type = 2) AND (datatime <\n'2005-11-02 00:00:00'::timestamp without time zone))\n -> Seq Scan on base_lines_05_12 base\n(cost=0.00..26.95 rows=2 width=38)\n............................\n -> Hash (cost=5.88..5.88 rows=2 width=4)\n -> Seq Scan on device (cost=0.00..5.88 rows=2\nwidth=4)\n Filter: ((id = 1) OR (id = 2))\n(записей: 851)\n\n> \n> My guess for why the plans are different is that in the first case your \n> query ends up as ...where (id=1 or id=2)...\n> \n> In the second case, the planner doesn't know what it's going to get back \n> from the subquery until it's executed it, so can't tell it just needs to \n> scan base_1,base_2. Result: you'll scan all child tables of base.\n> \n> I think the planner will occasionally evaluate constants before \n> planning, but I don't think it will ever execute a subquery and then \n> re-plan the outer query based on those results. 
Of course, someone might \n> pop up and tell me I'm wrong now...\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n-- \nС уважением,\nКлючников А.С.\nВедущий инженер ПРП \"Аналитприбор\"\n432030 г.Ульяновск, а/я 3117\nтел./факс +7 (8422) 43-44-78\nmailto: [email protected]\n", "msg_date": "Tue, 13 Dec 2005 19:57:59 +0300", "msg_from": "=?utf-8?B?0JrQu9GO0YfQvdC40LrQvtCyINCQLtChLg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query from partitions" }, { "msg_contents": "On Tue, 2005-12-13 at 15:59 +0000, Richard Huxton wrote:\n> Ключников А.С. wrote:\n> > And\n> > select * from base \n> > \twhere id in (1,2) and datatime between '2005-05-15' and '2005-05-17';\n> > 10 seconds\n> > \n> > select * from base\n> > \twhere id in (select id from device where id = 1 or id = 2) and\n> > \tdatatime between '2005-05-15' and '2005-05-17';\n> > 10 minits\n> > \n> > Why?\n> \n> Run EXPLAIN ANALYSE on both queries to see how the plan has changed.\n> \n> My guess for why the plans are different is that in the first case your \n> query ends up as ...where (id=1 or id=2)...\n> \n> In the second case, the planner doesn't know what it's going to get back \n> from the subquery until it's executed it, so can't tell it just needs to \n> scan base_1,base_2. Result: you'll scan all child tables of base.\n> \n> I think the planner will occasionally evaluate constants before \n> planning, but I don't think it will ever execute a subquery and then \n> re-plan the outer query based on those results. Of course, someone might \n> pop up and tell me I'm wrong now...\n\nThats right. Partitioning doesn't work for joins in 8.1.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 13 Dec 2005 22:48:02 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query from partitions" } ]
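The planner behaviour discussed above comes down to when the qualifying id values are known: constraint exclusion works against literals available at plan time, not against values produced by a subquery at run time. A hedged sketch using the base/device schema from this thread:

-- Children base_1 and base_2 can be excluded: the ids are plan-time constants
-- that contradict the CHECK constraints on the other children.
SELECT * FROM base
WHERE id IN (1, 2)
  AND datatime BETWEEN '2005-05-15' AND '2005-05-17';

-- No exclusion: the id list only exists at execution time, so every child
-- of base is scanned and then joined against device.
SELECT * FROM base
WHERE id IN (SELECT id FROM device WHERE id = 1 OR id = 2)
  AND datatime BETWEEN '2005-05-15' AND '2005-05-17';

In 8.1 the practical workaround is to resolve the device lookup first, in the application or in a small preliminary query, and then issue the literal IN list.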
[ { "msg_contents": "\n\nResending it here as it may be more relevant here...\nAmeet\n\n---------- Forwarded message ----------\nDate: Tue, 13 Dec 2005 11:24:26 -0600 (CST)\nFrom: Ameet Kini <[email protected]>\nTo: [email protected]\nSubject: Lots of postmaster processes\n\n\n\nIn our installation of the postgres 7.4.7, we are seeing a lot of the\nfollowing postmaster processes (around 50) being spawned by the initial\npostmaster process once in a while:\n\npostgres 3977 1 1 Nov03 ? 15:11:38\n/s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n\n......\n\npostgres 31985 3977 0 10:08 ? 00:00:00\n/s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n\npostgres 31986 3977 0 10:08 ? 00:00:00\n/s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n\npostgres 31987 3977 0 10:08 ? 00:00:00\n/s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n\npostgres 31988 3977 0 10:08 ? 00:00:00\n/s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n\n......\n\n\nAt the same time when these processes being spawned, sometimes there is\nalso the checkpoint subprocess. I am not sure if that is related. The\ndocument doesn't provide any information. The other activity going on at\nthe same time is a 'COPY' statement from a client application.\n\nThese extra processes put a considerable load on the machine and cause it\nto hang up.\n\nThanks,\nAmeet\n", "msg_date": "Tue, 13 Dec 2005 11:25:40 -0600 (CST)", "msg_from": "Ameet Kini <[email protected]>", "msg_from_op": true, "msg_subject": "Lots of postmaster processes (fwd)" }, { "msg_contents": "Dunno if this has gotten a reply elsewhere, but during a checkpoint the\ndatabase can become quite busy. If that happens and performance slows\ndown, other queries will slow down as well. If you have an app where a\na high rate of incomming requests (like a busy website), existing\nbackends won't be able to keep up with demand, so incomming connections\nwill end up spawning more connections to the database.\n\nOn Tue, Dec 13, 2005 at 11:25:40AM -0600, Ameet Kini wrote:\n> \n> \n> Resending it here as it may be more relevant here...\n> Ameet\n> \n> ---------- Forwarded message ----------\n> Date: Tue, 13 Dec 2005 11:24:26 -0600 (CST)\n> From: Ameet Kini <[email protected]>\n> To: [email protected]\n> Subject: Lots of postmaster processes\n> \n> \n> \n> In our installation of the postgres 7.4.7, we are seeing a lot of the\n> following postmaster processes (around 50) being spawned by the initial\n> postmaster process once in a while:\n> \n> postgres 3977 1 1 Nov03 ? 15:11:38\n> /s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n> \n> ......\n> \n> postgres 31985 3977 0 10:08 ? 00:00:00\n> /s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n> \n> postgres 31986 3977 0 10:08 ? 00:00:00\n> /s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n> \n> postgres 31987 3977 0 10:08 ? 00:00:00\n> /s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n> \n> postgres 31988 3977 0 10:08 ? 00:00:00\n> /s/postgresql-7.4.7/bin/postmaster -D /scratch.1/postgres/condor-db-7.4.7\n> \n> ......\n> \n> \n> At the same time when these processes being spawned, sometimes there is\n> also the checkpoint subprocess. I am not sure if that is related. The\n> document doesn't provide any information. 
The other activity going on at\n> the same time is a 'COPY' statement from a client application.\n> \n> These extra processes put a considerable load on the machine and cause it\n> to hang up.\n> \n> Thanks,\n> Ameet\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 15:56:52 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lots of postmaster processes (fwd)" } ]
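If the pile-up really is checkpoint-driven, the usual 7.4-era levers are the checkpoint settings and a cap on connections, ideally with a connection pool in front of the database. The values below are placeholders for illustration, not recommendations; the parameter names are valid for 7.4 and go in postgresql.conf:

# postgresql.conf sketch; values are illustrative only
checkpoint_segments = 16    # default 3; spreads out checkpoints during bulk COPY
checkpoint_timeout  = 600   # seconds between forced checkpoints (default 300)
max_connections     = 100   # bound the number of backends the box must sustain

Raising checkpoint_segments trades more WAL disk space and a longer crash recovery for fewer checkpoint storms, which is often the right trade during large COPY loads.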
[ { "msg_contents": " \nPostgres 8.1 performance rocks (compared with 8.0), especially with the use of in-memory index bitmaps. Complex queries that used to take 30+ minutes now complete in a few minutes in 8.1. Many thanks to all the wonderful developers for the huge 8.1 performance boost.\n\n---\n \n Husam Tomeh\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: Sunday, December 11, 2005 12:39 PM\nTo: Pål Stenslet\nCc: [email protected]\nSubject: Re: [PERFORM] Should Oracle outperform PostgreSQL on a complex multidimensional query?\n\nPerhaps you should be trying this on PG 8.1? In any case, without\nspecific details of your schema or a look at EXPLAIN ANALYZE results,\nit's unlikely that anyone is going to have any useful comments for you.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 13 Dec 2005 15:18:35 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" } ]
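For readers who have not seen the 8.1 feature being credited here, the sketch below shows the plan shape involved; the table, indexes, and values are hypothetical. Two ordinary b-tree indexes are scanned into in-memory bitmaps and ANDed together, so a multi-condition filter no longer needs one perfect composite index or a full table scan.

-- Hypothetical example, PostgreSQL 8.1.
CREATE INDEX orders_customer_idx ON orders (customer_id);
CREATE INDEX orders_date_idx     ON orders (order_date);

EXPLAIN
SELECT * FROM orders
WHERE customer_id = 42
  AND order_date >= '2005-01-01';

-- 8.1 can produce a plan of this shape:
--   Bitmap Heap Scan on orders
--     ->  BitmapAnd
--           ->  Bitmap Index Scan on orders_customer_idx
--           ->  Bitmap Index Scan on orders_date_idx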
[ { "msg_contents": "Simon, \n\n> Yes, I'd expect something like this right now in 8.1; the \n> numbers stack up to PostgreSQL doing equivalent join speeds, \n> but w/o star join.\n\nI do expect a significant improvement from 8.1 using the new bitmap index because there is no need to scan the full Btree indexes. Also, the new bitmap index has a fast compressed bitmap storage and access that make the AND operations speedy with no loss like the bitmap scan lossy compression, which may enhance the selectivity on very large datasets.\n\n> You've confused the issue here since:\n> - Oracle performs star joins using a bit map index transform. \n> It is the star join that is the important bit here, not the \n> just the bitmap part.\n> - PostgreSQL does actually provide bitmap index merge, but \n> not star join\n> (YET!)\n\nYes, that is true, a star join optimization may be a big deal, I'm not sure. I've certainly talked to people with that experience from RedBrick, Teradata and Oracle.\n \n> [I've looked into this, but there seem to be multiple patent \n> claims covering various aspects of this technique, yet at \n> least other 3 vendors manage to achieve this. So far I've not \n> dug too deeply, but I understand the optimizations we'd need \n> to perform in PostgreSQL to do this.]\n\nHmm - I bet there's a way.\n\nYou should test the new bitmap index in Bizgres - it rocks hard. We're prepping a Postgres 8.1.1 patch soon, but you can get it in Bizgres CVS now.\n\n- Luke\n\n", "msg_date": "Tue, 13 Dec 2005 20:54:38 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should Oracle outperform PostgreSQL on a complex" } ]
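For context, the "star join" being discussed has this general shape: a large fact table constrained through several small dimension tables. The schema below is purely illustrative. PostgreSQL 8.1 plans it as ordinary joins helped by bitmap index scans on the fact table, whereas the vendors mentioned can transform the whole query into combined bitmap lookups against the fact table before touching the dimensions.

-- Illustrative star-schema query (hypothetical tables and columns).
SELECT d.year, p.category, SUM(f.amount) AS total
FROM   sales_fact  f
JOIN   date_dim    d ON f.date_key    = d.date_key
JOIN   product_dim p ON f.product_key = p.product_key
JOIN   store_dim   s ON f.store_key   = s.store_key
WHERE  d.year     = 2005
  AND  p.category = 'hardware'
  AND  s.region   = 'EMEA'
GROUP  BY d.year, p.category;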
[ { "msg_contents": "Hello all,\n\nIt seems that I'm starting to outgrow our current Postgres setup. We've \nbeen running a handful of machines as standalone db servers. This is all \nin a colocation environment, so everything is stuffed into 1U Supermicro \nboxes. Our standard build looks like this:\n\nSupermicro 1U w/SCA backplane and 4 bays\n2x2.8 GHz Xeons\nAdaptec 2015S \"zero channel\" RAID card\n2 or 4 x 73GB Seagate 10K Ultra 320 drives (mirrored+striped)\n2GB RAM\nFreeBSD 4.11\nPGSQL data from 5-10GB per box\n\nRecently I started studying what we were running up against in our nightly \nruns that do a ton of updates/inserts to prep things for the tasks the db \ndoes during the business day (light mix of selects/inserts/updates). \nWhile we have plenty of disk bandwidth (according to bonnie), we are \nreally dying on IOPS. I'm guessing this is a mix of a rather anemic RAID \ncontroller (ever notice how adaptec doesn't publish any real \nperformance specs on raid cards?) and having only two or four spindles \n(effectively 1 or 2 on writes).\n\nSo that's where we are...\n\nI'm new to the whole SAN thing, but did recently pick up a few used NetApp \nshelves and a Fibre Channel RAID HBA (Mylex ExtremeRAID 3000, also used) \nto toy with. I started wondering if I could put something together to \nboth get our storage on one set of boxes and allow me to get data striped \nacross more drives. Our budget is not huge and we are not adverse to \ngetting used gear where appropriate.\n\nWhat do you folks recommend? I'm just starting to look at what's out \nthere for SANs and NAS, and from what I've seen, our options are:\n\nNetApp Filers - the pluses with these are that if we use NFS, we don't \nhave to worry about either large filesystem support in FreeBSD (2TB \npractical limit), or limitation on \"growing\" partitions as the NetApp just \ndeals with that. I also understand these make backups a bit simpler. I \nhave a great, trusted, spare-stocking source for these.\n\nApple X-Serve RAID - well, it's pretty cheap. Honestly, that's all I know \nabout it - they don't talk about IOPS numbers, and I have no idea what \nlurks in that box as a RAID controller.\n\nSAN box w/integrated RAID - it seems like this might not be a good choice \nsince the RAID hardware in the box may be where I hit any limits. I also \nimagine I'm probably overpaying for some OEM RAID controller integrated \ninto the box. No idea where to look for used gear.\n\nSAN box, JBOD - this seems like it might be affordable as well. A few big \nshelves full of drives a SAN \"switch\" to plug all the shelves and hosts \ninto and a FC RAID card in each host. No idea where to look for used gear \nhere either.\n\nYou'll note that I'm being somewhat driven by my OS of choice, FreeBSD. \nUnlike Solaris or other commercial offerings, there is no nice volume \nmanagement available. While I'd love to keep managing a dozen or so \nFreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume \nmanagement really shines and Postgres performs well on it.\n\nLastly, one thing that I'm not yet finding in trying to educate myself on \nSANs is a good overview of what's come out in the past few years that's \nmore affordable than the old big-iron stuff. For example I saw some brief \ninfo on this list's archives about the Dell/EMC offerings. Anything else \nin that vein to look at?\n\nI hope this isn't too far off topic for this list. Postgres is the \nmain application that I'm looking to accomodate. 
Anything else I can do \nwith whatever solution we find is just gravy...\n\nThanks!\n\nCharles\n\n", "msg_date": "Wed, 14 Dec 2005 01:43:41 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": true, "msg_subject": "SAN/NAS options" }, { "msg_contents": "On Wed, 14 Dec 2005, Charles Sprickman wrote:\n\n[big snip]\n\nThe list server seems to be regurgitating old stuff, and in doing so it \nreminded me to thank everyone for their input. I was kind of waiting to \nsee if anyone who was very pro-NAS/SAN was going to pipe up, but it looks \nlike most people are content with per-host storage.\n\nYou've given me a lot to go on... Now I'm going to have to do some \nresearch as to real-world RAID controller performance. It's vexing (to \nsay the least) that most vendors don't supply any raw throughput or TPS \nstats on this stuff...\n\nAnyhow, thanks again. You'll probably see me back here in the coming \nmonths as I try to shake some mysql info out of my brain as our pgsql DBA \ngets me up to speed on pgsql and what specifically he's doing to stress \nthings.\n\nCharles\n\n> I hope this isn't too far off topic for this list. Postgres is the main \n> application that I'm looking to accomodate. Anything else I can do with \n> whatever solution we find is just gravy...\n>\n> Thanks!\n>\n> Charles\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Wed, 21 Dec 2005 00:58:54 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": ">\n>> I hope this isn't too far off topic for this list. Postgres is \n>> the main application that I'm looking to accomodate. Anything \n>> else I can do with whatever solution we find is just gravy...\n> You've given me a lot to go on... Now I'm going to have to do some \n> research as to real-world RAID controller performance. It's vexing \n> (to say the least) that most vendors don't supply any raw \n> throughput or TPS stats on this stuff...\n\nOne word of advice. Stay away from Dell kit. The PERC 4 controllers \nthey use don't implement RAID 10 properly. It's RAID 1 + JBOD array. \nIt also has generally dismal IOPS performance too. You might get away \nwith running software RAID, either in conjunction with, or entirely \navoiding the card.\n", "msg_date": "Wed, 21 Dec 2005 10:17:27 +0000", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Charles,\n\nOn 12/20/05 9:58 PM, \"Charles Sprickman\" <[email protected]> wrote:\n\n> You've given me a lot to go on... Now I'm going to have to do some\n> research as to real-world RAID controller performance. It's vexing (to\n> say the least) that most vendors don't supply any raw throughput or TPS\n> stats on this stuff...\n\nTake a look at this:\n http://www.wlug.org.nz/HarddiskBenchmarks\n\n> Anyhow, thanks again. You'll probably see me back here in the coming\n> months as I try to shake some mysql info out of my brain as our pgsql DBA\n> gets me up to speed on pgsql and what specifically he's doing to stress\n> things.\n\nCool!\n\nBTW - based on the above benchmark page, I just immediately ordered 2 x of\nthe Areca 1220 SATA controllers (\nhttp://www.areca.com.tw/products/html/pcie-sata.htm) so that we can compare\nthem to the 3Ware 9550SX that we've been using. 
The 3Ware controllers have\nbeen super fast on sequential access, but I'm concerned about their random\nIOPs. The Areca's aren't as popular, and there's consequently less volume\nof them, but people who use them rave about them.\n\n- Luke \n\n\n", "msg_date": "Wed, 21 Dec 2005 02:46:54 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" } ]
[ { "msg_contents": "Hello all,\n\nIt seems that I'm starting to outgrow our current Postgres setup. We've been \nrunning a handful of machines as standalone db servers. This is all in a \ncolocation environment, so everything is stuffed into 1U Supermicro boxes. Our \nstandard build looks like this:\n\nSupermicro 1U w/SCA backplane and 4 bays\n2x2.8 GHz Xeons\nAdaptec 2015S \"zero channel\" RAID card\n2 or 4 x 73GB Seagate 10K Ultra 320 drives (mirrored+striped)\n2GB RAM\nFreeBSD 4.11\nPGSQL data from 5-10GB per box\n\nRecently I started studying what we were running up against in our nightly runs \nthat do a ton of updates/inserts to prep things for the tasks the db does \nduring the business day (light mix of selects/inserts/updates). While we have \nplenty of disk bandwidth (according to bonnie), we are really dying on IOPS. \nI'm guessing this is a mix of a rather anemic RAID controller (ever notice how \nadaptec doesn't publish any real performance specs on raid cards?) and having \nonly two or four spindles (effectively 1 or 2 on writes).\n\nSo that's where we are...\n\nI'm new to the whole SAN thing, but did recently pick up a few used NetApp \nshelves and a Fibre Channel RAID HBA (Mylex ExtremeRAID 3000, also used) to toy \nwith. I started wondering if I could put something together to both get our \nstorage on one set of boxes and allow me to get data striped across more \ndrives. Our budget is not huge and we are not adverse to getting used gear \nwhere appropriate.\n\nWhat do you folks recommend? I'm just starting to look at what's out there for \nSANs and NAS, and from what I've seen, our options are:\n\nNetApp Filers - the pluses with these are that if we use NFS, we don't have to \nworry about either large filesystem support in FreeBSD (2TB practical limit), \nor limitation on \"growing\" partitions as the NetApp just deals with that. I \nalso understand these make backups a bit simpler. I have a great, trusted, \nspare-stocking source for these.\n\nApple X-Serve RAID - well, it's pretty cheap. Honestly, that's all I know \nabout it - they don't talk about IOPS numbers, and I have no idea what lurks in \nthat box as a RAID controller.\n\nSAN box w/integrated RAID - it seems like this might not be a good choice since \nthe RAID hardware in the box may be where I hit any limits. I also imagine I'm \nprobably overpaying for some OEM RAID controller integrated into the box. No \nidea where to look for used gear.\n\nSAN box, JBOD - this seems like it might be affordable as well. A few big \nshelves full of drives a SAN \"switch\" to plug all the shelves and hosts into \nand a FC RAID card in each host. No idea where to look for used gear here \neither.\n\nYou'll note that I'm being somewhat driven by my OS of choice, FreeBSD. Unlike \nSolaris or other commercial offerings, there is no nice volume management \navailable. While I'd love to keep managing a dozen or so FreeBSD boxes, I \ncould be persuaded to go to Solaris x86 if the volume management really shines \nand Postgres performs well on it.\n\nLastly, one thing that I'm not yet finding in trying to educate myself on SANs \nis a good overview of what's come out in the past few years that's more \naffordable than the old big-iron stuff. For example I saw some brief info on \nthis list's archives about the Dell/EMC offerings. Anything else in that vein \nto look at?\n\nI hope this isn't too far off topic for this list. Postgres is the main \napplication that I'm looking to accomodate. 
Anything else I can do with \nwhatever solution we find is just gravy...\n\nThanks!\n\nCharles\n\n", "msg_date": "Wed, 14 Dec 2005 01:56:10 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": true, "msg_subject": "SAN/NAS options" }, { "msg_contents": "Charles Sprickman wrote:\n> Hello all,\n> \n> It seems that I'm starting to outgrow our current Postgres setup. We've \n> been running a handful of machines as standalone db servers. This is \n> all in a colocation environment, so everything is stuffed into 1U \n> Supermicro boxes. Our standard build looks like this:\n> \n> Supermicro 1U w/SCA backplane and 4 bays\n> 2x2.8 GHz Xeons\n> Adaptec 2015S \"zero channel\" RAID card\n> 2 or 4 x 73GB Seagate 10K Ultra 320 drives (mirrored+striped)\n> 2GB RAM\n> FreeBSD 4.11\n> PGSQL data from 5-10GB per box\n> \n> Recently I started studying what we were running up against in our \n> nightly runs that do a ton of updates/inserts to prep things for the \n> tasks the db does during the business day (light mix of \n> selects/inserts/updates). While we have plenty of disk bandwidth \n> (according to bonnie), we are really dying on IOPS. I'm guessing this is \n> a mix of a rather anemic RAID controller (ever notice how adaptec \n> doesn't publish any real performance specs on raid cards?) and having \n> only two or four spindles (effectively 1 or 2 on writes).\n> \n> So that's where we are...\n> \n> I'm new to the whole SAN thing, but did recently pick up a few used \n> NetApp shelves and a Fibre Channel RAID HBA (Mylex ExtremeRAID 3000, \n> also used) to toy with. I started wondering if I could put something \n> together to both get our storage on one set of boxes and allow me to get \n> data striped across more drives. Our budget is not huge and we are not \n> adverse to getting used gear where appropriate.\n> \n> What do you folks recommend? I'm just starting to look at what's out \n> there for SANs and NAS, and from what I've seen, our options are:\n> \n\nLeaving the whole SAN issue for a moment:\n\nIt would be interesting to see if moving to FreeBSD 6.0 would help you - \nthe vfs layer is no longer throttled by the (SMP) GIANT lock in this \nversion, and that may make quite a difference (given you have SMP boxes).\n\nAnother interesting thing to try is rebuilding the database ufs \nfilesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or \n16K/2K - can't recall the default on 4.x). I found this to give a factor \nof 2 speedup on random disk access (specifically queries doing indexed \njoins).\n\nIs it mainly your 2 disk machines that are IOPS bound? if so, a cheap \noption may be to buy 2 more cheetahs for them! If it's the 4's, well how \nabout a 2U U320 diskpack from whomever supplies you the Supermicro boxes?\n\nI have just noticed Luke's posting - I would second the advice to avoid \nSAN - in my experience it's an expensive way to buy storage.\n\nbest wishes\n\nMark\n\n", "msg_date": "Wed, 14 Dec 2005 20:28:56 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "\nThe Apple is, as you say, cheap (except, the Apple markup on the disks\nfuzzes that a bit). Its easy to set up, and has been quite reliable for me,\nbut do not expect anything resembling good DB performance out of it (I gave\nup running anything but backup DBs on it). From the mouth of Apple guys, it\n(and Xsan) are heavily optimized for sequential access. 
They want to sell\npiles of these to the music/film industry, where they have some cred. Oracle\nhas apparently gotten some performance gains through raw device pixie dust\nand voodoo, but even as a (reluctant, kicking-and-screaming) Oracle guy I\nwouldn't go there.\n\nOther goofy things about it: it isn't 1 device with 14 disks and redundant\ncontrollers. Its 2 7 disk arrays with non-redundant controllers. It doesn't\ndo RAID10.\n\nIf you want a gob-o-space with no performance requirements, its fine.\nOtherwise...\n\nOn 12/14/05 1:56 AM, \"Charles Sprickman\" <[email protected]> wrote:\n\n> Hello all,\n> \n> It seems that I'm starting to outgrow our current Postgres setup. We've been\n> running a handful of machines as standalone db servers. This is all in a\n> colocation environment, so everything is stuffed into 1U Supermicro boxes.\n> Our \n> standard build looks like this:\n> \n> Supermicro 1U w/SCA backplane and 4 bays\n> 2x2.8 GHz Xeons\n> Adaptec 2015S \"zero channel\" RAID card\n> 2 or 4 x 73GB Seagate 10K Ultra 320 drives (mirrored+striped)\n> 2GB RAM\n> FreeBSD 4.11\n> PGSQL data from 5-10GB per box\n> \n> Recently I started studying what we were running up against in our nightly\n> runs \n> that do a ton of updates/inserts to prep things for the tasks the db does\n> during the business day (light mix of selects/inserts/updates). While we have\n> plenty of disk bandwidth (according to bonnie), we are really dying on IOPS.\n> I'm guessing this is a mix of a rather anemic RAID controller (ever notice how\n> adaptec doesn't publish any real performance specs on raid cards?) and having\n> only two or four spindles (effectively 1 or 2 on writes).\n> \n> So that's where we are...\n> \n> I'm new to the whole SAN thing, but did recently pick up a few used NetApp\n> shelves and a Fibre Channel RAID HBA (Mylex ExtremeRAID 3000, also used) to\n> toy \n> with. I started wondering if I could put something together to both get our\n> storage on one set of boxes and allow me to get data striped across more\n> drives. Our budget is not huge and we are not adverse to getting used gear\n> where appropriate.\n> \n> What do you folks recommend? I'm just starting to look at what's out there\n> for \n> SANs and NAS, and from what I've seen, our options are:\n> \n> NetApp Filers - the pluses with these are that if we use NFS, we don't have to\n> worry about either large filesystem support in FreeBSD (2TB practical limit),\n> or limitation on \"growing\" partitions as the NetApp just deals with that. I\n> also understand these make backups a bit simpler. I have a great, trusted,\n> spare-stocking source for these.\n> \n> Apple X-Serve RAID - well, it's pretty cheap. Honestly, that's all I know\n> about it - they don't talk about IOPS numbers, and I have no idea what lurks\n> in \n> that box as a RAID controller.\n> \n> SAN box w/integrated RAID - it seems like this might not be a good choice\n> since \n> the RAID hardware in the box may be where I hit any limits. I also imagine\n> I'm \n> probably overpaying for some OEM RAID controller integrated into the box. No\n> idea where to look for used gear.\n> \n> SAN box, JBOD - this seems like it might be affordable as well. A few big\n> shelves full of drives a SAN \"switch\" to plug all the shelves and hosts into\n> and a FC RAID card in each host. No idea where to look for used gear here\n> either.\n> \n> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD. 
Unlike\n> Solaris or other commercial offerings, there is no nice volume management\n> available. While I'd love to keep managing a dozen or so FreeBSD boxes, I\n> could be persuaded to go to Solaris x86 if the volume management really shines\n> and Postgres performs well on it.\n> \n> Lastly, one thing that I'm not yet finding in trying to educate myself on SANs\n> is a good overview of what's come out in the past few years that's more\n> affordable than the old big-iron stuff. For example I saw some brief info on\n> this list's archives about the Dell/EMC offerings. Anything else in that vein\n> to look at?\n> \n> I hope this isn't too far off topic for this list. Postgres is the main\n> application that I'm looking to accomodate. Anything else I can do with\n> whatever solution we find is just gravy...\n> \n> Thanks!\n> \n> Charles\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n\n", "msg_date": "Wed, 14 Dec 2005 11:53:52 -0500", "msg_from": "Andrew Rawnsley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Wed, Dec 14, 2005 at 11:53:52AM -0500, Andrew Rawnsley wrote:\n>Other goofy things about it: it isn't 1 device with 14 disks and redundant\n>controllers. Its 2 7 disk arrays with non-redundant controllers. It doesn't\n>do RAID10.\n\nAnd if you want hot spares you need *two* per tray (one for each\ncontroller). That definately changes the cost curve. :)\n\nMike Stone\n", "msg_date": "Wed, 14 Dec 2005 13:32:15 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote:\n> Another interesting thing to try is rebuilding the database ufs \n> filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or \n> 16K/2K - can't recall the default on 4.x). I found this to give a factor \n> of 2 speedup on random disk access (specifically queries doing indexed \n> joins).\n\nEven if you're doing a lot of random IO? I would think that random IO\nwould perform better if you use smaller (8K) blocks, since there's less\ndata being read in and then just thrown away that way.\n\n> Is it mainly your 2 disk machines that are IOPS bound? if so, a cheap \n> option may be to buy 2 more cheetahs for them! If it's the 4's, well how \n> about a 2U U320 diskpack from whomever supplies you the Supermicro boxes?\n\nAlso, on the 4 drive machines if you can spare the room you might see a\nbig gain by putting the tables on one mirror and the OS and transaction\nlogs on the other.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 16:18:01 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:\n You'll note that I'm being somewhat driven by my OS of choice, FreeBSD. \n> Unlike Solaris or other commercial offerings, there is no nice volume \n> management available. While I'd love to keep managing a dozen or so \n> FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume \n> management really shines and Postgres performs well on it.\n\nHave you looked at vinum? 
It might not qualify as a true volume manager,\nbut it's still pretty handy.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 16:19:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote:\n> \n>>Another interesting thing to try is rebuilding the database ufs \n>>filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or \n>>16K/2K - can't recall the default on 4.x). I found this to give a factor \n>>of 2 speedup on random disk access (specifically queries doing indexed \n>>joins).\n> \n> \n> Even if you're doing a lot of random IO? I would think that random IO\n> would perform better if you use smaller (8K) blocks, since there's less\n> data being read in and then just thrown away that way.\n> \n> \n\nYeah, that's what I would have expected too! but the particular queries \nI tested do a ton of random IO (correlation of 0.013 on the join column \nfor the big table). I did wonder if the gain has something to do with \nthe underlying RAID stripe size (64K or 256K in my case), as I have only \ntested the 32K vs 8K/16K on RAIDed systems.\n\nI guess for a system where the number of concurrent users give rise to \nmemory pressure, it will cause more thrashing of the file buffer cache, \nmuch could be a net loss.\n\nStill worth trying out I think, you will know soon enough if it is a win \nor lose!\n\nNote that I did *not* alter Postgres page/block size (BLCKSZ) from 8K, \nso no dump/reload is required to test this out.\n\ncheers\n\nMark\n\n", "msg_date": "Sat, 17 Dec 2005 11:49:55 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:\n>Even if you're doing a lot of random IO? I would think that random IO\n>would perform better if you use smaller (8K) blocks, since there's less\n>data being read in and then just thrown away that way.\n\nThe overhead of reading an 8k block instead of a 32k block is too small\nto measure on modern hardware. The seek is what dominates; leaving the\nread head on a little longer and then transmitting a little more over a\n200 megabyte channel is statistical fuzz.\n\nMike Stone\n", "msg_date": "Fri, 16 Dec 2005 17:51:03 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Fri, Dec 16, 2005 at 05:51:03PM -0500, Michael Stone wrote:\n> On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:\n> >Even if you're doing a lot of random IO? I would think that random IO\n> >would perform better if you use smaller (8K) blocks, since there's less\n> >data being read in and then just thrown away that way.\n> \n> The overhead of reading an 8k block instead of a 32k block is too small\n> to measure on modern hardware. The seek is what dominates; leaving the\n> read head on a little longer and then transmitting a little more over a\n> 200 megabyte channel is statistical fuzz.\n\nTrue, but now you've got 4x the amount of data in your cache that you\nprobably don't need.\n\nLooks like time to do some benchmarking...\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 18:25:25 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Fri, Dec 16, 2005 at 06:25:25PM -0600, Jim C. Nasby wrote:\n>True, but now you've got 4x the amount of data in your cache that you\n>probably don't need.\n\nOr you might be 4x more likely to have data cached that's needed later.\nIf you're hitting disk either way, that's probably more likely than the\nextra IO pushing something critical out--if *all* the important stuff\nwere cached you wouldn't be doing the seeks in the first place. This\nwill obviously be heavily dependent on the amount of ram you've got and\nyour workload, so (as always) you'll have to benchmark it to get past\nthe hand-waving stage.\n\nMike Stone\n", "msg_date": "Fri, 16 Dec 2005 19:48:00 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:\n> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD. \n> \n>>Unlike Solaris or other commercial offerings, there is no nice volume \n>>management available. While I'd love to keep managing a dozen or so \n>>FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume \n>>management really shines and Postgres performs well on it.\n> \n> \n> Have you looked at vinum? It might not qualify as a true volume manager,\n> but it's still pretty handy.\n\nI am looking very closely at purchasing a SANRAD Vswitch 2000, a Nexsan\nSATABoy with SATA disks, and the Qlogic iscsi controller cards.\n\nNexsan claims up to 370MB/s sustained per controller and 44,500 IOPS but\nI'm not sure if that is good or bad. It's certainly faster than the LSI\nmegaraid controller I'm using now with a raid 1 mirror.\n\nThe sanrad box looks like it saves money in that you don't have to by\ncontroller cards for everything, but for I/O intensive servers such as\nthe database server, I would end up buying an iscsi controller card anyway.\n\nAt this point I'm not sure what the best solution is. I like the idea\nof having logical disks available though iscsi because of how flexible\nit is, but I really don't want to spend $20k (10 for the nexsan and 10\nfor the sanrad) and end up with poor performance.\n\nOn other advantage to iscsi is that I can go completely diskless on my\nservers and boot from iscsi which means that I don't have to have spare\ndisks for each host, now I just have spare disks for the nexsan chassis.\n\nSo the question becomes: has anyone put postgres on an iscsi san, and if\nso how did it perform?\n\nschu\n\n\n", "msg_date": "Mon, 19 Dec 2005 15:41:09 -0900", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Following up to myself again...\n\nOn Wed, 14 Dec 2005, Charles Sprickman wrote:\n\n> Hello all,\n>\n> Supermicro 1U w/SCA backplane and 4 bays\n> 2x2.8 GHz Xeons\n> Adaptec 2015S \"zero channel\" RAID card\n\nI don't want to throw away the four machines like that that we have. I do \nwant to throw away the ZCR cards... 
:) If I ditch those I still have a 1U \nbox with a U320 scsi plug on the back.\n\nI'm vaguely considering pairing these two devices:\n\nhttp://www.areca.us/products/html/products.htm\n\nThat's an Areca 16 channel SATA II (I haven't even read up on what's new \nin SATA II) RAID controller with an optional U320 SCSI daughter card to \nconnect to the host(s).\n\nhttp://www.chenbro.com.tw/Chenbro_Special/RM321.php\n\nHow can I turn that box down? Those people in the picture look very \nexcited about it? Seriously though, it looks like an interesting and \neconomical pairing that gives me most of what I'm looking for:\n\n-a modern RAID engine\n-small form factor\n-remote management of the array\n-ability to reuse my current db hosts that are disk-bound\n\nDisadvantages:\n\n-only 1 or 2 hosts per box\n-more difficult to move storage from host to host (compared to a SAN or \nNAS system)\n-no fancy NetApp features like snapshots\n-I have no experience with Areca SATA->SCSI RAID controllers\n\nAny thoughts on this? The controller looks to be about $1500, the \nenclosure about $400, and the drives are no great mystery, cost would \ndepend on what total capacity I'm looking for.\n\nOur initial plan is to set one up for storage for a mail archive project, \nand to also have a host use this storage to host replicated copies of all \nPostgres databases. If things look good, we'd start moving our main PG \nhosts to use a similar RAID box.\n\nThanks,\n\nCharles\n\n", "msg_date": "Sat, 14 Jan 2006 21:37:01 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Charles,\n\nOn 1/14/06 6:37 PM, \"Charles Sprickman\" <[email protected]> wrote:\n\n> I'm vaguely considering pairing these two devices:\n> \n> http://www.areca.us/products/html/products.htm\n> \n> That's an Areca 16 channel SATA II (I haven't even read up on what's new\n> in SATA II) RAID controller with an optional U320 SCSI daughter card to\n> connect to the host(s).\n\nI'm confused - SATA with a SCSI daughter card? Where does the SCSI go?\n\nThe Areca has a number (8,12,16) of single drive attach SATA ports coming\nout of it, each of which will go to a disk drive connection on the\nbackplane.\n \n> http://www.chenbro.com.tw/Chenbro_Special/RM321.php\n> \n> How can I turn that box down? Those people in the picture look very\n> excited about it? Seriously though, it looks like an interesting and\n> economical pairing that gives me most of what I'm looking for:\n\nWhat a picture! I'm totally enthusiastic all of a sudden! I'm putting !!!\nat the end of every sentence!\n\nWe just bought 4 very similar systems that use the chassis from California\nDesign - our latest favorite source:\n http://www.asacomputers.com/\n\nThey did an excellent job of setting the systems up, with proper labeling\nand Quality Control. They also installed Fedora Core 4 and set up the\nfilesystems, the only mistake they made was that they didn't enable 2TB\nclipping so that we had to rebuild the RAIDs (and install CentOS with the\nxfs filesystem).\n\nWe paid $10.4K each for 16x 400GB WD RE2 SATA II drives, 16GB RAM and two\nOpteron 250s. We also put a single 200GB SATA system drive into each. RAID\ncard is the 3Ware 9550SX.\n \nPerformance has been stunning - we're getting 800MB/s sustained I/O\nthroughput using the two 9550SX controllers in parallel.\n\n> Any thoughts on this? 
The controller looks to be about $1500, the\n> enclosure about $400, and the drives are no great mystery, cost would\n> depend on what total capacity I'm looking for.\n\nI'd get ASA to build it for you - use the Tyan 2882 series motherboard for\ngreatest stablity. They may try to sell you hard on the SuperMicro boards,\nwe've had less luck with them.\n \n> Our initial plan is to set one up for storage for a mail archive project,\n> and to also have a host use this storage to host replicated copies of all\n> Postgres databases. If things look good, we'd start moving our main PG\n> hosts to use a similar RAID box.\n\nGood approach.\n\nI'm personally spending as much time using these machines as I can - they\nare the fastest I've been on in a *long* time.\n \n- Luke\n\n\n", "msg_date": "Sat, 14 Jan 2006 19:02:07 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Sat, 14 Jan 2006, Luke Lonergan wrote:\n\n> Charles,\n>\n> On 1/14/06 6:37 PM, \"Charles Sprickman\" <[email protected]> wrote:\n>\n>> I'm vaguely considering pairing these two devices:\n>>\n>> http://www.areca.us/products/html/products.htm\n>>\n>> That's an Areca 16 channel SATA II (I haven't even read up on what's new\n>> in SATA II) RAID controller with an optional U320 SCSI daughter card to\n>> connect to the host(s).\n>\n> I'm confused - SATA with a SCSI daughter card? Where does the SCSI go?\n\nBad ASCII diagram follows (D=disk, C=controller H=host):\n\n SATA ____\nD -------| | SCSI ________\nD -------| C |--------| H |\nD -------| | |________|\nD -------|____|\n\n(etc. up\nto 16\ndrives)\n\nThe drives and the controller go in the Chenbro case. U320 SCSI from the \nRAID controller in the Chenbro case to the 1U server.\n\nC\n", "msg_date": "Sat, 14 Jan 2006 22:23:05 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "> Following up to myself again...\n>\n> On Wed, 14 Dec 2005, Charles Sprickman wrote:\n>\n>> Hello all,\n>>\n>> Supermicro 1U w/SCA backplane and 4 bays\n>> 2x2.8 GHz Xeons\n>> Adaptec 2015S \"zero channel\" RAID card\n>\n> I don't want to throw away the four machines like that that we have.\n> I do want to throw away the ZCR cards... :) If I ditch those I still\n> have a 1U box with a U320 scsi plug on the back.\n>\n> I'm vaguely considering pairing these two devices:\n\n> http://www.areca.us/products/html/products.htm\n> http://www.chenbro.com.tw/Chenbro_Special/RM321.php\n\n> How can I turn that box down? Those people in the picture look very\n> excited about it? Seriously though, it looks like an interesting and\n> economical pairing that gives me most of what I'm looking for:\n\nThe combination definitely looks attractive. I have only been hearing\npositive things about the Areca cards; the overall combination sounds\npretty attractive.\n\n> Disadvantages:\n>\n> -only 1 or 2 hosts per box\n> -more difficult to move storage from host to host (compared to a SAN\n> or NAS system)\n> -no fancy NetApp features like snapshots\n> -I have no experience with Areca SATA->SCSI RAID controllers\n>\n> Any thoughts on this? 
The controller looks to be about $1500, the\n> enclosure about $400, and the drives are no great mystery, cost would\n> depend on what total capacity I'm looking for.\n\nAnother \"usage model\" that could be appropriate would be\nATA-over-Ethernet...\n\n<http://en.wikipedia.org/wiki/ATA-over-Ethernet>\n\n> Our initial plan is to set one up for storage for a mail archive\n> project, and to also have a host use this storage to host replicated\n> copies of all Postgres databases. If things look good, we'd start\n> moving our main PG hosts to use a similar RAID box.\n\nWe're thinking about some stuff like this to host things that require\nbulky amounts of disk that are otherwise \"not high TPC\" sorts of apps.\nThis is definitely not a \"gold plated\" answer, compared to the NetApp\nand EMC boxes of the world, but can be useful in contexts where they\nare too expensive.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://linuxdatabases.info/info/x.html\nIt is usually a good idea to put a capacitor of a few microfarads\nacross the output, as shown.\n", "msg_date": "Sat, 14 Jan 2006 23:04:52 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "Charles,\n\nOn 1/14/06 7:23 PM, \"Charles Sprickman\" <[email protected]> wrote:\n\n> The drives and the controller go in the Chenbro case. U320 SCSI from the\n> RAID controller in the Chenbro case to the 1U server.\n\nThanks for the explanation - I didn't click on your Areca link until now,\nthinking it was a generic link to their products page.\n\nLooks great - I think this might do better than the SATA -> FC products\nbecause of the use of faster processors, but I'd keep my expectations low\nuntil we see some performance data on it.\n\nWe've had some very poor experiences with Fibre Channel attach SATA disk\ncontrollers. A large vendor of same ultimately concluded that they will no\nlonger recommend them for database use because of the terrible performance\nof their unit. We ended up with a 110MB/s bottleneck on the controller when\nusing 200MB/s FC connections.\n\nWith the dual U320 attach and 16 drives, you should be able to saturate the\nSCSI busses at about 600MB/s. It would be great if you could post your I/O\nresults here!\n\n- Luke\n\n\n", "msg_date": "Sun, 15 Jan 2006 09:21:00 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" }, { "msg_contents": "On Sat, Jan 14, 2006 at 09:37:01PM -0500, Charles Sprickman wrote:\n> Following up to myself again...\n> \n> On Wed, 14 Dec 2005, Charles Sprickman wrote:\n> \n> >Hello all,\n> >\n> >Supermicro 1U w/SCA backplane and 4 bays\n> >2x2.8 GHz Xeons\n> >Adaptec 2015S \"zero channel\" RAID card\n> \n> I don't want to throw away the four machines like that that we have. I do \n> want to throw away the ZCR cards... 
:) If I ditch those I still have a 1U \n> box with a U320 scsi plug on the back.\n> \n> I'm vaguely considering pairing these two devices:\n> \n> http://www.areca.us/products/html/products.htm\n> \n> That's an Areca 16 channel SATA II (I haven't even read up on what's new \n> in SATA II) RAID controller with an optional U320 SCSI daughter card to \n> connect to the host(s).\n> \n> http://www.chenbro.com.tw/Chenbro_Special/RM321.php\n\nNot sure how significant, but the RM321 backplane claims to support\nSATA 150 (aka SATA I) only.\n\n -Mike\n", "msg_date": "Wed, 18 Jan 2006 13:54:16 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN/NAS options" } ]
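One software-side complement to the spindle discussion in this thread, for installations on 8.0 or later: tablespaces let heavily hit tables or indexes live on a second array, and pg_xlog is commonly relocated (by stopping the server and symlinking the directory) so WAL writes do not compete with data reads. The paths and object names below are made up for illustration.

-- Requires PostgreSQL 8.0 or later; paths and names are illustrative only.
CREATE TABLESPACE fastdisk LOCATION '/mnt/array2/pgdata';

ALTER TABLE big_history SET TABLESPACE fastdisk;              -- hypothetical table
CREATE INDEX big_history_id_idx ON big_history (id) TABLESPACE fastdisk;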
[ { "msg_contents": "Charles,\n\n> Lastly, one thing that I'm not yet finding in trying to \n> educate myself on SANs is a good overview of what's come out \n> in the past few years that's more affordable than the old \n> big-iron stuff. For example I saw some brief info on this \n> list's archives about the Dell/EMC offerings. Anything else \n> in that vein to look at?\n\nMy two cents: SAN is a bad investment, go for big internal storage.\n\nThe 3Ware or Areca SATA RAID adapters kick butt and if you look in the\nnewest colos (I was just in ours \"365main.net\" today), you will see rack\non rack of machines with from 4 to 16 internal SATA drives. Are they\nall DB servers? Not necessarily, but that's where things are headed.\n\nYou can get a 3U server with dual opteron 250s, 16GB RAM and 16x 400GB\nSATAII drives with the 3Ware 9550SX controller for $10K - we just\nordered 4 of them. I don't think you can buy an external disk chassis\nand a Fibre channel NIC for that.\n\nPerformance? 800MB/s RAID5 reads, 400MB/s RAID5 writes. Random IOs are\nalso very high for RAID10, but we don't use it so YMMV - look at Areca\nand 3Ware.\n\nManagability? Good web management interfaces with 6+ years of\ndevelopment from 3Ware, e-mail, online rebuild options, all the goodies.\nNo \"snapshot\" or offline backup features like the high-end SANs, but do\nyou really need it?\n\nNeed more power or storage over time? Run a parallel DB like Bizgres\nMPP, you can add more servers with internal storage and increase your\nI/O, CPU and memory.\n\n- Luke\n\n", "msg_date": "Wed, 14 Dec 2005 02:10:20 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" } ]
[ { "msg_contents": "Dear Sir or Madam:\n\nThe function \"convert_IN_to_join(Query *parse, SubLink *sublink)\", from\nfile: <postgres-8.0.4 root>/src/backend/optimizer/plan/subselect.c, is\nresponsible for converting IN type sublinks to joins, whenever appropriate.\n\nThe following lines of code, extracted from convert_IN_to_join, verify if\nthe subquery is correlated:\n\n /*\n * The sub-select must not refer to any Vars of the parent query.\n * (Vars of higher levels should be okay, though.)\n */\n if (contain_vars_of_level((Node *) subselect, 1))\n return NULL; \n\nBy commenting this code region I was able to optimize several correlated\nsubqueries. Apparently, the rest of the PostgreSQL code is ready to handle\nthe \"convert subquery to join\" optimization. Is this test really necessary?\n\nPlease analyze the following example:\n\nDDL:\nCREATE TABLE \"students\" ( sid char(5) primary key, name char(20) not null,\nage integer, email char(20) not null, unique( email ), avgrade float not\nnull );\nCREATE TABLE \"enrolled\" ( sid char(5), cid char(5), grade real not null,\nprimary key(sid,cid), foreign key(sid) references students (sid) on delete\nrestrict );\n\nDML:\n\n1) correlated IN subquery:\n\"Find all students who's grade in 'TFC' class is higher than their average\ngrade.\"\nselect a.sid, a.name from students a where a.sid IN ( select i.sid from\nenrolled i where i.grade > a.avgrade AND i.cid = 'TFC');\n\nQUERY PLAN:\n\n Seq Scan on students a (cost=0.00..5763804.50 rows=5000 width=33)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on enrolled i (cost=0.00..1144.00 rows=3473 width=9)\n Filter: ((grade > $0) AND (cid = 'TFC'::bpchar))\n\n2) the same query after commenting out the above code region in\nconvert_IN_to_join:\n\nQUERY PLAN:\n\n Hash Join (cost=1050.24..1518.21 rows=693 width=33)\n Hash Cond: (\"outer\".sid = \"inner\".sid)\n Join Filter: (\"inner\".grade > \"outer\".avgrade)\n -> Seq Scan on students a (cost=0.00..367.00 rows=10000 width=41)\n -> Hash (cost=1045.04..1045.04 rows=2078 width=13)\n -> HashAggregate (cost=1045.04..1045.04 rows=2078 width=13)\n -> Seq Scan on enrolled i (cost=0.00..1019.00 rows=10417\nwidth=13)\n Filter: (cid = 'TFC'::bpchar)\n\n\n3) Clearly, it is possible to extract the IN subquery from query 1 since the\nouter attribute a.sid matches, at most once, with the inner tuple i.sid.\nAlthough s.sid is not a primary key by itself, together with \"i.cid = 'TFC'\"\nconjunct, it forms a unique tuple. 
Here is an efficient alternative to query\n1:\n\nselect a.sid, a.name from students a, enrolled i where a.sid = i.sid AND\ni.cid = 'TFC' AND i.grade > a.avgrade;\n\nQUERY PLAN:\n\n Hash Join (cost=480.00..2366.86 rows=3473 width=33)\n Hash Cond: (\"outer\".sid = \"inner\".sid)\n Join Filter: (\"outer\".grade > \"inner\".avgrade)\n -> Seq Scan on enrolled i (cost=0.00..1019.00 rows=10417 width=13)\n Filter: (cid = 'TFC'::bpchar)\n -> Hash (cost=367.00..367.00 rows=10000 width=41)\n -> Seq Scan on students a (cost=0.00..367.00 rows=10000 width=41)\n\n\nI have verified that both 2) and 3) return the exact same tuples, query 1)\nnever completed due to the highly inefficient execution plan.\nPlease help me with this issue.\n\nKind regards,\nFrancisco Santos\n\n \n\n\n\n\n\n\nDear Sir or Madam:\n\n\n\n\n\n\n\nDear Sir or Madam:\n\nThe function \"convert_IN_to_join(Query *parse, SubLink *sublink)\",\nfrom file: <postgres-8.0.4 root>/src/backend/optimizer/plan/subselect.c,\nis responsible for converting IN type sublinks to joins, whenever\nappropriate.\n\nThe following lines of code, extracted from convert_IN_to_join, verify if the\nsubquery is correlated:\n\n      /*\n       * The sub-select must not refer to any Vars of the parent query.\n       * (Vars of higher levels should be okay, though.)\n       */\n      if (contain_vars_of_level((Node *) subselect, 1))\n              return NULL; \n\nBy commenting this code region I was able to optimize several correlated\nsubqueries. Apparently, the rest of the PostgreSQL code is ready to handle the\n\"convert subquery to join\" optimization. Is this test really necessary?\n\nPlease analyze the following example:\n\nDDL:\nCREATE TABLE \"students\" ( sid char(5) primary key, name char(20) not\nnull, age integer, email char(20) not null, unique( email ), avgrade float not\nnull );\nCREATE TABLE \"enrolled\" ( sid char(5), cid char(5), grade real not\nnull, primary key(sid,cid), foreign key(sid) references students (sid) on\ndelete restrict  );\n\nDML:\n\n1) correlated IN subquery:\n\"Find all students who's grade in 'TFC' class is higher than their average\ngrade.\"\nselect a.sid, a.name from students a where a.sid IN ( select i.sid from\nenrolled i where i.grade > a.avgrade AND i.cid = 'TFC');\n\nQUERY PLAN:\n\n Seq Scan on students a  (cost=0.00..5763804.50 rows=5000 width=33)\n   Filter: (subplan)\n   SubPlan\n     ->  Seq Scan on enrolled i  (cost=0.00..1144.00 rows=3473 width=9)\n           Filter: ((grade > $0) AND (cid = 'TFC'::bpchar))\n\n2) the same query after commenting out the above code region in\nconvert_IN_to_join:\n\nQUERY PLAN:\n\n Hash Join  (cost=1050.24..1518.21 rows=693 width=33)\n   Hash Cond: (\"outer\".sid = \"inner\".sid)\n   Join Filter: (\"inner\".grade > \"outer\".avgrade)\n   ->  Seq Scan on students a  (cost=0.00..367.00 rows=10000 width=41)\n   ->  Hash  (cost=1045.04..1045.04 rows=2078 width=13)\n         ->  HashAggregate  (cost=1045.04..1045.04 rows=2078 width=13)\n               ->  Seq Scan on enrolled i  (cost=0.00..1019.00 rows=10417\nwidth=13)\n                     Filter: (cid = 'TFC'::bpchar)\n\n\n3) Clearly, it is possible to extract the IN subquery from query 1 since the\nouter attribute a.sid matches, at most once, with the inner tuple i.sid.\nAlthough s.sid is not a primary key by itself, together with \"i.cid =\n'TFC'\" conjunct, it forms a unique tuple. 
", "msg_date": "Wed, 14 Dec 2005 13:27:21 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Convert IN sublink to join" }, { "msg_contents": "<[email protected]> writes:\n> /*\n> * The sub-select must not refer to any Vars of the parent query.\n> * (Vars of higher levels should be okay, though.)\n> */\n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL; \n\n> By commenting this code region I was able to optimize several correlated\n> subqueries.\n\nIt's only pure luck that your test case still produces the right answer.\nThe IN code depends on the assumption that the sub-SELECT is independent\nof the outer query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Dec 2005 10:26:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert IN sublink to join " } ]
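Tom's reply above is the key point: the IN-to-join conversion is only valid when the sub-select is independent of the outer query, so simply removing that check can silently return wrong answers for other correlated queries. A minimal sketch of a manual rewrite that keeps the IN semantics without touching the planner, assuming the students/enrolled schema from the first message, is an explicit join with a DISTINCT over the outer key:

select distinct a.sid, a.name
from students a
join enrolled i on i.sid = a.sid     -- correlate explicitly instead of via IN
where i.cid = 'TFC'
  and i.grade > a.avgrade;

The DISTINCT is what preserves the at-most-one-output-row-per-student behaviour of IN when a student could match several enrolled rows; in this particular schema the (sid, cid) primary key already guarantees a single match for cid = 'TFC', which is why the plain join in query 3 returned the same tuples.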
[ { "msg_contents": "Hi,\nI have a java.util.List of values (10000) which I wanted to use for a query in the where clause of a simple select statement. Iterating over the list and using a prepared statement is quite slow. Is there a more efficient way to execute such a query?\nThanks for any help.\n \nJohannes\n \n.....\n \nList ids = new ArrayList(); \n\n.... List is filled with 10000 values ...\n\nList uuids = new ArrayList();\n \nPreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\");\n \nfor (Iterator iter = ids.iterator(); iter.hasNext();) {\n String id = (String) iter.next();\n pstat.setString(1, id);\n ResultSet rs = pstat.executeQuery();\n if (rs.next()) {\n uuids.add(rs.getString(1));\n }\n rs.close();\n }\n...\n \n", "msg_date": "Wed, 14 Dec 2005 22:28:24 +0100", "msg_from": "Bühler, Johannes <[email protected]>", "msg_from_op": true, "msg_subject": "efficient query with jdbc" } ]
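A sketch of one way to cut the 10000 round trips down to a handful, shown here only at the SQL level (the table and column names are taken from the snippet above; the literal ids are placeholders): instead of one prepared-statement execution per id, send a whole batch in a single statement and read all matching UUIDs back at once.

-- one statement per batch of ids rather than one statement per id
SELECT keywords_id, uuid
FROM mdm.keywords_info
WHERE keywords_id IN ('id1', 'id2', 'id3');    -- list built from the java.util.List, in batches

-- or, equivalently, with an array so the statement text stays constant
SELECT keywords_id, uuid
FROM mdm.keywords_info
WHERE keywords_id = ANY (ARRAY['id1', 'id2', 'id3']);

Selecting keywords_id alongside uuid lets the caller match each result back to its input value; how the list is bound from JDBC (string building, batching, or an array parameter) is left open here.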
[ { "msg_contents": "Hi,\n\nwe have a VIEW that is a UNION of 12 SELECTs, and every\nmember of the UNION has a constant field to be able to distinguish\nbetween them.\n\nAn often performed operation on this VIEW is to search for only one record\nthat can be found by the value of the constant field and the serial of\na table in one of the UNION members.\n\nUnfortunately, the search operation works in such a way that these two fields\nare concatenated and the result is searched for.\n\nHere is a shorter example, so it becomes obvious what I tried to describe:\n\ncreate view v1 (code,num) as\nselect 'AAA',id from table1\nunion\nselect 'BBB',id from table2;\n\nThe query is:\n\nselect * from v1 where code||num = 'AAA2005000001';\n\nMy problem is that this is slow, even after creating expression indexes\non the tables for e.g. ('AAA'||id).\n\nIf I optimize the UNION manually, it becomes:\n\nselect * from table1 where 'AAA'||id = 'AAA2005000001'\nunion\nselect * from table2 where 'BBB'||id = 'AAA2005000001';\n\nand because of the expression indexes it's fast.\n\nIs there a GEQO setting that makes the above optimization\non VIEWs automatic?\n\nFrom the VIEW definition, the database already knows the connection\nbetween the fields of the view and the fields of the table(s)\nso the above optimization could be performed automatically.\n\nBest regards,\nZoltán Böszörményi\n\n", "msg_date": "Wed, 14 Dec 2005 22:30:29 +0100", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Auto-tuning a VIEW?" }, { "msg_contents": "\n> create view v1 (code,num) as\n> select 'AAA',id from table1\n> union\n> select 'BBB',id from table2;\n\n\tAs your rows are, by definition, distinct between each subquery, you \nshould use UNION ALL instead of UNION to save postgres the trouble of \nhunting non-existing duplicates.
This will save you a few \nsorts.\n\n> select * from v1 where code||num = 'AAA2005000001';\n\n\tWhy don't you use code='AAA' and num='2005000001' ?\n", "msg_date": "Wed, 14 Dec 2005 22:38:49 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-tuning a VIEW?" }, { "msg_contents": "> Thanks, now the SELECT from the huge VIEW runs under one third of the \n> original runtime.\n\n\tNice.\n\n>>> select * from v1 where code||num = 'AAA2005000001';\n\n\tI do not know if it is at all possible, but maybe you could use a rule \nto, on select to your view, do instead a select on the two separate \ncolumns used in the key, with a bit of massaging on the values using \nsubstring()...\n", "msg_date": "Wed, 14 Dec 2005 23:18:40 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-tuning a VIEW?" }, { "msg_contents": "PFC wrote:\n\n>\n>> create view v1 (code,num) as\n>> select 'AAA',id from table1\n>> union\n>> select 'BBB',id from table2;\n>\n>\n> As your rows are, by definition, distinct between each subquery, \n> you should use UNION ALL instead of UNION to save postgres the \n> trouble of hunting non-existing duplicates. This will save you a few \n> sorts.\n\n\nThanks, now the SELECT from the huge VIEW runs under one third of the \noriginal runtime.\n\n>> select * from v1 where code||num = 'AAA2005000001';\n>\n>\n> Why don't you use code='AAA' and num='2005000001' ?\n\n\nThat's the point, the software environment we use cannot use it.\nThe whole system is built on PowerBuilder 8.0.x, using PFC.\nThe communication between the sheet and the response forms\nallows only one key field, and changing the foundation is risky.\nOne particular application that uses the before mentioned VIEW with\nthe huge UNION also cannot work around the problem, that's why I asked it.\n\nThe system is using Informix 9.21 and it's dog slow. I worked with\nPostgreSQL earlier, and my tests show that PostgreSQL 8.x is\nat least 5 times faster on normal queries than this other DBMS.\nSo I am trying to port the database contents to PostgreSQL first\nand test some often used processing, to see whether it's feasible to \nswitch later.\nInterestingly, the example query I provided runs about two times faster\nin Informix than in PostgreSQL. I experimented a little and found what I \ndescribed.\n\nBest regards,\nZoltán Böszörményi\n\n", "msg_date": "Wed, 14 Dec 2005 23:39:27 +0100", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Auto-tuning a VIEW?" } ]
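A sketch of the substring() idea PFC mentions, assuming (as in the example) that the code part is always the first three characters, that the rest of the key is the numeric id, and that id is an integer column: splitting the single search value into its two components lets each UNION ALL branch compare its constant code and its plain id column directly, so an ordinary index on id can be used instead of the expression indexes on 'AAA'||id.

select *
from v1
where code = substring('AAA2005000001' from 1 for 3)     -- 'AAA'
  and num  = substring('AAA2005000001' from 4)::int;     -- 2005000001

Whether this split can be hidden behind a rule or has to be done by the caller depends on where the concatenated key can be taken apart, which is exactly the constraint the PowerBuilder layer imposes here.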
[ { "msg_contents": "I'll just start by warning that I'm new-ish to postgresql.\n\nI'm running 8.1 installed from source on a Debian Sarge server. I have a \nsimple query that I believe I've placed the indexes correctly for, and I \nstill end up with a seq scan. It makes sense, kinda, but it should be able \nto use the index to gather the right values. I do have a production set of \ndata inserted into the tables, so this is running realistically:\n\ndli=# explain analyze SELECT ordered_products.product_id\ndli-# FROM to_ship, ordered_products\ndli-# WHERE to_ship.ordered_product_id = ordered_products.id AND\ndli-# ordered_products.paid = TRUE AND\ndli-# ordered_products.suspended_sub = FALSE;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=5126.19..31528.40 rows=20591 width=8) (actual \ntime=6517.438..25123.115 rows=14367 loops=1)\n Hash Cond: (\"outer\".ordered_product_id = \"inner\".id)\n -> Seq Scan on to_ship (cost=0.00..11529.12 rows=611612 width=8) (actual \ntime=393.206..15711.715 rows=611612 loops=1)\n -> Hash (cost=4954.79..4954.79 rows=21759 width=16) (actual \ntime=6076.153..6076.153 rows=18042 loops=1)\n -> Index Scan using paid_index on ordered_products \n(cost=0.00..4954.79 rows=21759 width=16) (actual time=136.472..5966.275 \nrows=18042 loops=1)\n Index Cond: (paid = true)\n Filter: (paid AND (NOT suspended_sub))\n Total runtime: 25136.190 ms\n(8 rows)\n\nThis is running on just about the world's slowest server (with a laptop hard \ndrive to boot), but how can I avoid the seq scan, or in general speed up this \nquery?\n\nto_ship will have far less tuples than ordered_products, but it's still not \nsmall, as you can see.\n", "msg_date": "Wed, 14 Dec 2005 16:03:52 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Simple Join" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> I'm running 8.1 installed from source on a Debian Sarge server. I have a \n> simple query that I believe I've placed the indexes correctly for, and I \n> still end up with a seq scan. It makes sense, kinda, but it should be able \n> to use the index to gather the right values.\n\nI continue to marvel at how many people think that if it's not using an\nindex it must ipso facto be a bad plan ...\n\nThat plan looks perfectly fine to me. You could try forcing some other\nchoices by fooling with the planner enable switches (eg set\nenable_seqscan = off) but I doubt you'll find much improvement. There\nare too many rows being pulled from ordered_products to make an index\nnestloop a good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Dec 2005 17:47:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join " }, { "msg_contents": "On Wednesday 14 December 2005 16:47, you wrote:\n> Kevin Brown <[email protected]> writes:\n> > I'm running 8.1 installed from source on a Debian Sarge server. I have a\n> > simple query that I believe I've placed the indexes correctly for, and I\n> > still end up with a seq scan. It makes sense, kinda, but it should be\n> > able to use the index to gather the right values.\n>\n> I continue to marvel at how many people think that if it's not using an\n> index it must ipso facto be a bad plan ...\n>\n> That plan looks perfectly fine to me. 
You could try forcing some other\n> choices by fooling with the planner enable switches (eg set\n> enable_seqscan = off) but I doubt you'll find much improvement. There\n> are too many rows being pulled from ordered_products to make an index\n> nestloop a good idea.\n\nThat's fine, so being a postgres novice, as I stated in my original post, what \nwould be the best way to improve performance? Redundant column that's \nupdated via a trigger? I'm asking this list because I'd like to do it right, \nas opposed to get it done.\n\n> regards, tom lane\n", "msg_date": "Wed, 14 Dec 2005 17:12:56 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On 12/14/05, Kevin Brown <[email protected]> wrote:\n> I'll just start by warning that I'm new-ish to postgresql.\n>\n> I'm running 8.1 installed from source on a Debian Sarge server. I have a\n> simple query that I believe I've placed the indexes correctly for, and I\n> still end up with a seq scan. It makes sense, kinda, but it should be able\n> to use the index to gather the right values. I do have a production set of\n> data inserted into the tables, so this is running realistically:\n>\n\nwhat hardware?\n\n> dli=# explain analyze SELECT ordered_products.product_id\n> dli-# FROM to_ship, ordered_products\n> dli-# WHERE to_ship.ordered_product_id = ordered_products.id AND\n> dli-# ordered_products.paid = TRUE AND\n> dli-# ordered_products.suspended_sub = FALSE;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=5126.19..31528.40 rows=20591 width=8) (actual\n> time=6517.438..25123.115 rows=14367 loops=1)\n> Hash Cond: (\"outer\".ordered_product_id = \"inner\".id)\n> -> Seq Scan on to_ship (cost=0.00..11529.12 rows=611612 width=8) (actual\n> time=393.206..15711.715 rows=611612 loops=1)\n> -> Hash (cost=4954.79..4954.79 rows=21759 width=16) (actual\n> time=6076.153..6076.153 rows=18042 loops=1)\n> -> Index Scan using paid_index on ordered_products\n> (cost=0.00..4954.79 rows=21759 width=16) (actual time=136.472..5966.275\n> rows=18042 loops=1)\n> Index Cond: (paid = true)\n> Filter: (paid AND (NOT suspended_sub))\n> Total runtime: 25136.190 ms\n> (8 rows)\n>\n\nshow the tables and the indexes for those tables\n\n> This is running on just about the world's slowest server (with a laptop hard\n> drive to boot), but how can I avoid the seq scan, or in general speed up this\n> query?\n>\n> to_ship will have far less tuples than ordered_products, but it's still not\n> small, as you can see.\n>\n\n\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Wed, 14 Dec 2005 18:23:20 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "Kevin Brown wrote:\n> I'll just start by warning that I'm new-ish to postgresql.\n> \n> I'm running 8.1 installed from source on a Debian Sarge server. I have a \n> simple query that I believe I've placed the indexes correctly for, and I \n> still end up with a seq scan. It makes sense, kinda, but it should be able \n> to use the index to gather the right values. 
I do have a production set of \n> data inserted into the tables, so this is running realistically:\n> \n> dli=# explain analyze SELECT ordered_products.product_id\n> dli-# FROM to_ship, ordered_products\n> dli-# WHERE to_ship.ordered_product_id = ordered_products.id AND\n> dli-# ordered_products.paid = TRUE AND\n> dli-# ordered_products.suspended_sub = FALSE;\n\nYou scan 600000 rows from to_ship to get about 25000 - so some way to \ncut this down would help.\n\nTry out an explicit INNER JOIN which includes the filter info for paid \nand suspended_sub in the join condition (you may need indexes on each of \nid, paid and suspended_sub, so that the 8.1 optimizer can use a bitmap \nscan):\n\n\nSELECT ordered_products.product_id\nFROM to_ship INNER JOIN ordered_products\nON (to_ship.ordered_product_id = ordered_products.id\n AND ordered_products.paid = TRUE AND \nordered_products.suspended_sub = FALSE); \n\n\n", "msg_date": "Thu, 15 Dec 2005 12:30:18 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Wednesday 14 December 2005 17:23, you wrote:\n> what hardware?\n\nVia 800 mhz (about equiv to a 300 mhz pentium 2)\n128 mb of slow ram\n4200 rpm ide hard drive.\n\nTold you it was slow. :-)\n\nThis is not the production system. I don't expect this to be \"fast\" but \neverything else happens in under 2 seconds, so I know I could do this faster. \nEspecially becaue the information I'm looking for probably just needs some \ndenormalization, or other such trick to pop right out. I'm using this system \nso I can locate my performance bottlenecks easier, and actually, it's plenty \nfast enough except for this one single query. I don't necessarily want to \noptimize the query, more than just get the info faster, so that's why I'm \nposting here.\n\n> show the tables and the indexes for those tables\n\nNo prob:\n\nCREATE TABLE to_ship\n(\n id int8 NOT NULL DEFAULT nextval(('to_ship_seq'::text)::regclass),\n ordered_product_id int8 NOT NULL,\n bounced int4 NOT NULL DEFAULT 0,\n operator_id varchar(20) NOT NULL,\n \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6) with \ntime zone,\n CONSTRAINT to_ship_pkey PRIMARY KEY (id),\n CONSTRAINT to_ship_ordered_product_id_fkey FOREIGN KEY (ordered_product_id) \nREFERENCES ordered_products (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \nWITHOUT OIDS;\n\nCREATE TABLE ordered_products\n(\n id int8 NOT NULL DEFAULT nextval(('ordered_products_seq'::text)::regclass),\n order_id int8 NOT NULL,\n product_id int8 NOT NULL,\n recipient_address_id int8 NOT NULL,\n hide bool NOT NULL DEFAULT false,\n renewal bool NOT NULL DEFAULT false,\n \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6) with \ntime zone,\n operator_id varchar(20) NOT NULL,\n suspended_sub bool NOT NULL DEFAULT false,\n quantity int4 NOT NULL DEFAULT 1,\n price_paid numeric NOT NULL,\n tax_paid numeric NOT NULL DEFAULT 0,\n shipping_paid numeric NOT NULL DEFAULT 0,\n remaining_issue_obligation int4 NOT NULL DEFAULT 0,\n parent_product_id int8,\n delivery_method_id int8 NOT NULL,\n paid bool NOT NULL DEFAULT false,\n CONSTRAINT ordered_products_pkey PRIMARY KEY (id),\n CONSTRAINT ordered_products_order_id_fkey FOREIGN KEY (order_id) REFERENCES \norders (id) ON UPDATE RESTRICT ON DELETE RESTRICT,\n CONSTRAINT ordered_products_parent_product_id_fkey FOREIGN KEY \n(parent_product_id) REFERENCES ordered_products (id) ON UPDATE RESTRICT ON \nDELETE RESTRICT,\n CONSTRAINT 
ordered_products_recipient_address_id_fkey FOREIGN KEY \n(recipient_address_id) REFERENCES addresses (id) ON UPDATE RESTRICT ON DELETE \nRESTRICT\n) \nWITHOUT OIDS;\n\n=== The two indexes that should matter ===\nCREATE INDEX ordered_product_id_index\n ON to_ship\n USING btree\n (ordered_product_id);\n\nCREATE INDEX paid_index\n ON ordered_products\n USING btree\n (paid);\n\nordered_products.id is a primary key, so it should have an implicit index.\n", "msg_date": "Wed, 14 Dec 2005 17:44:10 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Wed, Dec 14, 2005 at 04:03:52PM -0600, Kevin Brown wrote:\n> -> Index Scan using paid_index on ordered_products \n> (cost=0.00..4954.79 rows=21759 width=16) (actual time=136.472..5966.275 \n> rows=18042 loops=1)\n> Index Cond: (paid = true)\n> Filter: (paid AND (NOT suspended_sub))\n> Total runtime: 25136.190 ms\n\nYou might want to consider an index on (paid,suspended_sub), not just (paid);\nit's probably not going to give you any dramatic improvements, but it could\nhelp a bit.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 15 Dec 2005 00:47:36 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Wednesday 14 December 2005 17:30, Mark Kirkwood wrote:\n> You scan 600000 rows from to_ship to get about 25000 - so some way to\n> cut this down would help.\n\nYup. I'm open to anything too, as this is the only real part of the system \nthat cares. So either maintaining a denormalized copy column, or whatever \nwould be fine. We're doing far more reads than writes.\n\n> Try out an explicit INNER JOIN which includes the filter info for paid\n> and suspended_sub in the join condition (you may need indexes on each of\n> id, paid and suspended_sub, so that the 8.1 optimizer can use a bitmap\n> scan):\n\nI only had two explicit indexes. One was on to_ship.ordered_product_id and \nthe other was on ordered_products.paid. ordered_products.id is a primary \nkey. This is on your query with an index added on suspended_sub:\n\ndli=# explain analyze SELECT ordered_products.product_id\ndli-# FROM to_ship INNER JOIN ordered_products\ndli-# ON (to_ship.ordered_product_id = ordered_products.id\ndli(# AND ordered_products.paid = TRUE AND \ndli(# ordered_products.suspended_sub = FALSE);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=5126.19..31528.40 rows=20591 width=8) (actual \ntime=4554.190..23519.618 rows=14367 loops=1)\n Hash Cond: (\"outer\".ordered_product_id = \"inner\".id)\n -> Seq Scan on to_ship (cost=0.00..11529.12 rows=611612 width=8) (actual \ntime=11.254..15192.042 rows=611612 loops=1)\n -> Hash (cost=4954.79..4954.79 rows=21759 width=16) (actual \ntime=4494.900..4494.900 rows=18042 loops=1)\n -> Index Scan using paid_index on ordered_products \n(cost=0.00..4954.79 rows=21759 width=16) (actual time=72.431..4414.697 \nrows=18042 loops=1)\n Index Cond: (paid = true)\n Filter: (paid AND (NOT suspended_sub))\n Total runtime: 23532.785 ms\n(8 rows)\n\nSo what's the best way to performance wiggle this info out of the db? The \nlist of values is only about 30 tuples long out of this query, so I was \nfiguring I could trigger on insert to to_ship to place the value into another \ntable if it didn't already exist. 
I'd rather the writing be slow than the \nreading.\n", "msg_date": "Wed, 14 Dec 2005 17:52:45 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple Join" }, { "msg_contents": "Kevin Brown wrote:\n\n> \n> \n> I only had two explicit indexes. One was on to_ship.ordered_product_id and \n> the other was on ordered_products.paid. ordered_products.id is a primary \n> key. This is on your query with an index added on suspended_sub:\n> \n> dli=# explain analyze SELECT ordered_products.product_id\n> dli-# FROM to_ship INNER JOIN ordered_products\n> dli-# ON (to_ship.ordered_product_id = ordered_products.id\n> dli(# AND ordered_products.paid = TRUE AND \n> dli(# ordered_products.suspended_sub = FALSE);\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=5126.19..31528.40 rows=20591 width=8) (actual \n> time=4554.190..23519.618 rows=14367 loops=1)\n> Hash Cond: (\"outer\".ordered_product_id = \"inner\".id)\n> -> Seq Scan on to_ship (cost=0.00..11529.12 rows=611612 width=8) (actual \n> time=11.254..15192.042 rows=611612 loops=1)\n> -> Hash (cost=4954.79..4954.79 rows=21759 width=16) (actual \n> time=4494.900..4494.900 rows=18042 loops=1)\n> -> Index Scan using paid_index on ordered_products \n> (cost=0.00..4954.79 rows=21759 width=16) (actual time=72.431..4414.697 \n> rows=18042 loops=1)\n> Index Cond: (paid = true)\n> Filter: (paid AND (NOT suspended_sub))\n> Total runtime: 23532.785 ms\n> (8 rows)\n> \n\nWell - that had no effect at all :-) You don't have and index on \nto_ship.ordered_product_id do you? - try adding one (ANALYZE again), and \nlet use know what happens (you may want to play with SET \nenable_seqscan=off as well).\n\nAnd also, if you are only ever interested in paid = true and \nsuspended_sub = false, then you can recreate these indexes as partials - \ne.g:\n\nCREATE INDEX paid_index ON ordered_products (paid) WHERE paid = true;\nCREATE INDEX suspended_sub_index ON ordered_products (suspended_sub) \nWHERE suspended_sub = false;\n\n> So what's the best way to performance wiggle this info out of the db? The \n> list of values is only about 30 tuples long out of this query, so I was \n> figuring I could trigger on insert to to_ship to place the value into another \n> table if it didn't already exist. I'd rather the writing be slow than the \n> reading.\n\nYeah - all sort of horrible denormalizations are possible :-), hopefully \nwe can get the original query to work ok, and avoid the need to add code \nor triggers to you app.\n", "msg_date": "Thu, 15 Dec 2005 13:36:00 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Wed, 2005-12-14 at 17:47 -0500, Tom Lane wrote:\n> That plan looks perfectly fine to me. You could try forcing some other\n> choices by fooling with the planner enable switches (eg set\n> enable_seqscan = off) but I doubt you'll find much improvement. There\n> are too many rows being pulled from ordered_products to make an index\n> nestloop a good idea.\n\nWell, I'm no expert either, but if there was an index on\nordered_products (paid, suspended_sub, id) it should be mergejoinable\nwith the index on to_ship.ordered_product_id, right? 
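A concrete form of the index Mitch is describing, using his column order (the index name is made up), plus a re-ANALYZE so the planner has fresh statistics to judge it with:

CREATE INDEX ordered_products_paid_susp_id_idx
    ON ordered_products (paid, suspended_sub, id);
ANALYZE ordered_products;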
Given the\nconditions on paid and suspended_sub.\n\nIf you (Kevin) try adding such an index, ideally it would get used given\nthat you're only pulling out a small fraction of the rows in to_ship.\nIf it doesn't get used, then I had a similar issue with 8.0.3 where an\nindex that was mergejoinable (only because of the restrictions in the\nwhere clause) wasn't getting picked up.\n\nMitch\n\nKevin Brown wrote:\n> CREATE TABLE to_ship\n> (\n> id int8 NOT NULL DEFAULT nextval(('to_ship_seq'::text)::regclass),\n> ordered_product_id int8 NOT NULL,\n> bounced int4 NOT NULL DEFAULT 0,\n> operator_id varchar(20) NOT NULL,\n> \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6)\n> with \n> time zone,\n> CONSTRAINT to_ship_pkey PRIMARY KEY (id),\n> CONSTRAINT to_ship_ordered_product_id_fkey FOREIGN KEY\n> (ordered_product_id) \n> REFERENCES ordered_products (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> ) \n> WITHOUT OIDS;\n> \n> CREATE TABLE ordered_products\n> (\n> id int8 NOT NULL DEFAULT\n> nextval(('ordered_products_seq'::text)::regclass),\n> order_id int8 NOT NULL,\n> product_id int8 NOT NULL,\n> recipient_address_id int8 NOT NULL,\n> hide bool NOT NULL DEFAULT false,\n> renewal bool NOT NULL DEFAULT false,\n> \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6)\n> with \n> time zone,\n> operator_id varchar(20) NOT NULL,\n> suspended_sub bool NOT NULL DEFAULT false,\n> quantity int4 NOT NULL DEFAULT 1,\n> price_paid numeric NOT NULL,\n> tax_paid numeric NOT NULL DEFAULT 0,\n> shipping_paid numeric NOT NULL DEFAULT 0,\n> remaining_issue_obligation int4 NOT NULL DEFAULT 0,\n> parent_product_id int8,\n> delivery_method_id int8 NOT NULL,\n> paid bool NOT NULL DEFAULT false,\n> CONSTRAINT ordered_products_pkey PRIMARY KEY (id),\n> CONSTRAINT ordered_products_order_id_fkey FOREIGN KEY (order_id)\n> REFERENCES \n> orders (id) ON UPDATE RESTRICT ON DELETE RESTRICT,\n> CONSTRAINT ordered_products_parent_product_id_fkey FOREIGN KEY \n> (parent_product_id) REFERENCES ordered_products (id) ON UPDATE\n> RESTRICT ON \n> DELETE RESTRICT,\n> CONSTRAINT ordered_products_recipient_address_id_fkey FOREIGN KEY \n> (recipient_address_id) REFERENCES addresses (id) ON UPDATE RESTRICT ON\n> DELETE \n> RESTRICT\n> ) \n> WITHOUT OIDS;\n> \n> === The two indexes that should matter ===\n> CREATE INDEX ordered_product_id_index\n> ON to_ship\n> USING btree\n> (ordered_product_id);\n> \n> CREATE INDEX paid_index\n> ON ordered_products\n> USING btree\n> (paid);\n> \n> ordered_products.id is a primary key, so it should have an implicit\n> index.\n\n\n", "msg_date": "Wed, 14 Dec 2005 22:52:47 -0800", "msg_from": "Mitchell Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Wednesday 14 December 2005 18:36, you wrote:\n> Well - that had no effect at all :-) You don't have and index on\n> to_ship.ordered_product_id do you? - try adding one (ANALYZE again), and\n> let use know what happens (you may want to play with SET\n> enable_seqscan=off as well).\n\nI _DO_ have an index on to_ship.ordered_product_id. 
It's a btree.\n\n> And also, if you are only ever interested in paid = true and\n> suspended_sub = false, then you can recreate these indexes as partials -\n> e.g:\n>\n> CREATE INDEX paid_index ON ordered_products (paid) WHERE paid = true;\n> CREATE INDEX suspended_sub_index ON ordered_products (suspended_sub)\n> WHERE suspended_sub = false;\n\nThey're currently defined as individuals and I'm depending on the bitmap \nindexing.\n\n> > So what's the best way to performance wiggle this info out of the db? \n> > The list of values is only about 30 tuples long out of this query, so I\n> > was figuring I could trigger on insert to to_ship to place the value into\n> > another table if it didn't already exist. I'd rather the writing be slow\n> > than the reading.\n>\n> Yeah - all sort of horrible denormalizations are possible :-), hopefully\n> we can get the original query to work ok, and avoid the need to add code\n> or triggers to you app.\n\nThat'd be great.\n", "msg_date": "Thu, 15 Dec 2005 01:46:06 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Thursday 15 December 2005 00:52, you wrote:\n> On Wed, 2005-12-14 at 17:47 -0500, Tom Lane wrote:\n> > That plan looks perfectly fine to me. You could try forcing some other\n> > choices by fooling with the planner enable switches (eg set\n> > enable_seqscan = off) but I doubt you'll find much improvement. There\n> > are too many rows being pulled from ordered_products to make an index\n> > nestloop a good idea.\n>\n> Well, I'm no expert either, but if there was an index on\n> ordered_products (paid, suspended_sub, id) it should be mergejoinable\n> with the index on to_ship.ordered_product_id, right? Given the\n> conditions on paid and suspended_sub.\n>\n> If you (Kevin) try adding such an index, ideally it would get used given\n> that you're only pulling out a small fraction of the rows in to_ship.\n> If it doesn't get used, then I had a similar issue with 8.0.3 where an\n> index that was mergejoinable (only because of the restrictions in the\n> where clause) wasn't getting picked up.\n\nThe following is already there:\n\nCREATE INDEX ordered_product_id_index\n ON to_ship\n USING btree\n (ordered_product_id);\n\nThat's why I emailed this list.\n\n> Mitch\n>\n> Kevin Brown wrote:\n> > CREATE TABLE to_ship\n> > (\n> > id int8 NOT NULL DEFAULT nextval(('to_ship_seq'::text)::regclass),\n> > ordered_product_id int8 NOT NULL,\n> > bounced int4 NOT NULL DEFAULT 0,\n> > operator_id varchar(20) NOT NULL,\n> > \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6)\n> > with\n> > time zone,\n> > CONSTRAINT to_ship_pkey PRIMARY KEY (id),\n> > CONSTRAINT to_ship_ordered_product_id_fkey FOREIGN KEY\n> > (ordered_product_id)\n> > REFERENCES ordered_products (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> > )\n> > WITHOUT OIDS;\n> >\n> > CREATE TABLE ordered_products\n> > (\n> > id int8 NOT NULL DEFAULT\n> > nextval(('ordered_products_seq'::text)::regclass),\n> > order_id int8 NOT NULL,\n> > product_id int8 NOT NULL,\n> > recipient_address_id int8 NOT NULL,\n> > hide bool NOT NULL DEFAULT false,\n> > renewal bool NOT NULL DEFAULT false,\n> > \"timestamp\" timestamptz NOT NULL DEFAULT ('now'::text)::timestamp(6)\n> > with\n> > time zone,\n> > operator_id varchar(20) NOT NULL,\n> > suspended_sub bool NOT NULL DEFAULT false,\n> > quantity int4 NOT NULL DEFAULT 1,\n> > price_paid numeric NOT NULL,\n> > tax_paid numeric NOT NULL DEFAULT 0,\n> > shipping_paid numeric NOT 
NULL DEFAULT 0,\n> > remaining_issue_obligation int4 NOT NULL DEFAULT 0,\n> > parent_product_id int8,\n> > delivery_method_id int8 NOT NULL,\n> > paid bool NOT NULL DEFAULT false,\n> > CONSTRAINT ordered_products_pkey PRIMARY KEY (id),\n> > CONSTRAINT ordered_products_order_id_fkey FOREIGN KEY (order_id)\n> > REFERENCES\n> > orders (id) ON UPDATE RESTRICT ON DELETE RESTRICT,\n> > CONSTRAINT ordered_products_parent_product_id_fkey FOREIGN KEY\n> > (parent_product_id) REFERENCES ordered_products (id) ON UPDATE\n> > RESTRICT ON\n> > DELETE RESTRICT,\n> > CONSTRAINT ordered_products_recipient_address_id_fkey FOREIGN KEY\n> > (recipient_address_id) REFERENCES addresses (id) ON UPDATE RESTRICT ON\n> > DELETE\n> > RESTRICT\n> > )\n> > WITHOUT OIDS;\n> >\n> > === The two indexes that should matter ===\n> > CREATE INDEX ordered_product_id_index\n> > ON to_ship\n> > USING btree\n> > (ordered_product_id);\n> >\n> > CREATE INDEX paid_index\n> > ON ordered_products\n> > USING btree\n> > (paid);\n> >\n> > ordered_products.id is a primary key, so it should have an implicit\n> > index.\n", "msg_date": "Thu, 15 Dec 2005 01:48:15 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple Join" }, { "msg_contents": "Kevin Brown wrote:\n> On Wednesday 14 December 2005 18:36, you wrote:\n> \n>>Well - that had no effect at all :-) You don't have and index on\n>>to_ship.ordered_product_id do you? - try adding one (ANALYZE again), and\n>>let use know what happens (you may want to play with SET\n>>enable_seqscan=off as well).\n> \n> \n> I _DO_ have an index on to_ship.ordered_product_id. It's a btree.\n>\n\nSorry - read right past it!\n\nDid you try out enable_seqscan=off? I'm interested to see if we can get \n8.1 bitmap anding the three possibly useful columns together on \nordered_products and *then* doing the join to to_ship.\n\nCheers\n\nMark\n", "msg_date": "Thu, 15 Dec 2005 21:15:05 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Thu, 2005-12-15 at 01:48 -0600, Kevin Brown wrote:\n> > Well, I'm no expert either, but if there was an index on\n> > ordered_products (paid, suspended_sub, id) it should be mergejoinable\n> > with the index on to_ship.ordered_product_id, right? Given the\n> > conditions on paid and suspended_sub.\n> >\n> The following is already there:\n> \n> CREATE INDEX ordered_product_id_index\n> ON to_ship\n> USING btree\n> (ordered_product_id);\n> \n> That's why I emailed this list.\n\nI saw that; what I'm suggesting is that that you try creating a 3-column\nindex on ordered_products using the paid, suspended_sub, and id columns.\nIn that order, I think, although you could also try the reverse. It may\nor may not help, but it's worth a shot--the fact that all of those\ncolumns are used together in the query suggests that you might do better\nwith a three-column index on those. \n\nWith all three columns indexed individually, you're apparently not\ngetting the bitmap plan that Mark is hoping for. 
I imagine this has to\ndo with the lack of multi-column statistics in postgres, though you\ncould also try raising the statistics target on the columns of interest.\n\nSetting enable_seqscan to off, as others have suggested, is also a\nworthwhile experiment, just to see what you get.\n\nMitch\n\n", "msg_date": "Thu, 15 Dec 2005 03:02:06 -0800", "msg_from": "Mitch Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "Mitch Skinner wrote:\n> I saw that; what I'm suggesting is that that you try creating a 3-column\n> index on ordered_products using the paid, suspended_sub, and id columns.\n> In that order, I think, although you could also try the reverse. It may\n> or may not help, but it's worth a shot--the fact that all of those\n> columns are used together in the query suggests that you might do better\n> with a three-column index on those. \n> \n> With all three columns indexed individually, you're apparently not\n> getting the bitmap plan that Mark is hoping for. I imagine this has to\n> do with the lack of multi-column statistics in postgres, though you\n> could also try raising the statistics target on the columns of interest.\n> \n> Setting enable_seqscan to off, as others have suggested, is also a\n> worthwhile experiment, just to see what you get.\n> \n>\n\nRight on. Some of these \"coerced\" plans may perform much better. If so, \nwe can look at tweaking your runtime config: e.g.\n\neffective_cache_size\nrandom_page_cost\ndefault_statistics_target\n\nto see if said plans can be chosen \"naturally\".\n\ncheers\n\nMark\n", "msg_date": "Fri, 16 Dec 2005 08:29:17 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "I asked a while back if there were any plans to allow developers to override the optimizer's plan and force certain plans, and received a fairly resounding \"No\". The general feeling I get is that a lot of work has gone into the optimizer, and by God we're going to use it!\n\nI think this is just wrong, and I'm curious whether I'm alone in this opinion.\n\nOver and over, I see questions posted to this mailing list about execution plans that don't work out well. Many times there are good answers - add an index, refactor the design, etc. - that yield good results. But, all too often the answer comes down to something like this recent one:\n\n > Right on. Some of these \"coerced\" plans may perform \n > much better. If so, we can look at tweaking your runtime\n > config: e.g.\n >\n > effective_cache_size\n > random_page_cost\n > default_statistics_target\n >\n > to see if said plans can be chosen \"naturally\".\n\nI see this over and over. Tweak the parameters to \"force\" a certain plan, because there's no formal way for a developer to say, \"I know the best plan.\"\n\nThere isn't a database in the world that is as smart as a developer, or that can have insight into things that only a developer can possibly know. Here's a real-life example that caused me major headaches. It's a trivial query, but Postgres totally blows it:\n\n select * from my_table \n where row_num >= 50000 and row_num < 100000\n and myfunc(foo, bar);\n\nHow can Postgres possibly know what \"myfunc()\" does? In this example, my_table is about 10 million rows and row_num is indexed. When the row_num range is less than about 30,000, Postgres (correctly) uses an row_num index scan, then filters by myfunc(). But beyond that, it chooses a sequential scan, filtering by myfunc(). 
This is just wrong. Postgres can't possibly know that myfunc() is VERY expensive. The correct plan would be to switch from index to filtering on row_num. Even if 99% of the database is selected by row_num, it should STILL at least filter by row_num first, and only filter by myfunc() as the very last step.\n\nHow can a database with no ability to override a plan possibly cope with this?\n\nWithout the explicit ability to override the plan Postgres generates, these problems dominate our development efforts. Postgres does an excellent job optimizing on 90% of the SQL we write, but the last 10% is nearly impossible to get right. We spend huge amounts of time on trial-and-error queries, second guessing Postgress, creating unnecessary temporary tables, sticking in the occasional OFFSET in a subquery to prevent merging layers, and so forth.\n\nThis same application also runs on Oracle, and although I've cursed Oracle's stupid planner many times, at least I can force it to do it right if I need to.\n\nThe danger of forced plans is that inexperienced developers tend to abuse them. So it goes -- the documentation should be clear that forced plans are always a last resort. \n\nBut there's no getting around the fact that Postgres needs a way for a developer to specify the execution plan.\n\nCraig\n\n", "msg_date": "Thu, 15 Dec 2005 15:06:03 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Overriding the optimizer" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I see this over and over. Tweak the parameters to \"force\" a certain\n> plan, because there's no formal way for a developer to say, \"I know\n> the best plan.\"\n\nI think you've misunderstood those conversations entirely. The point\nis not to force the planner into a certain plan, it is to explore what's\ngoing on with a view to understanding why the planner isn't making a\ngood choice, and thence hopefully improve the planner in future. (Now,\nthat's not necessarily what the user with an immediate problem is\nthinking, but that's definitely what the developers are thinking.)\n\n> There isn't a database in the world that is as smart as a developer,\n\nPeople who are convinced they are smarter than the machine are often\nwrong ;-). If we did put in the nontrivial amount of work needed to\nhave such a facility, it would probably get abused more often than it\nwas used correctly. I'd rather spend the work on making the planner\nbetter.\n\nThis discussion has been had before (many times) ... see the -hackers\narchives for detailed arguments. 
The one that carries the most weight\nin my mind is that planner hints embedded in applications will not adapt\nto changing circumstances --- the plan that was best when you designed\nthe code might not be best today.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Dec 2005 19:23:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer " }, { "msg_contents": "> select * from my_table where row_num >= 50000 and row_num < 100000\n> and myfunc(foo, bar);\n\nYou just create an index on myfunc(foo, bar)\n\nChris\n\n", "msg_date": "Fri, 16 Dec 2005 09:50:44 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On 12/15/05, Christopher Kings-Lynne <[email protected]> wrote:\n> > select * from my_table where row_num >= 50000 and row_num < 100000\n> > and myfunc(foo, bar);\n>\n> You just create an index on myfunc(foo, bar)\n>\n> Chris\n>\n\nonly if myfunc(foo, bar) is immutable...\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Thu, 15 Dec 2005 21:00:44 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": ">>> select * from my_table where row_num >= 50000 and row_num < 100000\n>>> and myfunc(foo, bar);\n>>\n>>You just create an index on myfunc(foo, bar)\n> \n> only if myfunc(foo, bar) is immutable...\n\nAnd if it's not then the best any database can do is to index scan \nrow_num - so still you have no problem.\n\nChris\n\n", "msg_date": "Fri, 16 Dec 2005 10:02:35 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Tom,\n\n>>I see this over and over. Tweak the parameters to \"force\" a certain\n>>plan, because there's no formal way for a developer to say, \"I know\n>>the best plan.\"\n> \n> I think you've misunderstood those conversations entirely. The point\n> is not to force the planner into a certain plan, it is to explore what's\n> going on with a view to understanding why the planner isn't making a\n> good choice, and thence hopefully improve the planner in future.\n\nNo, I understood the conversations very clearly. But no matter how clever the optimizer, it simply can't compete with a developer who has knowledge that Postgres *can't* have. The example of a user-written function is obvious.\n\n>>There isn't a database in the world that is as smart as a developer,\n> \n> People who are convinced they are smarter than the machine are often\n> wrong ;-). \n\nOften, but not always -- as I noted in my original posting. And when the developer is smarter than Postgres, and Postgres makes the wrong choice, what is the developer supposed to do? This isn't academic -- the wrong plans Postgres makes can be *catastrophic*, e.g. turning a 3-second query into a three-hour query.\n\nHow about this: Instead of arguing in the abstract, tell me in concrete terms how you would address the very specific example I gave, where myfunc() is a user-written function. To make it a little more challenging, try this: myfunc() can behave very differently depending on the parameters, and sometimes (but not always), the application knows how it will behave and could suggest a good execution plan.\n\n(And before anyone suggests that I rewrite myfunc(), I should explain that it's in the class of NP-complete problems. 
The function is inherently hard and can't be made faster or more predictable.)\n\nThe example I raised in a previous thread, of irregular usage, is the same: I have a particular query that I *always* want to be fast even if it's only used rarely, but the system swaps its tables out of the file-system cache, based on \"low usage\", even though the \"high usage\" queries are low priority. How can Postgres know such things when there's no way for me to tell it?\n\nThe answers from the Postgres community were essentially, \"Postgres is smarter than you, let it do its job.\" Unfortunately, this response completely ignores the reality: Postgres is NOT doing its job, and can't, because it doesn't have enough information.\n\nCraig\n\n", "msg_date": "Thu, 15 Dec 2005 18:16:08 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n>> select * from my_table where row_num >= 50000 and row_num < 100000\n>> and myfunc(foo, bar);\n> \n> \n> You just create an index on myfunc(foo, bar)\n\nThanks, but myfunc() takes parameters (shown here as \"foo, bar\"), one of which is not a column, it's external and changes with every query. A function index won't work.\n\nCraig\n", "msg_date": "Thu, 15 Dec 2005 18:18:09 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "\n\nChristopher Kings-Lynne wrote:\n>>>> select * from my_table where row_num >= 50000 and row_num < \n>>>> 100000\n>>>> and myfunc(foo, bar);\n>>>\n>>>\n>>> You just create an index on myfunc(foo, bar)\n>>\n>>\n>> only if myfunc(foo, bar) is immutable...\n> \n> \n> And if it's not then the best any database can do is to index scan \n> row_num - so still you have no problem.\n\nBoy, you picked a *really* bad example ;-)\n\nThe problem is that Postgres decided to filter on myfunc() *first*, and then filter on row_num, resulting in a query time that jumped from seconds to hours. And there's no way for me to tell Postgres not to do that!\n\nSo, \"you still have no problem\" is exactly wrong, because Postgres picked the wrong plan. Postgres decided that applying myfunc() to 10,000,000 rows was a better plan than an index scan of 50,000 row_nums. So I'm screwed.\n\nCraig\n", "msg_date": "Thu, 15 Dec 2005 18:23:22 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "> > Right on. Some of these \"coerced\" plans may perform > much better. \n> If so, we can look at tweaking your runtime\n> > config: e.g.\n> >\n> > effective_cache_size\n> > random_page_cost\n> > default_statistics_target\n> >\n> > to see if said plans can be chosen \"naturally\".\n> \n> I see this over and over. Tweak the parameters to \"force\" a certain \n> plan, because there's no formal way for a developer to say, \"I know the \n> best plan.\"\n\nNo, this is \"fixing your wrongn, inaccurate parameters so that \npostgresql can choose a better plan\".\n\nI don't necessarily disagree with your assertion that we need planner \nhints, but unless you or someone else is willing to submit a patch with \nthe feature it's unlikely to ever be implemented...\n\nChris\n\n", "msg_date": "Fri, 16 Dec 2005 10:25:22 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Craig A. 
James wrote:\n> I asked a while back if there were any plans to allow developers to \n> override the optimizer's plan and force certain plans, and received a \n> fairly resounding \"No\". The general feeling I get is that a lot of work \n> has gone into the optimizer, and by God we're going to use it!\n> \n> I think this is just wrong, and I'm curious whether I'm alone in this \n> opinion.\n> \n> Over and over, I see questions posted to this mailing list about \n> execution plans that don't work out well. Many times there are good \n> answers - add an index, refactor the design, etc. - that yield good \n> results. But, all too often the answer comes down to something like \n> this recent one:\n> \n> > Right on. Some of these \"coerced\" plans may perform > much better. \n> If so, we can look at tweaking your runtime\n> > config: e.g.\n> >\n> > effective_cache_size\n> > random_page_cost\n> > default_statistics_target\n> >\n> > to see if said plans can be chosen \"naturally\".\n> \n> I see this over and over. Tweak the parameters to \"force\" a certain \n> plan, because there's no formal way for a developer to say, \"I know the \n> best plan.\"\n> \n\nI hear what you are saying, but to use this fine example - I don't know \nwhat the best plan is - these experiments part of an investigation to \nfind *if* there is a better plan, and if so, why Postgres is not finding it.\n\n> There isn't a database in the world that is as smart as a developer, or \n> that can have insight into things that only a developer can possibly \n> know.\n\nThat is often true - but the aim is to get Postgres's optimizer closer \nto developer smartness.\n\nAfter years of using several other database products (some supporting \nhint type constructs and some not), I have come to believe that hinting \n(or similar) actually *hinders* the development of a great optimizer.\n\n\nBest wishes\n\nMark\n", "msg_date": "Fri, 16 Dec 2005 15:31:03 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "> Boy, you picked a *really* bad example ;-)\n> \n> The problem is that Postgres decided to filter on myfunc() *first*, and \n> then filter on row_num, resulting in a query time that jumped from \n> seconds to hours. And there's no way for me to tell Postgres not to do \n> that!\n\nCan you paste explain analyze and your effective_cache_size, etc. settings.\n\n> So, \"you still have no problem\" is exactly wrong, because Postgres \n> picked the wrong plan. Postgres decided that applying myfunc() to \n> 10,000,000 rows was a better plan than an index scan of 50,000 \n> row_nums. So I'm screwed.\n\nThis seems like a case where PostgreSQL's current optimiser should \neasily know what to do if your config settings are correct and you've \nbeen running ANALYZE, so I'd like to see your settings and the explain \nanalyze plan...\n\nChris\n\n", "msg_date": "Fri, 16 Dec 2005 10:34:38 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> I don't necessarily disagree with your assertion that we need planner \n> hints, but unless you or someone else is willing to submit a patch with \n> the feature it's unlikely to ever be implemented...\n\nNow that's an answer I understand and appreciate. 
Open-source development relies on many volunteers, and I've benefitted from it since the early 1980's when emacs and Common Lisp first came to my attention. I've even written a widely-circulated article about open-source development, which some of you may have read:\n\n http://www.moonviewscientific.com/essays/software_lifecycle.htm\n\nI hope nobody here thinks I'm critical of all the hard work that's been put into Postgres. My hope is to raise the awareness of this issue in the hope that it's at least put on \"the list\" for serious consideration.\n\nCraig\n\n", "msg_date": "Thu, 15 Dec 2005 18:44:03 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Mark Kirkwood wrote:\n> I hear what you are saying, but to use this fine example - I don't know \n> what the best plan is - these experiments part of an investigation to \n> find *if* there is a better plan, and if so, why Postgres is not finding \n> it.\n> \n>> There isn't a database in the world that is as smart as a developer, \n>> or that can have insight into things that only a developer can \n>> possibly know.\n> \n> That is often true - but the aim is to get Postgres's optimizer closer \n> to developer smartness.\n\nWhat would be cool would be some way the developer could alter the plan, but they way of doing so would strongly encourage the developer to send the information to this mailing list. Postgres would essentially say, \"Ok, you can do that, but we want to know why!\"\n\n> After years of using several other database products (some supporting \n> hint type constructs and some not), I have come to believe that hinting \n> (or similar) actually *hinders* the development of a great optimizer.\n\nI agree. It takes the pressure off the optimizer gurus. If the users can just work around every problem, then the optimizer can suck and the system is still usable.\n\nLest anyone think I'm an all-out advocate of overriding the optimizer, I know from first-hand experience what a catastrophe it can be. An Oracle hint I used worked fine on my test schema, but the customer's \"table\" turned out to be a view, and Oracle's optimizer worked well on the view whereas my hint was horrible. Unfortunately, without the hint, Oracle sucked when working on an ordinary table. Hints are dangerous, and I consider them a last resort.\n\nCraig\n", "msg_date": "Thu, 15 Dec 2005 18:52:19 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Tom Lane wrote:\n> This discussion has been had before (many times) ... see the -hackers\n> archives for detailed arguments. The one that carries the most weight\n> in my mind is that planner hints embedded in applications will not adapt\n> to changing circumstances --- the plan that was best when you designed\n> the code might not be best today.\n\nAbsolutely right. But what am I supposed to do *today* if the planner makes a mistake? Shut down my web site?\n\nRopes are useful, but you can hang yourself with them. Knives are useful, but you can cut yourself with them. Should we ban useful tools because they cause harm to the careless?\n\nCraig\n", "msg_date": "Thu, 15 Dec 2005 19:04:03 -0800", "msg_from": "\"Craig A. 
James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Can you paste explain analyze and your effective_cache_size, etc. settings.\n> ... \n> This seems like a case where PostgreSQL's current optimiser should \n> easily know what to do if your config settings are correct and you've \n> been running ANALYZE, so I'd like to see your settings and the explain \n> analyze plan...\n\nI could, but it would divert us from the main topic of this discussion. It's not about that query, which was just an example. It's the larger issue.\n\nTom's earlier response tells the story better than I can:\n> This discussion has been had before (many times) ... see\n> the -hackers archives for detailed arguments. \n\nIf it's \"been had before (many times)\", and now I'm bringing it up again, then it's clearly an ongoing problem that hasn't been resolved.\n\nCraig\n", "msg_date": "Thu, 15 Dec 2005 19:11:53 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Craig A. James wrote:\n\n> \n> What would be cool would be some way the developer could alter the plan, \n> but they way of doing so would strongly encourage the developer to send \n> the information to this mailing list. Postgres would essentially say, \n> \"Ok, you can do that, but we want to know why!\"\n> \n\nYeah it would - an implementation I have seen that I like is where the \ndeveloper can supply the *entire* execution plan with a query. This is \ncomplex enough to make casual use unlikely :-), but provides the ability \nto try out other plans, and also fix that vital query that must run \ntoday.....\n\ncheers\n\nMark\n", "msg_date": "Fri, 16 Dec 2005 16:16:58 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": ">> ... This seems like a case where PostgreSQL's current optimiser should \n>> easily know what to do if your config settings are correct and you've \n>> been running ANALYZE, so I'd like to see your settings and the explain \n>> analyze plan...\n> \n> I could, but it would divert us from the main topic of this discussion. \n> It's not about that query, which was just an example. It's the larger \n> issue.\n\nSo your main example bad query is possibly just a case of lack of \nanalyze stats and wrong postgresql.conf config? And that's what causes \nyou to shut down your database? Don't you want your problem FIXED?\n\nBut like I said - no developer is interested in doing planner hints. \nPossibly you could get a company to sponsor it. Maybe what you want is \na statement of \"If someone submits a good, working, fully implemented \npatch that does planner hints, then we'll accept it.\"\n\nChris\n\n", "msg_date": "Fri, 16 Dec 2005 11:20:28 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> So your main example bad query is possibly just a case of lack of \n> analyze stats and wrong postgresql.conf config? And that's what causes \n> you to shut down your database? Don't you want your problem FIXED?\n\nI'm trying to help by raising a question that I think is important, and have an honest, perhaps vigorous, but respectful, discussion about it. I respect everyone's opinion, and I hope you respect mine. 
I've been in this business a long time, and I don't raise issues lightly.\n\nYes, I want my query fixed. And I may post it, in a thread with a new title. In fact, I posted a different query with essentially the same problem a while back and got nothing that helped:\n\n http://archives.postgresql.org/pgsql-performance/2005-11/msg00133.php\n\n(I can't help but point out that Tom's response was to suggest a way to fool the optimizer so as to prevent it from \"optimizing\" the query. In other words, he told me a trick that would force a particular plan on the optimizer. Which is exactly the point of this discussion.)\n\nThe point is that the particular query is not relevant -- it's the fact that this topic (according to Tom) has been and continues to be raised. This should tell us all something, that it's not going to go away, and that it's a real issue.\n\nRegards,\nCraig\n", "msg_date": "Thu, 15 Dec 2005 19:38:36 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "> Yeah it would - an implementation I have seen that I like is where the \n> developer can supply the *entire* execution plan with a query. This is \n> complex enough to make casual use unlikely :-), but provides the ability \n> to try out other plans, and also fix that vital query that must run \n> today.....\n\nSo, to move on to the concrete...\n\nI'm not familiar with the innards of Postgres except in a theoretical way. Maybe this is a totally naive or dumb question, but I have to ask: How hard would it be to essentially turn off the optimizer?\n\n1. Evaluate WHERE clauses left-to-right.\n\nselect ... from FOO where A and B and C;\n\nThis would just apply the criteria left-to-right, first A, then B, then C. If an index was available it would use it, but only in left-to-right order, i.e. if A had no index but B did, then too bad, you should have written \"B and A and C\".\n\n\n2. Evaluate joins left-to-right.\n\nselect ... from FOO join BAR on (...) join BAZ on (...) where ...\n\nThis would join FOO to BAR, then join the result to BAZ. The only optimization would be to apply relevant \"where\" conditions to each join before processing the next join.\n\n\n3. Don't flatten sub-selects\n\nselect ... from (select ... from FOO where ...) as X where ...;\n\nThis would do the inner select then use the result in the outer select, and wouldn't attempt to flatten the query.\n\nThanks,\nCraig\n", "msg_date": "Thu, 15 Dec 2005 19:57:50 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Thu, 15 Dec 2005, Craig A. James wrote:\n\n> Mark Kirkwood wrote:\n>> I hear what you are saying, but to use this fine example - I don't know \n>> what the best plan is - these experiments part of an investigation to find \n>> *if* there is a better plan, and if so, why Postgres is not finding it.\n>> \n>>> There isn't a database in the world that is as smart as a developer, or \n>>> that can have insight into things that only a developer can possibly know.\n>> \n>> That is often true - but the aim is to get Postgres's optimizer closer to \n>> developer smartness.\n>\n> What would be cool would be some way the developer could alter the plan, but \n> they way of doing so would strongly encourage the developer to send the \n> information to this mailing list. 
Postgres would essentially say, \"Ok, you \n> can do that, but we want to know why!\"\n\nat the risk of sounding flippent (which is NOT what I intend) I will point \nout that with the source you can change the optimizer any way you need to \n:-)\n\nthat being said, in your example the issue is the cost of the user created \nfunction and the fact that postgres doesn't know it's cost.\n\nwould a resonable answer be to give postgres a way to learn how expensive \nthe call is?\n\na couple ways I could see to do this.\n\n1. store some stats automagicly when the function is called and update the \noptimization plan when you do an ANALYSE\n\n2. provide a way for a user to explicitly set a cost factor for a function \n(with a default value that's sane for fairly trivial functions so that it \nwould only have to be set for unuseually expensive functions)\n\nnow, neither of these will work all the time if a given function is \nsometimes cheap and sometimes expensive (depending on it's parameters), \nbut in that case I would say that if the application knows that a function \nwill be unusueally expensive under some conditions (and knows what those \nconditions will be) it may be a reasonable answer to duplicate the \nfunction, one copy that it uses most of the time, and a second copy that \nit uses when it expects it to be expensive. at this point the cost of the \nfunction can be set via either of the methods listed above)\n\n>> After years of using several other database products (some supporting hint \n>> type constructs and some not), I have come to believe that hinting (or \n>> similar) actually *hinders* the development of a great optimizer.\n>\n> I agree. It takes the pressure off the optimizer gurus. If the users can \n> just work around every problem, then the optimizer can suck and the system is \n> still usable.\n>\n> Lest anyone think I'm an all-out advocate of overriding the optimizer, I know \n> from first-hand experience what a catastrophe it can be. An Oracle hint I \n> used worked fine on my test schema, but the customer's \"table\" turned out to \n> be a view, and Oracle's optimizer worked well on the view whereas my hint was \n> horrible. Unfortunately, without the hint, Oracle sucked when working on an \n> ordinary table. Hints are dangerous, and I consider them a last resort.\n\nI've been on the linux-kernel mailing list for the last 9 years, and have \nseen a similar debate rage during that entire time about kernel memory \nmanagement. 
overall both of these tend to be conflicts between short-term \nand long-term benifits.\n\nin the short-term the application user wants to be able to override the \nsystem to get the best performance _now_\n\nin the long run the system designers don't trust the application \nprogrammers to get the hints right and want to figure out the right \noptimizer plan, even if it takes a lot longer to do so.\n\nthe key to this balance seems to be to work towards as few controls as \npossible, becouse the user will get them wrong far more frequently then \nthey get them right, but when you hit a point where there's absolutly no \nway for the system to figure things out (and it's a drastic difference) \nprovide the application with a way to hint to the system that things are \nunusueal, but always keep looking for patterns that will let the system \ndetect the need itself\n\neven the existing defaults are wrong as frequently as they are right (they \nwere set when hardware was very different then it is today) so some way to \ngather real-world stats and set the system defaults based on actual \nhardware performance is really the right way to go (even for things like \nsequential scan speed that are set in the config file today)\n\nDavid Lang\n", "msg_date": "Thu, 15 Dec 2005 20:02:58 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Fri, 16 Dec 2005, Mark Kirkwood wrote:\n\n>\n> Right on. Some of these \"coerced\" plans may perform much better. If so, we \n> can look at tweaking your runtime config: e.g.\n>\n> effective_cache_size\n> random_page_cost\n> default_statistics_target\n>\n> to see if said plans can be chosen \"naturally\".\n\nMark, I've seen these config options listed as tweaking targets fairly \nfrequently, has anyone put any thought or effort into creating a test \nprogram that could analyse the actual system and set the defaults based on \nthe measured performance?\n\nDavid Lang\n", "msg_date": "Thu, 15 Dec 2005 20:05:29 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Thu, 15 Dec 2005, Craig A. James wrote:\n\n> The example I raised in a previous thread, of irregular usage, is the same: I \n> have a particular query that I *always* want to be fast even if it's only \n> used rarely, but the system swaps its tables out of the file-system cache, \n> based on \"low usage\", even though the \"high usage\" queries are low priority. 
\n> How can Postgres know such things when there's no way for me to tell it?\n\nactually, postgres doesn't manage the file-system cache, it deliberatly \nleaves that up to the OS it is running on to do that job.\n\none (extremely ugly) method that you could use would be to have a program \nthat looks up what files are used to store your high priority tables and \nthen write a trivial program to keep those files in memory (it may be as \nsimple as mmaping the files and then going to sleep, or you may have to \nread various points through the file to keep them current in the cache, it \nWILL vary depending on your OS and filesystem in use)\n\noracle goes to extremes with this sort of control, I'm actually mildly \nsurprised that they still run on a host OS and haven't completely taken \nover the machine (I guess they don't want to have to write device drivers, \nthat's about the only OS code they really want to use, they do their own \nmemory management, filesystem, and user systems), by avoiding areas like \nthis postgres sacrafices a bit of performance, but gains a much broader \nset of platforms (hardware and OS) that it can run on. and this by itself \ncan result in significant wins (does oracle support Opteron CPU's in 64 \nbit mode yet? as of this summer it just wasn't an option)\n\nDavid Lang\n", "msg_date": "Thu, 15 Dec 2005 20:22:58 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On 12/15/05, Craig A. James <[email protected]> wrote:\n> > Yeah it would - an implementation I have seen that I like is where the\n> > developer can supply the *entire* execution plan with a query. This is\n> > complex enough to make casual use unlikely :-), but provides the ability\n> > to try out other plans, and also fix that vital query that must run\n> > today.....\n>\n> So, to move on to the concrete...\n>\n> I'm not familiar with the innards of Postgres except in a theoretical way.\n> Maybe this is a totally naive or dumb question, but I have to ask: How\n> hard would it be to essentially turn off the optimizer?\n>\n> 1. Evaluate WHERE clauses left-to-right.\n>\n> select ... from FOO where A and B and C;\n>\n> This would just apply the criteria left-to-right, first A, then B, then C.\n> If an index was available it would use it, but only in left-to-right order,\n> i.e. if A had no index but B did, then too bad, you should have written \"B\n> and A and C\".\n>\n\npg < 8.1 when you use multi-column indexes do exactly this... but i\ndon't know why everyone wants this...\n\n>\n> 2. Evaluate joins left-to-right.\n>\n> select ... from FOO join BAR on (...) join BAZ on (...) where ...\n>\n> This would join FOO to BAR, then join the result to BAZ. The only\n> optimization would be to apply relevant \"where\" conditions to each join\n> before processing the next join.\n>\n\nusing explicit INNER JOIN syntax and parenthesis\n\n>\n> 3. Don't flatten sub-selects\n>\n> select ... from (select ... from FOO where ...) as X where ...;\n>\n\nselect ... from (select ... from FOO where ... 
offset 0) as X where ...;\n\n> This would do the inner select then use the result in the outer select, and\n> wouldn't attempt to flatten the query.\n>\n> Thanks,\n> Craig\n>\n\nwhat else?\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Thu, 15 Dec 2005 23:23:27 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On 12/15/05, David Lang <[email protected]> wrote:\n> On Thu, 15 Dec 2005, Craig A. James wrote:\n>\n> > Mark Kirkwood wrote:\n> >> I hear what you are saying, but to use this fine example - I don't know\n> >> what the best plan is - these experiments part of an investigation to\n> find\n> >> *if* there is a better plan, and if so, why Postgres is not finding it.\n> >>\n> >>> There isn't a database in the world that is as smart as a developer, or\n> >>> that can have insight into things that only a developer can possibly\n> know.\n> >>\n> >> That is often true - but the aim is to get Postgres's optimizer closer to\n> >> developer smartness.\n> >\n> > What would be cool would be some way the developer could alter the plan,\n> but\n> > they way of doing so would strongly encourage the developer to send the\n> > information to this mailing list. Postgres would essentially say, \"Ok,\n> you\n> > can do that, but we want to know why!\"\n>\n> at the risk of sounding flippent (which is NOT what I intend) I will point\n> out that with the source you can change the optimizer any way you need to\n> :-)\n>\n> that being said, in your example the issue is the cost of the user created\n> function and the fact that postgres doesn't know it's cost.\n>\n> would a resonable answer be to give postgres a way to learn how expensive\n> the call is?\n>\n> a couple ways I could see to do this.\n>\n> 1. store some stats automagicly when the function is called and update the\n> optimization plan when you do an ANALYSE\n>\n> 2. provide a way for a user to explicitly set a cost factor for a function\n> (with a default value that's sane for fairly trivial functions so that it\n> would only have to be set for unuseually expensive functions)\n>\n> now, neither of these will work all the time if a given function is\n> sometimes cheap and sometimes expensive (depending on it's parameters),\n> but in that case I would say that if the application knows that a function\n> will be unusueally expensive under some conditions (and knows what those\n> conditions will be) it may be a reasonable answer to duplicate the\n> function, one copy that it uses most of the time, and a second copy that\n> it uses when it expects it to be expensive. at this point the cost of the\n> function can be set via either of the methods listed above)\n>\n> >> After years of using several other database products (some supporting\n> hint\n> >> type constructs and some not), I have come to believe that hinting (or\n> >> similar) actually *hinders* the development of a great optimizer.\n> >\n> > I agree. It takes the pressure off the optimizer gurus. If the users can\n> > just work around every problem, then the optimizer can suck and the system\n> is\n> > still usable.\n> >\n> > Lest anyone think I'm an all-out advocate of overriding the optimizer, I\n> know\n> > from first-hand experience what a catastrophe it can be. 
An Oracle hint I\n> > used worked fine on my test schema, but the customer's \"table\" turned out\n> to\n> > be a view, and Oracle's optimizer worked well on the view whereas my hint\n> was\n> > horrible. Unfortunately, without the hint, Oracle sucked when working on\n> an\n> > ordinary table. Hints are dangerous, and I consider them a last resort.\n>\n> I've been on the linux-kernel mailing list for the last 9 years, and have\n> seen a similar debate rage during that entire time about kernel memory\n> management. overall both of these tend to be conflicts between short-term\n> and long-term benifits.\n>\n> in the short-term the application user wants to be able to override the\n> system to get the best performance _now_\n>\n> in the long run the system designers don't trust the application\n> programmers to get the hints right and want to figure out the right\n> optimizer plan, even if it takes a lot longer to do so.\n>\n> the key to this balance seems to be to work towards as few controls as\n> possible, becouse the user will get them wrong far more frequently then\n> they get them right, but when you hit a point where there's absolutly no\n> way for the system to figure things out (and it's a drastic difference)\n> provide the application with a way to hint to the system that things are\n> unusueal, but always keep looking for patterns that will let the system\n> detect the need itself\n>\n> even the existing defaults are wrong as frequently as they are right (they\n> were set when hardware was very different then it is today) so some way to\n> gather real-world stats and set the system defaults based on actual\n> hardware performance is really the right way to go (even for things like\n> sequential scan speed that are set in the config file today)\n>\n> David Lang\n>\n\nthere was discussion on this and IIRC the consensus was that could be\nuseful tu give some statistics to user defined functions... i don't if\nsomeone is working on this or even if it is doable...\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Thu, 15 Dec 2005 23:25:24 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Craig A. James wrote:\n> \n> \n> Christopher Kings-Lynne wrote:\n> >>>> select * from my_table where row_num >= 50000 and row_num < \n> >>>>100000\n> >>>> and myfunc(foo, bar);\n> >>>\n> >>>\n> >>>You just create an index on myfunc(foo, bar)\n> >>\n> >>\n> >>only if myfunc(foo, bar) is immutable...\n> >\n> >\n> >And if it's not then the best any database can do is to index scan \n> >row_num - so still you have no problem.\n> \n> Boy, you picked a *really* bad example ;-)\n> \n> The problem is that Postgres decided to filter on myfunc() *first*, and \n> then filter on row_num, resulting in a query time that jumped from seconds \n> to hours. And there's no way for me to tell Postgres not to do that!\n\nApologies in advance if all of this has been said, or if any of it is\nwrong.\n\n\nWhat kind of plan do you get if you eliminate the myfunc(foo, bar)\nfrom the query entirely? An index scan or a full table scan? 
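(In other words -- reusing the table and column names from the quoted example, so this is only a sketch of the query I mean -- what does\n\nEXPLAIN ANALYZE SELECT * FROM my_table\n    WHERE row_num >= 50000 AND row_num < 100000;\n\nreport when the myfunc() call is simply left out?) 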
If the\nlatter then (assuming that the statistics are accurate) the reason you\nwant inclusion of myfunc() to change the plan must be the expense of\nthe function, not the expense of the scan (index versus sequential).\nWhile the expense of the function isn't, as far as I know, known or\nused by the planner, that obviously needn't be the case.\n\nOn the other hand, if the inclusion of the function call changes the\nplan that is selected from an index scan to a sequential scan, then\nthat, I think, is clearly a bug, since even a zero-cost function\ncannot make the sequential scan more efficient than an index scan\nwhich is already more efficient than the base sequential scan.\n\n\n> So, \"you still have no problem\" is exactly wrong, because Postgres picked \n> the wrong plan. Postgres decided that applying myfunc() to 10,000,000 \n> rows was a better plan than an index scan of 50,000 row_nums. So I'm \n> screwed.\n\nIf PostgreSQL is indeed applying myfunc() to 10,000,000 rows, then\nthat is a bug if the function is declared VOLATILE (which is the\ndefault if no volatility is specified), because it implies that it's\napplying the function to rows that don't match the selection\ncondition. From your prior description, it sounds like your function\nis declared STABLE.\n\n\nFor your specific situation, my opinion is that the proper\nmodification to PostgreSQL would be to give it (if it isn't already\nthere) the ability to include the cost of functions in the plan. The\ncost needn't be something that it automatically measures -- it could\nbe specified at function creation time.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 15 Dec 2005 21:08:33 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Craig A. James wrote:\n> Hints are dangerous, and I consider them a last resort.\n\nIf you consider them a last resort, then why do you consider them to\nbe a better alternative than a workaround such as turning off\nenable_seqscan, when all the other tradeoffs are considered?\n\nIf your argument is that planner hints would give you finer grained\ncontrol, then the question is whether you'd rather the developers\nspend their time implementing planner hints or improving the planner.\nI'd rather they did the latter, as long as workarounds are available\nwhen needed. A workaround will probably give the user greater\nincentive to report the problem than use of planner hints.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 15 Dec 2005 21:20:25 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Kevin Brown wrote:\n>>Hints are dangerous, and I consider them a last resort.\n> \n> If you consider them a last resort, then why do you consider them to\n> be a better alternative than a workaround such as turning off\n> enable_seqscan, when all the other tradeoffs are considered?\n\nIf I understand enable_seqscan, it's an all-or-nothing affair. Turning it off turns it off for the whole database, right? The same is true of all of the planner-tuning parameters in the postgres conf file. Since the optimizer does a good job most of the time, I'd hate to change a global setting like this -- what else would be affected? 
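(Concretely, I assume the global form of this workaround is nothing more than a line like\n\nenable_seqscan = off\n\nin postgresql.conf, which would then apply to every query from every connection,\nnot just the one that is misbehaving.) 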
I could try this, but it would make me nervous to alter the whole system to fix one particular query.\n\n> If your argument is that planner hints would give you finer grained\n> control, then the question is whether you'd rather the developers\n> spend their time implementing planner hints or improving the planner.\n\nI agree 100% -- I'd much prefer a better planner. But when it comes down to a do-or-die situation, you need a hack, some sort of workaround, to get you working *today*.\n\nRegards,\nCraig\n", "msg_date": "Thu, 15 Dec 2005 21:41:06 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Craig A. James wrote:\n> Kevin Brown wrote:\n> >>Hints are dangerous, and I consider them a last resort.\n> >\n> >If you consider them a last resort, then why do you consider them to\n> >be a better alternative than a workaround such as turning off\n> >enable_seqscan, when all the other tradeoffs are considered?\n> \n> If I understand enable_seqscan, it's an all-or-nothing affair. Turning it \n> off turns it off for the whole database, right? The same is true of all \n> of the planner-tuning parameters in the postgres conf file.\n\nNope. What's in the conf file are the defaults. You can change them\non a per-connection basis, via the SET command. Thus, before doing\nyour problematic query:\n\nSET enable_seqscan = off;\n\nand then, after your query is done, \n\nSET enable_seqscan = on;\n\n> >If your argument is that planner hints would give you finer grained\n> >control, then the question is whether you'd rather the developers\n> >spend their time implementing planner hints or improving the planner.\n> \n> I agree 100% -- I'd much prefer a better planner. But when it comes down \n> to a do-or-die situation, you need a hack, some sort of workaround, to get \n> you working *today*.\n\nAnd that's why I was asking about workarounds versus planner hints. I\nexpect that the situations in which the planner gets things wrong\n*and* where there's no workaround are very rare indeed.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 15 Dec 2005 21:48:55 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Thu, Dec 15, 2005 at 21:41:06 -0800,\n \"Craig A. James\" <[email protected]> wrote:\n> \n> If I understand enable_seqscan, it's an all-or-nothing affair. Turning it \n> off turns it off for the whole database, right? The same is true of all of \n\nYou can turn it off just for specific queries. However, it will apply to\nall joins within a query.\n", "msg_date": "Thu, 15 Dec 2005 23:55:11 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Kevin Brown wrote:\n\n>Craig A. James wrote:\n> \n>\n>>Hints are dangerous, and I consider them a last resort.\n>> \n>>\n>\n>If you consider them a last resort, then why do you consider them to\n>be a better alternative than a workaround such as turning off\n>enable_seqscan, when all the other tradeoffs are considered?\n> \n>\n\nI would like a bit finer degree of control on this - I'd like to be able \nto tell PG that for my needs, it is never OK to scan an entire table of \nmore than N rows. I'd typically set N to 1,000,000 or so. 
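(The closest approximation I know of today is a per-transaction timeout, something \nalong the lines of\n\nBEGIN;\nSET LOCAL statement_timeout = 5000;  -- milliseconds; whatever limit fits the query\nSELECT ... ;                         -- the possibly-expensive statement\nCOMMIT;\n\nbut that only cancels the query after it has already been chewing on the table \nfor a while.) 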
What I would \nreally like is for my DBMS to give me a little more pushback - I'd like \nto ask it to run a query, and have it either find a \"good\" way to run \nthe query, or politely refuse to run it at all.\n\nYes, I know that is an unusual request :-)\n\nThe context is this - in a busy OLTP system, sometimes a query comes \nthrough that, for whatever reason (foolishness on my part as a \ndeveloper, unexpected use by a user, imperfection of the optimizer, \netc.), takes a really long time to run, usually because it table-scans \none or more large tables. If several of these happen at once, it can \ngrind an important production system effectively to a halt. I'd like to \nhave a few users/operations get a \"sorry, I couldn't find a good way to \ndo that\" message, rather than all the users find that their system has \neffectively stopped working.\n\nKyle Cordes\nwww.kylecordes.com\n\n\n", "msg_date": "Fri, 16 Dec 2005 08:19:27 -0600", "msg_from": "Kyle Cordes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On 12/16/05, Kyle Cordes <[email protected]> wrote:\n> Kevin Brown wrote:\n>\n> >Craig A. James wrote:\n> >\n> >\n> >>Hints are dangerous, and I consider them a last resort.\n> >>\n> >>\n> >\n> >If you consider them a last resort, then why do you consider them to\n> >be a better alternative than a workaround such as turning off\n> >enable_seqscan, when all the other tradeoffs are considered?\n> >\n> >\n>\n> I would like a bit finer degree of control on this - I'd like to be able\n> to tell PG that for my needs, it is never OK to scan an entire table of\n> more than N rows. I'd typically set N to 1,000,000 or so. What I would\n> really like is for my DBMS to give me a little more pushback - I'd like\n> to ask it to run a query, and have it either find a \"good\" way to run\n> the query, or politely refuse to run it at all.\n>\n> Yes, I know that is an unusual request :-)\n>\n> The context is this - in a busy OLTP system, sometimes a query comes\n> through that, for whatever reason (foolishness on my part as a\n> developer, unexpected use by a user, imperfection of the optimizer,\n> etc.), takes a really long time to run, usually because it table-scans\n> one or more large tables. If several of these happen at once, it can\n> grind an important production system effectively to a halt. I'd like to\n> have a few users/operations get a \"sorry, I couldn't find a good way to\n> do that\" message, rather than all the users find that their system has\n> effectively stopped working.\n>\n> Kyle Cordes\n> www.kylecordes.com\n>\n>\n\nset statement_timeout in postgresql.conf\n\n--\nAtentamente,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Fri, 16 Dec 2005 09:45:34 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Dnia 16-12-2005, piďż˝ o godzinie 16:16 +1300, Mark Kirkwood napisaďż˝(a):\n> Craig A. James wrote:\n> \n> > \n> > What would be cool would be some way the developer could alter the plan, \n> > but they way of doing so would strongly encourage the developer to send \n> > the information to this mailing list. Postgres would essentially say, \n> > \"Ok, you can do that, but we want to know why!\"\n> > \n> \n> Yeah it would - an implementation I have seen that I like is where the \n> developer can supply the *entire* execution plan with a query. 
This is \n> complex enough to make casual use unlikely :-), but provides the ability \n> to try out other plans, and also fix that vital query that must run \n> today.....\n\nI think you could use SPI for that.\nThere is function SPI_prepare, which prepares plan,\nand SPI_execute_plan, executing it.\nThese functions are defined in src/backend/executor/spi.c.\n\nI think (someone please correct me if I'm wrong) you could\nprepare plan yourself, instead of taking it from SPI_prepare,\nand give it to SPI_execute_plan.\n\nSPI_prepare calls _SPI_prepare_plan, which parses query and calls\npg_analyze_and_rewrite. In your version don't call this function,\nbut provide PostgreSQL with your own plan (not-optimised according to\nPostrgeSQL, but meeting your criteria).\n\n-- \nTomasz Rybak <[email protected]>\n\n", "msg_date": "Fri, 16 Dec 2005 15:47:19 +0100", "msg_from": "Tomasz Rybak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Jaime Casanova wrote:\n>>The context is this - in a busy OLTP system, sometimes a query comes\n>>through that, for whatever reason (foolishness on my part as a\n>>developer, unexpected use by a user, imperfection of the optimizer,\n>>etc.), takes a really long time to run, usually because it table-scans\n>>one or more large tables. If several of these happen at once, it can\n>>grind an important production system effectively to a halt. I'd like to\n>>have a few users/operations get a \"sorry, I couldn't find a good way to\n>>do that\" message, rather than all the users find that their system has\n>>effectively stopped working.\n> ... \n> set statement_timeout in postgresql.conf\n\nI found it's better to use \"set statement_timeout\" in the code, rather than setting it globally. Someone else pointed out to me that setting it in postgresql.conf makes it apply to ALL transactions, including VACUUM, ANALYZE and so forth. I put it in my code just around the queries that are \"user generated\" -- queries that are from users' input. I expect any SQL that I write to finish in a reasonable time ;-).\n\nCraig\n", "msg_date": "Fri, 16 Dec 2005 08:49:21 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Thu, 2005-12-15 at 18:23 -0800, Craig A. James wrote:\n> So, \"you still have no problem\" is exactly wrong, because Postgres picked the wrong plan. Postgres decided that applying myfunc() to 10,000,000 rows was a better plan than an index scan of 50,000 row_nums. So I'm screwed.\n\nFWIW,\nThe cost_functionscan procedure in costsize.c has the following comment:\n /*\n * For now, estimate function's cost at one operator eval per\nfunction\n * call. Someday we should revive the function cost estimate\ncolumns in * pg_proc...\n */\n\nI recognize that you're trying to talk about the issue in general rather\nthan about this particular example. However, the example does seem to\nme to be exactly the case where the effort might be better spent\nimproving the optimizer (reviving the function cost estimate columns),\nrather than implementing a general hinting facility. Which one is more\neffort? I don't really know for sure, but cost_functionscan does seem\npretty straightforward.\n\nWhat percentage of problems raised on this list can be fixed by setting\nconfiguration parameters, adding indexes, increasing statistics, or\nre-architecting a crazy schema? 
I've only been lurking for a few\nmonths, but it seems like a pretty large fraction. Of the remainder,\nwhat percentage represent actual useful feedback about what needs\nimprovement in the optimizer? A pretty large fraction, I think.\nIncluding your example.\n\nPersonally, I think whoever was arguing for selectivity hints in\n-hackers recently made a pretty good point, so I'm partly on your side.\nActually, function cost \"hints\" don't really seem that much different\nfrom selectivity hints, and both seem to me to be slicker solutions\n(closer to the right level of abstraction) than a general hint facility.\n\nMitch\n\n", "msg_date": "Fri, 16 Dec 2005 11:13:31 -0800", "msg_from": "Mitch Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "David Lang wrote:\n> On Fri, 16 Dec 2005, Mark Kirkwood wrote:\n> \n>>\n>> Right on. Some of these \"coerced\" plans may perform much better. If \n>> so, we can look at tweaking your runtime config: e.g.\n>>\n>> effective_cache_size\n>> random_page_cost\n>> default_statistics_target\n>>\n>> to see if said plans can be chosen \"naturally\".\n> \n> \n> Mark, I've seen these config options listed as tweaking targets fairly \n> frequently, has anyone put any thought or effort into creating a test \n> program that could analyse the actual system and set the defaults based \n> on the measured performance?\n> \n\nI am sure this has been discussed before, I found this thread -\n\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00189.php\n\nbut I seem to recall others (but offhand can't find any of them).\n\n\n\nI think that the real difficultly here is that the construction of the \ntest program is non trivial - for instance, the best test program for \ntuning *my* workload is my application with its collection of data, but \nit is probably not a good test program for *anyone else's* workload.\n\ncheers\n\nMark\n\n\n", "msg_date": "Sat, 17 Dec 2005 10:11:19 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Thu, Dec 15, 2005 at 09:48:55PM -0800, Kevin Brown wrote:\n> Craig A. James wrote:\n> > Kevin Brown wrote:\n> > >>Hints are dangerous, and I consider them a last resort.\n> > >\n> > >If you consider them a last resort, then why do you consider them to\n> > >be a better alternative than a workaround such as turning off\n> > >enable_seqscan, when all the other tradeoffs are considered?\n> > \n> > If I understand enable_seqscan, it's an all-or-nothing affair. Turning it \n> > off turns it off for the whole database, right? The same is true of all \n> > of the planner-tuning parameters in the postgres conf file.\n> \n> Nope. What's in the conf file are the defaults. You can change them\n> on a per-connection basis, via the SET command. Thus, before doing\n> your problematic query:\n> \n> SET enable_seqscan = off;\n> \n> and then, after your query is done, \n> \n> SET enable_seqscan = on;\n\nYou can also turn it off inside a transaction and have it only affect\nthat transaction so that you can't accidentally forget to turn it back\non (which could seriously hose things up if you're doing this in\nconjunction with a connection pool).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 16:38:46 -0600", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Fri, Dec 16, 2005 at 03:31:03PM +1300, Mark Kirkwood wrote:\n> After years of using several other database products (some supporting \n> hint type constructs and some not), I have come to believe that hinting \n> (or similar) actually *hinders* the development of a great optimizer.\n\nI don't think you can assume that would hold true for an open-source\ndatabase. Unlike a commercial database, it's trivially easy to notify\ndevelopers about a bad query plan. With a commercial database you'd have\nto open a support ticket and hope they actually use that info to improve\nthe planner. Here you need just send an email to this list and the\ndevelopers will at least see it, and will usually try and fix the issue.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 16:41:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "On Fri, Dec 16, 2005 at 04:16:58PM +1300, Mark Kirkwood wrote:\n> Craig A. James wrote:\n> \n> >\n> >What would be cool would be some way the developer could alter the plan, \n> >but they way of doing so would strongly encourage the developer to send \n> >the information to this mailing list. Postgres would essentially say, \n> >\"Ok, you can do that, but we want to know why!\"\n> >\n> \n> Yeah it would - an implementation I have seen that I like is where the \n> developer can supply the *entire* execution plan with a query. This is \n> complex enough to make casual use unlikely :-), but provides the ability \n> to try out other plans, and also fix that vital query that must run \n> today.....\n\nBeing able to specify an exact plan would also provide for query plan\nstability; something that is critically important in certain\napplications. If you have to meet a specific response time requirement\nfor a query, you can't afford to have the optimizer suddenly decide that\nsome other plan might be faster when in fact it's much slower.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 16 Dec 2005 16:45:12 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Jaime Casanova wrote:\n\n>>What I would\n>>really like is for my DBMS to give me a little more pushback - I'd like\n>>to ask it to run a query, and have it either find a \"good\" way to run\n>>the query, or politely refuse to run it at all.\n>> \n>>\n\n>set statement_timeout in postgresql.conf\n> \n>\n\nThat is what I am doing now, and it is much better than nothing.\n\nBut it's not really sufficient, in that it is still quite possible for \nusers repeatedly trying an operation that unexpectedly causes excessive \nDB usage, to load down the system to the point of failure. In other \nwords, I'd ideally like it to give up right away, not after N seconds of \ntable scanning my 100-million-row tables... 
and not with a timeout, but \nwith an explicit throwing up of its hands, exasperated, that it could \nnot come up with an efficient way to run my query.\n\nKyle Cordes\nwww.kylecordes.com\n\n\n", "msg_date": "Fri, 16 Dec 2005 19:42:30 -0600", "msg_from": "Kyle Cordes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> How about this: Instead of arguing in the abstract, tell me in\n> concrete terms how you would address the very specific example I gave,\n> where myfunc() is a user-written function. To make it a little more\n> challenging, try this: myfunc() can behave very differently depending\n> on the parameters, and sometimes (but not always), the application\n> knows how it will behave and could suggest a good execution plan.\n\nA word to the wise:\n\nregression=# explain select * from tenk1 where ten > 5 and ten < 9\nregression-# and myfunc(unique1,unique2);\n QUERY PLAN \n------------------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..533.00 rows=982 width=244)\n Filter: ((ten > 5) AND (ten < 9) AND myfunc(unique1, unique2))\n(2 rows)\n\nregression=# explain select * from tenk1 where myfunc(unique1,unique2)\nregression-# and ten > 5 and ten < 9;\n QUERY PLAN \n------------------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..533.00 rows=982 width=244)\n Filter: (myfunc(unique1, unique2) AND (ten > 5) AND (ten < 9))\n(2 rows)\n\nI might have taken your original complaint more seriously if it\nweren't so blatantly bogus. Your query as written absolutely\nwould not have evaluated myfunc() first, because there was no\nreason for the planner to reorder the WHERE list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Dec 2005 02:28:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer " }, { "msg_contents": "On Fri, 16 Dec 2005, Mark Kirkwood wrote:\n\n> Craig A. James wrote:\n>\n>> \n>> What would be cool would be some way the developer could alter the plan, \n>> but they way of doing so would strongly encourage the developer to send the \n>> information to this mailing list. Postgres would essentially say, \"Ok, you \n>> can do that, but we want to know why!\"\n>> \n>\n> Yeah it would - an implementation I have seen that I like is where the \n> developer can supply the *entire* execution plan with a query. This is \n> complex enough to make casual use unlikely :-), but provides the ability to \n> try out other plans, and also fix that vital query that must run today.....\n\nhmm, I wonder if this option would have uses beyond the production hacks \nthat are being discussed.\n\nspecificly developers working on the optimizer (or related things like \nclustered databases) could use the same hooks to develop and modify the \n'optimizer' externally to postgres (doing an explain would let them find \nthe costs that postgres thinks each option has, along with it's \nreccomendation, but the developer could try different execution plans \nwithout having to recompile postgres between runs. 
and for clustered \ndatabases where the data is split between machines this would be a hook \nthat the cluster engine could use to put it's own plan into place without \nhaving to modify and recompile)\n\nDavid Lang\n", "msg_date": "Sat, 17 Dec 2005 00:40:51 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "> > Yeah it would - an implementation I have seen that I like is where the\n> > developer can supply the *entire* execution plan with a query. This is\n> > complex enough to make casual use unlikely :-), but provides the ability\n> > to try out other plans, and also fix that vital query that must run\n> > today.....\n>\n> Being able to specify an exact plan would also provide for query plan\n> stability; something that is critically important in certain\n> applications. If you have to meet a specific response time requirement\n> for a query, you can't afford to have the optimizer suddenly decide that\n> some other plan might be faster when in fact it's much slower.\n\nPlan stability doesn't mean time response stability...\nThe plan that today is almost instantaneous tomorrow can take hours...\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Sat, 17 Dec 2005 07:31:40 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > I'm running 8.1 installed from source on a Debian Sarge server. I have a \n> > simple query that I believe I've placed the indexes correctly for, and I \n> > still end up with a seq scan. It makes sense, kinda, but it should be able \n> > to use the index to gather the right values.\n> \n> I continue to marvel at how many people think that if it's not using an\n> index it must ipso facto be a bad plan ...\n> \n> That plan looks perfectly fine to me. You could try forcing some other\n> choices by fooling with the planner enable switches (eg set\n> enable_seqscan = off) but I doubt you'll find much improvement. There\n> are too many rows being pulled from ordered_products to make an index\n> nestloop a good idea.\n\nWe do have an FAQ item:\n\n 4.6) Why are my queries slow? Why don't they use my indexes?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 17 Dec 2005 09:56:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" }, { "msg_contents": "On Sat, Dec 17, 2005 at 07:31:40AM -0500, Jaime Casanova wrote:\n> > > Yeah it would - an implementation I have seen that I like is where the\n> > > developer can supply the *entire* execution plan with a query. This is\n> > > complex enough to make casual use unlikely :-), but provides the ability\n> > > to try out other plans, and also fix that vital query that must run\n> > > today.....\n> >\n> > Being able to specify an exact plan would also provide for query plan\n> > stability; something that is critically important in certain\n> > applications. 
If you have to meet a specific response time requirement\n> > for a query, you can't afford to have the optimizer suddenly decide that\n> > some other plan might be faster when in fact it's much slower.\n> \n> Plan stability doesn't mean time response stability...\n> The plan that today is almost instantaneous tomorrow can take hours...\n\nSure, if your underlying data changes that much, but that's often not\ngoing to happen in production systems (especially OLTP where this is\nmost important).\n\nOf course if you have a proposal for ensuring that a query always\nfinishes in X amount of time, rather than always using the same plan,\nI'd love to hear it. ;)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 12:56:18 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overriding the optimizer" }, { "msg_contents": "Mark Kirkwood wrote:\n> Kevin Brown wrote:\n> \n>> I'll just start by warning that I'm new-ish to postgresql.\n>>\n>> I'm running 8.1 installed from source on a Debian Sarge server. I \n>> have a simple query that I believe I've placed the indexes correctly \n>> for, and I still end up with a seq scan. It makes sense, kinda, but \n>> it should be able to use the index to gather the right values. I do \n>> have a production set of data inserted into the tables, so this is \n>> running realistically:\n>>\n>> dli=# explain analyze SELECT ordered_products.product_id\n>> dli-# FROM to_ship, ordered_products\n>> dli-# WHERE to_ship.ordered_product_id = ordered_products.id AND\n>> dli-# ordered_products.paid = TRUE AND\n>> dli-# ordered_products.suspended_sub = FALSE;\n> \n> \n> You scan 600000 rows from to_ship to get about 25000 - so some way to \n> cut this down would help.\n> \n> Try out an explicit INNER JOIN which includes the filter info for paid \n> and suspended_sub in the join condition (you may need indexes on each of \n> id, paid and suspended_sub, so that the 8.1 optimizer can use a bitmap \n> scan):\n> \n> \n> SELECT ordered_products.product_id\n> FROM to_ship INNER JOIN ordered_products\n> ON (to_ship.ordered_product_id = ordered_products.id\n> AND ordered_products.paid = TRUE AND \n> ordered_products.suspended_sub = FALSE);\n\n\nIt has been a quiet day today, so I took another look at this. If the \nselectivity of clauses :\n\npaid = TRUE\nsuspended_sub = FALSE\n\nis fairly high, then rewriting as a subquery might help:\n\nSELECT o.product_id\nFROM ordered_products o\nWHERE o.paid = TRUE\nAND o.suspended_sub = FALSE\nAND EXISTS (\n SELECT 1\n FROM to_ship s\n WHERE s.ordered_product_id = o.id\n);\n\n\nHowever it depends on you not needing anything from to_ship in the \nSELECT list...\n\nCheers\n\nMark\n", "msg_date": "Wed, 21 Dec 2005 19:55:20 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Join" } ]
[ { "msg_contents": "\nForgive the cross-posting, but I found myself wondering if might not be some way future way of telling the planner that a given table (column ?) has a high likelyhood of being TOASTed. Similar to the random_page_cost in spirit. We've got a lot of indexed data that is spatial and have some table where no data is toasted (road segments) and others where evrything is.\n\nAn idle suggestion from one who knows that he is meddling with ;-}\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> Jessica M Salmon\n> Sent: Wednesday, December 14, 2005 9:09 AM\n> To: PostGIS Users Discussion\n> Subject: Re: [postgis-users] Is my query planner failing me, or vice versa?\n> \n> Thanks, Marcus, for explaining.\n> \n> And thanks, Robert, for asking that question about adjusting page size.\n> \n> My tuples are definitely toasted (some of my geometries are 30X too big for\n> a single page!), so I'm glad I'm aware of the TOAST tables now. I suppose\n> there's not much to be done about it, but it's good to know.\n> \n> Thanks everyone for such an array of insightful help.\n> \n> -Meghan\n", "msg_date": "Wed, 14 Dec 2005 16:23:47 -0800", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [postgis-users] Is my query planner failing me, or vice versa?" }, { "msg_contents": "\"Gregory S. Williamson\" <[email protected]> writes:\n> Forgive the cross-posting, but I found myself wondering if might not\n> be some way future way of telling the planner that a given table\n> (column ?) has a high likelyhood of being TOASTed.\n\nWhat would you expect the planner to do with the information, exactly?\n\nWe could certainly cause ANALYZE to record some estimate of this, but\nI'm not too clear on what happens after that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Dec 2005 00:36:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [postgis-users] Is my query planner failing me, or vice versa? " }, { "msg_contents": "Hi, Gregory,\n\nGregory S. Williamson wrote:\n> Forgive the cross-posting, but I found myself wondering if might not\n> be some way future way of telling the planner that a given table\n> (column ?) has a high likelyhood of being TOASTed. Similar to the\n> random_page_cost in spirit. We've got a lot of indexed data that is\n> spatial and have some table where no data is toasted (road segments)\n> and others where evrything is.\n\nI'd personally put this into ANALYZE, it already collects statistics, so\nit could also calculate TOASTing likelyhood and average TOASTed size.\n\nMaybe that 8.X PostgreSQL already does this, I'm a bit lagging :-)\n\nMarkus\n", "msg_date": "Thu, 15 Dec 2005 12:03:23 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [postgis-users] Is my query planner failing me, or vice versa?" } ]
[ { "msg_contents": "Hi all,\n\nI have been using PostgreSQL (currently 7.4.7) for several years now and am\nvery happy with it but I currently run a website that has had a little bit\nof a boost and I am starting to see some performance problems (Not\nnecessarily PostgreSQL).\n\nThe original website server ran on a single machine Dual 1.4 Dell 1650. I\nmoved the database off this machine and onto a Dual Opteron 248 with two\nSATA hard disks mirrored using software raid. The Apache server remains on\nthe small machine along with a Squid Proxy. I also started to agressively\ncache most requests to the database and have taken the requests hitting the\ndatabase down by about %65 using Squid and memcached. I am looking to take\nthis to about %80 over the next few weeks. The problem is that the database\nhas increased in size by over 100% over the same period and looks likely to\nincrease further.\n\nThe database has been allocated 2Gb worth of shared buffers and I have\ntweaked most of the settings in the config recently to see if I could\nincrease the performance any more and have seen very little performance gain\nfor the various types of queries that I am running.\n\nIt would appear that the only alternative may be a new machine that has a\nbetter disk subsystem or a large disk array then bung more RAM in the\nOpteron machine (max 16Gb 4Gb fitted) or purchase another machine with built\nin U320 SCSI ie an HP Proliant DL380 or Dell 2850.\n\nSome indication of current performance is as follows. I know these\nstatements are hardly indicative of a full running application and\neverything that goes with it but I would be very interested in hearing if\nanyone has a similar setup and is able to squeeze a lot more out of\nPostgreSQL. From what I can see here the numbers look OK for the hardware I\nam running on and that its not PostgreSQL that is the problem.\n\nInserting 1 million rows into the following table.These are raw insert\nstatements.\n\n Column | Type | Modifiers\n--------+------------------------+-----------\n id | integer |\n data | character varying(100) |\n\nwhere \"data\" has an average of 95 characters.\n\n23mins 12 seconds.\n\nWrapping this in a transaction:\n\n1min 47 seconds.\n\nSelect from the following table.\n\n\n Table \"public.test\"\nColumn | Type | Modifiers\n text | character varying(50) | not null\n id | integer | not null\n num | integer | default 0\nIndexes:\n \"test_pkey\" primary key, btree (text, id)\n \"test_id_idx\" btree (id)\n \"test_text_idx\" btree (text)\n\nselect count(*) from test;\n count\n----------\n 14289420\n(1 row)\n\n# select * from test where text = 'uk' ;\nTime: 1680.607 ms\n\nGet it into RAM hence the slight delay here. This delay has a serious impact\non the user waiting in the web application.\n\n# select * from test where text = 'uk' ;\nTime: 477.739 ms\n\nAfter it is in RAM.\n\nselect count(*) from test where text = 'uk' ;\n count\n--------\n 121058\n(1 row)\n\n\nThe website has a fairly high volume of inserts and deletes which also means\nthat I use pg_autovacum to keep things reasonably clean. However, I find\nthat every couple of weeks performance degrades so much that I need to do a\nvacuum full which can take a long time and cripples the database. 
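(For what it is worth, that is just a plain\n\nVACUUM FULL ANALYZE;\n\nor similar run off-peak -- and since VACUUM FULL holds an exclusive lock on each\ntable while it works, everything touching those tables stalls until it finishes.) 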
I have\nread in the docs that you should only need to vacuum\nfull rarely but I am\nfinding in practice this is not the case which might suggest that I have\nsomething set wrong in my config file.\n\nmax_fsm_pages = 500000 # I am thinking this might be a bit low.\nmax_fsm_relations = 1000\n\nAny pointers to better hardware or recommendations on settings gladly\nrecieved.\n\nRegards,\nHarry Jackson.\n\n--\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n\n", "msg_date": "Thu, 15 Dec 2005 01:51:48 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance question." }, { "msg_contents": "> I have been using PostgreSQL (currently 7.4.7) for several years now and \n> am very happy with it but I currently run a website that has had a \n> little bit of a boost and I am starting to see some performance problems \n> (Not necessarily PostgreSQL).\n\nPostgreSQL 8.1.1 should give you greater performance...\n\n> The database has been allocated 2Gb worth of shared buffers and I have \n> tweaked most of the settings in the config recently to see if I could \n> increase the performance any more and have seen very little performance \n> gain for the various types of queries that I am running.\n\nThat sounds like far too many shared buffers? I wouldn't usually use \nmore than a few tens of thousands, eg. 10k-50k. And that'd only be on \n8.1 that has more efficient buffer management.\n\n> Get it into RAM hence the slight delay here. This delay has a serious \n> impact on the user waiting in the web application.\n> \n> # select * from test where text = 'uk' ;\n> Time: 477.739 ms\n\nYou need to show us the explain analyze plan output for this. But 477ms \nis far too slow for an index scan on a million row table.\n\n> max_fsm_pages = 500000 # I am thinking this might be a bit low.\n> max_fsm_relations = 1000\n\nMaybe do a once-off vacuum full to make sure all your tables are clean?\n\nChris\n\n", "msg_date": "Thu, 15 Dec 2005 10:02:18 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance question." }, { "msg_contents": "On Thu, 15 Dec 2005, Harry Jackson wrote:\n\n> Hi all,\n\n> I have been using PostgreSQL (currently 7.4.7) for several years now and\n> am very happy with it but I currently run a website that has had a\n> little bit of a boost and I am starting to see some performance problems\n> (Not necessarily PostgreSQL).\n\nDefinately plan an 8.1 upgrade.\n\n[snip]\n\n> The database has been allocated 2Gb worth of shared buffers and I have\n> tweaked most of the settings in the config recently to see if I could\n> increase the performance any more and have seen very little performance\n> gain for the various types of queries that I am running.\n\n2 GB is too much for 7.4. I'm not sure about 8.1 because there hasn't been\nany conclusive testing I think. 
OSDL is using 200000, which is ~1.5GB.\n\nWhy not turn on log_min_duration_statement or process the log with PQA\n(http://pgfoundry.org/projects/pqa/) to look for expensive queries.\n\nAlso, why kind of IO load are you seeing (iostat will tell you).\n\n> It would appear that the only alternative may be a new machine that has\n> a better disk subsystem or a large disk array then bung more RAM in the\n> Opteron machine (max 16Gb 4Gb fitted) or purchase another machine with\n> built in U320 SCSI ie an HP Proliant DL380 or Dell 2850.\n\nHave a look at what your IO load is like, first.\n\n\n> Some indication of current performance is as follows. I know these\n> statements are hardly indicative of a full running application and\n> everything that goes with it but I would be very interested in hearing\n> if anyone has a similar setup and is able to squeeze a lot more out of\n> PostgreSQL. From what I can see here the numbers look OK for the\n> hardware I am running on and that its not PostgreSQL that is the\n> problem.\n\n> Inserting 1 million rows into the following table.These are raw insert\n> statements.\n\n[snip]\n\nYes, the performance looks a bit poor. I'd say that 8.1 will help address\nthat.\n\nAlso, don't under estimate the effects of CLUSTER on performance,\nparticularly <8.1.\n\nThanks,\n\nGavin\n", "msg_date": "Thu, 15 Dec 2005 13:29:43 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance question." }, { "msg_contents": "On 12/15/05, Christopher Kings-Lynne <[email protected]> wrote:\n> PostgreSQL 8.1.1 should give you greater performance...\n\nIndeed it has.\n\nI am seeing a 25% increase in one particular select statement. This\nincreases to 32% with\n\nset enable_bitmapscan to off;\n\nI also ran a test script full of common SQL that the application runs.\nI added some extra SQL to burst the cache a bit and I have managed to\nget an average 14% increase.\n\nI have not started tweaking things that much yet to take advantage of\nthe new parameters so I may yet see more of an increase but initial\nindications are that the changes from 7.4.7 to 8.1.1 are significant.\n\nThe one thing that may be skewing these results is that this was\ncompiled and installed from source with\n\n./configure CFLAGS='-O2' --with-openssl --enable-thread-safety\n\nI am not sure what the default Debian binary for 7.4.7 is compiled\nwith so this may have had some affect.\n\n--\nHarry\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n", "msg_date": "Sun, 18 Dec 2005 02:11:16 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance question." }, { "msg_contents": "On Sun, Dec 18, 2005 at 02:11:16AM +0000, Harry Jackson wrote:\n> The one thing that may be skewing these results is that this was\n> compiled and installed from source with\n> \n> ./configure CFLAGS='-O2' --with-openssl --enable-thread-safety\n> \n> I am not sure what the default Debian binary for 7.4.7 is compiled\n> with so this may have had some affect.\n\nThis isn't a performance note, but you might be interested in hearing that\nthere are being maintained official backports of 8.0 and 8.1 for Debian sarge\n(by Martin Pitt, the same person who maintains both the sarge and sid\nversions). 
Take a look at\n\n http://people.debian.org/~mpitt/packages/sarge-backports/\n\nIt might be more comfortable in the long run than maintaining your own source\ninstallation, although YMMV.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 18 Dec 2005 03:18:40 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance question. [OT]" } ]
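A minimal psql sketch of the checks suggested in this thread, reusing Harry's table and index names (test, test_text_idx); the CLUSTER form shown is the 7.4/8.1-era syntax, and both CLUSTER and VACUUM FULL take exclusive locks while they run:

EXPLAIN ANALYZE SELECT * FROM test WHERE text = 'uk';   -- confirm test_text_idx is actually being used

CLUSTER test_text_idx ON test;     -- physically order the table by the lookup column
ANALYZE test;                      -- refresh statistics after the rewrite

VACUUM FULL VERBOSE ANALYZE test;  -- one-off cleanup if the free space map overflowed and the table bloated

After a CLUSTER on that index the 121058 'uk' rows sit in adjacent pages, so the cold-cache fetch that took 1680 ms needs far fewer random page reads.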
[ { "msg_contents": "Hi, \nI have a java.util.List of values (10000) which i wanted to use for a query in the where clause of an simple select statement. iterating over the list and and use an prepared Statement is quite slow. Is there a more efficient way to execute such a query. \nThanks for any help. \nJohannes \n..... \nList ids = new ArrayList(); \n\n.... List is filled with 10000 values ...\n\nList uuids = new ArrayList(); \nPreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\"); \nfor (Iterator iter = ids.iterator(); iter.hasNext();) {\nString id = (String) iter.next();\npstat.setString(1, id);\nrs = pstat.executeQuery();\nif (rs.next()) {\nuuids.add(rs.getString(1));\n}\nrs.close();\n} \n... \n\n", "msg_date": "Thu, 15 Dec 2005 07:22:47 +0100", "msg_from": "=?iso-8859-1?Q?B=FChler=2C_Johannes?= <[email protected]>", "msg_from_op": true, "msg_subject": "effizient query with jdbc" } ]
[ { "msg_contents": " Hi, \n I have a java.util.List of values (10000) which i wanted to use for a\n query in the where clause of an simple select statement. iterating\n over the list and and use an prepared Statement is quite slow. Is\n there a more efficient way to execute such a query. Thanks for any\n help. Johannes \n ..... \n List ids = new ArrayList(); \n \n .... List is filled with 10000 values ...\n \n List uuids = new ArrayList(); \n PreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM\n MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\"); for (Iterator iter =\n ids.iterator(); iter.hasNext();) { String id = (String) iter.next();\n pstat.setString(1, id);\n rs = pstat.executeQuery();\n if (rs.next()) {\n uuids.add(rs.getString(1));\n }\n rs.close();\n } \n ... \n \n\n", "msg_date": "Thu, 15 Dec 2005 08:07:14 +0100", "msg_from": "Johannes =?ISO-8859-1?B?QvxobGVy?= <[email protected]>", "msg_from_op": true, "msg_subject": "effizient query with jdbc" }, { "msg_contents": "You could issue one query containing a\nselect uuid FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID in (xy)\nwhere xy is a large comma separated list of your values.\n\nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n-----\n\nhttp://www.laliluna.de\n\n* Tutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB\n* Seminars and Education at reasonable prices\n* Get professional support and consulting for these technologies\n\n\n\nJohannes B�hler schrieb:\n\n> Hi, \n> I have a java.util.List of values (10000) which i wanted to use for a\n> query in the where clause of an simple select statement. iterating\n> over the list and and use an prepared Statement is quite slow. Is\n> there a more efficient way to execute such a query. Thanks for any\n> help. Johannes \n> ..... \n> List ids = new ArrayList(); \n> \n> .... List is filled with 10000 values ...\n> \n> List uuids = new ArrayList(); \n> PreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM\n> MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\"); for (Iterator iter =\n> ids.iterator(); iter.hasNext();) { String id = (String) iter.next();\n> pstat.setString(1, id);\n> rs = pstat.executeQuery();\n> if (rs.next()) {\n> uuids.add(rs.getString(1));\n> }\n> rs.close();\n> } \n> ... \n> \n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n> \n>\n", "msg_date": "Fri, 06 Jan 2006 15:03:42 +0100", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effizient query with jdbc" } ]
[ { "msg_contents": "Well, what does the random_page_cost do internally ?\n\nI don't think I'd expect postgres to be able to *do* anything in particular, any more than I would expect it to \"do\" something about slow disk I/O or having limited cache. But it might be useful to the EXPLAIN ANALYZE in estimating costs of retrieving such data. \n\nAdmittedly, this is not as clear as wanting a sequential scan in preference to indexed reads when there are either very few rows or a huge number, but it strikes me as useful to me the DBA to have this factoid thrust in front of me when considering why a given query is slower than I might like. Perhaps an added time based on this factor and the random_page_cost value, since lots of TOAST data and a high access time would indicate to my (ignorant!) mind that retrieval would be slower, especially over large data sets.\n\nForgive my ignorance ... obviously I am but a humble user. grin.\n\nG\n\n-----Original Message-----\nFrom:\tTom Lane [mailto:[email protected]]\nSent:\tWed 12/14/2005 9:36 PM\nTo:\tGregory S. Williamson\nCc:\[email protected]; PostGIS Users Discussion\nSubject:\tRe: [PERFORM] [postgis-users] Is my query planner failing me, or vice versa? \n\"Gregory S. Williamson\" <[email protected]> writes:\n> Forgive the cross-posting, but I found myself wondering if might not\n> be some way future way of telling the planner that a given table\n> (column ?) has a high likelyhood of being TOASTed.\n\nWhat would you expect the planner to do with the information, exactly?\n\nWe could certainly cause ANALYZE to record some estimate of this, but\nI'm not too clear on what happens after that...\n\n\t\t\tregards, tom lane\n\n!DSPAM:43a100d6285261205920220!\n\n\n\n\n", "msg_date": "Thu, 15 Dec 2005 04:03:16 -0800", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [postgis-users] Is my query planner failing me, or vice versa? " }, { "msg_contents": "> -----Original Message-----\n> From:\tTom Lane [mailto:[email protected]]\n> Sent:\tWed 12/14/2005 9:36 PM\n> To:\tGregory S. Williamson\n> Cc:\[email protected]; PostGIS Users Discussion\n> Subject:\tRe: [PERFORM] [postgis-users] Is my query planner failing\nme,\n> or vice versa?\n> \"Gregory S. Williamson\" <[email protected]> writes:\n> > Forgive the cross-posting, but I found myself wondering if might not\n> > be some way future way of telling the planner that a given table\n> > (column ?) has a high likelyhood of being TOASTed.\n> \n> What would you expect the planner to do with the information, exactly?\n> \n> We could certainly cause ANALYZE to record some estimate of this, but\n> I'm not too clear on what happens after that...\n> \n> \t\t\tregards, tom lane\n>\n>\n> -----Original Message-----\n> From: [email protected] [mailto:postgis-users-\n> [email protected]] On Behalf Of Gregory S. Williamson\n> Sent: 15 December 2005 12:03\n> To: Tom Lane\n> Cc: [email protected]; PostGIS Users Discussion\n> Subject: RE: [PERFORM] [postgis-users] Is my query planner failing me,or\n> vice versa?\n> \n> Well, what does the random_page_cost do internally ?\n> \n> I don't think I'd expect postgres to be able to *do* anything in\n> particular, any more than I would expect it to \"do\" something about slow\n> disk I/O or having limited cache. 
But it might be useful to the EXPLAIN\n> ANALYZE in estimating costs of retrieving such data.\n> \n> Admittedly, this is not as clear as wanting a sequential scan in\n> preference to indexed reads when there are either very few rows or a huge\n> number, but it strikes me as useful to me the DBA to have this factoid\n> thrust in front of me when considering why a given query is slower than I\n> might like. Perhaps an added time based on this factor and the\n> random_page_cost value, since lots of TOAST data and a high access time\n> would indicate to my (ignorant!) mind that retrieval would be slower,\n> especially over large data sets.\n> \n> Forgive my ignorance ... obviously I am but a humble user. grin.\n> \n> G\n\n\nAs I understood from the original discussions with Markus/Tom, the problem\nwas that the optimizer didn't consider the value of the VacAttrStats\nstawidth value when calculating the cost of a sequential scan. I don't know\nif this is still the case though - Tom will probably have a rough idea\nalready whereas I would need to spend some time sifting through the source.\n\nHowever, I do know that the PostGIS statistics collector does store the\naverage detoasted geometry size in stawidth during ANALYZE so the value is\nthere if it can be used.\n\n\nKind regards,\n\nMark.\n\n------------------------\nWebBased Ltd\n17 Research Way\nPlymouth\nPL6 8BT\n\nT: +44 (0)1752 797131\nF: +44 (0)1752 791023\n\nhttp://www.webbased.co.uk \nhttp://www.infomapper.com\nhttp://www.swtc.co.uk \n\nThis email and any attachments are confidential to the intended recipient\nand may also be privileged. If you are not the intended recipient please\ndelete it from your system and notify the sender. You should not copy it or\nuse it for any purpose nor disclose or distribute its contents to any other\nperson.\n\n\n", "msg_date": "Thu, 15 Dec 2005 14:20:55 -0000", "msg_from": "\"Mark Cave-Ayland\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [postgis-users] Is my query planner failing me,or vice versa?" } ]
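A hedged sketch of how to look at the width estimate Mark refers to; 'my_geom_table' is a placeholder name, and avg_width in the pg_stats view is just a friendlier presentation of pg_statistic.stawidth:

SELECT tablename, attname, avg_width
FROM pg_stats
WHERE tablename = 'my_geom_table';               -- placeholder table name

SELECT a.attname, s.stawidth
FROM pg_statistic s
JOIN pg_attribute a ON a.attrelid = s.starelid AND a.attnum = s.staattnum
WHERE s.starelid = 'my_geom_table'::regclass;    -- the raw value ANALYZE stored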
[ { "msg_contents": "Hi, \nI have a java.util.List of values (10000) which i wanted to use for a query in the where clause of an simple select statement. iterating over the list and and use an prepared Statement is quite slow. Is there a more efficient way to execute such a query. \n\nThanks for any help. \nJohannes \n..... \nList ids = new ArrayList();\n\n.... List is filled with 10000 values ...\n\nList uuids = new ArrayList(); \nPreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\"); \nfor (Iterator iter = ids.iterator(); iter.hasNext();) {\nString id = (String) iter.next();\npstat.setString(1, id);\nrs = pstat.executeQuery();\nif (rs.next()) {\nuuids.add(rs.getString(1));\n}\nrs.close();\n} \n... \n\n\n\n\n\n", "msg_date": "Thu, 15 Dec 2005 15:44:23 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "effizient query with jdbc" }, { "msg_contents": "The problem is you are getting the entire list back at once.\n\nYou may want to try using a cursor.\n\nDave\nOn 15-Dec-05, at 9:44 AM, [email protected] wrote:\n\n> Hi,\n> I have a java.util.List of values (10000) which i wanted to use for \n> a query in the where clause of an simple select statement. \n> iterating over the list and and use an prepared Statement is quite \n> slow. Is there a more efficient way to execute such a query.\n>\n> Thanks for any help.\n> Johannes\n> .....\n> List ids = new ArrayList();\n>\n> .... List is filled with 10000 values ...\n>\n> List uuids = new ArrayList();\n> PreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM \n> MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\");\n> for (Iterator iter = ids.iterator(); iter.hasNext();) {\n> String id = (String) iter.next();\n> pstat.setString(1, id);\n> rs = pstat.executeQuery();\n> if (rs.next()) {\n> uuids.add(rs.getString(1));\n> }\n> rs.close();\n> }\n> ...\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n", "msg_date": "Thu, 22 Dec 2005 09:17:45 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effizient query with jdbc" }, { "msg_contents": "Is there a reason you can't rewrite your SELECT like:\n\nSELECT UUID FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID IN (a, b, c, d)\n\nEven doing them 100 at a time will make a big difference; you should \nput as many in the list as pgsql supports. I'm assuming that there's \nan index over KEYWORDS_ID.\n\nRetrieving 10000 rows with 10000 statements is generally a Bad Idea.\n\nS\n\nAt 08:17 AM 12/22/2005, Dave Cramer wrote:\n>The problem is you are getting the entire list back at once.\n>\n>You may want to try using a cursor.\n>\n>Dave\n>On 15-Dec-05, at 9:44 AM, [email protected] wrote:\n>\n>>Hi,\n>>I have a java.util.List of values (10000) which i wanted to use for\n>>a query in the where clause of an simple select statement.\n>>iterating over the list and and use an prepared Statement is quite\n>>slow. Is there a more efficient way to execute such a query.\n>>\n>>Thanks for any help.\n>>Johannes\n>>.....\n>>List ids = new ArrayList();\n>>\n>>.... 
List is filled with 10000 values ...\n>>\n>>List uuids = new ArrayList();\n>>PreparedStatement pstat = db.prepareStatement(\"SELECT UUID FROM\n>>MDM.KEYWORDS_INFO WHERE KEYWORDS_ID = ?\");\n>>for (Iterator iter = ids.iterator(); iter.hasNext();) {\n>>String id = (String) iter.next();\n>>pstat.setString(1, id);\n>>rs = pstat.executeQuery();\n>>if (rs.next()) {\n>>uuids.add(rs.getString(1));\n>>}\n>>rs.close();\n>>}\n>>...\n>>\n>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n", "msg_date": "Thu, 22 Dec 2005 09:23:49 -0600", "msg_from": "Steve Peterson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effizient query with jdbc" } ]
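A sketch of the SQL side of Steve's suggestion, using the table name from Johannes' code; the id literals, the batch size and the index name are placeholders, and the CREATE INDEX is only an assumption about what may be missing:

-- one statement per batch of ids instead of one statement per id
SELECT keywords_id, uuid
FROM MDM.KEYWORDS_INFO
WHERE keywords_id IN ('id0001', 'id0002', 'id0003' /* ... a few hundred ids per batch ... */);

-- only pays off if the filter column is indexed (hypothetical index name)
CREATE INDEX keywords_info_keywords_id_idx ON MDM.KEYWORDS_INFO (keywords_id);

Selecting keywords_id alongside uuid lets the client map each returned row back to the input it came from, which the one-id-at-a-time loop did implicitly.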
[ { "msg_contents": "Luke,\n\nHow did you measure 800MB/sec, is it cached, or physical I/O?\n\n-anjan\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Wednesday, December 14, 2005 2:10 AM\nTo: Charles Sprickman; [email protected]\nSubject: Re: [PERFORM] SAN/NAS options\n\nCharles,\n\n> Lastly, one thing that I'm not yet finding in trying to \n> educate myself on SANs is a good overview of what's come out \n> in the past few years that's more affordable than the old \n> big-iron stuff. For example I saw some brief info on this \n> list's archives about the Dell/EMC offerings. Anything else \n> in that vein to look at?\n\nMy two cents: SAN is a bad investment, go for big internal storage.\n\nThe 3Ware or Areca SATA RAID adapters kick butt and if you look in the\nnewest colos (I was just in ours \"365main.net\" today), you will see rack\non rack of machines with from 4 to 16 internal SATA drives. Are they\nall DB servers? Not necessarily, but that's where things are headed.\n\nYou can get a 3U server with dual opteron 250s, 16GB RAM and 16x 400GB\nSATAII drives with the 3Ware 9550SX controller for $10K - we just\nordered 4 of them. I don't think you can buy an external disk chassis\nand a Fibre channel NIC for that.\n\nPerformance? 800MB/s RAID5 reads, 400MB/s RAID5 writes. Random IOs are\nalso very high for RAID10, but we don't use it so YMMV - look at Areca\nand 3Ware.\n\nManagability? Good web management interfaces with 6+ years of\ndevelopment from 3Ware, e-mail, online rebuild options, all the goodies.\nNo \"snapshot\" or offline backup features like the high-end SANs, but do\nyou really need it?\n\nNeed more power or storage over time? Run a parallel DB like Bizgres\nMPP, you can add more servers with internal storage and increase your\nI/O, CPU and memory.\n\n- Luke\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Thu, 15 Dec 2005 16:13:04 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" } ]
[ { "msg_contents": "Physical using xfs on Linux.\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: Anjan Dave <[email protected]>\r\nTo: Luke Lonergan <[email protected]>; Charles Sprickman <[email protected]>; [email protected] <[email protected]>\r\nSent: Thu Dec 15 16:13:04 2005\r\nSubject: RE: [PERFORM] SAN/NAS options\r\n\r\nLuke,\r\n\r\nHow did you measure 800MB/sec, is it cached, or physical I/O?\r\n\r\n-anjan\r\n\r\n-----Original Message-----\r\nFrom: Luke Lonergan [mailto:[email protected]] \r\nSent: Wednesday, December 14, 2005 2:10 AM\r\nTo: Charles Sprickman; [email protected]\r\nSubject: Re: [PERFORM] SAN/NAS options\r\n\r\nCharles,\r\n\r\n> Lastly, one thing that I'm not yet finding in trying to \r\n> educate myself on SANs is a good overview of what's come out \r\n> in the past few years that's more affordable than the old \r\n> big-iron stuff. For example I saw some brief info on this \r\n> list's archives about the Dell/EMC offerings. Anything else \r\n> in that vein to look at?\r\n\r\nMy two cents: SAN is a bad investment, go for big internal storage.\r\n\r\nThe 3Ware or Areca SATA RAID adapters kick butt and if you look in the\r\nnewest colos (I was just in ours \"365main.net\" today), you will see rack\r\non rack of machines with from 4 to 16 internal SATA drives. Are they\r\nall DB servers? Not necessarily, but that's where things are headed.\r\n\r\nYou can get a 3U server with dual opteron 250s, 16GB RAM and 16x 400GB\r\nSATAII drives with the 3Ware 9550SX controller for $10K - we just\r\nordered 4 of them. I don't think you can buy an external disk chassis\r\nand a Fibre channel NIC for that.\r\n\r\nPerformance? 800MB/s RAID5 reads, 400MB/s RAID5 writes. Random IOs are\r\nalso very high for RAID10, but we don't use it so YMMV - look at Areca\r\nand 3Ware.\r\n\r\nManagability? Good web management interfaces with 6+ years of\r\ndevelopment from 3Ware, e-mail, online rebuild options, all the goodies.\r\nNo \"snapshot\" or offline backup features like the high-end SANs, but do\r\nyou really need it?\r\n\r\nNeed more power or storage over time? Run a parallel DB like Bizgres\r\nMPP, you can add more servers with internal storage and increase your\r\nI/O, CPU and memory.\r\n\r\n- Luke\r\n\r\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 5: don't forget to increase your free space map settings\r\n\r\n\r\n", "msg_date": "Thu, 15 Dec 2005 16:40:14 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" } ]
[ { "msg_contents": "Hello group,\n\nI've got a really bad problem with my postgres DB and server.\nIt is a linux machine with 1GB of RAM and 2 CPUs.\nThe postgres Version is 7.4.\n\nThe problem is, that once the postmaster has started, the System is just\nabout useless. At least querys to the db are so slow, that it most often\nruns into timeouts. First I thought something was wrong with my querys or I\nforgot to close the connections. But this isn't the case, I cut all the\nconnections to the server so that there are no incoming requests. Still,\nonce I start the postmaster and look into the statistics of the top-command,\nthe IOWAIT parameter of all CPUs are at about 100%.\n\nThis is really weird, just a few hours ago the machine run very smooth\nserving the data for a big portal.\n\nHas anybody an idea what might have happened here?\nI need a quick solution, since I'm talking about an live server that should\nbe running 24 hours a day.\n\nThanks in advance,\nMoritz\n\nPS: I'm not a administrator so I don't know if I have wrote down all the\nrelevant data. If not, please ask for it and give me a hint how to get them\n\nHello group,\n\nI've got a really bad problem with my postgres DB and server.\nIt is a linux machine with 1GB of RAM and 2 CPUs.\nThe postgres Version is 7.4.\n\nThe problem is, that once the postmaster has started, the System is\njust about useless. At least querys to the db are so slow, that it most\noften runs into timeouts. First I thought something was wrong with my\nquerys or I forgot to close the connections. But this isn't the case, I\ncut all the connections to the server so that there are no incoming\nrequests. Still, once I start the postmaster and look into the\nstatistics of the top-command, the IOWAIT parameter of all CPUs are at\nabout 100%. \n\nThis is really weird, just a few hours ago the machine run very smooth serving the data for a big portal.\n\nHas anybody an idea what might have happened here?\nI need a quick solution, since I'm talking about an live server that should be running 24 hours a day.\n\nThanks in advance,\nMoritz\n\nPS: I'm not a administrator so I don't know if I have wrote down all\nthe relevant data. If not, please ask for it and give me a hint how to\nget them", "msg_date": "Fri, 16 Dec 2005 14:15:58 +0100", "msg_from": "Moritz Bayer <[email protected]>", "msg_from_op": true, "msg_subject": "Crashing DB or Server?" }, { "msg_contents": "On 12/16/05, Moritz Bayer <[email protected]> wrote:\n> This is really weird, just a few hours ago the machine run very smooth\n> serving the data for a big portal.\n\nCan you log the statements that are taking a long time and post them\nto the list with the table structures and indexes for the tables being\nused.\n\nTo do this turn on logging for statements taking a long time, edit\npostgresql.conf file and change the following two parameters.\n\nlog_min_duration_statement = 2000 # 2 seconds\n\nYour log should now be catching the statements that are slow. Then use\nthe statements to get the explain plan ie\n\ndbnamr=# explain [sql thats taking a long time]\n\nWe would also need to see the table structures.\n\ndbname=# \\d [table name of each table in above explain plan]\n\n> Has anybody an idea what might have happened here?\n> I need a quick solution, since I'm talking about an live server that should\n> be running 24 hours a day.\n\nIt may be that the planner has started to pick a bad plan. This can\nhappen if the database is regularly changing and the stats are not up\nto date. 
I believe it can happen even if the stats are up to date but\nis much less likely to do so.\n\nIt might also be an idea to vacuum the database.\n\ndbname=# VACUUM ANALYZE;\n\nThis will load the server up for a while though.\n\n--\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n", "msg_date": "Fri, 16 Dec 2005 13:38:24 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crashing DB or Server?" }, { "msg_contents": "Hi,\n\nactually every SELECT statements takes a couple of minutes.\nFor example\nSELECT * FROM pg_stat_activity already takes 260 sec.\n\nAnd the IOWAIT value increases just after starting the postmaster, no\nquerys are processed.\n\nI started vacuumizing the tables of the DB. Still, it doesn't make a\ndifference.\n\nSo I don't know if the structure of the tables are relevant.\nFor example, I have got about 30 of those:\n\nCREATE TABLE \"public\".\"tbl_highscore_app4\" (\n \"id\" BIGSERIAL,\n \"userid\" INTEGER NOT NULL,\n \"score\" INTEGER DEFAULT 0 NOT NULL,\n \"occured\" DATE DEFAULT now() NOT NULL,\n CONSTRAINT \"tbl_highscore_app4_pkey\" PRIMARY KEY(\"userid\")\n) WITHOUT OIDS;\n\nthe select-statements are done through functions, for example\n\nCREATE OR REPLACE FUNCTION \"public\".\"getownrankingapp4\" (integer, integer)\nRETURNS integer AS'\nDECLARE i_userid INTEGER;\nDECLARE i_score INTEGER;\nDECLARE i_rank INTEGER;\nbegin\ni_userid := $1;\ni_score := $2;\ni_rank := 1;\n if i_score <= 0 then\n SELECT INTO i_rank max(id) FROM tbl_highscore_app4_tmp;\n if i_rank IS null then\n i_rank = 1;\n else\n i_rank = i_rank +1;\n end if;\n else\n SELECT INTO i_rank max(id) FROM tbl_highscore_app4_tmp WHERE\nscore>=i_score; if i_rank IS null then i_rank = 1; end if; end if;\nreturn (i_rank);\nEND\n'LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;\n\n\nThe tmp table looks like this (and is filled once a night with the current\ndata):\n\nCREATE TABLE \"public\".\"tbl_highscore_app4_tmp\" (\n \"id\" INTEGER NOT NULL,\n \"userid\" INTEGER NOT NULL,\n \"score\" INTEGER NOT NULL\n) WITH OIDS;\n\nCREATE INDEX \"tbl_highscore_app4_tmp_index\" ON\n\"public\".\"tbl_highscore_app4_tmp\"\nUSING btree (\"score\");\n\nHi,\n\nactually every SELECT statements takes a couple of minutes. \nFor example \nSELECT * FROM pg_stat_activity already takes 260 sec.\n\nAnd the IOWAIT value increases just after  starting the postmaster, no querys are processed.\n\nI started vacuumizing the tables of the DB.  Still, it doesn't make a difference.\n\nSo I don't know if the structure of the tables are relevant. 
\nFor example, I have got about 30 of those:\n\nCREATE TABLE \"public\".\"tbl_highscore_app4\" (\n  \"id\" BIGSERIAL, \n  \"userid\" INTEGER NOT NULL, \n  \"score\" INTEGER DEFAULT 0 NOT NULL, \n  \"occured\" DATE DEFAULT now() NOT NULL, \n  CONSTRAINT \"tbl_highscore_app4_pkey\" PRIMARY KEY(\"userid\")\n) WITHOUT OIDS;\n\nthe select-statements are done through functions, for example \n\nCREATE OR REPLACE FUNCTION \"public\".\"getownrankingapp4\" (integer, integer) RETURNS integer AS'\nDECLARE i_userid INTEGER; \nDECLARE i_score INTEGER;  \nDECLARE i_rank INTEGER;  \nbegin  \ni_userid := $1;  \ni_score := $2;  \ni_rank := 1;  \n if i_score <= 0 then  \n             \nSELECT INTO i_rank max(id) FROM  \ntbl_highscore_app4_tmp;  \n             if i_rank IS null then    \n                  i_rank = 1;  \n             else    \n                 \ni_rank = i_rank +1;  \n            end if;  \n else  \n        SELECT INTO i_rank max(id)\nFROM tbl_highscore_app4_tmp WHERE score>=i_score;  if i_rank IS\nnull then    i_rank = 1;  end if;  end\nif;  \nreturn (i_rank);  \nEND\n'LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;\n\n\nThe tmp table looks like this (and is filled once a night with the current data):\n\nCREATE TABLE \"public\".\"tbl_highscore_app4_tmp\" (\n  \"id\" INTEGER NOT NULL, \n  \"userid\" INTEGER NOT NULL, \n  \"score\" INTEGER NOT NULL\n) WITH OIDS;\n\nCREATE INDEX \"tbl_highscore_app4_tmp_index\" ON \"public\".\"tbl_highscore_app4_tmp\"\nUSING btree (\"score\");", "msg_date": "Fri, 16 Dec 2005 15:10:51 +0100", "msg_from": "Moritz Bayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Crashing DB or Server?" }, { "msg_contents": "Moritz,\n\nIs it possible that you use lots of temporary tables, and you don't\nvacuum the system tables ? That would cause such symptoms I guess...\nTry to make a \"vacuum analyze\" connected as the postgres super user,\nthat will vacuum all your system tables too. Note that if you have a\nreally big bloat, a simple vacuum might not help, so you might need to\ndo \"vacuum full analyze\", and possibly reindex on some tables - I'm not\nan expert on this, so others might have better advice.\n\nCheers,\nCsaba.\n\n\nOn Fri, 2005-12-16 at 15:10, Moritz Bayer wrote:\n> Hi,\n> \n> actually every SELECT statements takes a couple of minutes. \n> For example \n> SELECT * FROM pg_stat_activity already takes 260 sec.\n> \n> And the IOWAIT value increases just after starting the postmaster, no\n> querys are processed.\n> \n> I started vacuumizing the tables of the DB. Still, it doesn't make a\n> difference.\n> \n> So I don't know if the structure of the tables are relevant. 
\n> For example, I have got about 30 of those:\n> \n> CREATE TABLE \"public\".\"tbl_highscore_app4\" (\n> \"id\" BIGSERIAL, \n> \"userid\" INTEGER NOT NULL, \n> \"score\" INTEGER DEFAULT 0 NOT NULL, \n> \"occured\" DATE DEFAULT now() NOT NULL, \n> CONSTRAINT \"tbl_highscore_app4_pkey\" PRIMARY KEY(\"userid\")\n> ) WITHOUT OIDS;\n> \n> the select-statements are done through functions, for example \n> \n> CREATE OR REPLACE FUNCTION \"public\".\"getownrankingapp4\" (integer,\n> integer) RETURNS integer AS'\n> DECLARE i_userid INTEGER; \n> DECLARE i_score INTEGER; \n> DECLARE i_rank INTEGER; \n> begin \n> i_userid := $1; \n> i_score := $2; \n> i_rank := 1; \n> if i_score <= 0 then \n> SELECT INTO i_rank max(id) FROM \n> tbl_highscore_app4_tmp; \n> if i_rank IS null then \n> i_rank = 1; \n> else \n> i_rank = i_rank +1; \n> end if; \n> else \n> SELECT INTO i_rank max(id) FROM tbl_highscore_app4_tmp WHERE\n> score>=i_score; if i_rank IS null then i_rank = 1; end if; end\n> if; \n> return (i_rank); \n> END\n> 'LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY\n> INVOKER;\n> \n> \n> The tmp table looks like this (and is filled once a night with the\n> current data):\n> \n> CREATE TABLE \"public\".\"tbl_highscore_app4_tmp\" (\n> \"id\" INTEGER NOT NULL, \n> \"userid\" INTEGER NOT NULL, \n> \"score\" INTEGER NOT NULL\n> ) WITH OIDS;\n> \n> CREATE INDEX \"tbl_highscore_app4_tmp_index\" ON\n> \"public\".\"tbl_highscore_app4_tmp\"\n> USING btree (\"score\");\n> \n> \n> \n> \n\n", "msg_date": "Fri, 16 Dec 2005 15:32:47 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crashing DB or Server?" } ]
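A sketch of the maintenance Csaba describes, run as the postgres superuser so the system catalogs are covered too; the REINDEX targets are only examples of catalogs that temporary-table churn tends to bloat, and these commands take exclusive locks:

VACUUM FULL VERBOSE ANALYZE;     -- whole database, including pg_class, pg_attribute, etc.

REINDEX TABLE pg_class;          -- only if VACUUM VERBOSE reports heavy bloat on these catalogs
REINDEX TABLE pg_attribute;

SELECT * FROM pg_locks WHERE granted = false;   -- any rows here are sessions stuck waiting on a lock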
[ { "msg_contents": "We're storing tif images in a table as bytea. We were running low on our \nprimary space and moved several tables, including the one with the images, \nto a second tablespace using ALTER TABLE SET TABLESPACE.\nThis moved quite cleaned out quite a bit of space on the original \ntablespace, but not as much as it should have. It does not appear that the \ncorresponding pg_toast tables were moved. So, my questions are:\n\n1) Is there a way to move pg_toast tables to new tablespaces (or at least \nassure that new ones are created there)?\n2) Also, is there a good way to determine which pg_toast tables are \nassociated with any particular table and column?\n\nThank you for your help,\nMartin \n\n\n", "msg_date": "Fri, 16 Dec 2005 08:25:41 -0600", "msg_from": "\"PostgreSQL\" <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER TABLE SET TABLESPACE and pg_toast" }, { "msg_contents": "\"PostgreSQL\" <[email protected]> writes:\n> We're storing tif images in a table as bytea. We were running low on our \n> primary space and moved several tables, including the one with the images, \n> to a second tablespace using ALTER TABLE SET TABLESPACE.\n> This moved quite cleaned out quite a bit of space on the original \n> tablespace, but not as much as it should have. It does not appear that the \n> corresponding pg_toast tables were moved.\n\nI think you're mistaken; at least, the SET TABLESPACE code certainly\nintends to move a table's toast table and index along with the table.\nWhat's your evidence for saying it didn't happen, and which PG version\nare you using exactly?\n\n> 2) Also, is there a good way to determine which pg_toast tables are \n> associated with any particular table and column?\n\npg_class.reltoastrelid and reltoastidxid. See\nhttp://www.postgresql.org/docs/8.1/static/storage.html\nhttp://www.postgresql.org/docs/8.1/static/catalog-pg-class.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Dec 2005 10:23:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE SET TABLESPACE and pg_toast " } ]
[ { "msg_contents": "> Now there goes Tom with his skeptical eye again, and here comes me\n> saying \"oops\" again. Further tests show that for this application\n\nI made the same mistake, fwiw. The big hit comes with command_string.\nHowever, row level stats bring a big enough penalty (~10% on my usage)\nthat I keep them turned off. The penalty is not just run time either,\nbut increased cpu time. It just isn't an essential feature so unless it\ncauses near zero extra load it will stay off on my servers.\n\nAdditionally, back when I was testing the win32/pg platform I was\ngetting random restarts of the stats collector when the server was under\nhigh load and row_level stats were on. This was a while back so this\nissue may or may not be resolved...it was really nasty because it\ncleared out pg_stats_activity which in turn ruined my admin tools. I\nshould probably give that another look.\n\nMerlin\n", "msg_date": "Fri, 16 Dec 2005 09:40:39 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much expensive are row level statistics?" } ]
[ { "msg_contents": "In PostgreSQL 8.1, is the pg_autovacuum daemon affected by the\nvacuum_cost_* variables? I need to make sure that if we turn\nautovacuuming on when we upgrade to 8.1, we don't cause any i/o\nissues.\n\nThanks,\n\nChris\n", "msg_date": "Fri, 16 Dec 2005 14:40:22 -0500", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "8.1 - pg_autovacuum question" }, { "msg_contents": "Chris Hoover wrote:\n> In PostgreSQL 8.1, is the pg_autovacuum daemon affected by the\n> vacuum_cost_* variables? I need to make sure that if we turn\n> autovacuuming on when we upgrade to 8.1, we don't cause any i/o\n> issues.\n\nWhat pg_autovacuum daemon? The contrib one? I don't know. The\nintegrated one? Yes it is; and you can set autovacuum-specific values\nin postgresql.conf and table-specific values (used for autovacuum only)\nin pg_autovacuum.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 Dec 2005 17:26:55 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 - pg_autovacuum question" } ]
[ { "msg_contents": "I have a few small functions which I need to write. They will be hopefully \nquick running but will happen on almost every delete, insert and update on \nmy database (for audit purposes).\n\nI know I should be writing these in C but that's a bit beyond me. I was \ngoing to try PL/Python or PL/Perl or even PL/Ruby. Has anyone any idea \nwhich language is fastest, or is the data access going to swamp the overhead \nof small functions?\n\nThanks,\n\nBen \n\n\n", "msg_date": "Sun, 18 Dec 2005 01:10:21 -0000", "msg_from": "\"Ben Trewern\" <ben.trewern@_nospam_mowlem.com>", "msg_from_op": true, "msg_subject": "Speed of different procedural language" }, { "msg_contents": "On Sun, Dec 18, 2005 at 01:10:21AM -0000, Ben Trewern wrote:\n> I know I should be writing these in C but that's a bit beyond me. I was \n> going to try PL/Python or PL/Perl or even PL/Ruby. Has anyone any idea \n> which language is fastest, or is the data access going to swamp the overhead \n> of small functions?\n\nI'm not sure if it's what you ask for, but there _is_ a clear difference\nbetween the procedural languages -- I've had a 10x speed increase from\nrewriting PL/PgSQL stuff into PL/Perl, for instance. I'm not sure which ones\nwould be faster, though -- I believe Ruby is slower than Perl or Python\ngenerally, but I don't know how it all works out in a PL/* setting.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 21 Dec 2005 12:06:47 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "On Wed, Dec 21, 2005 at 12:06:47PM +0100, Steinar H. Gunderson wrote:\n> On Sun, Dec 18, 2005 at 01:10:21AM -0000, Ben Trewern wrote:\n> > I know I should be writing these in C but that's a bit beyond me. I was \n> > going to try PL/Python or PL/Perl or even PL/Ruby. Has anyone any idea \n> > which language is fastest, or is the data access going to swamp the overhead \n> > of small functions?\n> \n> I'm not sure if it's what you ask for, but there _is_ a clear difference\n> between the procedural languages -- I've had a 10x speed increase from\n> rewriting PL/PgSQL stuff into PL/Perl, for instance.\n\nThe difference is clear only in specific cases; just because you\nsaw a 10x increase in some cases doesn't mean you can expect that\nkind of increase, or indeed any increase, in others. I've seen\nPL/pgSQL beat all other PL/* challengers handily many times,\nespecially when the function does a lot of querying and looping\nthrough large result sets.\n\nI tend to use PL/pgSQL except in cases where PL/pgSQL can't do what\nI want or the job would be much easier in another language (e.g.,\nstring manipulation, for which I'd use PL/Perl or PL/Ruby). Even\nthen I might use the other language only to write small functions\nthat a PL/pgSQL function could call.\n\nAs Merlin suggested, maybe Ben could tell us what he wants to do\nthat he thinks should be written in C or a language other than\nPL/pgSQL. 
Without knowing what problem is to be solved it's near\nimpossible to recommend an appropriate tool.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 21 Dec 2005 14:24:42 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "On Wed, Dec 21, 2005 at 02:24:42PM -0700, Michael Fuhr wrote:\n> The difference is clear only in specific cases; just because you\n> saw a 10x increase in some cases doesn't mean you can expect that\n> kind of increase, or indeed any increase, in others. I've seen\n> PL/pgSQL beat all other PL/* challengers handily many times,\n> especially when the function does a lot of querying and looping\n> through large result sets.\n\nThat's funny, my biggest problems with PL/PgSQL have been (among others)\nexactly with large result sets...\n\nAnyhow, the general idea is: It _does_ matter which one you use, so you'd\nbetter test if it matters to you :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 21 Dec 2005 22:38:10 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "On Wed, Dec 21, 2005 at 10:38:10PM +0100, Steinar H. Gunderson wrote:\n> On Wed, Dec 21, 2005 at 02:24:42PM -0700, Michael Fuhr wrote:\n> > The difference is clear only in specific cases; just because you\n> > saw a 10x increase in some cases doesn't mean you can expect that\n> > kind of increase, or indeed any increase, in others. I've seen\n> > PL/pgSQL beat all other PL/* challengers handily many times,\n> > especially when the function does a lot of querying and looping\n> > through large result sets.\n> \n> That's funny, my biggest problems with PL/PgSQL have been (among others)\n> exactly with large result sets...\n\nOut of curiosity, do you have a simple test case? I'd be interested\nin seeing what you're doing in PL/pgSQL that's contradicting what\nI'm seeing.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 21 Dec 2005 15:10:28 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "On Wed, Dec 21, 2005 at 03:10:28PM -0700, Michael Fuhr wrote:\n>> That's funny, my biggest problems with PL/PgSQL have been (among others)\n>> exactly with large result sets...\n> Out of curiosity, do you have a simple test case? I'd be interested\n> in seeing what you're doing in PL/pgSQL that's contradicting what\n> I'm seeing.\n\nI'm not sure if I have the code anymore (it was under 7.4 or 8.0), but it was\nlargely scanning through ~2 million rows once, noting differences from the\nprevious rows as it went.\n\nIn that case, I didn't benchmark against any of the other PL/* languages, but\nit was pretty clear that even on a pretty speedy Opteron, it was CPU bound,\nwhich it really shouldn't have been.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 22 Dec 2005 02:08:23 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "On Thu, Dec 22, 2005 at 02:08:23AM +0100, Steinar H. Gunderson wrote:\n> On Wed, Dec 21, 2005 at 03:10:28PM -0700, Michael Fuhr wrote:\n> >> That's funny, my biggest problems with PL/PgSQL have been (among others)\n> >> exactly with large result sets...\n> > Out of curiosity, do you have a simple test case? 
I'd be interested\n> > in seeing what you're doing in PL/pgSQL that's contradicting what\n> > I'm seeing.\n> \n> I'm not sure if I have the code anymore (it was under 7.4 or 8.0), but it was\n> largely scanning through ~2 million rows once, noting differences from the\n> previous rows as it went.\n> \n> In that case, I didn't benchmark against any of the other PL/* languages, but\n> it was pretty clear that even on a pretty speedy Opteron, it was CPU bound,\n> which it really shouldn't have been.\n\nTry looping through two million rows with PL/Perl or PL/Tcl and\nyou'll probably see significantly worse performance than with\nPL/pgSQL -- so much worse that I'd be surprised to see those languages\nmake up the difference with whatever processing they'd be doing for\neach row unless it was something they're particularly good at and\nPL/pgSQL is particularly bad at.\n\nIn 8.1 PL/Perl has a couple of ways to fetch query results:\nspi_exec_query to fetch all the rows at once into a single data\nstructure, and spi_query/spi_fetchrow to fetch the rows one at a\ntime. In my tests with one million rows, spi_exec_query was around\n8 times slower than a loop in PL/pgSQL, not to mention requiring a\nlot of memory. spi_query/spi_fetchrow was about 25 times slower\nbut didn't require the amount of memory that spi_exec_query did.\nA PL/Tcl function that used spi_exec was about 10 times slower than\nPL/pgSQL, or only slightly slower than PL/Perl and spi_exec_query.\n\nIf you didn't benchmark the two million row query, do you have an\nexample that you did benchmark? I don't doubt that PL/Perl and\nother langauges can do some things faster than PL/pgSQL, but looping\nthrough large result sets doesn't seem to be one of them.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 21 Dec 2005 19:13:46 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> Try looping through two million rows with PL/Perl or PL/Tcl and\n> you'll probably see significantly worse performance than with\n> PL/pgSQL -- so much worse that I'd be surprised to see those languages\n> make up the difference with whatever processing they'd be doing for\n> each row unless it was something they're particularly good at and\n> PL/pgSQL is particularly bad at.\n\nI'd expect plpgsql to suck at purely computational tasks, compared to\nthe other PLs, but to win at tasks involving database access. These\nare two sides of the same coin really --- plpgsql is tightly tied to the\nPG query execution engine, to the extent of using it even for simply\nadding 2 and 2, but that also gives it relatively low overhead for\ninvoking a database query. Perl, Tcl, et al have their own\ncomputational engines and can easily beat the PG SQL engine for simple\narithmetic and string-pushing. But they pay a high overhead for\ncalling back into the database engine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Dec 2005 22:45:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed of different procedural language " } ]
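A small illustration of the point Tom and Michael make, in PL/pgSQL itself: a row-by-row loop pays per-row interpreter overhead for pure computation, while the equivalent set-based statement leaves the work to the executor. The table t(n integer) is hypothetical:

CREATE OR REPLACE FUNCTION sum_loop() RETURNS bigint AS '
DECLARE
    r     record;
    total bigint := 0;
BEGIN
    FOR r IN SELECT n FROM t LOOP      -- row by row: slow for pure arithmetic
        total := total + r.n;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';

SELECT sum(n) FROM t;   -- set-based equivalent; usually far faster than looping in any PL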
[ { "msg_contents": "I have the following table:\n\nCREATE TABLE timeblock\n(\n timeblockid int8 NOT NULL,\n starttime timestamp,\n endtime timestamp,\n duration int4,\n blocktypeid int8,\n domain_id int8,\n create_date timestamp,\n revision_date timestamp,\n scheduleid int8,\n CONSTRAINT timeblock_pkey PRIMARY KEY (timeblockid),\n CONSTRAINT fk25629e03312570b FOREIGN KEY (blocktypeid)\n REFERENCES blocktype (blocktypeid) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk25629e09be84177 FOREIGN KEY (domain_id)\n REFERENCES wa_common_domain (domain_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n) \nWITH OIDS;\n\nCREATE INDEX timeblock_blocktype_idx\n ON timeblock\n USING btree\n (blocktypeid);\n\nCREATE INDEX timeblock_date_idx\n ON timeblock\n USING btree\n (starttime, endtime);\n\nCREATE INDEX timeblockepoch_idx\n ON timeblock\n USING btree\n (date_trunc('minute'::text, starttime), (date_part('epoch'::text, \ndate_trunc('minute'::text, starttime)) * 1000::double precision), \ndate_trunc('minute'::text, endtime), (date_part('epoch'::text, \ndate_trunc('minute'::text, endtime)) * 1000::double precision));\n\nCREATE INDEX timeblockhourmin_idx\n ON timeblock\n USING btree\n (date_part('hour'::text, starttime), date_part('minute'::text, \nstarttime), date_part('hour'::text, endtime), date_part('minute'::text, \nendtime));\n\nCREATE INDEX timeblockid_idx\n ON timeblock\n USING btree\n (timeblockid);\n\n\nThere are also indexes on wa_common_domain and blocktype on pkeys.\n\nexplain analyze delete from timeblock where timeblockid = 666666\n\nIndex Scan using timeblockid_idx on timeblock (cost=0.00..5.28 rows=1 \nwidth=6) (actual time=0.022..0.022 rows=0 loops=1)\n Index Cond: (timeblockid = 666666)\nTotal runtime: 0.069 ms\n\n\nI need to routinely move data from the timeblock table to an archive \ntable with the same schema named timeblock_archive. I really need this \nto happen as quickly as possible, as the archive operation appears to \nreally tax the db server... \n\nI'd like some suggestions on how to get the deletes to happen faster, as \nwhile deleting individually appears to extremely fast, when I go to \ndelete lots of rows the operation takes an extremely long time to \ncomplete (5000 rows takes about 3 minutes, 1000000 rows takes almost \nclose to 4 hours or more depending upon server load; wall time btw).\n\ni've tried several different approaches doing the delete and I can't \nseem to make it much faster... 
anyone have any ideas?\n\nThe approaches I've taken both use a temp table to define the set that \nneeds to be deleted.\n\nHere's what I've tried:\n\nAttempt 1:\n----------\ndelete from timeblock where timeblockid in (select timeblockid from \ntimeblock_tmp)\n\n\nAttempt 2:\n----------\nnum_to_delete := (select count(1) from tmp_timeblock);\nRAISE DEBUG 'archiveDailyData(%): need to delete from timeblock [% \nrows]', timestart, num_to_delete;\ncur_offset := 0;\nwhile cur_offset < num_to_delete loop\n delete from timeblock where timeblockid in \n (select timeblockid from \n tmp_timeblock limit 100 offset cur_offset);\n get diagnostics num_affected = ROW_COUNT;\n RAISE DEBUG 'archiveDailyData(%): delete from timeblock [% rows] \ncur_offset = %', timestart, num_affected, cur_offset;\n cur_offset := cur_offset + 100;\nend loop;\n\n\nAttempt 3:\n----------\n num_to_delete := (select count(1) from tmp_timeblock);\n cur_offset := num_to_delete;\n RAISE DEBUG 'archiveDailyData(%): need to delete from timeblock [% \nrows]', timestart, num_to_delete;\n open del_cursor for select timeblockid from tmp_timeblock;\n loop\n fetch del_cursor into del_pkey;\n if not found then\n exit;\n else\n delete from timeblock where timeblockid = del_pkey;\n get diagnostics num_affected = ROW_COUNT;\n cur_offset := cur_offset - num_affected;\n if cur_offset % 1000 = 0 then \n RAISE DEBUG 'archiveDailyData(%): delete from timeblock [% \nleft]', timestart, cur_offset;\n end if;\n end if;\n end loop;\n close del_cursor;\n\n\nI've considered using min(starttime) and max(starttime) from the temp \ntable and doing: \n\ndelete from timeblock where starttime between min and max;\n\nhowever, I'm concerned about leaving orphan data, deleting too much data \nrunning into foreign key conflicts, etc.\n\ndropping the indexes on timeblock could be bad, as this table recieves \nhas a high volume on reads, inserts & updates.\n\nAny one have any suggestions?\n\nThanks,\n\nJim K\n", "msg_date": "Sat, 17 Dec 2005 21:10:40 -0800", "msg_from": "James Klo <[email protected]>", "msg_from_op": true, "msg_subject": "make bulk deletes faster?" }, { "msg_contents": "On Sat, Dec 17, 2005 at 09:10:40PM -0800, James Klo wrote:\n> I'd like some suggestions on how to get the deletes to happen faster, as \n> while deleting individually appears to extremely fast, when I go to \n> delete lots of rows the operation takes an extremely long time to \n> complete (5000 rows takes about 3 minutes, 1000000 rows takes almost \n> close to 4 hours or more depending upon server load; wall time btw).\n\nThose times do seem excessive -- do any other tables have foreign\nkey references to the table you're deleting from? If so, do those\ntables have indexes on the referring columns? Does this table or\nany referring table have triggers? Also, are you regularly vacuuming\nand analyzing your tables? Have you examined pg_locks to see if\nan unacquired lock might be slowing things down?\n\n-- \nMichael Fuhr\n", "msg_date": "Sun, 18 Dec 2005 19:36:16 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make bulk deletes faster?" 
}, { "msg_contents": "In article <[email protected]>,\n [email protected] (Michael Fuhr) wrote:\n\n> On Sat, Dec 17, 2005 at 09:10:40PM -0800, James Klo wrote:\n> > I'd like some suggestions on how to get the deletes to happen faster, as \n> > while deleting individually appears to extremely fast, when I go to \n> > delete lots of rows the operation takes an extremely long time to \n> > complete (5000 rows takes about 3 minutes, 1000000 rows takes almost \n> > close to 4 hours or more depending upon server load; wall time btw).\n> \n> Those times do seem excessive -- do any other tables have foreign\n> key references to the table you're deleting from? If so, do those\n> tables have indexes on the referring columns? Does this table or\n> any referring table have triggers? Also, are you regularly vacuuming\n> and analyzing your tables? Have you examined pg_locks to see if\n> an unacquired lock might be slowing things down?\n\nAs the table was originally created using Hibernate, yes, there are \nseveral key references, however I've already added indexes those tables \non referring keys. There are no triggers, we were running \npg_autovaccum, but found that it wasn't completing. I believe we \ndisabled, and are now running a cron every 4 hours. My archiving method, \nis also running analyze - as I figure after a mass deletes, it would \nprobably keep query speeds from degrading.)\n\nI've looked at pg_locks, but not sure I understand quite how to use it \nto determine if there are unacquired locks. I do know that we \noccasionally get some warnings from C3P0 that states it detects a \ndeadlock, and allocates emergency threads.\n\nBTW, If I didn't mention, we are using PG 8.1 on Red Hat Enterprise, 4GB \nRAM, 4 dual-core CPUs, think its RAID5 (looks like what I would consider \ntypical Linux partitioning /, /tmp, /usr, /var, /boot, /home). After \ntrolling the archives, and doing a bit of sleuthing on the DB, I'm lead \nto believe that this is more or less a default install of PG 8.1. As I'm \nrelatively new to PG, I'm not sure how it should be configured for our \nsetup. I would suspect that this could probably effect the speed of \ndeletes (and queries as well).\n\nThanks for any help you can provide.\n", "msg_date": "Mon, 19 Dec 2005 00:17:06 -0800", "msg_from": "James Klo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make bulk deletes faster?" }, { "msg_contents": "On Sat, 2005-12-17 at 21:10 -0800, James Klo wrote:\n> I need to routinely move data from the timeblock table to an archive \n> table with the same schema named timeblock_archive. I really need this \n> to happen as quickly as possible, as the archive operation appears to \n> really tax the db server... \n\nHave you considered partitioning?\n\nhttp://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n\nIf you can partition your timeblock table so that you archive an entire\npartition at a time, then you can delete the archived rows by just\ndropping (or truncating) that partition. AFAIK there's no way to\n\"re-parent\" a partition (e.g., from the timeblock table to the\ntimeblock_archive table).\n\nIf your app is particularly cooperative you might be able to use\npartitioning to avoid moving data around entirely. 
If table accesses\nare always qualified by something you can use as a partitioning key,\nthen partitioning can give you the speed benefits of a small table\nwithout the effort of keeping it cleared out.\n\nAnother good read, if you haven't yet, is\nhttp://powerpostgresql.com/Downloads/annotated_conf_80.html\nespecially the \"Memory\", \"Checkpoints\", and maybe \"WAL options\"\nsections. If you're doing large deletes then you may need to increase\nyour free space map settings--if a VACUUM VERBOSE finishes by saying\nthat you need more FSM pages, then the table may have gotten bloated\nover time (which can be fixed with a configuration change and a VACUUM\nFULL, though this will lock everything else out of the table while it's\nrunning).\n\nMitch\n\n", "msg_date": "Mon, 19 Dec 2005 02:39:31 -0800", "msg_from": "Mitch Skinner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make bulk deletes faster?" }, { "msg_contents": "On 12/18/05, James Klo <[email protected]> wrote:\n> explain analyze delete from timeblock where timeblockid = 666666\n>\n> Index Scan using timeblockid_idx on timeblock (cost=0.00..5.28 rows=1\n> width=6) (actual time=0.022..0.022 rows=0 loops=1)\n> Index Cond: (timeblockid = 666666)\n> Total runtime: 0.069 ms\n... snip ...\n> Here's what I've tried:\n>\n> Attempt 1:\n> ----------\n> delete from timeblock where timeblockid in (select timeblockid from\n> timeblock_tmp)\n\nThe DELETE in Attempt 1 contains a join, so if this is the way you're\nmainly specifying which rows to delete, you'll have to take into\naccount how efficient the join of timeblock and timeblock_tmp is. What\ndoes\n\nEXPLAIN ANALYZE select * from timeblock where timeblockid in (select\ntimeblockid from timeblock_tmp)\n\nor\n\nEXPLAIN ANALYZE delete from timeblock where timeblockid in (select\ntimeblockid from timeblock_tmp)\n\nsay?\n\nYou *should* at least get a \"Hash IN join\" for the outer loop, and\njust one Seq scan on timeblock_tmp. Otherwise, consider increasing\nyour sort_mem (postgresql 7.x) or work_mem (postgresql 8.x) settings.\nAnother alternative is to reduce the amount of rows being archive at\none go to fit in the amount of sort_mem or work_mem that allows the\n\"Hash IN Join\" plan. See\nhttp://www.postgresql.org/docs/8.1/static/runtime-config-resource.html#GUC-WORK-MEM\n\nOn the other hand, PostgreSQL 8.1's partitioning sounds like a better\nlong term solution that you might want to look into.\n", "msg_date": "Mon, 19 Dec 2005 22:47:27 +0800", "msg_from": "Ang Chin Han <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make bulk deletes faster?" }, { "msg_contents": "Mitch Skinner wrote:\n\n> Have you considered partitioning?\n> \n> http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n> \n> If you can partition your timeblock table so that you archive an entire\n> partition at a time, then you can delete the archived rows by just\n> dropping (or truncating) that partition. AFAIK there's no way to\n> \"re-parent\" a partition (e.g., from the timeblock table to the\n> timeblock_archive table).\n> \n> If your app is particularly cooperative you might be able to use\n> partitioning to avoid moving data around entirely. If table accesses\n> are always qualified by something you can use as a partitioning key,\n> then partitioning can give you the speed benefits of a small table\n> without the effort of keeping it cleared out.\n\nYes, I've considered partitioning as a long term change. I was thinking \nabout this for other reasons - mainly performance. 
If I go the \npartitioning route, would I need to even perform archival?\n\nThe larger problem that I need to solve is really twofold:\n\n1. Need to keep reads on timeblocks that are from the current day \nthrough the following seven days very fast, especially current day reads.\n\n2. Need to be able to maintain the timeblocks for reporting purposes, \nfor at least a year (potentially more). This could probably better \nhandled performing aggregate analysis, but this isn't on my current radar.\n\n> Another good read, if you haven't yet, is\n> http://powerpostgresql.com/Downloads/annotated_conf_80.html\n> especially the \"Memory\", \"Checkpoints\", and maybe \"WAL options\"\n> sections. If you're doing large deletes then you may need to increase\n> your free space map settings--if a VACUUM VERBOSE finishes by saying\n> that you need more FSM pages, then the table may have gotten bloated\n> over time (which can be fixed with a configuration change and a VACUUM\n> FULL, though this will lock everything else out of the table while it's\n> running).\n> \n\nThanks, I will look into this as well.\n", "msg_date": "Mon, 19 Dec 2005 11:10:50 -0800", "msg_from": "James Klo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make bulk deletes faster?" }, { "msg_contents": "On Mon, Dec 19, 2005 at 11:10:50AM -0800, James Klo wrote:\n> Yes, I've considered partitioning as a long term change. I was thinking \n> about this for other reasons - mainly performance. If I go the \n> partitioning route, would I need to even perform archival?\n\nNo. The idea is that you have your table split up into date ranges\n(perhaps each week gets it's own table). IE: table_2005w01,\ntable_2005w02, etc. You can do this with either inheritence or\nindividual tables and a UNION ALL view. In your case, inheritence is\nprobably the better way to go.\n\nNow, if you have everything broken down by weeks and you typically only\nneed to access 7 days worth of data, then generally you will only be\nreading from two tables, so those two tables should stay in memory, and\nindexes on them will be smaller. If desired, you can also play tricks on\nthe older tables surch as vacuum full or cluster to further reduce space\nusage and improve performance.\n\n> The larger problem that I need to solve is really twofold:\n> \n> 1. Need to keep reads on timeblocks that are from the current day \n> through the following seven days very fast, especially current day reads.\n> \n> 2. Need to be able to maintain the timeblocks for reporting purposes, \n> for at least a year (potentially more). This could probably better \n> handled performing aggregate analysis, but this isn't on my current radar.\n\nI've written an RRD-like implementation in SQL that might interest you;\nit's at http://rrs.decibel.org (though the svn web access appears to be\ndown right now...)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 13:16:18 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make bulk deletes faster?" } ]
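As a concrete follow-up to the partitioning suggestions above, here is a minimal sketch of the weekly inheritance layout described in the replies, written against the PostgreSQL 8.1 constraint-exclusion mechanism the thread is discussing. The child-table names, the week boundaries, and the assumption that timeblock's starttime column is what you would range-partition on are illustrative only; adapt them to the real schema.

-- one child table per week, each constrained to its date range
CREATE TABLE timeblock_2005w51 (
    CHECK (starttime >= DATE '2005-12-19' AND starttime < DATE '2005-12-26')
) INHERITS (timeblock);

CREATE TABLE timeblock_2005w52 (
    CHECK (starttime >= DATE '2005-12-26' AND starttime < DATE '2006-01-02')
) INHERITS (timeblock);

-- let the 8.1 planner skip children whose CHECK constraint rules them out
SET constraint_exclusion = on;

-- a read qualified by starttime now scans only the matching child
-- (plus the normally empty parent)
SELECT count(*)
  FROM timeblock
 WHERE starttime >= DATE '2005-12-19' AND starttime < DATE '2005-12-26';

-- archiving a week stops being a bulk DELETE: copy the rows out if they
-- are still needed, then drop or truncate that child table
INSERT INTO timeblock_archive SELECT * FROM timeblock_2005w51;
DROP TABLE timeblock_2005w51;   -- or: TRUNCATE timeblock_2005w51;

The one cost of this layout is that the application, or a rule/trigger on the parent, has to route new rows into the correct week's child table.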
[ { "msg_contents": "Hi -\n\n\nCan anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\nT1 processor and architecture on Solaris 10? I have a custom built retail\nsales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\nCore 3 intel box. I want to scale this application upwards to handle a\ndatabase that might grow to a 100 GB. Our company is green mission conscious\nnow so I was hoping I could use that to convince management to consider a Sun\nUltrasparc T1 or T2 system provided that if I can get the best performance\nout of it on PostgreSQL. So will newer versions of PostgreSQL (8.1.x) be\nable to take of advantage of the multiple cores on a T1 or T2? I cannot\nchange the database and this will be a hard sell unless I can convince them\nthat the performance advantages are too good to pass up. The company is\nmoving in the Win32 direction and so I have to provide rock solid reasons for\nwhy I want to use Solaris Sparc on a T1 or T2 server for this database\napplication instead of Windows on SQL Server.\n\n\nThanks,\nJuan\n\n-------------------------------------------------------\n", "msg_date": "Sun, 18 Dec 2005 11:35:15 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL and Ultrasparc T1" }, { "msg_contents": "On 12/18/05, Juan Casero <[email protected]> wrote:\n> Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\n> T1 processor and architecture on Solaris 10? I have a custom built retail\n> sales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\n> Core 3 intel box. I want to scale this application upwards to handle a\n> database that might grow to a 100 GB. Our company is green mission conscious\n> now so I was hoping I could use that to convince management to consider a Sun\n> Ultrasparc T1 or T2 system provided that if I can get the best performance\n> out of it on PostgreSQL. So will newer versions of PostgreSQL (8.1.x) be\n> able to take of advantage of the multiple cores on a T1 or T2? I cannot\n> change the database and this will be a hard sell unless I can convince them\n> that the performance advantages are too good to pass up. The company is\n> moving in the Win32 direction and so I have to provide rock solid reasons for\n> why I want to use Solaris Sparc on a T1 or T2 server for this database\n> application instead of Windows on SQL Server.\n\nI do not know that anyone outside pilot orgs have received their\norders for the new T1 machines, so real world experience will not be\navailable yet. The big question is whether or not it manages the\nprocessors only for threads (in which case Postgresql won't benefit\nmuch) or for processes as well.\n\nPostgreSQL takes a \"process-parallel\" approach to parallelism, not\nthread-level. There are lost of historical reasons, but, that's just\nhte way it is for now.\n\nChris\n--\n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Sun, 18 Dec 2005 12:12:41 -0500", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Juan,\n\nOn 12/18/05 8:35 AM, \"Juan Casero\" <[email protected]> wrote:\n\n> Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\n> T1 processor and architecture on Solaris 10? I have a custom built retail\n> sales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\n> Core 3 intel box. 
I want to scale this application upwards to handle a\n> database that might grow to a 100 GB. Our company is green mission conscious\n> now so I was hoping I could use that to convince management to consider a Sun\n> Ultrasparc T1 or T2 system provided that if I can get the best performance\n> out of it on PostgreSQL. So will newer versions of PostgreSQL (8.1.x) be\n> able to take of advantage of the multiple cores on a T1 or T2? I cannot\n> change the database and this will be a hard sell unless I can convince them\n> that the performance advantages are too good to pass up. The company is\n> moving in the Win32 direction and so I have to provide rock solid reasons for\n> why I want to use Solaris Sparc on a T1 or T2 server for this database\n> application instead of Windows on SQL Server.\n\nThe Niagara CPUs are heavily multi-threaded and will require a lot of\nparallelism to be exposed to them in order to be effective.\n\nUntil Sun makes niagara-based machines with lots of I/O channels, there\nwon't be much I/O parallelism available to match the CPU parallelism.\n\nBizgres MPP will use the process and I/O parallelism of these big SMP\nmachines and the version based on Postgres 8.1 will be out in February.\n\n- Luke \n\n\n", "msg_date": "Sun, 18 Dec 2005 23:21:07 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Sun Fire T2000 has 3 PCI-E and 1PCI-X slot free when shipped. Using \ndual fiber channel 2G adapters you can get about 200MB x 8 = 1600MB/sec \nIO bandwidth. Plus when 4G HBAs are supported that will double up. Now I \nthink generally that's good enough for 1TB raw data or 2-3 TB Database \nsize. Of course typically the database size in PostgreSQL space will be \nin the 100-500GB range so a Sun Fire T2000 can be a good fit with enough \narea to grow at a very reasonable price.\n\nOf course like someone mentioned if all you have is 1 connection using \npostgresql which cannot spawn helper processes/threads, this will be \nlimited by the single thread performance which is about 1.2Ghz compared \non Sun Fire T2000 to AMD64 (Sun Fire X4200) which pretty much has \nsimilar IO Bandwidth, same size chassis, but the individual AMD64 cores \nruns at about 2.4Ghz (I believe) and max you can get is 4 cores but you \nalso have to do a little trade off in terms of power consumption in lei \nof faster single thread performance. So Choices are available with both \narchitecture. .However if you have a webserver driving a postgreSQL \nbackend, then UltraSPARC T1 might be a better option if you suddenly \nwants to do 100s of db connections. The SunFire T2000 gives you 8 cores \nwith 32 threads in all running on the system. \n\nWith PostgreSQL 8.1 fix for SMP Bufferpool performance and with ZFS now \navailable in Solaris Express release, it would be interesting to see how \nthe combination of PostgreSQL 8.1 and ZFS works on Solaris since ZFS is \none of the perfect file systems for PostgreSQL where it wants all \ncomplexities (like block allocation, fragmentation, etc) to the \nunderlying file systems and not re-implement its own infrastructure.\n\nIf somebody is already conducting their own tests, do let me know. As \nsoon as I get some free cycles, I want to run ZFS with PostgreSQL using \nSolaris Express. 
If you have some preferred workloads do let me know.\n\nRegards,\nJignesh\n\n\nLuke Lonergan wrote:\n\n>Juan,\n>\n>On 12/18/05 8:35 AM, \"Juan Casero\" <[email protected]> wrote:\n>\n> \n>\n>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\n>>T1 processor and architecture on Solaris 10? I have a custom built retail\n>>sales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\n>>Core 3 intel box. I want to scale this application upwards to handle a\n>>database that might grow to a 100 GB. Our company is green mission conscious\n>>now so I was hoping I could use that to convince management to consider a Sun\n>>Ultrasparc T1 or T2 system provided that if I can get the best performance\n>>out of it on PostgreSQL. So will newer versions of PostgreSQL (8.1.x) be\n>>able to take of advantage of the multiple cores on a T1 or T2? I cannot\n>>change the database and this will be a hard sell unless I can convince them\n>>that the performance advantages are too good to pass up. The company is\n>>moving in the Win32 direction and so I have to provide rock solid reasons for\n>>why I want to use Solaris Sparc on a T1 or T2 server for this database\n>>application instead of Windows on SQL Server.\n>> \n>>\n>\n>The Niagara CPUs are heavily multi-threaded and will require a lot of\n>parallelism to be exposed to them in order to be effective.\n>\n>Until Sun makes niagara-based machines with lots of I/O channels, there\n>won't be much I/O parallelism available to match the CPU parallelism.\n>\n>Bizgres MPP will use the process and I/O parallelism of these big SMP\n>machines and the version based on Postgres 8.1 will be out in February.\n>\n>- Luke \n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n> \n>\n", "msg_date": "Mon, 19 Dec 2005 09:27:12 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh,\n\n\nOn 12/19/05 6:27 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n> Sun Fire T2000 has 3 PCI-E and 1PCI-X slot free when shipped. Using\n> dual fiber channel 2G adapters you can get about 200MB x 8 = 1600MB/sec\n> IO bandwidth. Plus when 4G HBAs are supported that will double up. Now I\n> think generally that's good enough for 1TB raw data or 2-3 TB Database\n> size. Of course typically the database size in PostgreSQL space will be\n> in the 100-500GB range so a Sun Fire T2000 can be a good fit with enough\n> area to grow at a very reasonable price.\n\nThe free PCI slots don't indicate the I/O speed of the machine, otherwise\nI'll just go back 4 years and use a Xeon machine.\n\nCan you educate us a bit on the T-2000, like where can we find a technical\npublication that can answer the following:\n\nAre all of the PCI-E and PCI-X independent, mastering channels? Are they\nconnected via a crossbar or is it using the JBus? 
Is the usable memory\nbandwidth available to the HBAs and CPU double the 1,600MB/s, or 3,200MB/s?\n \n> Of course like someone mentioned if all you have is 1 connection using\n> postgresql which cannot spawn helper processes/threads, this will be\n> limited by the single thread performance which is about 1.2Ghz compared\n> on Sun Fire T2000 to AMD64 (Sun Fire X4200) which pretty much has\n> similar IO Bandwidth, same size chassis, but the individual AMD64 cores\n> runs at about 2.4Ghz (I believe) and max you can get is 4 cores but you\n> also have to do a little trade off in terms of power consumption in lei\n> of faster single thread performance. So Choices are available with both\n> architecture. .However if you have a webserver driving a postgreSQL\n> backend, then UltraSPARC T1 might be a better option if you suddenly\n> wants to do 100s of db connections. The SunFire T2000 gives you 8 cores\n> with 32 threads in all running on the system.\n\nSo - OLTP / webserver, that makes sense.\n \n- Luke\n\n\n", "msg_date": "Mon, 19 Dec 2005 09:30:21 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "\nHi Luke,\n\nI have gone to the max with 4 fibers on Sun Fire T2000. But I am not sure about the answers that you asked. Let me see if I can get answers for them. I am going to try to max out the IO on these systems with 8 fibers as soon as I get additional storage so stay tuned for that.\n\nBy the way you don't have to wait for my tests. Just get a trial server and try it on your own. If you don't like it return it.\n\nhttps://www.sun.com/emrkt/trycoolthreads/contactme.html\n\nCheck out Jonathan's blog for more details http://blogs.sun.com/jonathan\n\nHowever if you do try it with PostgreSQL, do let me know also with your experience. \n\nRegards,\nJignesh\n\n\n\n----- Original Message -----\nFrom: Luke Lonergan <[email protected]>\nDate: Monday, December 19, 2005 12:31 pm\nSubject: Re: [PERFORM] PostgreSQL and Ultrasparc T1\nTo: Jignesh Shah <[email protected]>\nCc: Juan Casero <[email protected]>, [email protected]\n\n> Jignesh,\n> \n> \n> On 12/19/05 6:27 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n> \n> > Sun Fire T2000 has 3 PCI-E and 1PCI-X slot free when shipped. Using\n> > dual fiber channel 2G adapters you can get about 200MB x 8 = \n> 1600MB/sec> IO bandwidth. Plus when 4G HBAs are supported that will \n> double up. Now I\n> > think generally that's good enough for 1TB raw data or 2-3 TB \n> Database> size. Of course typically the database size in PostgreSQL \n> space will be\n> > in the 100-500GB range so a Sun Fire T2000 can be a good fit with \n> enough> area to grow at a very reasonable price.\n> \n> The free PCI slots don't indicate the I/O speed of the machine, \n> otherwiseI'll just go back 4 years and use a Xeon machine.\n> \n> Can you educate us a bit on the T-2000, like where can we find a \n> technicalpublication that can answer the following:\n> \n> Are all of the PCI-E and PCI-X independent, mastering channels? \n> Are they\n> connected via a crossbar or is it using the JBus? 
Is the usable \n> memorybandwidth available to the HBAs and CPU double the 1,600MB/s, \n> or 3,200MB/s?\n> \n> > Of course like someone mentioned if all you have is 1 connection \n> using> postgresql which cannot spawn helper processes/threads, this \n> will be\n> > limited by the single thread performance which is about 1.2Ghz \n> compared> on Sun Fire T2000 to AMD64 (Sun Fire X4200) which pretty \n> much has\n> > similar IO Bandwidth, same size chassis, but the individual \n> AMD64 cores\n> > runs at about 2.4Ghz (I believe) and max you can get is 4 cores \n> but you\n> > also have to do a little trade off in terms of power consumption \n> in lei\n> > of faster single thread performance. So Choices are available \n> with both\n> > architecture. .However if you have a webserver driving a postgreSQL\n> > backend, then UltraSPARC T1 might be a better option if you suddenly\n> > wants to do 100s of db connections. The SunFire T2000 gives you 8 \n> cores> with 32 threads in all running on the system.\n> \n> So - OLTP / webserver, that makes sense.\n> \n> - Luke\n> \n> \n> \n", "msg_date": "Mon, 19 Dec 2005 14:29:58 -0500", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh,\n\nOn 12/19/05 11:29 AM, \"Jignesh Shah\" <[email protected]> wrote:\n\n> I have gone to the max with 4 fibers on Sun Fire T2000. But I am not sure\n> about the answers that you asked. Let me see if I can get answers for them. I\n> am going to try to max out the IO on these systems with 8 fibers as soon as I\n> get additional storage so stay tuned for that.\n\nCool - how close did you get to 800MB/s?\n \n> By the way you don't have to wait for my tests. Just get a trial server and\n> try it on your own. 
If you don't like it return it.\n> \n> https://www.sun.com/emrkt/trycoolthreads/contactme.html\n\nDone - I'll certainly test Postgres / Bizgres on it - you know me ;-)\n \n> However if you do try it with PostgreSQL, do let me know also with your\n> experience.\n\nSee above.\n\nThe Niagara is UltraSparc III compatible - so the GCC compiler should emit\ngood code for it, right?\n\n- Luke\n\n\n", "msg_date": "Mon, 19 Dec 2005 11:37:15 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Hi Luke,\n\nI got about 720 MB/sec to 730 MB/sec with plain dd tests on my current storage configuration (8 LUNS on 4 fibers) which slowed me down (10K rpm 146 GB disks FC) with 4 LUNS going through a longer pass to the disks (via a controller master array to slave JBODs to provide ) .\n\n extended device statistics \n r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n 0.8 14.0 0.0 0.0 0.0 0.3 0.0 17.8 0 4 c3t0d0\n 91.4 0.0 91.4 0.0 0.0 1.0 0.0 10.5 0 96 c0t40d0\n 96.0 0.0 96.0 0.0 0.0 1.0 0.0 10.0 0 96 c5t40d1\n 95.8 0.0 95.8 0.0 0.0 1.0 0.0 10.0 0 96 c0t40d1\n 96.8 0.0 96.8 0.0 0.0 1.0 0.0 9.9 0 96 c5t40d0\n 84.6 0.0 84.6 0.0 0.0 1.0 0.0 11.4 0 96 c4t46d1\n 85.6 0.0 85.6 0.0 0.0 1.0 0.0 11.2 0 96 c4t46d0\n 85.2 0.0 85.2 0.0 0.0 1.0 0.0 11.3 0 96 c2t46d1\n 85.4 0.0 85.4 0.0 0.0 1.0 0.0 11.3 0 96 c2t46d0\n\nI can probably bump it up a bit with fine storage tuning (LUN) but there is no limitation on the Sun Fire T2000 to bottleneck on anything plus dd tests are not the best throughput measurement tool.\n\nYes UltraSPARC T1 supports the SPARC V9 architecture and can support all the SPARC binaries already generated or newly generated using gcc or Sun Studio 11 which is also free.\nhttp://developers.sun.com/prodtech/cc/downloads/sun_studio/\n\n\n\nRegards,\nJignesh\n\n\n----- Original Message -----\nFrom: Luke Lonergan <[email protected]>\nDate: Monday, December 19, 2005 2:38 pm\nSubject: Re: [PERFORM] PostgreSQL and Ultrasparc T1\nTo: Jignesh Shah <[email protected]>\nCc: Juan Casero <[email protected]>, [email protected]\n\n> Jignesh,\n> \n> On 12/19/05 11:29 AM, \"Jignesh Shah\" <[email protected]> wrote:\n> \n> > I have gone to the max with 4 fibers on Sun Fire T2000. But I am \n> not sure\n> > about the answers that you asked. Let me see if I can get answers \n> for them. I\n> > am going to try to max out the IO on these systems with 8 fibers \n> as soon as I\n> > get additional storage so stay tuned for that.\n> \n> Cool - how close did you get to 800MB/s?\n> \n> > By the way you don't have to wait for my tests. Just get a trial \n> server and\n> > try it on your own. 
If you don't like it return it.\n> > \n> > https://www.sun.com/emrkt/trycoolthreads/contactme.html\n> \n> Done - I'll certainly test Postgres / Bizgres on it - you know me ;-)\n> \n> > However if you do try it with PostgreSQL, do let me know also \n> with your\n> > experience.\n> \n> See above.\n> \n> The Niagara is UltraSparc III compatible - so the GCC compiler \n> should emit\n> good code for it, right?\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)-----------------------\n> ----\n> TIP 2: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Mon, 19 Dec 2005 15:21:06 -0500", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh,\n\nOn 12/19/05 12:21 PM, \"Jignesh Shah\" <[email protected]> wrote:\n\n> I got about 720 MB/sec to 730 MB/sec with plain dd tests on my current\n> storage configuration (8 LUNS on 4 fibers) which slowed me down (10K rpm 146\n> GB disks FC) with 4 LUNS going through a longer pass to the disks (via a\n> controller master array to slave JBODs to provide ) .\n> \n> extended device statistics\n> r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n> 0.8 14.0 0.0 0.0 0.0 0.3 0.0 17.8 0 4 c3t0d0\n> 91.4 0.0 91.4 0.0 0.0 1.0 0.0 10.5 0 96 c0t40d0\n> 96.0 0.0 96.0 0.0 0.0 1.0 0.0 10.0 0 96 c5t40d1\n> 95.8 0.0 95.8 0.0 0.0 1.0 0.0 10.0 0 96 c0t40d1\n\nCan you please explain these columns? R/s, is that millions of pages or\nextents or something? How do I translate this to 730 million bytes per\nsecond?\n\n- Luke\n\n\n", "msg_date": "Mon, 19 Dec 2005 17:34:31 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh,\n\nOn 12/19/05 12:21 PM, \"Jignesh Shah\" <[email protected]> wrote:\n\n> extended device statistics\n> r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device\n> 0.8 14.0 0.0 0.0 0.0 0.3 0.0 17.8 0 4 c3t0d0\n> 91.4 0.0 91.4 0.0 0.0 1.0 0.0 10.5 0 96 c0t40d0\n> 96.0 0.0 96.0 0.0 0.0 1.0 0.0 10.0 0 96 c5t40d1\n> 95.8 0.0 95.8 0.0 0.0 1.0 0.0 10.0 0 96 c0t40d1\n> 96.8 0.0 96.8 0.0 0.0 1.0 0.0 9.9 0 96 c5t40d0\n> 84.6 0.0 84.6 0.0 0.0 1.0 0.0 11.4 0 96 c4t46d1\n> 85.6 0.0 85.6 0.0 0.0 1.0 0.0 11.2 0 96 c4t46d0\n> 85.2 0.0 85.2 0.0 0.0 1.0 0.0 11.3 0 96 c2t46d1\n> 85.4 0.0 85.4 0.0 0.0 1.0 0.0 11.3 0 96 c2t46d0\n\nDoh! Forget my last message, each of these is a single drive.\n\nWacky layout though - it looks like c0,c2,c3,c4,c5 - is that 5 controllers\nthere?\n\nAlso - what are the RAID options on this unit?\n\nTo get optimal performance on an 8 core unit, would we want to map 1 active\nprocess to each of these drives? Can the CPU run all 8 threads\nsimultaneously?\n\n- Luke\n\n\n", "msg_date": "Mon, 19 Dec 2005 17:37:35 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "On Sun, Dec 18, 2005 at 11:35:15AM -0500, Juan Casero wrote:\n> Can anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\n> T1 processor and architecture on Solaris 10? I have a custom built retail\n> sales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\n\nPeople have seen some pretty big gains going from 7.4 to 8.1. 
I recently\nmigrated http://stats.distributed.net and the daily processing\n(basically OLAP) times were cut in half.\n\nAs someone else mentioned, IO is probably going to be your biggest\nconsideration, unless you have a lot of queries running at once.\nProbably your best bang for the buck will be from an Opteron-based\nserver with a good number of internal drives.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 13:34:56 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" } ]
[ { "msg_contents": "From: [email protected] on behalf of Juan Casero\n\nQUOTE:\n\nHi -\n\n\nCan anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\nT1 processor and architecture on Solaris 10? I have a custom built retail\nsales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\nCore 3 intel box. I want to scale this application upwards to handle a\ndatabase that might grow to a 100 GB. Our company is green mission conscious\nnow so I was hoping I could use that to convince management to consider a Sun\nUltrasparc T1 or T2 system provided that if I can get the best performance\nout of it on PostgreSQL. \n\nENDQUOTE:\n\nWell, generally, AMD 64 bit is going to be a better value for your dollar, and run faster than most Sparc based machines.\n\nAlso, PostgreSQL is generally faster under either BSD or Linux than under Solaris on the same box. This might or might not hold as you crank up the numbers of CPUs.\n\nPostgreSQL runs one process for connection. So, to use extra CPUs, you really need to have >1 connection running against the database. \n\nMostly, databases tend to be either I/O bound, until you give them a lot of I/O, then they'll be CPU bound.\n\nAfter that lots of memory, THEN more CPUs. Two CPUs is always useful, as one can be servicing the OS and another the database. But unless you're gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a waste.\n\nSo, I'd recommend a dual core or dual dual core (i.e. 4 cores) AMD64 system with 2 or more gigs ram, and at least a pair of fast drives in a mirror with a hardare RAID controller with battery backed cache. If you'll be trundling through all 100 gigs of your data set regularly, then get all the memory you can put in a machine at a reasonable cost before buying lots of CPUs.\n\nBut without knowing what you're gonna be doing we can't really make solid recommendations...\n\n\n\n\n\nRE: [PERFORM] PostgreSQL and Ultrasparc T1\n\n\n\nFrom: [email protected] on behalf of Juan Casero\n\nQUOTE:\n\nHi -\n\n\nCan anyone tell me how well PostgreSQL 8.x performs on the new Sun Ultrasparc\nT1 processor and architecture on Solaris 10?   I have a custom built retail\nsales reporting that I developed using PostgreSQL 7.48 and PHP on a Fedora\nCore 3 intel box.  I want to scale this application upwards to handle a\ndatabase that might grow to a 100 GB.  Our company is green mission conscious\nnow so I was hoping I could use that to convince management to consider a Sun\nUltrasparc T1 or T2 system provided that if I can get the best performance\nout of it on PostgreSQL.\n\nENDQUOTE:\n\nWell, generally, AMD 64 bit is going to be a better value for your dollar, and run faster than most Sparc based machines.\n\nAlso, PostgreSQL is generally faster under either BSD or Linux than under Solaris on the same box.  This might or might not hold as you crank up the numbers of CPUs.\n\nPostgreSQL runs one process for connection.  So, to use extra CPUs, you really need to have >1 connection running against the database. \n\nMostly, databases tend to be either I/O bound, until you give them a lot of I/O, then they'll be CPU bound.\n\nAfter that lots of memory, THEN more CPUs.  Two CPUs is always useful, as one can be servicing the OS and another the database.  But unless you're gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a waste.\n\nSo, I'd recommend a dual core or dual dual core (i.e. 
4 cores) AMD64 system with 2 or more gigs ram, and at least a pair of fast drives in a mirror with a hardare RAID controller with battery backed cache.  If you'll be trundling through all 100 gigs of your data set regularly, then get all the memory you can put in a machine at a reasonable cost before buying lots of CPUs.\n\nBut without knowing what you're gonna be doing we can't really make solid recommendations...", "msg_date": "Mon, 19 Dec 2005 00:25:18 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Ok. That is what I wanted to know. Right now this database is a PostgreSQL \n7.4.8 system. I am using it in a sort of DSS role. I have weekly summaries \nof the sales for our division going back three years. I have a PHP based \nwebapp that I wrote to give the managers access to this data. The webapp \nlets them make selections for reports and then it submits a parameterized \nquery to the database for execution. The returned data rows are displayed \nand formatted in their web browser. My largest sales table is about 13 \nmillion rows along with all the indexes it takes up about 20 gigabytes. I \nneed to scale this application up to nearly 100 gigabytes to handle daily \nsales summaries. Once we start looking at daily sales figures our database \nsize could grow ten to twenty times. I use postgresql because it gives me \nthe kind of enterprise database features I need to program the complex logic \nfor the queries. I also need the transaction isolation facilities it \nprovides so I can optimize the queries in plpgsql without worrying about \nmultiple users temp tables colliding with each other. Additionally, I hope \nto rewrite the front end application in JSP so maybe I could use the \nmultithreaded features of the Java to exploit a multicore multi-cpu system. \nThere are almost no writes to the database tables. The bulk of the \napplication is just executing parameterized queries and returning huge \namounts of data. I know bizgres is supposed to be better at this but I want \nto stay away from anything that is beta. I cannot afford for this thing to \ngo wrong. My reasoning for looking at the T1000/2000 was simply the large \nnumber of cores. I know postgresql uses a super server that forks copies of \nitself to handle incoming requests on port 5432. But I figured the number of \ncores on the T1000/2000 processors would be utilized by the forked copies of \nthe postgresql server. From the comments I have seen so far it does not look \nlike this is the case. We had originally sized up a dual processor dual core \nAMD opteron system from HP for this but I thought I could get more bang for \nthe buck on a T1000/2000. It now seems I may have been wrong. I am stronger \nin Linux than Solaris so I am not upset I am just trying to find the best \nhardware for the anticipated needs of this application.\n\nThanks,\nJuan\n\nOn Monday 19 December 2005 01:25, Scott Marlowe wrote:\n> From: [email protected] on behalf of Juan Casero\n>\n> QUOTE:\n>\n> Hi -\n>\n>\n> Can anyone tell me how well PostgreSQL 8.x performs on the new Sun\n> Ultrasparc T1 processor and architecture on Solaris 10? I have a custom\n> built retail sales reporting that I developed using PostgreSQL 7.48 and PHP\n> on a Fedora Core 3 intel box. I want to scale this application upwards to\n> handle a database that might grow to a 100 GB. 
Our company is green\n> mission conscious now so I was hoping I could use that to convince\n> management to consider a Sun Ultrasparc T1 or T2 system provided that if I\n> can get the best performance out of it on PostgreSQL.\n>\n> ENDQUOTE:\n>\n> Well, generally, AMD 64 bit is going to be a better value for your dollar,\n> and run faster than most Sparc based machines.\n>\n> Also, PostgreSQL is generally faster under either BSD or Linux than under\n> Solaris on the same box. This might or might not hold as you crank up the\n> numbers of CPUs.\n>\n> PostgreSQL runs one process for connection. So, to use extra CPUs, you\n> really need to have >1 connection running against the database.\n>\n> Mostly, databases tend to be either I/O bound, until you give them a lot of\n> I/O, then they'll be CPU bound.\n>\n> After that lots of memory, THEN more CPUs. Two CPUs is always useful, as\n> one can be servicing the OS and another the database. But unless you're\n> gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a\n> waste.\n>\n> So, I'd recommend a dual core or dual dual core (i.e. 4 cores) AMD64 system\n> with 2 or more gigs ram, and at least a pair of fast drives in a mirror\n> with a hardare RAID controller with battery backed cache. If you'll be\n> trundling through all 100 gigs of your data set regularly, then get all the\n> memory you can put in a machine at a reasonable cost before buying lots of\n> CPUs.\n>\n> But without knowing what you're gonna be doing we can't really make solid\n> recommendations...\n", "msg_date": "Mon, 19 Dec 2005 19:32:25 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "I guess it depends on what you term as your metric for measurement.\nIf it is just one query execution time .. It may not be the best on \nUltraSPARC T1.\nBut if you have more than 8 complex queries running simultaneously, \nUltraSPARC T1 can do well compared comparatively provided the \napplication can scale also along with it.\n\nThe best way to approach is to figure out your peak workload, find an \naccurate way to measure the \"true\" metric and then design a benchmark \nfor it and run it on both servers.\n\nRegards,\nJignesh\n\n\nJuan Casero wrote:\n\n>Ok. That is what I wanted to know. Right now this database is a PostgreSQL \n>7.4.8 system. I am using it in a sort of DSS role. I have weekly summaries \n>of the sales for our division going back three years. I have a PHP based \n>webapp that I wrote to give the managers access to this data. The webapp \n>lets them make selections for reports and then it submits a parameterized \n>query to the database for execution. The returned data rows are displayed \n>and formatted in their web browser. My largest sales table is about 13 \n>million rows along with all the indexes it takes up about 20 gigabytes. I \n>need to scale this application up to nearly 100 gigabytes to handle daily \n>sales summaries. Once we start looking at daily sales figures our database \n>size could grow ten to twenty times. I use postgresql because it gives me \n>the kind of enterprise database features I need to program the complex logic \n>for the queries. I also need the transaction isolation facilities it \n>provides so I can optimize the queries in plpgsql without worrying about \n>multiple users temp tables colliding with each other. 
Additionally, I hope \n>to rewrite the front end application in JSP so maybe I could use the \n>multithreaded features of the Java to exploit a multicore multi-cpu system. \n>There are almost no writes to the database tables. The bulk of the \n>application is just executing parameterized queries and returning huge \n>amounts of data. I know bizgres is supposed to be better at this but I want \n>to stay away from anything that is beta. I cannot afford for this thing to \n>go wrong. My reasoning for looking at the T1000/2000 was simply the large \n>number of cores. I know postgresql uses a super server that forks copies of \n>itself to handle incoming requests on port 5432. But I figured the number of \n>cores on the T1000/2000 processors would be utilized by the forked copies of \n>the postgresql server. From the comments I have seen so far it does not look \n>like this is the case. We had originally sized up a dual processor dual core \n>AMD opteron system from HP for this but I thought I could get more bang for \n>the buck on a T1000/2000. It now seems I may have been wrong. I am stronger \n>in Linux than Solaris so I am not upset I am just trying to find the best \n>hardware for the anticipated needs of this application.\n>\n>Thanks,\n>Juan\n>\n>On Monday 19 December 2005 01:25, Scott Marlowe wrote:\n> \n>\n>>From: [email protected] on behalf of Juan Casero\n>>\n>>QUOTE:\n>>\n>>Hi -\n>>\n>>\n>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun\n>>Ultrasparc T1 processor and architecture on Solaris 10? I have a custom\n>>built retail sales reporting that I developed using PostgreSQL 7.48 and PHP\n>>on a Fedora Core 3 intel box. I want to scale this application upwards to\n>>handle a database that might grow to a 100 GB. Our company is green\n>>mission conscious now so I was hoping I could use that to convince\n>>management to consider a Sun Ultrasparc T1 or T2 system provided that if I\n>>can get the best performance out of it on PostgreSQL.\n>>\n>>ENDQUOTE:\n>>\n>>Well, generally, AMD 64 bit is going to be a better value for your dollar,\n>>and run faster than most Sparc based machines.\n>>\n>>Also, PostgreSQL is generally faster under either BSD or Linux than under\n>>Solaris on the same box. This might or might not hold as you crank up the\n>>numbers of CPUs.\n>>\n>>PostgreSQL runs one process for connection. So, to use extra CPUs, you\n>>really need to have >1 connection running against the database.\n>>\n>>Mostly, databases tend to be either I/O bound, until you give them a lot of\n>>I/O, then they'll be CPU bound.\n>>\n>>After that lots of memory, THEN more CPUs. Two CPUs is always useful, as\n>>one can be servicing the OS and another the database. But unless you're\n>>gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a\n>>waste.\n>>\n>>So, I'd recommend a dual core or dual dual core (i.e. 4 cores) AMD64 system\n>>with 2 or more gigs ram, and at least a pair of fast drives in a mirror\n>>with a hardare RAID controller with battery backed cache. 
If you'll be\n>>trundling through all 100 gigs of your data set regularly, then get all the\n>>memory you can put in a machine at a reasonable cost before buying lots of\n>>CPUs.\n>>\n>>But without knowing what you're gonna be doing we can't really make solid\n>>recommendations...\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n>\n", "msg_date": "Mon, 19 Dec 2005 23:19:25 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh K. Shah wrote:\n> I guess it depends on what you term as your metric for measurement.\n> If it is just one query execution time .. It may not be the best on \n> UltraSPARC T1.\n> But if you have more than 8 complex queries running simultaneously, \n> UltraSPARC T1 can do well compared comparatively provided the \n> application can scale also along with it.\n\nI just want to clarify one issue here. It's my understanding that the \n8-core, 4 hardware thread (known as strands) system is seen as a 32 cpu \nsystem by Solaris. \n\nSo, one could have up to 32 postgresql processes running in parallel on \nthe current systems (assuming the application can scale).\n\n-- Alan\n", "msg_date": "Tue, 20 Dec 2005 10:01:52 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "On Tue, 20 Dec 2005, Alan Stange wrote:\n\n> Jignesh K. Shah wrote:\n>> I guess it depends on what you term as your metric for measurement.\n>> If it is just one query execution time .. It may not be the best on \n>> UltraSPARC T1.\n>> But if you have more than 8 complex queries running simultaneously, \n>> UltraSPARC T1 can do well compared comparatively provided the application \n>> can scale also along with it.\n>\n> I just want to clarify one issue here. It's my understanding that the \n> 8-core, 4 hardware thread (known as strands) system is seen as a 32 cpu \n> system by Solaris. \n> So, one could have up to 32 postgresql processes running in parallel on the \n> current systems (assuming the application can scale).\n\nnote that like hyperthreading, the strands aren't full processors, their \nefficiancy depends on how much other threads shareing the core stall \nwaiting for external things.\n\nDavid Lang\n", "msg_date": "Tue, 20 Dec 2005 07:08:21 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "David Lang wrote:\n> On Tue, 20 Dec 2005, Alan Stange wrote:\n>\n>> Jignesh K. Shah wrote:\n>>> I guess it depends on what you term as your metric for measurement.\n>>> If it is just one query execution time .. It may not be the best on \n>>> UltraSPARC T1.\n>>> But if you have more than 8 complex queries running simultaneously, \n>>> UltraSPARC T1 can do well compared comparatively provided the \n>>> application can scale also along with it.\n>>\n>> I just want to clarify one issue here. It's my understanding that \n>> the 8-core, 4 hardware thread (known as strands) system is seen as a \n>> 32 cpu system by Solaris. 
So, one could have up to 32 postgresql \n>> processes running in parallel on the current systems (assuming the \n>> application can scale).\n>\n> note that like hyperthreading, the strands aren't full processors, \n> their efficiancy depends on how much other threads shareing the core \n> stall waiting for external things. \nExactly. \n", "msg_date": "Tue, 20 Dec 2005 10:14:44 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "On Tue, 20 Dec 2005, Alan Stange wrote:\n\n> David Lang wrote:\n>> On Tue, 20 Dec 2005, Alan Stange wrote:\n>> \n>>> Jignesh K. Shah wrote:\n>>>> I guess it depends on what you term as your metric for measurement.\n>>>> If it is just one query execution time .. It may not be the best on \n>>>> UltraSPARC T1.\n>>>> But if you have more than 8 complex queries running simultaneously, \n>>>> UltraSPARC T1 can do well compared comparatively provided the application \n>>>> can scale also along with it.\n>>> \n>>> I just want to clarify one issue here. It's my understanding that the \n>>> 8-core, 4 hardware thread (known as strands) system is seen as a 32 cpu \n>>> system by Solaris. So, one could have up to 32 postgresql processes \n>>> running in parallel on the current systems (assuming the application can \n>>> scale).\n>> \n>> note that like hyperthreading, the strands aren't full processors, their \n>> efficiancy depends on how much other threads shareing the core stall \n>> waiting for external things.\n> Exactly. Until we have a machine in hand (and substantial technical \n> documentation) we won't know all the limitations.\n\nby the way, when you do get your hands on it I would be interested to hear \nhow Linux compares to Solaris on the same hardware.\n\ngiven how new the hardware is it's also likly that linux won't identify \nthe hardware properly (either seeing it as 32 true processors or just as 8 \nwithout being able to use the strands), so the intitial tests may not \nreflect the Linux performance in a release or two.\n\nDavid Lang\n", "msg_date": "Tue, 20 Dec 2005 07:22:24 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "Jignesh,\n\nJuan says the following below:\n\n\"I figured the number of cores on the T1000/2000 processors would be\nutilized by the forked copies of the postgresql server. From the comments\nI have seen so far it does not look like this is the case.\"\n\nI think this needs to be refuted. Doesn't Solaris switch processes as well\nas threads (LWPs, whatever) equally well amongst cores? I realize the\nprocess context switch is more expensive than the thread switch, but\nSolaris will utilize all cores as processes or threads become ready to run,\ncorrect?\n\nBTW, it's great to see folks with your email address on the list. I feel\nit points to a brighter future for all involved.\n\nThanks,\n\nRick\n\n\n \n \"Jignesh K. Shah\" \n <[email protected] \n > To \n Sent by: Juan Casero <[email protected]> \n pgsql-performance cc \n -owner@postgresql [email protected] \n .org Subject \n Re: [PERFORM] PostgreSQL and \n Ultrasparc T1 \n 12/19/2005 11:19 \n PM \n \n \n \n \n\n\n\n\nI guess it depends on what you term as your metric for measurement.\nIf it is just one query execution time .. 
It may not be the best on\nUltraSPARC T1.\nBut if you have more than 8 complex queries running simultaneously,\nUltraSPARC T1 can do well compared comparatively provided the\napplication can scale also along with it.\n\nThe best way to approach is to figure out your peak workload, find an\naccurate way to measure the \"true\" metric and then design a benchmark\nfor it and run it on both servers.\n\nRegards,\nJignesh\n\n\nJuan Casero wrote:\n\n>Ok. That is what I wanted to know. Right now this database is a\nPostgreSQL\n>7.4.8 system. I am using it in a sort of DSS role. I have weekly\nsummaries\n>of the sales for our division going back three years. I have a PHP based\n>webapp that I wrote to give the managers access to this data. The webapp\n>lets them make selections for reports and then it submits a parameterized\n>query to the database for execution. The returned data rows are displayed\n\n>and formatted in their web browser. My largest sales table is about 13\n>million rows along with all the indexes it takes up about 20 gigabytes. I\n\n>need to scale this application up to nearly 100 gigabytes to handle daily\n>sales summaries. Once we start looking at daily sales figures our\ndatabase\n>size could grow ten to twenty times. I use postgresql because it gives me\n\n>the kind of enterprise database features I need to program the complex\nlogic\n>for the queries. I also need the transaction isolation facilities it\n>provides so I can optimize the queries in plpgsql without worrying about\n>multiple users temp tables colliding with each other. Additionally, I\nhope\n>to rewrite the front end application in JSP so maybe I could use the\n>multithreaded features of the Java to exploit a multicore multi-cpu\nsystem.\n>There are almost no writes to the database tables. The bulk of the\n>application is just executing parameterized queries and returning huge\n>amounts of data. I know bizgres is supposed to be better at this but I\nwant\n>to stay away from anything that is beta. I cannot afford for this thing\nto\n>go wrong. My reasoning for looking at the T1000/2000 was simply the large\n\n>number of cores. I know postgresql uses a super server that forks copies\nof\n>itself to handle incoming requests on port 5432. But I figured the number\nof\n>cores on the T1000/2000 processors would be utilized by the forked copies\nof\n>the postgresql server. From the comments I have seen so far it does not\nlook\n>like this is the case. We had originally sized up a dual processor dual\ncore\n>AMD opteron system from HP for this but I thought I could get more bang\nfor\n>the buck on a T1000/2000. It now seems I may have been wrong. I am\nstronger\n>in Linux than Solaris so I am not upset I am just trying to find the best\n>hardware for the anticipated needs of this application.\n>\n>Thanks,\n>Juan\n>\n>On Monday 19 December 2005 01:25, Scott Marlowe wrote:\n>\n>\n>>From: [email protected] on behalf of Juan Casero\n>>\n>>QUOTE:\n>>\n>>Hi -\n>>\n>>\n>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun\n>>Ultrasparc T1 processor and architecture on Solaris 10? I have a custom\n>>built retail sales reporting that I developed using PostgreSQL 7.48 and\nPHP\n>>on a Fedora Core 3 intel box. I want to scale this application upwards\nto\n>>handle a database that might grow to a 100 GB. 
Our company is green\n>>mission conscious now so I was hoping I could use that to convince\n>>management to consider a Sun Ultrasparc T1 or T2 system provided that if\nI\n>>can get the best performance out of it on PostgreSQL.\n>>\n>>ENDQUOTE:\n>>\n>>Well, generally, AMD 64 bit is going to be a better value for your\ndollar,\n>>and run faster than most Sparc based machines.\n>>\n>>Also, PostgreSQL is generally faster under either BSD or Linux than under\n>>Solaris on the same box. This might or might not hold as you crank up\nthe\n>>numbers of CPUs.\n>>\n>>PostgreSQL runs one process for connection. So, to use extra CPUs, you\n>>really need to have >1 connection running against the database.\n>>\n>>Mostly, databases tend to be either I/O bound, until you give them a lot\nof\n>>I/O, then they'll be CPU bound.\n>>\n>>After that lots of memory, THEN more CPUs. Two CPUs is always useful, as\n>>one can be servicing the OS and another the database. But unless you're\n>>gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a\n>>waste.\n>>\n>>So, I'd recommend a dual core or dual dual core (i.e. 4 cores) AMD64\nsystem\n>>with 2 or more gigs ram, and at least a pair of fast drives in a mirror\n>>with a hardare RAID controller with battery backed cache. If you'll be\n>>trundling through all 100 gigs of your data set regularly, then get all\nthe\n>>memory you can put in a machine at a reasonable cost before buying lots\nof\n>>CPUs.\n>>\n>>But without knowing what you're gonna be doing we can't really make solid\n>>recommendations...\n>>\n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n", "msg_date": "Tue, 20 Dec 2005 11:50:51 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "But yes All LWPs (processes and threads) are switched across virtual \nCPUS . There is intelligence built in Solaris to understand which \nstrands are executing on which cores and it will balance out the cores \ntoo so if there are only 8 threads running they will essentially run on \nseparate cores rather than 2 cores with 8 threads.\n\nThe biggest limitation is application scaling. pgbench shows that with \nmore processes trying to bottleneck on same files will probably not \nperform better unless you tune your storage/file system. Those are the \nissues which we typically try to solve with community partners (vendors, \nopen source) since that gives the biggest benefits.\n\nBest example to verify in such multi-processes environment, do you see \ngreater than 60% avg CPU utilization in your dual/quad config \nXeons/Itaniums, then Sun Fire T2000 will help you a lot. 
However if you \nare stuck below 50% (for dual) or 25% (for quad) which means you are \npretty much stuck at 1 CPU performance and/or probably have more IO \nrelated contention then it won't help you with these systems.\n\nI hope you get the idea on when a workload will perform better on Sun \nFire T2000 without burning hands.\n\nI will try to test some more with PostgreSQL on these systems to kind of \nhighlight what can work or what will not work.\n\nIs pgbench the workload that you prefer? (It already has issues with \npg_xlog so my guess is it probably won't scale much)\nIf you have other workload informations let me know.\n\nThanks.\nRegards,\nJignesh\n\n\n\[email protected] wrote:\n\n>Jignesh,\n>\n>Juan says the following below:\n>\n>\"I figured the number of cores on the T1000/2000 processors would be\n>utilized by the forked copies of the postgresql server. From the comments\n>I have seen so far it does not look like this is the case.\"\n>\n>I think this needs to be refuted. Doesn't Solaris switch processes as well\n>as threads (LWPs, whatever) equally well amongst cores? I realize the\n>process context switch is more expensive than the thread switch, but\n>Solaris will utilize all cores as processes or threads become ready to run,\n>correct?\n>\n>BTW, it's great to see folks with your email address on the list. I feel\n>it points to a brighter future for all involved.\n>\n>Thanks,\n>\n>Rick\n>\n>\n> \n> \"Jignesh K. Shah\" \n> <[email protected] \n> > To \n> Sent by: Juan Casero <[email protected]> \n> pgsql-performance cc \n> -owner@postgresql [email protected] \n> .org Subject \n> Re: [PERFORM] PostgreSQL and \n> Ultrasparc T1 \n> 12/19/2005 11:19 \n> PM \n> \n> \n> \n> \n>\n>\n>\n>\n>I guess it depends on what you term as your metric for measurement.\n>If it is just one query execution time .. It may not be the best on\n>UltraSPARC T1.\n>But if you have more than 8 complex queries running simultaneously,\n>UltraSPARC T1 can do well compared comparatively provided the\n>application can scale also along with it.\n>\n>The best way to approach is to figure out your peak workload, find an\n>accurate way to measure the \"true\" metric and then design a benchmark\n>for it and run it on both servers.\n>\n>Regards,\n>Jignesh\n>\n>\n>Juan Casero wrote:\n>\n> \n>\n>>Ok. That is what I wanted to know. Right now this database is a\n>> \n>>\n>PostgreSQL\n> \n>\n>>7.4.8 system. I am using it in a sort of DSS role. I have weekly\n>> \n>>\n>summaries\n> \n>\n>>of the sales for our division going back three years. I have a PHP based\n>>webapp that I wrote to give the managers access to this data. The webapp\n>>lets them make selections for reports and then it submits a parameterized\n>>query to the database for execution. The returned data rows are displayed\n>> \n>>\n>\n> \n>\n>>and formatted in their web browser. My largest sales table is about 13\n>>million rows along with all the indexes it takes up about 20 gigabytes. I\n>> \n>>\n>\n> \n>\n>>need to scale this application up to nearly 100 gigabytes to handle daily\n>>sales summaries. Once we start looking at daily sales figures our\n>> \n>>\n>database\n> \n>\n>>size could grow ten to twenty times. I use postgresql because it gives me\n>> \n>>\n>\n> \n>\n>>the kind of enterprise database features I need to program the complex\n>> \n>>\n>logic\n> \n>\n>>for the queries. 
I also need the transaction isolation facilities it\n>>provides so I can optimize the queries in plpgsql without worrying about\n>>multiple users temp tables colliding with each other. Additionally, I\n>> \n>>\n>hope\n> \n>\n>>to rewrite the front end application in JSP so maybe I could use the\n>>multithreaded features of the Java to exploit a multicore multi-cpu\n>> \n>>\n>system.\n> \n>\n>>There are almost no writes to the database tables. The bulk of the\n>>application is just executing parameterized queries and returning huge\n>>amounts of data. I know bizgres is supposed to be better at this but I\n>> \n>>\n>want\n> \n>\n>>to stay away from anything that is beta. I cannot afford for this thing\n>> \n>>\n>to\n> \n>\n>>go wrong. My reasoning for looking at the T1000/2000 was simply the large\n>> \n>>\n>\n> \n>\n>>number of cores. I know postgresql uses a super server that forks copies\n>> \n>>\n>of\n> \n>\n>>itself to handle incoming requests on port 5432. But I figured the number\n>> \n>>\n>of\n> \n>\n>>cores on the T1000/2000 processors would be utilized by the forked copies\n>> \n>>\n>of\n> \n>\n>>the postgresql server. From the comments I have seen so far it does not\n>> \n>>\n>look\n> \n>\n>>like this is the case. We had originally sized up a dual processor dual\n>> \n>>\n>core\n> \n>\n>>AMD opteron system from HP for this but I thought I could get more bang\n>> \n>>\n>for\n> \n>\n>>the buck on a T1000/2000. It now seems I may have been wrong. I am\n>> \n>>\n>stronger\n> \n>\n>>in Linux than Solaris so I am not upset I am just trying to find the best\n>>hardware for the anticipated needs of this application.\n>>\n>>Thanks,\n>>Juan\n>>\n>>On Monday 19 December 2005 01:25, Scott Marlowe wrote:\n>>\n>>\n>> \n>>\n>>>From: [email protected] on behalf of Juan Casero\n>>>\n>>>QUOTE:\n>>>\n>>>Hi -\n>>>\n>>>\n>>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun\n>>>Ultrasparc T1 processor and architecture on Solaris 10? I have a custom\n>>>built retail sales reporting that I developed using PostgreSQL 7.48 and\n>>> \n>>>\n>PHP\n> \n>\n>>>on a Fedora Core 3 intel box. I want to scale this application upwards\n>>> \n>>>\n>to\n> \n>\n>>>handle a database that might grow to a 100 GB. Our company is green\n>>>mission conscious now so I was hoping I could use that to convince\n>>>management to consider a Sun Ultrasparc T1 or T2 system provided that if\n>>> \n>>>\n>I\n> \n>\n>>>can get the best performance out of it on PostgreSQL.\n>>>\n>>>ENDQUOTE:\n>>>\n>>>Well, generally, AMD 64 bit is going to be a better value for your\n>>> \n>>>\n>dollar,\n> \n>\n>>>and run faster than most Sparc based machines.\n>>>\n>>>Also, PostgreSQL is generally faster under either BSD or Linux than under\n>>>Solaris on the same box. This might or might not hold as you crank up\n>>> \n>>>\n>the\n> \n>\n>>>numbers of CPUs.\n>>>\n>>>PostgreSQL runs one process for connection. So, to use extra CPUs, you\n>>>really need to have >1 connection running against the database.\n>>>\n>>>Mostly, databases tend to be either I/O bound, until you give them a lot\n>>> \n>>>\n>of\n> \n>\n>>>I/O, then they'll be CPU bound.\n>>>\n>>>After that lots of memory, THEN more CPUs. Two CPUs is always useful, as\n>>>one can be servicing the OS and another the database. But unless you're\n>>>gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a\n>>>waste.\n>>>\n>>>So, I'd recommend a dual core or dual dual core (i.e. 
4 cores) AMD64\n>>> \n>>>\n>system\n> \n>\n>>>with 2 or more gigs ram, and at least a pair of fast drives in a mirror\n>>>with a hardare RAID controller with battery backed cache. If you'll be\n>>>trundling through all 100 gigs of your data set regularly, then get all\n>>> \n>>>\n>the\n> \n>\n>>>memory you can put in a machine at a reasonable cost before buying lots\n>>> \n>>>\n>of\n> \n>\n>>>CPUs.\n>>>\n>>>But without knowing what you're gonna be doing we can't really make solid\n>>>recommendations...\n>>>\n>>>\n>>> \n>>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n", "msg_date": "Tue, 20 Dec 2005 12:20:55 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" }, { "msg_contents": "On Tue, Dec 20, 2005 at 12:20:55PM -0500, Jignesh K. Shah wrote:\n> Is pgbench the workload that you prefer? (It already has issues with \n> pg_xlog so my guess is it probably won't scale much)\n> If you have other workload informations let me know.\n\n From what the user described, dbt3 would probably be the best benchmark\nto use. Note that they're basically read-only, which is absolutely not\nwhat pgbench does.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 13:22:36 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" } ]
[ { "msg_contents": "8 HBAs at 200MB/sec would require a pretty significant Storage Processor\nbackend unless cost is not a factor. Once you achieve that, there's a\nquestion of sharing/balancing I/O requirements of various other\napplications/databases on that same shared backend storage...\n\nAnjan\n\n\n-----Original Message-----\nFrom: Jignesh K. Shah [mailto:[email protected]] \nSent: Monday, December 19, 2005 9:27 AM\nTo: Luke Lonergan\nCc: Juan Casero; [email protected]\nSubject: Re: [PERFORM] PostgreSQL and Ultrasparc T1\n\nSun Fire T2000 has 3 PCI-E and 1PCI-X slot free when shipped. Using \ndual fiber channel 2G adapters you can get about 200MB x 8 = 1600MB/sec \nIO bandwidth. Plus when 4G HBAs are supported that will double up. Now I\n\nthink generally that's good enough for 1TB raw data or 2-3 TB Database \nsize. Of course typically the database size in PostgreSQL space will be \nin the 100-500GB range so a Sun Fire T2000 can be a good fit with enough\n\narea to grow at a very reasonable price.\n\nOf course like someone mentioned if all you have is 1 connection using \npostgresql which cannot spawn helper processes/threads, this will be \nlimited by the single thread performance which is about 1.2Ghz compared \non Sun Fire T2000 to AMD64 (Sun Fire X4200) which pretty much has \nsimilar IO Bandwidth, same size chassis, but the individual AMD64 cores\n\nruns at about 2.4Ghz (I believe) and max you can get is 4 cores but you\n\nalso have to do a little trade off in terms of power consumption in lei \nof faster single thread performance. So Choices are available with both \narchitecture. .However if you have a webserver driving a postgreSQL \nbackend, then UltraSPARC T1 might be a better option if you suddenly \nwants to do 100s of db connections. The SunFire T2000 gives you 8 cores \nwith 32 threads in all running on the system. \n\nWith PostgreSQL 8.1 fix for SMP Bufferpool performance and with ZFS now \navailable in Solaris Express release, it would be interesting to see how\n\nthe combination of PostgreSQL 8.1 and ZFS works on Solaris since ZFS is \none of the perfect file systems for PostgreSQL where it wants all \ncomplexities (like block allocation, fragmentation, etc) to the \nunderlying file systems and not re-implement its own infrastructure.\n\nIf somebody is already conducting their own tests, do let me know. As \nsoon as I get some free cycles, I want to run ZFS with PostgreSQL using \nSolaris Express. If you have some preferred workloads do let me know.\n\nRegards,\nJignesh\n\n\nLuke Lonergan wrote:\n\n>Juan,\n>\n>On 12/18/05 8:35 AM, \"Juan Casero\" <[email protected]> wrote:\n>\n> \n>\n>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun\nUltrasparc\n>>T1 processor and architecture on Solaris 10? I have a custom built\nretail\n>>sales reporting that I developed using PostgreSQL 7.48 and PHP on a\nFedora\n>>Core 3 intel box. I want to scale this application upwards to handle\na\n>>database that might grow to a 100 GB. Our company is green mission\nconscious\n>>now so I was hoping I could use that to convince management to\nconsider a Sun\n>>Ultrasparc T1 or T2 system provided that if I can get the best\nperformance\n>>out of it on PostgreSQL. So will newer versions of PostgreSQL (8.1.x)\nbe\n>>able to take of advantage of the multiple cores on a T1 or T2? I\ncannot\n>>change the database and this will be a hard sell unless I can convince\nthem\n>>that the performance advantages are too good to pass up. 
The company\nis\n>>moving in the Win32 direction and so I have to provide rock solid\nreasons for\n>>why I want to use Solaris Sparc on a T1 or T2 server for this database\n>>application instead of Windows on SQL Server.\n>> \n>>\n>\n>The Niagara CPUs are heavily multi-threaded and will require a lot of\n>parallelism to be exposed to them in order to be effective.\n>\n>Until Sun makes niagara-based machines with lots of I/O channels, there\n>won't be much I/O parallelism available to match the CPU parallelism.\n>\n>Bizgres MPP will use the process and I/O parallelism of these big SMP\n>machines and the version based on Postgres 8.1 will be out in February.\n>\n>- Luke \n>\n>\n>\n>---------------------------(end of\nbroadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n> \n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Mon, 19 Dec 2005 10:21:09 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Ultrasparc T1" } ]
[ { "msg_contents": "I have the following table:\n\n \n\nCREATE TABLE mytmp (\n\n Adv integer,\n\n Pub integer,\n\n Web integer,\n\n Tiempo timestamp,\n\n Num integer,\n\n Country varchar(2)\n\n);\n\n \n\nCREATE INDEX idx_mytmp ON mytmp(adv, pub, web);\n\n \n\nAnd with 16M rows this query:\n\n \n\nSELECT adv, pub, web, country, date_trunc('hour', tiempo), sum(num)\n\nFROM mytmp GROUP BY adv, pub, web, country, date_trunc('hour', tiempo)\n\n \n\nI've tried to create index in different columns but it seems that the group\nby clause doesn't use the index in any way.\n\n \n\nIs around there any stuff to accelerate the group by kind of clauses?\n\n \n\nThanks a lot.", "msg_date": "Mon, 19 Dec 2005 11:30:25 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Any way to optimize GROUP BY queries?" }, { "msg_contents": "\"Cristian Prieto\" <[email protected]> writes:\n\n> SELECT adv, pub, web, country, date_trunc('hour', tiempo), sum(num)\n> FROM mytmp GROUP BY adv, pub, web, country, date_trunc('hour', tiempo)\n> \n> I've tried to create index in different columns but it seems that the group\n> by clause doesn't use the index in any way.\n\nIf you had an index on < adv,pub,web,country,date_trunc('hour',tiemp) > then\nit would be capable of using the index however it would choose not to unless\nyou forced it to. Using the index would be slower.\n\n> Is around there any stuff to accelerate the group by kind of clauses?\n\nIncrease your work_mem (or sort_mem in older postgres versions), you can do\nthis for the server as a whole or just for this one session and set it back\nafter this one query. You can increase it up until it starts causing swapping\nat which point it would be counter productive.\n\nIf increasing work_mem doesn't allow a hash aggregate or at least an in-memory\nsort to handle it then putting the pgsql_tmp directory on a separate spindle\nmight help if you have any available.\n\n-- \ngreg\n\n", "msg_date": "19 Dec 2005 15:47:35 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to optimize GROUP BY queries?" }, { "msg_contents": "On Mon, Dec 19, 2005 at 03:47:35PM -0500, Greg Stark wrote:\n> Increase your work_mem (or sort_mem in older postgres versions), you can do\n> this for the server as a whole or just for this one session and set it back\n> after this one query. You can increase it up until it starts causing swapping\n> at which point it would be counter productive.\n\nJust remember that work_memory is per-operation, so it's easy to push\nthe box into swapping if the workload increases. You didn't say how much\nmemory you have, but I'd be careful if work_memory * max_connections\ngets very much larger than your total memory.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 13:47:10 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to optimize GROUP BY queries?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, Dec 19, 2005 at 03:47:35PM -0500, Greg Stark wrote:\n>> Increase your work_mem (or sort_mem in older postgres versions), you can do\n>> this for the server as a whole or just for this one session and set it back\n>> after this one query. 
You can increase it up until it starts causing swapping\n>> at which point it would be counter productive.\n\n> Just remember that work_memory is per-operation, so it's easy to push\n> the box into swapping if the workload increases. You didn't say how much\n> memory you have, but I'd be careful if work_memory * max_connections\n> gets very much larger than your total memory.\n\nIt's considered good practice to have a relatively small default\nwork_mem setting (in postgresql.conf), and then let individual sessions\npush up the value locally with \"SET work_mem\" if they are going to\nexecute queries that need it. This works well as long as you only have\none or a few such \"heavy\" sessions at a time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Dec 2005 16:44:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any way to optimize GROUP BY queries? " } ]
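To make the work_mem advice above concrete, here is a minimal sketch of the per-session approach Greg, Jim and Tom describe, reusing the mytmp query from the start of the thread. The 262144 figure (256MB; work_mem is expressed in kilobytes in 8.0/8.1, and unit strings such as '256MB' are only accepted in later releases) is purely an illustrative assumption, to be sized against available RAM and the number of concurrent sessions as Jim cautions:

SET work_mem = 262144;   -- 256MB for this session only; on 7.x the setting is sort_mem
EXPLAIN
SELECT adv, pub, web, country, date_trunc('hour', tiempo), sum(num)
FROM mytmp
GROUP BY adv, pub, web, country, date_trunc('hour', tiempo);
-- a HashAggregate (or an in-memory Sort feeding a GroupAggregate) in the plan
-- suggests the extra memory is actually being used
RESET work_mem;          -- return the session to the server default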
[ { "msg_contents": "Hi,\n\n \n\nI am not sure if there's an obvious answer to this...If there's a choice\nof an external RAID10 (Fiber Channel 6 or 8 15Krpm drives) enabled\ndrives, what is more beneficial to store on it, the WAL, or the Database\nfiles? One of the other would go on the local RAID10 (4 drives, 15Krpm)\nalong with the OS.\n\n \n\nThis is a very busy database with high concurrent connections, random\nreads and writes. Checkpoint segments are 300 and interval is 6 mins.\nDatabase size is less than 50GB.\n\n \n\nIt has become a bit more confusing because I am trying to allot shared\nstorage across several hosts, and want to be careful not to overload one\nof the 2 storage processors.\n\n \n\nWhat should I check/monitor if more information is needed to determine\nthis?\n\n \n\nAppreciate some suggestions.\n\n \n\nThanks,\nAnjan\n\n \n\n \n \nThis email message and any included attachments constitute confidential\nand privileged information intended exclusively for the listed\naddressee(s). If you are not the intended recipient, please notify\nVantage by immediately telephoning 215-579-8390, extension 1158. In\naddition, please reply to this message confirming your receipt of the\nsame in error. A copy of your email reply can also be sent to\nmailto:[email protected] <blocked::mailto:[email protected]> .\nPlease do not disclose, copy, distribute or take any action in reliance\non the contents of this information. Kindly destroy all copies of this\nmessage and any attachments. Any other use of this email is prohibited.\nThank you for your cooperation. For more information about Vantage,\nplease visit our website at http://www.vantage.com.\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI am not sure if there’s an obvious answer to this…If\nthere’s a choice of an external RAID10 (Fiber Channel 6 or 8 15Krpm\ndrives) enabled drives, what is more beneficial to store on it, the WAL, or the\nDatabase files? One of the other would go on the local RAID10 (4 drives,\n15Krpm) along with the OS.\n \nThis is a very busy database with high concurrent\nconnections, random reads and writes. Checkpoint segments are 300 and interval\nis 6 mins. Database size is less than 50GB.\n \nIt has become a bit more confusing because I am trying to\nallot shared storage across several hosts, and want to be careful not to\noverload one of the 2 storage processors.\n \nWhat should I check/monitor if more information is needed to\ndetermine this?\n \nAppreciate some suggestions.\n \nThanks,\nAnjan\n \n  This email message and any included attachments constitute confidential and privileged information intended exclusively for the listed addressee(s). If you are not the intended recipient, please notify Vantage by immediately telephoning 215-579-8390, extension 1158.  In addition, please reply to this message confirming your receipt of the same in error.  A copy of your email reply can also be sent to mailto:[email protected].  Please do not disclose, copy, distribute or take any action in reliance on the contents of this information.  Kindly destroy all copies of this message and any attachments.  Any other use of this email is prohibited.  Thank you for your cooperation.  
For more information about Vantage, please visit our website at http://www.vantage.com.", "msg_date": "Mon, 19 Dec 2005 15:04:24 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "separate drives for WAL or pgdata files" }, { "msg_contents": "On Mon, 19 Dec 2005, Anjan Dave wrote:\n\n> I am not sure if there's an obvious answer to this...If there's a choice\n> of an external RAID10 (Fiber Channel 6 or 8 15Krpm drives) enabled\n> drives, what is more beneficial to store on it, the WAL, or the Database\n> files? One of the other would go on the local RAID10 (4 drives, 15Krpm)\n> along with the OS.\n\nthe WAL is small compared to the data, and it's mostly sequential access, \nso it doesn't need many spindles, it just needs them more-or-less \ndedicated to the WAL and not distracted by other things.\n\nthe data is large (by comparison), and is accessed randomly, so the more \nspindles that you can throw at it the better.\n\nIn your place I would consider making the server's internal drives into \ntwo raid1 pairs (one for the OS, one for the WAL), and then going with \nraid10 on the external drives for your data\n\n> This is a very busy database with high concurrent connections, random\n> reads and writes. Checkpoint segments are 300 and interval is 6 mins.\n> Database size is less than 50GB.\n\nthis is getting dangerously close to being able to fit in ram. I saw an \narticle over the weekend that Samsung is starting to produce 8G DIMM's, \nthat can go 8 to a controller (instead of 4 per as is currently done), \nwhen motherboards come out that support this you can have 64G of ram per \nopteron socket. it will be pricy, but the performance....\n\nin the meantime you can already go 4G/slot * 4 slots/socket and get 64G on \na 4-socket system. it won't be cheap, but the performance will blow away \nany disk-based system.\n\nfor persistant storage you can replicate from your ram-based system to a \ndisk-based system, and as long as your replication messages hit disk \nquickly you can allow the disk-based version to lag behind in it's updates \nduring your peak periods (as long as it is able to catch up with the \nwrites overnight), and as the disk-based version won't have to do the \nseeks for the reads it will be considerably faster then if it was doing \nall the work (especially if you have good, large battery-backed disk \ncaches to go with those drives to consolodate the writes)\n\n> It has become a bit more confusing because I am trying to allot shared\n> storage across several hosts, and want to be careful not to overload one\n> of the 2 storage processors.\n\nthere's danger here, if you share spindles with other apps you run the \nrisk of slowing down your database significantly. you may be better off \nwith fewer, but dedicated drives rather then more, but shared drives.\n\nDavid Lang\n\n", "msg_date": "Mon, 19 Dec 2005 19:20:56 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: separate drives for WAL or pgdata files" }, { "msg_contents": "On Mon, 19 Dec 2005, David Lang wrote:\n\n> this is getting dangerously close to being able to fit in ram. I saw an \n> article over the weekend that Samsung is starting to produce 8G DIMM's, that \n> can go 8 to a controller (instead of 4 per as is currently done), when \n> motherboards come out that support this you can have 64G of ram per opteron \n> socket. 
it will be pricy, but the performance....\n\na message on another mailing list got me to thinking, there is the horas \nproject that is aiming to put togeather 16 socket Opteron systems within a \nyear (they claim sooner, but I'm being pessimistic ;-), combine this with \nthese 8G dimms and you can have a SINGLE system with 1TB of ram on it \n(right at the limits of the Opteron's 40 bit external memory addressing)\n\n_wow_\n\nand the thing it that it won't take much change in the software stack to \ndeal with this.\n\nLinux is already running on machines with 1TB of ram (and 512 CPU's) so it \nwill run very well. Postgres probably needs some attention to it's locks, \nbut it is getting that attention now (and it will get more with the Sun \nNiagra chips being able to run 8 processes simultaniously)\n\njust think of the possibilities (if you have the money to afford the super \nmachine :-)\n\nDavid Lang\n\n", "msg_date": "Mon, 19 Dec 2005 19:48:15 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: separate drives for WAL or pgdata files" }, { "msg_contents": "On Mon, Dec 19, 2005 at 07:20:56PM -0800, David Lang wrote:\n> for persistant storage you can replicate from your ram-based system to a \n> disk-based system, and as long as your replication messages hit disk \n> quickly you can allow the disk-based version to lag behind in it's updates \n> during your peak periods (as long as it is able to catch up with the \n> writes overnight), and as the disk-based version won't have to do the \n> seeks for the reads it will be considerably faster then if it was doing \n> all the work (especially if you have good, large battery-backed disk \n> caches to go with those drives to consolodate the writes)\n\nHuh? Unless you're doing a hell of a lot of writing just run a normal\ninstance and make sure you have enough bandwidth to the drives with\npg_xlog on it. Make sure those drives are using a battery-backed raid\ncontroller too. You'll also need to tune things to make sure that\ncheckpoints never have much (if any) work to do when the occur, but you\nshould be able to set that up with proper bg_writer tuning.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 13:52:20 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: separate drives for WAL or pgdata files" } ]
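For reference, the checkpoint and background-writer tuning Jim alludes to is done in postgresql.conf. The excerpt below is only a rough 8.1-era sketch: the checkpoint values simply echo the 300 segments and 6-minute interval already mentioned in this thread, and every other number is an illustrative assumption to be validated against the actual write load and the battery-backed cache on this hardware:

checkpoint_segments = 300      # as already configured on this system
checkpoint_timeout = 360       # seconds; the 6-minute interval mentioned above
wal_buffers = 64               # 8kB pages; a modest bump for a write-heavy box
bgwriter_delay = 200           # ms between background-writer rounds
bgwriter_lru_percent = 2.0     # scan more of the buffer pool's LRU end per round
bgwriter_lru_maxpages = 100
bgwriter_all_percent = 0.5     # trickle dirty pages out so checkpoints find little left to flush
bgwriter_all_maxpages = 200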
[ { "msg_contents": "Hi,\n \n We�re running 8.03 and I�m trying to understand why the following SELECT doesn�t use iarchave05 index.\n \n If you disable seqscan then iarchave05 index is used and the total runtime is about 50% less than when iarchave05 index is not used.\n \n Why is the optimizer not using iarchave05 index?\n \n select * from iparq.arript\n where\n (anocalc = 2005\n and rtrim(inscimob) = rtrim('010100101480010000')\n and codvencto2 = 1\n and parcela2 >= 0)\n or\n (anocalc = 2005\n and rtrim(inscimob) = rtrim('010100101480010000')\n and codvencto2 > 1)\n or\n (anocalc = 2005\n and rtrim(inscimob) > rtrim('010100101480010000'))\n or\n (anocalc > 2005)\n order by\n anocalc,\n inscimob,\n codvencto2,\n parcela2;\n \nExplain analyze with set enable_seqscan and enable_nestloop to on;\n QUERY PLAN &nbsp; \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=231852.08..232139.96 rows=115153 width=896) (actual time=38313.953..38998.019 rows=167601 loops=1)\n Sort Key: anocalc, inscimob, codvencto2, parcela2\n -> Seq Scan on arript (cost=0.00..170201.44 rows=115153 width=896) (actual time=56.979..13364.748 rows=167601 loops=1)\n Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 39247.521 ms\n(5 rows)\n Sort (cost=232243.19..232531.55 rows=115346 width=896) (actual time=46590.246..47225.910 rows=167601 loops=1)\n Sort Key: anocalc, inscimob, codvencto2, parcela2\n -> Seq Scan on arript (cost=0.00..170486.86 rows=115346 width=896) (actual time=54.573..13737.535 rows=167601 loops=1)\n Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 47479.861 ms\n(5 rows)\n \n Sort (cost=232281.07..232569.48 rows=115365 width=896) (actual time=40856.792..41658.379 rows=167601 loops=1)\n Sort Key: anocalc, inscimob, codvencto2, parcela2\n -> Seq Scan on arript (cost=0.00..170515.00 rows=115365 width=896) (actual time=58.584..13529.589 rows=167601 loops=1)\n Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 41909.792 ms\n(5 rows)\n Explain analyze with set enable_seqscan and enable_nestloop to off;\n ; QUERY PLAN &nbsp; \n Index Scan using iarchave05 on arript 
(cost=0.00..238964.80 rows=115255 width=896) (actual time=13408.139..19814.848 rows=167601 loops=1)\n   Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 20110.892 ms\n(3 rows)\n \n Index Scan using iarchave05 on arript (cost=0.00..239091.81 rows=115320 width=896) (actual time=14238.672..21598.862 rows=167601 loops=1)\n   Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 21967.840 ms\n(3 rows)\n \n Index Scan using iarchave05 on arript (cost=0.00..239115.06 rows=115331 width=896) (actual time=13863.863..20504.503 rows=167601 loops=1)\n   Filter: (((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 = 1::numeric) AND (parcela2 >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) = '010100101480010000'::text) AND (codvencto2 > 1::numeric)) OR ((anocalc = 2005::numeric) AND (rtrim((inscimob)::text) > '010100101480010000'::text)) OR (anocalc > 2005::numeric))\n Total runtime: 20768.244 ms\n(3 rows)\n Table definition:\n                 Table \"iparq.arript\"\n      Column       |         Type          | Modifiers\n-------------------+-----------------------+-----------\n anocalc           | numeric(4,0)          | not null\n cadastro          | numeric(8,0)          | not null\n codvencto         | numeric(2,0)          | not null\n parcela           | numeric(2,0)          | not null\n inscimob          | character varying(18) | not null\n codvencto2        | numeric(2,0)          | not null\n parcela2          | numeric(2,0)          | not null\n codpropr          | numeric(10,0)         | not null\n dtaven            | numeric(8,0)          | not null\n...\n...\n...\nIndexes:\n    \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)\n    \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)\n    \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)\n    \"iarchave03\" btree (codpropr, dtaven)\n    \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)\n \n Thanks in advance!\n \n Benkendorf", "msg_date": "Mon, 19 Dec 2005 20:22:58 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Is the optimizer choice right?"
}, { "msg_contents": "\nCarlos Benkendorf <[email protected]> writes:\n\n> Hi,\n> \n> We�re running 8.03 and I�m trying to understand why the following SELECT doesn�t use iarchave05 index.\n> \n> If you disable seqscan then iarchave05 index is used and the total runtime\n> is about 50% less than when iarchave05 index is not used.\n> \n> Why is the optimizer not using iarchave05 index?\n\nThe optimizer is calculating that the index scan would require more i/o than\nthe sequential scan and be slower. The only reason it isn't is because most of\nthe data is cached from your previous tests.\n\nIf this test accurately represents the production situation and most of this\ndata is in fact routinely cached then you might consider lowering the\nrandom_page_cost to represent this. The value of 4 is reasonable for actual\ni/o but if most of the data is cached then you effectively are getting\nsomething closer to 1. Try 2 or 1.5 or so.\n\nNote that the sequential scan has to scan the entire table. The index scan has\nto scan the entire table *and* the entire index, and in a pretty random order.\nIf the table didn't fit entirely in RAM it would end up reading the entire\ntable several times over.\n\n-- \ngreg\n\n", "msg_date": "19 Dec 2005 15:54:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is the optimizer choice right?" } ]
[ { "msg_contents": "Usually manufacturer's claims are tested in 'ideal' conditions, it may not translate well on bandwidth seen on the host side. A 2Gbps Fiber Channel connection would (ideally) give you about 250MB/sec per HBA. Not sure how it translates for GigE considering scsi protocol overheads, but you may want to confirm from them how they achieved 370MB/sec (hwo many iSCSI controllers, what file system, how many drives, what RAID type, block size, strip size, cache settings, etc), and whether it was physical I/O or cached. In other words, if someone has any benchmark numbers, that would be helpful.\r\n \r\nRegarding diskless iscsi boots for future servers, remember that it's a shared storage, if you have a busy server attached to your Nexsan, you may have to think twice on sharing the performance (throughput and IOPS of the storage controller) without impacting the existing hosts, unless you are zizing it now.\r\n \r\nAnd you want to have a pretty clean GigE network, more or less dedicated to this block traffic.\r\n \r\nLarge internal storage with more memory and AMD CPUs is an option as Luke had originally suggested. Check out Appro as well.\r\n \r\nI'd also be curious to know if someone has been using this (SATA/iSCSI/SAS) solution and what are some I/O numbers observed.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Matthew Schumacher [mailto:[email protected]] \r\n\tSent: Mon 12/19/2005 7:41 PM \r\n\tTo: [email protected] \r\n\tCc: \r\n\tSubject: Re: [PERFORM] SAN/NAS options\r\n\t\r\n\t\r\n\r\n\tJim C. Nasby wrote: \r\n\t> On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote: \r\n\t> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD. \r\n\t> \r\n\t>>Unlike Solaris or other commercial offerings, there is no nice volume \r\n\t>>management available. While I'd love to keep managing a dozen or so \r\n\t>>FreeBSD boxes, I could be persuaded to go to Solaris x86 if the volume \r\n\t>>management really shines and Postgres performs well on it. \r\n\t> \r\n\t> \r\n\t> Have you looked at vinum? It might not qualify as a true volume manager, \r\n\t> but it's still pretty handy. \r\n\r\n\tI am looking very closely at purchasing a SANRAD Vswitch 2000, a Nexsan \r\n\tSATABoy with SATA disks, and the Qlogic iscsi controller cards. \r\n\r\n\tNexsan claims up to 370MB/s sustained per controller and 44,500 IOPS but \r\n\tI'm not sure if that is good or bad. It's certainly faster than the LSI \r\n\tmegaraid controller I'm using now with a raid 1 mirror. \r\n\r\n\tThe sanrad box looks like it saves money in that you don't have to by \r\n\tcontroller cards for everything, but for I/O intensive servers such as \r\n\tthe database server, I would end up buying an iscsi controller card anyway. \r\n\r\n\tAt this point I'm not sure what the best solution is. I like the idea \r\n\tof having logical disks available though iscsi because of how flexible \r\n\tit is, but I really don't want to spend $20k (10 for the nexsan and 10 \r\n\tfor the sanrad) and end up with poor performance. \r\n\r\n\tOn other advantage to iscsi is that I can go completely diskless on my \r\n\tservers and boot from iscsi which means that I don't have to have spare \r\n\tdisks for each host, now I just have spare disks for the nexsan chassis. \r\n\r\n\tSo the question becomes: has anyone put postgres on an iscsi san, and if \r\n\tso how did it perform? 
\r\n\r\n\tschu \r\n\r\n\r\n\r\n\t---------------------------(end of broadcast)--------------------------- \r\n\tTIP 4: Have you searched our list archives? \r\n\r\n\t http://archives.postgresql.org \r\n\r\n", "msg_date": "Mon, 19 Dec 2005 20:24:59 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN/NAS options" } ]
[ { "msg_contents": "Re-ran it 3 times on each host - \r\n \r\nSun:\r\n-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\r\nstarting vacuum...end.\r\ntransaction type: TPC-B (sort of)\r\nscaling factor: 1\r\nnumber of clients: 10\r\nnumber of transactions per client: 3000\r\nnumber of transactions actually processed: 30000/30000\r\ntps = 827.810778 (including connections establishing)\r\ntps = 828.410801 (excluding connections establishing)\r\nreal 0m36.579s\r\nuser 0m1.222s\r\nsys 0m3.422s\r\n\r\nIntel:\r\n-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\r\nstarting vacuum...end.\r\ntransaction type: TPC-B (sort of)\r\nscaling factor: 1\r\nnumber of clients: 10\r\nnumber of transactions per client: 3000\r\nnumber of transactions actually processed: 30000/30000\r\ntps = 597.067503 (including connections establishing)\r\ntps = 597.606169 (excluding connections establishing)\r\nreal 0m50.380s\r\nuser 0m2.621s\r\nsys 0m7.818s\r\n\r\nThanks,\r\nAnjan\r\n \r\n\r\n\t-----Original Message----- \r\n\tFrom: Anjan Dave \r\n\tSent: Wed 12/7/2005 10:54 AM \r\n\tTo: Tom Lane \r\n\tCc: Vivek Khera; Postgresql Performance \r\n\tSubject: Re: [PERFORM] High context switches occurring \r\n\t\r\n\t\r\n\r\n\tThanks for your inputs, Tom. I was going after high concurrent clients, \r\n\tbut should have read this carefully - \r\n\r\n\t-s scaling_factor \r\n\t this should be used with -i (initialize) option. \r\n\t number of tuples generated will be multiple of the \r\n\t scaling factor. For example, -s 100 will imply 10M \r\n\t (10,000,000) tuples in the accounts table. \r\n\t default is 1. NOTE: scaling factor should be at least \r\n\t as large as the largest number of clients you intend \r\n\t to test; else you'll mostly be measuring update \r\n\tcontention. \r\n\r\n\tI'll rerun the tests. \r\n\r\n\tThanks, \r\n\tAnjan \r\n\r\n\r\n\t-----Original Message----- \r\n\tFrom: Tom Lane [mailto:[email protected]] \r\n\tSent: Tuesday, December 06, 2005 6:45 PM \r\n\tTo: Anjan Dave \r\n\tCc: Vivek Khera; Postgresql Performance \r\n\tSubject: Re: [PERFORM] High context switches occurring \r\n\r\n\t\"Anjan Dave\" <[email protected]> writes: \r\n\t> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench \r\n\t> starting vacuum...end. \r\n\t> transaction type: TPC-B (sort of) \r\n\t> scaling factor: 1 \r\n\t> number of clients: 1000 \r\n\t> number of transactions per client: 30 \r\n\t> number of transactions actually processed: 30000/30000 \r\n\t> tps = 45.871234 (including connections establishing) \r\n\t> tps = 46.092629 (excluding connections establishing) \r\n\r\n\tI can hardly think of a worse way to run pgbench :-(. These numbers are \r\n\tabout meaningless, for two reasons: \r\n\r\n\t1. You don't want number of clients (-c) much higher than scaling factor \r\n\t(-s in the initialization step). The number of rows in the \"branches\" \r\n\ttable will equal -s, and since every transaction updates one \r\n\trandomly-chosen \"branches\" row, you will be measuring mostly row-update \r\n\tcontention overhead if there's more concurrent transactions than there \r\n\tare rows. In the case -s 1, which is what you've got here, there is no \r\n\tactual concurrency at all --- all the transactions stack up on the \r\n\tsingle branches row. \r\n\r\n\t2. Running a small number of transactions per client means that \r\n\tstartup/shutdown transients overwhelm the steady-state data. You should \r\n\tprobably run at least a thousand transactions per client if you want \r\n\trepeatable numbers. 
\r\n\r\n\tTry something like \"-s 10 -c 10 -t 3000\" to get numbers reflecting test \r\n\tconditions more like what the TPC council had in mind when they designed \r\n\tthis benchmark. I tend to repeat such a test 3 times to see if the \r\n\tnumbers are repeatable, and quote the middle TPS number as long as \r\n\tthey're not too far apart. \r\n\r\n\t regards, tom lane \r\n\r\n\r\n\t---------------------------(end of broadcast)--------------------------- \r\n\tTIP 5: don't forget to increase your free space map settings \r\n\r\n", "msg_date": "Mon, 19 Dec 2005 21:08:22 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High context switches occurring" }, { "msg_contents": "Guys -\n\nHelp me out here as I try to understand this benchmark. What is the Sun \nhardware and operating system we are talking about here and what is the intel \nhardware and operating system? What was the Sun version of PostgreSQL \ncompiled with? Gcc on Solaris (assuming sparc) or Sun studio? What was \nPostgreSQL compiled with on intel? Gcc on linux?\n\nThanks,\nJuan\n\nOn Monday 19 December 2005 21:08, Anjan Dave wrote:\n> Re-ran it 3 times on each host -\n>\n> Sun:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 827.810778 (including connections establishing)\n> tps = 828.410801 (excluding connections establishing)\n> real 0m36.579s\n> user 0m1.222s\n> sys 0m3.422s\n>\n> Intel:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 597.067503 (including connections establishing)\n> tps = 597.606169 (excluding connections establishing)\n> real 0m50.380s\n> user 0m2.621s\n> sys 0m7.818s\n>\n> Thanks,\n> Anjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Anjan Dave\n> \tSent: Wed 12/7/2005 10:54 AM\n> \tTo: Tom Lane\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n>\n>\n> \tThanks for your inputs, Tom. I was going after high concurrent clients,\n> \tbut should have read this carefully -\n>\n> \t-s scaling_factor\n> \t this should be used with -i (initialize) option.\n> \t number of tuples generated will be multiple of the\n> \t scaling factor. For example, -s 100 will imply 10M\n> \t (10,000,000) tuples in the accounts table.\n> \t default is 1. 
NOTE: scaling factor should be at least\n> \t as large as the largest number of clients you intend\n> \t to test; else you'll mostly be measuring update\n> \tcontention.\n>\n> \tI'll rerun the tests.\n>\n> \tThanks,\n> \tAnjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Tom Lane [mailto:[email protected]]\n> \tSent: Tuesday, December 06, 2005 6:45 PM\n> \tTo: Anjan Dave\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n> \t\"Anjan Dave\" <[email protected]> writes:\n> \t> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> \t> starting vacuum...end.\n> \t> transaction type: TPC-B (sort of)\n> \t> scaling factor: 1\n> \t> number of clients: 1000\n> \t> number of transactions per client: 30\n> \t> number of transactions actually processed: 30000/30000\n> \t> tps = 45.871234 (including connections establishing)\n> \t> tps = 46.092629 (excluding connections establishing)\n>\n> \tI can hardly think of a worse way to run pgbench :-(. These numbers are\n> \tabout meaningless, for two reasons:\n>\n> \t1. You don't want number of clients (-c) much higher than scaling factor\n> \t(-s in the initialization step). The number of rows in the \"branches\"\n> \ttable will equal -s, and since every transaction updates one\n> \trandomly-chosen \"branches\" row, you will be measuring mostly row-update\n> \tcontention overhead if there's more concurrent transactions than there\n> \tare rows. In the case -s 1, which is what you've got here, there is no\n> \tactual concurrency at all --- all the transactions stack up on the\n> \tsingle branches row.\n>\n> \t2. Running a small number of transactions per client means that\n> \tstartup/shutdown transients overwhelm the steady-state data. You should\n> \tprobably run at least a thousand transactions per client if you want\n> \trepeatable numbers.\n>\n> \tTry something like \"-s 10 -c 10 -t 3000\" to get numbers reflecting test\n> \tconditions more like what the TPC council had in mind when they designed\n> \tthis benchmark. I tend to repeat such a test 3 times to see if the\n> \tnumbers are repeatable, and quote the middle TPS number as long as\n> \tthey're not too far apart.\n>\n> \t regards, tom lane\n>\n>\n> \t---------------------------(end of broadcast)---------------------------\n> \tTIP 5: don't forget to increase your free space map settings\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Mon, 19 Dec 2005 23:16:36 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring" }, { "msg_contents": "Hi there,\n\nI see a very low performance and high context switches on our\ndual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)\nwith 8Gb of RAM, running 8.1_STABLE. 
Any tips here ?\n\npostgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 3000\nnumber of transactions actually processed: 30000/30000\ntps = 163.817425 (including connections establishing)\ntps = 163.830558 (excluding connections establishing)\n\nreal 3m3.374s\nuser 0m1.888s\nsys 0m2.472s\n\noutput from vmstat 2\n\n 2 1 0 4185104 197904 3213888 0 0 0 1456 673 6852 25 1 45 29\n 6 0 0 4184880 197904 3213888 0 0 0 1456 673 6317 28 2 49 21\n 0 1 0 4184656 197904 3213888 0 0 0 1464 671 7049 25 2 42 31\n 3 0 0 4184432 197904 3213888 0 0 0 1436 671 7073 25 1 44 29\n 0 1 0 4184432 197904 3213888 0 0 0 1460 671 7014 28 1 42 29\n 0 1 0 4184096 197920 3213872 0 0 0 1440 670 7065 25 2 42 31\n 0 1 0 4183872 197920 3213872 0 0 0 1444 671 6718 26 2 44 28\n 0 1 0 4183648 197920 3213872 0 0 0 1468 670 6525 15 3 50 33\n 0 1 0 4184352 197920 3213872 0 0 0 1584 676 6476 12 2 50 36\n 0 1 0 4193232 197920 3213872 0 0 0 1424 671 5848 12 1 50 37\n 0 0 0 4195536 197920 3213872 0 0 0 20 509 104 0 0 99 1\n 0 0 0 4195536 197920 3213872 0 0 0 1680 573 25 0 0 99 1\n 0 0 0 4195536 197920 3213872 0 0 0 0 504 22 0 0 100\n\nprocessor : 1\nvendor : GenuineIntel\narch : IA-64\nfamily : Itanium 2\nmodel : 2\nrevision : 2\narchrev : 0\nfeatures : branchlong\ncpu number : 0\ncpu regs : 4\ncpu MHz : 1600.010490\nitc MHz : 1600.010490\nBogoMIPS : 2392.06\nsiblings : 1\n\n\n\nOn Mon, 19 Dec 2005, Anjan Dave wrote:\n\n\n> Re-ran it 3 times on each host -\n>\n> Sun:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 827.810778 (including connections establishing)\n> tps = 828.410801 (excluding connections establishing)\n> real 0m36.579s\n> user 0m1.222s\n> sys 0m3.422s\n>\n> Intel:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 597.067503 (including connections establishing)\n> tps = 597.606169 (excluding connections establishing)\n> real 0m50.380s\n> user 0m2.621s\n> sys 0m7.818s\n>\n> Thanks,\n> Anjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Anjan Dave\n> \tSent: Wed 12/7/2005 10:54 AM\n> \tTo: Tom Lane\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n>\n>\n> \tThanks for your inputs, Tom. I was going after high concurrent clients,\n> \tbut should have read this carefully -\n>\n> \t-s scaling_factor\n> \t this should be used with -i (initialize) option.\n> \t number of tuples generated will be multiple of the\n> \t scaling factor. For example, -s 100 will imply 10M\n> \t (10,000,000) tuples in the accounts table.\n> \t default is 1. 
NOTE: scaling factor should be at least\n> \t as large as the largest number of clients you intend\n> \t to test; else you'll mostly be measuring update\n> \tcontention.\n>\n> \tI'll rerun the tests.\n>\n> \tThanks,\n> \tAnjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Tom Lane [mailto:[email protected]]\n> \tSent: Tuesday, December 06, 2005 6:45 PM\n> \tTo: Anjan Dave\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n> \t\"Anjan Dave\" <[email protected]> writes:\n> \t> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> \t> starting vacuum...end.\n> \t> transaction type: TPC-B (sort of)\n> \t> scaling factor: 1\n> \t> number of clients: 1000\n> \t> number of transactions per client: 30\n> \t> number of transactions actually processed: 30000/30000\n> \t> tps = 45.871234 (including connections establishing)\n> \t> tps = 46.092629 (excluding connections establishing)\n>\n> \tI can hardly think of a worse way to run pgbench :-(. These numbers are\n> \tabout meaningless, for two reasons:\n>\n> \t1. You don't want number of clients (-c) much higher than scaling factor\n> \t(-s in the initialization step). The number of rows in the \"branches\"\n> \ttable will equal -s, and since every transaction updates one\n> \trandomly-chosen \"branches\" row, you will be measuring mostly row-update\n> \tcontention overhead if there's more concurrent transactions than there\n> \tare rows. In the case -s 1, which is what you've got here, there is no\n> \tactual concurrency at all --- all the transactions stack up on the\n> \tsingle branches row.\n>\n> \t2. Running a small number of transactions per client means that\n> \tstartup/shutdown transients overwhelm the steady-state data. You should\n> \tprobably run at least a thousand transactions per client if you want\n> \trepeatable numbers.\n>\n> \tTry something like \"-s 10 -c 10 -t 3000\" to get numbers reflecting test\n> \tconditions more like what the TPC council had in mind when they designed\n> \tthis benchmark. I tend to repeat such a test 3 times to see if the\n> \tnumbers are repeatable, and quote the middle TPS number as long as\n> \tthey're not too far apart.\n>\n> \t regards, tom lane\n>\n>\n> \t---------------------------(end of broadcast)---------------------------\n> \tTIP 5: don't forget to increase your free space map settings\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Tue, 20 Dec 2005 08:26:29 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring" }, { "msg_contents": "It basically says pg_xlog is the bottleneck and move it to the disk with \nthe best response time that you can afford. :-)\nIncreasing checkpoint_segments doesn't seem to help much. Playing with \nwal_sync_method might change the behavior.\n\nFor proof .. 
On Solaris, the /tmp is like a RAM Drive...Of course DO NOT \nTRY ON PRODUCTION.\n\n-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 3000\nnumber of transactions actually processed: 30000/30000\ntps = 356.578050 (including connections establishing)\ntps = 356.733043 (excluding connections establishing)\n\nreal 1m24.396s\nuser 0m2.550s\nsys 0m3.404s\n-bash-3.00$ mv pg_xlog /tmp\n-bash-3.00$ ln -s /tmp/pg_xlog pg_xlog\n-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 3000\nnumber of transactions actually processed: 30000/30000\ntps = 2413.661323 (including connections establishing)\ntps = 2420.754581 (excluding connections establishing)\n\nreal 0m12.617s\nuser 0m2.229s\nsys 0m2.950s\n-bash-3.00$ rm pg_xlog\n-bash-3.00$ mv /tmp/pg_xlog pg_xlog\n-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 3000\nnumber of transactions actually processed: 30000/30000\ntps = 350.227682 (including connections establishing)\ntps = 350.382825 (excluding connections establishing)\n\nreal 1m27.595s\nuser 0m2.537s\nsys 0m3.386s\n-bash-3.00$\n\n\nRegards,\nJignesh\n\n\nOleg Bartunov wrote:\n\n> Hi there,\n>\n> I see a very low performance and high context switches on our\n> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)\n> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?\n>\n> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c \n> 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 163.817425 (including connections establishing)\n> tps = 163.830558 (excluding connections establishing)\n>\n> real 3m3.374s\n> user 0m1.888s\n> sys 0m2.472s\n>\n> output from vmstat 2\n>\n> 2 1 0 4185104 197904 3213888 0 0 0 1456 673 6852 \n> 25 1 45 29\n> 6 0 0 4184880 197904 3213888 0 0 0 1456 673 6317 \n> 28 2 49 21\n> 0 1 0 4184656 197904 3213888 0 0 0 1464 671 7049 \n> 25 2 42 31\n> 3 0 0 4184432 197904 3213888 0 0 0 1436 671 7073 \n> 25 1 44 29\n> 0 1 0 4184432 197904 3213888 0 0 0 1460 671 7014 \n> 28 1 42 29\n> 0 1 0 4184096 197920 3213872 0 0 0 1440 670 7065 \n> 25 2 42 31\n> 0 1 0 4183872 197920 3213872 0 0 0 1444 671 6718 \n> 26 2 44 28\n> 0 1 0 4183648 197920 3213872 0 0 0 1468 670 6525 \n> 15 3 50 33\n> 0 1 0 4184352 197920 3213872 0 0 0 1584 676 6476 \n> 12 2 50 36\n> 0 1 0 4193232 197920 3213872 0 0 0 1424 671 5848 \n> 12 1 50 37\n> 0 0 0 4195536 197920 3213872 0 0 0 20 509 104 \n> 0 0 99 1\n> 0 0 0 4195536 197920 3213872 0 0 0 1680 573 25 \n> 0 0 99 1\n> 0 0 0 4195536 197920 3213872 0 0 0 0 504 22 \n> 0 0 100\n>\n> processor : 1\n> vendor : GenuineIntel\n> arch : IA-64\n> family : Itanium 2\n> model : 2\n> revision : 2\n> archrev : 0\n> features : branchlong\n> cpu number : 0\n> cpu regs : 4\n> cpu MHz : 1600.010490\n> itc MHz : 1600.010490\n> BogoMIPS : 2392.06\n> siblings : 1\n>\n>\n>\n> On Mon, 19 Dec 2005, Anjan Dave wrote:\n>\n>\n>> Re-ran it 3 times on each host -\n>>\n>> Sun:\n>> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n>> starting vacuum...end.\n>> transaction type: 
TPC-B (sort of)\n>> scaling factor: 1\n>> number of clients: 10\n>> number of transactions per client: 3000\n>> number of transactions actually processed: 30000/30000\n>> tps = 827.810778 (including connections establishing)\n>> tps = 828.410801 (excluding connections establishing)\n>> real 0m36.579s\n>> user 0m1.222s\n>> sys 0m3.422s\n>>\n>> Intel:\n>> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> number of clients: 10\n>> number of transactions per client: 3000\n>> number of transactions actually processed: 30000/30000\n>> tps = 597.067503 (including connections establishing)\n>> tps = 597.606169 (excluding connections establishing)\n>> real 0m50.380s\n>> user 0m2.621s\n>> sys 0m7.818s\n>>\n>> Thanks,\n>> Anjan\n>>\n>>\n>> -----Original Message-----\n>> From: Anjan Dave\n>> Sent: Wed 12/7/2005 10:54 AM\n>> To: Tom Lane\n>> Cc: Vivek Khera; Postgresql Performance\n>> Subject: Re: [PERFORM] High context switches occurring\n>>\n>>\n>>\n>> Thanks for your inputs, Tom. I was going after high concurrent \n>> clients,\n>> but should have read this carefully -\n>>\n>> -s scaling_factor\n>> this should be used with -i (initialize) option.\n>> number of tuples generated will be multiple of the\n>> scaling factor. For example, -s 100 will imply 10M\n>> (10,000,000) tuples in the accounts table.\n>> default is 1. NOTE: scaling factor should be at \n>> least\n>> as large as the largest number of clients you intend\n>> to test; else you'll mostly be measuring update\n>> contention.\n>>\n>> I'll rerun the tests.\n>>\n>> Thanks,\n>> Anjan\n>>\n>>\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Tuesday, December 06, 2005 6:45 PM\n>> To: Anjan Dave\n>> Cc: Vivek Khera; Postgresql Performance\n>> Subject: Re: [PERFORM] High context switches occurring\n>>\n>> \"Anjan Dave\" <[email protected]> writes:\n>> > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n>> > starting vacuum...end.\n>> > transaction type: TPC-B (sort of)\n>> > scaling factor: 1\n>> > number of clients: 1000\n>> > number of transactions per client: 30\n>> > number of transactions actually processed: 30000/30000\n>> > tps = 45.871234 (including connections establishing)\n>> > tps = 46.092629 (excluding connections establishing)\n>>\n>> I can hardly think of a worse way to run pgbench :-(. These \n>> numbers are\n>> about meaningless, for two reasons:\n>>\n>> 1. You don't want number of clients (-c) much higher than scaling \n>> factor\n>> (-s in the initialization step). The number of rows in the \n>> \"branches\"\n>> table will equal -s, and since every transaction updates one\n>> randomly-chosen \"branches\" row, you will be measuring mostly \n>> row-update\n>> contention overhead if there's more concurrent transactions than \n>> there\n>> are rows. In the case -s 1, which is what you've got here, there \n>> is no\n>> actual concurrency at all --- all the transactions stack up on the\n>> single branches row.\n>>\n>> 2. Running a small number of transactions per client means that\n>> startup/shutdown transients overwhelm the steady-state data. You \n>> should\n>> probably run at least a thousand transactions per client if you want\n>> repeatable numbers.\n>>\n>> Try something like \"-s 10 -c 10 -t 3000\" to get numbers \n>> reflecting test\n>> conditions more like what the TPC council had in mind when they \n>> designed\n>> this benchmark. 
I tend to repeat such a test 3 times to see if the\n>> numbers are repeatable, and quote the middle TPS number as long as\n>> they're not too far apart.\n>>\n>> regards, tom lane\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 20 Dec 2005 01:44:39 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I see a very low performance and high context switches on our\n> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)\n> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?\n\n> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n\nYou can't expect any different with more clients than scaling factor :-(.\n\nNote that -s is only effective when supplied with -i; it's basically\nignored during an actual test run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Dec 2005 09:41:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "On Tue, 20 Dec 2005, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n>> I see a very low performance and high context switches on our\n>> dual itanium2 slackware box (Linux ptah 2.6.14 #1 SMP)\n>> with 8Gb of RAM, running 8.1_STABLE. Any tips here ?\n>\n>> postgres@ptah:~/cvs/8.1/pgsql/contrib/pgbench$ time pgbench -s 10 -c 10 -t 3000 pgbench\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> number of clients: 10\n>\n> You can't expect any different with more clients than scaling factor :-(.\n\nArgh :) I copy'n pasted from previous message.\n\nI still wondering with very poor performance of my server. Moving\npgdata to RAID6 helped - about 600 tps. 
Then, I moved pg_xlog to separate\ndisk and got strange error messages\n\npostgres@ptah:~$ time pgbench -c 10 -t 3000 pgbench\nstarting vacuum...end.\nClient 0 aborted in state 8: ERROR: integer out of range\nClient 7 aborted in state 8: ERROR: integer out of range\n\ndropdb,createdb helped, but performance is about 160 tps.\n\nLow-end AMD64 with SATA disks gives me ~400 tps in spite of\ndisks on itanium2 faster ( 80MB/sec ) than on AMD64 (60MB/sec).\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Tue, 20 Dec 2005 18:18:20 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I still wondering with very poor performance of my server. Moving\n> pgdata to RAID6 helped - about 600 tps. Then, I moved pg_xlog to separate\n> disk and got strange error messages\n\n> postgres@ptah:~$ time pgbench -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> Client 0 aborted in state 8: ERROR: integer out of range\n> Client 7 aborted in state 8: ERROR: integer out of range\n\nI've seen that too, after re-using an existing pgbench database enough\ntimes. I think that the way the test script is written, the adjustments\nto the branch balances are always in the same direction, and so\neventually the fields overflow. It's irrelevant to performance though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Dec 2005 10:28:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High context switches occurring " } ]
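The pgbench advice scattered through this thread fits into one short recipe: initialize with -i and a scaling factor at least as large as the client count (-s is essentially ignored on a plain test run), use enough transactions per client to drown out startup/shutdown noise, repeat the run about three times and quote the middle TPS figure, and give pg_xlog the fastest dedicated device available (the /tmp symlink above is only a demonstration, not something to do in production). A sketch of that sequence, assuming the 8.1-era contrib pgbench and a placeholder /fast_disk mount point:

    # create a fresh test database and initialize it; -s only takes effect together with -i
    createdb pgbench
    pgbench -i -s 10 pgbench

    # run with clients <= scaling factor and enough transactions per client;
    # repeat ~3 times and quote the middle TPS number
    pgbench -c 10 -t 3000 pgbench
    pgbench -c 10 -t 3000 pgbench
    pgbench -c 10 -t 3000 pgbench

    # optionally relocate pg_xlog to a dedicated, persistent fast disk
    pg_ctl -D $PGDATA stop
    mv $PGDATA/pg_xlog /fast_disk/pg_xlog
    ln -s /fast_disk/pg_xlog $PGDATA/pg_xlog
    pg_ctl -D $PGDATA start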
[ { "msg_contents": "Hello,\n\nWe have a database containing PostGIS MAP data, it is accessed mainly\nvia JDBC. There are multiple simultaneous read-only connections taken\nfrom the JBoss connection pooling, and there usually are no active\nwriters. We use connection.setReadOnly(true).\n\nNow my question is what is best performance-wise, if it does make any\ndifference at all:\n\nHaving autocommit on or off? (I presume \"off\")\n\nUsing commit or rollback?\n\nCommitting / rolling back occasionally (e. G. when returning the\nconnection to the pool) or not at all (until the pool closes the\nconnection)?\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 20 Dec 2005 11:40:53 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Read only transactions - Commit or Rollback" }, { "msg_contents": "Markus Schaber schrieb:\n> Hello,\n> \n> We have a database containing PostGIS MAP data, it is accessed mainly\n> via JDBC. There are multiple simultaneous read-only connections taken\n> from the JBoss connection pooling, and there usually are no active\n> writers. We use connection.setReadOnly(true).\n> \n> Now my question is what is best performance-wise, if it does make any\n> difference at all:\n> \n> Having autocommit on or off? (I presume \"off\")\n\n\nIf you are using large ResultSets, it is interesting to know that \nStatement.setFetchSize() does not do anything as long as you have \nautocommit on. So you might want to always disable autocommit and set a \nreasonable fetch size with large results, or otherwise have serious \nmemory problems in Java/JDBC.\n", "msg_date": "Tue, 20 Dec 2005 13:05:30 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" } ]
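Michael's point about setFetchSize() can be put into a few lines of JDBC. This is a sketch only, assuming the PostgreSQL JDBC driver and the same kind of pooled, read-only map connection Markus describes; dataSource, the table and the column names are placeholders:

    Connection conn = dataSource.getConnection();   // e.g. handed out by the JBoss pool
    conn.setReadOnly(true);
    conn.setAutoCommit(false);            // without this the driver ignores the fetch size

    Statement st = conn.createStatement();          // TYPE_FORWARD_ONLY, the default
    st.setFetchSize(1000);                // fetch through a cursor in batches of ~1000 rows

    ResultSet rs = st.executeQuery("SELECT id, the_geom FROM map_table");  // placeholder query
    while (rs.next()) {
        // process one row at a time instead of materialising the whole result in the heap
    }
    rs.close();
    st.close();

    conn.rollback();                      // end the read-only transaction
    conn.close();                         // back to the pool

With autocommit left on, the driver reads the complete result set into memory before returning it, which is exactly the failure mode Michael warns about for large results.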
[ { "msg_contents": "afaik, this should be completely neglectable.\n\nstarting a transaction implies write access. if there is none, You do not need to think about transactions, because there are none.\n\npostgres needs to schedule the writing transactions with the reading ones, anyway.\n\nBut I am not that performance profession anyway ;-)\n\n\nregards,\nMarcus\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von Markus\nSchaber\nGesendet: Dienstag, 20. Dezember 2005 11:41\nAn: PostgreSQL Performance List\nBetreff: [PERFORM] Read only transactions - Commit or Rollback\n\n\nHello,\n\nWe have a database containing PostGIS MAP data, it is accessed mainly\nvia JDBC. There are multiple simultaneous read-only connections taken\nfrom the JBoss connection pooling, and there usually are no active\nwriters. We use connection.setReadOnly(true).\n\nNow my question is what is best performance-wise, if it does make any\ndifference at all:\n\nHaving autocommit on or off? (I presume \"off\")\n\nUsing commit or rollback?\n\nCommitting / rolling back occasionally (e. G. when returning the\nconnection to the pool) or not at all (until the pool closes the\nconnection)?\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n", "msg_date": "Tue, 20 Dec 2005 11:55:23 +0100", "msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\nNďż˝rder-Tuitje wrote:\n|> We have a database containing PostGIS MAP data, it is accessed\n|> mainly via JDBC. There are multiple simultaneous read-only\n|> connections taken from the JBoss connection pooling, and there\n|> usually are no active writers. We use connection.setReadOnly(true).\n|>\n|> Now my question is what is best performance-wise, if it does make\n|> any difference at all:\n|>\n|> Having autocommit on or off? (I presume \"off\")\n|>\n|> Using commit or rollback?\n|>\n|> Committing / rolling back occasionally (e. G. when returning the\n|> connection to the pool) or not at all (until the pool closes the\n|> connection)?\n|>\n| afaik, this should be completely neglectable.\n|\n| starting a transaction implies write access. if there is none, You do\n| not need to think about transactions, because there are none.\n|\n| postgres needs to schedule the writing transactions with the reading\n| ones, anyway.\n|\n| But I am not that performance profession anyway ;-)\n\nHello, Marcus, Nďż˝rder, list.\n\nWhat about isolation? 
For several dependent calculations, MVCC doesn't\nhappen a bit with autocommit turned on, right?\n\nCheers,\n- --\n~ Grega Bremec\n~ gregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.0 (GNU/Linux)\n\niD8DBQFDp+2afu4IwuB3+XoRA6j3AJ0Ri0/NrJtHg4xBNcFsVFFW0XvCoQCfereo\naX6ThZIlPL0RhETJK9IcqtU=\n=xalw\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 20 Dec 2005 12:41:09 +0100", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Hi, Marcus,\n\nN�rder-Tuitje wrote:\n> afaik, this should be completely neglectable.\n> \n> starting a transaction implies write access. if there is none, You do\n> not need to think about transactions, because there are none.\n\nHmm, I always thought that the transaction will be opened at the first\nstatement, because there _could_ be a parallel writing transaction\nstarted later.\n\n> postgres needs to schedule the writing transactions with the reading\n> ones, anyway.\n\nAs I said, there usually are no writing transactions on the same database.\n\nBtw, there's another setting that might make a difference:\n\nHaving ACID-Level SERIALIZABLE or READ COMMITED?\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 20 Dec 2005 13:03:15 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Markus Schaber writes:\n\n> As I said, there usually are no writing transactions on the same database.\n>\n> Btw, there's another setting that might make a difference:\n>\n> Having ACID-Level SERIALIZABLE or READ COMMITED?\n\nWell, if nonrepeatable or phantom reads would pose a problem because\nof those occasional writes, you wouldn't be considering autocommit for\nperformance reasons either, would you?\n\nregards,\nAndreas\n-- \n", "msg_date": "Tue, 20 Dec 2005 13:27:05 +0100", "msg_from": "Andreas Seltenreich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Hello, Andreas,\n\nAndreas Seltenreich wrote:\n\n\n>>Btw, there's another setting that might make a difference:\n>>Having ACID-Level SERIALIZABLE or READ COMMITED?\n> \n> Well, if nonrepeatable or phantom reads would pose a problem because\n> of those occasional writes, you wouldn't be considering autocommit for\n> performance reasons either, would you?\n\nYes, the question is purely performance-wise. We don't care about any\nread/write conflicts in this special case.\n\nSome time ago, I had some tests with large bulk insertions, and it\nturned out that SERIALIZABLE seemed to be 30% faster, which surprised us.\n\nThat's why I ask this questions, and mainly because we currently cannot\nperform a large bunch of benchmarking.\n\nThanks,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 20 Dec 2005 13:31:53 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n> Some time ago, I had some tests with large bulk insertions, and it\n> turned out that SERIALIZABLE seemed to be 30% faster, which surprised us.\n\nThat surprises me too --- can you provide details on the test case so\nother people can reproduce it? AFAIR the only performance difference\nbetween SERIALIZABLE and READ COMMITTED is the frequency with which\ntransaction status snapshots are taken; your report suggests you were\nspending 30% of the time in GetSnapshotData, which is a lot higher than\nI've ever seen in a profile.\n\nAs to the original question, a transaction that hasn't modified the\ndatabase does not bother to write either a commit or abort record to\npg_xlog. I think you'd be very hard pressed to measure any speed\ndifference between saying COMMIT and saying ROLLBACK after a read-only\ntransaction. It'd be worth your while to let transactions run longer\nto minimize their startup/shutdown overhead, but there's a point of\ndiminishing returns --- you don't want client code leaving transactions\nopen for hours, because of the negative side-effects of holding locks\nthat long (eg, VACUUM can't reclaim dead rows).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Dec 2005 10:48:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> That surprises me too --- can you provide details on the test case so\n> other people can reproduce it? AFAIR the only performance difference\n> between SERIALIZABLE and READ COMMITTED is the frequency with which\n> transaction status snapshots are taken; your report suggests you were\n> spending 30% of the time in GetSnapshotData, which is a lot higher than\n> I've ever seen in a profile.\n\nPerhaps it reduced the amount of i/o concurrent vacuums were doing?\n\n-- \ngreg\n\n", "msg_date": "20 Dec 2005 11:02:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n\n>>Some time ago, I had some tests with large bulk insertions, and it\n>>turned out that SERIALIZABLE seemed to be 30% faster, which surprised us.\n> \n> That surprises me too --- can you provide details on the test case so\n> other people can reproduce it? AFAIR the only performance difference\n> between SERIALIZABLE and READ COMMITTED is the frequency with which\n> transaction status snapshots are taken; your report suggests you were\n> spending 30% of the time in GetSnapshotData, which is a lot higher than\n> I've ever seen in a profile.\n\nIt was in my previous Job two years ago, so I don't have access to the\nexact code, and my memory is foggy. It was PostGIS 0.8 and PostgreSQL 7.4.\n\nAFAIR, it was inserting into a table with about 6 columns and some\nindices, some columns having database-provided values (now() and a\nSERIAL column), where the other columns (a PostGIS Point, a long, a\nforeign key into another table) were set via the aplication. 
We tried\ndifferent insertion methods (INSERT, prepared statements, a pgjdbc patch\nto allow COPY support), different bunch sizes and different number of\nparallel connections to get the highest overall insert speed. However,\nthe project never went productive the way it was designed initially.\n\nAs you write about transaction snapshots: It may be that the PostgreSQL\nconfig was not optimized well enough, and the hard disk was rather slow.\n\n> As to the original question, a transaction that hasn't modified the\n> database does not bother to write either a commit or abort record to\n> pg_xlog. I think you'd be very hard pressed to measure any speed\n> difference between saying COMMIT and saying ROLLBACK after a read-only\n> transaction. It'd be worth your while to let transactions run longer\n> to minimize their startup/shutdown overhead, but there's a point of\n> diminishing returns --- you don't want client code leaving transactions\n> open for hours, because of the negative side-effects of holding locks\n> that long (eg, VACUUM can't reclaim dead rows).\n\nOkay, so I'll stick with my current behaviour (Autocommit off and\nROLLBACK after each bunch of work).\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 20 Dec 2005 17:07:00 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> That surprises me too --- can you provide details on the test case so\n>> other people can reproduce it? AFAIR the only performance difference\n>> between SERIALIZABLE and READ COMMITTED is the frequency with which\n>> transaction status snapshots are taken; your report suggests you were\n>> spending 30% of the time in GetSnapshotData, which is a lot higher than\n>> I've ever seen in a profile.\n\n> Perhaps it reduced the amount of i/o concurrent vacuums were doing?\n\nCan't see how it would do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Dec 2005 11:16:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback " } ]
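Tom's answer and Markus' conclusion amount to a simple connection-lifecycle rule: keep autocommit off, end each bunch of read-only work with a ROLLBACK (a COMMIT would cost the same, since nothing is written to pg_xlog either way), and never let a transaction sit open for hours, so VACUUM can keep reclaiming dead rows. A hedged sketch of that pattern on a pooled connection; MapRequest, pendingRequests and toSql() are made-up stand-ins for the application's real work:

    Connection conn = dataSource.getConnection();   // from the JBoss pool (assumption)
    conn.setReadOnly(true);
    conn.setAutoCommit(false);
    try {
        for (MapRequest req : pendingRequests) {    // hypothetical queue of map requests
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(req.toSql());   // hypothetical query builder
            while (rs.next()) {
                // ... build the answer for this request ...
            }
            rs.close();
            st.close();
            conn.rollback();   // ends the implicit read-only transaction after each unit of work
        }
    } finally {
        conn.close();          // hand the connection back to the pool
    }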
[ { "msg_contents": "Mmmm, good question.\n\nMVCC blocks reading processes when data is modified. using autocommit implies that each modification statement is an atomic operation.\n\non a massive readonly table, where no data is altered, MVCC shouldn't have any effect (but this is only an assumption) basing on\n\nhttp://en.wikipedia.org/wiki/Mvcc\n\nusing rowlevel locks with write access should make most of the mostly available to reading-only sessions, but this is an assumption only, too.\n\nmaybe the community knows a little more ;-)\n\nregards,\nmarcus\n\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]]Im Auftrag von Grega Bremec\nGesendet: Dienstag, 20. Dezember 2005 12:41\nAn: PostgreSQL Performance List\nBetreff: Re: [PERFORM] Read only transactions - Commit or Rollback\n\n\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\nNörder-Tuitje wrote:\n|> We have a database containing PostGIS MAP data, it is accessed\n|> mainly via JDBC. There are multiple simultaneous read-only\n|> connections taken from the JBoss connection pooling, and there\n|> usually are no active writers. We use connection.setReadOnly(true).\n|>\n|> Now my question is what is best performance-wise, if it does make\n|> any difference at all:\n|>\n|> Having autocommit on or off? (I presume \"off\")\n|>\n|> Using commit or rollback?\n|>\n|> Committing / rolling back occasionally (e. G. when returning the\n|> connection to the pool) or not at all (until the pool closes the\n|> connection)?\n|>\n| afaik, this should be completely neglectable.\n|\n| starting a transaction implies write access. if there is none, You do\n| not need to think about transactions, because there are none.\n|\n| postgres needs to schedule the writing transactions with the reading\n| ones, anyway.\n|\n| But I am not that performance profession anyway ;-)\n\nHello, Marcus, Nörder, list.\n\nWhat about isolation? For several dependent calculations, MVCC doesn't\nhappen a bit with autocommit turned on, right?\n\nCheers,\n- --\n~ Grega Bremec\n~ gregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.0 (GNU/Linux)\n\niD8DBQFDp+2afu4IwuB3+XoRA6j3AJ0Ri0/NrJtHg4xBNcFsVFFW0XvCoQCfereo\naX6ThZIlPL0RhETJK9IcqtU=\n=xalw\n-----END PGP SIGNATURE-----\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Tue, 20 Dec 2005 12:54:06 +0100", "msg_from": "=?iso-8859-2?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read only transactions - Commit or Rollback" }, { "msg_contents": "On 12/20/05, Nörder-Tuitje, Marcus <[email protected]> wrote:\n\n> MVCC blocks reading processes when data is modified.\n\nThat is incorrect. The main difference between 2PL and MVCC is that\nreaders are never blocked under MVCC.\n\ngreetings,\nNicolas\n\n--\nNicolas Barbier\nhttp://www.gnu.org/philosophy/no-word-attachments.html\n", "msg_date": "Tue, 20 Dec 2005 14:06:15 +0100", "msg_from": "Nicolas Barbier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read only transactions - Commit or Rollback" } ]
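Nicolas' correction is easy to verify against a live server: under MVCC a plain reader never waits for a writer, and a SERIALIZABLE reader keeps seeing the snapshot taken at its first statement. A two-connection sketch; the connection parameters and the demo_rows table are invented for the illustration:

    Connection reader = DriverManager.getConnection(url, user, pass);
    Connection writer = DriverManager.getConnection(url, user, pass);

    reader.setAutoCommit(false);
    reader.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

    Statement r = reader.createStatement();
    ResultSet rs = r.executeQuery("SELECT sum(amount) FROM demo_rows");  // snapshot taken here
    rs.next();
    long before = rs.getLong(1);
    rs.close();

    // a concurrent writer changes the data and commits; the reader is never blocked ...
    Statement w = writer.createStatement();
    w.executeUpdate("UPDATE demo_rows SET amount = amount + 1");   // autocommit on, so this commits
    w.close();

    // ... and inside its serializable transaction the reader still sees the old snapshot
    rs = r.executeQuery("SELECT sum(amount) FROM demo_rows");
    rs.next();
    long after = rs.getLong(1);   // equal to 'before'
    rs.close();
    r.close();

    reader.rollback();
    reader.close();
    writer.close();

Under READ COMMITTED the second query would see the writer's change instead, but in neither isolation level does the reader block.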
[ { "msg_contents": "Hi!\n\nWhat do you suggest for the next problem?\nWe have complex databases with some 100million rows (2-3million new \nrecords per month). Our current servers are working on low resposibility \nin these days, so we have to buy new hardver for database server. Some \nweeks ago we started to work with PostgreSQL8.1, which solved the \nproblem for some months.\nThere are some massive, hard query execution, which are too slow (5-10 \nor more minutes). The parallel processing is infrequent (rarely max. 4-5 \nparallel query execution).\nSo we need high performance in query execution with medium parallel \nprocessability.\nWhat's your opinion what productions could help us? What is the best or \nonly better choice?\nThe budget line is about 30 000$ - 40 000$.\n\nRegards, Atesz\n", "msg_date": "Tue, 20 Dec 2005 19:27:15 +0100", "msg_from": "Antal Attila <[email protected]>", "msg_from_op": true, "msg_subject": "What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Tue, Dec 20, 2005 at 07:27:15PM +0100, Antal Attila wrote:\n> We have complex databases with some 100million rows (2-3million new \n\nHow much space does that equate to?\n\n> records per month). Our current servers are working on low resposibility \n> in these days, so we have to buy new hardver for database server. Some \n> weeks ago we started to work with PostgreSQL8.1, which solved the \n> problem for some months.\n> There are some massive, hard query execution, which are too slow (5-10 \n> or more minutes). The parallel processing is infrequent (rarely max. 4-5 \n> parallel query execution).\n> So we need high performance in query execution with medium parallel \n> processability.\n> What's your opinion what productions could help us? What is the best or \n> only better choice?\n> The budget line is about 30 000$ - 40 000$.\n\nHave you optimized the queries?\n\nItems that generally have the biggest impact on performance in\ndecreasing order:\n1. System architecture\n2. Database design\n3. (for long-running/problem queries) Query plans\n4. Disk I/O\n5. Memory\n6. CPU\n\nSo, I'd make sure that the queries have been optimized (and that\nincludes tuning postgresql.conf) before assuming you need more hardware.\n\nBased on what you've told us (very little parallelization), then your\nbiggest priority is probably either disk IO or memory (or both). Without\nknowing the size of your database/working set it's difficult to provide\nmore specific advice.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 20 Dec 2005 14:06:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "\nOn Dec 20, 2005, at 1:27 PM, Antal Attila wrote:\n\n> The budget line is about 30 000$ - 40 000$.\n\nLike Jim said, without more specifics it is hard to give more \nspecific recommendations, but I'm architecting something like this \nfor my current app which needs ~100GB disk space. 
I made room to \ngrow in my configuration:\n\ndual opteron 2.2GHz\n4GB RAM\nLSI MegaRAID 320-2X\n14-disk SCSI U320 enclosure with 15k RPM drives, 7 connected to each \nchannel on the RAID.\n 1 pair in RAID1 mirror for OS + pg_xlog\n rest in RAID10 with each mirrored pair coming from opposite SCSI \nchannels for data\n\nI run FreeBSD but whatever you prefer should be sufficient if it is \nnot windows.\n\nI don't know how prices are in Hungary, but around here something \nlike this with 36GB drives comes to around $11,000 or $12,000.\n\nThe place I concentrate on is the disk I/O bandwidth which is why I \nprefer Opteron over Intel XEON.\n\n", "msg_date": "Tue, 20 Dec 2005 16:08:20 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Can you elaborate on the reasons the opteron is better than the Xeon when it \ncomes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One of our \ntables is about 13 million rows. I had a number of queries against this \ntable that used innner joins on 5 or 6 tables including the 13 million row \none. The performance was atrocious. The database itself is about 20 gigs \nbut I want it to scale to 100 gigs. I tuned postgresql as best I could and \ngave the server huge amounts of memory for caching as well. I also tweaked \nthe cost parameters for a sequential scan vs an index scan of the query \noptimizer and used the query explain mechanism to get some idea of what the \noptimizer was doing and where I should index the tables. When I added the \nsixth table to the inner join the query performance took a nose dive. \nAdmittedly this system is a single PIII 1000Mhz with 1.2 gigs of ram and no \nraid. I do have two Ultra 160 scsi drives with the database tables mount \npoint on a partition on one physical drive and pg_xlog mount point on another \npartition of the second drive. I have been trying to get my employer to \nspring for new hardware ($8k to $10k) which I had planned to be a dual - dual \ncore opteron system from HP. Until they agree to spend the money I resorted \nto writing a plpgsql functions to handle the queries. Inside plpgsql I can \nbreak the query apart into seperate stages each of which runs much faster. I \ncan use temporary tables to store intermediate results without worrying about \ntemp table collisions with different users thanks to transaction isolation. \nI am convinced we need new hardware to scale this application *but* I agree \nwith the consensus voiced here that it is more important to optimize the \nquery first before going out to buy new hardware. I was able to do things \nwith PostgreSQL on this cheap server that I could never imagine doing with \nSQL server or even oracle on such a low end box. My OS is Fedora Core 3 but \nI wonder if anyone has tested and benchmarked PostgreSQL on the new Sun x64 \nservers running Solaris 10 x86.\n\nThanks,\nJuan\n\nOn Tuesday 20 December 2005 16:08, Vivek Khera wrote:\n> On Dec 20, 2005, at 1:27 PM, Antal Attila wrote:\n> > The budget line is about 30 000$ - 40 000$.\n>\n> Like Jim said, without more specifics it is hard to give more\n> specific recommendations, but I'm architecting something like this\n> for my current app which needs ~100GB disk space. 
I made room to\n> grow in my configuration:\n>\n> dual opteron 2.2GHz\n> 4GB RAM\n> LSI MegaRAID 320-2X\n> 14-disk SCSI U320 enclosure with 15k RPM drives, 7 connected to each\n> channel on the RAID.\n> 1 pair in RAID1 mirror for OS + pg_xlog\n> rest in RAID10 with each mirrored pair coming from opposite SCSI\n> channels for data\n>\n> I run FreeBSD but whatever you prefer should be sufficient if it is\n> not windows.\n>\n> I don't know how prices are in Hungary, but around here something\n> like this with 36GB drives comes to around $11,000 or $12,000.\n>\n> The place I concentrate on is the disk I/O bandwidth which is why I\n> prefer Opteron over Intel XEON.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Tue, 20 Dec 2005 19:50:47 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Tue, 20 Dec 2005, Juan Casero wrote:\n\n> Date: Tue, 20 Dec 2005 19:50:47 -0500\n> From: Juan Casero <[email protected]>\n> To: [email protected]\n> Subject: Re: [PERFORM] What's the best hardver for PostgreSQL 8.1?\n> \n> Can you elaborate on the reasons the opteron is better than the Xeon when it\n> comes to disk io?\n\nthe opteron is cheaper so you have more money to spend on disks :-)\n\nalso when you go into multi-cpu systems the front-side-bus design of the \nXeon's can easily become your system bottleneck so that you can't take \nadvantage of all the CPU's becouse they stall waiting for memory accesses, \nOpteron systems have a memory bus per socket so the more CPU's you have \nthe more memory bandwidth you have.\n\n\n> The database itself is about 20 gigs\n> but I want it to scale to 100 gigs.\n\nhow large is the working set? in your tests you ran into swapping on your \n1.2G system, buying a dual opteron with 16gigs of ram will allow you to \nwork with much larger sets of data, and you can go beyond that if needed.\n\nDavid Lang\n", "msg_date": "Tue, 20 Dec 2005 18:46:31 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Jim C. Nasby wrote:\n\n>How much space does that equate to?\n> \n>\n>Have you optimized the queries?\n>\n>Items that generally have the biggest impact on performance in\n>decreasing order:\n>1. System architecture\n>2. Database design\n>3. (for long-running/problem queries) Query plans\n>4. Disk I/O\n>5. Memory\n>6. CPU\n>\n>So, I'd make sure that the queries have been optimized (and that\n>includes tuning postgresql.conf) before assuming you need more hardware.\n>\n>Based on what you've told us (very little parallelization), then your\n>biggest priority is probably either disk IO or memory (or both). Without\n>knowing the size of your database/working set it's difficult to provide\n>more specific advice.\n> \n>\nHi!\n\nWe have 3 Compaq Proliant ML530 servers with dual Xeon 2.8GHz \nprocessors, 3 GB DDR RAM, Ultra Wide SCSI RAID5 10000rpm and 1000Gbit \nethernet. We partitioned our databases among these machines, but there \nare cross refrences among the machines theoretically. Now the size of \ndatas is about 100-110GB. We've used these servers for 3 years with \nDebian Linux. We have already optimized the given queries and the \npostgresql.conf. 
We tried more tricks and ideas and we read and asked on \nmailing lists. We cannot do anything, we should buy new server for the \ndatabases, because we develop our system for newer services, so the size \nwill grow along. After that we need better responsiblility and shorter \nexecution time for the big queries (These queries are too complicated to \ndiscuss here, and more times we optimized with plpgsql stored procedures.).\nThe PostgreSQL 8.1 solved more paralellization and overload problem, the \naverage load is decreased significantly on our servers. But the big \nqueries aren't fast enough. We think the hardver is the limit. Generally \n2 parallel guery running in working hours, after we make backups at night.\n\nRegards, Atesz\n\n", "msg_date": "Wed, 21 Dec 2005 11:38:43 +0100", "msg_from": "Antal Attila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Juan Casero wrote:\n> Can you elaborate on the reasons the opteron is better than the Xeon when it \n> comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One of our \n\nOpterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode, \ntransfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that \nblock and then the CPU must do extra work in copying the memory to > \n4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the \nbackground.\n", "msg_date": "Wed, 21 Dec 2005 15:57:56 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz \nopterons, 2 Gigs of RAM and RAID 1. I would have liked a better server \ncapable of RAID but that seems to be out of his budget right now. Ok so I \nassume I get this Sun box. Most likely I will go with Linux since it is a \nfair bet he doesn't want to pay for the Solaris 10 x86 license. Although I \nkind of like the idea of using Solaris 10 x86 for this. I will assume I \nneed to install the x64 kernel that comes with say Fedora Core 4. Should I \nrun the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My instinct \ntells me 64 bit mode is most efficient for our database size about 20 gigs \nright now but may grow to 100 gigs in a year or so. I just finished loading \na 20 gig database on a dual 900 Mhz Ultrasparc III system with 2 gigs of ram \nand about 768 megs of shared memory available for the posgresql server \nrunning Solaris 10. The load has smoked a P4 3.2 Ghz system I am using also \nwith 2 gigs of ram running postgresql 8.0.3. I mean I started the sparc \nload after the P4 load. The sparc load has finished already rebuilding the \ndatabase from a pg_dump file but the P4 system is still going. The p4 has \n1.3 Gigs of shared memory allocated to postgresql. How about them apples?\n\n\nThanks,\nJuan\n\nOn Wednesday 21 December 2005 18:57, William Yu wrote:\n> Juan Casero wrote:\n> > Can you elaborate on the reasons the opteron is better than the Xeon when\n> > it comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One\n> > of our\n>\n> Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n> transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n> block and then the CPU must do extra work in copying the memory to >\n> 4GB. 
Versus on the Opteron, it's done by the IO adaptor using DMA in the\n> background.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n", "msg_date": "Wed, 21 Dec 2005 22:09:48 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Sorry folks. I had a couple of glasses of wine as I wrote this. Anyway I \noriginally wanted the box to have more than two drives so I could do RAID 5 \nbut that is going to cost too much. Also, contrary to my statement below it \nseems to me I should run the 32 bit postgresql server on the 64 bit kernel. \nWould you agree this will probably yield the best performance? I know it \ndepends alot on the system but for now this database is about 20 gigabytes. \nNot too large right now but it may grow 5x in the next year.\n\nThanks,\nJuan\n\nOn Wednesday 21 December 2005 22:09, Juan Casero wrote:\n> I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz\n> opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server\n> capable of RAID but that seems to be out of his budget right now. Ok so I\n> assume I get this Sun box. Most likely I will go with Linux since it is a\n> fair bet he doesn't want to pay for the Solaris 10 x86 license. Although I\n> kind of like the idea of using Solaris 10 x86 for this. I will assume I\n> need to install the x64 kernel that comes with say Fedora Core 4. Should I\n> run the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My\n> instinct tells me 64 bit mode is most efficient for our database size about\n> 20 gigs right now but may grow to 100 gigs in a year or so. I just\n> finished loading a 20 gig database on a dual 900 Mhz Ultrasparc III system\n> with 2 gigs of ram and about 768 megs of shared memory available for the\n> posgresql server running Solaris 10. The load has smoked a P4 3.2 Ghz\n> system I am using also with 2 gigs of ram running postgresql 8.0.3. I\n> mean I started the sparc load after the P4 load. The sparc load has\n> finished already rebuilding the database from a pg_dump file but the P4\n> system is still going. The p4 has 1.3 Gigs of shared memory allocated to\n> postgresql. How about them apples?\n>\n>\n> Thanks,\n> Juan\n>\n> On Wednesday 21 December 2005 18:57, William Yu wrote:\n> > Juan Casero wrote:\n> > > Can you elaborate on the reasons the opteron is better than the Xeon\n> > > when it comes to disk io? I have a PostgreSQL 7.4.8 box running a\n> > > DSS. One of our\n> >\n> > Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n> > transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n> > block and then the CPU must do extra work in copying the memory to >\n> > 4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the\n> > background.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Wed, 21 Dec 2005 22:31:54 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" 
}, { "msg_contents": "AFAIK there are no licensing costs for solaris, unless you are talking \nabout a software support agreement, which is not required.\n\nJuan Casero wrote:\n\n>I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz \n>opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server \n>capable of RAID but that seems to be out of his budget right now. Ok so I \n>assume I get this Sun box. Most likely I will go with Linux since it is a \n>fair bet he doesn't want to pay for the Solaris 10 x86 license. Although I \n>kind of like the idea of using Solaris 10 x86 for this. I will assume I \n>need to install the x64 kernel that comes with say Fedora Core 4. Should I \n>run the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My instinct \n>tells me 64 bit mode is most efficient for our database size about 20 gigs \n>right now but may grow to 100 gigs in a year or so. I just finished loading \n>a 20 gig database on a dual 900 Mhz Ultrasparc III system with 2 gigs of ram \n>and about 768 megs of shared memory available for the posgresql server \n>running Solaris 10. The load has smoked a P4 3.2 Ghz system I am using also \n>with 2 gigs of ram running postgresql 8.0.3. I mean I started the sparc \n>load after the P4 load. The sparc load has finished already rebuilding the \n>database from a pg_dump file but the P4 system is still going. The p4 has \n>1.3 Gigs of shared memory allocated to postgresql. How about them apples?\n>\n>\n>Thanks,\n>Juan\n>\n>On Wednesday 21 December 2005 18:57, William Yu wrote:\n> \n>\n>>Juan Casero wrote:\n>> \n>>\n>>>Can you elaborate on the reasons the opteron is better than the Xeon when\n>>>it comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One\n>>>of our\n>>> \n>>>\n>>Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n>>transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n>>block and then the CPU must do extra work in copying the memory to >\n>>4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the\n>>background.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n\n\n\n\n\nAFAIK there are no licensing costs for solaris, unless you are talking\nabout a software support agreement, which is not required.\n\nJuan Casero wrote:\n\nI just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz \nopterons, 2 Gigs of RAM and RAID 1. I would have liked a better server \ncapable of RAID but that seems to be out of his budget right now. Ok so I \nassume I get this Sun box. Most likely I will go with Linux since it is a \nfair bet he doesn't want to pay for the Solaris 10 x86 license. Although I \nkind of like the idea of using Solaris 10 x86 for this. I will assume I \nneed to install the x64 kernel that comes with say Fedora Core 4. Should I \nrun the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My instinct \ntells me 64 bit mode is most efficient for our database size about 20 gigs \nright now but may grow to 100 gigs in a year or so. 
I just finished loading \na 20 gig database on a dual 900 Mhz Ultrasparc III system with 2 gigs of ram \nand about 768 megs of shared memory available for the posgresql server \nrunning Solaris 10. The load has smoked a P4 3.2 Ghz system I am using also \nwith 2 gigs of ram running postgresql 8.0.3. I mean I started the sparc \nload after the P4 load. The sparc load has finished already rebuilding the \ndatabase from a pg_dump file but the P4 system is still going. The p4 has \n1.3 Gigs of shared memory allocated to postgresql. How about them apples?\n\n\nThanks,\nJuan\n\nOn Wednesday 21 December 2005 18:57, William Yu wrote:\n \n\nJuan Casero wrote:\n \n\nCan you elaborate on the reasons the opteron is better than the Xeon when\nit comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One\nof our\n \n\nOpterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\ntransfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\nblock and then the CPU must do extra work in copying the memory to >\n4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the\nbackground.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly", "msg_date": "Wed, 21 Dec 2005 19:34:00 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Hi Juan,\n\nSolaris 10 license is for free.. Infact I believe you do receive the \nmedia with Sun Fire V20z. If you want support then there are various \n\"pay\" plans depending on the level of support. If not your license \nallows you Right to Use anyway for free.\n\nThat said I haven't done much testing with 32/64 bit differences. \nHowever for long term purposes, 64-bit always seems to be the safe bet. \nAs for your load performance, lot of it depends on your file system \nlayout also.\n\nRegards,\nJignesh\n\n\n\nJuan Casero wrote:\n\n>I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz \n>opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server \n>capable of RAID but that seems to be out of his budget right now. Ok so I \n>assume I get this Sun box. Most likely I will go with Linux since it is a \n>fair bet he doesn't want to pay for the Solaris 10 x86 license. Although I \n>kind of like the idea of using Solaris 10 x86 for this. I will assume I \n>need to install the x64 kernel that comes with say Fedora Core 4. Should I \n>run the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My instinct \n>tells me 64 bit mode is most efficient for our database size about 20 gigs \n>right now but may grow to 100 gigs in a year or so. I just finished loading \n>a 20 gig database on a dual 900 Mhz Ultrasparc III system with 2 gigs of ram \n>and about 768 megs of shared memory available for the posgresql server \n>running Solaris 10. The load has smoked a P4 3.2 Ghz system I am using also \n>with 2 gigs of ram running postgresql 8.0.3. I mean I started the sparc \n>load after the P4 load. The sparc load has finished already rebuilding the \n>database from a pg_dump file but the P4 system is still going. The p4 has \n>1.3 Gigs of shared memory allocated to postgresql. 
How about them apples?\n>\n>\n>Thanks,\n>Juan\n>\n>On Wednesday 21 December 2005 18:57, William Yu wrote:\n> \n>\n>>Juan Casero wrote:\n>> \n>>\n>>>Can you elaborate on the reasons the opteron is better than the Xeon when\n>>>it comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One\n>>>of our\n>>> \n>>>\n>>Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n>>transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n>>block and then the CPU must do extra work in copying the memory to >\n>>4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the\n>>background.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n", "msg_date": "Wed, 21 Dec 2005 23:07:47 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Wed, 21 Dec 2005, Juan Casero wrote:\n\n> Date: Wed, 21 Dec 2005 22:31:54 -0500\n> From: Juan Casero <[email protected]>\n> To: [email protected]\n> Subject: Re: [PERFORM] What's the best hardver for PostgreSQL 8.1?\n> \n> Sorry folks. I had a couple of glasses of wine as I wrote this. Anyway I\n> originally wanted the box to have more than two drives so I could do RAID 5\n> but that is going to cost too much. Also, contrary to my statement below it\n> seems to me I should run the 32 bit postgresql server on the 64 bit kernel.\n> Would you agree this will probably yield the best performance?\n\nyou definantly need a 64 bit kernel to address as much ram as you will \nneed.\n\nthe question of 32 bit vs 64 bit postgres needs to be benchmarked, but my \ninclination is that you probably do want 64 bit for that as well.\n\n64 bit binaries are slightly larger then 32 bit ones (less so on x86/AMD64 \nthen on any other mixed platform though), but the 64 bit version also has \naccess to twice as many registers as a 32 bit one, and the Opteron chips \nhave some other features that become availabel in 64 bit mode (or more \nuseful)\n\nlike everything else this needs benchmarks to prove with your workload \n(I'm trying to get some started, but haven't had a chance yet)\n\nDavid Lang\n\n> I know it\n> depends alot on the system but for now this database is about 20 gigabytes.\n> Not too large right now but it may grow 5x in the next year.\n>\n> Thanks,\n> Juan\n>\n> On Wednesday 21 December 2005 22:09, Juan Casero wrote:\n>> I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz\n>> opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server\n>> capable of RAID but that seems to be out of his budget right now. Ok so I\n>> assume I get this Sun box. Most likely I will go with Linux since it is a\n>> fair bet he doesn't want to pay for the Solaris 10 x86 license. Although I\n>> kind of like the idea of using Solaris 10 x86 for this. I will assume I\n>> need to install the x64 kernel that comes with say Fedora Core 4. Should I\n>> run the Postgresql 8.x binaries in 32 bit mode or 64 bit mode? My\n>> instinct tells me 64 bit mode is most efficient for our database size about\n>> 20 gigs right now but may grow to 100 gigs in a year or so. 
I just\n>> finished loading a 20 gig database on a dual 900 Mhz Ultrasparc III system\n>> with 2 gigs of ram and about 768 megs of shared memory available for the\n>> posgresql server running Solaris 10. The load has smoked a P4 3.2 Ghz\n>> system I am using also with 2 gigs of ram running postgresql 8.0.3. I\n>> mean I started the sparc load after the P4 load. The sparc load has\n>> finished already rebuilding the database from a pg_dump file but the P4\n>> system is still going. The p4 has 1.3 Gigs of shared memory allocated to\n>> postgresql. How about them apples?\n>>\n>>\n>> Thanks,\n>> Juan\n>>\n>> On Wednesday 21 December 2005 18:57, William Yu wrote:\n>>> Juan Casero wrote:\n>>>> Can you elaborate on the reasons the opteron is better than the Xeon\n>>>> when it comes to disk io? I have a PostgreSQL 7.4.8 box running a\n>>>> DSS. One of our\n>>>\n>>> Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n>>> transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n>>> block and then the CPU must do extra work in copying the memory to >\n>>> 4GB. Versus on the Opteron, it's done by the IO adaptor using DMA in the\n>>> background.\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 22 Dec 2005 19:12:30 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Ok thanks. I think I will go with 64 bit everything on the box. If I can get \nthe Sun Fire V20Z then I will stick with Solaris 10 x86 and download the 64 \nbit PostgreSQL 8.1 binaries from blastwave.org. I develop the PHP code to \nmy DSS system on my Windows XP laptop. Normally, I test the code on this \nlaptop but let it hit the live database when I want to run some tests. Well \njust this afternoon I installed PostgreSQL 8.1.1 on my windows laptop and \nrebuilt the the entire live database instance on there from a pg_dump \narchive. I am blown away by the performance increase in PostgreSQL 8.1.x. \nHas anyone else had a chance to test it? All the queries I run against it \nare remarkably fast but more importantly I can see that the two cores of my \nHyper Threaded P4 are being used. One of the questions I posted on this \nlist was whether PostgreSQL could make use of the large number of cores \navailable on the Ultrasparc T1000/T2000 cores. I am beginning to think that \nwith PostgreSQL 8.1.x the buffer manager could indeed use all those cores. \nThis could make running a DSS or OLTP on an Ultrasparc T1000/T2000 with \nPostgreSQL a much better bargain than on an intel system. Any thoughts?\n\nThanks,\nJuan\n\nOn Thursday 22 December 2005 22:12, David Lang wrote:\n> On Wed, 21 Dec 2005, Juan Casero wrote:\n> > Date: Wed, 21 Dec 2005 22:31:54 -0500\n> > From: Juan Casero <[email protected]>\n> > To: [email protected]\n> > Subject: Re: [PERFORM] What's the best hardver for PostgreSQL 8.1?\n> >\n> > Sorry folks. I had a couple of glasses of wine as I wrote this. 
Anyway\n> > I originally wanted the box to have more than two drives so I could do\n> > RAID 5 but that is going to cost too much. Also, contrary to my\n> > statement below it seems to me I should run the 32 bit postgresql server\n> > on the 64 bit kernel. Would you agree this will probably yield the best\n> > performance?\n>\n> you definantly need a 64 bit kernel to address as much ram as you will\n> need.\n>\n> the question of 32 bit vs 64 bit postgres needs to be benchmarked, but my\n> inclination is that you probably do want 64 bit for that as well.\n>\n> 64 bit binaries are slightly larger then 32 bit ones (less so on x86/AMD64\n> then on any other mixed platform though), but the 64 bit version also has\n> access to twice as many registers as a 32 bit one, and the Opteron chips\n> have some other features that become availabel in 64 bit mode (or more\n> useful)\n>\n> like everything else this needs benchmarks to prove with your workload\n> (I'm trying to get some started, but haven't had a chance yet)\n>\n> David Lang\n>\n> > I know it\n> > depends alot on the system but for now this database is about 20\n> > gigabytes. Not too large right now but it may grow 5x in the next year.\n> >\n> > Thanks,\n> > Juan\n> >\n> > On Wednesday 21 December 2005 22:09, Juan Casero wrote:\n> >> I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz\n> >> opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server\n> >> capable of RAID but that seems to be out of his budget right now. Ok so\n> >> I assume I get this Sun box. Most likely I will go with Linux since it\n> >> is a fair bet he doesn't want to pay for the Solaris 10 x86 license. \n> >> Although I kind of like the idea of using Solaris 10 x86 for this. I\n> >> will assume I need to install the x64 kernel that comes with say Fedora\n> >> Core 4. Should I run the Postgresql 8.x binaries in 32 bit mode or 64\n> >> bit mode? My instinct tells me 64 bit mode is most efficient for our\n> >> database size about 20 gigs right now but may grow to 100 gigs in a year\n> >> or so. I just finished loading a 20 gig database on a dual 900 Mhz\n> >> Ultrasparc III system with 2 gigs of ram and about 768 megs of shared\n> >> memory available for the posgresql server running Solaris 10. The load\n> >> has smoked a P4 3.2 Ghz system I am using also with 2 gigs of ram\n> >> running postgresql 8.0.3. I mean I started the sparc load after the P4\n> >> load. The sparc load has finished already rebuilding the database from\n> >> a pg_dump file but the P4 system is still going. The p4 has 1.3 Gigs of\n> >> shared memory allocated to postgresql. How about them apples?\n> >>\n> >>\n> >> Thanks,\n> >> Juan\n> >>\n> >> On Wednesday 21 December 2005 18:57, William Yu wrote:\n> >>> Juan Casero wrote:\n> >>>> Can you elaborate on the reasons the opteron is better than the Xeon\n> >>>> when it comes to disk io? I have a PostgreSQL 7.4.8 box running a\n> >>>> DSS. One of our\n> >>>\n> >>> Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n> >>> transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n> >>> block and then the CPU must do extra work in copying the memory to >\n> >>> 4GB. 
Versus on the Opteron, it's done by the IO adaptor using DMA in\n> >>> the background.\n> >>>\n> >>> ---------------------------(end of\n> >>> broadcast)--------------------------- TIP 6: explain analyze is your\n> >>> friend\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 1: if posting/reading through Usenet, please send an appropriate\n> >> subscribe-nomail command to [email protected] so that your\n> >> message can get through to the mailing list cleanly\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Thu, 22 Dec 2005 23:10:10 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Thu, 22 Dec 2005, Juan Casero wrote:\n\n> Ok thanks. I think I will go with 64 bit everything on the box. If I can get\n> the Sun Fire V20Z then I will stick with Solaris 10 x86 and download the 64\n> bit PostgreSQL 8.1 binaries from blastwave.org. I develop the PHP code to\n> my DSS system on my Windows XP laptop. Normally, I test the code on this\n> laptop but let it hit the live database when I want to run some tests. Well\n> just this afternoon I installed PostgreSQL 8.1.1 on my windows laptop and\n> rebuilt the the entire live database instance on there from a pg_dump\n> archive. I am blown away by the performance increase in PostgreSQL 8.1.x.\n> Has anyone else had a chance to test it? All the queries I run against it\n> are remarkably fast but more importantly I can see that the two cores of my\n> Hyper Threaded P4 are being used. One of the questions I posted on this\n> list was whether PostgreSQL could make use of the large number of cores\n> available on the Ultrasparc T1000/T2000 cores. I am beginning to think that\n> with PostgreSQL 8.1.x the buffer manager could indeed use all those cores.\n> This could make running a DSS or OLTP on an Ultrasparc T1000/T2000 with\n> PostgreSQL a much better bargain than on an intel system. Any thoughts?\n\nif you have enough simultanious transactions, and your I/O systems (disk \nand memory interfaces) can keep up with your needs then postgres can use \nquite a few cores.\n\nthere are some limits that will show up with more cores, but I don't think \nit's well known where they are (this will also be very dependant on your \nworkload as well). there was the discussion within the last month or two \nthat hit the postgres weekly news where more attention is being paied to \nthe locking mechanisms used so this is an area under active development \n(note especially that some locking strategies that work well with multiple \nfull cores can be crippling with virtual cores (Intel HT etc).\n\nbut it boils down to the fact that there just isn't enough experiance with \nthe new sun systems to know how well they will work. they could end up \nbeing fabulous speed demons, or dogs (and it could even be both, depending \non your workload)\n\nDavid Lang\n\n> Thanks,\n> Juan\n>\n> On Thursday 22 December 2005 22:12, David Lang wrote:\n>> On Wed, 21 Dec 2005, Juan Casero wrote:\n>>> Date: Wed, 21 Dec 2005 22:31:54 -0500\n>>> From: Juan Casero <[email protected]>\n>>> To: [email protected]\n>>> Subject: Re: [PERFORM] What's the best hardver for PostgreSQL 8.1?\n>>>\n>>> Sorry folks. 
I had a couple of glasses of wine as I wrote this. Anyway\n>>> I originally wanted the box to have more than two drives so I could do\n>>> RAID 5 but that is going to cost too much. Also, contrary to my\n>>> statement below it seems to me I should run the 32 bit postgresql server\n>>> on the 64 bit kernel. Would you agree this will probably yield the best\n>>> performance?\n>>\n>> you definantly need a 64 bit kernel to address as much ram as you will\n>> need.\n>>\n>> the question of 32 bit vs 64 bit postgres needs to be benchmarked, but my\n>> inclination is that you probably do want 64 bit for that as well.\n>>\n>> 64 bit binaries are slightly larger then 32 bit ones (less so on x86/AMD64\n>> then on any other mixed platform though), but the 64 bit version also has\n>> access to twice as many registers as a 32 bit one, and the Opteron chips\n>> have some other features that become availabel in 64 bit mode (or more\n>> useful)\n>>\n>> like everything else this needs benchmarks to prove with your workload\n>> (I'm trying to get some started, but haven't had a chance yet)\n>>\n>> David Lang\n>>\n>>> I know it\n>>> depends alot on the system but for now this database is about 20\n>>> gigabytes. Not too large right now but it may grow 5x in the next year.\n>>>\n>>> Thanks,\n>>> Juan\n>>>\n>>> On Wednesday 21 December 2005 22:09, Juan Casero wrote:\n>>>> I just sent my boss an email asking him for a Sun v20z with dual 2.2 Ghz\n>>>> opterons, 2 Gigs of RAM and RAID 1. I would have liked a better server\n>>>> capable of RAID but that seems to be out of his budget right now. Ok so\n>>>> I assume I get this Sun box. Most likely I will go with Linux since it\n>>>> is a fair bet he doesn't want to pay for the Solaris 10 x86 license.\n>>>> Although I kind of like the idea of using Solaris 10 x86 for this. I\n>>>> will assume I need to install the x64 kernel that comes with say Fedora\n>>>> Core 4. Should I run the Postgresql 8.x binaries in 32 bit mode or 64\n>>>> bit mode? My instinct tells me 64 bit mode is most efficient for our\n>>>> database size about 20 gigs right now but may grow to 100 gigs in a year\n>>>> or so. I just finished loading a 20 gig database on a dual 900 Mhz\n>>>> Ultrasparc III system with 2 gigs of ram and about 768 megs of shared\n>>>> memory available for the posgresql server running Solaris 10. The load\n>>>> has smoked a P4 3.2 Ghz system I am using also with 2 gigs of ram\n>>>> running postgresql 8.0.3. I mean I started the sparc load after the P4\n>>>> load. The sparc load has finished already rebuilding the database from\n>>>> a pg_dump file but the P4 system is still going. The p4 has 1.3 Gigs of\n>>>> shared memory allocated to postgresql. How about them apples?\n>>>>\n>>>>\n>>>> Thanks,\n>>>> Juan\n>>>>\n>>>> On Wednesday 21 December 2005 18:57, William Yu wrote:\n>>>>> Juan Casero wrote:\n>>>>>> Can you elaborate on the reasons the opteron is better than the Xeon\n>>>>>> when it comes to disk io? I have a PostgreSQL 7.4.8 box running a\n>>>>>> DSS. One of our\n>>>>>\n>>>>> Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,\n>>>>> transfers to > 4GB, the OS must allocated the memory < 4GB, DMA to that\n>>>>> block and then the CPU must do extra work in copying the memory to >\n>>>>> 4GB. 
Versus on the Opteron, it's done by the IO adaptor using DMA in\n>>>>> the background.\n>>>>>\n>>>>> ---------------------------(end of\n>>>>> broadcast)--------------------------- TIP 6: explain analyze is your\n>>>>> friend\n>>>>\n>>>> ---------------------------(end of broadcast)---------------------------\n>>>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>>>> subscribe-nomail command to [email protected] so that your\n>>>> message can get through to the mailing list cleanly\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: Don't 'kill -9' the postmaster\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Thu, 22 Dec 2005 20:14:53 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "\nOn Dec 22, 2005, at 11:14 PM, David Lang wrote:\n\n> but it boils down to the fact that there just isn't enough \n> experiance with the new sun systems to know how well they will \n> work. they could end up being fabulous speed demons, or dogs (and \n> it could even be both, depending on your workload)\n\nThe v20z isn't the newest sun hardware anyhow... The X2100, X4100, \nand X4200 are. I've been trying to buy an X4100 for going on three \nweeks now but the local sun reseller is making it very hard. you'd \nthink they'd actually want to go out of their way to make a sale but \nthey seem to do the opposite.\n\nfor those of you who say 'well, it is a small sale' my original \nrequest was for over $50k in equipment, and after a while decided \nthat other equipment from other vendors who do care was sufficient, \nand only the opteron boxes needed to come from sun. add a zero return \npolicy and you wonder how they expect to keep in business....\n\nsorry, i had to vent.\n\nbut once it does come in I'll be glad to post up some numbers :-)\n\n", "msg_date": "Fri, 23 Dec 2005 11:23:28 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Vivek Khera wrote:\n\n> and only the \n> opteron boxes needed to come from sun. add a zero return policy and you \n> wonder how they expect to keep in business....\n> \n> sorry, i had to vent.\n>\n\nJust out of interest - why did the opterons need to come from Sun?\n\n\n\n\n", "msg_date": "Sat, 24 Dec 2005 11:15:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "\nOn Dec 23, 2005, at 5:15 PM, Mark Kirkwood wrote:\n\n> Vivek Khera wrote:\n>\n>> and only the opteron boxes needed to come from sun. add a zero \n>> return policy and you wonder how they expect to keep in business....\n>> sorry, i had to vent.\n>>\n>\n> Just out of interest - why did the opterons need to come from Sun?\n\nThere are three tier-1 vendors selling opteron: IBM, Sun, and HP. \nHP's have historically had slow RAID configurations, and IBM tries to \nhide them and only offers really one model, a 1U unit. 
I've already \nbeen through buying opteron systems from the smaller vendors and it \nbasically wasted a lot of my time due to what seems like quality \ncontrol issues.\n\nSo it could be Sun or IBM. IBM seems to make it harder to buy from \nthem than Sun...\n\n", "msg_date": "Fri, 23 Dec 2005 23:00:40 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Wed, 21 Dec 2005 22:31:54 -0500\nJuan Casero <[email protected]> wrote:\n\n> Sorry folks. I had a couple of glasses of wine as I wrote this.\n> Anyway I originally wanted the box to have more than two drives so I\n> could do RAID 5 but that is going to cost too much. Also, contrary\n> to my statement below it seems to me I should run the 32 bit\n> postgresql server on the 64 bit kernel. Would you agree this will\n> probably yield the best performance? I know it depends alot on the\n> system but for now this database is about 20 gigabytes. Not too large\n> right now but it may grow 5x in the next year.\n\n You definitely DO NOT want to do RAID 5 on a database server. That\n is probably the worst setup you could have, I've seen it have lower\n performance than just a single hard disk. \n\n RAID 1 and RAID 1+0 are optimal, but you want to stay far away from\n RAID 5. IMHO RAID 5 is only useful on near line backup servers or\n Samba file servers where space is more important than speed. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Sat, 24 Dec 2005 13:50:42 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 02:50 PM 12/24/2005, Frank Wiles wrote:\n>On Wed, 21 Dec 2005 22:31:54 -0500\n>Juan Casero <[email protected]> wrote:\n>\n> > Sorry folks. I had a couple of glasses of wine as I wrote this.\n> > Anyway I originally wanted the box to have more than two drives so I\n> > could do RAID 5 but that is going to cost too much. Also, contrary\n> > to my statement below it seems to me I should run the 32 bit\n> > postgresql server on the 64 bit kernel. Would you agree this will\n> > probably yield the best performance? I know it depends alot on the\n> > system but for now this database is about 20 gigabytes. Not too large\n> > right now but it may grow 5x in the next year.\n>\n> You definitely DO NOT want to do RAID 5 on a database server. That\n> is probably the worst setup you could have, I've seen it have lower\n> performance than just a single hard disk.\n>\n> RAID 1 and RAID 1+0 are optimal, but you want to stay far away from\n> RAID 5. IMHO RAID 5 is only useful on near line backup servers or\n> Samba file servers where space is more important than speed.\nThat's a bit misleading. RAID 5 excels when you want read speed but \ndon't care as much about write speed. Writes are typical ~2/3 the \nspeed of reads on a typical decent RAID 5 set up.\n\nSide Note: Some years ago Mylex had a family of fast (for the time) \nRAID 5 HW controllers that actually read and wrote at the same \nspeed. IBM bought them to kill them and protect LSI Logic. Mylex \nX24's (?IIRC the model number correctly?) are still reasonable HW.\n\nSo if you have tables that are read often and written to rarely or \nnot at all, putting them on RAID 5 is optimal. 
In both data mining \nlike and OLTP like apps there are usually at least some such tables.\n\nRAID 1 is good for stuff where speed doesn't matter and all you are \nlooking for is an insurance policy.\n\nRAID 10 is the best way to get high performance on both reads and \nwrites, but it has a significantly greater cost for the same amount \nof usable physical media.\n\nIf you've got the budget or are dealing with small enough physical \nstorage needs, by all means use RAID 10. OTOH, if you are dealing \nwith large enterprise class apps like Sarbanes Oxley compliance, \nmedical and/or insurance, etc, etc, the storage needs can get so \nlarge that RAID 10 for everything or even most things is not \npossible. Even if economically feasible.\n\nRAID levels are like any other tool. Each is useful in the proper \ncircumstances.\n\nHappy holidays,\nRon Peacetree\n\n\n", "msg_date": "Sat, 24 Dec 2005 16:31:30 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "\n>\n> If you've got the budget or are dealing with small enough physical \n> storage needs, by all means use RAID 10. OTOH, if you are dealing \n> with large enterprise class apps like Sarbanes Oxley compliance, \n> medical and/or insurance, etc, etc, the storage needs can get so large \n> that RAID 10 for everything or even most things is not possible. Even \n> if economically feasible.\n>\n> RAID levels are like any other tool. Each is useful in the proper \n> circumstances.\n>\nThere is also RAID 50 which is quite nice.\n\nJoshua D. Drake\n\n\n> Happy holidays,\n> Ron Peacetree\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Sat, 24 Dec 2005 13:42:00 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Sat, 24 Dec 2005, Ron wrote:\n\n> At 02:50 PM 12/24/2005, Frank Wiles wrote:\n>> Juan Casero <[email protected]> wrote:\n>> \n>> > Sorry folks. I had a couple of glasses of wine as I wrote this.\n>> > Anyway I originally wanted the box to have more than two drives so I\n>> > could do RAID 5 but that is going to cost too much. Also, contrary\n>> > to my statement below it seems to me I should run the 32 bit\n>> > postgresql server on the 64 bit kernel. Would you agree this will\n>> > probably yield the best performance? I know it depends alot on the\n>> > system but for now this database is about 20 gigabytes. Not too large\n>> > right now but it may grow 5x in the next year.\n>> \n>> You definitely DO NOT want to do RAID 5 on a database server. That\n>> is probably the worst setup you could have, I've seen it have lower\n>> performance than just a single hard disk.\n>> \n>> RAID 1 and RAID 1+0 are optimal, but you want to stay far away from\n>> RAID 5. IMHO RAID 5 is only useful on near line backup servers or\n>> Samba file servers where space is more important than speed.\n> That's a bit misleading. RAID 5 excels when you want read speed but don't \n> care as much about write speed. 
Writes are typical ~2/3 the speed of reads \n> on a typical decent RAID 5 set up.\n>\n> So if you have tables that are read often and written to rarely or not at \n> all, putting them on RAID 5 is optimal. In both data mining like and OLTP \n> like apps there are usually at least some such tables.\n\nraid 5 is bad for random writes as you state, but how does it do for \nsequential writes (for example data mining where you do a large import at \none time, but seldom do other updates). I'm assuming a controller with a \nreasonable amount of battery-backed cache.\n\nDavid Lang\n", "msg_date": "Sat, 24 Dec 2005 13:54:21 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 04:42 PM 12/24/2005, Joshua D. Drake wrote:\n\n\n>>If you've got the budget or are dealing with small enough physical \n>>storage needs, by all means use RAID 10. OTOH, if you are dealing \n>>with large enterprise class apps like Sarbanes Oxley compliance, \n>>medical and/or insurance, etc, etc, the storage needs can get so \n>>large that RAID 10 for everything or even most things is not \n>>possible. Even if economically feasible.\n>>\n>>RAID levels are like any other tool. Each is useful in the proper \n>>circumstances.\n>There is also RAID 50 which is quite nice.\nThe \"quite nice\" part that Joshua is referring to is that RAID 50 \ngets most of the write performance of RAID 10 w/o using nearly as \nmany HD's as RAID 10. OTOH, there still is a significant increase in \nthe number of HD's used, and that means MBTF's become more frequent \nbut you are not getting protection levels you would with RAID 10.\n\nIME RAID 50 gets mixed reviews. My two biggest issues are\na= Admin of RAID 50 is more complex than the other commonly used \nversions (1, 10, 5, and 6)\nb= Once a HD failure takes place, you suffer a _permenent_ \nperformance drop, even after the automatic volume rebuild, until you \ntake the entire RAID 50 array off line, reinitialize it, and rebuild \nit from scratch.\n\nIME \"a\" and \"b\" make RAID 50 inappropriate for any but the biggest \nand most dedicated of DB admin groups.\n\nYMMV,\nRon\n\n\n", "msg_date": "Sat, 24 Dec 2005 17:24:58 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "David Lang wrote:\n> raid 5 is bad for random writes as you state, but how does it do for \n> sequential writes (for example data mining where you do a large import \n> at one time, but seldom do other updates). I'm assuming a controller \n> with a reasonable amount of battery-backed cache.\n\nRandom write performance (small block that only writes to 1 drive):\n1 write requires N-1 reads + N writes --> 1/2N-1 %\n\nSequential write performance (write big enough block to use all N drives):\nN-1 Write requires N writes --> N-1/N %\n\nAssuming enough cache so all reads/writes are done in 1 transaction + \nonboard processor calcs RAID parity fast enough to not cause an extra delay.\n", "msg_date": "Sat, 24 Dec 2005 14:36:57 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 04:54 PM 12/24/2005, David Lang wrote:\n\n>raid 5 is bad for random writes as you state, but how does it do for \n>sequential writes (for example data mining where you do a large \n>import at one time, but seldom do other updates). 
I'm assuming a \n>controller with a reasonable amount of battery-backed cache.\nThe issue with RAID 5 writes centers on the need to recalculate \nchecksums for the ECC blocks distributed across the array and then \nwrite the new ones to physical media.\n\nCaches help, and the bigger the cache the better, but once you are \ndoing enough writes fast enough (and that doesn't take much even with \na few GBs of cache) the recalculate-checksums-and-write-new-ones \noverhead will decrease the write speed of real data. Bear in mind \nthat the HD's _raw_ write speed hasn't been decreased. Those HD's \nare pounding away as fast as they can for you. Your _effective_ or \n_data level_ write speed is what decreases due to overhead.\n\nSide Note: people often forget the other big reason to use RAID 10 \nover RAID 5. RAID 5 is always only 2 HD failures from data \nloss. RAID 10 can lose up to 1/2 the HD's in the array w/o data loss \nunless you get unlucky and lose both members of a RAID 1 set.\n\nThis can be seen as an example of the classic space vs. time trade \noff in performance tuning. You can use 2x the HDs you need and \nimplement RAID 10 for best performance and reliability or you can \ndedicate less HD's to RAID and implement RAID 5 for less (write) \nperformance and lower reliability.\n\nTANSTAAFL.\nRon Peacetree\n\n\n", "msg_date": "Sat, 24 Dec 2005 17:45:20 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Hi,\n\n> b= Once a HD failure takes place, you suffer a _permenent_ performance \n> drop, even after the automatic volume rebuild, until you take the entire \n> RAID 50 array off line, reinitialize it, and rebuild it from scratch.\n\nWhere did you get that crazy idea? When you have replaced the drive and the \nRAID is rebuilt, you have exactly the same situation as before the drive \nfailed. Why would you get less performance?\nSander.\n\n\n", "msg_date": "Sat, 24 Dec 2005 23:54:19 +0100", "msg_from": "\"Sander Steffann\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:\n>Caches help, and the bigger the cache the better, but once you are \n>doing enough writes fast enough (and that doesn't take much even with \n>a few GBs of cache) the recalculate-checksums-and-write-new-ones \n>overhead will decrease the write speed of real data. Bear in mind \n>that the HD's _raw_ write speed hasn't been decreased. Those HD's \n>are pounding away as fast as they can for you. Your _effective_ or \n>_data level_ write speed is what decreases due to overhead.\n\nYou're overgeneralizing. Assuming a large cache and a sequential write,\nthere's need be no penalty for raid 5. (For random writes you may\nneed to read unrelated blocks in order to calculate parity, but for\nlarge sequential writes the parity blocks should all be read from\ncache.) A modern cpu can calculate parity for raid 5 on the order of\ngigabytes per second, and even crummy embedded processors can do\nhundreds of megabytes per second. You may have run into some lousy\nimplementations, but you should be much more specific about what\nhardware you're talking about instead of making sweeping\ngeneralizations.\n\n>Side Note: people often forget the other big reason to use RAID 10 \n>over RAID 5. RAID 5 is always only 2 HD failures from data \n>loss. 
RAID 10 can lose up to 1/2 the HD's in the array w/o data loss \n>unless you get unlucky and lose both members of a RAID 1 set.\n\nIOW, your RAID 10 is only 2 HD failures from data loss also. If that's\nan issue you need to go with RAID 6 or add another disk to each mirror.\n\nMike Stone\n", "msg_date": "Sun, 25 Dec 2005 12:37:55 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "It's irrelavent what controller, you still have to actualy write the\nparity blocks, which slows down your write speed because you have to\nwrite n+n/2 blocks. instead of just n blocks making the system write\n50% more data.\n\nRAID 5 must write 50% more data to disk therefore it will always be slower.\n\nAlex.\n\nOn 12/25/05, Michael Stone <[email protected]> wrote:\n> On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:\n> >Caches help, and the bigger the cache the better, but once you are\n> >doing enough writes fast enough (and that doesn't take much even with\n> >a few GBs of cache) the recalculate-checksums-and-write-new-ones\n> >overhead will decrease the write speed of real data. Bear in mind\n> >that the HD's _raw_ write speed hasn't been decreased. Those HD's\n> >are pounding away as fast as they can for you. Your _effective_ or\n> >_data level_ write speed is what decreases due to overhead.\n>\n> You're overgeneralizing. Assuming a large cache and a sequential write,\n> there's need be no penalty for raid 5. (For random writes you may\n> need to read unrelated blocks in order to calculate parity, but for\n> large sequential writes the parity blocks should all be read from\n> cache.) A modern cpu can calculate parity for raid 5 on the order of\n> gigabytes per second, and even crummy embedded processors can do\n> hundreds of megabytes per second. You may have run into some lousy\n> implementations, but you should be much more specific about what\n> hardware you're talking about instead of making sweeping\n> generalizations.\n>\n> >Side Note: people often forget the other big reason to use RAID 10\n> >over RAID 5. RAID 5 is always only 2 HD failures from data\n> >loss. RAID 10 can lose up to 1/2 the HD's in the array w/o data loss\n> >unless you get unlucky and lose both members of a RAID 1 set.\n>\n> IOW, your RAID 10 is only 2 HD failures from data loss also. If that's\n> an issue you need to go with RAID 6 or add another disk to each mirror.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Mon, 26 Dec 2005 12:32:19 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Mon, 26 Dec 2005, Alex Turner wrote:\n\n> It's irrelavent what controller, you still have to actualy write the\n> parity blocks, which slows down your write speed because you have to\n> write n+n/2 blocks. instead of just n blocks making the system write\n> 50% more data.\n>\n> RAID 5 must write 50% more data to disk therefore it will always be slower.\n\nraid5 writes n+1 blocks not n+n/2 (unless n=2 for a 3-disk raid). you can \nhave a 15+1 disk raid5 array for example\n\nhowever raid1 (and raid10) have to write 2*n blocks to disk. so if you are \ntalking about pure I/O needed raid5 wins hands down. 
(the same 16 drives \nwould be a 8+8 array)\n\nwhat slows down raid 5 is that to modify a block you have to read blocks \nfrom all your drives to re-calculate the parity. this interleaving of \nreads and writes when all you are logicly doing is writes can really hurt. \n(this is why I asked the question that got us off on this tangent, when \ndoing new writes to an array you don't have to read the blocks as they are \nblank, assuming your cacheing is enough so that you can write blocksize*n \nbefore the system starts actually writing the data)\n\nDavid Lang\n\n> Alex.\n>\n> On 12/25/05, Michael Stone <[email protected]> wrote:\n>> On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:\n>>> Caches help, and the bigger the cache the better, but once you are\n>>> doing enough writes fast enough (and that doesn't take much even with\n>>> a few GBs of cache) the recalculate-checksums-and-write-new-ones\n>>> overhead will decrease the write speed of real data. Bear in mind\n>>> that the HD's _raw_ write speed hasn't been decreased. Those HD's\n>>> are pounding away as fast as they can for you. Your _effective_ or\n>>> _data level_ write speed is what decreases due to overhead.\n>>\n>> You're overgeneralizing. Assuming a large cache and a sequential write,\n>> there's need be no penalty for raid 5. (For random writes you may\n>> need to read unrelated blocks in order to calculate parity, but for\n>> large sequential writes the parity blocks should all be read from\n>> cache.) A modern cpu can calculate parity for raid 5 on the order of\n>> gigabytes per second, and even crummy embedded processors can do\n>> hundreds of megabytes per second. You may have run into some lousy\n>> implementations, but you should be much more specific about what\n>> hardware you're talking about instead of making sweeping\n>> generalizations.\n>>\n>>> Side Note: people often forget the other big reason to use RAID 10\n>>> over RAID 5. RAID 5 is always only 2 HD failures from data\n>>> loss. RAID 10 can lose up to 1/2 the HD's in the array w/o data loss\n>>> unless you get unlucky and lose both members of a RAID 1 set.\n>>\n>> IOW, your RAID 10 is only 2 HD failures from data loss also. If that's\n>> an issue you need to go with RAID 6 or add another disk to each mirror.\n>>\n>> Mike Stone\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Mon, 26 Dec 2005 10:11:00 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Yes, but those blocks in RAID 10 are largely irrelevant as they are to\nindependant disks. In RAID 5 you have to write parity to an 'active'\ndrive that is part of the stripe. (They are irrelevant unless of\ncourse you are maxing out your SCSI bus - yet another reason why SATA\ncan be faster than SCSI, particularly in RAID 10, every channel is\nindependant).\n\nSorry - my math for RAID 5 was a bit off - I don't know why I was\nconsidering only a three dirve situation - which is the worst. It's\nn+1 you are right. still, for small arrays thats a big penalty. 
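To make the parity arithmetic in this exchange concrete, here is a small
illustrative sketch in Python. It is not from any of the posters; it assumes
plain XOR parity with one parity block per stripe and counts only logical
disk I/Os, ignoring caches and controller behaviour.

# Editorial sketch: XOR parity bookkeeping for an N-disk RAID 5 set
# (N-1 data blocks plus 1 parity block per stripe). Illustrative only.

def xor_blocks(*blocks):
    # RAID 5 parity is simply the XOR of the data blocks in a stripe
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def small_update_reconstruct(disks):
    # recompute parity by reading every other data block in the stripe:
    # (disks - 2) reads, then write the new data block and the parity
    return {"reads": disks - 2, "writes": 2}

def small_update_delta(disks):
    # or read only the old data block and the old parity, fold the change
    # into the parity, and write both back: always 2 reads and 2 writes,
    # no matter how wide the array is
    return {"reads": 2, "writes": 2}

def full_stripe_write(disks):
    # writing a whole new stripe needs no reads at all: parity comes
    # straight from the (disks - 1) data blocks being written
    return {"reads": 0, "writes": disks}

print(small_update_reconstruct(16))   # {'reads': 14, 'writes': 2}
print(small_update_delta(16))         # {'reads': 2, 'writes': 2}
print(full_stripe_write(16))          # {'reads': 0, 'writes': 16}

# sanity check: both small-update strategies produce the same parity
old = [bytes([i]) * 4 for i in range(4)]
parity = xor_blocks(*old)
new0 = b"\xff\x00\xff\x00"
assert xor_blocks(new0, *old[1:]) == xor_blocks(parity, old[0], new0)

The 16-drive numbers line up with the 15+1 example above: a small write costs
extra reads either way, while a full-stripe write needs no reads at all, which
is why large sequential loads behave so differently from small random updates.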
\nStill, there is definately a penatly contrary to the assertion of the\norignal poster.\n\nI agree totally that the read+parity-calc+write in the worst case is\ntotaly bad, which is why I alway recommend people should _never ever_\nuse RAID 5. In this day and age of large capacity chassis, and large\ncapacity SATA drives, RAID 5 is totally inapropriate IMHO for _any_\napplication least of all databases.\n\nIn reality I have yet to benchmark a system where RAID 5 on the same\nnumber of drives with 8 drives or less in a single array beat a RAID\n10 with the same number of drives. I would definately be interested\nin a SCSI card that could actualy achieve the theoretical performance\nof RAID 5 especially under Linux.\n\nWith RAID 5 you get to watch you system crumble and fail when a drive\nfails and the array goes into a failed state. It's just not worth it.\n\nAlex.\n\n\nOn 12/26/05, David Lang <[email protected]> wrote:\n> On Mon, 26 Dec 2005, Alex Turner wrote:\n>\n> > It's irrelavent what controller, you still have to actualy write the\n> > parity blocks, which slows down your write speed because you have to\n> > write n+n/2 blocks. instead of just n blocks making the system write\n> > 50% more data.\n> >\n> > RAID 5 must write 50% more data to disk therefore it will always be slower.\n>\n> raid5 writes n+1 blocks not n+n/2 (unless n=2 for a 3-disk raid). you can\n> have a 15+1 disk raid5 array for example\n>\n> however raid1 (and raid10) have to write 2*n blocks to disk. so if you are\n> talking about pure I/O needed raid5 wins hands down. (the same 16 drives\n> would be a 8+8 array)\n>\n> what slows down raid 5 is that to modify a block you have to read blocks\n> from all your drives to re-calculate the parity. this interleaving of\n> reads and writes when all you are logicly doing is writes can really hurt.\n> (this is why I asked the question that got us off on this tangent, when\n> doing new writes to an array you don't have to read the blocks as they are\n> blank, assuming your cacheing is enough so that you can write blocksize*n\n> before the system starts actually writing the data)\n>\n> David Lang\n>\n> > Alex.\n> >\n> > On 12/25/05, Michael Stone <[email protected]> wrote:\n> >> On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:\n> >>> Caches help, and the bigger the cache the better, but once you are\n> >>> doing enough writes fast enough (and that doesn't take much even with\n> >>> a few GBs of cache) the recalculate-checksums-and-write-new-ones\n> >>> overhead will decrease the write speed of real data. Bear in mind\n> >>> that the HD's _raw_ write speed hasn't been decreased. Those HD's\n> >>> are pounding away as fast as they can for you. Your _effective_ or\n> >>> _data level_ write speed is what decreases due to overhead.\n> >>\n> >> You're overgeneralizing. Assuming a large cache and a sequential write,\n> >> there's need be no penalty for raid 5. (For random writes you may\n> >> need to read unrelated blocks in order to calculate parity, but for\n> >> large sequential writes the parity blocks should all be read from\n> >> cache.) A modern cpu can calculate parity for raid 5 on the order of\n> >> gigabytes per second, and even crummy embedded processors can do\n> >> hundreds of megabytes per second. 
You may have run into some lousy\n> >> implementations, but you should be much more specific about what\n> >> hardware you're talking about instead of making sweeping\n> >> generalizations.\n> >>\n> >>> Side Note: people often forget the other big reason to use RAID 10\n> >>> over RAID 5. RAID 5 is always only 2 HD failures from data\n> >>> loss. RAID 10 can lose up to 1/2 the HD's in the array w/o data loss\n> >>> unless you get unlucky and lose both members of a RAID 1 set.\n> >>\n> >> IOW, your RAID 10 is only 2 HD failures from data loss also. If that's\n> >> an issue you need to go with RAID 6 or add another disk to each mirror.\n> >>\n> >> Mike Stone\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 6: explain analyze is your friend\n> >>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n", "msg_date": "Mon, 26 Dec 2005 18:04:40 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Mon, 26 Dec 2005, Alex Turner wrote:\n\n> \n> Yes, but those blocks in RAID 10 are largely irrelevant as they are to\n> independant disks. In RAID 5 you have to write parity to an 'active'\n> drive that is part of the stripe. (They are irrelevant unless of\n> course you are maxing out your SCSI bus - yet another reason why SATA\n> can be faster than SCSI, particularly in RAID 10, every channel is\n> independant).\n\nI don't understand your 'active' vs 'inactive' drive argument, in raid 1 \nor 1+0 all drives are active.\n\nwith good components you need to worry about maxing out your PCI bus as \nmuch as any other one (this type of thing is where the hardware raid has a \ndefinante advantage since the card handles the extra I/O, not your system)\n\n> Sorry - my math for RAID 5 was a bit off - I don't know why I was\n> considering only a three dirve situation - which is the worst. It's\n> n+1 you are right. still, for small arrays thats a big penalty.\n> Still, there is definately a penatly contrary to the assertion of the\n> orignal poster.\n>\n> I agree totally that the read+parity-calc+write in the worst case is\n> totaly bad, which is why I alway recommend people should _never ever_\n> use RAID 5. In this day and age of large capacity chassis, and large\n> capacity SATA drives, RAID 5 is totally inapropriate IMHO for _any_\n> application least of all databases.\n>\n> In reality I have yet to benchmark a system where RAID 5 on the same\n> number of drives with 8 drives or less in a single array beat a RAID\n> 10 with the same number of drives. I would definately be interested\n> in a SCSI card that could actualy achieve the theoretical performance\n> of RAID 5 especially under Linux.\n\nbut it's not a 'same number of drives' comparison you should be makeing.\n\nif you have a 8 drive RAID5 array you need to compare it with a 14 drive \nRAID1/10 array.\n\n> With RAID 5 you get to watch you system crumble and fail when a drive\n> fails and the array goes into a failed state. It's just not worth it.\n\nspeed is worth money (and therefor number of drives) in some cases, but \nnot in all cases. 
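Picking up the point a few lines above that the fair comparison is equal
usable space rather than an equal number of drives, here is a quick
back-of-the-envelope sketch in Python. It is an editorial illustration only;
the 300 GB drive size is a hypothetical figure.

def raid5_usable(disks, disk_size):
    # one disk's worth of capacity goes to parity
    return (disks - 1) * disk_size

def raid10_usable(disks, disk_size):
    # half the disks hold mirror copies
    return (disks // 2) * disk_size

def raid10_disks_for(usable, disk_size):
    # smallest even drive count whose mirrored capacity covers `usable`
    pairs = -(-usable // disk_size)      # ceiling division
    return 2 * pairs

size_gb = 300                            # hypothetical drive size
target = raid5_usable(8, size_gb)        # 2100 GB usable from 8 drives
print(target, raid10_disks_for(target, size_gb), raid10_usable(14, size_gb))
# prints: 2100 14 2100

So when usable capacity is held constant, an 8-drive RAID 5 is being weighed
against roughly a 14-drive RAID 10, which is the comparison being made here.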
also the speed penalty when you have a raid drive fail \nvaries based on your controller\n\nit's wrong to flatly rule out any RAID configuration, they all have their \nplace and the important thing is to understand what the advantages and \ndisadvantages are for each of them so you can know when to use each one.\n\nfor example I have a situation I am looking at where RAID0 is looking \nappropriate for a database (a multi-TB array that gets completely reloaded \nevery month or so as data expires and new data is loaded from the \nauthoritative source, adding another 16 drives to get redundancy isn't \nreasonable)\n\nDavid Lang\n\n> Alex.\n>\n>\n> On 12/26/05, David Lang <[email protected]> wrote:\n>> On Mon, 26 Dec 2005, Alex Turner wrote:\n>>\n>>> It's irrelavent what controller, you still have to actualy write the\n>>> parity blocks, which slows down your write speed because you have to\n>>> write n+n/2 blocks. instead of just n blocks making the system write\n>>> 50% more data.\n>>>\n>>> RAID 5 must write 50% more data to disk therefore it will always be slower.\n>>\n>> raid5 writes n+1 blocks not n+n/2 (unless n=2 for a 3-disk raid). you can\n>> have a 15+1 disk raid5 array for example\n>>\n>> however raid1 (and raid10) have to write 2*n blocks to disk. so if you are\n>> talking about pure I/O needed raid5 wins hands down. (the same 16 drives\n>> would be a 8+8 array)\n>>\n>> what slows down raid 5 is that to modify a block you have to read blocks\n>> from all your drives to re-calculate the parity. this interleaving of\n>> reads and writes when all you are logicly doing is writes can really hurt.\n>> (this is why I asked the question that got us off on this tangent, when\n>> doing new writes to an array you don't have to read the blocks as they are\n>> blank, assuming your cacheing is enough so that you can write blocksize*n\n>> before the system starts actually writing the data)\n>>\n>> David Lang\n>>\n>>> Alex.\n>>>\n>>> On 12/25/05, Michael Stone <[email protected]> wrote:\n>>>> On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:\n>>>>> Caches help, and the bigger the cache the better, but once you are\n>>>>> doing enough writes fast enough (and that doesn't take much even with\n>>>>> a few GBs of cache) the recalculate-checksums-and-write-new-ones\n>>>>> overhead will decrease the write speed of real data. Bear in mind\n>>>>> that the HD's _raw_ write speed hasn't been decreased. Those HD's\n>>>>> are pounding away as fast as they can for you. Your _effective_ or\n>>>>> _data level_ write speed is what decreases due to overhead.\n>>>>\n>>>> You're overgeneralizing. Assuming a large cache and a sequential write,\n>>>> there's need be no penalty for raid 5. (For random writes you may\n>>>> need to read unrelated blocks in order to calculate parity, but for\n>>>> large sequential writes the parity blocks should all be read from\n>>>> cache.) A modern cpu can calculate parity for raid 5 on the order of\n>>>> gigabytes per second, and even crummy embedded processors can do\n>>>> hundreds of megabytes per second. You may have run into some lousy\n>>>> implementations, but you should be much more specific about what\n>>>> hardware you're talking about instead of making sweeping\n>>>> generalizations.\n>>>>\n>>>>> Side Note: people often forget the other big reason to use RAID 10\n>>>>> over RAID 5. RAID 5 is always only 2 HD failures from data\n>>>>> loss. 
RAID 10 can lose up to 1/2 the HD's in the array w/o data loss\n>>>>> unless you get unlucky and lose both members of a RAID 1 set.\n>>>>\n>>>> IOW, your RAID 10 is only 2 HD failures from data loss also. If that's\n>>>> an issue you need to go with RAID 6 or add another disk to each mirror.\n>>>>\n>>>> Mike Stone\n>>>>\n>>>> ---------------------------(end of broadcast)---------------------------\n>>>> TIP 6: explain analyze is your friend\n>>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>>\n>>\n>\n", "msg_date": "Mon, 26 Dec 2005 15:27:18 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On 12/26/05, David Lang <[email protected]> wrote:\n> raid5 writes n+1 blocks not n+n/2 (unless n=2 for a 3-disk raid). you can\n> have a 15+1 disk raid5 array for example\n>\n> however raid1 (and raid10) have to write 2*n blocks to disk. so if you are\n> talking about pure I/O needed raid5 wins hands down. (the same 16 drives\n> would be a 8+8 array)\n>\n> what slows down raid 5 is that to modify a block you have to read blocks\n> from all your drives to re-calculate the parity. this interleaving of\n> reads and writes when all you are logicly doing is writes can really hurt.\n> (this is why I asked the question that got us off on this tangent, when\n> doing new writes to an array you don't have to read the blocks as they are\n> blank, assuming your cacheing is enough so that you can write blocksize*n\n> before the system starts actually writing the data)\n\nNot exactly true.\n\nLet's assume you have a 4+1 RAID5 (drives A, B, C, D and E),\nand you want to update drive A. Let's assume the parity\nis stored in this particular write on drive E.\n\nOne way to write it is:\n write A,\n read A, B, C, D,\n combine A+B+C+D and write it E.\n (4 reads + 2 writes)\n\nThe other way to write it is:\n read oldA,\n read old parity oldE\n write newA,\n write E = oldE + (newA-oldA) -- calculate difference between new and\nold A, and apply it to old parity, then write\n (2 reads + 2 writes)\n\nThe more drives you have, the smarter it is to use the second approach,\nunless of course A, B, C and D are available in the cache, which is the\nniciest situation.\n\n Regards,\n Dawid\n", "msg_date": "Tue, 27 Dec 2005 08:24:45 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Mon, Dec 26, 2005 at 12:32:19PM -0500, Alex Turner wrote:\n>It's irrelavent what controller, you still have to actualy write the\n>parity blocks, which slows down your write speed because you have to\n>write n+n/2 blocks. instead of just n blocks making the system write\n>50% more data.\n>\n>RAID 5 must write 50% more data to disk therefore it will always be\n>slower.\n\nAt this point you've drifted into complete nonsense mode. \n\nOn Mon, Dec 26, 2005 at 10:11:00AM -0800, David Lang wrote:\n>what slows down raid 5 is that to modify a block you have to read blocks \n>from all your drives to re-calculate the parity. this interleaving of \n>reads and writes when all you are logicly doing is writes can really hurt. 
\n>(this is why I asked the question that got us off on this tangent, when \n>doing new writes to an array you don't have to read the blocks as they are \n>blank, assuming your cacheing is enough so that you can write blocksize*n \n>before the system starts actually writing the data)\n\nCorrect; there's no reason for the controller to read anything back if\nyour write will fill a complete stripe. That's why I said that there\nisn't a \"RAID 5 penalty\" assuming you've got a reasonably fast\ncontroller and you're doing large sequential writes (or have enough\ncache that random writes can be batched as large sequential writes). \n\nOn Mon, Dec 26, 2005 at 06:04:40PM -0500, Alex Turner wrote:\n>Yes, but those blocks in RAID 10 are largely irrelevant as they are to\n>independant disks. In RAID 5 you have to write parity to an 'active'\n>drive that is part of the stripe. \n\nOnce again, this doesn't make any sense. Can you explain which parts of\na RAID 10 array are inactive?\n\n>I agree totally that the read+parity-calc+write in the worst case is\n>totaly bad, which is why I alway recommend people should _never ever_\n>use RAID 5. In this day and age of large capacity chassis, and large\n>capacity SATA drives, RAID 5 is totally inapropriate IMHO for _any_\n>application least of all databases.\n\nSo I've got a 14 drive chassis full of 300G SATA disks and need at least\n3.5TB of data storage. In your mind the only possible solution is to buy\nanother 14 drive chassis? Must be nice to never have a budget. Must be a\nhard sell if you've bought decent enough hardware that your benchmarks\ncan't demonstrate a difference between a RAID 5 and a RAID 10\nconfiguration on that chassis except in degraded mode (and the customer\ndoesn't want to pay double for degraded mode performance). \n\n>In reality I have yet to benchmark a system where RAID 5 on the same\n>number of drives with 8 drives or less in a single array beat a RAID\n>10 with the same number of drives. \n\nWell, those are frankly little arrays, probably on lousy controllers...\n\nMike Stone\n", "msg_date": "Tue, 27 Dec 2005 08:35:27 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 08:35 AM 12/27/2005, Michael Stone wrote:\n>On Mon, Dec 26, 2005 at 10:11:00AM -0800, David Lang wrote:\n>>what slows down raid 5 is that to modify a block you have to read \n>>blocks from all your drives to re-calculate the parity. this \n>>interleaving of reads and writes when all you are logicly doing is \n>>writes can really hurt. (this is why I asked the question that got \n>>us off on this tangent, when doing new writes to an array you don't \n>>have to read the blocks as they are blank, assuming your cacheing \n>>is enough so that you can write blocksize*n before the system \n>>starts actually writing the data)\n>\n>Correct; there's no reason for the controller to read anything back \n>if your write will fill a complete stripe. That's why I said that \n>there isn't a \"RAID 5 penalty\" assuming you've got a reasonably fast \n>controller and you're doing large sequential writes (or have enough \n>cache that random writes can be batched as large sequential writes).\n\nSorry. 
A decade+ RWE in production with RAID 5 using controllers as \nbad as Adaptec and as good as Mylex, Chaparral, LSI Logic (including \ntheir Engino stuff), and Xyratex under 5 different OS's (Sun, Linux, \nM$, DEC, HP) on each of Oracle, SQL Server, DB2, mySQL, and pg shows \nthat RAID 5 writes are slower than RAID 5 reads\n\nWith the one notable exception of the Mylex controller that was so \ngood IBM bought Mylex to put them out of business.\n\nEnough IO load, random or sequential, will cause the effect no matter \nhow much cache you have or how fast the controller is.\n\nThe even bigger problem that everyone is ignoring here is that large \nRAID 5's spend increasingly larger percentages of their time with 1 \nfailed HD in them. The math of having that many HDs operating \nsimultaneously 24x7 makes it inevitable.\n\nThis means you are operating in degraded mode an increasingly larger \npercentage of the time under exactly the circumstance you least want \nto be. In addition, you are =one= HD failure from data loss on that \narray an increasingly larger percentage of the time under exactly the \nleast circumstances you want to be.\n\nRAID 5 is not a silver bullet.\n\n\n> On Mon, Dec 26, 2005 at 06:04:40PM -0500, Alex Turner wrote:\n>>Yes, but those blocks in RAID 10 are largely irrelevant as they are \n>>to independant disks. In RAID 5 you have to write parity to an \n>>'active' drive that is part of the stripe.\n>\n>Once again, this doesn't make any sense. Can you explain which parts of\n>a RAID 10 array are inactive?\n>\n>>I agree totally that the read+parity-calc+write in the worst case \n>>is totaly bad, which is why I alway recommend people should _never \n>>ever_ use RAID 5. In this day and age of large capacity chassis, \n>>and large capacity SATA drives, RAID 5 is totally inapropriate IMHO \n>>for _any_ application least of all databases.\nI vote with Michael here. This is an extreme position to take that \ncan't be followed under many circumstances ITRW.\n\n\n>So I've got a 14 drive chassis full of 300G SATA disks and need at \n>least 3.5TB of data storage. In your mind the only possible solution \n>is to buy another 14 drive chassis? Must be nice to never have a budget.\n\nI think you mean an infinite budget. That's even assuming it's \npossible to get the HD's you need. I've had arrays that used all the \nspace I could give them in 160 HD cabinets. Two 160 HD cabinets was \nneither within the budget nor going to perform well. I =had= to use \nRAID 5. RAID 10 was just not usage efficient enough.\n\n\n>Must be a hard sell if you've bought decent enough hardware that \n>your benchmarks can't demonstrate a difference between a RAID 5 and \n>a RAID 10 configuration on that chassis except in degraded mode (and \n>the customer doesn't want to pay double for degraded mode performance)\n\nI have =never= had this situation. RAID 10 latency is better than \nRAID 5 latency. RAID 10 write speed under heavy enough load, of any \ntype, is faster than RAID 5 write speed under the same \ncircumstances. RAID 10 robustness is better as well.\n\nProblem is that sometimes budget limits or number of HDs needed \nlimits mean you can't use RAID 10.\n\n\n>>In reality I have yet to benchmark a system where RAID 5 on the \n>>same number of drives with 8 drives or less in a single array beat \n>>a RAID 10 with the same number of drives.\n>\n>Well, those are frankly little arrays, probably on lousy controllers...\nNah. 
Regardless of controller I can take any RAID 5 and any RAID 10 \nbuilt on the same HW under the same OS running the same DBMS and \n=guarantee= there is an IO load above which it can be shown that the \nRAID 10 will do writes faster than the RAID 5. The only exception in \nmy career thus far has been the aforementioned Mylex controller.\n\nOTOH, sometimes you have no choice but to \"take the hit\" and use RAID 5.\n\n\ncheers,\nRon\n\n\n", "msg_date": "Tue, 27 Dec 2005 11:50:16 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "\nHistorically, I have heard that RAID5 is only faster than RAID10 if\nthere are six or more drives.\n\n---------------------------------------------------------------------------\n\nRon wrote:\n> At 08:35 AM 12/27/2005, Michael Stone wrote:\n> >On Mon, Dec 26, 2005 at 10:11:00AM -0800, David Lang wrote:\n> >>what slows down raid 5 is that to modify a block you have to read \n> >>blocks from all your drives to re-calculate the parity. this \n> >>interleaving of reads and writes when all you are logicly doing is \n> >>writes can really hurt. (this is why I asked the question that got \n> >>us off on this tangent, when doing new writes to an array you don't \n> >>have to read the blocks as they are blank, assuming your cacheing \n> >>is enough so that you can write blocksize*n before the system \n> >>starts actually writing the data)\n> >\n> >Correct; there's no reason for the controller to read anything back \n> >if your write will fill a complete stripe. That's why I said that \n> >there isn't a \"RAID 5 penalty\" assuming you've got a reasonably fast \n> >controller and you're doing large sequential writes (or have enough \n> >cache that random writes can be batched as large sequential writes).\n> \n> Sorry. A decade+ RWE in production with RAID 5 using controllers as \n> bad as Adaptec and as good as Mylex, Chaparral, LSI Logic (including \n> their Engino stuff), and Xyratex under 5 different OS's (Sun, Linux, \n> M$, DEC, HP) on each of Oracle, SQL Server, DB2, mySQL, and pg shows \n> that RAID 5 writes are slower than RAID 5 reads\n> \n> With the one notable exception of the Mylex controller that was so \n> good IBM bought Mylex to put them out of business.\n> \n> Enough IO load, random or sequential, will cause the effect no matter \n> how much cache you have or how fast the controller is.\n> \n> The even bigger problem that everyone is ignoring here is that large \n> RAID 5's spend increasingly larger percentages of their time with 1 \n> failed HD in them. The math of having that many HDs operating \n> simultaneously 24x7 makes it inevitable.\n> \n> This means you are operating in degraded mode an increasingly larger \n> percentage of the time under exactly the circumstance you least want \n> to be. In addition, you are =one= HD failure from data loss on that \n> array an increasingly larger percentage of the time under exactly the \n> least circumstances you want to be.\n> \n> RAID 5 is not a silver bullet.\n> \n> \n> > On Mon, Dec 26, 2005 at 06:04:40PM -0500, Alex Turner wrote:\n> >>Yes, but those blocks in RAID 10 are largely irrelevant as they are \n> >>to independant disks. In RAID 5 you have to write parity to an \n> >>'active' drive that is part of the stripe.\n> >\n> >Once again, this doesn't make any sense. 
Can you explain which parts of\n> >a RAID 10 array are inactive?\n> >\n> >>I agree totally that the read+parity-calc+write in the worst case \n> >>is totaly bad, which is why I alway recommend people should _never \n> >>ever_ use RAID 5. In this day and age of large capacity chassis, \n> >>and large capacity SATA drives, RAID 5 is totally inapropriate IMHO \n> >>for _any_ application least of all databases.\n> I vote with Michael here. This is an extreme position to take that \n> can't be followed under many circumstances ITRW.\n> \n> \n> >So I've got a 14 drive chassis full of 300G SATA disks and need at \n> >least 3.5TB of data storage. In your mind the only possible solution \n> >is to buy another 14 drive chassis? Must be nice to never have a budget.\n> \n> I think you mean an infinite budget. That's even assuming it's \n> possible to get the HD's you need. I've had arrays that used all the \n> space I could give them in 160 HD cabinets. Two 160 HD cabinets was \n> neither within the budget nor going to perform well. I =had= to use \n> RAID 5. RAID 10 was just not usage efficient enough.\n> \n> \n> >Must be a hard sell if you've bought decent enough hardware that \n> >your benchmarks can't demonstrate a difference between a RAID 5 and \n> >a RAID 10 configuration on that chassis except in degraded mode (and \n> >the customer doesn't want to pay double for degraded mode performance)\n> \n> I have =never= had this situation. RAID 10 latency is better than \n> RAID 5 latency. RAID 10 write speed under heavy enough load, of any \n> type, is faster than RAID 5 write speed under the same \n> circumstances. RAID 10 robustness is better as well.\n> \n> Problem is that sometimes budget limits or number of HDs needed \n> limits mean you can't use RAID 10.\n> \n> \n> >>In reality I have yet to benchmark a system where RAID 5 on the \n> >>same number of drives with 8 drives or less in a single array beat \n> >>a RAID 10 with the same number of drives.\n> >\n> >Well, those are frankly little arrays, probably on lousy controllers...\n> Nah. Regardless of controller I can take any RAID 5 and any RAID 10 \n> built on the same HW under the same OS running the same DBMS and \n> =guarantee= there is an IO load above which it can be shown that the \n> RAID 10 will do writes faster than the RAID 5. The only exception in \n> my career thus far has been the aforementioned Mylex controller.\n> \n> OTOH, sometimes you have no choice but to \"take the hit\" and use RAID 5.\n> \n> \n> cheers,\n> Ron\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Dec 2005 12:51:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Tue, Dec 27, 2005 at 11:50:16AM -0500, Ron wrote:\n>Sorry. A decade+ RWE in production with RAID 5 using controllers as \n>bad as Adaptec and as good as Mylex, Chaparral, LSI Logic (including \n>their Engino stuff), and Xyratex under 5 different OS's (Sun, Linux, \n>M$, DEC, HP) on each of Oracle, SQL Server, DB2, mySQL, and pg shows \n>that RAID 5 writes are slower than RAID 5 reads\n\nWhat does that have to do with anything? 
That wasn't the question...\n\n>RAID 5 is not a silver bullet.\n\nWho said it was? Nothing is, not even RAID 10. The appropriate thing to\ndo is to make decisions based on requirements, not to make sweeping\nstatements that eliminate entire categories of solutions based on hand\nwaving.\n\nMike Stone\n", "msg_date": "Tue, 27 Dec 2005 14:05:16 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Bruce,\n\nOn 12/27/05 9:51 AM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Historically, I have heard that RAID5 is only faster than RAID10 if\n> there are six or more drives.\n\nI think the real question here is \"faster for what?\" Also, just like the\noptimizer tunables for cpu/disk/memory speed relationships, the standing\nguidance for RAID has become outdated. Couple that with the predominance of\nreally bad hardware RAID controllers and people not testing them or\nreporting their performance (HP, Adaptec, LSI, Dell) and we've got a mess.\n\nAll we can really do is report success with various point solutions.\n\nRAID5 and RAID50 work fine for our customers who do OLAP type applications\nwhich are read-mostly. However, it only works well on good hardware and\nsoftware, which at this time include the HW RAID controllers from 3Ware and\nreputedly Areca and SW using Linux SW RAID.\n\nI've heard that the external storage RAID controllers from EMC work well,\nand I'd suspect there are others, but none of the host-based SCSI HW RAID\ncontrollers I've tested work well on Linux. I say Linux, because I'm pretty\nsure that the HP smartarray controllers work well on Windows, but the Linux\ndriver is so bad I'd say it doesn't work at all.\n\nWRT RAID10, it seems like throwing double the number of disks at the\nproblems is something to be avoided if possible, though the random write\nperformance may be important for OLTP. I think this assertion should be\nretested however in light of the increased speed of checksumming hardware\nand / or CPUs and faster, more effective drive electronics (write combining,\nwrite cache, etc).\n\n- Luke \n\n\n", "msg_date": "Tue, 27 Dec 2005 11:18:19 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Bruce,\n\nOn 12/27/05 9:51 AM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Historically, I have heard that RAID5 is only faster than RAID10 if\n> there are six or more drives.\n\nSpeaking of testing / proof, check this site out:\n\n http://www.wlug.org.nz/HarddiskBenchmarks\n\nI really like the idea - post your bonnie++ results so people can learn from\nyour configurations.\n\nWe've built a performance reporting site, but we can't seem to get it into\nshape for release. I'd really like to light a performance leaderboard /\nexperiences site up somewhere...\n\n- Luke\n\n\n", "msg_date": "Tue, 27 Dec 2005 11:24:17 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 02:05 PM 12/27/2005, Michael Stone wrote:\n>On Tue, Dec 27, 2005 at 11:50:16AM -0500, Ron wrote:\n>>Sorry. 
A decade+ RWE in production with RAID 5 using controllers \n>>as bad as Adaptec and as good as Mylex, Chaparral, LSI Logic \n>>(including their Engino stuff), and Xyratex under 5 different OS's \n>>(Sun, Linux, M$, DEC, HP) on each of Oracle, SQL Server, DB2, \n>>mySQL, and pg shows that RAID 5 writes are slower than RAID 5 reads\n>\n>What does that have to do with anything? That wasn't the question...\nYour quoted position is \"there isn't a 'RAID 5 penalty' assuming \nyou've got a reasonably fast controller and you're doing large \nsequential writes (or have enough cache that random writes can be \nbatched as large sequential writes).\"\n\nMy experience across a wide range of HW, OSs, DBMS, and applications \nsays you are wrong. Given enough IO, RAID 5 takes a bigger \nperformance hit for writes than RAID 10 does.\n\nEnough IO, sequential or otherwise, will result in a situation where \na RAID 10 array using the same number of HDs (and therefore of ~1/2 \nthe usable capacity) will have better write performance than the \nequivalent RAID 5 built using the same number of HDs.\nThere is a 'RAID 5 write penalty'.\n\nSaid RAID 10 array will also be more robust than a RAID 5 built using \nthe same number of HDs.\n\nOTOH, that does not make RAID 5 \"bad\". Nor are statements like \n\"Never use RAID 5!\" realistic or reasonable.\n\nAlso, performance is not the only or even most important reason for \nchoosing RAID 10 or RAID 50 over RAID 5. Robustness considerations \ncan be more important than performance ones.\n\ncheers,\nRon\n\n\n", "msg_date": "Tue, 27 Dec 2005 14:57:13 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Tue, Dec 27, 2005 at 02:57:13PM -0500, Ron wrote:\n>Your quoted position is \"there isn't a 'RAID 5 penalty' assuming \n>you've got a reasonably fast controller and you're doing large \n>sequential writes (or have enough cache that random writes can be \n>batched as large sequential writes).\"\n\nAnd you said that RAID 5 writes are slower than reads. That's a\ncompletely different statement. The traditional meaning of \"RAID 5\npenalty\" is the cost of reading a stripe to calculate parity if only a\nsmall part of the stripe changes. It has a special name because it can\nresult in a condition that the performance is catastrophically worse\nthan an optimal workload, or even the single-disk non-RAID case. It's\nstill an issue, but might not be relevant for a particular workload.\n(Hence the recommendation to benchmark.) \n\n>My experience across a wide range of HW, OSs, DBMS, and applications\n>says you are wrong. Given enough IO, RAID 5 takes a bigger \n>performance hit for writes than RAID 10 does.\n\nI don't understand why you keep using the pejorative term \"performance\nhit\". Try describing the \"performance characteristics\" instead. Also,\nclaims about performance claims based on experience are fairly useless.\nEither you have data to provide (in which case claiming vast experience\nis unnecessary) or you don't.\n\n>Said RAID 10 array will also be more robust than a RAID 5 built using \n>the same number of HDs.\n\nAnd a RAID 6 will be more robust than either. 
Basing reliability on\n\"hopefully you wont have both disks in a mirror fail\" is just silly.\nEither you need double disk failure protection or you don't.\n\nMike Stone\n", "msg_date": "Tue, 27 Dec 2005 16:15:18 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "At 04:15 PM 12/27/2005, Michael Stone wrote:\n>I don't understand why you keep using the pejorative term \"performance\n>hit\". Try describing the \"performance characteristics\" instead.\n\npe·jor·a·tive ( P ) Pronunciation Key (p-jôr-tv, -jr-, pj-rtv, pj-)\nadj.\nTending to make or become worse.\nDisparaging; belittling.\n\nRAID 5 write performance is significantly enough \nless than RAID 5 read performance as to be a \nmatter of professional note and concern. That's \nnot \"disparaging or belittling\" nor is it \n\"tending to make or become worse\". It's \nmeasurable fact that has an adverse impact on \ncapacity planning, budgeting, HW deployment, etc.\n\nIf you consider calling a provable decrease in \nperformance while doing a certain task that has \nsuch effects \"a hit\" or \"bad\" pejorative, you are \nusing a definition for the word that is different than the standard one.\n\n\n>Also, claims about performance claims based on experience are fairly useless.\n>Either you have data to provide (in which case claiming vast experience\n>is unnecessary) or you don't.\n\nMy experience _is_ the data provided. Isn't it \nconvenient for you that I don't have the records \nfor every job I've done in 20 years, nor do I \nnecessarily have the right to release some \nspecifics for some of what I do have. I've said \nwhat I can as a service to the \ncommunity. Including to you. Your reaction \nimplies that I and others with perhaps equally or \nmore valuable experience to share shouldn't bother.\n\n\"One of the major differences between Man and \nBeast is that Man learns from others experience.\"\n\nIt's also impressive that you evidently seem to \nbe implying that you do such records for your own \njob experience _and_ that you have the legal \nright to publish them. In which case, please \nfeel free to impress me further by doing so.\n\n\n>>Said RAID 10 array will also be more robust \n>>than a RAID 5 built using the same number of HDs.\n>\n>And a RAID 6 will be more robust than either. Basing reliability on\n>\"hopefully you wont have both disks in a mirror fail\" is just silly.\n>Either you need double disk failure protection or you don't.\nThat statement is incorrect and ignores both \nprobability and real world statistical failure patterns.\n\nThe odds of a RAID 10 array of n HDs suffering a \nfailure that loses data are less than the odds of \nit happening in a RAID 6 array of n HDs. You are \ncorrect that RAID 6 is more robust than RAID 5.\n\ncheers,\nRon\n\n\n", "msg_date": "Tue, 27 Dec 2005 17:47:52 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Hi, William,\n\nWilliam Yu wrote:\n\n> Random write performance (small block that only writes to 1 drive):\n> 1 write requires N-1 reads + N writes --> 1/2N-1 %\n\nThis is not true. Most Raid-5 engines use XOR or similar checksum\nmethods. 
As opposed to cryptographic checksums, those can be updated and\ncorrected incrementally.\n\ncheck_new = check_old xor data_old xor data_new\n\nSo 2 reads and 2 writes are enough: read data and checksum, then adjust\nthe checksum via the data difference, and write data and new checksum.\n\nAnd often the old data block is still in cache, amounting to 1 read\nand 2 writes.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 05 Jan 2006 17:44:05 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" } ]
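The XOR arithmetic above, and the read-modify-write accounting that runs through this whole thread, can be checked with a few lines of code. The following Python sketch is purely illustrative: it models one byte per block and an assumed stripe of N data disks rather than any real controller's layout, and it only verifies that the incremental formula check_new = check_old xor data_old xor data_new agrees with recomputing parity from scratch. The I/O counts in the closing comment restate what was argued above (a small RAID 5 write pays 2 reads plus 2 writes, a full-stripe write needs no reads, a RAID 10 write pays 2 writes), with caching ignored.

# Toy model: one byte per block, an assumed stripe of N data disks.
import random
from functools import reduce

def update_parity(parity_old: int, data_old: int, data_new: int) -> int:
    # Incremental update, as in the message above:
    #   parity_new = parity_old xor data_old xor data_new
    # Cost per small random write: 2 reads (old data, old parity)
    # plus 2 writes (new data, new parity).
    return parity_old ^ data_old ^ data_new

N = 8                                    # data disks in the stripe (assumption)
stripe = [random.randrange(256) for _ in range(N)]
parity = reduce(lambda a, b: a ^ b, stripe)

i = 3                                    # disk receiving the small write
new_val = random.randrange(256)
parity_incremental = update_parity(parity, stripe[i], new_val)
stripe[i] = new_val
parity_recomputed = reduce(lambda a, b: a ^ b, stripe)
assert parity_incremental == parity_recomputed

# Physical I/O per logical write, cache effects ignored:
#   RAID 5, small random write : 2 reads + 2 writes (read-modify-write)
#   RAID 5, full-stripe write  : data + parity writes only, no reads needed
#   RAID 10                    : 2 writes (one per side of the mirror)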
[ { "msg_contents": "Sun hardware is a 4 CPU (8 cores) v40z, Dell is 6850 Quad XEON (8\ncores), both have 16GB RAM, and 2 internal drives, one drive has OS +\ndata and second drive has pg_xlog.\n\nRedHat AS4.0 U2 64-bit on both servers, PG8.1, 64bit RPMs.\n\nThanks,\nAnjan\n\n\n\n-----Original Message-----\nFrom: Juan Casero [mailto:[email protected]] \nSent: Monday, December 19, 2005 11:17 PM\nTo: [email protected]\nSubject: Re: [PERFORM] High context switches occurring\n\nGuys -\n\nHelp me out here as I try to understand this benchmark. What is the Sun\n\nhardware and operating system we are talking about here and what is the\nintel \nhardware and operating system? What was the Sun version of PostgreSQL \ncompiled with? Gcc on Solaris (assuming sparc) or Sun studio? What was\n\nPostgreSQL compiled with on intel? Gcc on linux?\n\nThanks,\nJuan\n\nOn Monday 19 December 2005 21:08, Anjan Dave wrote:\n> Re-ran it 3 times on each host -\n>\n> Sun:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 827.810778 (including connections establishing)\n> tps = 828.410801 (excluding connections establishing)\n> real 0m36.579s\n> user 0m1.222s\n> sys 0m3.422s\n>\n> Intel:\n> -bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 3000\n> number of transactions actually processed: 30000/30000\n> tps = 597.067503 (including connections establishing)\n> tps = 597.606169 (excluding connections establishing)\n> real 0m50.380s\n> user 0m2.621s\n> sys 0m7.818s\n>\n> Thanks,\n> Anjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Anjan Dave\n> \tSent: Wed 12/7/2005 10:54 AM\n> \tTo: Tom Lane\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n>\n>\n> \tThanks for your inputs, Tom. I was going after high concurrent\nclients,\n> \tbut should have read this carefully -\n>\n> \t-s scaling_factor\n> \t this should be used with -i (initialize) option.\n> \t number of tuples generated will be multiple of\nthe\n> \t scaling factor. For example, -s 100 will imply\n10M\n> \t (10,000,000) tuples in the accounts table.\n> \t default is 1. NOTE: scaling factor should be at\nleast\n> \t as large as the largest number of clients you\nintend\n> \t to test; else you'll mostly be measuring update\n> \tcontention.\n>\n> \tI'll rerun the tests.\n>\n> \tThanks,\n> \tAnjan\n>\n>\n> \t-----Original Message-----\n> \tFrom: Tom Lane [mailto:[email protected]]\n> \tSent: Tuesday, December 06, 2005 6:45 PM\n> \tTo: Anjan Dave\n> \tCc: Vivek Khera; Postgresql Performance\n> \tSubject: Re: [PERFORM] High context switches occurring\n>\n> \t\"Anjan Dave\" <[email protected]> writes:\n> \t> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench\n> \t> starting vacuum...end.\n> \t> transaction type: TPC-B (sort of)\n> \t> scaling factor: 1\n> \t> number of clients: 1000\n> \t> number of transactions per client: 30\n> \t> number of transactions actually processed: 30000/30000\n> \t> tps = 45.871234 (including connections establishing)\n> \t> tps = 46.092629 (excluding connections establishing)\n>\n> \tI can hardly think of a worse way to run pgbench :-(. These\nnumbers are\n> \tabout meaningless, for two reasons:\n>\n> \t1. 
You don't want number of clients (-c) much higher than\nscaling factor\n> \t(-s in the initialization step). The number of rows in the\n\"branches\"\n> \ttable will equal -s, and since every transaction updates one\n> \trandomly-chosen \"branches\" row, you will be measuring mostly\nrow-update\n> \tcontention overhead if there's more concurrent transactions than\nthere\n> \tare rows. In the case -s 1, which is what you've got here,\nthere is no\n> \tactual concurrency at all --- all the transactions stack up on\nthe\n> \tsingle branches row.\n>\n> \t2. Running a small number of transactions per client means that\n> \tstartup/shutdown transients overwhelm the steady-state data.\nYou should\n> \tprobably run at least a thousand transactions per client if you\nwant\n> \trepeatable numbers.\n>\n> \tTry something like \"-s 10 -c 10 -t 3000\" to get numbers\nreflecting test\n> \tconditions more like what the TPC council had in mind when they\ndesigned\n> \tthis benchmark. I tend to repeat such a test 3 times to see if\nthe\n> \tnumbers are repeatable, and quote the middle TPS number as long\nas\n> \tthey're not too far apart.\n>\n> \t regards, tom lane\n>\n>\n> \t---------------------------(end of\nbroadcast)---------------------------\n> \tTIP 5: don't forget to increase your free space map settings\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that\nyour\n> message can get through to the mailing list cleanly\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Tue, 20 Dec 2005 14:50:03 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High context switches occurring" } ]
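The benchmarking checklist quoted above (scaling factor at least as large as the client count, a thousand or more transactions per client, three runs with the middle TPS quoted) is easy to automate. The Python wrapper below is hypothetical and not part of pgbench: the database name is a placeholder, it assumes the test tables were already initialized once with pgbench -i -s 10 pgbench, and it parses the tps line exactly as it appears in the runs posted in this thread, so the regular expression may need adjusting for other pgbench versions.

import re
import statistics
import subprocess

def median_tps(db="pgbench", scale=10, clients=10, txns=3000, runs=3):
    # Guard rails taken from the discussion above.
    assert scale >= clients, "scaling factor should be >= number of clients"
    assert txns >= 1000, "use enough transactions to swamp startup/shutdown transients"
    tps = []
    for _ in range(runs):
        proc = subprocess.run(
            ["pgbench", "-s", str(scale), "-c", str(clients), "-t", str(txns), db],
            capture_output=True, text=True, check=True)
        output = proc.stdout + proc.stderr
        m = re.search(r"tps = ([\d.]+) \(including connections establishing\)", output)
        tps.append(float(m.group(1)))
    # Three runs: the median is the "middle TPS number" recommended above.
    return statistics.median(tps)

# print(median_tps())   # e.g. roughly 828 on the v40z vs 597 on the 6850 in the runs above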
[ { "msg_contents": "Hello,\n\nWe have the next scenario:\n\nLinux box with postgresql 7.2.1-1 (client)\nWindows XP with postgresql 8.1.1 (server)\nWindows XP with postgresql 8.1.1 (client)\n\nAll connected in 10Mb LAN\n\nIn server box, we have a table with 65000 rows and usign \"psql\" we have \nthese results:\n\nLinux box with psql version 7.2.1 versus Windows XP server:\n\nselect * from <table>; -> 7 seconds aprox. to obtain a results.\nNetwork utilization: 100%\n\nWindows XP client box with psql version 8.1.1 versus Windows XP server:\n\nselect * from <table>; -> 60 seconds aprox. to obtain a results!!!!!!!!!!!!\nNetwork utilization: 3%\n\nWindows XP server box with psql version 8.1.1 versus Windows XP server:\n\nselect * from <table>; -> <1 seconds aprox. to obtain a results.\nNetwork utilization: 0%\n\nIs a really problem, because we are migrating a old server to 8.0 \nversion in a windows box, and our application works really slow....\n\nThanks in advance,\n\nJosep Maria\n\n-- \n\nJosep Maria Pinyol i Fontseca\nResponsable �rea de programaci�\n\nENDEPRO - Enginyeria de programari\nPasseig Anselm Clav�, 19 Bx. 08263 Call�s (Barcelona)\nTel. +34 936930018 - Mob. +34 600310755 - Fax. +34 938361994\[email protected] - http://www.endepro.com\n\n\nAquest missatge i els documents en el seu cas adjunts, \nes dirigeixen exclusivament al seu destinatari i poden contenir \ninformaci� reservada i/o CONFIDENCIAL, us del qual no est� \nautoritzat ni la divulgaci� del mateix, prohibit per la legislaci� \nvigent (Llei 32/2002 SSI-CE). Si ha rebut aquest missatge per error, \nli demanem que ens ho comuniqui immediatament per la mateixa via o \nb� per tel�fon (+34936930018) i procedeixi a la seva destrucci�. \nAquest e-mail no podr� considerar-se SPAM.\n\nEste mensaje, y los documentos en su caso anexos, \nse dirigen exclusivamente a su destinatario y pueden contener \ninformaci�n reservada y/o CONFIDENCIAL cuyo uso no \nautorizado o divulgaci�n est� prohibida por la legislaci�n \nvigente (Ley 32/2002 SSI-CE). Si ha recibido este mensaje por error, \nle rogamos que nos lo comunique inmediatamente por esta misma v�a o \npor tel�fono (+34936930018) y proceda a su destrucci�n. \nEste e-mail no podr� considerarse SPAM.\n\nThis message and the enclosed documents are directed exclusively \nto its receiver and can contain reserved and/or confidential \ninformation, from which use isn�t allowed its divulgation, forbidden \nby the current legislation (Law 32/2002 SSI-CE). If you have received \nthis message by mistake, we kindly ask you to communicate it to us \nright away by the same way or by phone (+34936930018) and destruct it. \nThis e-mail can�t be considered as SPAM. \n\n", "msg_date": "Wed, 21 Dec 2005 10:05:02 +0100", "msg_from": "Josep Maria Pinyol Fontseca <[email protected]>", "msg_from_op": true, "msg_subject": "Windows performance again" }, { "msg_contents": "Josep Maria Pinyol Fontseca wrote:\n> \n> Linux box with psql version 7.2.1 versus Windows XP server:\n> \n> select * from <table>; -> 7 seconds aprox. to obtain a results.\n> Network utilization: 100%\n> \n> Windows XP client box with psql version 8.1.1 versus Windows XP server:\n> \n> select * from <table>; -> 60 seconds aprox. to obtain a results!!!!!!!!!!!!\n> Network utilization: 3%\n> \n> Windows XP server box with psql version 8.1.1 versus Windows XP server:\n> \n> select * from <table>; -> <1 seconds aprox. to obtain a results.\n> Network utilization: 0%\n\nIt's *got* to be the network configuration on the client machine. 
I'd be \ntempted to install ethereal on the linux box and watch for the \ndifference between the two networked sessions.\n\nI'm guessing it might be something to do with TCP/IP window/buffer sizes \nand delays on ACKs - this certainly used to be an issue on older MS \nsystems, but I must admit I thought they'd fixed it for XP.\n\nIf not that, could some firewall/security system be slowing network traffic?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 21 Dec 2005 09:58:48 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows performance again" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n\n> Josep Maria Pinyol Fontseca wrote:\n\n> > Windows XP client box with psql version 8.1.1 versus Windows XP server:\n> > select * from <table>; -> 60 seconds aprox. to obtain a results!!!!!!!!!!!!\n> > Network utilization: 3%\n\nThe 60 seconds sounds suspiciously like a DNS problem.\n\n-- \ngreg\n\n", "msg_date": "21 Dec 2005 09:48:23 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows performance again" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Josep Maria Pinyol Fontseca wrote:\n>> Network utilization: 0%\n\n> It's *got* to be the network configuration on the client machine.\n\nWe've seen gripes of this sort before --- check the list archives for\npossible fixes. I seem to recall something about a \"QoS patch\", as\nwell as suggestions to get rid of third-party packages that might be\ninterfering with the TCP stack.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Dec 2005 11:10:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows performance again " } ]
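Before reaching for ethereal, it is worth timing the same SELECT from both vantage points under identical conditions, since the figures reported above mix server execution time, data transfer and client rendering. The Python harness below is a hypothetical sketch, not a fix: the user, database and table names are placeholders, it assumes password-less access (trust authentication or a .pgpass file), and it discards the rows so that only execution plus transfer time is measured.

import os
import subprocess
import sys
import time

def time_select(host, db, query, user="postgres", repeats=3):
    # Report the best of several runs so a one-off hiccup does not dominate.
    best = None
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(
            ["psql", "-h", host, "-U", user, "-d", db,
             "-A", "-t", "-o", os.devnull, "-c", query],
            check=True)
        elapsed = time.perf_counter() - start
        best = elapsed if best is None else min(best, elapsed)
    return best

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "localhost"
    seconds = time_select(host, "testdb", "SELECT * FROM some_table")  # placeholders
    print("%s: %.2f s" % (host, seconds))

# Run it on the server with "localhost" and from the client with the server's
# IP address; the gap between the two numbers is what the network path
# (delayed ACKs, window sizes, name resolution, firewalls or other packages
# hooked into the TCP stack) is costing you.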