[
{
"msg_contents": "Hi,\n\npostgres 8.0.1, mac os x 10.3.9\n\ni have a select with multiple OR's combined with one AND:\n\nexplain analyze SELECT t0.ATTRIBUTE_TYPE FROM ATTRIBUTE_VALUE t0 WHERE \n(((t0.ATTRIBUTE_TYPE = 'pb'::varchar(10) OR t0.ATTRIBUTE_TYPE = \n'po'::varchar(10) OR t0.ATTRIBUTE_TYPE = 'pn'::varchar(10) OR \nt0.ATTRIBUTE_TYPE = 'ps'::varchar(10))) AND t0.ID_ATTRIBUTE = \n17::int8);\n\nThe result is the following. It shows that postgres does not use an \nindex which makes the select pretty slow.\n\nSeq Scan on attribute_value t0 (cost=0.00..529.13 rows=208 width=5) \n(actual time=66.591..66.591 rows=0 loops=1)\n Filter: ((((attribute_type)::text = 'pb'::text) OR \n((attribute_type)::text = 'po'::text) OR ((attribute_type)::text = \n'pn'::text) OR ((attribute_type)::text = 'ps'::text)) AND (id_attribute \n= 17::bigint))\n Total runtime: 66.664 ms\n(3 rows)\n\n\nWhen i remove one OR qualifier one can see that now an index is used.\n\nexplain analyze SELECT t0.ATTRIBUTE_TYPE FROM ATTRIBUTE_VALUE t0 WHERE \n(((t0.ATTRIBUTE_TYPE = 'pb'::varchar(10) OR t0.ATTRIBUTE_TYPE = \n'po'::varchar(10) OR t0.ATTRIBUTE_TYPE = 'pn'::varchar(10))) AND \nt0.ID_ATTRIBUTE = 17::int8);\n\nIndex Scan using attribute_value__attribute_type__id_attribute, \nattribute_value__attribute_type__id_attribute, \nattribute_value__attribute_type__id_attribute on attribute_value t0 \n(cost=0.00..451.82 rows=137 width=5) (actual time=0.301..0.301 rows=0 \nloops=1)\n Index Cond: ((((attribute_type)::text = 'pb'::text) AND \n(id_attribute = 17::bigint)) OR (((attribute_type)::text = 'po'::text) \nAND (id_attribute = 17::bigint)) OR (((attribute_type)::text = \n'pn'::text) AND (id_attribute = 17::bigint)))\n Filter: ((((attribute_type)::text = 'pb'::text) OR \n((attribute_type)::text = 'po'::text) OR ((attribute_type)::text = \n'pn'::text)) AND (id_attribute = 17::bigint))\n Total runtime: 0.414 ms\n(4 rows)\n\nWhen i do 'set enable_seqscan=no' the index is used of course. \nUnfortunately the sql is generated on the fly and its not easy, more or \nless impossible to selectively enable / disable seqscan. Any hint how \nto force postgres to use the index even with more OR parts?\n\nregards, David\n\n",
"msg_date": "Thu, 12 May 2005 11:07:41 +0200",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "AND OR combination: index not being used"
},
{
"msg_contents": "David Teran <[email protected]> writes:\n> Any hint how \n> to force postgres to use the index even with more OR parts?\n\nMore up-to-date statistics would evidently help; the thing is estimating\nhundreds of rows returned and actually finding none.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 May 2005 10:15:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AND OR combination: index not being used "
},
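Tom's point is that the planner chose the seq scan because it expected roughly 208 matching rows where the query actually returns none, so the first thing to try is giving it better estimates. A minimal sketch of doing that on 8.0, using the table and column names from David's query; the statistics target of 100 is an illustrative value, not something recommended in this thread:

ALTER TABLE attribute_value ALTER COLUMN attribute_type SET STATISTICS 100;
ALTER TABLE attribute_value ALTER COLUMN id_attribute SET STATISTICS 100;

-- ANALYZE refreshes the per-column statistics at the new target; afterwards,
-- re-run the original EXPLAIN ANALYZE and compare the estimated row count.
ANALYZE attribute_value;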
{
"msg_contents": "\nOn 12.05.2005, at 16:15, Tom Lane wrote:\n\n> David Teran <[email protected]> writes:\n>> Any hint how\n>> to force postgres to use the index even with more OR parts?\n>\n> More up-to-date statistics would evidently help; the thing is \n> estimating\n> hundreds of rows returned and actually finding none.\n>\nI always do a 'vacuum analyze' if something does not work as expected. \nBut this did not help. Any other tip?\n\nregards, David\n\n",
"msg_date": "Fri, 13 May 2005 00:38:22 +0200",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AND OR combination: index not being used "
}
]
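Since VACUUM ANALYZE alone did not change the plan, another angle (not suggested in the thread, so treat it as an assumption about this schema) is the index definition itself: the index name suggests it leads with attribute_type, while the one condition shared by every OR branch is id_attribute = 17. A multicolumn index that leads with id_attribute lets the planner apply that equality as a single index condition no matter how many OR terms follow; the index name below is made up for illustration:

-- Hypothetical index putting the shared AND condition first.
CREATE INDEX attribute_value__id_attribute__attribute_type
ON attribute_value (id_attribute, attribute_type);

-- Refresh statistics so the planner costs the new index realistically.
ANALYZE attribute_value;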
[
{
"msg_contents": "Hello,\n\n\nI'd like to tune Postgres for large data import (using Copy from).\n\n\nhere are a few steps already done:\n\n\n\n1) use 3 different disks for:\n\n\t-1: source data\n\t-2: index tablespaces\n\t-3: data tablespaces\n\t\n\t\n2) define all foreign keys as initially deferred\n\n\n3) tune some parameters:\n\n\n\n\tmax_connections =20\n\tshared_buffers =30000 \n\twork_mem = 8192 \n\tmaintenance_work_mem = 32768 \n\tcheckpoint_segments = 12\n\n\t(I also modified the kernel accordingly)\n\n\n\n\n4) runs VACUUM regulary\n\n\nThe server runs RedHat and has 1GB RAM\n\nIn the production (which may run on a better server), I plan to: \n\n- import a few millions rows per day,\n- keep up to ca 100 millions rows in the db\n- delete older data\n\n\n\n\nI've seen a few posting on hash/btree indexes, which say that hash index do\nnot work very well in Postgres;\ncurrently, I only use btree indexes. Could I gain performances whole using\nhash indexes as well ?\n\nHow does Postgres handle concurrent copy from on: same table / different\ntables ?\n\n\nI'd be glad on any further suggestion on how to further increase my\nperformances.\n\n\n\n\nMarc\n\n\n\n\n-- \n+++ Lassen Sie Ihren Gedanken freien Lauf... z.B. per FreeSMS +++\nGMX bietet bis zu 100 FreeSMS/Monat: http://www.gmx.net/de/go/mail\n",
"msg_date": "Thu, 12 May 2005 12:34:46 +0200 (MEST)",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning Postgres for large data import (using Copy from)"
},
{
"msg_contents": "\"Marc Mamin\" <[email protected]> writes:\n> 1) use 3 different disks for:\n\n> \t-1: source data\n> \t-2: index tablespaces\n> \t-3: data tablespaces\n\nIt's probably much more important to know where you put the WAL.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 May 2005 10:31:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning Postgres for large data import (using Copy from) "
},
{
"msg_contents": "Marc Mamin wrote:\n> Hello,\n>\n\nI'm not an expert, but I'll give some suggestions.\n\n>\n> I'd like to tune Postgres for large data import (using Copy from).\n>\n\nI believe that COPY FROM <file> is supposed to be faster than COPY FROM\nSTDIN, but <file> must be available to the backend process. If you can\ndo it, you should think about it, as it eliminates the communication\nbetween the client and the backend.\n\n>\n> here are a few steps already done:\n>\n>\n>\n> 1) use 3 different disks for:\n>\n> \t-1: source data\n> \t-2: index tablespaces\n> \t-3: data tablespaces\n> \t\n\nMake sure pg_xlog is on it's own filesystem. It contains the\nwrite-ahead-log, and putting it by itself keeps the number of seeks\ndown. If you are constrained, I think pg_xlog is more important than\nmoving the index tablespaces.\n\n> \t\n> 2) define all foreign keys as initially deferred\n>\n>\n> 3) tune some parameters:\n>\n>\n>\n> \tmax_connections =20\n> \tshared_buffers =30000\n> \twork_mem = 8192\n> \tmaintenance_work_mem = 32768\n> \tcheckpoint_segments = 12\n>\n> \t(I also modified the kernel accordingly)\n>\n\nDon't forget to increase your free space map if you are going to be\ndoing deletes frequently.\n\n>\n>\n>\n> 4) runs VACUUM regulary\n>\n>\n> The server runs RedHat and has 1GB RAM\n>\n> In the production (which may run on a better server), I plan to:\n>\n> - import a few millions rows per day,\n> - keep up to ca 100 millions rows in the db\n> - delete older data\n>\n>\n>\n>\n> I've seen a few posting on hash/btree indexes, which say that hash index do\n> not work very well in Postgres;\n> currently, I only use btree indexes. Could I gain performances whole using\n> hash indexes as well ?\n>\nI doubt it.\n\n> How does Postgres handle concurrent copy from on: same table / different\n> tables ?\n>\n\nI think it is better with different tables. If using the same table, and\nthere are indexes, it has to grab a lock for updating the index, which\ncauses contention between 2 processes writing to the same table.\n\n>\n> I'd be glad on any further suggestion on how to further increase my\n> performances.\n>\n\nSince you are deleting data often, and copying often, I might recommend\nusing a partition scheme with a view to bind everything together. That\nway you can just drop the old table rather than doing a delete. I don't\nknow how this would affect foreign key references.\n\nBut basically you can create a new table, and do a copy without having\nany indexes, then build the indexes, analyze, update the view.\n\nAnd when deleting you can update the view, and drop the old table.\n\nSomething like this:\n\nCREATE TABLE table_2005_05_11 AS (blah);\nCOPY FROM ... ;\nCREATE INDEX blah ON table_2005_05_11(blah);\nCREATE OR REPLACE VIEW table AS\n\tSELECT * FROM table_2005_05_10\n\tUNION ALL SELECT * FROM table_2005_05_11;\nVACUUM ANALYZE table_2005_05_11;\n...\n\nJohn\n=:->\n\n>\n>\n>\n> Marc\n>\n>\n>\n>",
"msg_date": "Thu, 12 May 2005 09:53:31 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning Postgres for large data import (using Copy from)"
},
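John's CREATE TABLE / COPY / VIEW outline is pseudocode (the "blah" placeholders); a slightly more concrete sketch of the same pseudo-partitioning cycle follows. The table names, the '/data/import/...' path, and the use of CREATE TABLE (LIKE ...) and COPY ... CSV (both available in 8.0) are assumptions for illustration, not details confirmed in the thread:

-- 1. Load the new day's rows into their own table, with no indexes yet.
CREATE TABLE table_2005_05_12 (LIKE table_2005_05_11);
COPY table_2005_05_12 FROM '/data/import/2005_05_12.csv' WITH CSV;

-- 2. Build indexes and gather statistics only after the bulk load.
CREATE INDEX table_2005_05_12_blah_idx ON table_2005_05_12 (blah);
ANALYZE table_2005_05_12;

-- 3. Redefine the view so it covers the current window of tables.
CREATE OR REPLACE VIEW all_data AS
SELECT * FROM table_2005_05_11
UNION ALL
SELECT * FROM table_2005_05_12;

-- 4. Dropping a table no longer referenced by the view replaces a slow bulk DELETE.
DROP TABLE table_2005_05_10;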
{
"msg_contents": "Marc,\n\n> 1) use 3 different disks for:\n>\n> \t-1: source data\n> \t-2: index tablespaces\n> \t-3: data tablespaces\n\nOthers have already told you about the importance of relocating WAL. If you \nare going to be building indexes on the imported data, you might find it \nbeneficial to relocate pgsql_tmp for the database in question as well. \nAlso, I generally find it more beneficial to seperate the few largest tables \nto their own disk resources than to put all tables on one resource and all \ndisks on another. For example, for TPCH-like tests, I do\narray0: OS and pgsql_tmp\narray1: LINEITEM\narray2: LINEITEM Indexes\narray3: all other tables and indexes\narray4: pg_xlog\narray5: source data\n\nThis allows me to load a 100G (actually 270G) TPCH-like database in < 2 hours, \nnot counting index-building.\n\n> 2) define all foreign keys as initially deferred\n\nIt would be better to drop them before import and recreate them afterwards. \nSame for indexes unless those indexes are over 2G in size.\n\n> \tmax_connections =20\n> \tshared_buffers =30000\n> \twork_mem = 8192\n\nNot high enough, unless you have very little RAM. On an 8G machine I'm using \n256MB. You might want to use 64MB or 128MB.\n\n> \tmaintenance_work_mem = 32768\n\nREALLY not high enough. You're going to need to build big indexes and \npossibly vacuum large tables. I use the maximum of 1.98GB. Use up to 1/3 of \nyour RAM for this.\n\n> \tcheckpoint_segments = 12\n\nAlso way too low. Put pg_xlog on its own disk, give in 128 to 512 segments \n(up to 8G).\n\n> The server runs RedHat and has 1GB RAM\n\nMake sure you're running a 2.6.10+ kernel. Make sure ext3 is set noatime, \ndata=writeback. Buy more RAM. Etc.\n\n> How does Postgres handle concurrent copy from on: same table / different\n> tables ?\n\nSame table is useless; the imports will effectively serialize (unless you use \npseudo-partitioning). You can parallel load on multiple tables up to the \nlower of your number of disk channels or number of processors.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 12 May 2005 10:25:41 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning Postgres for large data import (using Copy from)"
}
]
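Read together, Josh's numbers translate into something like the following postgresql.conf fragment for the 1GB machine Marc describes. In 8.0 these settings are raw integers (8KB pages for shared_buffers, KB for the *_mem settings), and the exact values here are one interpretation of the advice above, not settings that were tested in this thread:

shared_buffers = 30000          # Marc's existing value, roughly 234MB
work_mem = 65536                # 64MB, the low end of Josh's 64-128MB suggestion
maintenance_work_mem = 327680   # about 320MB, roughly 1/3 of 1GB RAM
checkpoint_segments = 128       # the low end of 128-512, with pg_xlog on its own disk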
[
{
"msg_contents": "Ross,\n\n> Memcached is a PG memory store, I gather,\n\nNope. It's a hyperfast resident-in-memory hash that allows you to stash stuff \nlike user session information and even materialized query set results. \nThanks to SeanC, we even have a plugin, pgmemcached.\n\n> but...what is squid, lighttpd? \n> anything directly PG-related?\n\nNo. These are all related to making the web server do more. The idea is \nNOT to hit the database every time you have to serve up a web page, and \npossibly not to hit the web server either. For example, you can use squid 3 \nfor \"reverse\" caching in front of your web server, and serve far more page \nviews than you could with Apache alone.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 12 May 2005 11:51:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
}
]
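pgmemcached is only name-dropped above; for readers who have not seen it, the idea is that the database itself can push and pull values from memcached, so cached entries can be kept in step with the data. The call names below (memcache_server_add, memcache_set, memcache_get) are recalled from pgmemcache's documentation rather than confirmed by this thread, so treat both names and signatures as assumptions to check against the installed version:

-- Assumed pgmemcache API; verify against your pgmemcache build before relying on it.
SELECT memcache_server_add('127.0.0.1:11211');

-- Stash an expensive-to-compute value under a key the web tier also knows...
SELECT memcache_set('session:54', 'serialized session state');

-- ...and fetch it back later without touching the base tables.
SELECT memcache_get('session:54');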
[
{
"msg_contents": "Hello,\n\nI am facing a problem in optimizing the query shown below.\n\nMost queries in the application do only find about 20 to 100 matching rows.\n\nThe query joins the table taufgaben_mitarbeiter to taufgaben on which a \ncondition like the following \"where clause\" is frequently used.\n\nwhere\nam.fmitarbeiter_id = 54\n\nthen there is a nested join to taufgaben -> tprojekt -> tkunden_kst -> \ntkunden.\n\nWhat I would like to achieve is that before joining all the tables that \nthe join of \n\ntaufgaben_mitarbeiter \n(... from\ntaufgaben left join taufgaben_mitarbeiter am\non taufgaben.fid = am.faufgaben_id)\n\nis done and that the where condition is evaluated. Than an index scan to join the other data is run.\nWhat is happening at the moment (if I understood the explain analyze) is that the full join is done and at the end the where condition is done.\n\nThe query with seqscan and nestloop enabled takes about 3 seconds.\nThe query with both disabled takes 0.52 seconds\nThe query with only nestlop disabled takes 0.6 seconds\nand\nwith only sesscan disabled takes about 3 seconds.\n\nBelow you can find the explain analyze from \"seqscan and nestloop enabled\" and from both disabled. The problem seems to be right at the beginning when the rows are badly estimated.\n...\nMerge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Nested Loop (cost=1621.51..1729.28 rows=6 width=2541) (actual time=328.000..3125.000 rows=1118 loops=1)\"\n...\nI am using PostgreSQL 8.0 on Windows\n\nThank you for any idea\n\n\n\n-- \nKind Regards / Viele Grᅵᅵe\n\nSebastian Hennebrueder\n\n-----\nhttp://www.laliluna.de/tutorials.html\nTutorials for Java, Struts, JavaServer Faces, JSP, Hibernate, EJB and more.\n\nenabled seqscan and nested_loop\n\nexplain analyze\nSELECT taufgaben.fid AS taufgaben_fid, taufgaben.fprojekt_id AS\n taufgaben_fprojekt_id, taufgaben.fnummer AS taufgaben_fnummer,\n taufgaben.fbudget AS taufgaben_fbudget, taufgaben.ftyp AS taufgaben_ftyp,\n taufgaben.fberechnungsart AS taufgaben_fberechnungsart,\n taufgaben.fverrechnung_extern AS taufgaben_fverrechnung_extern,\n taufgaben.fverrechnungsbasis AS taufgaben_fverrechnungsbasis,\n taufgaben.fstatus AS taufgaben_fstatus, taufgaben.fkurzbeschreibung AS\n taufgaben_fkurzbeschreibung, taufgaben.fansprechpartner AS\n taufgaben_fansprechpartner, taufgaben.fanforderer AS taufgaben_fanforderer,\n taufgaben.fstandort_id AS taufgaben_fstandort_id, taufgaben.fwunschtermin\n AS taufgaben_fwunschtermin, taufgaben.fstarttermin AS\n taufgaben_fstarttermin, taufgaben.fgesamtaufwand AS\n taufgaben_fgesamtaufwand, taufgaben.fistaufwand AS taufgaben_fistaufwand,\n taufgaben.fprio AS taufgaben_fprio, taufgaben.ftester AS taufgaben_ftester,\n taufgaben.ffaellig AS taufgaben_ffaellig, taufgaben.flevel AS\n taufgaben_flevel, taufgaben.fkategorie AS taufgaben_fkategorie,\n taufgaben.feintragbearbeitung AS taufgaben_feintragbearbeitung,\n taufgaben.fbearbeitungsstatus AS taufgaben_fbearbeitungsstatus,\n taufgaben.fsolllimit AS taufgaben_fsolllimit, taufgaben.fistlimit AS\n taufgaben_fistlimit, taufgaben.fpauschalbetrag AS\n taufgaben_fpauschalbetrag, taufgaben.frechnungslaeufe_id AS\n taufgaben_frechnungslaeufe_id, taufgaben.fzuberechnen AS\n taufgaben_fzuberechnen, tprojekte.fid AS tprojekte_fid,\n tprojekte.fbezeichnung AS tprojekte_fbezeichnung, tprojekte.fprojektnummer\n AS tprojekte_fprojektnummer, tprojekte.fbudget AS tprojekte_fbudget,\n tprojekte.fverrechnung_extern AS tprojekte_fverrechnung_extern,\n tprojekte.fstatus AS 
tprojekte_fstatus, tprojekte.fkunden_kst_id AS\n tprojekte_fkunden_kst_id, tprojekte.fverrechnungsbasis AS\n tprojekte_fverrechnungsbasis, tprojekte.fberechnungsart AS\n tprojekte_fberechnungsart, tprojekte.fprojekttyp AS tprojekte_fprojekttyp,\n tprojekte.fkostentraeger_id AS tprojekte_fkostentraeger_id,\n tprojekte.fprojektleiter_id AS tprojekte_fprojektleiter_id,\n tprojekte.fpauschalsatz AS tprojekte_fpauschalsatz,\n tprojekte.frechnungslaeufe_id AS tprojekte_frechnungslaeufe_id,\n tprojekte.fzuberechnen AS tprojekte_fzuberechnen, tprojekte.faufschlagrel\n AS tprojekte_faufschlagrel, tprojekte.faufschlagabs AS\n tprojekte_faufschlagabs, tprojekte.fbearbeitungsstatus AS\n tprojekte_fbearbeitungsstatus, tuser.fusername AS tuser_fusername,\n tuser.fpassword AS tuser_fpassword, tuser.fvorname AS tuser_fvorname,\n tuser.fnachname AS tuser_fnachname, tuser.fismitarbeiter AS\n tuser_fismitarbeiter, tuser.flevel AS tuser_flevel, tuser.fkuerzel AS\n tuser_fkuerzel, taufgaben.floesungsbeschreibung AS\n taufgaben_floesungsbeschreibung, taufgaben.ffehlerbeschreibung AS\n taufgaben_ffehlerbeschreibung, taufgaben.faufgabenstellung AS\n taufgaben_faufgabenstellung, taufgaben.fkritischeaenderungen AS\n taufgaben_fkritischeaenderungen, taufgaben.fbdeaufgabenersteller_id AS\n taufgaben_fbdeaufgabenersteller_id, taufgaben.fzufaktorieren AS\n taufgaben_fzufaktorieren, tprojekte.fzufaktorieren AS\n tprojekte_fzufaktorieren, taufgaben.fisdirty AS taufgaben_fisdirty,\n taufgaben.fnf_kunde_stunden AS taufgaben_fnf_kunde_stunden,\n taufgaben.fzf_kunde_stunden AS taufgaben_fzf_kunde_stunden,\n taufgaben.fbf_kunde_stunden AS taufgaben_fbf_kunde_stunden,\n taufgaben.fnf_kunde_betrag AS taufgaben_fnf_kunde_betrag,\n taufgaben.fzf_kunde_betrag AS taufgaben_fzf_kunde_betrag,\n taufgaben.fbf_kunde_betrag AS taufgaben_fbf_kunde_betrag,\n tprojekte.feurobudget AS tprojekte_feurobudget, tprojekte.fnf_kunde_stunden\n AS tprojekte_fnf_kunde_stunden, tprojekte.fzf_kunde_stunden AS\n tprojekte_fzf_kunde_stunden, tprojekte.fbf_kunde_stunden AS\n tprojekte_fbf_kunde_stunden, tprojekte.fnf_kunde_betrag AS\n tprojekte_fnf_kunde_betrag, tprojekte.fzf_kunde_betrag AS\n tprojekte_fzf_kunde_betrag, tprojekte.fbf_kunde_betrag AS\n tprojekte_fbf_kunde_betrag, tprojekte.fisdirty AS tprojekte_fisdirty,\n tprojekte.fgesamt_brutto_betrag AS tprojekte_fgesamt_brutto_betrag,\n tprojekte.fgesamt_brutto_stunden AS tprojekte_fgesamt_brutto_stunden,\n tprojekte.fgesamt_netto_stunden AS tprojekte_fgesamt_netto_stunden,\n taufgaben.fgesamt_brutto_stunden AS taufgaben_fgesamt_brutto_stunden,\n taufgaben.fgesamt_brutto_betrag AS taufgaben_fgesamt_brutto_betrag,\n taufgaben.fhinweisgesendet AS taufgaben_fhinweisgesendet,\n taufgaben.fwarnunggesendet AS taufgaben_fwarnunggesendet,\n tprojekte.fhinweisgesendet AS tprojekte_fhinweisgesendet,\n tprojekte.fwarnunggesendet AS tprojekte_fwarnunggesendet,\n tuser.femailadresse AS tuser_femailadresse, taufgaben.fnfgesamtaufwand AS\n taufgaben_fnfgesamtaufwand, taufgaben.fnf_netto_stunden AS\n taufgaben_fnf_netto_stunden, taufgaben.fnf_brutto_stunden AS\n taufgaben_fnf_brutto_stunden, taufgaben.fnfhinweisgesendet AS\n taufgaben_fnfhinweisgesendet, taufgaben.fnfwarnunggesendet AS\n taufgaben_fnfwarnunggesendet, tprojekte.fnfgesamtaufwand AS\n tprojekte_fnfgesamtaufwand, tprojekte.fnf_netto_stunden AS\n tprojekte_fnf_netto_stunden, tprojekte.fnf_brutto_stunden AS\n tprojekte_fnf_brutto_stunden, tprojekte.fnfhinweisgesendet AS\n tprojekte_fnfhinweisgesendet, tprojekte.fnfwarnunggesendet AS\n 
tprojekte_fnfwarnunggesendet, taufgaben.fhatzeiten AS taufgaben_fhatzeiten,\n tprojekte.fhatzeiten AS tprojekte_fhatzeiten,\n taufgaben.fnichtpublicrechnungsfaehig AS\n taufgaben_fnichtpublicrechnungsfaehig,\n taufgaben.fnichtpublicrechnungsfaehigbetrag AS\n taufgaben_fnichtpublicrechnungsfaehigbetrag, taufgaben.fnichtberechenbar AS\n taufgaben_fnichtberechenbar, taufgaben.fnichtberechenbarbetrag AS\n taufgaben_fnichtberechenbarbetrag, tprojekte.fnichtpublicrechnungsfaehig AS\n tprojekte_fnichtpublicrechnungsfaehig,\n tprojekte.fnichtpublicrechnungsfaehigbetrag AS\n tprojekte_fnichtpublicrechnungsfaehigbetrag, tprojekte.fnichtberechenbar AS\n tprojekte_fnichtberechenbar, tprojekte.fnichtberechenbarbetrag AS\n tprojekte_fnichtberechenbarbetrag, taufgaben.finternertester AS\n taufgaben_finternertester, taufgaben.finterngetestet AS\n taufgaben_finterngetestet, tkunden_kst.fbezeichnung AS tkunden_kst_name,\n tkunden.fname AS tkunden_name, tabteilungen.fname AS tabteilungen_fname,\n tkostenstellen.fnummer AS tkostenstellen_fnummer, tkostentraeger.fnummer AS\n tkostentraeger_fnummer, taufgaben.fanzahlbearbeiter AS\n taufgaben_fanzahlbearbeiter, patchdaten.faufgaben_id AS pataid\nFROM\ntaufgaben_mitarbeiter am\nleft join\n\n\n ((((((((taufgaben LEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ORDER BY taufgaben_patches.faufgaben_id\n ) patchdaten ON ((taufgaben.fid = patchdaten.faufgaben_id))) JOIN tprojekte\n ON ((taufgaben.fprojekt_id = tprojekte.fid))) JOIN tuser ON\n ((tprojekte.fprojektleiter_id = tuser.fid))) JOIN tkunden_kst ON\n ((tprojekte.fkunden_kst_id = tkunden_kst.fid))) JOIN tkunden ON\n ((tkunden_kst.fkunden_id = tkunden.fid))) JOIN tkostentraeger ON\n ((tprojekte.fkostentraeger_id = tkostentraeger.fid))) JOIN\n tkostenstellen ON ((tkostentraeger.fkostenstellen_id =\n tkostenstellen.fid))) JOIN tabteilungen ON\n ((tkostenstellen.fabteilungen_id = tabteilungen.fid)))\non taufgaben.fid = am.faufgaben_id\nwhere\nam.fmitarbeiter_id = 54\nand\ntaufgaben.fbearbeitungsstatus <> 2\n\n\"Merge Join (cost=1729.11..1837.08 rows=1 width=2541) (actual time=531.000..3125.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Nested Loop (cost=1621.51..1729.28 rows=6 width=2541) (actual time=328.000..3125.000 rows=1118 loops=1)\"\n\" Join Filter: (\"outer\".fprojekt_id = \"inner\".fid)\"\n\" -> Merge Left Join (cost=1490.70..1497.67 rows=1120 width=1047) (actual time=172.000..220.000 rows=1118 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=1211.46..1214.26 rows=1120 width=1043) (actual time=109.000..109.000 rows=1118 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Seq Scan on taufgaben (cost=0.00..853.88 rows=1120 width=1043) (actual time=0.000..109.000 rows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort (cost=279.23..279.73 rows=200 width=4) (actual time=63.000..63.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten (cost=0.00..271.59 rows=200 width=4) (actual time=0.000..31.000 rows=4773 loops=1)\"\n\" -> Unique (cost=0.00..269.59 rows=200 width=4) (actual time=0.000..31.000 rows=4773 loops=1)\"\n\" -> Index Scan using idx_aufpa_aufgabeid on taufgaben_patches (cost=0.00..253.74 rows=6340 width=4) (actual time=0.000..0.000 rows=6340 loops=1)\"\n\" -> Materialize (cost=130.81..130.85 rows=4 width=1494) (actual time=0.140..0.877 rows=876 loops=1118)\"\n\" -> Merge Join (cost=130.53..130.81 rows=4 width=1494) (actual 
time=156.000..203.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Sort (cost=127.06..127.08 rows=6 width=1455) (actual time=156.000..156.000 rows=876 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Merge Join (cost=126.35..126.99 rows=6 width=1455) (actual time=109.000..140.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fprojektleiter_id = \"inner\".fid)\"\n\" -> Sort (cost=118.57..118.59 rows=9 width=580) (actual time=109.000..109.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fprojektleiter_id\"\n\" -> Merge Join (cost=117.89..118.43 rows=9 width=580) (actual time=62.000..93.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_kst_id = \"inner\".fid)\"\n\" -> Sort (cost=114.61..114.69 rows=31 width=508) (actual time=62.000..62.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fkunden_kst_id\"\n\" -> Merge Join (cost=109.11..113.84 rows=31 width=508) (actual time=31.000..62.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkostentraeger_id)\"\n\" -> Sort (cost=13.40..13.42 rows=7 width=162) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" Sort Key: tkostentraeger.fid\"\n\" -> Merge Join (cost=12.41..13.31 rows=7 width=162) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkostenstellen_id)\"\n\" -> Sort (cost=3.06..3.08 rows=7 width=119) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Sort Key: tkostenstellen.fid\"\n\" -> Merge Join (cost=2.76..2.96 rows=7 width=119) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Merge Cond: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\" -> Sort (cost=1.59..1.64 rows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Sort Key: tkostenstellen.fabteilungen_id\"\n\" -> Seq Scan on tkostenstellen (cost=0.00..1.19 rows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" -> Sort (cost=1.17..1.19 rows=7 width=76) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Sort Key: tabteilungen.fid\"\n\" -> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 width=76) (actual time=0.000..0.000 rows=7 loops=1)\"\n\" -> Sort (cost=9.35..9.74 rows=158 width=55) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" Sort Key: tkostentraeger.fkostenstellen_id\"\n\" -> Seq Scan on tkostentraeger (cost=0.00..3.58 rows=158 width=55) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" -> Sort (cost=95.71..97.90 rows=878 width=354) (actual time=31.000..31.000 rows=877 loops=1)\"\n\" Sort Key: tprojekte.fkostentraeger_id\"\n\" -> Seq Scan on tprojekte (cost=0.00..52.78 rows=878 width=354) (actual time=0.000..31.000 rows=878 loops=1)\"\n\" -> Sort (cost=3.28..3.42 rows=58 width=80) (actual time=0.000..0.000 rows=892 loops=1)\"\n\" Sort Key: tkunden_kst.fid\"\n\" -> Seq Scan on tkunden_kst (cost=0.00..1.58 rows=58 width=80) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" -> Sort (cost=7.78..8.05 rows=109 width=883) (actual time=0.000..0.000 rows=950 loops=1)\"\n\" Sort Key: tuser.fid\"\n\" -> Seq Scan on tuser (cost=0.00..4.09 rows=109 width=883) (actual time=0.000..0.000 rows=109 loops=1)\"\n\" -> Sort (cost=3.46..3.56 rows=40 width=51) (actual time=0.000..0.000 rows=887 loops=1)\"\n\" Sort Key: tkunden.fid\"\n\" -> Seq Scan on tkunden (cost=0.00..2.40 rows=40 width=51) (actual time=0.000..0.000 rows=40 loops=1)\"\n\" -> Sort (cost=107.60..107.69 rows=35 width=4) (actual time=0.000..0.000 rows=765 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am (cost=0.00..106.70 rows=35 width=4) (actual 
time=0.000..0.000 rows=765 loops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\"Total runtime: 3125.000 ms\"\n\n\n############################################################################\nset enable_nestloop to off;\nset enable_seqscan to off;\n\n\n\"Merge Join (cost=4230.83..4231.04 rows=1 width=2541) (actual time=485.000..500.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=4123.23..4123.24 rows=6 width=2541) (actual time=469.000..485.000 rows=1118 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Merge Join (cost=4117.47..4123.15 rows=6 width=2541) (actual time=297.000..406.000 rows=1120 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fprojekt_id)\"\n\" -> Sort (cost=263.53..263.54 rows=4 width=1494) (actual time=141.000..141.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fid\"\n\" -> Merge Join (cost=247.95..263.49 rows=4 width=1494) (actual time=94.000..109.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fprojektleiter_id = \"inner\".fid)\"\n\" -> Sort (cost=247.95..247.96 rows=7 width=619) (actual time=94.000..94.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fprojektleiter_id\"\n\" -> Merge Join (cost=246.86..247.85 rows=7 width=619) (actual time=47.000..78.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkostentraeger_id = \"inner\".fid)\"\n\" -> Sort (cost=222.01..222.45 rows=176 width=465) (actual time=47.000..47.000 rows=878 loops=1)\"\n\" Sort Key: tprojekte.fkostentraeger_id\"\n\" -> Merge Join (cost=20.63..215.44 rows=176 width=465) (actual time=0.000..32.000 rows=878 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkunden_kst_id)\"\n\" -> Sort (cost=20.63..20.73 rows=40 width=119) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" Sort Key: tkunden_kst.fid\"\n\" -> Merge Join (cost=8.34..19.57 rows=40 width=119) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Sort (cost=8.34..8.48 rows=58 width=80) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Index Scan using pk__kunden_kst__30c33ec3 on tkunden_kst (cost=0.00..6.64 rows=58 width=80) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" -> Index Scan using tkunden_tbl_kunden_pk on tkunden (cost=0.00..10.44 rows=40 width=51) (actual time=0.000..0.000 rows=59 loops=1)\"\n\" -> Index Scan using idx_kunden_kst_id on tprojekte (cost=0.00..190.66 rows=878 width=354) (actual time=0.000..0.000 rows=878 loops=1)\"\n\" -> Sort (cost=24.86..24.87 rows=7 width=162) (actual time=0.000..0.000 rows=923 loops=1)\"\n\" Sort Key: tkostentraeger.fid\"\n\" -> Merge Join (cost=12.52..24.76 rows=7 width=162) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkostenstellen_id)\"\n\" -> Sort (cost=12.52..12.54 rows=7 width=119) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Sort Key: tkostenstellen.fid\"\n\" -> Merge Join (cost=0.00..12.42 rows=7 width=119) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" Merge Cond: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\" -> Index Scan using abteilungkostenstellen on tkostenstellen (cost=0.00..6.21 rows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" -> Index Scan using fld_id on tabteilungen (cost=0.00..6.08 rows=7 width=76) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" -> Index Scan using idx_kostenstellen_id on tkostentraeger (cost=0.00..11.74 rows=158 width=55) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" -> Index Scan using pk_tuser on tuser (cost=0.00..15.20 
rows=109 width=883) (actual time=0.000..0.000 rows=950 loops=1)\"\n\" -> Sort (cost=3853.94..3856.74 rows=1120 width=1047) (actual time=156.000..156.000 rows=1120 loops=1)\"\n\" Sort Key: taufgaben.fprojekt_id\"\n\" -> Merge Left Join (cost=279.23..3496.35 rows=1120 width=1047) (actual time=47.000..156.000 rows=1120 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Index Scan using idx_taufgaben_fid on taufgaben (cost=0.00..3212.95 rows=1120 width=1043) (actual time=0.000..31.000 rows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort (cost=279.23..279.73 rows=200 width=4) (actual time=47.000..47.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten (cost=0.00..271.59 rows=200 width=4) (actual time=0.000..31.000 rows=4773 loops=1)\"\n\" -> Unique (cost=0.00..269.59 rows=200 width=4) (actual time=0.000..31.000 rows=4773 loops=1)\"\n\" -> Index Scan using idx_aufpa_aufgabeid on taufgaben_patches (cost=0.00..253.74 rows=6340 width=4) (actual time=0.000..16.000 rows=6340 loops=1)\"\n\" -> Sort (cost=107.60..107.69 rows=35 width=4) (actual time=0.000..0.000 rows=765 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am (cost=0.00..106.70 rows=35 width=4) (actual time=0.000..0.000 rows=765 loops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\"Total runtime: 500.000 ms\"\n\n\n",
"msg_date": "Fri, 13 May 2005 00:32:37 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize complex join to use where condition before join"
},
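What Sebastian is asking for is control over join order: apply the fmitarbeiter_id filter first, then join the remaining tables to the survivors. One knob 8.0 offers for that is join_collapse_limit; at 1 the planner keeps explicit JOINs in exactly the order they are written. The reduced query below is only a sketch of the idea with a handful of the columns and tables from the full query, not a drop-in replacement, and whether it actually helps needs its own EXPLAIN ANALYZE:

-- Make the planner follow the textual join order (session-local; the default is 8).
SET join_collapse_limit = 1;

-- With the selective table written first, the later joins only see the rows
-- that survive the fmitarbeiter_id filter instead of all of taufgaben.
EXPLAIN ANALYZE
SELECT taufgaben.fid, tprojekte.fbezeichnung
FROM taufgaben_mitarbeiter am
JOIN taufgaben ON taufgaben.fid = am.faufgaben_id
JOIN tprojekte ON tprojekte.fid = taufgaben.fprojekt_id
WHERE am.fmitarbeiter_id = 54
AND taufgaben.fbearbeitungsstatus <> 2;

SET join_collapse_limit TO DEFAULT;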
{
"msg_contents": "Solution to my problem.\nI added indexes to each foreign_key (there had been some missing). I\nwill try tomorrow by daylight what influence this had actually. Only the\nindexes did not change anything! Even with lower random_page_costs and\nhigher shared mem.\n\nThe big change was the following\nI created a view which holds a part of the query. The part is the nested\njoin I am doing from rpojekt, tkunden_kst, ....\nSee below\n\nThan I changed my query to include the view which improved the\nperformance from 3000 to 450 ms which is quite good now.\n\nBut I am having two more question\na) ###############\nI estimated the theoretical speed a little bit higher.\nThe query without joining the view takes about 220 ms. A query to the\nview with a condition projekt_id in ( x,y,z), beeing x,y,z all the\nprojekt I got with the first query, takes 32 ms.\nSo my calculation is\nquery a 220\nquery b to view with project in ... 32\n= 252 ms\n+ some time to add the adequate row from query b to one of the 62 rows\nfrom query a\nThis sometime seems to be quite high with 200 ms\n\nor alternative\nquery a 220 ms\nfor each of the 62 rows a query to the view with project_id = x\n220\n62*2 ms\n= 344 ms + some time to assemble all this.\n=> 100 ms for assembling. This is quite a lot or am I wrong\n\nb) ###################\nMy query does take about 200 ms. Most of the time is taken by the\nfollowing part\nLEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ORDER BY taufgaben_patches.faufgaben_id\n ) patchdaten ON taufgaben.fid = patchdaten.faufgaben_id\n\nWhat I want to achieve is one column in my query beeing null or not null\nand indicating if there is a patch which includes the aufgabe (engl.: task)\nIs there a better way?\n\n-- \nKind Regards / Viele Grᅵᅵe\n\nSebastian Hennebrueder\n\n-----\nhttp://www.laliluna.de/tutorials.html\nTutorials for Java, Struts, JavaServer Faces, JSP, Hibernate, EJB and more.\n\n##################\n\nBelow you can find\nquery solution I found\nexplain analyze of the complete query (my solution)\nexplain analyze of query a\nexplain analyze of view with one project_id as condition\n\n\nexplain analyze\nSELECT taufgaben.fid AS taufgaben_fid,\ntaufgaben.fprojekt_id AS taufgaben_fprojekt_id,\ntaufgaben.fnummer AS taufgaben_fnummer,\n taufgaben.fbudget AS taufgaben_fbudget,\n taufgaben.ftyp AS taufgaben_ftyp,\n taufgaben.fberechnungsart AS taufgaben_fberechnungsart,\n taufgaben.fverrechnung_extern AS taufgaben_fverrechnung_extern,\n taufgaben.fverrechnungsbasis AS taufgaben_fverrechnungsbasis,\n taufgaben.fstatus AS taufgaben_fstatus, taufgaben.fkurzbeschreibung AS\n taufgaben_fkurzbeschreibung, taufgaben.fansprechpartner AS\n taufgaben_fansprechpartner, taufgaben.fanforderer AS\ntaufgaben_fanforderer,\n taufgaben.fstandort_id AS taufgaben_fstandort_id,\ntaufgaben.fwunschtermin\n AS taufgaben_fwunschtermin, taufgaben.fstarttermin AS\n taufgaben_fstarttermin, taufgaben.fgesamtaufwand AS\n taufgaben_fgesamtaufwand, taufgaben.fistaufwand AS\ntaufgaben_fistaufwand,\n taufgaben.fprio AS taufgaben_fprio, taufgaben.ftester AS\ntaufgaben_ftester,\n taufgaben.ffaellig AS taufgaben_ffaellig, taufgaben.flevel AS\n taufgaben_flevel, taufgaben.fkategorie AS taufgaben_fkategorie,\n taufgaben.feintragbearbeitung AS taufgaben_feintragbearbeitung,\n taufgaben.fbearbeitungsstatus AS taufgaben_fbearbeitungsstatus,\n taufgaben.fsolllimit AS taufgaben_fsolllimit, taufgaben.fistlimit AS\n taufgaben_fistlimit, taufgaben.fpauschalbetrag AS\n 
taufgaben_fpauschalbetrag, taufgaben.frechnungslaeufe_id AS\n taufgaben_frechnungslaeufe_id, taufgaben.fzuberechnen AS\n taufgaben_fzuberechnen,\ntaufgaben.floesungsbeschreibung AS\n taufgaben_floesungsbeschreibung, taufgaben.ffehlerbeschreibung AS\n taufgaben_ffehlerbeschreibung, taufgaben.faufgabenstellung AS\n taufgaben_faufgabenstellung, taufgaben.fkritischeaenderungen AS\n taufgaben_fkritischeaenderungen, taufgaben.fbdeaufgabenersteller_id AS\n taufgaben_fbdeaufgabenersteller_id, taufgaben.fzufaktorieren AS\n taufgaben_fzufaktorieren,\n taufgaben.fisdirty AS taufgaben_fisdirty,\n taufgaben.fnf_kunde_stunden AS taufgaben_fnf_kunde_stunden,\n taufgaben.fzf_kunde_stunden AS taufgaben_fzf_kunde_stunden,\n taufgaben.fbf_kunde_stunden AS taufgaben_fbf_kunde_stunden,\n taufgaben.fnf_kunde_betrag AS taufgaben_fnf_kunde_betrag,\n taufgaben.fzf_kunde_betrag AS taufgaben_fzf_kunde_betrag,\n taufgaben.fbf_kunde_betrag AS taufgaben_fbf_kunde_betrag,\n taufgaben.fgesamt_brutto_stunden AS taufgaben_fgesamt_brutto_stunden,\n taufgaben.fgesamt_brutto_betrag AS taufgaben_fgesamt_brutto_betrag,\n taufgaben.fhinweisgesendet AS taufgaben_fhinweisgesendet,\n taufgaben.fwarnunggesendet AS taufgaben_fwarnunggesendet,\n taufgaben.fnfgesamtaufwand AS\n taufgaben_fnfgesamtaufwand, taufgaben.fnf_netto_stunden AS\n taufgaben_fnf_netto_stunden, taufgaben.fnf_brutto_stunden AS\n taufgaben_fnf_brutto_stunden, taufgaben.fnfhinweisgesendet AS\n taufgaben_fnfhinweisgesendet, taufgaben.fnfwarnunggesendet AS\n taufgaben_fnfwarnunggesendet,\ntaufgaben.fhatzeiten AS taufgaben_fhatzeiten,\n taufgaben.fnichtpublicrechnungsfaehig AS\n taufgaben_fnichtpublicrechnungsfaehig,\n taufgaben.fnichtpublicrechnungsfaehigbetrag AS\n taufgaben_fnichtpublicrechnungsfaehigbetrag,\ntaufgaben.fnichtberechenbar AS\n taufgaben_fnichtberechenbar, taufgaben.fnichtberechenbarbetrag AS\n taufgaben_fnichtberechenbarbetrag,\n taufgaben.finternertester AS\n taufgaben_finternertester, taufgaben.finterngetestet AS\n taufgaben_finterngetestet,\ntaufgaben.fanzahlbearbeiter AS taufgaben_fanzahlbearbeiter,\n\n patchdaten.faufgaben_id AS pataid\n , vprojekt.*\nFROM\ntaufgaben\nLEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ORDER BY taufgaben_patches.faufgaben_id\n ) patchdaten ON taufgaben.fid = patchdaten.faufgaben_id\nleft join taufgaben_mitarbeiter am on taufgaben.fid = am.faufgaben_id\njoin vprojekt on taufgaben.fprojekt_id = vprojekt.tprojekte_fid\nwhere\nam.fmitarbeiter_id = 54\nand\ntaufgaben.fbearbeitungsstatus <> 2\n\n;\nand got the following:\n\n\"Merge Join (cost=1739.31..1739.38 rows=1 width=2541) (actual\ntime=438.000..454.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fprojekt_id = \"inner\".fid)\"\n\" -> Sort (cost=1608.41..1608.43 rows=7 width=1047) (actual\ntime=235.000..235.000 rows=62 loops=1)\"\n\" Sort Key: taufgaben.fprojekt_id\"\n\" -> Merge Join (cost=1598.30..1608.31 rows=7 width=1047)\n(actual time=172.000..235.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Merge Left Join (cost=1490.70..1497.67 rows=1120\nwidth=1047) (actual time=157.000..235.000 rows=1118 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=1211.46..1214.26 rows=1120\nwidth=1043) (actual time=94.000..94.000 rows=1118 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Seq Scan on taufgaben (cost=0.00..853.88\nrows=1120 width=1043) (actual time=0.000..94.000 rows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort 
(cost=279.23..279.73 rows=200 width=4)\n(actual time=63.000..63.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten\n(cost=0.00..271.59 rows=200 width=4) (actual time=0.000..16.000\nrows=4773 loops=1)\"\n\" -> Unique (cost=0.00..269.59 rows=200\nwidth=4) (actual time=0.000..16.000 rows=4773 loops=1)\"\n\" -> Index Scan using\nidx_aufpa_aufgabeid on taufgaben_patches (cost=0.00..253.74 rows=6340\nwidth=4) (actual time=0.000..16.000 rows=6340 loops=1)\"\n\" -> Sort (cost=107.60..107.69 rows=35 width=4) (actual\ntime=0.000..0.000 rows=765 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on\ntaufgaben_mitarbeiter am (cost=0.00..106.70 rows=35 width=4) (actual\ntime=0.000..0.000 rows=765 loops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\" -> Sort (cost=130.90..130.91 rows=6 width=1494) (actual\ntime=203.000..203.000 rows=916 loops=1)\"\n\" Sort Key: tprojekte.fid\"\n\" -> Merge Join (cost=130.53..130.82 rows=6 width=1494) (actual\ntime=156.000..203.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Sort (cost=127.06..127.08 rows=6 width=1455) (actual\ntime=156.000..156.000 rows=876 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Merge Join (cost=126.35..126.99 rows=6\nwidth=1455) (actual time=125.000..156.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fprojektleiter_id =\n\"inner\".fid)\"\n\" -> Sort (cost=118.57..118.59 rows=9\nwidth=580) (actual time=109.000..109.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fprojektleiter_id\"\n\" -> Merge Join (cost=117.89..118.43\nrows=9 width=580) (actual time=62.000..94.000 rows=876 loops=1)\"\n\" Merge Cond:\n(\"outer\".fkunden_kst_id = \"inner\".fid)\"\n\" -> Sort (cost=114.61..114.69\nrows=31 width=508) (actual time=62.000..62.000 rows=876 loops=1)\"\n\" Sort Key:\ntprojekte.fkunden_kst_id\"\n\" -> Merge Join\n(cost=109.11..113.84 rows=31 width=508) (actual time=31.000..62.000\nrows=876 loops=1)\"\n\" Merge Cond:\n(\"outer\".fid = \"inner\".fkostentraeger_id)\"\n\" -> Sort\n(cost=13.40..13.42 rows=7 width=162) (actual time=0.000..0.000 rows=158\nloops=1)\"\n\" Sort Key:\ntkostentraeger.fid\"\n\" -> Merge Join\n(cost=12.41..13.31 rows=7 width=162) (actual time=0.000..0.000 rows=158\nloops=1)\"\n\" Merge\nCond: (\"outer\".fid = \"inner\".fkostenstellen_id)\"\n\" -> Sort\n(cost=3.06..3.08 rows=7 width=119) (actual time=0.000..0.000 rows=19\nloops=1)\"\n\"\nSort Key: tkostenstellen.fid\"\n\" ->\nMerge Join (cost=2.76..2.96 rows=7 width=119) (actual time=0.000..0.000\nrows=19 loops=1)\"\n\"\nMerge Cond: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\"\n-> Sort (cost=1.59..1.64 rows=19 width=55) (actual time=0.000..0.000\nrows=19 loops=1)\"\n\" \n\nSort Key: tkostenstellen.fabteilungen_id\"\n\" \n\n-> Seq Scan on tkostenstellen (cost=0.00..1.19 rows=19 width=55)\n(actual time=0.000..0.000 rows=19 loops=1)\"\n\"\n-> Sort (cost=1.17..1.19 rows=7 width=76) (actual time=0.000..0.000\nrows=19 loops=1)\"\n\" \n\nSort Key: tabteilungen.fid\"\n\" \n\n-> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 width=76) (actual\ntime=0.000..0.000 rows=7 loops=1)\"\n\" -> Sort\n(cost=9.35..9.74 rows=158 width=55) (actual time=0.000..0.000 rows=158\nloops=1)\"\n\"\nSort Key: tkostentraeger.fkostenstellen_id\"\n\" ->\nSeq Scan on tkostentraeger (cost=0.00..3.58 rows=158 width=55) (actual\ntime=0.000..0.000 rows=158 loops=1)\"\n\" -> Sort\n(cost=95.71..97.90 rows=878 width=354) (actual time=31.000..46.000\nrows=877 loops=1)\"\n\" Sort 
Key:\ntprojekte.fkostentraeger_id\"\n\" -> Seq Scan on\ntprojekte (cost=0.00..52.78 rows=878 width=354) (actual\ntime=0.000..31.000 rows=878 loops=1)\"\n\" -> Sort (cost=3.28..3.42\nrows=58 width=80) (actual time=0.000..0.000 rows=892 loops=1)\"\n\" Sort Key: tkunden_kst.fid\"\n\" -> Seq Scan on\ntkunden_kst (cost=0.00..1.58 rows=58 width=80) (actual\ntime=0.000..0.000 rows=58 loops=1)\"\n\" -> Sort (cost=7.78..8.05 rows=109\nwidth=883) (actual time=16.000..16.000 rows=950 loops=1)\"\n\" Sort Key: tuser.fid\"\n\" -> Seq Scan on tuser (cost=0.00..4.09\nrows=109 width=883) (actual time=0.000..0.000 rows=109 loops=1)\"\n\" -> Sort (cost=3.46..3.56 rows=40 width=51) (actual\ntime=0.000..0.000 rows=887 loops=1)\"\n\" Sort Key: tkunden.fid\"\n\" -> Seq Scan on tkunden (cost=0.00..2.40 rows=40\nwidth=51) (actual time=0.000..0.000 rows=40 loops=1)\"\n\"Total runtime: 454.000 ms\"\n\n\n\n\nCREATE OR REPLACE VIEW \"public\".\"vprojekt\"\nAS\nSELECT tprojekte.fid AS tprojekte_fid, tprojekte.fbezeichnung AS\n tprojekte_fbezeichnung, tprojekte.fprojektnummer AS\n tprojekte_fprojektnummer, tprojekte.fbudget AS tprojekte_fbudget,\n tprojekte.fverrechnung_extern AS tprojekte_fverrechnung_extern,\n tprojekte.fstatus AS tprojekte_fstatus, tprojekte.fkunden_kst_id AS\n tprojekte_fkunden_kst_id, tprojekte.fverrechnungsbasis AS\n tprojekte_fverrechnungsbasis, tprojekte.fberechnungsart AS\n tprojekte_fberechnungsart, tprojekte.fprojekttyp AS\ntprojekte_fprojekttyp,\n tprojekte.fkostentraeger_id AS tprojekte_fkostentraeger_id,\n tprojekte.fprojektleiter_id AS tprojekte_fprojektleiter_id,\n tprojekte.fpauschalsatz AS tprojekte_fpauschalsatz,\n tprojekte.frechnungslaeufe_id AS tprojekte_frechnungslaeufe_id,\n tprojekte.fzuberechnen AS tprojekte_fzuberechnen,\ntprojekte.faufschlagrel\n AS tprojekte_faufschlagrel, tprojekte.faufschlagabs AS\n tprojekte_faufschlagabs, tprojekte.fbearbeitungsstatus AS\n tprojekte_fbearbeitungsstatus, tprojekte.fzufaktorieren AS\n tprojekte_fzufaktorieren, tprojekte.feurobudget AS\ntprojekte_feurobudget,\n tprojekte.fnf_kunde_stunden AS tprojekte_fnf_kunde_stunden,\n tprojekte.fzf_kunde_stunden AS tprojekte_fzf_kunde_stunden,\n tprojekte.fbf_kunde_stunden AS tprojekte_fbf_kunde_stunden,\n tprojekte.fnf_kunde_betrag AS tprojekte_fnf_kunde_betrag,\n tprojekte.fzf_kunde_betrag AS tprojekte_fzf_kunde_betrag,\n tprojekte.fbf_kunde_betrag AS tprojekte_fbf_kunde_betrag,\n tprojekte.fisdirty AS tprojekte_fisdirty,\ntprojekte.fgesamt_brutto_betrag\n AS tprojekte_fgesamt_brutto_betrag, tprojekte.fgesamt_brutto_stunden AS\n tprojekte_fgesamt_brutto_stunden, tprojekte.fgesamt_netto_stunden AS\n tprojekte_fgesamt_netto_stunden, tprojekte.fhinweisgesendet AS\n tprojekte_fhinweisgesendet, tprojekte.fwarnunggesendet AS\n tprojekte_fwarnunggesendet, tprojekte.fnfgesamtaufwand AS\n tprojekte_fnfgesamtaufwand, tprojekte.fnf_netto_stunden AS\n tprojekte_fnf_netto_stunden, tprojekte.fnf_brutto_stunden AS\n tprojekte_fnf_brutto_stunden, tprojekte.fnfhinweisgesendet AS\n tprojekte_fnfhinweisgesendet, tprojekte.fnfwarnunggesendet AS\n tprojekte_fnfwarnunggesendet, tprojekte.fhatzeiten AS\ntprojekte_fhatzeiten,\n tprojekte.fnichtpublicrechnungsfaehig AS\n tprojekte_fnichtpublicrechnungsfaehig,\n tprojekte.fnichtpublicrechnungsfaehigbetrag AS\n tprojekte_fnichtpublicrechnungsfaehigbetrag,\ntprojekte.fnichtberechenbar AS\n tprojekte_fnichtberechenbar, tprojekte.fnichtberechenbarbetrag AS\n tprojekte_fnichtberechenbarbetrag, tuser.fusername AS tuser_fusername,\n tuser.fpassword AS tuser_fpassword, tuser.fvorname 
AS tuser_fvorname,\n tuser.fnachname AS tuser_fnachname, tuser.fismitarbeiter AS\n tuser_fismitarbeiter, tuser.flevel AS tuser_flevel, tuser.fkuerzel AS\n tuser_fkuerzel, tuser.femailadresse AS tuser_femailadresse,\n tkunden_kst.fbezeichnung AS tkunden_kst_name, tkunden.fname AS\n tkunden_name, tabteilungen.fname AS tabteilungen_fname,\n tkostenstellen.fnummer AS tkostenstellen_fnummer,\ntkostentraeger.fnummer AS\n tkostentraeger_fnummer\nFROM ((((((tprojekte JOIN tuser ON ((tprojekte.fprojektleiter_id =\ntuser.fid)))\n JOIN tkunden_kst ON ((tprojekte.fkunden_kst_id = tkunden_kst.fid))) JOIN\n tkunden ON ((tkunden_kst.fkunden_id = tkunden.fid))) JOIN\ntkostentraeger ON\n ((tprojekte.fkostentraeger_id = tkostentraeger.fid))) JOIN\ntkostenstellen\n ON ((tkostentraeger.fkostenstellen_id = tkostenstellen.fid))) JOIN\n tabteilungen ON ((tkostenstellen.fabteilungen_id = tabteilungen.fid)));\n\n\n\n\nquery a\n\n\"Merge Join (cost=1598.30..1608.31 rows=7 width=1047) (actual \ntime=140.000..218.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Merge Left Join (cost=1490.70..1497.67 rows=1120 width=1047) \n(actual time=140.000..218.000 rows=1118 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=1211.46..1214.26 rows=1120 width=1043) (actual \ntime=78.000..78.000 rows=1118 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Seq Scan on taufgaben (cost=0.00..853.88 rows=1120 \nwidth=1043) (actual time=0.000..78.000 rows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort (cost=279.23..279.73 rows=200 width=4) (actual \ntime=62.000..62.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten (cost=0.00..271.59 rows=200 \nwidth=4) (actual time=0.000..32.000 rows=4773 loops=1)\"\n\" -> Unique (cost=0.00..269.59 rows=200 width=4) \n(actual time=0.000..16.000 rows=4773 loops=1)\"\n\" -> Index Scan using idx_aufpa_aufgabeid on \ntaufgaben_patches (cost=0.00..253.74 rows=6340 width=4) (actual \ntime=0.000..0.000 rows=6340 loops=1)\"\n\" -> Sort (cost=107.60..107.69 rows=35 width=4) (actual \ntime=0.000..0.000 rows=765 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on \ntaufgaben_mitarbeiter am (cost=0.00..106.70 rows=35 width=4) (actual \ntime=0.000..0.000 rows=765 loops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\"Total runtime: 218.000 ms\"\n\n\n\nexplain analyze\nSELECT taufgaben.fid AS taufgaben_fid,\ntaufgaben.fprojekt_id AS taufgaben_fprojekt_id,\ntaufgaben.fnummer AS taufgaben_fnummer,\n taufgaben.fbudget AS taufgaben_fbudget,\n taufgaben.ftyp AS taufgaben_ftyp,\n taufgaben.fberechnungsart AS taufgaben_fberechnungsart,\n taufgaben.fverrechnung_extern AS taufgaben_fverrechnung_extern,\n taufgaben.fverrechnungsbasis AS taufgaben_fverrechnungsbasis,\n taufgaben.fstatus AS taufgaben_fstatus, taufgaben.fkurzbeschreibung AS\n taufgaben_fkurzbeschreibung, taufgaben.fansprechpartner AS\n taufgaben_fansprechpartner, taufgaben.fanforderer AS \ntaufgaben_fanforderer,\n taufgaben.fstandort_id AS taufgaben_fstandort_id, \ntaufgaben.fwunschtermin\n AS taufgaben_fwunschtermin, taufgaben.fstarttermin AS\n taufgaben_fstarttermin, taufgaben.fgesamtaufwand AS\n taufgaben_fgesamtaufwand, taufgaben.fistaufwand AS \ntaufgaben_fistaufwand,\n taufgaben.fprio AS taufgaben_fprio, taufgaben.ftester AS \ntaufgaben_ftester,\n taufgaben.ffaellig AS taufgaben_ffaellig, taufgaben.flevel AS\n taufgaben_flevel, taufgaben.fkategorie AS taufgaben_fkategorie,\n 
taufgaben.feintragbearbeitung AS taufgaben_feintragbearbeitung,\n taufgaben.fbearbeitungsstatus AS taufgaben_fbearbeitungsstatus,\n taufgaben.fsolllimit AS taufgaben_fsolllimit, taufgaben.fistlimit AS\n taufgaben_fistlimit, taufgaben.fpauschalbetrag AS\n taufgaben_fpauschalbetrag, taufgaben.frechnungslaeufe_id AS\n taufgaben_frechnungslaeufe_id, taufgaben.fzuberechnen AS\n taufgaben_fzuberechnen,\ntaufgaben.floesungsbeschreibung AS\n taufgaben_floesungsbeschreibung, taufgaben.ffehlerbeschreibung AS\n taufgaben_ffehlerbeschreibung, taufgaben.faufgabenstellung AS\n taufgaben_faufgabenstellung, taufgaben.fkritischeaenderungen AS\n taufgaben_fkritischeaenderungen, taufgaben.fbdeaufgabenersteller_id AS\n taufgaben_fbdeaufgabenersteller_id, taufgaben.fzufaktorieren AS\n taufgaben_fzufaktorieren,\n taufgaben.fisdirty AS taufgaben_fisdirty,\n taufgaben.fnf_kunde_stunden AS taufgaben_fnf_kunde_stunden,\n taufgaben.fzf_kunde_stunden AS taufgaben_fzf_kunde_stunden,\n taufgaben.fbf_kunde_stunden AS taufgaben_fbf_kunde_stunden,\n taufgaben.fnf_kunde_betrag AS taufgaben_fnf_kunde_betrag,\n taufgaben.fzf_kunde_betrag AS taufgaben_fzf_kunde_betrag,\n taufgaben.fbf_kunde_betrag AS taufgaben_fbf_kunde_betrag,\n taufgaben.fgesamt_brutto_stunden AS taufgaben_fgesamt_brutto_stunden,\n taufgaben.fgesamt_brutto_betrag AS taufgaben_fgesamt_brutto_betrag,\n taufgaben.fhinweisgesendet AS taufgaben_fhinweisgesendet,\n taufgaben.fwarnunggesendet AS taufgaben_fwarnunggesendet,\n taufgaben.fnfgesamtaufwand AS\n taufgaben_fnfgesamtaufwand, taufgaben.fnf_netto_stunden AS\n taufgaben_fnf_netto_stunden, taufgaben.fnf_brutto_stunden AS\n taufgaben_fnf_brutto_stunden, taufgaben.fnfhinweisgesendet AS\n taufgaben_fnfhinweisgesendet, taufgaben.fnfwarnunggesendet AS\n taufgaben_fnfwarnunggesendet,\ntaufgaben.fhatzeiten AS taufgaben_fhatzeiten,\n taufgaben.fnichtpublicrechnungsfaehig AS\n taufgaben_fnichtpublicrechnungsfaehig,\n taufgaben.fnichtpublicrechnungsfaehigbetrag AS\n taufgaben_fnichtpublicrechnungsfaehigbetrag, \ntaufgaben.fnichtberechenbar AS\n taufgaben_fnichtberechenbar, taufgaben.fnichtberechenbarbetrag AS\n taufgaben_fnichtberechenbarbetrag,\n taufgaben.finternertester AS\n taufgaben_finternertester, taufgaben.finterngetestet AS\n taufgaben_finterngetestet,\ntaufgaben.fanzahlbearbeiter AS taufgaben_fanzahlbearbeiter\n, patchdaten.faufgaben_id AS pataid\n -- , vprojekt.*\nFROM\ntaufgaben\nLEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ) patchdaten ON taufgaben.fid = patchdaten.faufgaben_id\n\nleft join taufgaben_mitarbeiter am on taufgaben.fid = am.faufgaben_id\n\n--join vprojekt on taufgaben.fprojekt_id = vprojekt.tprojekte_fid\nwhere\nam.fmitarbeiter_id = 54\nand\ntaufgaben.fbearbeitungsstatus <> 2\n\n;\n\n\n\n##########################################################\n\nquery b using the select from the view\n\n\"Nested Loop (cost=0.00..24.44 rows=1 width=1494) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".fprojektleiter_id = \"inner\".fid)\"\n\" -> Nested Loop (cost=0.00..18.98 rows=1 width=619) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\" -> Nested Loop (cost=0.00..17.83 rows=1 width=555) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".fkostenstellen_id = \"inner\".fid)\"\n\" -> Nested Loop (cost=0.00..16.40 rows=1 width=512) \n(actual time=0.000..0.000 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..11.17 rows=1 \nwidth=465) (actual 
time=0.000..0.000 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Nested Loop (cost=0.00..8.27 rows=1 \nwidth=426) (actual time=0.000..0.000 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".fkunden_kst_id = \n\"inner\".fid)\"\n\" -> Index Scan using aaaaaprojekte_pk \non tprojekte (cost=0.00..5.97 rows=1 width=354) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Index Cond: (fid = 2153)\"\n\" -> Seq Scan on tkunden_kst \n(cost=0.00..1.58 rows=58 width=80) (actual time=0.000..0.000 rows=58 \nloops=1)\"\n\" -> Seq Scan on tkunden (cost=0.00..2.40 \nrows=40 width=51) (actual time=0.000..0.000 rows=40 loops=1)\"\n\" -> Index Scan using aaaaakostentraeger_pk on \ntkostentraeger (cost=0.00..5.21 rows=1 width=55) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Index Cond: (\"outer\".fkostentraeger_id = \ntkostentraeger.fid)\"\n\" -> Seq Scan on tkostenstellen (cost=0.00..1.19 rows=19 \nwidth=55) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" -> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 width=76) \n(actual time=0.000..0.000 rows=7 loops=1)\"\n\" -> Seq Scan on tuser (cost=0.00..4.09 rows=109 width=883) (actual \ntime=0.000..0.000 rows=109 loops=1)\"\n\"Total runtime: 0.000 ms\"\n\n\n\nexplain analyze\nSELECT tprojekte.fid AS tprojekte_fid, tprojekte.fbezeichnung AS\n tprojekte_fbezeichnung, tprojekte.fprojektnummer AS\n tprojekte_fprojektnummer, tprojekte.fbudget AS tprojekte_fbudget,\n tprojekte.fverrechnung_extern AS tprojekte_fverrechnung_extern,\n tprojekte.fstatus AS tprojekte_fstatus, tprojekte.fkunden_kst_id AS\n tprojekte_fkunden_kst_id, tprojekte.fverrechnungsbasis AS\n tprojekte_fverrechnungsbasis, tprojekte.fberechnungsart AS\n tprojekte_fberechnungsart, tprojekte.fprojekttyp AS \ntprojekte_fprojekttyp,\n tprojekte.fkostentraeger_id AS tprojekte_fkostentraeger_id,\n tprojekte.fprojektleiter_id AS tprojekte_fprojektleiter_id,\n tprojekte.fpauschalsatz AS tprojekte_fpauschalsatz,\n tprojekte.frechnungslaeufe_id AS tprojekte_frechnungslaeufe_id,\n tprojekte.fzuberechnen AS tprojekte_fzuberechnen, \ntprojekte.faufschlagrel\n AS tprojekte_faufschlagrel, tprojekte.faufschlagabs AS\n tprojekte_faufschlagabs, tprojekte.fbearbeitungsstatus AS\n tprojekte_fbearbeitungsstatus, tprojekte.fzufaktorieren AS\n tprojekte_fzufaktorieren, tprojekte.feurobudget AS \ntprojekte_feurobudget,\n tprojekte.fnf_kunde_stunden AS tprojekte_fnf_kunde_stunden,\n tprojekte.fzf_kunde_stunden AS tprojekte_fzf_kunde_stunden,\n tprojekte.fbf_kunde_stunden AS tprojekte_fbf_kunde_stunden,\n tprojekte.fnf_kunde_betrag AS tprojekte_fnf_kunde_betrag,\n tprojekte.fzf_kunde_betrag AS tprojekte_fzf_kunde_betrag,\n tprojekte.fbf_kunde_betrag AS tprojekte_fbf_kunde_betrag,\n tprojekte.fisdirty AS tprojekte_fisdirty, \ntprojekte.fgesamt_brutto_betrag\n AS tprojekte_fgesamt_brutto_betrag, tprojekte.fgesamt_brutto_stunden AS\n tprojekte_fgesamt_brutto_stunden, tprojekte.fgesamt_netto_stunden AS\n tprojekte_fgesamt_netto_stunden, tprojekte.fhinweisgesendet AS\n tprojekte_fhinweisgesendet, tprojekte.fwarnunggesendet AS\n tprojekte_fwarnunggesendet, tprojekte.fnfgesamtaufwand AS\n tprojekte_fnfgesamtaufwand, tprojekte.fnf_netto_stunden AS\n tprojekte_fnf_netto_stunden, tprojekte.fnf_brutto_stunden AS\n tprojekte_fnf_brutto_stunden, tprojekte.fnfhinweisgesendet AS\n tprojekte_fnfhinweisgesendet, tprojekte.fnfwarnunggesendet AS\n tprojekte_fnfwarnunggesendet, tprojekte.fhatzeiten AS \ntprojekte_fhatzeiten,\n tprojekte.fnichtpublicrechnungsfaehig AS\n tprojekte_fnichtpublicrechnungsfaehig,\n 
tprojekte.fnichtpublicrechnungsfaehigbetrag AS\n tprojekte_fnichtpublicrechnungsfaehigbetrag, \ntprojekte.fnichtberechenbar AS\n tprojekte_fnichtberechenbar, tprojekte.fnichtberechenbarbetrag AS\n tprojekte_fnichtberechenbarbetrag, tuser.fusername AS tuser_fusername,\n tuser.fpassword AS tuser_fpassword, tuser.fvorname AS tuser_fvorname,\n tuser.fnachname AS tuser_fnachname, tuser.fismitarbeiter AS\n tuser_fismitarbeiter, tuser.flevel AS tuser_flevel, tuser.fkuerzel AS\n tuser_fkuerzel, tuser.femailadresse AS tuser_femailadresse,\n tkunden_kst.fbezeichnung AS tkunden_kst_name, tkunden.fname AS\n tkunden_name, tabteilungen.fname AS tabteilungen_fname,\n tkostenstellen.fnummer AS tkostenstellen_fnummer, \ntkostentraeger.fnummer AS\n tkostentraeger_fnummer\nFROM ((((((tprojekte JOIN tuser ON ((tprojekte.fprojektleiter_id = \ntuser.fid)))\n JOIN tkunden_kst ON ((tprojekte.fkunden_kst_id = tkunden_kst.fid))) \nJOIN\n tkunden ON ((tkunden_kst.fkunden_id = tkunden.fid))) JOIN \ntkostentraeger ON\n ((tprojekte.fkostentraeger_id = tkostentraeger.fid))) JOIN \ntkostenstellen\n ON ((tkostentraeger.fkostenstellen_id = tkostenstellen.fid))) JOIN\n tabteilungen ON ((tkostenstellen.fabteilungen_id = tabteilungen.fid)))\n\nwhere tprojekte.fid = 2153\n\n",
"msg_date": "Fri, 13 May 2005 04:14:41 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize complex join to use where condition before"
},
{
"msg_contents": "Solution not found as I thought. I integrated the query in a view and \nthe query plan became very bad once again.\nThe reason is that when I am using the view I have the joins in a \ndiffererent order.\n\nDoes anyone have an idea to solve this.\n\nSebastian\n\na) bad order but the one I have in my application\nexplain analyze\nSELECT taufgaben.fid AS taufgaben_fid, taufgaben.fprojekt_id AS\n taufgaben_fprojekt_id, taufgaben.fnummer AS taufgaben_fnummer,\n taufgaben.fbudget AS taufgaben_fbudget,\n taufgaben.ftyp AS taufgaben_ftyp,\n taufgaben.fberechnungsart AS taufgaben_fberechnungsart,\n taufgaben.fverrechnung_extern AS taufgaben_fverrechnung_extern,\n taufgaben.fverrechnungsbasis AS taufgaben_fverrechnungsbasis,\n taufgaben.fstatus AS taufgaben_fstatus,\n taufgaben.fkurzbeschreibung AS\n taufgaben_fkurzbeschreibung,\n taufgaben.fansprechpartner AS\n taufgaben_fansprechpartner,\n taufgaben.fanforderer AS taufgaben_fanforderer,\n taufgaben.fstandort_id AS taufgaben_fstandort_id,\n taufgaben.fwunschtermin AS taufgaben_fwunschtermin,\n taufgaben.fstarttermin AS taufgaben_fstarttermin,\n taufgaben.fgesamtaufwand AS taufgaben_fgesamtaufwand,\n taufgaben.fistaufwand AS taufgaben_fistaufwand,\n taufgaben.fprio AS taufgaben_fprio,\n taufgaben.ftester AS taufgaben_ftester,\n taufgaben.ffaellig AS taufgaben_ffaellig,\n taufgaben.flevel AS taufgaben_flevel,\n taufgaben.fkategorie AS taufgaben_fkategorie,\n taufgaben.feintragbearbeitung AS taufgaben_feintragbearbeitung,\n taufgaben.fbearbeitungsstatus AS taufgaben_fbearbeitungsstatus,\n taufgaben.fsolllimit AS taufgaben_fsolllimit,\n taufgaben.fistlimit AS taufgaben_fistlimit,\n taufgaben.fpauschalbetrag AS taufgaben_fpauschalbetrag,\n taufgaben.frechnungslaeufe_id AS taufgaben_frechnungslaeufe_id,\n taufgaben.fzuberechnen AS taufgaben_fzuberechnen,\n taufgaben.floesungsbeschreibung AS taufgaben_floesungsbeschreibung,\n taufgaben.ffehlerbeschreibung AS taufgaben_ffehlerbeschreibung,\n taufgaben.faufgabenstellung AS taufgaben_faufgabenstellung,\n taufgaben.fkritischeaenderungen AS taufgaben_fkritischeaenderungen,\n taufgaben.fbdeaufgabenersteller_id AS \ntaufgaben_fbdeaufgabenersteller_id,\n taufgaben.fzufaktorieren AS taufgaben_fzufaktorieren,\n taufgaben.fisdirty AS taufgaben_fisdirty,\n taufgaben.fnf_kunde_stunden AS taufgaben_fnf_kunde_stunden,\n taufgaben.fzf_kunde_stunden AS taufgaben_fzf_kunde_stunden,\n taufgaben.fbf_kunde_stunden AS taufgaben_fbf_kunde_stunden,\n taufgaben.fnf_kunde_betrag AS taufgaben_fnf_kunde_betrag,\n taufgaben.fzf_kunde_betrag AS taufgaben_fzf_kunde_betrag,\n taufgaben.fbf_kunde_betrag AS taufgaben_fbf_kunde_betrag,\n taufgaben.fgesamt_brutto_stunden AS taufgaben_fgesamt_brutto_stunden,\n taufgaben.fgesamt_brutto_betrag AS taufgaben_fgesamt_brutto_betrag,\n taufgaben.fhinweisgesendet AS taufgaben_fhinweisgesendet,\n taufgaben.fwarnunggesendet AS taufgaben_fwarnunggesendet,\n taufgaben.fnfgesamtaufwand AS taufgaben_fnfgesamtaufwand,\n taufgaben.fnf_netto_stunden AS taufgaben_fnf_netto_stunden,\n taufgaben.fnf_brutto_stunden AS taufgaben_fnf_brutto_stunden,\n taufgaben.fnfhinweisgesendet AS taufgaben_fnfhinweisgesendet,\n taufgaben.fnfwarnunggesendet AS taufgaben_fnfwarnunggesendet,\n taufgaben.fhatzeiten AS taufgaben_fhatzeiten,\n taufgaben.fnichtpublicrechnungsfaehig AS \ntaufgaben_fnichtpublicrechnungsfaehig,\n taufgaben.fnichtpublicrechnungsfaehigbetrag AS \ntaufgaben_fnichtpublicrechnungsfaehigbetrag,\n taufgaben.fnichtberechenbar AS taufgaben_fnichtberechenbar,\n taufgaben.fnichtberechenbarbetrag AS 
\ntaufgaben_fnichtberechenbarbetrag,\n taufgaben.finternertester AS taufgaben_finternertester,\n taufgaben.finterngetestet AS taufgaben_finterngetestet,\n taufgaben.fanzahlbearbeiter AS taufgaben_fanzahlbearbeiter,\n patchdaten.faufgaben_id AS pataid\n ,vprojekt.*\nFROM taufgaben LEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ) patchdaten ON taufgaben.fid = patchdaten.faufgaben_id\nJOIN vprojekt ON taufgaben.fprojekt_id = vprojekt.tprojekte_fid\n\njoin taufgaben_mitarbeiter am on taufgaben.fid = am.faufgaben_id\n\nwhere\nam.fmitarbeiter_id = 54\nand\ntaufgaben.fbearbeitungsstatus <> 2\n\n\n\"Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual \ntime=1640.000..3687.000 rows=62 loops=1)\"\n\" Join Filter: (\"inner\".fid = \"outer\".faufgaben_id)\"\n\" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am \n(cost=0.00..80.65 rows=35 width=4) (actual time=0.000..0.000 rows=765 \nloops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\" -> Materialize (cost=1349.13..1349.20 rows=7 width=2541) (actual \ntime=0.531..1.570 rows=1120 loops=765)\"\n\" -> Merge Join (cost=1343.42..1349.13 rows=7 width=2541) \n(actual time=406.000..515.000 rows=1120 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fprojekt_id)\"\n\" -> Sort (cost=130.89..130.90 rows=6 width=1494) (actual \ntime=203.000..203.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fid\"\n\" -> Merge Join (cost=130.52..130.81 rows=6 \nwidth=1494) (actual time=156.000..187.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Sort (cost=127.06..127.07 rows=6 \nwidth=1455) (actual time=156.000..156.000 rows=876 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Merge Join (cost=126.34..126.98 \nrows=6 width=1455) (actual time=109.000..125.000 rows=876 loops=1)\"\n\" Merge Cond: \n(\"outer\".fprojektleiter_id = \"inner\".fid)\"\n\" -> Sort (cost=118.56..118.58 \nrows=9 width=580) (actual time=109.000..109.000 rows=876 loops=1)\"\n\" Sort Key: \ntprojekte.fprojektleiter_id\"\n\" -> Merge Join \n(cost=117.88..118.42 rows=9 width=580) (actual time=62.000..93.000 \nrows=876 loops=1)\"\n\" Merge Cond: \n(\"outer\".fkunden_kst_id = \"inner\".fid)\"\n\" -> Sort \n(cost=114.60..114.68 rows=31 width=508) (actual time=62.000..62.000 \nrows=876 loops=1)\"\n\" Sort Key: \ntprojekte.fkunden_kst_id\"\n\" -> Merge Join \n(cost=109.10..113.84 rows=31 width=508) (actual time=31.000..62.000 \nrows=876 loops=1)\"\n\" Merge \nCond: (\"outer\".fid = \"inner\".fkostentraeger_id)\"\n\" -> Sort \n(cost=13.40..13.41 rows=7 width=162) (actual time=0.000..0.000 rows=158 \nloops=1)\"\n\" \nSort Key: tkostentraeger.fid\"\n\" -> \nMerge Join (cost=3.06..13.30 rows=7 width=162) (actual \ntime=0.000..0.000 rows=158 loops=1)\"\n\" \nMerge Cond: (\"outer\".fkostenstellen_id = \"inner\".fid)\"\n\" \n-> Index Scan using idx_kostenstellen_id on tkostentraeger \n(cost=0.00..9.74 rows=158 width=55) (actual time=0.000..0.000 rows=158 \nloops=1)\"\n\" \n-> Sort (cost=3.06..3.08 rows=7 width=119) (actual time=0.000..0.000 \nrows=158 loops=1)\"\n\" \nSort Key: tkostenstellen.fid\"\n\" \n-> Merge Join (cost=2.76..2.96 rows=7 width=119) (actual \ntime=0.000..0.000 rows=19 loops=1)\"\n\" \nMerge Cond: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\" \n-> Sort (cost=1.59..1.64 rows=19 width=55) (actual time=0.000..0.000 \nrows=19 loops=1)\"\n\" \nSort Key: tkostenstellen.fabteilungen_id\"\n\" \n-> Seq Scan on tkostenstellen (cost=0.00..1.19 rows=19 width=55) \n(actual time=0.000..0.000 
rows=19 loops=1)\"\n\" \n-> Sort (cost=1.17..1.19 rows=7 width=76) (actual time=0.000..0.000 \nrows=19 loops=1)\"\n\" \nSort Key: tabteilungen.fid\"\n\" \n-> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 width=76) (actual \ntime=0.000..0.000 rows=7 loops=1)\"\n\" -> Sort \n(cost=95.71..97.90 rows=878 width=354) (actual time=31.000..46.000 \nrows=877 loops=1)\"\n\" \nSort Key: tprojekte.fkostentraeger_id\"\n\" -> \nSeq Scan on tprojekte (cost=0.00..52.78 rows=878 width=354) (actual \ntime=0.000..15.000 rows=878 loops=1)\"\n\" -> Sort \n(cost=3.28..3.42 rows=58 width=80) (actual time=0.000..0.000 rows=892 \nloops=1)\"\n\" Sort Key: \ntkunden_kst.fid\"\n\" -> Seq Scan on \ntkunden_kst (cost=0.00..1.58 rows=58 width=80) (actual \ntime=0.000..0.000 rows=58 loops=1)\"\n\" -> Sort (cost=7.78..8.05 \nrows=109 width=883) (actual time=0.000..0.000 rows=950 loops=1)\"\n\" Sort Key: tuser.fid\"\n\" -> Seq Scan on tuser \n(cost=0.00..4.09 rows=109 width=883) (actual time=0.000..0.000 rows=109 \nloops=1)\"\n\" -> Sort (cost=3.46..3.56 rows=40 width=51) \n(actual time=0.000..0.000 rows=887 loops=1)\"\n\" Sort Key: tkunden.fid\"\n\" -> Seq Scan on tkunden \n(cost=0.00..2.40 rows=40 width=51) (actual time=0.000..0.000 rows=40 \nloops=1)\"\n\" -> Sort (cost=1212.53..1215.33 rows=1120 width=1047) \n(actual time=203.000..203.000 rows=1120 loops=1)\"\n\" Sort Key: taufgaben.fprojekt_id\"\n\" -> Merge Left Join (cost=1148.83..1155.80 \nrows=1120 width=1047) (actual time=140.000..203.000 rows=1120 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=910.60..913.40 rows=1120 \nwidth=1043) (actual time=78.000..78.000 rows=1120 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Seq Scan on taufgaben \n(cost=0.00..853.88 rows=1120 width=1043) (actual time=0.000..78.000 \nrows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort (cost=238.23..238.73 rows=200 \nwidth=4) (actual time=62.000..93.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten \n(cost=0.00..230.59 rows=200 width=4) (actual time=0.000..47.000 \nrows=4773 loops=1)\"\n\" -> Unique (cost=0.00..228.59 \nrows=200 width=4) (actual time=0.000..0.000 rows=4773 loops=1)\"\n\" -> Index Scan using \nidx_aufpa_aufgabeid on taufgaben_patches (cost=0.00..212.74 rows=6340 \nwidth=4) (actual time=0.000..0.000 rows=6340 loops=1)\"\n\"Total runtime: 3703.000 ms\"\n\n\n\n\ngood order\n\nexplain analyze\nSELECT taufgaben.fid AS taufgaben_fid, taufgaben.fprojekt_id AS\n taufgaben_fprojekt_id, taufgaben.fnummer AS taufgaben_fnummer,\n taufgaben.fbudget AS taufgaben_fbudget,\n taufgaben.ftyp AS taufgaben_ftyp,\n taufgaben.fberechnungsart AS taufgaben_fberechnungsart,\n taufgaben.fverrechnung_extern AS taufgaben_fverrechnung_extern,\n taufgaben.fverrechnungsbasis AS taufgaben_fverrechnungsbasis,\n taufgaben.fstatus AS taufgaben_fstatus,\n taufgaben.fkurzbeschreibung AS\n taufgaben_fkurzbeschreibung,\n taufgaben.fansprechpartner AS\n taufgaben_fansprechpartner,\n taufgaben.fanforderer AS taufgaben_fanforderer,\n taufgaben.fstandort_id AS taufgaben_fstandort_id,\n taufgaben.fwunschtermin AS taufgaben_fwunschtermin,\n taufgaben.fstarttermin AS taufgaben_fstarttermin,\n taufgaben.fgesamtaufwand AS taufgaben_fgesamtaufwand,\n taufgaben.fistaufwand AS taufgaben_fistaufwand,\n taufgaben.fprio AS taufgaben_fprio,\n taufgaben.ftester AS taufgaben_ftester,\n taufgaben.ffaellig AS taufgaben_ffaellig,\n taufgaben.flevel AS taufgaben_flevel,\n taufgaben.fkategorie AS taufgaben_fkategorie,\n 
taufgaben.feintragbearbeitung AS taufgaben_feintragbearbeitung,\n taufgaben.fbearbeitungsstatus AS taufgaben_fbearbeitungsstatus,\n taufgaben.fsolllimit AS taufgaben_fsolllimit,\n taufgaben.fistlimit AS taufgaben_fistlimit,\n taufgaben.fpauschalbetrag AS taufgaben_fpauschalbetrag,\n taufgaben.frechnungslaeufe_id AS taufgaben_frechnungslaeufe_id,\n taufgaben.fzuberechnen AS taufgaben_fzuberechnen,\n taufgaben.floesungsbeschreibung AS taufgaben_floesungsbeschreibung,\n taufgaben.ffehlerbeschreibung AS taufgaben_ffehlerbeschreibung,\n taufgaben.faufgabenstellung AS taufgaben_faufgabenstellung,\n taufgaben.fkritischeaenderungen AS taufgaben_fkritischeaenderungen,\n taufgaben.fbdeaufgabenersteller_id AS \ntaufgaben_fbdeaufgabenersteller_id,\n taufgaben.fzufaktorieren AS taufgaben_fzufaktorieren,\n taufgaben.fisdirty AS taufgaben_fisdirty,\n taufgaben.fnf_kunde_stunden AS taufgaben_fnf_kunde_stunden,\n taufgaben.fzf_kunde_stunden AS taufgaben_fzf_kunde_stunden,\n taufgaben.fbf_kunde_stunden AS taufgaben_fbf_kunde_stunden,\n taufgaben.fnf_kunde_betrag AS taufgaben_fnf_kunde_betrag,\n taufgaben.fzf_kunde_betrag AS taufgaben_fzf_kunde_betrag,\n taufgaben.fbf_kunde_betrag AS taufgaben_fbf_kunde_betrag,\n taufgaben.fgesamt_brutto_stunden AS taufgaben_fgesamt_brutto_stunden,\n taufgaben.fgesamt_brutto_betrag AS taufgaben_fgesamt_brutto_betrag,\n taufgaben.fhinweisgesendet AS taufgaben_fhinweisgesendet,\n taufgaben.fwarnunggesendet AS taufgaben_fwarnunggesendet,\n taufgaben.fnfgesamtaufwand AS taufgaben_fnfgesamtaufwand,\n taufgaben.fnf_netto_stunden AS taufgaben_fnf_netto_stunden,\n taufgaben.fnf_brutto_stunden AS taufgaben_fnf_brutto_stunden,\n taufgaben.fnfhinweisgesendet AS taufgaben_fnfhinweisgesendet,\n taufgaben.fnfwarnunggesendet AS taufgaben_fnfwarnunggesendet,\n taufgaben.fhatzeiten AS taufgaben_fhatzeiten,\n taufgaben.fnichtpublicrechnungsfaehig AS \ntaufgaben_fnichtpublicrechnungsfaehig,\n taufgaben.fnichtpublicrechnungsfaehigbetrag AS \ntaufgaben_fnichtpublicrechnungsfaehigbetrag,\n taufgaben.fnichtberechenbar AS taufgaben_fnichtberechenbar,\n taufgaben.fnichtberechenbarbetrag AS \ntaufgaben_fnichtberechenbarbetrag,\n taufgaben.finternertester AS taufgaben_finternertester,\n taufgaben.finterngetestet AS taufgaben_finterngetestet,\n taufgaben.fanzahlbearbeiter AS taufgaben_fanzahlbearbeiter,\n patchdaten.faufgaben_id AS pataid\n ,vprojekt.*\nFROM taufgaben LEFT JOIN (\n SELECT DISTINCT taufgaben_patches.faufgaben_id\n FROM taufgaben_patches\n ) patchdaten ON taufgaben.fid = patchdaten.faufgaben_id\n\njoin taufgaben_mitarbeiter am on taufgaben.fid = am.faufgaben_id\n\nJOIN vprojekt ON taufgaben.fprojekt_id = vprojekt.tprojekte_fid\n\n\nwhere\nam.fmitarbeiter_id = 54\nand\ntaufgaben.fbearbeitungsstatus <> 2\n\n\"Merge Join (cost=1371.38..1371.45 rows=1 width=2541) (actual \ntime=422.000..438.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fprojekt_id = \"inner\".fid)\"\n\" -> Sort (cost=1240.49..1240.51 rows=7 width=1047) (actual \ntime=219.000..219.000 rows=62 loops=1)\"\n\" Sort Key: taufgaben.fprojekt_id\"\n\" -> Merge Join (cost=1230.38..1240.39 rows=7 width=1047) \n(actual time=157.000..219.000 rows=62 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Merge Left Join (cost=1148.83..1155.80 rows=1120 \nwidth=1047) (actual time=141.000..203.000 rows=1118 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=910.60..913.40 rows=1120 \nwidth=1043) (actual time=94.000..94.000 rows=1118 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" 
-> Seq Scan on taufgaben (cost=0.00..853.88 \nrows=1120 width=1043) (actual time=0.000..94.000 rows=1120 loops=1)\"\n\" Filter: (fbearbeitungsstatus <> 2)\"\n\" -> Sort (cost=238.23..238.73 rows=200 width=4) \n(actual time=47.000..47.000 rows=4773 loops=1)\"\n\" Sort Key: patchdaten.faufgaben_id\"\n\" -> Subquery Scan patchdaten \n(cost=0.00..230.59 rows=200 width=4) (actual time=0.000..47.000 \nrows=4773 loops=1)\"\n\" -> Unique (cost=0.00..228.59 rows=200 \nwidth=4) (actual time=0.000..31.000 rows=4773 loops=1)\"\n\" -> Index Scan using \nidx_aufpa_aufgabeid on taufgaben_patches (cost=0.00..212.74 rows=6340 \nwidth=4) (actual time=0.000..15.000 rows=6340 loops=1)\"\n\" -> Sort (cost=81.54..81.63 rows=35 width=4) (actual \ntime=16.000..16.000 rows=765 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on \ntaufgaben_mitarbeiter am (cost=0.00..80.65 rows=35 width=4) (actual \ntime=0.000..16.000 rows=765 loops=1)\"\n\" Index Cond: (fmitarbeiter_id = 54)\"\n\" -> Sort (cost=130.89..130.90 rows=6 width=1494) (actual \ntime=203.000..203.000 rows=916 loops=1)\"\n\" Sort Key: tprojekte.fid\"\n\" -> Merge Join (cost=130.52..130.81 rows=6 width=1494) (actual \ntime=156.000..203.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fkunden_id = \"inner\".fid)\"\n\" -> Sort (cost=127.06..127.07 rows=6 width=1455) (actual \ntime=156.000..156.000 rows=876 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Merge Join (cost=126.34..126.98 rows=6 \nwidth=1455) (actual time=110.000..141.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fprojektleiter_id = \n\"inner\".fid)\"\n\" -> Sort (cost=118.56..118.58 rows=9 \nwidth=580) (actual time=110.000..110.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fprojektleiter_id\"\n\" -> Merge Join (cost=117.88..118.42 \nrows=9 width=580) (actual time=63.000..94.000 rows=876 loops=1)\"\n\" Merge Cond: \n(\"outer\".fkunden_kst_id = \"inner\".fid)\"\n\" -> Sort (cost=114.60..114.68 \nrows=31 width=508) (actual time=63.000..63.000 rows=876 loops=1)\"\n\" Sort Key: \ntprojekte.fkunden_kst_id\"\n\" -> Merge Join \n(cost=109.10..113.84 rows=31 width=508) (actual time=31.000..63.000 \nrows=876 loops=1)\"\n\" Merge Cond: \n(\"outer\".fid = \"inner\".fkostentraeger_id)\"\n\" -> Sort \n(cost=13.40..13.41 rows=7 width=162) (actual time=0.000..0.000 rows=158 \nloops=1)\"\n\" Sort Key: \ntkostentraeger.fid\"\n\" -> Merge Join \n(cost=3.06..13.30 rows=7 width=162) (actual time=0.000..0.000 rows=158 \nloops=1)\"\n\" Merge \nCond: (\"outer\".fkostenstellen_id = \"inner\".fid)\"\n\" -> Index \nScan using idx_kostenstellen_id on tkostentraeger (cost=0.00..9.74 \nrows=158 width=55) (actual time=0.000..0.000 rows=158 loops=1)\"\n\" -> Sort \n(cost=3.06..3.08 rows=7 width=119) (actual time=0.000..0.000 rows=158 \nloops=1)\"\n\" \nSort Key: tkostenstellen.fid\"\n\" -> \nMerge Join (cost=2.76..2.96 rows=7 width=119) (actual time=0.000..0.000 \nrows=19 loops=1)\"\n\" \nMerge Cond: (\"outer\".fabteilungen_id = \"inner\".fid)\"\n\" \n-> Sort (cost=1.59..1.64 rows=19 width=55) (actual time=0.000..0.000 \nrows=19 loops=1)\"\n\" \nSort Key: tkostenstellen.fabteilungen_id\"\n\" \n-> Seq Scan on tkostenstellen (cost=0.00..1.19 rows=19 width=55) \n(actual time=0.000..0.000 rows=19 loops=1)\"\n\" \n-> Sort (cost=1.17..1.19 rows=7 width=76) (actual time=0.000..0.000 \nrows=19 loops=1)\"\n\" \nSort Key: tabteilungen.fid\"\n\" \n-> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 width=76) (actual \ntime=0.000..0.000 rows=7 loops=1)\"\n\" -> Sort 
\n(cost=95.71..97.90 rows=878 width=354) (actual time=31.000..31.000 \nrows=877 loops=1)\"\n\" Sort Key: \ntprojekte.fkostentraeger_id\"\n\" -> Seq Scan on \ntprojekte (cost=0.00..52.78 rows=878 width=354) (actual \ntime=0.000..31.000 rows=878 loops=1)\"\n\" -> Sort (cost=3.28..3.42 \nrows=58 width=80) (actual time=0.000..0.000 rows=892 loops=1)\"\n\" Sort Key: tkunden_kst.fid\"\n\" -> Seq Scan on \ntkunden_kst (cost=0.00..1.58 rows=58 width=80) (actual \ntime=0.000..0.000 rows=58 loops=1)\"\n\" -> Sort (cost=7.78..8.05 rows=109 \nwidth=883) (actual time=0.000..0.000 rows=950 loops=1)\"\n\" Sort Key: tuser.fid\"\n\" -> Seq Scan on tuser (cost=0.00..4.09 \nrows=109 width=883) (actual time=0.000..0.000 rows=109 loops=1)\"\n\" -> Sort (cost=3.46..3.56 rows=40 width=51) (actual \ntime=0.000..0.000 rows=887 loops=1)\"\n\" Sort Key: tkunden.fid\"\n\" -> Seq Scan on tkunden (cost=0.00..2.40 rows=40 \nwidth=51) (actual time=0.000..0.000 rows=40 loops=1)\"\n\"Total runtime: 438.000 ms\"\n\n\n",
"msg_date": "Fri, 13 May 2005 04:45:50 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize complex join to use where condition before"
},
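One knob that is not mentioned in the thread but is sometimes used for exactly this kind of join-order problem is join_collapse_limit (present since 7.4, so available on 8.0): it controls whether the planner may reorder explicit JOIN syntax, and with the value 1 the joins are executed in the order they are written. A minimal sketch, using table and column names taken from the plans above, only to check whether the written order really is the better one:

  -- Keep explicit JOINs in written order for this session only (a sketch, not a fix).
  SET join_collapse_limit = 1;
  EXPLAIN ANALYZE
  SELECT taufgaben.fid, vprojekt.tprojekte_fid
  FROM taufgaben
    JOIN taufgaben_mitarbeiter am ON taufgaben.fid = am.faufgaben_id
    JOIN vprojekt ON taufgaben.fprojekt_id = vprojekt.tprojekte_fid
  WHERE am.fmitarbeiter_id = 54
    AND taufgaben.fbearbeitungsstatus <> 2;
  RESET join_collapse_limit;

The parameter only removes the planner's freedom to reorder; whether the syntactic order is actually faster still has to be confirmed with EXPLAIN ANALYZE.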
{
"msg_contents": "\nSebastian Hennebrueder <[email protected]> writes:\n\n> User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)\n> ...\n> \n> \"Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual \n> time=1640.000..3687.000 rows=62 loops=1)\"\n> \" Join Filter: (\"inner\".fid = \"outer\".faufgaben_id)\"\n> \" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am \n> (cost=0.00..80.65 rows=35 width=4) (actual time=0.000..0.000 rows=765 \n> loops=1)\"\n\nIs it really Mozilla Thunderbird that's causing this new craptastic mangling\nof plans in people's mails? I was assuming it was some new idea of how to mess\nup people's mail coming out of Exchange or Lotus or some other such \"corporate\nmessaging\" software that only handled SMTP mail as an afterthought. This is,\nuh, disappointing.\n\n-- \ngreg\n\n",
"msg_date": "13 May 2005 01:56:44 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize complex join to use where condition before"
},
{
"msg_contents": "Greg Stark wrote:\n> Sebastian Hennebrueder <[email protected]> writes:\n>\n>\n>>User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)\n>>...\n>>\n>>\"Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual\n>>time=1640.000..3687.000 rows=62 loops=1)\"\n>>\" Join Filter: (\"inner\".fid = \"outer\".faufgaben_id)\"\n>>\" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am\n>>(cost=0.00..80.65 rows=35 width=4) (actual time=0.000..0.000 rows=765\n>>loops=1)\"\n>\n>\n> Is it really Mozilla Thunderbird that's causing this new craptastic mangling\n> of plans in people's mails? I was assuming it was some new idea of how to mess\n> up people's mail coming out of Exchange or Lotus or some other such \"corporate\n> messaging\" software that only handled SMTP mail as an afterthought. This is,\n> uh, disappointing.\n>\n\nAre you talking about the quotes, or just the fact that it is wrapped?\n\nI don't know where the quotes came from, but in Thunderbird if you are\nwriting in text mode (not html) it defaults to wrapping the text at\nsomething like 78 characters. That includes copy/paste text.\n\nIf you want it to *not* wrap, it turns out that \"Paste as quotation\"\nwill not wrap, but then you have to remove the \"> \" from the beginning\nof every line.\n\nIn html mode, it also defaults to wrapping, but if you switch to\nPREFORMAT text first, it doesn't wrap.\n\nAt least, those are the tricks that I've found. Safest bet is to just\nuse an attachment, though.\n\nJohn\n=:->",
"msg_date": "Fri, 13 May 2005 13:32:42 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize complex join to use where condition before"
},
{
"msg_contents": "I found a solution to improve my query. I do not know why but the \nstatistics for all column has been 0.\nI changed this to 10 for index columns and to 20 for all foreign key \ncolumns.\nand to 100 for foreign key columns.\nI set the random page cost to 2\nand now the query runs as expected.\n\nMany thanks to all of the posts in my and in other threads which helped \na lot.\n\nSebastian\n\n\"Merge Join (cost=1325.06..1329.96 rows=6 width=2558) (actual \ntime=344.000..344.000 rows=6 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Sort (cost=1269.57..1271.91 rows=934 width=2541) (actual \ntime=344.000..344.000 rows=773 loops=1)\"\n\" Sort Key: taufgaben.fid\"\n\" -> Merge Join (cost=1205.09..1223.49 rows=934 width=2541) (actual \ntime=219.000..313.000 rows=936 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fprojekt_id)\"\n\" -> Sort (cost=302.08..304.27 rows=876 width=1494) (actual \ntime=156.000..156.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fid\"\n\" -> Merge Join (cost=237.42..259.27 rows=876 width=1494) (actual \ntime=109.000..141.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fprojektleiter_id)\"\n\" -> Index Scan using pk_tuser on tuser (cost=0.00..9.13 rows=109 \nwidth=883) (actual time=0.000..0.000 rows=101 loops=1)\"\n\" -> Sort (cost=237.42..239.61 rows=876 width=619) (actual \ntime=109.000..109.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fprojektleiter_id\"\n\" -> Merge Join (cost=181.17..194.60 rows=876 width=619) (actual \ntime=63.000..94.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkunden_kst_id)\"\n\" -> Sort (cost=9.51..9.66 rows=58 width=119) (actual \ntime=0.000..0.000 rows=58 loops=1)\"\n\" Sort Key: tkunden_kst.fid\"\n\" -> Merge Join (cost=6.74..7.81 rows=58 width=119) (actual \ntime=0.000..0.000 rows=58 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkunden_id)\"\n\" -> Sort (cost=3.46..3.56 rows=40 width=51) (actual \ntime=0.000..0.000 rows=40 loops=1)\"\n\" Sort Key: tkunden.fid\"\n\" -> Seq Scan on tkunden (cost=0.00..2.40 rows=40 width=51) \n(actual time=0.000..0.000 rows=40 loops=1)\"\n\" -> Sort (cost=3.28..3.42 rows=58 width=80) (actual \ntime=0.000..0.000 rows=58 loops=1)\"\n\" Sort Key: tkunden_kst.fkunden_id\"\n\" -> Seq Scan on tkunden_kst (cost=0.00..1.58 rows=58 \nwidth=80) (actual time=0.000..0.000 rows=58 loops=1)\"\n\" -> Sort (cost=171.66..173.85 rows=876 width=508) (actual \ntime=63.000..63.000 rows=876 loops=1)\"\n\" Sort Key: tprojekte.fkunden_kst_id\"\n\" -> Merge Join (cost=114.91..128.85 rows=876 width=508) \n(actual time=31.000..47.000 rows=876 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fkostentraeger_id)\"\n\" -> Sort (cost=19.20..19.60 rows=158 width=162) (actual \ntime=0.000..0.000 rows=158 loops=1)\"\n\" Sort Key: tkostentraeger.fid\"\n\" -> Merge Join (cost=3.49..13.43 rows=158 width=162) \n(actual time=0.000..0.000 rows=158 loops=1)\"\n\" Merge Cond: (\"outer\".fkostenstellen_id = \"inner\".fid)\"\n\" -> Index Scan using idx_kostenstellen_id on \ntkostentraeger (cost=0.00..7.18 rows=158 width=55) (actual \ntime=0.000..0.000 rows=158 loops=1)\"\n\" -> Sort (cost=3.49..3.53 rows=19 width=119) (actual \ntime=0.000..0.000 rows=158 loops=1)\"\n\" Sort Key: tkostenstellen.fid\"\n\" -> Merge Join (cost=2.76..3.08 rows=19 width=119) \n(actual time=0.000..0.000 rows=19 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".fabteilungen_id)\"\n\" -> Sort (cost=1.17..1.19 rows=7 width=76) (actual \ntime=0.000..0.000 rows=7 
loops=1)\"\n\" Sort Key: tabteilungen.fid\"\n\" -> Seq Scan on tabteilungen (cost=0.00..1.07 rows=7 \nwidth=76) (actual time=0.000..0.000 rows=7 loops=1)\"\n\" -> Sort (cost=1.59..1.64 rows=19 width=55) (actual \ntime=0.000..0.000 rows=19 loops=1)\"\n\" Sort Key: tkostenstellen.fabteilungen_id\"\n\" -> Seq Scan on tkostenstellen (cost=0.00..1.19 \nrows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)\"\n\" -> Sort (cost=95.71..97.90 rows=878 width=354) (actual \ntime=31.000..31.000 rows=877 loops=1)\"\n\" Sort Key: tprojekte.fkostentraeger_id\"\n\" -> Seq Scan on tprojekte (cost=0.00..52.78 rows=878 \nwidth=354) (actual time=0.000..31.000 rows=878 loops=1)\"\n\" -> Sort (cost=903.01..905.35 rows=936 width=1047) (actual \ntime=63.000..63.000 rows=936 loops=1)\"\n\" Sort Key: taufgaben.fprojekt_id\"\n\" -> Nested Loop Left Join (cost=0.28..856.82 rows=936 width=1047) \n(actual time=0.000..63.000 rows=936 loops=1)\"\n\" Join Filter: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Index Scan using idx_taufgaben_bstatus on taufgaben \n(cost=0.00..835.47 rows=936 width=1043) (actual time=0.000..0.000 \nrows=936 loops=1)\"\n\" Index Cond: (fbearbeitungsstatus < 2)\"\n\" -> Materialize (cost=0.28..0.29 rows=1 width=4) (actual \ntime=0.000..0.000 rows=1 loops=936)\"\n\" -> Subquery Scan patchdaten (cost=0.00..0.28 rows=1 width=4) \n(actual time=0.000..0.000 rows=1 loops=1)\"\n\" -> Limit (cost=0.00..0.27 rows=1 width=4) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Merge Join (cost=0.00..1706.77 rows=6340 width=4) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" Merge Cond: (\"outer\".fid = \"inner\".faufgaben_id)\"\n\" -> Index Scan using idx_taufgaben_fid on taufgaben \n(cost=0.00..1440.61 rows=6070 width=8) (actual time=0.000..0.000 rows=1 \nloops=1)\"\n\" -> Index Scan using idx_aufpa_aufgabeid on \ntaufgaben_patches (cost=0.00..171.74 rows=6340 width=4) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Sort (cost=55.49..55.57 rows=35 width=17) (actual time=0.000..0.000 \nrows=270 loops=1)\"\n\" Sort Key: am.faufgaben_id\"\n\" -> Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am \n(cost=0.00..54.59 rows=35 width=17) (actual time=0.000..0.000 rows=270 \nloops=1)\"\n\" Index Cond: (fmitarbeiter_id = 58)\"\n\"Total runtime: 344.000 ms\"\n",
"msg_date": "Sat, 14 May 2005 00:07:05 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize complex join to use where condition before"
},
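Spelled out as SQL, the change described above looks roughly like the following sketch. The two columns are taken from the plans in this thread and merely stand in for "an indexed column" and "a foreign key column", so the exact targets and the column list are illustrative only:

  -- Per-column statistics targets (PostgreSQL 8.0 syntax); ANALYZE afterwards,
  -- otherwise the planner never sees the new histograms.
  ALTER TABLE taufgaben ALTER COLUMN fid         SET STATISTICS 10;   -- indexed column
  ALTER TABLE taufgaben ALTER COLUMN fprojekt_id SET STATISTICS 100;  -- foreign key column
  ANALYZE taufgaben;

  -- random_page_cost was lowered from the default of 4 to 2; per session that is:
  SET random_page_cost = 2;

For a server-wide effect the random_page_cost change belongs in postgresql.conf rather than a single session.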
{
"msg_contents": "Sebastian Hennebrueder wrote:\n\n> I found a solution to improve my query. I do not know why but the\n> statistics for all column has been 0.\n> I changed this to 10 for index columns and to 20 for all foreign key\n> columns.\n> and to 100 for foreign key columns.\n> I set the random page cost to 2\n> and now the query runs as expected.\n>\n> Many thanks to all of the posts in my and in other threads which\n> helped a lot.\n>\n> Sebastian\n\n\nI think 0 = use default. But still, changing to 20 and 100 probably\nfixes your problems.\n\nJohn\n=:->",
"msg_date": "Fri, 13 May 2005 17:15:25 -0500",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize complex join to use where condition before"
}
] |
[
{
"msg_contents": "Hello,\n\nI could not find any recommandations for the level of set statistics and \nwhat a specific level does actually mean.\nWhat is the difference between 1, 50 and 100? What is recommanded for a \ntable or column?\n\n-- \nKind Regards / Viele Grᅵᅵe\n\nSebastian Hennebrueder\n\n-----\nhttp://www.laliluna.de/tutorials.html\nTutorials for Java, Struts, JavaServer Faces, JSP, Hibernate, EJB and more.\n\n",
"msg_date": "Fri, 13 May 2005 04:54:16 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommendations for set statistics"
},
{
"msg_contents": "Sebastian Hennebrueder wrote:\n> Hello,\n>\n> I could not find any recommandations for the level of set statistics and\n> what a specific level does actually mean.\n> What is the difference between 1, 50 and 100? What is recommanded for a\n> table or column?\n>\n\nDefault I believe is 10. The higher the number, the more statistics are\nkept, with a maximum of 1000.\n\nThe default is a little bit low for columns used in foreign keys, though\nfrequently it is okay.\nWhen problems start, try setting them to 100 or 200. Higher is more\naccurate, but takes longer to compute, *and* takes longer when planning\nthe optimal query method. It can be worth it, though.\n\nJohn\n=:->",
"msg_date": "Thu, 12 May 2005 22:36:01 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommendations for set statistics"
},
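A short sketch of how to see where a column currently stands; 'some_table' is a placeholder, not a name from the thread:

  SHOW default_statistics_target;     -- 10 unless changed in postgresql.conf
  SELECT attname, attstattarget       -- -1 means "fall back to the default above"
  FROM pg_attribute
  WHERE attrelid = 'some_table'::regclass
    AND attnum > 0 AND NOT attisdropped;

Raising a column is then ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100, followed by an ANALYZE of the table.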
{
"msg_contents": "After a long battle with technology, [email protected] (Sebastian Hennebrueder), an earthling, wrote:\n> I could not find any recommandations for the level of set statistics\n> and what a specific level does actually mean.\n> What is the difference between 1, 50 and 100? What is recommanded for\n> a table or column?\n\nThe numbers represent the numbers of \"bins\" used to establish\nhistograms that estimate how the data looks.\n\nThe default is to have 10 bins, and 300 items are sampled at ANALYZE\ntime per bin.\n\n1 would probably be rather bad, having very little ability to express\nthe distribution of data. 100 bins would be 10x as expensive to\nstore than 10, but would provide a much distribution.\n\nIt is widely believed that a somewhat larger default than 10 would be\na \"good thing,\" as it seems to be fairly common for 10 to be too small\nto allow statistics to be stable. But nobody has done any formal\nevaluation as to whether it would make sense to jump from 10 to:\n\n - 15?\n - 20?\n - 50?\n - 100?\n - More than that?\n\nIf we could show that 90% of the present \"wrong results\" that come\nfrom the default of 10 could be addressed by an increase to 20 bins,\nand the remainder could be left to individual tuning, well, getting\nrid of 90% of the \"query plan errors\" would seem worthwhile.\n\nI'd hope that a moderate (e.g. - from 10 to 20) increase, which would\nbe pretty cheap, would help a fair bit, but there is no evidence one\nway or the other. Unfortunately, nobody has come up with a decent way\nof evaluating how much good a change to the default would actually do.\n\nIf you can discover an evaluation scheme, your results are likely to\nget an ear.\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nhttp://linuxdatabases.info/info/lsf.html\n\"In 1555, Nostradamus wrote: 'Come the millennium, month 12, in the\nhome of greatest power, the village idiot will come forth to be\nacclaimed the leader.'\"\n",
"msg_date": "Fri, 13 May 2005 00:16:24 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommendations for set statistics"
},
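The bins Christopher describes end up in the pg_stats view, so their effect can be inspected directly; a minimal sketch with placeholder table and column names:

  -- After ANALYZE, look at what was actually sampled for one column.
  SELECT n_distinct, most_common_vals, histogram_bounds
  FROM pg_stats
  WHERE tablename = 'some_table' AND attname = 'some_column';

With the default target of 10 the histogram_bounds array stays short; after raising the target and re-running ANALYZE it grows accordingly, which is an easy way to confirm the setting took effect.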
{
"msg_contents": "Chris,\n\n> It is widely believed that a somewhat larger default than 10 would be\n> a \"good thing,\" as it seems to be fairly common for 10 to be too small\n> to allow statistics to be stable. But nobody has done any formal\n> evaluation as to whether it would make sense to jump from 10 to:\n>\n> - 15?\n> - 20?\n> - 50?\n> - 100?\n> - More than that?\n\nMy anecdotal experience is that if more than 10 is required, you generally \nneed to jump to at least 100, and more often 250. On the other end, I've \ngenerally not found any difference between 400 and 1000 when it comes to \n\"bad\" queries.\n\nI have an unfinished patch in the works which goes through and increases the \nstats_target for all *indexed* columns to 100 or so. However, I've needed \nto work up a test case to prove the utility of it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 13 May 2005 09:22:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommendations for set statistics"
}
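A rough sketch of the idea behind that patch, expressed as a catalog query; it only lists candidate columns (nothing is changed), and for simplicity it looks at no more than the first four columns of each index:

  SELECT DISTINCT c.relname AS table_name, a.attname AS column_name
  FROM pg_class c
  JOIN pg_index i     ON i.indrelid = c.oid
  JOIN pg_attribute a ON a.attrelid = c.oid
  WHERE c.relkind = 'r'
    AND c.relname NOT LIKE 'pg_%'
    AND a.attnum IN (i.indkey[0], i.indkey[1], i.indkey[2], i.indkey[3])
  ORDER BY 1, 2;

Each column returned is a candidate for ALTER TABLE ... ALTER COLUMN ... SET STATISTICS 100 (or 250), followed by ANALYZE, along the lines Josh describes.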
] |
[
{
"msg_contents": "\n Hello,\n\n We have problems with one postgresql database with high\ndata change rate. Actually we are already under pressure\nto change postgresql to Oracle.\n\n I cannot post schema and queries to list but can do this\nprivately.\n\n Tables are not big (20000-150000 rows each) but have very high\nturnover rate - 100+ updates/inserts/deletes/selects per second.\nSo contents of database changes very fast. Problem is that when\npg_autovacuum does vacuum those changes slows down too much.\nAnd we keep autovacuum quite aggressive (-v 1000 -V 0.5 -a 1000\n-A 0.1 -s 10) to not bloat database and to avoid bigger impact.\nanalyze seems not to impact performance too much.\n\n Tables have 2-3 indexes each and one table have foreign key\ncontraint. Postgresql is 8.0.1. vmstat shows that IO and CPU are\nnot saturated. DB is on RAID1+0 controller with battery backed write\ncache.\n\n What can we tune to improve performance in our case? Please help\nto defend PostgreSQL against Oracle in this case :).\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Fri, 13 May 2005 15:52:38 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL strugling during high load"
},
{
"msg_contents": "On Fri, May 13, 2005 at 03:52:38PM +0300, Mindaugas Riauba wrote:\n> Tables are not big (20000-150000 rows each) but have very high\n> turnover rate - 100+ updates/inserts/deletes/selects per second.\n> So contents of database changes very fast. Problem is that when\n> pg_autovacuum does vacuum those changes slows down too much.\n> And we keep autovacuum quite aggressive (-v 1000 -V 0.5 -a 1000\n> -A 0.1 -s 10) to not bloat database and to avoid bigger impact.\n> analyze seems not to impact performance too much.\n\nAre you using the bgwriter?\nSee http://developer.postgresql.org/~wieck/vacuum_cost/ for details.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 13 May 2005 15:00:37 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> ... So contents of database changes very fast. Problem is that when\n> pg_autovacuum does vacuum those changes slows down too much.\n\nThe \"vacuum cost\" parameters can be adjusted to make vacuums fired\nby pg_autovacuum less of a burden. I haven't got any specific numbers\nto suggest, but perhaps someone else does.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 May 2005 09:42:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
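For reference, the knobs Tom means are the cost-based vacuum delay settings introduced in 8.0. A sketch with illustrative values only ('some_table' is a placeholder); for vacuums started by pg_autovacuum the same settings have to go into postgresql.conf, because the daemon uses its own connections:

  -- 0 (the default) disables the delay entirely; a small value makes VACUUM
  -- pause regularly instead of monopolising the disk.
  SET vacuum_cost_delay = 10;    -- milliseconds to sleep when the limit is hit
  SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers the sleep
  VACUUM ANALYZE some_table;

The trade-off is that the vacuum itself takes longer, so the values need tuning against how quickly the tables bloat.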
{
"msg_contents": "\n> > ... So contents of database changes very fast. Problem is that when\n> > pg_autovacuum does vacuum those changes slows down too much.\n>\n> The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n> by pg_autovacuum less of a burden. I haven't got any specific numbers\n> to suggest, but perhaps someone else does.\n\n It looks like that not only vacuum causes our problems. vacuum_cost\nseems to lower vacuum impact but we are still noticing slow queries \"storm\".\nWe are logging queries that takes >2000ms to process.\n And there is quiet periods and then suddenly 30+ slow queries appears in\nlog within the same second. What else could cause such behaviour? WAL log\nswitch? One WAL file seems to last <1 minute.\n\n And also in slow queries log only function call is shown. Maybe it is\npossible\nto get exact query which slows everything down in the serverlog?\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Fri, 13 May 2005 17:10:01 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> It looks like that not only vacuum causes our problems. vacuum_cost\n> seems to lower vacuum impact but we are still noticing slow queries \"storm\".\n> We are logging queries that takes >2000ms to process.\n> And there is quiet periods and then suddenly 30+ slow queries appears in\n> log within the same second. What else could cause such behaviour?\n\nCheckpoints? You should ensure that the checkpoint settings are such\nthat checkpoints don't happen too often (certainly not oftener than\nevery five minutes or so), and make sure the bgwriter is configured\nto dribble out dirty pages at a reasonable rate, so that the next\ncheckpoint doesn't have a whole load of stuff to write.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 May 2005 10:12:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
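The settings Tom is referring to live in postgresql.conf and cannot be changed per session, but the current values are easy to check from SQL; the directions given in the comments are rough guidance only:

  SHOW checkpoint_segments;   -- raising this (e.g. 16-32) spaces checkpoints further apart
  SHOW checkpoint_timeout;    -- keep at 300 seconds or more, as suggested above
  SHOW checkpoint_warning;    -- logs a warning when checkpoints come faster than this
  SHOW bgwriter_delay;        -- how often the background writer wakes up, in ms
  SHOW bgwriter_percent;      -- share of dirty buffers written per round (8.0 parameter names)
  SHOW bgwriter_maxpages;     -- cap on pages written per round

After editing postgresql.conf the server needs a reload or restart, depending on the parameter, before the new values apply.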
{
"msg_contents": "\n> > It looks like that not only vacuum causes our problems. vacuum_cost\n> > seems to lower vacuum impact but we are still noticing slow queries\n\"storm\".\n> > We are logging queries that takes >2000ms to process.\n> > And there is quiet periods and then suddenly 30+ slow queries appears\nin\n> > log within the same second. What else could cause such behaviour?\n>\n> Checkpoints? You should ensure that the checkpoint settings are such\n> that checkpoints don't happen too often (certainly not oftener than\n> every five minutes or so), and make sure the bgwriter is configured\n> to dribble out dirty pages at a reasonable rate, so that the next\n> checkpoint doesn't have a whole load of stuff to write.\n\n bgwriter settings are default. bgwriter_delay=200, bgwriter_maxpages=100,\nbgwriter_percent=1. checkpoint_segments=8, checkpoint_timeout=300,\ncheckpoint_warning=30.\n\n But there's no checkpoint warnings in serverlog. And btw we are running\nwith fsync=off (yes I know the consequences).\n\n Database from the viewpoint of disk is practically write only since amount\nof data is smaller than memory available. I also added some 'vmstat 1'\noutput.\n\n How to get more even load. As you see neither disk nor cpu looks too busy.\n\n Thanks,\n\n Mindaugas\n\n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us sy\nid\n 1 0 0 194724 12140 10220 1045356 0 1 33 24 60 20 13 3\n83\n 2 0 0 194724 11988 10228 1045464 0 0 12 0 1147 6107 13 4\n82\n 0 2 0 194724 12172 10284 1046076 0 0 244 20692 2067 3117 8 8\n84\n 1 0 0 194724 12164 10280 1045912 0 0 0 4 876 8831 15 11\n74\n 3 0 0 194724 11704 10328 1045952 0 0 24 2116 928 5122 13 12\n75\n 1 0 0 194724 11444 10236 1046264 0 0 340 0 1048 6538 19 10\n71\n 1 0 0 194724 11924 10236 1045816 0 0 0 0 885 7616 14 20\n66\n 0 0 0 194724 11408 10252 1044824 0 0 28 5488 959 4749 11 14\n75\n 1 0 0 194724 11736 10296 1042992 0 0 460 2868 1001 4116 12 12\n75\n 0 0 0 194724 12024 10296 1043064 0 0 36 0 903 5081 13 12\n76\n 1 0 0 194724 12404 10240 1043440 0 0 280 0 899 4246 12 12\n75\n 1 0 0 194724 13128 10236 1043472 0 0 0 0 1016 5394 12 10\n78\n 0 4 0 194724 13064 10244 1043652 0 0 0 14736 1882 9290 10 15\n74\n 0 4 0 194724 13056 10252 1043660 0 0 0 6012 1355 2378 2 3\n95\n12 21 0 194724 13140 10220 1043640 0 0 8 4 723 2984 5 3\n92\n 1 0 0 194724 13712 10228 1043956 0 0 200 0 1144 10310 30 21\n50\n 0 0 0 194724 13100 10220 1043992 0 0 4 0 840 4676 15 14\n71\n 0 0 0 194724 13048 10296 1041212 0 0 4 6132 918 4074 10 10\n80\n 0 0 0 194724 12688 10324 1041508 0 0 240 1864 849 3873 12 11\n77\n 2 0 0 194724 12544 10240 1041944 0 0 32 0 1171 4844 14 7\n78\n 1 0 0 194724 12384 10232 1041756 0 0 4 0 973 6063 16 9\n75\n 1 0 0 194724 12904 10244 1042116 0 0 264 6052 1049 4762 15 14\n71\n 0 0 0 194724 12616 10236 1042164 0 0 8 0 883 4748 13 8\n79\n 2 0 0 194724 12576 10288 1042460 0 0 252 3136 857 3929 13 15\n73\n 2 0 0 194724 12156 10284 1042504 0 0 0 0 858 8832 13 6\n81\n 2 0 0 194724 12024 10284 1042556 0 0 0 0 834 4229 16 10\n74\n 3 1 0 194724 12024 10364 1043096 0 0 316 10328 1024 5686 14 7\n80\n 0 5 0 194724 12024 10352 1043116 0 0 4 7996 2156 2816 4 5\n90\n 0 4 0 194724 12024 10360 1043124 0 0 4 8560 1369 2700 6 5\n90\n 3 0 0 194724 12024 10264 1043124 0 0 0 4 1037 5132 14 15\n71\n 1 1 0 194724 11876 10264 1043176 0 0 4 0 932 7761 20 20\n6\n\n",
"msg_date": "Fri, 13 May 2005 17:45:45 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
{
"msg_contents": "Mindaugas Riauba wrote:\n\n>>The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n>>by pg_autovacuum less of a burden. I haven't got any specific numbers\n>>to suggest, but perhaps someone else does.\n> \n> It looks like that not only vacuum causes our problems. vacuum_cost\n> seems to lower vacuum impact but we are still noticing slow queries \"storm\".\n> We are logging queries that takes >2000ms to process.\n> And there is quiet periods and then suddenly 30+ slow queries appears in\n> log within the same second. What else could cause such behaviour?\n\nI've seen that happen when you're placing (explicitly or\n*implicitly*) locks on the records you're trying to update/delete.\n\nIf you're willing to investigate, `pg_locks' system view holds\ninformation about db locks.\n\n-- \nCosimo\n",
"msg_date": "Fri, 13 May 2005 17:23:13 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
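A concrete form of that check, using the 8.0 catalog columns; it lists only locks that have not been granted and, where the statistics collector allows it, the query that is waiting:

  SELECT l.relation::regclass AS locked_relation,
         l.pid, l.mode, l.granted,
         a.current_query
  FROM pg_locks l
  LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE NOT l.granted;

current_query only shows real text when stats_command_string is enabled; an empty result here means no session is currently blocked on a lock, which is also what the sample posted further down in the thread shows.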
{
"msg_contents": "\n> >>The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n> >>by pg_autovacuum less of a burden. I haven't got any specific numbers\n> >>to suggest, but perhaps someone else does.\n> >\n> > It looks like that not only vacuum causes our problems. vacuum_cost\n> > seems to lower vacuum impact but we are still noticing slow queries\n\"storm\".\n> > We are logging queries that takes >2000ms to process.\n> > And there is quiet periods and then suddenly 30+ slow queries appears\nin\n> > log within the same second. What else could cause such behaviour?\n>\n> I've seen that happen when you're placing (explicitly or\n> *implicitly*) locks on the records you're trying to update/delete.\n>\n> If you're willing to investigate, `pg_locks' system view holds\n> information about db locks.\n\n Hm. Yes. Number of locks varies quite alot (10-600). Now what to\ninvestigate\nfurther? We do not use explicit locks in our functions. We use quite simple\nupdate/delete where key=something;\n Some sample (select * from pg_locks order by pid) is below.\n\n Thanks,\n\n Mindaugas\n\n | | 584302172 | 11836 | ExclusiveLock | t\n 17236 | 17230 | | 11836 | AccessShareLock | t\n 17236 | 17230 | | 11836 | RowExclusiveLock | t\n 127103 | 17230 | | 11836 | RowExclusiveLock | t\n 127106 | 17230 | | 11836 | RowExclusiveLock | t\n 127109 | 17230 | | 11836 | AccessShareLock | t\n 127109 | 17230 | | 11836 | RowExclusiveLock | t\n 127109 | 17230 | | 11837 | AccessShareLock | t\n 127109 | 17230 | | 11837 | RowExclusiveLock | t\n 17236 | 17230 | | 11837 | AccessShareLock | t\n 17236 | 17230 | | 11837 | RowExclusiveLock | t\n 127106 | 17230 | | 11837 | RowExclusiveLock | t\n 127103 | 17230 | | 11837 | RowExclusiveLock | t\n | | 584302173 | 11837 | ExclusiveLock | t\n 127103 | 17230 | | 11838 | RowExclusiveLock | t\n 17236 | 17230 | | 11838 | RowExclusiveLock | t\n 127109 | 17230 | | 11838 | RowExclusiveLock | t\n | | 584302174 | 11838 | ExclusiveLock | t\n 17285 | 17230 | | 11838 | AccessShareLock | t\n 17251 | 17230 | | 11838 | AccessShareLock | t\n 130516 | 17230 | | 11838 | AccessShareLock | t\n 127106 | 17230 | | 11838 | RowExclusiveLock | t\n 17278 | 17230 | | 11838 | AccessShareLock | t\n\n",
"msg_date": "Fri, 13 May 2005 18:41:43 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "On Fri, May 13, 2005 at 05:45:45PM +0300, Mindaugas Riauba wrote:\n> But there's no checkpoint warnings in serverlog. And btw we are running\n> with fsync=off (yes I know the consequences).\n\nJust a note here; since you have battery-backed hardware cache, you\nprobably won't notice that much of a slowdown with fsync=on. However, you are\nalready pushed, so... :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 13 May 2005 18:12:14 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> Hm. Yes. Number of locks varies quite alot (10-600). Now what to\n> investigate\n> further? We do not use explicit locks in our functions. We use quite simple\n> update/delete where key=something;\n> Some sample (select * from pg_locks order by pid) is below.\n\nThe sample doesn't show any lock issues (there are no processes waiting\nfor ungranted locks). The thing that typically burns people is foreign\nkey conflicts. In current releases, if you have a foreign key reference\nthen an insert in the referencing table takes an exclusive row lock on\nthe referenced (master) row --- which means that two inserts using the\nsame foreign key value block each other.\n\nYou can alleviate the issue by making all your foreign key checks\ndeferred, but that just shortens the period of time the lock is held.\nThere will be a real solution in PG 8.1, which has sharable row locks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 May 2005 12:42:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
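What "making the checks deferred" looks like in SQL, with made-up table and constraint names; the constraint has to be recreated, because 8.0 has no way to add DEFERRABLE to an existing one:

  -- Recreate the FK as deferrable (names are placeholders).
  ALTER TABLE some_child DROP CONSTRAINT some_child_msg_id_fkey;
  ALTER TABLE some_child ADD CONSTRAINT some_child_msg_id_fkey
      FOREIGN KEY (msg_id) REFERENCES some_parent (msg_id)
      DEFERRABLE INITIALLY DEFERRED;

  -- A constraint created as DEFERRABLE INITIALLY IMMEDIATE can instead be
  -- deferred only in the transactions that need it:
  BEGIN;
  SET CONSTRAINTS ALL DEFERRED;
  -- ... inserts and updates ...
  COMMIT;   -- the FK triggers, and the row locks they take, run only here

As Tom notes, this only shortens the window in which the referenced row is locked; the shared row locks in 8.1 are the real fix.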
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> \"Mindaugas Riauba\" <[email protected]> writes:\n> > ... So contents of database changes very fast. Problem is that\n> when\n> > pg_autovacuum does vacuum those changes slows down too much.\n> \n> The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n> by pg_autovacuum less of a burden. I haven't got any specific\n> numbers\n> to suggest, but perhaps someone else does.\n\nI solved one problem by cranking sleep scaling to -S 20.\nIt made pg_autovacuum back off longer during extended periods of heavy\ndisk-intensive query activity. Our update activity is near-constant\ninsert rate, then once or twice a day, massive deletes.\n-- \nDreams come true, not free.\n\n",
"msg_date": "Fri, 13 May 2005 10:33:22 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
{
"msg_contents": "Mindaugas Riauba wrote:\n\n>>The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n>>by pg_autovacuum less of a burden. I haven't got any specific numbers\n>>to suggest, but perhaps someone else does.\n>> \n>>\n>\n> It looks like that not only vacuum causes our problems. vacuum_cost\n>seems to lower vacuum impact but we are still noticing slow queries \"storm\".\n>We are logging queries that takes >2000ms to process.\n> And there is quiet periods and then suddenly 30+ slow queries appears in\n>log within the same second. What else could cause such behaviour? WAL log\n>switch? One WAL file seems to last <1 minute.\n> \n>\n\nHow long are these quite periods? Do the \"strom\" periods correspond to \npg_autovacuum loops? I have heard from one person who had LOTS of \ndatabases and tables that caused the pg_autovacuum to create a noticable \nload just updateing all its stats. The solution in that case was to add \na small delay insidet the inner pg_autovacuum loop.\n",
"msg_date": "Sun, 15 May 2005 22:26:50 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "Actually, that solution didn't work so well. Even very small delays \nin the loop caused the entire loop to perform too slowly to be useful \nin the production environment. I ended up producing a small patch out \nof it :P, but we ended up using pgpool to reduce connections from \nanother part of the app, which made the pg_autovacuum spikes less \ntroublesome overall.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn May 15, 2005, at 9:26 PM, Matthew T. O'Connor wrote:\n\n> Mindaugas Riauba wrote:\n>\n>\n>>> The \"vacuum cost\" parameters can be adjusted to make vacuums fired\n>>> by pg_autovacuum less of a burden. I haven't got any specific \n>>> numbers\n>>> to suggest, but perhaps someone else does.\n>>>\n>>>\n>>\n>> It looks like that not only vacuum causes our problems. vacuum_cost\n>> seems to lower vacuum impact but we are still noticing slow \n>> queries \"storm\".\n>> We are logging queries that takes >2000ms to process.\n>> And there is quiet periods and then suddenly 30+ slow queries \n>> appears in\n>> log within the same second. What else could cause such behaviour? \n>> WAL log\n>> switch? One WAL file seems to last <1 minute.\n>>\n>>\n>\n> How long are these quite periods? Do the \"strom\" periods \n> correspond to pg_autovacuum loops? I have heard from one person \n> who had LOTS of databases and tables that caused the pg_autovacuum \n> to create a noticable load just updateing all its stats. The \n> solution in that case was to add a small delay insidet the inner \n> pg_autovacuum loop.\n",
"msg_date": "Sun, 15 May 2005 21:45:54 -0500",
"msg_from": "\"Thomas F. O'Connell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "\n> > Hm. Yes. Number of locks varies quite alot (10-600). Now what to\n> > investigate\n> > further? We do not use explicit locks in our functions. We use quite\nsimple\n> > update/delete where key=something;\n> > Some sample (select * from pg_locks order by pid) is below.\n>\n> The sample doesn't show any lock issues (there are no processes waiting\n> for ungranted locks). The thing that typically burns people is foreign\n> key conflicts. In current releases, if you have a foreign key reference\n> then an insert in the referencing table takes an exclusive row lock on\n> the referenced (master) row --- which means that two inserts using the\n> same foreign key value block each other.\n>\n> You can alleviate the issue by making all your foreign key checks\n> deferred, but that just shortens the period of time the lock is held.\n> There will be a real solution in PG 8.1, which has sharable row locks.\n\n In such case our foreign key contraint should not be an issue since it\nis on msg_id which is pretty much unique among concurrent transactions.\n\n And I noticed that \"storms\" happens along with higher write activity. If\nbo in vmstat shows 25+MB in 2s then most likely I will get \"storm\" of slow\nqueries in serverlog. How to even write activity? fsync=off, bgwriter\nsettings\nare default.\n\n And is it possible to log which query in function takes the longest time\nto complete?\n\n Also do not know if it matters but PG database is on ext3 partition with\ndata=journal option.\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Mon, 16 May 2005 13:05:04 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load "
},
{
"msg_contents": "Tom\n\nThanks for the post - I think I am getting this problem for\na synthetic workload at high connection loads. The whole\nsystem seems to stop. \n\nCan you give some examples on what to try out in the .conf file?\n\nI tried\nbgwriter_all_percent = 30, 10, and 3\n\nWhich I understand to mean 30%, 10% and 3% of the dirty pages should be\nwritten out *between* checkpoints.\n\nI didn't see any change in effect.\n\n/regards\nDon C.\n\nTom Lane wrote:\n\n>\"Mindaugas Riauba\" <[email protected]> writes:\n> \n>\n>> It looks like that not only vacuum causes our problems. vacuum_cost\n>>seems to lower vacuum impact but we are still noticing slow queries \"storm\".\n>>We are logging queries that takes >2000ms to process.\n>> And there is quiet periods and then suddenly 30+ slow queries appears in\n>>log within the same second. What else could cause such behaviour?\n>> \n>>\n>\n>Checkpoints? You should ensure that the checkpoint settings are such\n>that checkpoints don't happen too often (certainly not oftener than\n>every five minutes or so), and make sure the bgwriter is configured\n>to dribble out dirty pages at a reasonable rate, so that the next\n>checkpoint doesn't have a whole load of stuff to write.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n",
"msg_date": "Thu, 19 May 2005 12:54:03 -0400",
"msg_from": "Donald Courtney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
}
] |
[
{
"msg_contents": "We are up and somewhat happy.\n\n \n\nI have been following threads (in case you don't know I bought a 4 proc Dell\nrecently) and the Opteron seems the way to go.\n\nI just called HP for a quote, but don't want to make any mistakes.\n\n \n\nIs the battery backed cache good or bad for Postgres?\n\n \n\nThey are telling me I can only get a duel channel card if I want hardware\nraid 10 on the 14 drives.\n\nI can get two cards but it has to be 7 and 7 (software raid?) which does not\nsound like it fixes my single point of failure (one of the listers mentioned\nmy current system has 3 such single points).\n\n \n\nAny of you hardware gurus spell out the optimal machine (I am hoping to be\naround 15K, might be able to go more if it's a huge difference, I spent 30k\non the Dell).\n\nI do not have to go HP, and after seeing the fail ratio from Monarch from\none lister I am bit scared shopping there.\n\nWas there a conclusion on where is best to get one (I really want two one\nfor development too).\n\n \n\n \n\nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n\n \n\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nWe are up and somewhat happy.\n \nI have been following threads (in case you don’t know\nI bought a 4 proc Dell recently) and the Opteron seems the way to go.\nI just called HP for a quote, but don’t want to make\nany mistakes.\n \nIs the battery backed cache good or bad for Postgres?\n \nThey are telling me I can only get a duel channel card if I\nwant hardware raid 10 on the 14 drives.\nI can get two cards but it has to be 7 and 7 (software\nraid?) which does not sound like it fixes my single point of failure (one of\nthe listers mentioned my current system has 3 such single points).\n \nAny of you hardware gurus spell out the optimal machine (I\nam hoping to be around 15K, might be able to go more if it’s a huge\ndifference, I spent 30k on the Dell).\nI do not have to go HP, and after seeing the fail ratio from\nMonarch from one lister I am bit scared shopping there.\nWas there a conclusion on where is best to get one (I really\nwant two one for development too).\n \n \nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the\nintended recipient, please contact the sender by reply email and delete and\ndestroy all copies of the original message, including attachments.",
"msg_date": "Fri, 13 May 2005 15:27:55 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ok you all win what is best opteron (I dont want a hosed system\n again)"
},
{
"msg_contents": "Joel Fradkin wrote:\n\n> Is the battery backed cache good or bad for Postgres?\n>\nBattery-backed avoids corruption if you have an unexpected power loss. \nIt's considered mandatory with large-cache write-back controllers if you \ncan't afford to lose any data.\n\n> They are telling me I can only get a duel channel card if I want \n> hardware raid 10 on the 14 drives.\n>\n> I can get two cards but it has to be 7 and 7 (software raid?) which \n> does not sound like it fixes my single point of failure (one of the \n> listers mentioned my current system has 3 such single points).\n>\nSounds like you need to try another vendor. Are you aiming for two RAID \n10 arrays or one RAID 10 and one RAID 5?\n\n> Any of you hardware gurus spell out the optimal machine (I am hoping \n> to be around 15K, might be able to go more if it�s a huge difference, \n> I spent 30k on the Dell).\n>\n> I do not have to go HP, and after seeing the fail ratio from Monarch \n> from one lister I am bit scared shopping there.\n>\nThere's unlikely to be many common components between their workstation \nand server offerings. You would expect case, power, graphics, \nmotherboard, storage controller and drives to all be different. But I'd \nchallenge that 50% failure stat anyway. Which components exactly? Hard \ndrives? Power supplies?\n\n> Was there a conclusion on where is best to get one (I really want two \n> one for development too).\n>\nAlmost anyone can build a workstation or tower server, and almost anyone \nelse can service it for you. It gets trickier when you're talking 2U and \nespecially 1U. But really, these too can be maintained by anyone \ncompetent. So I wonder about some people's obsession with \nvendor-provided service.\n\nRealistically, most Opteron solutions will use a Tyan motherboard (no \nidea if this includes HP). For 4-way systems, there's currently only the \nS4882, which includes an LSI dual channel SCSI controller. Different \nvendors get to use different cases and cooling solutions and pick a \ndifferent brand/model of hard drive, but that's about it.\n\nTyan now also sells complete servers - hardly a stretch seeing they \nalready make the most important bit (after the CPU). Given the level of \ninterest in this forum, here's their list of US resellers:\n\nhttp://www.tyan.com/products/html/us_alwa.html\n\nIf it's a tower server, build it yourself or pay someone to do it. It \nreally isn't challenging for anyone knowledgeable about hardware.\n",
"msg_date": "Sat, 14 May 2005 10:03:06 +1000",
"msg_from": "David Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a"
},
{
"msg_contents": "Thank you much for the info.\nI will take a look. I think the prices I have been seeing may exclude us\ngetting another 4 proc box this soon. My boss asked me to get something in\nthe 15K range (I spent 30 on the Dell). \nThe HP seemed to run around 30 but it had a lot more drives then the dell\n(speced it with 14 10k drives).\n\nI can and will most likely build it myself to try getting a bit more bang\nfor the buck and it is a second server so if it dies it should not be a\ncatastrophie.\n\nFYI everyone using our system (after a week of dealing with many bugs) have\nbeen saying how much they like the speed.\nI did have to do a lot of creative ideas to get it working in a way that\nappears faster to the client.\nStuff like the queries default to limit 50 and as they hit next I up the\nlimit (also a flag to just show all records and a count, it used to default\nto that). The two worst queries (our case and audit applications) I created\ndenormalized files and maintain them through code. All reporting comes off\nthose and it is lightning fast.\n\nI just want to say again thanks to everyone who has helped me in the past\nfew months.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: David Brown [mailto:[email protected]] \nSent: Friday, May 13, 2005 7:03 PM\nTo: Joel Fradkin\nCc: [email protected]\nSubject: Re: [PERFORM] ok you all win what is best opteron (I dont want a\nhosed system again)\n\nJoel Fradkin wrote:\n\n> Is the battery backed cache good or bad for Postgres?\n>\nBattery-backed avoids corruption if you have an unexpected power loss. \nIt's considered mandatory with large-cache write-back controllers if you \ncan't afford to lose any data.\n\n> They are telling me I can only get a duel channel card if I want \n> hardware raid 10 on the 14 drives.\n>\n> I can get two cards but it has to be 7 and 7 (software raid?) which \n> does not sound like it fixes my single point of failure (one of the \n> listers mentioned my current system has 3 such single points).\n>\nSounds like you need to try another vendor. Are you aiming for two RAID \n10 arrays or one RAID 10 and one RAID 5?\n\n> Any of you hardware gurus spell out the optimal machine (I am hoping \n> to be around 15K, might be able to go more if it's a huge difference, \n> I spent 30k on the Dell).\n>\n> I do not have to go HP, and after seeing the fail ratio from Monarch \n> from one lister I am bit scared shopping there.\n>\nThere's unlikely to be many common components between their workstation \nand server offerings. You would expect case, power, graphics, \nmotherboard, storage controller and drives to all be different. But I'd \nchallenge that 50% failure stat anyway. Which components exactly? Hard \ndrives? 
Power supplies?\n\n> Was there a conclusion on where is best to get one (I really want two \n> one for development too).\n>\nAlmost anyone can build a workstation or tower server, and almost anyone \nelse can service it for you. It gets trickier when you're talking 2U and \nespecially 1U. But really, these too can be maintained by anyone \ncompetent. So I wonder about some people's obsession with \nvendor-provided service.\n\nRealistically, most Opteron solutions will use a Tyan motherboard (no \nidea if this includes HP). For 4-way systems, there's currently only the \nS4882, which includes an LSI dual channel SCSI controller. Different \nvendors get to use different cases and cooling solutions and pick a \ndifferent brand/model of hard drive, but that's about it.\n\nTyan now also sells complete servers - hardly a stretch seeing they \nalready make the most important bit (after the CPU). Given the level of \ninterest in this forum, here's their list of US resellers:\n\nhttp://www.tyan.com/products/html/us_alwa.html\n\nIf it's a tower server, build it yourself or pay someone to do it. It \nreally isn't challenging for anyone knowledgeable about hardware.\n\n",
"msg_date": "Sat, 14 May 2005 14:19:20 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system\n\tagain)"
},
{
"msg_contents": "Joel,\n\n> The two worst queries (our case and audit applications) I created\n> denormalized files and maintain them through code. All reporting comes off\n> those and it is lightning fast.\n\nThis can often be called for. I'm working on a 400GB data warehouse right \nnow, and almost *all* of our queries run from materialized aggregate tables.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 14 May 2005 14:19:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system\n\tagain)"
},
{
"msg_contents": "> This can often be called for. I'm working on a 400GB data warehouse right \n> now, and almost *all* of our queries run from materialized aggregate tables.\n\nI thought that was pretty much the definition of data warehousing! :-)\n--\nMike Nolan\n",
"msg_date": "Sat, 14 May 2005 17:16:52 -0500 (CDT)",
"msg_from": "Mike Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system\n\tagain)"
},
{
"msg_contents": "4-way SMP Opteron system is actually pretty damn cheap -- if you get \n2xDual Core versus 4xSingle. I just ordered a 2x265 (4x1.8ghz) system \nand the price was about $1300 more than a 2x244 (2x1.8ghz).\n\nNow you might ask, is a 2xDC comparable to 4x1? Here's some benchmarks \nI've found that showing DC versus Single @ the same clock rates/same # \ncores.\n\nSpecIntRate Windows:\n4x846 = 56.7\n2x270 = 62.6\n\nSpecFPRate Windows:\n4x846 = 52.5\n2x270 = 55.3\n\nSpecWeb99SSL:\n4x846 = 3399\n2x270 = 4100 (2 870s were used)\n\nSpecjbb2000 IBM JVM:\n4x848 = 146385\n4x275 = 157432\n\nWhat it looks like is a DC system is about 1 clock blip faster than a \ncorresponding single core SMP system. E.g. if you have a 2xDC @ 1.8ghz, \nyou need a 4x1 @ 2ghz to match the speed. (In some benchmarks, the \ndifference is 2 clock steps up.)\n\nOn the surface, it looks pretty amazing that a 4x1 Opteron with twice \nthe memory bandwidth is slower than a corresponding 2xDC. (DC Opterons \nuse the same socket as plain jane Opterons so they use the same 2xDDR \nmemory setup.) It turns out the latency in a 2xDC setup is just so much \nlower and most apps like lower latency than higher bandwidth. Look at \nthe diagram of the following Tyan 4-processor MB:\n\nftp://ftp.tyan.com/datasheets/d_s4882_100.pdf\n\nTake particular note of the lack of diagonal lines connecting CPUs. What \nthis means is if a process running on CPU0 needs memory attached to \nCPU3, it must request either CPU1 or CPU2 to forward the request for it. \nWithout NUMA support, we're looking at 25% of memory access runs @ 50ns, \n50% 110ns, 25% 170ns. (Rough numbers, I'd have to do a lot of googling \nto the find the exact latencies but I'm just too lazy now.)\n\nNow consider a 2xDC system. The 2 cores inside a single package are \nconnected by an immensely fast internal SRQ connection. As long as \nthere's no bandwidth limitation, both cores have fullspeed access to \nmemory while core-to-core snooping on each respective cache is roughly \n10ns. So memory access speeds look like so: 50% 50ns, 50% 110ns.\n\nIf the memory locations you are need to access happen to be contained in \nthe L1/L2 cache, this makes the difference even more pronounced. You \nthen get memory access patterns for 4x1: 25% 5ns, 50% 65ns, 25% 125ns \nversus 2xDC: 25% 5ns, 25% 15ns, 50% 65ns.\n\n\n\nJoel Fradkin wrote:\n> Thank you much for the info.\n> I will take a look. I think the prices I have been seeing may exclude us\n> getting another 4 proc box this soon. My boss asked me to get something in\n> the 15K range (I spent 30 on the Dell). \n> The HP seemed to run around 30 but it had a lot more drives then the dell\n> (speced it with 14 10k drives).\n",
"msg_date": "Sat, 14 May 2005 17:43:48 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system"
},
{
"msg_contents": "\nWilliam Yu <[email protected]> writes:\n\n> It turns out the latency in a 2xDC setup is just so much lower and most apps\n> like lower latency than higher bandwidth.\n\nYou haven't tested anything about \"most apps\". You tested what the SpecFoo\napps prefer. If you're curious about which Postgres prefers you'll have to\ntest with Postgres.\n\nI'm not sure whether it will change the conclusion but I expect Postgres will\nlike bandwidth better than random benchmarks do.\n\n\n-- \ngreg\n\n",
"msg_date": "15 May 2005 15:36:10 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system"
},
{
"msg_contents": "I say most apps because it's true. :) I would suggest that pretty much \nevery app (other than video/audio streaming) people think are \nbandwidth-limited are actually latency-limited. Take the SpecFoo tests. \nSure I would have rather seen SAP/TPC/etc that would be more relevant to \nPostgres but there aren't any apples-to-apples comparisons available \nyet. But there's something to consider here. What people in the past \nhave believed is that memory bandwidth is the key to Spec numbers -- \nSpecFP isn't a test of floating point performance, it's a test of memory \nbandwidth. Or is it? Numbers for DC Opterons show lower latency/lower \nbandwith beating higher latency/higher bandwidth in what was supposedly \nbandwidth limited. What may actually be happening is extra bandwidth \nisn't actually used directly by the app itself -- instead the CPU uses \nit for prefetching to hide latency.\n\nScrounging around for more numbers, I've found benchmarks at Anandtech \nthat relate better to Postgres. He has a \"Order Entry\" OLTP app running \non MS-SQL. 1xDC beats 2x1 -- 2xDC beats 4x1.\n\norder entry reads\n2x248 - 235113\n1x175 - 257192\n4x848 - 360014\n2x275 - 392643\n\norder entry writes\n2x248 - 235107\n1x175 - 257184\n4x848 - 360008\n2x275 - 392634\n\norder entry stored procedures\n2x248 - 2939\n1x175 - 3215\n4x848 - 4500\n2x275 - 4908\n\n\n\n\n\nGreg Stark wrote:\n\n>William Yu <[email protected]> writes:\n>\n> \n>\n>>It turns out the latency in a 2xDC setup is just so much lower and most apps\n>>like lower latency than higher bandwidth.\n>> \n>>\n>\n>You haven't tested anything about \"most apps\". You tested what the SpecFoo\n>apps prefer. If you're curious about which Postgres prefers you'll have to\n>test with Postgres.\n>\n>I'm not sure whether it will change the conclusion but I expect Postgres will\n>like bandwidth better than random benchmarks do.\n>\n>\n> \n>\n\n",
"msg_date": "Sun, 15 May 2005 13:39:39 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a"
}
] |
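As a rough illustration of the "denormalized file" / materialized-aggregate approach Joel and Josh describe in the thread above, the SQL below sketches a summary table refreshed by a full reload. The table and column names (case_daily_summary, cases, clientid, created) are invented for the example, and since PostgreSQL 8.0 has no built-in materialized views the refresh would have to be driven by application code or cron.

-- Reporting summary table, maintained separately from the normalized data
-- (all identifiers here are hypothetical)
CREATE TABLE case_daily_summary AS
SELECT clientid,
       date_trunc('day', created) AS summary_day,
       count(*) AS case_count
FROM cases
GROUP BY clientid, date_trunc('day', created);

CREATE INDEX i_case_summary ON case_daily_summary (clientid, summary_day);

-- Simple full refresh, run from cron or after batch loads
BEGIN;
TRUNCATE case_daily_summary;
INSERT INTO case_daily_summary
  SELECT clientid, date_trunc('day', created), count(*)
  FROM cases
  GROUP BY clientid, date_trunc('day', created);
COMMIT;

Reports then read only the small summary table, which is the effect Joel describes when he says all reporting comes off the denormalized files.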
[
{
"msg_contents": "Joel wrote:\nI have been following threads (in case you don't know I bought a 4 proc\nDell recently) and the Opteron seems the way to go.\nI just called HP for a quote, but don't want to make any mistakes.\n[snip]\n\nAt your level of play it's the DL585.\nHave you checked out http://www.swt.com? \n\nMerlin\n",
"msg_date": "Fri, 13 May 2005 15:55:16 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ok you all win what is best opteron (I dont want a hosed system\n\tagain)"
}
] |
[
{
"msg_contents": "\n>> Question, though: is HP still using their proprietary RAID \n>card? And, if so, \n>> have they fixed its performance problems?\n>\n>According to my folks here, we're using the CCISS controllers, so I\n>guess they are. The systems are nevertheless performing very well --\n>we did a load test that was pretty impressive. Also, Chris Browne\n>pointed me to this for the drivers:\n>\n>http://sourceforge.net/projects/cciss/\n\nThat driver is for all the remotely modern HP cards. I think the one\nwith performance problems was the builtin one they user to have\n(SmartArray 5i). AFAIK, the new builtins (6i) are a lot better. And the\nhigh-end add-on cards I've never had any performance problems with -\nlinux and windows both.\n\n//Magnus\n",
"msg_date": "Fri, 13 May 2005 23:22:13 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Whence the Opterons?"
}
] |
[
{
"msg_contents": "Past recommendations for a good RAID card (for SCSI) have been the LSI \nMegaRAID 2x. This unit comes with 128MB of RAM on-board. Has anyone \nfound by increasing the on-board RAM, did Postgresql performed better?\n\nThanks.\n\nSteve Poe\n",
"msg_date": "Fri, 13 May 2005 17:16:01 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Performance via the LSI MegaRAID 2x Card"
},
{
"msg_contents": "Steve,\n\n> Past recommendations for a good RAID card (for SCSI) have been the LSI\n> MegaRAID 2x. This unit comes with 128MB of RAM on-board. Has anyone\n> found by increasing the on-board RAM, did Postgresql performed better?\n\nMy informal tests showed no difference between 64MB and 256MB.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 13 May 2005 17:28:46 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance via the LSI MegaRAID 2x Card"
},
{
"msg_contents": "I'm sure there's some corner case where more memory helps. If you \nconsider that 1GB of RAM is about $100, I'd max out memory on the \ncontroller just for the hell of it.\n\n\nJosh Berkus wrote:\n\n> Steve,\n> \n> \n>>Past recommendations for a good RAID card (for SCSI) have been the LSI\n>>MegaRAID 2x. This unit comes with 128MB of RAM on-board. Has anyone\n>>found by increasing the on-board RAM, did Postgresql performed better?\n> \n> \n> My informal tests showed no difference between 64MB and 256MB.\n> \n",
"msg_date": "Sun, 15 May 2005 14:15:28 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance via the LSI MegaRAID 2x Card"
},
{
"msg_contents": "William,\n\n> I'm sure there's some corner case where more memory helps.\n\nQUite possibly. These were not scientific tests.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 15 May 2005 20:27:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance via the LSI MegaRAID 2x Card"
}
] |
[
{
"msg_contents": "Hello �\n\nIm using source postgresql 8.0.3 under FreeBSD and already install, the database is running well. \n\n \n\nI want to install a pgbench, but I can�t install it, coz an error occur.\n\nI try to �make all� in directory ~/src/interfaces/lipq/ \n\nThe messages are :\n\n. . . . .\n\n. . . . .\n\n\"../../../src/Makefile.global\", line 546: Need an operator\n\n\"../../../src/Makefile.global\", line 553: Missing dependency operator\n\n\"../../../src/Makefile.global\", line 554: Missing dependency operator\n\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 29: Need an operator\n\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 31: Need an operator\n\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 33: Need an operator\n\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 38: Need an operator\n\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 40: Need an operator\n\nError expanding embedded variable.\n\n \n\nAny body can help me J\n\nSorry Im newbie\n\nThanks\n\n \n\n\n\n\n---------------------------------\nFind local movie times and trailers on Yahoo! Movies.\n\n\nHello �\nIm using source postgresql 8.0.3 under FreeBSD and already install, the database is running well. \n \nI want to install a pgbench, but I can�t install it, coz an error occur.\nI try to �make all� in directory ~/src/interfaces/lipq/ \nThe messages are :\n. . . . .\n. . . . .\n\"../../../src/Makefile.global\", line 546: Need an operator\n\"../../../src/Makefile.global\", line 553: Missing dependency operator\n\"../../../src/Makefile.global\", line 554: Missing dependency operator\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 29: Need an operator\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 31: Need an operator\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 33: Need an operator\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 38: Need an operator\n\"/home/postgres/postgresql-8.0.3/src/nls-global.mk\", line 40: Need an operator\nError expanding embedded variable.\n \nAny body can help me J\nSorry Im newbie\nThanks\n \nFind local movie times and trailers on Yahoo! Movies.",
"msg_date": "Sun, 15 May 2005 19:37:07 +1000 (EST)",
"msg_from": "Andre Nas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error when try installing pgbench ?"
},
{
"msg_contents": "On May 15, 2005, at 5:37 AM, Andre Nas wrote:\n\n> Hello �\n> Im using source postgresql 8.0.3 under FreeBSD and already \n> install, the database is running well.\n>\n\nNot that this has much to do with performance, but the problem is \nthat you need to use gmake to build postgres stuff. The BSD make \ndoesn't know about the GNU extensions/changes to Makefile syntax.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n\nOn May 15, 2005, at 5:37 AM, Andre Nas wrote:Hello …Im using source postgresql 8.0.3 under FreeBSD and already install, the database is running well. Not that this has much to do with performance, but the problem is that you need to use gmake to build postgres stuff. The BSD make doesn't know about the GNU extensions/changes to Makefile syntax. Vivek Khera, Ph.D. +1-301-869-4449 x806",
"msg_date": "Wed, 18 May 2005 15:08:17 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [pgsql-benchmarks] Error when try installing pgbench ?"
},
{
"msg_contents": "you need to use gmake perhaps?\n",
"msg_date": "Sat, 21 May 2005 09:42:00 +0200",
"msg_from": "Tomaz Borstnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error when try installing pgbench ?"
}
] |
[
{
"msg_contents": "I was recently running a test with multiple client shell processes\nrunning psql commands (inserts) when all the client processes appeared\nto hang simultaneously. I assumed that I had an application deadlock\nsomewhere, but after a few seconds - less than a minute, but certainly\nnoticeable - all the clients picked up again and went on their way.\n \nIn the database log at that time there was a \"recycling transaction log\"\nmessage which seems to correspond to the time when the clients were\npaused, though I don't have it concretely correlated. \n \nI've seen these messages in the log before, and am aware of the need to\nincrease checkpoint_segments, but I wasn't aware that recycling a\ntransaction log could be that damaging to performance. There may have\nbeen some local hiccup in this case, but I'm wondering if recycling is\nknown to be a big hit in general, and if I should strive to tune so that\nit never happens (if that's possible)?\n \nThanks.\n\n- DAP\n------------------------------------------------------------------------\n----------\nDavid Parker Tazz Networks (401) 709-5130\n \n\n\n \n\n\n\n\n\nI was recently \nrunning a test with multiple client shell processes running psql commands \n(inserts) when all the client processes appeared to hang simultaneously. I \nassumed that I had an application deadlock somewhere, but after a few seconds - \nless than a minute, but certainly noticeable - all the clients picked up again \nand went on their way.\n \nIn the database log \nat that time there was a \"recycling transaction log\" message which seems to \ncorrespond to the time when the clients were paused, though I don't have it \nconcretely correlated. \n \nI've seen these \nmessages in the log before, and am aware of the need to increase \ncheckpoint_segments, but I wasn't aware that recycling a transaction log could \nbe that damaging to performance. There may have been some local hiccup in this \ncase, but I'm wondering if recycling is known to be a big hit in general, and if \nI should strive to tune so that it never happens (if that's \npossible)?\n \nThanks.\n- \nDAP----------------------------------------------------------------------------------David \nParker Tazz Networks (401) \n709-5130",
"msg_date": "Sun, 15 May 2005 20:22:13 -0400",
"msg_from": "\"David Parker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "checkpoint segments"
},
{
"msg_contents": "David,\n\n> I've seen these messages in the log before, and am aware of the need to\n> increase checkpoint_segments, but I wasn't aware that recycling a\n> transaction log could be that damaging to performance. There may have\n> been some local hiccup in this case, but I'm wondering if recycling is\n> known to be a big hit in general, and if I should strive to tune so that\n> it never happens (if that's possible)?\n\nYes, and yes. Simply allocating more checkpoint segments (which can eat a \nlot of disk space -- requirements are 16mb*(2 * segments +1) ) will prevent \nthis problem.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 15 May 2005 20:26:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint segments"
},
{
"msg_contents": "\"David Parker\" <[email protected]> writes:\n> I was recently running a test with multiple client shell processes\n> running psql commands (inserts) when all the client processes appeared\n> to hang simultaneously. I assumed that I had an application deadlock\n> somewhere, but after a few seconds - less than a minute, but certainly\n> noticeable - all the clients picked up again and went on their way.\n>\n> In the database log at that time there was a \"recycling transaction log\"\n> message which seems to correspond to the time when the clients were\n> paused, though I don't have it concretely correlated.\n\nI think what you saw was the disk being hogged by checkpoint writes.\n\"Recycling transaction log\" is a routine operation, and by itself is a\nreasonably cheap operation, but it's only done as the last step in a\ncheckpoint (in fact, from a technical point of view, it's done after the\ncheckpoint finishes). My guess is that the actual performance hit\noccurred while the checkpoint was pushing out dirty buffers.\n\nWhat you want is to reduce the amount of deferred I/O that has to happen\nwhen a checkpoint occurs. There is not any way to do that before PG\n8.0 (the obvious idea of reducing the interval between checkpoints is\ncounterproductive, IMHO). In 8.0 you can fool around with the bgwriter\nparameters with an eye to \"dribbling out\" writes of dirty pages between\ncheckpoints.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 May 2005 00:34:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint segments "
},
{
"msg_contents": "On Sun, May 15, 2005 at 08:22:13PM -0400, David Parker wrote:\n\n> In the database log at that time there was a \"recycling transaction log\"\n> message which seems to correspond to the time when the clients were\n> paused, though I don't have it concretely correlated. \n\nMaybe what you need is make the bgwriter more aggressive, so that I/O is\nmore evenly spread between checkpoint intervals -- that way, at\ncheckpoint there's less work to do.\n\n> I've seen these messages in the log before, and am aware of the need to\n> increase checkpoint_segments, but I wasn't aware that recycling a\n> transaction log could be that damaging to performance. There may have\n> been some local hiccup in this case, but I'm wondering if recycling is\n> known to be a big hit in general, and if I should strive to tune so that\n> it never happens (if that's possible)?\n\nWell, recycling is actually a *good* thing -- it saves you from having\nto remove WAL segment files and allocate new files for the new logs. So\nwhat you really want doesn't have anything to do with the recycling\nitself, but rather with the simultaneous checkpoint that's going on at\nthe same time.\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\nLicensee shall have no right to use the Licensed Software\nfor productive or commercial use. (Licencia de StarOffice 6.0 beta)\n",
"msg_date": "Mon, 16 May 2005 00:39:20 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint segments"
},
{
"msg_contents": "On Sun, May 15, 2005 at 08:26:02PM -0700, Josh Berkus wrote:\n> David,\n> \n> > I've seen these messages in the log before, and am aware of the need to\n> > increase checkpoint_segments, but I wasn't aware that recycling a\n> > transaction log could be that damaging to performance. There may have\n> > been some local hiccup in this case, but I'm wondering if recycling is\n> > known to be a big hit in general, and if I should strive to tune so that\n> > it never happens (if that's possible)?\n> \n> Yes, and yes. Simply allocating more checkpoint segments (which can eat a \n> lot of disk space -- requirements are 16mb*(2 * segments +1) ) will prevent \n> this problem.\n\nHmm? I disagree -- it will only make things worse when the checkpoint\ndoes occur.\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"Lo esencial es invisible para los ojos\" (A. de Saint Ex�pery)\n",
"msg_date": "Mon, 16 May 2005 00:40:53 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint segments"
},
{
"msg_contents": "Alvaro,\n\n> > Yes, and yes. Simply allocating more checkpoint segments (which can eat\n> > a lot of disk space -- requirements are 16mb*(2 * segments +1) ) will\n> > prevent this problem.\n>\n> Hmm? I disagree -- it will only make things worse when the checkpoint\n> does occur.\n\nUnless you allocate enough logs that you don't need to checkpoint until the \nload is over with. In multiple data tests involving large quantities of \ndata loading, increasing the number of checkpoints and the checkpoint \ninterval has been an overall benefit to overall load speed. It's possible \nthat the checkpoints which do occur are worse, but they're not enough worse \nto counterbalance their infrequency.\n\nI have not yet been able to do a full scalability series on bgwriter.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 16 May 2005 09:17:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint segments"
}
] |
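The disk-space rule of thumb Josh Berkus quotes above (16MB * (2 * checkpoint_segments + 1)) can be checked with a quick calculation; the candidate value of 32 segments below is only an illustration, not a recommendation from the thread, and the bgwriter parameter names in the comment are the ones believed to apply to 8.0.

SHOW checkpoint_segments;   -- default is 3

-- Approximate upper bound on pg_xlog disk usage for a candidate setting
SELECT 32 AS candidate_segments,
       16 * (2 * 32 + 1) AS approx_pg_xlog_mb;   -- 1040 MB

-- On 8.0, dirty pages can also be dribbled out between checkpoints by
-- tuning bgwriter_delay, bgwriter_percent and bgwriter_maxpages in
-- postgresql.conf, as Tom and Alvaro suggest above.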
[
{
"msg_contents": "We're having a performance problem with PostgresQL 8.0.2 running on\nRHEL3 Update 4. There is a frequently updated table logging all our ADSL\ncustomer logins which has 2 related triggers. An INSERT on that table,\n\"calls\", takes about 300ms to execute according to the logs, and the\nprocess takes up to 30% of the server CPU. When removing the triggers it\ndrops to 10-20ms.\n\nI am posting the table structure of all the tables involved, the\ntriggers and the indexes. This also happens when the \"calls\" table is\nempty. The \"currentip\" and \"basicbytes\" tables contain about 8000\nrecords each. The \"newest\" table is always being emptied by a cron\nprocess. I am vacuuming the database daily. I really don't understand\nwhat I am missing here - what else can be optimized or indexed? Is it\nnormal that the INSERT is taking so long? We're running PostgreSQL on a\npretty fast server, so it's not a problem of old/slow hardware either.\n\nAs you can see, this is pretty basic stuff when compared to what others\nare doing, so it shouldn't cause such an issue. Apparently I'm really\nmissing something here... :-)\n\nThank you everyone for your help\n-Manuel\n\n\n\nCREATE TABLE calls\n(\n nasidentifier varchar(16) NOT NULL,\n nasport int4 NOT NULL,\n acctsessionid varchar(10) NOT NULL,\n acctstatustype int2 NOT NULL,\n username varchar(32) NOT NULL,\n acctdelaytime int4,\n acctsessiontime int4,\n framedaddress varchar(16),\n acctterminatecause int2,\n accountid int4,\n serverid int4,\n callerid varchar(15),\n connectinfo varchar(32),\n acctinputoctets int4,\n acctoutputoctets int4,\n ascendfilter varchar(50),\n ascendtelnetprofile varchar(15),\n framedprotocol int2,\n acctauthentic int2,\n ciscoavpair varchar(50),\n userservice int2,\n \"class\" varchar(15),\n nasportdnis varchar(255),\n nasporttype int2,\n cisconasport varchar(50),\n acctinputpackets int4,\n acctoutputpackets int4,\n calldate timestamp\n) \n\nCREATE INDEX i_ip\n ON calls\n USING btree\n (framedaddress);\n\nCREATE INDEX i_username\n ON calls\n USING btree\n (username);\n\n\nCREATE TRIGGER trigger_update_bytes\n AFTER INSERT\n ON calls\n FOR EACH ROW\n EXECUTE PROCEDURE update_basic_bytes();\n\nCREATE OR REPLACE FUNCTION update_basic_bytes()\n RETURNS \"trigger\" AS\n$BODY$\nbegin\n\tif (new.acctstatustype=2) then\n\t\tif exists(select username from basicbytes where\nusername=new.username) then\n\t\t\tupdate basicbytes set\ninbytes=inbytes+new.acctinputoctets,\noutbytes=outbytes+new.acctoutputoctets, lastupdate=new.calldate where\nusername=new.username;\n\t\telse\n\t\t\tinsert into basicbytes\n(username,inbytes,outbytes,lastupdate) values\n(new.username,new.acctinputoctets,new.acctoutputoctets,new.calldate);\n\t\tend if;\n\tend if;\n\treturn null;\nend\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\nCREATE TRIGGER trigger_update_ip\n AFTER INSERT\n ON calls\n FOR EACH ROW\n EXECUTE PROCEDURE update_ip();\n\nCREATE OR REPLACE FUNCTION update_ip()\n RETURNS \"trigger\" AS\n$BODY$\nbegin\n\tdelete from currentip where ip is null;\n\tdelete from currentip where ip=new.framedaddress;\n\tif (new.acctstatustype=1) then\n\t\tdelete from currentip where username=new.username;\n\t\tdelete from newest where username=new.username;\n\t\tinsert into currentip (ip,username) values\n(new.framedaddress,new.username);\n\t\tinsert into newest (ip,username) values\n(new.framedaddress,new.username);\n\tend if;\n\treturn null;\nend;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\nCREATE TABLE basicbytes\n(\n username varchar(32) NOT NULL,\n 
inbytes int8,\n outbytes int8,\n lastupdate timestamp,\n lastreset timestamp\n) \n\nCREATE INDEX i_basic_username\n ON basicbytes\n USING btree\n (username);\n\nCREATE TABLE currentip\n(\n ip varchar(50),\n username varchar(50)\n) \n\nCREATE INDEX i_currentip_username\n ON currentip\n USING btree\n (username);\n\nCREATE TABLE newest\n(\n ip varchar(50),\n username varchar(50)\n) \n\nCREATE INDEX i_newest_username\n ON newest\n USING btree\n (username);\n\n\n",
"msg_date": "Mon, 16 May 2005 15:06:49 +0200",
"msg_from": "\"Manuel Wenger\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger performance problem"
},
{
"msg_contents": "\"Manuel Wenger\" <[email protected]> writes:\n> We're having a performance problem with PostgresQL 8.0.2 running on\n> RHEL3 Update 4. There is a frequently updated table logging all our ADSL\n> customer logins which has 2 related triggers. An INSERT on that table,\n> \"calls\", takes about 300ms to execute according to the logs, and the\n> process takes up to 30% of the server CPU. When removing the triggers it\n> drops to 10-20ms.\n\nYou need to figure out exactly which operation(s) inside the triggers\nis so expensive. You could try removing commands one at a time and\ntiming the modified triggers.\n\nJust on general principles, I'd guess that this might be the problem:\n\n> \tdelete from currentip where ip is null;\n\nSince an IS NULL test isn't indexable by a normal index, this is going\nto cause a full scan of the currentip table every time. I don't really\nunderstand why you need that executed every time anyway ... why is it\nthis trigger's responsibility to clean out null IPs? But if you really\ndo need to make that run quickly, you could create a partial index with\na WHERE clause of \"ip is null\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 May 2005 13:47:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger performance problem "
}
] |
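Against Manuel's posted schema, Tom Lane's partial-index suggestion would look roughly like the first statement below (the index names are made up for illustration). The second index is an extra guess, not something raised in the thread: the trigger also deletes from currentip by ip, and the posted schema only indexes username.

-- Lets "DELETE FROM currentip WHERE ip IS NULL" use an index scan
CREATE INDEX i_currentip_ip_null ON currentip (username) WHERE ip IS NULL;

-- Speculative extra, not from the thread: covers
-- "delete from currentip where ip = new.framedaddress"
CREATE INDEX i_currentip_ip ON currentip (ip);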
[
{
"msg_contents": "\n",
"msg_date": "Tue, 17 May 2005 19:53:39 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Trigger performance problem"
}
] |
[
{
"msg_contents": "On Tue, May 17, 2005 at 06:58:20PM -0400, Wei Weng wrote:\n> This time it worked! But VACUUM FULL requires an exclusive lock on the \n> table which I don't really want to grant. So my question is: why is VACUUM \n> ANALYZE didn't do the job? Is there any setting I can tweak to make a \n> VACUUM without granting a exclusive lock?\n\nYou just didn't vacuum often enough. Plain VACUUM (with ANALYZE or not) only\ndeletes dead rows, it does not reclaim the space used for them (and thus does\nnot compress the remaining ones into fewer pages, so they take less time to\nscan). If you simply VACUUM regularily (try autovacuum from contrib, it will\nprobably be useful) the problem simply will never be as bad as you describe\nhere, and you won't need to VACUUM FULL.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 18 May 2005 00:57:26 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there any other way to do this?"
},
{
"msg_contents": "Hi, I have a small table that has only 23 rows, but I do frequent updates( \nevery second ) on it.\n\nAfter running the updates for a while, the performance of SELECT from that \ntable has deteriated into something like 30 seconds.\n\nSo, natually, I did a VACUUM ANALYZE first. Here is the VERBOSE output.\n\nTest=> VACUUM VERBOSE analyze schedule ;\nINFO: vacuuming \"public.schedule\"\nINFO: index \"schedule_pkey\" now contains 23 row versions in 2519 pages\nDETAIL: 2499 index pages have been deleted, 2499 are currently reusable.\nCPU 0.27s/0.04u sec elapsed 12.49 sec.\nINFO: \"schedule\": found 0 removable, 23 nonremovable row versions in 37638 \npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 974282 unused item pointers.\n0 pages are entirely empty.\nCPU 3.64s/0.48u sec elapsed 76.15 sec.\nINFO: vacuuming \"pg_toast.pg_toast_22460\"\nINFO: index \"pg_toast_22460_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.03 sec.\nINFO: \"pg_toast_22460\": found 0 removable, 0 nonremovable row versions in 0 \npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.01s/0.00u sec elapsed 0.03 sec.\nINFO: analyzing \"public.schedule\"\nINFO: \"schedule\": 37638 pages, 23 rows sampled, 23 estimated total rows\nVACUUM\n\nAnd it didn't help at all. The explain of the query still shows up as:\n\nTest=> explain select id from schedule;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on schedule (cost=0.00..37638.23 rows=23 width=4)\n(1 row)\n\nIt still takes 30 seconds to finish a simple query. ugh.\n\nSo I then tried VACUUM FULL schedule. 
Here is the output:\n\nfazzt=> VACUUM FULL VERBOSE schedule ;\nINFO: vacuuming \"public.schedule\"\nINFO: \"schedule\": found 0 removable, 23 nonremovable row versions in 37638 \npages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 253 to 418 bytes long.\nThere were 974282 unused item pointers.\nTotal free space (including removable row versions) is 303672968 bytes.\n37629 pages are or will become empty, including 0 at the end of the table.\n37638 pages containing 303672968 free bytes are potential move destinations.\nCPU 3.08s/0.50u sec elapsed 28.64 sec.\nINFO: index \"schedule_pkey\" now contains 23 row versions in 2182 pages\nDETAIL: 0 index row versions were removed.\n2162 index pages have been deleted, 2162 are currently reusable.\nCPU 0.28s/0.02u sec elapsed 10.90 sec.\nINFO: \"schedule\": moved 13 row versions, truncated 37638 to 1 pages\nDETAIL: CPU 10.83s/10.96u sec elapsed 370.42 sec.\nINFO: index \"schedule_pkey\" now contains 23 row versions in 2182 pages\nDETAIL: 13 index row versions were removed.\n2162 index pages have been deleted, 2162 are currently reusable.\nCPU 0.20s/0.05u sec elapsed 10.33 sec.\nINFO: vacuuming \"pg_toast.pg_toast_22460\"\nINFO: \"pg_toast_22460\": found 0 removable, 0 nonremovable row versions in 0 \npages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_22460_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\n\nThis time it worked! But VACUUM FULL requires an exclusive lock on the table \nwhich I don't really want to grant. So my question is: why is VACUUM ANALYZE \ndidn't do the job? Is there any setting I can tweak to make a VACUUM without \ngranting a exclusive lock?\n\nThanks!\n\n\nWei\n\n\n",
"msg_date": "Tue, 17 May 2005 18:58:20 -0400",
"msg_from": "Wei Weng <[email protected]>",
"msg_from_op": false,
"msg_subject": "Is there any other way to do this?"
},
{
"msg_contents": "> This time it worked! But VACUUM FULL requires an exclusive lock on the \n> table which I don't really want to grant. So my question is: why is \n> VACUUM ANALYZE didn't do the job? Is there any setting I can tweak to \n> make a VACUUM without granting a exclusive lock?\n\nYou need to run normal vacuum analyze every few minutes or so, to stop \nit growing. I suggest pg_autovacuum.\n\nChris\n",
"msg_date": "Wed, 18 May 2005 10:12:35 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there any other way to do this?"
}
] |
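The fix Steinar and Christopher describe above amounts to vacuuming Wei's small, heavily-updated table often enough that it never bloats into tens of thousands of pages. The once-a-minute cron line below is an arbitrary illustration (pg_autovacuum from contrib picks the timing on its own), and the database name is only a guess taken from the Test=> prompt in the original post.

-- Plain VACUUM takes no exclusive lock, unlike VACUUM FULL
VACUUM ANALYZE schedule;

-- e.g. from cron, interval chosen purely for illustration:
-- * * * * *  psql -d Test -c 'VACUUM ANALYZE schedule;'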
[
{
"msg_contents": "I've been doing some work to try and identify the actual costs\nassociated with an index scan with some limited sucess. What's been run\nso far can be seen at http://stats.distributed.net/~decibel. But there's\na couple problems. First, I can't use the box exclusively for this\ntesting, which results in some result inconsistencies. Second, I've been\nusing a dataset that I can't make public, which means no one else can\nrun these tests on different hardware.\n\nSo what I think would be useful is some way to generate a known dataset,\nand then be able to run tests against it on different machines. In the\ncase of testing index scans, we need to be able to vary correlation,\nwhich so far I've been doing by ordering by different columns. I suspect\nit will also be important to test with different tuple sizes. There's\nalso the question of whether or not the cache should be flushed for each\nrun or not.\n\nDoes this sound like a good way to determine actual costs for index\nscans (and hopefully other access methods in the future)? If so, what\nwould be a good way to implement this?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 17 May 2005 19:22:36 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning planner cost estimates"
},
{
"msg_contents": "Jim,\n\n> I've been doing some work to try and identify the actual costs\n> associated with an index scan with some limited sucess. What's been run\n> so far can be seen at http://stats.distributed.net/~decibel. But there's\n> a couple problems. First, I can't use the box exclusively for this\n> testing, which results in some result inconsistencies.\n\nI can get you access to boxes. Chat on IRC?\n\n> Second, I've been \n> using a dataset that I can't make public, which means no one else can\n> run these tests on different hardware.\n\nThen use one of the DBT databases.\n\n> In the\n> case of testing index scans, we need to be able to vary correlation,\n> which so far I've been doing by ordering by different columns. I suspect\n> it will also be important to test with different tuple sizes. There's\n> also the question of whether or not the cache should be flushed for each\n> run or not.\n>\n> Does this sound like a good way to determine actual costs for index\n> scans (and hopefully other access methods in the future)? If so, what\n> would be a good way to implement this?\n\nWell, the problem is that what we need to index scans is a formula, rather \nthan a graph. The usefulness of benchmarking index scan cost is so that we \ncan test our formula for accuracy and precision. However, such a formula \n*does* need to take into account concurrent activity, updates, etc ... that \nis, it needs to approximately estimate the relative cost on a live database, \nnot a test one.\n\nThis is also going to be a moving target because Tom's in-memory-bitmapping \nchanges relative cost equations.\n\nI think a first step would be, in fact, to develop a tool that allows us to \nput EXPLAIN ANALYZE results in a database table. Without that, there is no \npossibility of statistical-scale analysis.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 19 May 2005 09:31:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I think a first step would be, in fact, to develop a tool that allows us to \n> put EXPLAIN ANALYZE results in a database table. Without that, there is no \n> possibility of statistical-scale analysis.\n\nAFAIK you can do that today using, eg, plpgsql:\n\n\tfor rec in explain analyze ... loop\n\t\tinsert into table values(rec.\"QUERY PLAN\");\n\tend loop;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 May 2005 13:16:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates "
},
{
"msg_contents": "Tom,\n\n> \tfor rec in explain analyze ... loop\n> \t\tinsert into table values(rec.\"QUERY PLAN\");\n> \tend loop;\n\nI need to go further than that and parse the results as well. And preserve \nrelationships and nesting levels. \n\nHmmmm ... what's the indenting formula for nesting levels?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 19 May 2005 10:21:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates"
},
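One way to flesh out Tom Lane's loop above into something reusable is sketched below; the table, sequence and function names are invented, it relies on 8.0's dollar quoting, and it only captures the raw EXPLAIN ANALYZE lines — the parsing and nesting-level work Josh mentions would still have to happen afterwards. Note that EXPLAIN ANALYZE actually executes the query, with the overhead discussed later in the thread.

CREATE SEQUENCE plan_run_seq;

-- One row per line of EXPLAIN ANALYZE output
CREATE TABLE plan_lines (
    run_id    integer,
    line_no   integer,
    plan_line text
);

CREATE OR REPLACE FUNCTION log_explain(text) RETURNS integer AS $$
DECLARE
    run integer := nextval('plan_run_seq');
    n   integer := 0;
    rec record;
BEGIN
    -- EXPLAIN ANALYZE returns a single text column named "QUERY PLAN"
    FOR rec IN EXECUTE 'EXPLAIN ANALYZE ' || $1 LOOP
        n := n + 1;
        INSERT INTO plan_lines VALUES (run, n, rec."QUERY PLAN");
    END LOOP;
    RETURN run;
END;
$$ LANGUAGE 'plpgsql';

-- usage, with any query text:
-- SELECT log_explain('SELECT count(*) FROM calls');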
{
"msg_contents": "On Thu, May 19, 2005 at 09:31:47AM -0700, Josh Berkus wrote:\n> > In the\n> > case of testing index scans, we need to be able to vary correlation,\n> > which so far I've been doing by ordering by different columns. I suspect\n> > it will also be important to test with different tuple sizes. There's\n> > also the question of whether or not the cache should be flushed for each\n> > run or not.\n> >\n> > Does this sound like a good way to determine actual costs for index\n> > scans (and hopefully other access methods in the future)? If so, what\n> > would be a good way to implement this?\n> \n> Well, the problem is that what we need to index scans is a formula, rather \n> than a graph. The usefulness of benchmarking index scan cost is so that we \n\nTrue, but having a graphical representation of how different input\nvariables (such as correlation) affect runtime is a good way to derive\nsuch a formula, or at least point you in the right direction.\n\n> can test our formula for accuracy and precision. However, such a formula \n> *does* need to take into account concurrent activity, updates, etc ... that \n> is, it needs to approximately estimate the relative cost on a live database, \n> not a test one.\n\nWell, that raises an interesting issue, because AFAIK none of the cost\nestimate functions currently do that. Heck, AFAIK even the piggyback seqscan\ncode doesn't take other seqscans into account.\n\nAnother issue is: what state should the buffers/disk cache be in? In the\nthread that kicked all this off Tom noted that my results were skewed\nbecause of caching, so I changed my tests to flush the disk cache as\neffectively as I could (by running a program that would consume enough\navailable memory to just start the box swapping), but I don't think\nthat's necessarily realistic. Though at least it should preclude the\nneed to run tests multiple times on an otherwise idle box in order to\n'pre-seed' the cache (not that that's any more realistic). If you don't\nuse one of these techniques you end up with results that depend on what\ntest was run before the current one...\n\n> This is also going to be a moving target because Tom's in-memory-bitmapping \n> changes relative cost equations.\n\nI thought those all had seperate costing functions...? In any case, if\nwe have a cost estimation tool it will make it much easier to derive\ncost estimation functions.\n\n> I think a first step would be, in fact, to develop a tool that allows us to \n> put EXPLAIN ANALYZE results in a database table. Without that, there is no \n> possibility of statistical-scale analysis.\n\nRather than trying to parse all possible output, ISTM it would be much\nbetter if there was a way to access the info directly. Would it be\ndifficult to have an option that produces output that is a set of\ndifferent fields? I'm thinking something like:\n\nLevel (basically how far something's indented)\nParent node (what node a child node is feeding)\nnode_id (some kind of identifier for each step)\noperation\n(estimate|actual)_(startup|total|rows|width|loops)\nother (something to hold index condition, filter, etc)\n\nBut ultimately, I'm not sure if this is really required or not, because\nI don't see that we need to use explain when running queries. In fact,\nit's possibly desireable that we don't, because of the overhead it\nincurs. We would want to log an explain (maybe analyze) just to make\nsure we knew what the optimizer was doing, but I think we shouldn't need\nthe info to produce cost estimates.\n-- \nJim C. 
Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 20 May 2005 15:20:17 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning planner cost estimates"
},
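Jim's proposed field list above maps naturally onto a table something like the one below; the name and column types are guesses, since nothing in the thread pins them down.

-- One row per plan node, following Jim's proposed fields
CREATE TABLE plan_nodes (
    run_id      integer,
    node_id     integer,
    parent_node integer,   -- the node this one feeds
    level       integer,   -- indentation depth in the plan text
    operation   text,      -- e.g. 'Seq Scan', 'Index Scan'
    est_startup float8,
    est_total   float8,
    est_rows    float8,
    est_width   integer,
    act_startup float8,
    act_total   float8,
    act_rows    float8,
    act_loops   integer,
    other       text       -- index cond, filter, etc.
);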
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Thu, May 19, 2005 at 09:31:47AM -0700, Josh Berkus wrote:\n>> can test our formula for accuracy and precision. However, such a formula \n>> *does* need to take into account concurrent activity, updates, etc ... that \n>> is, it needs to approximately estimate the relative cost on a live database,\n>> not a test one.\n\n> Well, that raises an interesting issue, because AFAIK none of the cost\n> estimate functions currently do that.\n\nI'm unconvinced that it'd be a good idea, either. People already\ncomplain that the planner's choices change when they ANALYZE; if the\ncurrent load factor or something like that were to be taken into account\nthen you'd *really* have a problem with irreproducible behavior.\n\nIt might make sense to have something a bit more static, perhaps a GUC\nvariable that says \"plan on the assumption that there's X amount of\nconcurrent activity\". I'm not sure what scale to measure X on, nor\nexactly how this would factor into the estimates anyway --- but at least\nthis approach would maintain reproducibility of behavior.\n\n> Another issue is: what state should the buffers/disk cache be in?\n\nThe current cost models are all based on the assumption that every query\nstarts from ground zero: nothing in cache. Which is pretty bogus in\nmost real-world scenarios. We need to think about ways to tune that\nassumption, too. Maybe this is actually the same discussion, because\ncertainly one of the main impacts of a concurrent environment is on what\nyou can expect to find in cache.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 May 2005 16:47:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates "
},
{
"msg_contents": "Jim,\n\n> Well, that raises an interesting issue, because AFAIK none of the cost\n> estimate functions currently do that. Heck, AFAIK even the piggyback\n> seqscan code doesn't take other seqscans into account.\n\nSure. But you're striving for greater accuracy, no?\n\nActually, all that's really needed in the way of concurrent activity is a \ncalculated factor that lets us know how likely a particular object is to be \ncached, either in the fs cache or the pg cache (with different factors for \neach presumably) based on history. Right now, that's based on \nestimated_cache_size, which is rather innacurate: a table which is queried \nonce a month has the exact same cost factors as one which is queried every \n2.1 seconds. This would mean an extra column in pg_stats I suppose.\n\n> But ultimately, I'm not sure if this is really required or not, because\n> I don't see that we need to use explain when running queries. In fact,\n> it's possibly desireable that we don't, because of the overhead it\n> incurs. We would want to log an explain (maybe analyze) just to make\n> sure we knew what the optimizer was doing, but I think we shouldn't need\n> the info to produce cost estimates.\n\nWell, the problem is that you need to know how much time the index scan took \nvs. other query steps. I don't see a way to do this other than an anayze.\n\n-- \n__Aglio Database Solutions_______________\nJosh Berkus\t\t Consultant\[email protected]\t www.agliodbs.com\nPh: 415-752-2500\tFax: 415-752-2387\n2166 Hayes Suite 200\tSan Francisco, CA\n",
"msg_date": "Fri, 20 May 2005 15:23:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates"
},
{
"msg_contents": "On Fri, May 20, 2005 at 04:47:38PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Thu, May 19, 2005 at 09:31:47AM -0700, Josh Berkus wrote:\n> >> can test our formula for accuracy and precision. However, such a formula \n> >> *does* need to take into account concurrent activity, updates, etc ... that \n> >> is, it needs to approximately estimate the relative cost on a live database,\n> >> not a test one.\n> \n> > Well, that raises an interesting issue, because AFAIK none of the cost\n> > estimate functions currently do that.\n> \n> I'm unconvinced that it'd be a good idea, either. People already\n> complain that the planner's choices change when they ANALYZE; if the\n> current load factor or something like that were to be taken into account\n> then you'd *really* have a problem with irreproducible behavior.\n> \n> It might make sense to have something a bit more static, perhaps a GUC\n> variable that says \"plan on the assumption that there's X amount of\n> concurrent activity\". I'm not sure what scale to measure X on, nor\n> exactly how this would factor into the estimates anyway --- but at least\n> this approach would maintain reproducibility of behavior.\n\nOr allowing the load of the machine to affect query plans dynamically is\nsomething that could be disabled by default, so presumably if you turn\nit on it means you know what you're doing.\n\nOf course this is all academic until we have a means to actually measure\nhow much system load affects the different things we estimate cost for,\nand I don't see that happening until we have a system for measuring how\nchanging different input variables affects costs.\n\n> > Another issue is: what state should the buffers/disk cache be in?\n> \n> The current cost models are all based on the assumption that every query\n> starts from ground zero: nothing in cache. Which is pretty bogus in\n> most real-world scenarios. We need to think about ways to tune that\n> assumption, too. Maybe this is actually the same discussion, because\n> certainly one of the main impacts of a concurrent environment is on what\n> you can expect to find in cache.\n\nWell, load doesn't directly effect cache efficiency; it's really a\nquestion of the ratios of how often different things in the database are\naccessed. If you wanted to get a crude idea of how likely pages from\nsome relation are to be in cache, you could take periodic snapshots of\nio stats and see what percentage of the IO done in a given time period\nwas on the relation you're interested in as compared to the rest of the\ndatabase. But I think this is probably still a 2nd order effect.\n\nIn terms of a testing system, here's what I'm thinking of. For each cost\nestimate, there will be a number of input variables we want to vary, and\nthen check to see how changes in them effect run time. Using index scan\nas a simple example, 1st order variables will likely be index and table\nsize (especially in relation to cache size), and correlation. So, we\nneed some kind of a test harness that can vary these variables\n(prefferably one at a time), and run a test case after each change. It\nwould then need to store the timing info, possibly along with other\ninformation (such as explain output). If I'm the one to write this it'll\nend up in perl, since that's the only language I know that would be able\nto accomplish this. 
DBT seems to be a reasonable test database to do\nthis testing with, especially since it would provide a ready means to\nprovide external load.\n\nDoes this sound like a reasonable approach? Also, how important do\npeople think it is to use explain analyze output instead of just doing\nSELECT count(*) FROM (query you actually want to test)? (The select\ncount(*) wrapper is just a means to throw away the results since we\ndon't really want to worry about data transfer times, etc). The testing\nI've done (http://stats.distributed.net/~decibel/base.log) shows explain\nanalyze to be almost 5x slower than select count(*), so it seems a big\ngain if we can avoid that.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 20 May 2005 17:40:55 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning planner cost estimates"
},
{
"msg_contents": "On Fri, May 20, 2005 at 03:23:16PM -0700, Josh Berkus wrote:\n> Jim,\n> \n> > Well, that raises an interesting issue, because AFAIK none of the cost\n> > estimate functions currently do that. Heck, AFAIK even the piggyback\n> > seqscan code doesn't take other seqscans into account.\n> \n> Sure. But you're striving for greater accuracy, no?\n> \n> Actually, all that's really needed in the way of concurrent activity is a \n> calculated factor that lets us know how likely a particular object is to be \n> cached, either in the fs cache or the pg cache (with different factors for \n> each presumably) based on history. Right now, that's based on \n> estimated_cache_size, which is rather innacurate: a table which is queried \n> once a month has the exact same cost factors as one which is queried every \n> 2.1 seconds. This would mean an extra column in pg_stats I suppose.\n\nTrue, though that's a somewhat different issue that what the load on the\nbox is (see the reply I just posted). Load on the box (particuarly IO\nload) will also play a factor for things; for example, it probably means\nseqscans end up costing a lot more than random IO does, because the disk\nheads are being sent all over the place anyway.\n\n> > But ultimately, I'm not sure if this is really required or not, because\n> > I don't see that we need to use explain when running queries. In fact,\n> > it's possibly desireable that we don't, because of the overhead it\n> > incurs. We would want to log an explain (maybe analyze) just to make\n> > sure we knew what the optimizer was doing, but I think we shouldn't need\n> > the info to produce cost estimates.\n> \n> Well, the problem is that you need to know how much time the index scan took \n> vs. other query steps. I don't see a way to do this other than an anayze.\n\nTrue, but that can be done by a seperate seqscan step. I would argue\nthat doing it that way is actually more accurate, because the overhead\nof explain analyze is huge and tends to swamp other factors out. As I\nmentioned in my other email, my tests show explain analyze select * from\ntable is 5x slower than select count(*) from table.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 20 May 2005 17:49:17 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning planner cost estimates"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Does this sound like a reasonable approach? Also, how important do\n> people think it is to use explain analyze output instead of just doing\n> SELECT count(*) FROM (query you actually want to test)? (The select\n> count(*) wrapper is just a means to throw away the results since we\n> don't really want to worry about data transfer times, etc). The testing\n> I've done (http://stats.distributed.net/~decibel/base.log) shows explain\n> analyze to be almost 5x slower than select count(*), so it seems a big\n> gain if we can avoid that.\n\nI'd go with the select count(*) --- I can't imagine that we will be\ntrying to model the behavior of anything so complex that we really need\nexplain analyze output. (On the other hand, recording explain output is\na good idea to make sure you are testing what you think you are testing.)\n\nActually, it might be worth using \"select count(null)\", which should\navoid the calls to int8inc. I think this doesn't matter so much in CVS\ntip, but certainly in existing releases the palloc overhead involved is\nnoticeable.\n\nBTW, 5x is an awful lot; I've not noticed overheads more than about 2x.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 May 2005 19:06:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates "
},
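To make the timing technique above concrete, here is a minimal sketch of the three ways of clocking a test query from psql (the table name is a stand-in, not part of any actual harness, and \timing needs a reasonably recent psql):

\timing

-- full instrumentation: per-node actual times, but adds measurable overhead
EXPLAIN ANALYZE SELECT * FROM test_table WHERE id < 100000;

-- wrapper form: discards the result set, so mostly the scan itself is timed
SELECT count(*) FROM (SELECT * FROM test_table WHERE id < 100000) AS sub;

-- variant suggested above: count(null) never increments the counter,
-- avoiding the per-row int8inc/palloc cost on older releases
SELECT count(null) FROM (SELECT * FROM test_table WHERE id < 100000) AS sub;

Recording the plain EXPLAIN output alongside the timings, as suggested above, confirms that the plan being measured did not change between runs.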
{
"msg_contents": "On Fri, 2005-05-20 at 15:23 -0700, Josh Berkus wrote:\n> > Well, that raises an interesting issue, because AFAIK none of the cost\n> > estimate functions currently do that. Heck, AFAIK even the piggyback\n> > seqscan code doesn't take other seqscans into account.\n> \n> Sure. But you're striving for greater accuracy, no?\n> \n> Actually, all that's really needed in the way of concurrent activity is a \n> calculated factor that lets us know how likely a particular object is to be \n> cached, either in the fs cache or the pg cache (with different factors for \n> each presumably) based on history. Right now, that's based on \n> estimated_cache_size, which is rather innacurate: a table which is queried \n> once a month has the exact same cost factors as one which is queried every \n> 2.1 seconds. This would mean an extra column in pg_stats I suppose.\n\nHmmm...not sure that would be a good thing.\n\neffective_cache_size isn't supposed to be set according to how much of a\ntable is in cache when the query starts. The setting is supposed to\nreflect how much cache is *available* for the current index scan, when\nperforming an index scan on a table that is not in clustered sequence.\nThe more out of sequence the table is, the more memory is required to\navoid doing any repeated I/Os during the scan. Of course, if there are\nmany users, the available cache may be much reduced.\n\nBest regards, Simon Riggs\n\n\n",
"msg_date": "Mon, 23 May 2005 21:46:21 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning planner cost estimates"
}
] |
[
{
"msg_contents": "Dear Gurus,\n\nI don't think it's a bug, I just don't understand what's behind this. If \nthere's a paper or something on this, please point me there.\n\nVersion: 7.4.6\nLocale: hu_HU (in case that matters)\nDump: see below sig.\n\nAbstract:\nCreate a table with (at least) two fields, say i and o.\nCreate three indexes on (i), (o), (i,o)\nInsert enough rows to test.\nTry to replace min/max aggregates with indexable queries such as:\n\nSELECT o FROM t WHERE i = 1 ORDER BY o LIMIT 1;\n\nProblem #1: This tends to use one of the single-column indexes (depending on \nthe frequency of the indexed element), not the two-column index. Also, I'm \nnot perfectly sure but maybe the planner is right. Why?\n\nProblem #2: If I drop the problematic 1-col index, it uses the 2-col index, \nbut sorts after that. (and probably that's why the planner was right in #1) Why?\n\nBelow is an example that isn't perfect; also, IRL I use a second field of \ntype date.\n\nProblem #3: It seems that an opposite index (o, i) works differently but \nstill not always. Why?\n\nIn case it matters, I may be able to reproduce the original problem with \noriginal data.\n\nTIA,\n--\nG.\n\n# CREATE TABLE t(i int, o int);\nCREATE TABLE\n# CREATE INDEX t_i on t (i);\nCREATE INDEX\n# CREATE INDEX t_o on t (o);\nCREATE INDEX\n# CREATE INDEX t_io on t (i, o);\nCREATE INDEX\n# INSERT INTO t SELECT 1, p.oid::int FROM pg_proc p WHERE Pronamespace=11;\nINSERT 0 1651\n# explain analyze select * from t where i=1 order by o limit 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.37 rows=1 width=8) (actual time=0.028..0.029 rows=1 \nloops=1)\n -> Index Scan using t_o on t (cost=0.00..20.20 rows=6 width=8) (actual \ntime=0.025..0.025 rows=1 loops=1)\n Filter: (i = 1)\n Total runtime: 0.082 ms\n(4 rows)\n\n# drop index t_o;\nDROP INDEX\n# explain analyze select * from t where i=1 order by o limit 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Limit (cost=6.14..6.14 rows=1 width=8) (actual time=4.624..4.625 rows=1 \nloops=1)\n -> Sort (cost=6.14..6.15 rows=6 width=8) (actual time=4.619..4.619 \nrows=1 loops=1)\n Sort Key: o\n -> Index Scan using t_io on t (cost=0.00..6.11 rows=6 width=8) \n(actual time=0.026..2.605 rows=1651 loops=1)\n Index Cond: (i = 1)\n Total runtime: 4.768 ms\n(6 rows)\n\n[local]:tir=#\n\n",
"msg_date": "Wed, 18 May 2005 16:30:17 +0200",
"msg_from": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "where+orderby+limit not (always) using appropriate index?"
},
{
"msg_contents": "\n\n> SELECT o FROM t WHERE i = 1 ORDER BY o LIMIT 1;\n\n\tuse :\n\tORDER BY i, o\n\n\tIf you have a multicol index and want to order on it, you should help the \nplanner by ORDERing BY all of the columns in the index...\n\tIt bit me a few times ;)\n",
"msg_date": "Wed, 18 May 2005 17:06:51 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where+orderby+limit not (always) using appropriate index?"
},
{
"msg_contents": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]> writes:\n> Create a table with (at least) two fields, say i and o.\n> Create three indexes on (i), (o), (i,o)\n> Insert enough rows to test.\n> Try to replace min/max aggregates with indexable queries such as:\n\n> SELECT o FROM t WHERE i = 1 ORDER BY o LIMIT 1;\n\n> Problem #1: This tends to use one of the single-column indexes (depending on \n> the frequency of the indexed element), not the two-column index. Also, I'm \n> not perfectly sure but maybe the planner is right. Why?\n\nTo get the planner to use the double-column index, you have to use an\nORDER BY that matches the index, eg\n\n\tSELECT o FROM t WHERE i = 1 ORDER BY i,o LIMIT 1;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 May 2005 11:14:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where+orderby+limit not (always) using appropriate index? "
}
] |
[
{
"msg_contents": "What platform is this?\r\n \r\nWe had similar issue (PG 7.4.7). Raising number of checkpoint segments to 125, seperating the WAL to a different LUN helped, but it's still not completely gone.\r\n \r\nAs far as disk I/O is concerned for flushing the buffers out, I am not ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0 Update3 being a problem.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Donald Courtney [mailto:[email protected]] \r\n\tSent: Thu 5/19/2005 12:54 PM \r\n\tTo: Tom Lane \r\n\tCc: [email protected] \r\n\tSubject: Re: [PERFORM] PostgreSQL strugling during high load\r\n\t\r\n\t\r\n\r\n\tTom \r\n\r\n\tThanks for the post - I think I am getting this problem for \r\n\ta synthetic workload at high connection loads. The whole \r\n\tsystem seems to stop. \r\n\r\n\tCan you give some examples on what to try out in the .conf file? \r\n\r\n\tI tried \r\n\tbgwriter_all_percent = 30, 10, and 3 \r\n\r\n\tWhich I understand to mean 30%, 10% and 3% of the dirty pages should be \r\n\twritten out *between* checkpoints. \r\n\r\n\tI didn't see any change in effect. \r\n\r\n\t/regards \r\n\tDon C. \r\n\r\n\tTom Lane wrote: \r\n\r\n\t>\"Mindaugas Riauba\" <[email protected]> writes: \r\n\t> \r\n\t> \r\n\t>> It looks like that not only vacuum causes our problems. vacuum_cost \r\n\t>>seems to lower vacuum impact but we are still noticing slow queries \"storm\". \r\n\t>>We are logging queries that takes >2000ms to process. \r\n\t>> And there is quiet periods and then suddenly 30+ slow queries appears in \r\n\t>>log within the same second. What else could cause such behaviour? \r\n\t>> \r\n\t>> \r\n\t> \r\n\t>Checkpoints? You should ensure that the checkpoint settings are such \r\n\t>that checkpoints don't happen too often (certainly not oftener than \r\n\t>every five minutes or so), and make sure the bgwriter is configured \r\n\t>to dribble out dirty pages at a reasonable rate, so that the next \r\n\t>checkpoint doesn't have a whole load of stuff to write. \r\n\t> \r\n\t> regards, tom lane \r\n\t> \r\n\t>---------------------------(end of broadcast)--------------------------- \r\n\t>TIP 1: subscribe and unsubscribe commands go to [email protected] \r\n\t> \r\n\t> \r\n\r\n\r\n\t---------------------------(end of broadcast)--------------------------- \r\n\tTIP 7: don't forget to increase your free space map settings \r\n\r\n",
"msg_date": "Thu, 19 May 2005 14:12:29 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "Anjan Dave wrote:\n\n>What platform is this?\n> \n> \n>\nIts a DELL RH 4 with the xlog on a seperate external mounted file system.\nThe data directory is on a external mounted file system as well.\n\n>We had similar issue (PG 7.4.7). Raising number of checkpoint segments to 125, seperating the WAL to a different LUN helped, but it's still not completely gone.\n> \n> \n>\nI'll try raising the number. I guess the bg* config variables don't do \nmuch?\n\nthanks\n\n>As far as disk I/O is concerned for flushing the buffers out, I am not ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0 Update3 being a problem.\n> \n>Thanks,\n>Anjan\n>\n>\t-----Original Message----- \n>\tFrom: Donald Courtney [mailto:[email protected]] \n>\tSent: Thu 5/19/2005 12:54 PM \n>\tTo: Tom Lane \n>\tCc: [email protected] \n>\tSubject: Re: [PERFORM] PostgreSQL strugling during high load\n>\t\n>\t\n>\n>\tTom \n>\n>\tThanks for the post - I think I am getting this problem for \n>\ta synthetic workload at high connection loads. The whole \n>\tsystem seems to stop. \n>\n>\tCan you give some examples on what to try out in the .conf file? \n>\n>\tI tried \n>\tbgwriter_all_percent = 30, 10, and 3 \n>\n>\tWhich I understand to mean 30%, 10% and 3% of the dirty pages should be \n>\twritten out *between* checkpoints. \n>\n>\tI didn't see any change in effect. \n>\n>\t/regards \n>\tDon C. \n>\n>\tTom Lane wrote: \n>\n>\t>\"Mindaugas Riauba\" <[email protected]> writes: \n>\t> \n>\t> \n>\t>> It looks like that not only vacuum causes our problems. vacuum_cost \n>\t>>seems to lower vacuum impact but we are still noticing slow queries \"storm\". \n>\t>>We are logging queries that takes >2000ms to process. \n>\t>> And there is quiet periods and then suddenly 30+ slow queries appears in \n>\t>>log within the same second. What else could cause such behaviour? \n>\t>> \n>\t>> \n>\t> \n>\t>Checkpoints? You should ensure that the checkpoint settings are such \n>\t>that checkpoints don't happen too often (certainly not oftener than \n>\t>every five minutes or so), and make sure the bgwriter is configured \n>\t>to dribble out dirty pages at a reasonable rate, so that the next \n>\t>checkpoint doesn't have a whole load of stuff to write. \n>\t> \n>\t> regards, tom lane \n>\t> \n>\t>---------------------------(end of broadcast)--------------------------- \n>\t>TIP 1: subscribe and unsubscribe commands go to [email protected] \n>\t> \n>\t> \n>\n>\n>\t---------------------------(end of broadcast)--------------------------- \n>\tTIP 7: don't forget to increase your free space map settings \n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n\n",
"msg_date": "Thu, 19 May 2005 14:29:27 -0400",
"msg_from": "Donald Courtney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
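For anyone wanting to try the checkpoint/bgwriter route discussed in this thread, a rough postgresql.conf sketch follows. The numbers are only starting points, and the bgwriter_* parameter names differ between releases (7.4 has no background writer at all), so check which parameters your version actually exposes:

checkpoint_segments = 128      # each segment is 16 MB of WAL; fewer, larger checkpoints
checkpoint_timeout = 600       # seconds; don't force a checkpoint more often than this
bgwriter_delay = 200           # ms between background-writer rounds, where available
bgwriter_all_percent = 5       # small, steady trickle of dirty buffers between checkpoints

The goal, as described in the quoted advice above, is that by the time a checkpoint arrives most dirty buffers have already been written, so the checkpoint itself causes a much smaller I/O spike.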
{
"msg_contents": "On May 19, 2005, at 2:12 PM, Anjan Dave wrote:\n\n> As far as disk I/O is concerned for flushing the buffers out, I am \n> not ruling out the combination of Dell PERC4 RAID card\n>\n\nThat'd be my first guess as to I/O speed issues. I have some dell \nhardware that by all means should be totally blowing out my other \nboxes in speed, but the I/O sucks out the wazoo. I'm migrating to \nopteron based DB servers with LSI branded cards (not the Dell re- \nbranded ones).\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Thu, 19 May 2005 15:43:03 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "Anjan,\n\n> As far as disk I/O is concerned for flushing the buffers out, I am not\n> ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0\n> Update3 being a problem.\n\nYou know that Update4 is out, yes? \nUpdate3 is currenly throttling your I/O by about 50%.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 19 May 2005 12:57:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>Anjan,\n>\n> \n>\n>>As far as disk I/O is concerned for flushing the buffers out, I am not\n>>ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0\n>>Update3 being a problem.\n>> \n>>\n>\n>You know that Update4 is out, yes? \n>Update3 is currenly throttling your I/O by about 50%.\n> \n>\n\nIs that 50% just for the Dell PERC4 RAID on RH AS 3.0? Sound like\nsevere context switching.\n\nSteve Poe\n\n",
"msg_date": "Thu, 19 May 2005 13:10:16 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL strugling during high load"
}
] |
[
{
"msg_contents": "Yes, I am using it another DB/application. Few more days and I'll have a\nfree hand on this box as well.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Thursday, May 19, 2005 3:58 PM\nTo: Anjan Dave\nCc: Donald Courtney; Tom Lane; [email protected]\nSubject: Re: [PERFORM] PostgreSQL strugling during high load\n\nAnjan,\n\n> As far as disk I/O is concerned for flushing the buffers out, I am not\n> ruling out the combination of Dell PERC4 RAID card, and the RH AS 3.0\n> Update3 being a problem.\n\nYou know that Update4 is out, yes? \nUpdate3 is currenly throttling your I/O by about 50%.\n\n-- \n\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n",
"msg_date": "Thu, 19 May 2005 16:19:58 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL strugling during high load"
}
] |
[
{
"msg_contents": "Looking for some general advice on correlated subqueries vs. joins.\n\nWhich of these plans is likely to perform better. One table is a master \nrecord table for entities and their IDs (nv_products), the other \nrepresents a transitive closure of parent/child relationships (for a \ntree) of ID's in the master record table (and so is larger) \n(ssv_product_children).\n\nThe query is, in english: for direct children of an ID, return the ones \nfor which isrel is true.\n\nI have only a tiny demo table set for which there is only one record \nmatched by the queries below, it's hard to guess at how deep or branchy \na production table might be, so I'm trying to develop a general query \nstrategy and learn a thing or two about pgsql.\n\n\nHere's the join:\n\n# explain select child_pid from ssv_product_children, nv_products where \nnv_products.id = ssv_product_children.child_pid and \nssv_product_children.pid = 1 and nv_products.isrel = 't';\n QUERY PLAN\n--------------------------------------------------------------------------\n Hash Join (cost=1.22..2.47 rows=2 width=8)\n Hash Cond: (\"outer\".child_pid = \"inner\".id)\n -> Seq Scan on ssv_product_children (cost=0.00..1.18 rows=9 width=4)\n Filter: (pid = 1)\n -> Hash (cost=1.21..1.21 rows=4 width=4)\n -> Seq Scan on nv_products (cost=0.00..1.21 rows=4 width=4)\n Filter: (isrel = true)\n(7 rows)\n\n\nHere's the correlated subquery:\n\n\n# explain select child_pid from ssv_product_children where pid = 1 and \nchild_pid = (select nv_products.id from nv_products where nv_products.id \n= child_pid and isrel = 't');\n QUERY PLAN\n---------------------------------------------------------------------\n Seq Scan on ssv_product_children (cost=0.00..18.78 rows=1 width=4)\n Filter: ((pid = 1) AND (child_pid = (subplan)))\n SubPlan\n -> Seq Scan on nv_products (cost=0.00..1.26 rows=1 width=4)\n Filter: ((id = $0) AND (isrel = true))\n(5 rows)\n\n\nThanks for any advice.\n",
"msg_date": "Thu, 19 May 2005 17:50:14 -0400",
"msg_from": "Jeffrey Tenny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which is better, correlated subqueries or joins?"
},
{
"msg_contents": "\nHello,\n\nIt always depends on the dataset but you should try an explain analyze \non each query. It will tell you which one is more efficient for your \nparticular data.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Here's the join:\n> \n> # explain select child_pid from ssv_product_children, nv_products where \n> nv_products.id = ssv_product_children.child_pid and \n> ssv_product_children.pid = 1 and nv_products.isrel = 't';\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Hash Join (cost=1.22..2.47 rows=2 width=8)\n> Hash Cond: (\"outer\".child_pid = \"inner\".id)\n> -> Seq Scan on ssv_product_children (cost=0.00..1.18 rows=9 width=4)\n> Filter: (pid = 1)\n> -> Hash (cost=1.21..1.21 rows=4 width=4)\n> -> Seq Scan on nv_products (cost=0.00..1.21 rows=4 width=4)\n> Filter: (isrel = true)\n> (7 rows)\n> \n> \n> Here's the correlated subquery:\n> \n> \n> # explain select child_pid from ssv_product_children where pid = 1 and \n> child_pid = (select nv_products.id from nv_products where nv_products.id \n> = child_pid and isrel = 't');\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Seq Scan on ssv_product_children (cost=0.00..18.78 rows=1 width=4)\n> Filter: ((pid = 1) AND (child_pid = (subplan)))\n> SubPlan\n> -> Seq Scan on nv_products (cost=0.00..1.26 rows=1 width=4)\n> Filter: ((id = $0) AND (isrel = true))\n> (5 rows)\n> \n> \n> Thanks for any advice.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Thu, 19 May 2005 14:53:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which is better, correlated subqueries or joins?"
}
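In practice that just means prefixing both versions with EXPLAIN ANALYZE and comparing the reported actual times and row counts, e.g.:

EXPLAIN ANALYZE
SELECT child_pid
FROM ssv_product_children, nv_products
WHERE nv_products.id = ssv_product_children.child_pid
  AND ssv_product_children.pid = 1
  AND nv_products.isrel = 't';

EXPLAIN ANALYZE
SELECT child_pid
FROM ssv_product_children
WHERE pid = 1
  AND child_pid = (SELECT nv_products.id
                   FROM nv_products
                   WHERE nv_products.id = child_pid
                     AND isrel = 't');

On a tiny demo table the estimated costs shown above mean very little; with production-sized data the join form usually wins because the correlated subplan is re-executed for every candidate row, but the EXPLAIN ANALYZE numbers are what settle it.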
] |
[
{
"msg_contents": "Hi,\n\nI am using postgresql in small (almost trivial) application in which I \npull some data out of a Cobol C/ISAM file and write it into a pgsl \ntable. My users can then use the data however they want by interfacing \nto the data from OpenOffice.org.\n\nThe amount of data written is about 60MB and takes a few minutes on a \n1200Mhz Athlon with a single 60MB IDE drive running Fedora Core 3 with \npgsql 7.4.7. I'd like to speed up the DB writes a bit if possible. \nData integrity is not at all critical as the database gets dropped, \ncreated, and populated immediately before each use. Filesystem is ext3, \ndata=ordered and I need to keep it that way as there is other data in \nthe filesystem that I do care about. I have not done any tuning in the \nconfig file yet, and was wondering what things would likely speed up \nwrites in this situation.\n\nI'm doing the writes individually. Is there a better way? Combining \nthem all into a transaction or something?\n\nThanks,\nSteve Bergman\n",
"msg_date": "Thu, 19 May 2005 17:21:07 -0500",
"msg_from": "Steve Bergman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing for writes. Data integrity not critical"
},
{
"msg_contents": "On Thu, May 19, 2005 at 05:21:07PM -0500, Steve Bergman wrote:\n> I'm doing the writes individually. Is there a better way? Combining \n> them all into a transaction or something?\n\nBatching them all in one or a few transactions will speed it up a _lot_.\nUsing COPY would help a bit more on top of that.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 20 May 2005 00:27:31 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing for writes. Data integrity not critical"
},
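As an illustration of the difference (the table and file names here are invented), the batched and COPY versions look like:

BEGIN;
INSERT INTO import_rows (id, val) VALUES (1, 'a');
INSERT INTO import_rows (id, val) VALUES (2, 'b');
-- ... many more rows ...
COMMIT;

-- or load the whole file in one statement (server-side path, superuser only):
COPY import_rows (id, val) FROM '/tmp/import_rows.dat';

With autocommit on, every bare INSERT is its own transaction and forces its own WAL flush; one big transaction (or a single COPY) pays that cost only once, which is usually where the large speedup comes from.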
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Thu, May 19, 2005 at 05:21:07PM -0500, Steve Bergman wrote:\n>> I'm doing the writes individually. Is there a better way? Combining \n>> them all into a transaction or something?\n\n> Batching them all in one or a few transactions will speed it up a _lot_.\n> Using COPY would help a bit more on top of that.\n\nAlso, if you really don't need to worry about data integrity, turning\noff fsync in the config file will probably help. (Though since it's\nan IDE drive, maybe not, as the drive may be lying about write complete\nanyway.)\n\nIncreasing checkpoint_segments will help too, at the cost of disk space\n(about 32MB per increment in the value, IIRC). I'd suggest pushing it\nup enough so you don't incur a checkpoint while the time-critical\noperation runs. checkpoint_timeout may be too small too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 May 2005 19:45:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing for writes. Data integrity not critical "
},
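If the data really is disposable, the settings mentioned above would look something like this in postgresql.conf (illustrative values; never use fsync = off for data you cannot rebuild):

fsync = false                  # commits no longer wait for WAL to reach disk; an OS crash can corrupt the cluster
checkpoint_segments = 16       # roughly 32 MB of extra WAL headroom per increment
checkpoint_timeout = 900       # keep checkpoints out of the middle of the load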
{
"msg_contents": "> I'm doing the writes individually. Is there a better way? Combining \n> them all into a transaction or something?\n\nUse COPY of course :)\n\nOr at worst bundle 1000 inserts at a time in a transation...\n\nAnd if you seriously do not care about your data at all, set fsync = off \n in you postgresql.conf for a mega speedup.\n\nChris\n",
"msg_date": "Fri, 20 May 2005 09:45:42 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing for writes. Data integrity not critical"
},
{
"msg_contents": "Is using a ramdisk in situations like this entirely ill-advised then? \nWhen data integrity isn't a huge issue and you really need good write \nperformance it seems like it wouldn't hurt too much. Unless I am \nmissing something?\n\nOn 20 May 2005, at 02:45, Christopher Kings-Lynne wrote:\n\n>> I'm doing the writes individually. Is there a better way? \n>> Combining them all into a transaction or something?\n>>\n>\n> Use COPY of course :)\n>\n> Or at worst bundle 1000 inserts at a time in a transation...\n>\n> And if you seriously do not care about your data at all, set fsync \n> = off in you postgresql.conf for a mega speedup.\n>\n> Chris\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n>\n\n",
"msg_date": "Sat, 21 May 2005 16:16:30 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing for writes. Data integrity not critical"
},
{
"msg_contents": "I am interested in optimising write performance as well, the machine \nI am testing on is maxing out around 450 UPDATEs a second which is \nquite quick I suppose. I haven't tried turning fsync off yet. The \ntable has...a lot of indices as well. They are mostly pretty simple \npartial indexes though.\n\nI would usually just shuv stuff into memcached, but I need to store \nand sort (in realtime) 10's of thousands of rows. (I am experimenting \nwith replacing some in house toplist generating stuff with a PG \ndatabase.) The partial indexes are basically the only thing which \nmakes the table usable btw.\n\nThe read performance is pretty damn good, but for some reason I chose \nto wrote the benchmark script in PHP, which can totally destroy the \naccuracy of your results if you decide to call pg_fetch_*(), even \npg_affected_rows() can skew things significantly.\n\nSo any ideas how to improve the number of writes I can do a second? \nThe existing system sorts everything by the desired column when a \nrequest is made, and the data it sorts is updated in realtime (whilst \nit isn't being sorted.) And it can sustain the read/write load (to \nmemory) just fine. If I PG had heap tables this would probably not be \na problem at all, but it does, so it is. Running it in a ramdisk \nwould be acceptable, it's just annoying to create the db everytime \nthe machine goes down. And having to run the entire PG instance off \nof the ramdisk isn't great either.\n\nOn 19 May 2005, at 23:21, Steve Bergman wrote:\n\n> Hi,\n>\n> I am using postgresql in small (almost trivial) application in \n> which I pull some data out of a Cobol C/ISAM file and write it into \n> a pgsl table. My users can then use the data however they want by \n> interfacing to the data from OpenOffice.org.\n>\n> The amount of data written is about 60MB and takes a few minutes on \n> a 1200Mhz Athlon with a single 60MB IDE drive running Fedora Core 3 \n> with pgsql 7.4.7. I'd like to speed up the DB writes a bit if \n> possible. Data integrity is not at all critical as the database \n> gets dropped, created, and populated immediately before each use. \n> Filesystem is ext3, data=ordered and I need to keep it that way as \n> there is other data in the filesystem that I do care about. I have \n> not done any tuning in the config file yet, and was wondering what \n> things would likely speed up writes in this situation.\n>\n> I'm doing the writes individually. Is there a better way? \n> Combining them all into a transaction or something?\n>\n> Thanks,\n> Steve Bergman\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n",
"msg_date": "Sat, 21 May 2005 18:27:18 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing for writes. Data integrity not critical"
}
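For readers who have not used the partial indexes mentioned above, a small sketch (this toplist table is invented for illustration, not the actual schema being discussed):

CREATE TABLE toplist (
    player_id integer NOT NULL,
    score     integer NOT NULL,
    active    boolean NOT NULL DEFAULT true
);

-- index only the rows the hot queries actually touch
CREATE INDEX toplist_active_score ON toplist (score) WHERE active;

-- matching queries can use the much smaller partial index
SELECT player_id, score
FROM toplist
WHERE active
ORDER BY score DESC
LIMIT 100;

Because only the active rows are indexed, both the index and the cost of maintaining it on writes stay small, which is presumably why the partial indexes are what keep such a table usable.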
] |
[
{
"msg_contents": "explain analyze SELECT audit , store , question , month , year , week ,\nweekday , myaudittotalscore , active , auditnum , answer , quarter , y_n ,\nregion , district , audittype , status , keyedby , questiondisplay , qtext ,\nqdescr , answerdisplay , answertext , customauditnum , dateaudittaken ,\ndatecompleted , dateauditkeyed , datekeyingcomplete , section , createdby ,\ndivision , auditscoredesc , locationnum , text_response ,\nSum(questionpointsavailable) , Sum(pointsscored) \n\nfrom viwAuditCube where clientnum ='RSI' \n\nGROUP BY audit, store, question, month, year, week, weekday,\nmyaudittotalscore, active, auditnum, answer, quarter, y_n, region, district,\naudittype, status, keyedby, questiondisplay, qtext, qdescr, answerdisplay,\nanswertext, customauditnum, dateaudittaken, datecompleted, dateauditkeyed,\ndatekeyingcomplete, section, createdby, division, auditscoredesc,\nlocationnum, text_response ORDER BY audit, store, question, month, year,\nweek, weekday, myaudittotalscore, active, auditnum, answer, quarter, y_n,\nregion, district, audittype, status, keyedby, questiondisplay, qtext,\nqdescr, answerdisplay, answertext, customauditnum, dateaudittaken,\ndatecompleted, dateauditkeyed, datekeyingcomplete, section, createdby,\ndivision, auditscoredesc, locationnum, text_response\n\n \n\n \n\nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n\n \n\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nexplain analyze SELECT audit , store , question , month ,\nyear , week , weekday , myaudittotalscore , active , auditnum , answer ,\nquarter , y_n , region , district , audittype , status , keyedby , questiondisplay\n, qtext , qdescr , answerdisplay , answertext , customauditnum , dateaudittaken\n, datecompleted , dateauditkeyed , datekeyingcomplete , section , createdby ,\ndivision , auditscoredesc , locationnum , text_response , Sum(questionpointsavailable)\n, Sum(pointsscored) \nfrom viwAuditCube where clientnum ='RSI' \nGROUP BY audit, store, question, month, year, week, weekday,\nmyaudittotalscore, active, auditnum, answer, quarter, y_n, region, district, audittype,\nstatus, keyedby, questiondisplay, qtext, qdescr, answerdisplay, answertext, customauditnum,\ndateaudittaken, datecompleted, dateauditkeyed, datekeyingcomplete, section, createdby,\ndivision, auditscoredesc, locationnum, text_response ORDER BY audit, store,\nquestion, month, year, week, weekday, myaudittotalscore, active, auditnum,\nanswer, quarter, y_n, region, district, audittype, status, keyedby, questiondisplay,\nqtext, qdescr, answerdisplay, answertext, customauditnum, dateaudittaken, datecompleted,\ndateauditkeyed, datekeyingcomplete, section, createdby, division, auditscoredesc,\nlocationnum, text_response\n \n \nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 
941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the\nintended recipient, please contact the sender by reply email and delete and\ndestroy all copies of the original message, including attachments.",
"msg_date": "Fri, 20 May 2005 15:47:21 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance on a querry with many group by's any way to speed it up?"
}
] |
[
{
"msg_contents": "Sorry I tried a few times to break this email up (guess there must be a size\nlimit?).\n\nAny one interested in seeing the explain for the speed of many group by's\nquestion just email me.\n\n \n\nBasically the sql is built by a dynamic cube product from data dynamics.\n\nI can edit it prior to it running, but it runs very slow even off the\nflattened file.\n\nI am guessing the product needs the data from the sql I supplied in previous\npost.\n\n \n\nI don't have any ideas to speed it up as I cant store an aggregate flat\ntable without updating it when updates and inserts are made and that would\nbe too time consuming.\n\nAny ideas for how to approach getting the same data set in a faster manner\nare greatly appreciated.\n\n \n\nJoel Fradkin\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nSorry I tried a few times to break this email up (guess\nthere must be a size limit?).\nAny one interested in seeing the explain for the speed of\nmany group by’s question just email me.\n \nBasically the sql is built by a dynamic cube product from\ndata dynamics.\nI can edit it prior to it running, but it runs very slow\neven off the flattened file.\nI am guessing the product needs the data from the sql I\nsupplied in previous post.\n \nI don’t have any ideas to speed it up as I cant store\nan aggregate flat table without updating it when updates and inserts are made\nand that would be too time consuming.\nAny ideas for how to approach getting the same data set in a\nfaster manner are greatly appreciated.\n \nJoel Fradkin",
"msg_date": "Fri, 20 May 2005 15:58:29 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "cant seem to post the explain"
}
] |
[
{
"msg_contents": "Begin forwarded message:\n\n> From: Yves Vindevogel <[email protected]>\n> Date: Mon 23 May 2005 19:23:16 CEST\n> To: [email protected]\n> Subject: Index on table when using DESC clause\n>\n> Hi,\n>\n> I have a table with multiple fields. Two of them are documentname and \n> pages\n> I have indexes on documentname and on pages, and one extra on \n> documentname + pages\n>\n> However, when I query my db using for instance order by pages, \n> documentname, it is very fast.\n> If I use order by pages desc, documentname, it is not fast at all, \n> like it is not using the index properly at all.\n>\n> How can I avoid this ?\n>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Mon, 23 May 2005 19:41:19 +0200",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Index on table when using DESC clause"
},
{
"msg_contents": "On Mon, May 23, 2005 at 07:41:19PM +0200, Yves Vindevogel wrote:\n> However, when I query my db using for instance order by pages, \n> documentname, it is very fast. \n> If I use order by pages desc, documentname, it is not fast at \n> all, like it is not using the index properly at all. \n\nMake an index on \"pages desc, documentname asc\".\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 23 May 2005 20:03:52 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Index on table when using DESC clause"
},
{
"msg_contents": "I tried that, but\n\ncreate index ixTest on table1 (pages desc, documentname)\n\ngives me a syntax error\n\n\nOn 23 May 2005, at 20:03, Steinar H. Gunderson wrote:\n\n> On Mon, May 23, 2005 at 07:41:19PM +0200, Yves Vindevogel wrote:\n>> However, when I query my db using for instance order by pages,\n>> documentname, it is very fast.\n>> If I use order by pages desc, documentname, it is not fast at\n>> all, like it is not using the index properly at all.\n>\n> Make an index on \"pages desc, documentname asc\".\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Mon, 23 May 2005 21:02:17 +0200",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Index on table when using DESC clause"
},
{
"msg_contents": "You didn't say what version of PostgreSQL you're trying.\nI recall old version doesn't used index for backward pagination.\n\nOleg\n\nOn Mon, 23 May 2005, Yves Vindevogel wrote:\n\n> I tried that, but\n>\n> create index ixTest on table1 (pages desc, documentname)\n>\n> gives me a syntax error\n>\n>\n> On 23 May 2005, at 20:03, Steinar H. Gunderson wrote:\n>\n>> On Mon, May 23, 2005 at 07:41:19PM +0200, Yves Vindevogel wrote:\n>>> However, when I query my db using for instance order by pages,\n>>> documentname, it is very fast.\n>>> If I use order by pages desc, documentname, it is not fast at\n>>> all, like it is not using the index properly at all.\n>> \n>> Make an index on \"pages desc, documentname asc\".\n>> \n>> /* Steinar */\n>> -- \n>> Homepage: http://www.sesse.net/\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: the planner will ignore your desire to choose an index scan if your\n>> joining column's datatypes do not match\n>> \n>> \n> Met vriendelijke groeten,\n> Bien ? vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Mon, 23 May 2005 23:46:38 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Index on table when using DESC clause"
},
{
"msg_contents": "As far as I know, to use a straight index Postgres requires either\n\nORDER BY pages, description -- or --\nORDER BY pages DESC, description DESC.\n\nIf you want the results by pages DESC, description ASC, then you have to \nmake an index on an expression or define your own operator or something \nesoteric like that. I would think the ability to have an index where the \ncolumns don't all collate in the same direction would be an easy feature \nto add.",
"msg_date": "Mon, 23 May 2005 13:27:15 -0700",
"msg_from": "Andrew Lazarus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Index on table when using DESC clause"
},
{
"msg_contents": "God I love the sheer brilliance of that minus trick :-))\nTnx a lot\n\nBTW: Are there any plans to change this kind of indexing behaviour ?\nIt makes no sense at all, and, it makes databases slow when you don't \nknow about this.\n\nOn 23 May 2005, at 23:15, Andrew Lazarus wrote:\n\n> What you are trying to do makes perfect sense, but for some strange \n> reason, Postgres doesn't like to do it. In a PG index, all of the \n> columns are always stored in ascending order. So if you have an ORDER \n> BY that is all ASC, it can start from the start of the index. And if \n> you have an ORDER BY that is all DESC, it can start from the end. But \n> if you want one column (like pages) DESC and the other (description) \n> ASC, then PG will use a sequential scan or something else slow and \n> stupid.\n>\n> Other RDBMS know how to do this, by supporting the\n>\n> CREATE INDEX foo ON bar(baz DESC, baz2 ASC)\n>\n> syntax. For PG, you need to fool it with an index on an expression, or \n> a custom operator, or something. I once just made an extra column and \n> used a trigger to be sure that -myvariable was in it at all times \n> (-pages for you) and then made my index on the extra column. Since the \n> extra column in ASC order is the same as the original in DESC, it \n> works.\n> <andrew.vcf>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Mon, 23 May 2005 23:18:49 +0200",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Index on table when using DESC clause"
}
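Written out, the extra-column trick quoted above comes to something like this (neg_pages and the function/trigger names are invented for the example, and it assumes plpgsql is installed in the database):

ALTER TABLE table1 ADD COLUMN neg_pages integer;
UPDATE table1 SET neg_pages = -pages;

CREATE FUNCTION table1_set_neg_pages() RETURNS trigger AS '
BEGIN
    NEW.neg_pages := -NEW.pages;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER table1_set_neg_pages_trg
    BEFORE INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_set_neg_pages();

CREATE INDEX ix_table1_negpages_doc ON table1 (neg_pages, documentname);

-- same ordering as ORDER BY pages DESC, documentname,
-- but can now be satisfied by a plain ascending scan of the new index
SELECT * FROM table1 ORDER BY neg_pages, documentname;

The sort step disappears because an ascending scan of (neg_pages, documentname) returns rows in exactly the requested order.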
] |
[
{
"msg_contents": "I just got a question from one our QA guys who is configuring a RAID 10\ndisk that is destined to hold a postgresql database. The disk\nconfiguration procedure is asking him if he wants to optimize for\nsequential or random access. My first thought is that random is what we\nwould want, but then I started wondering if it's not that simple, and my\nknowledge of stuff at the hardware level is, well, limited.....\n \nIf it were your QA guy, what would you tell him?\n\n- DAP\n------------------------------------------------------------------------\n----------\nDavid Parker Tazz Networks (401) 709-5130\n \n\n\n \n\n\n\n\n\nI just got a \nquestion from one our QA guys who is configuring a RAID 10 disk that is destined \nto hold a postgresql database. The disk configuration procedure is asking him if \nhe wants to optimize for sequential or random access. My first thought is that \nrandom is what we would want, but then I started wondering if it's not that \nsimple, and my knowledge of stuff at the hardware level is, well, \nlimited.....\n \nIf it were your QA \nguy, what would you tell him?\n- \nDAP----------------------------------------------------------------------------------David \nParker Tazz Networks (401) \n709-5130",
"msg_date": "Mon, 23 May 2005 16:58:22 -0400",
"msg_from": "\"David Parker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "seqential vs random io"
},
{
"msg_contents": "David Parker wrote:\n> I just got a question from one our QA guys who is configuring a RAID 10\n> disk that is destined to hold a postgresql database. The disk\n> configuration procedure is asking him if he wants to optimize for\n> sequential or random access. My first thought is that random is what we\n> would want, but then I started wondering if it's not that simple, and my\n> knowledge of stuff at the hardware level is, well, limited.....\n> \n> If it were your QA guy, what would you tell him?\n> \n> - DAP\n\nRandom. Sequential is always pretty fast, it is random that hurts.\n\nThe only time I would say sequential is if you were planning on\nstreaming large files (like iso images) with low load.\n\nBut for a DB, even a sequential scan will probably not be that much data.\n\nAt least, that's my 2c.\n\nJohn\n=:->",
"msg_date": "Mon, 23 May 2005 17:23:41 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqential vs random io"
},
{
"msg_contents": "David,\n\n> > I just got a question from one our QA guys who is configuring a RAID 10\n> > disk that is destined to hold a postgresql database. The disk\n> > configuration procedure is asking him if he wants to optimize for\n> > sequential or random access. My first thought is that random is what we\n> > would want, but then I started wondering if it's not that simple, and my\n> > knowledge of stuff at the hardware level is, well, limited.....\n> >\n> > If it were your QA guy, what would you tell him?\n\nDepends on the type of database. OLTP or Web == random access. Data \nWarehouse == sequential access.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 23 May 2005 15:32:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqential vs random io"
}
] |
[
{
"msg_contents": "I would tell him to go for the random, which is what most DBs would be by nature. What you need to understand will be the cache parameters, read/write cache amount, and stripe size, depending on your controller type and whatever it defaults to on these things.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: David Parker [mailto:[email protected]] \r\n\tSent: Mon 5/23/2005 4:58 PM \r\n\tTo: [email protected] \r\n\tCc: \r\n\tSubject: [PERFORM] seqential vs random io\r\n\t\r\n\t\r\n\tI just got a question from one our QA guys who is configuring a RAID 10 disk that is destined to hold a postgresql database. The disk configuration procedure is asking him if he wants to optimize for sequential or random access. My first thought is that random is what we would want, but then I started wondering if it's not that simple, and my knowledge of stuff at the hardware level is, well, limited.....\r\n\t \r\n\tIf it were your QA guy, what would you tell him?\r\n\r\n\t- DAP\r\n\t----------------------------------------------------------------------------------\r\n\tDavid Parker Tazz Networks (401) 709-5130\r\n\t \r\n\t\r\n\r\n\t \r\n\r\n",
"msg_date": "Mon, 23 May 2005 18:35:44 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqential vs random io"
}
] |
[
{
"msg_contents": "Hi,\n\nI have some experience with MSSQL and am examining\nPostgreSQL. I'm running under Windows. I like what I\nsee so far, but I'm hoping for some performance\nadvice:\n\n1. My test database has 7 million records. \n2. There are two columns - an integer and a char\ncolumn called Day which has a random value of Mon or\nTues, etc. in it.\n3. I made an index on Day.\n\nMy query is:\n\nselect count(*) from mtable where day='Mon'\n\nResults:\n\n1. P3 600 512MB RAM MSSQL. It takes about 4-5 secs to\nrun. If I run a few queries and everything is cached,\nit is sometimes just 1 second.\n\n2. Athlon 1.3 Ghz 1GB RAM. PostgreSQL takes 7 seconds.\nI have played with the buffers setting and currently\nhave it at 7500. At 20000 it took over 20 seconds to\nrun.\n\n5 seconds vs 7 isn't that big of a deal, but 1 second\nvs 7 seconds is. Also, the slower performance is with\nmuch lesser hardware.\n\nAny ideas to try?\n\nThanks much,\nMark\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Mon, 23 May 2005 22:47:15 -0700 (PDT)",
"msg_from": "mark durrant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select performance vs. mssql"
},
{
"msg_contents": "\n> select count(*) from mtable where day='Mon'\n> \n> Results:\n> \n> 1. P3 600 512MB RAM MSSQL. It takes about 4-5 secs to\n> run. If I run a few queries and everything is cached,\n> it is sometimes just 1 second.\n> \n> 2. Athlon 1.3 Ghz 1GB RAM. PostgreSQL takes 7 seconds.\n> I have played with the buffers setting and currently\n> have it at 7500. At 20000 it took over 20 seconds to\n> run.\n> \n> 5 seconds vs 7 isn't that big of a deal, but 1 second\n> vs 7 seconds is. Also, the slower performance is with\n> much lesser hardware.\n\nPost the result of this for us:\n\nexplain analyze select count(*) from mtable where day='Mon';\n\nOn both machines.\n\nChris\n",
"msg_date": "Tue, 24 May 2005 14:18:36 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
}
] |
[
{
"msg_contents": "> Post the result of this for us:\n> \n> explain analyze select count(*) from mtable where\n> day='Mon';\n> \n> On both machines.\n\nHi Chris --\n\nPostgreSQL Machine:\n\"Aggregate (cost=140122.56..140122.56 rows=1 width=0)\n(actual time=24516.000..24516.000 rows=1 loops=1)\"\n\" -> Index Scan using \"day\" on mtable \n(cost=0.00..140035.06 rows=35000 width=0) (actual\ntime=47.000..21841.000 rows=1166025 loops=1)\"\n\" Index Cond: (\"day\" = 'Mon'::bpchar)\"\n\"Total runtime: 24516.000 ms\"\n(Note this took 24 seconds after fresh reboot, next\nexecution was 11, and execution without explain\nanalyze was 6.7 seconds)\n\nMSSQL Machine:\nThat \"Explain Analyze\" command doesn't work for MSSQL,\nbut I did view the Query plan. 97% of it was \"Scanning\na particular range of rows from a nonclustered index\"\n\nThanks for your help --Mark\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Mon, 23 May 2005 23:40:34 -0700 (PDT)",
"msg_from": "mark durrant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "mark durrant wrote:\n> PostgreSQL Machine:\n> \"Aggregate (cost=140122.56..140122.56 rows=1 width=0)\n> (actual time=24516.000..24516.000 rows=1 loops=1)\"\n> \" -> Index Scan using \"day\" on mtable \n> (cost=0.00..140035.06 rows=35000 width=0) (actual\n> time=47.000..21841.000 rows=1166025 loops=1)\"\n> \" Index Cond: (\"day\" = 'Mon'::bpchar)\"\n> \"Total runtime: 24516.000 ms\"\n\nHave you run ANALYZE?\n\nClustering the table on the \"day\" index (via the CLUSTER command) would \nbe worth trying.\n\n-Neil\n",
"msg_date": "Tue, 24 May 2005 16:47:31 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
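Concretely, the two suggestions above amount to the following, using the table and index names from the posted plan:

ANALYZE mtable;                -- refresh the planner statistics

CLUSTER "day" ON mtable;       -- rewrite the table in index order (takes an exclusive lock)
ANALYZE mtable;                -- re-analyze after the rewrite

EXPLAIN ANALYZE SELECT count(*) FROM mtable WHERE day = 'Mon';

After clustering, all the 'Mon' rows sit on adjacent pages, so the index scan reads a small contiguous chunk of the table instead of hopping across most of it.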
{
"msg_contents": "Mark,\n\n> MSSQL Machine:\n> That \"Explain Analyze\" command doesn't work for MSSQL,\n\ntry this:\nset showplan_all on\ngo\nselect ...\ngo\n\nHarald\n",
"msg_date": "Tue, 24 May 2005 10:49:05 +0200",
"msg_from": "\"Harald Lau (Sector-X)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
}
] |
[
{
"msg_contents": "First, thanks for all the helpful replies. I've\nlistened to the suggestions and done some more digging\nand have results:\n\nI did show_plan_all in MSSQL and found that it was\ndoing an Index Scan. I've read someplace that if the\ndata you need is all in the index, then MSSQL has a\nfeature/hack where it does not have to go to the\ntable, it can do my COUNT using the index alone. I\nthink this explains the 1 second query performance.\n\nI changed the query to also include the other column\nwhich is not indexed. The results were MSSQL now used\na TableScan and was MUCH slower than PostgreSQL. \n\nI clustered the index on MSSQL and PostgreSQL and\nincreased buffers to 15000 on PGSQL. I saw a\nnoticeable performance increase on both. On the more\ncomplicated query, PostgreSQL is now 3.5 seconds.\nMSSQL is faster again doing an index scan and is at 2\nseconds. Remember the MSSQL machine has a slower CPU\nas well.\n\nMy interpretations:\n\n--Given having to do a table scan, PostgreSQL seems to\nbe faster. The hardware on my PostrgreSQL machine is\nnicer than the MSSQL one, so perhaps they are just\nabout the same speed with speed determined by the\ndisk.\n\n--Tuning helps. Clustered index cut my query time\ndown. More buffers helped. \n\n--As Chris pointed out, how real-world is this test?\nHis point is valid. The database we're planning will\nhave a lot of rows and require a lot of summarization\n(hence my attempt at a \"test\"), but we shouldn't be\npulling a million rows at a time.\n\n--MSSQL's ability to hit the index only and not having\nto go to the table itself results in a _big_\nperformance/efficiency gain. If someone who's in\ndevelopment wants to pass this along, it would be a\nnice addition to PostgreSQL sometime in the future.\nI'd suspect that as well as making one query faster,\nit would make everything else faster/more scalable as\nthe server load is so much less.\n\nThanks again,\n\nMark\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Small Business - Try our new Resources site\nhttp://smallbusiness.yahoo.com/resources/\n",
"msg_date": "Tue, 24 May 2005 08:36:36 -0700 (PDT)",
"msg_from": "mark durrant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "On Tue, May 24, 2005 at 08:36:36 -0700,\n mark durrant <[email protected]> wrote:\n> \n> --MSSQL's ability to hit the index only and not having\n> to go to the table itself results in a _big_\n> performance/efficiency gain. If someone who's in\n> development wants to pass this along, it would be a\n> nice addition to PostgreSQL sometime in the future.\n> I'd suspect that as well as making one query faster,\n> it would make everything else faster/more scalable as\n> the server load is so much less.\n\nThis gets brought up a lot. The problem is that the index doesn't include\ninformation about whether the current transaction can see the referenced\nrow. Putting this information in the index will add significant overhead\nto every update and the opinion of the developers is that this would be\na net loss overall.\n",
"msg_date": "Tue, 24 May 2005 12:27:13 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "Until you start worrying about MVC - we have had problems with the MSSQL \nimplementation of read consistency because of this 'feature'.\n\nAlex Turner\nNetEconomist\n\nOn 5/24/05, Bruno Wolff III <[email protected]> wrote:\n> \n> On Tue, May 24, 2005 at 08:36:36 -0700,\n> mark durrant <[email protected]> wrote:\n> >\n> > --MSSQL's ability to hit the index only and not having\n> > to go to the table itself results in a _big_\n> > performance/efficiency gain. If someone who's in\n> > development wants to pass this along, it would be a\n> > nice addition to PostgreSQL sometime in the future.\n> > I'd suspect that as well as making one query faster,\n> > it would make everything else faster/more scalable as\n> > the server load is so much less.\n> \n> This gets brought up a lot. The problem is that the index doesn't include\n> information about whether the current transaction can see the referenced\n> row. Putting this information in the index will add significant overhead\n> to every update and the opinion of the developers is that this would be\n> a net loss overall.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\nUntil you start worrying about MVC - we have had problems with the\nMSSQL implementation of read consistency because of this 'feature'.\n\nAlex Turner\nNetEconomistOn 5/24/05, Bruno Wolff III <[email protected]> wrote:\nOn Tue, May 24, 2005 at 08:36:36 -0700, mark durrant <[email protected]> wrote:>> --MSSQL's ability to hit the index only and not having> to go to the table itself results in a _big_\n> performance/efficiency gain. If someone who's in> development wants to pass this along, it would be a> nice addition to PostgreSQL sometime in the future.> I'd suspect that as well as making one query faster,\n> it would make everything else faster/more scalable as> the server load is so much less.This gets brought up a lot. The problem is that the index doesn't includeinformation about whether the current transaction can see the referenced\nrow. Putting this information in the index will add significant overheadto every update and the opinion of the developers is that this would bea net loss overall.---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster",
"msg_date": "Tue, 24 May 2005 19:12:14 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "> --As Chris pointed out, how real-world is this test?\n> His point is valid. The database we're planning will\n> have a lot of rows and require a lot of summarization\n> (hence my attempt at a \"test\"), but we shouldn't be\n> pulling a million rows at a time.\n\nIf you want to do lots of aggregate analysis, I suggest you create a \nsepearate summary table, and create triggers on the main table to \nmaintain your summaries in the other table...\n\n> --MSSQL's ability to hit the index only and not having\n> to go to the table itself results in a _big_\n> performance/efficiency gain. If someone who's in\n> development wants to pass this along, it would be a\n> nice addition to PostgreSQL sometime in the future.\n> I'd suspect that as well as making one query faster,\n> it would make everything else faster/more scalable as\n> the server load is so much less.\n\nThis is well-known and many databases do it. However, due to MVCC \nconsiderations in PostgreSQL, it's not feasible for us to implement it...\n\nChris\n",
"msg_date": "Wed, 25 May 2005 09:29:36 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
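A minimal sketch of the summary-table idea Chris describes, assuming a hypothetical wide "sales" detail table rolled up per day; none of these names come from the thread, and a production version would also need UPDATE/DELETE triggers plus some care about concurrent inserts hitting the same day:

    CREATE TABLE sales_daily_summary (
        sale_date  date PRIMARY KEY,
        n_sales    bigint  NOT NULL DEFAULT 0,
        total_amt  numeric NOT NULL DEFAULT 0
    );

    CREATE OR REPLACE FUNCTION sales_summary_trig() RETURNS trigger AS $$
    BEGIN
        -- Fold each new detail row into its per-day summary row.
        UPDATE sales_daily_summary
           SET n_sales   = n_sales + 1,
               total_amt = total_amt + NEW.amount
         WHERE sale_date = NEW.sale_date;
        IF NOT FOUND THEN
            INSERT INTO sales_daily_summary (sale_date, n_sales, total_amt)
            VALUES (NEW.sale_date, 1, NEW.amount);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER sales_summary
        AFTER INSERT ON sales
        FOR EACH ROW EXECUTE PROCEDURE sales_summary_trig();

Reports then read the small summary table instead of scanning millions of detail rows.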
{
"msg_contents": "On Wed, May 25, 2005 at 09:29:36AM +0800, Christopher Kings-Lynne wrote:\n> >--MSSQL's ability to hit the index only and not having\n> >to go to the table itself results in a _big_\n> >performance/efficiency gain. If someone who's in\n> >development wants to pass this along, it would be a\n> >nice addition to PostgreSQL sometime in the future.\n> >I'd suspect that as well as making one query faster,\n> >it would make everything else faster/more scalable as\n> >the server load is so much less.\n> \n> This is well-known and many databases do it. However, due to MVCC \n> considerations in PostgreSQL, it's not feasible for us to implement it...\n\nWasn't there a plan to store some visibility info in indexes? IIRC the\nidea was that a bit would be set in the index tuple indicating that all\ntransactions that wouldn't be able to see that index value were\ncomplete, meaning that there was no reason to hit the heap for that\ntuple.\n\nI looked on the TODO but didn't see this, maybe it fell through the\ncracks?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sun, 29 May 2005 11:33:12 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "* Bruno Wolff III <[email protected]> wrote:\n\n<snip>\n\n> This gets brought up a lot. The problem is that the index doesn't include\n> information about whether the current transaction can see the referenced\n> row. Putting this information in the index will add significant overhead\n> to every update and the opinion of the developers is that this would be\n> a net loss overall.\n\nwouldn't it work well to make this feature optionally for each \nindex ? There could be some flag on the index (ie set at create \ntime) which tells postgres whether to store mvcc information.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Fri, 8 Jul 2005 16:00:24 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "Enrico Weigelt wrote:\n> Bruno Wolff III wrote:\n>> \n>> This gets brought up a lot. The problem is that the index doesn't include\n>> information about whether the current transaction can see the referenced\n>> row. Putting this information in the index will add significant overhead\n>> to every update and the opinion of the developers is that this would be\n>> a net loss overall.\n> \n> wouldn't it work well to make this feature optionally for each \n> index ? There could be some flag on the index (ie set at create \n> time) which tells postgres whether to store mvcc information.\n\nThere is no reason to assume it can't work.\n\nThere is little reason to assume that it will be the best \nsolution in many circumstances.\n\nThere is a big reason why people are sceptical: there is no patch.\n\n\nThe issue has been debated and beaten to death. People have \nformed their opinions and are unlikely to change their position. \nIf you want to convince people, your best bet is to submit a \npatch and have OSDL measure the performance improvement.\n\nJochem\n\n",
"msg_date": "Fri, 08 Jul 2005 23:47:27 +0200",
"msg_from": "Jochem van Dieten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
}
] |
[
{
"msg_contents": "Hi all,\n\n From whatever reading and surfing I have done, I have found that postgres is\ngood. Actually I myself am a fan of postgres as compared to mysql. However I\nwant to have some frank opinions before I decide something. Following are\nsome of the aspects of my schema, and our concerns --\n\n- We have around 150 tables on the DB\n- We have lot of foreign keys between the tables\n- Couple of tables are going to have around couple of hundereds of millions\nof records (300 Million right now and would grow). Few of this tables are\nfairly wide with around 32 columns, and have around 3-4 columns which are\nforeign keys and refer to other tables\n- Most of the DB usage is Selects. We would have some inserts but that would\nbe like a nightly or a monthly process\n\n\nOur only concern with going with postgres is speed. I haven't done a speed\ntest yet so I can't speak. But the major concern is that the selects and\ninserts are going to be much much slower on postgres than on mysql. I dont\nknow how true this is. I know this is a postgres forum so everyone will say\npostgres is better but I am just looking for some help and advise I guess\n!!!\n\nI am not trying to start a mysql vs postgres war so please dont\nmisunderstand me .... I tried to look around for mysql vs postgres articles,\nbut most of them said mysql is better in speed. However those articles were\nvery old so I dont know about recent stage. Please comment !!!\n\nThanks,\nAmit\n\n",
"msg_date": "Tue, 24 May 2005 13:05:53 -0400",
"msg_from": "Amit V Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "> \n> I am not trying to start a mysql vs postgres war so please dont\n> misunderstand me .... I tried to look around for mysql vs postgres articles,\n> but most of them said mysql is better in speed. However those articles were\n> very old so I dont know about recent stage. Please comment !!!\n\nIt is my experience that MySQL is faster under smaller load scenarios. \nSay 5 - 10 connections only doing simple SELECTS. E.g; a dymanic website.\n\nIt is also my experience that PostgreSQL is faster and more stable under\nconsistent and heavy load. I have customers you regularly are using up \nto 500 connections.\n\nNote that alot of this depends on how your database is designed. Foreign \nkeys slow things down.\n\nI think it would be important for you to look at your overall goal of \nmigration. MySQL is really not a bad product \"IF\" you are willing to \nwork within its limitations.\n\nPostgreSQL is a real RDMS, it is like Oracle or DB2 and comes with a \ncomparable feature set. Only you can decide if that is what you need.\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Tue, 24 May 2005 10:14:35 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Amit,\n\n> - We have lot of foreign keys between the tables\n\nDo you need these keys to be enforced? Last I checked, MySQL was still \nhaving trouble with foriegn keys.\n\n> - Most of the DB usage is Selects. We would have some inserts but that\n> would be like a nightly or a monthly process\n\nSo transaction integrity is not a real concern? This sounds like a data \nwarehouse; wanna try Bizgres? (www.bizgres.org)\n\n> Our only concern with going with postgres is speed. I haven't done a speed\n> test yet so I can't speak. But the major concern is that the selects and\n> inserts are going to be much much slower on postgres than on mysql. I dont\n> know how true this is. I know this is a postgres forum so everyone will say\n> postgres is better but I am just looking for some help and advise I guess\n\nWell, the relative speed depends on what you're doing. You want slow, try a \ntransaction rollback on a large InnoDB table ;-) PostgreSQL/Bizgres will \nalso be implementing bitmapped indexes and table partitioning very soon, so \nwe're liable to pull way ahead of MySQL on very large databases.\n\n> I am not trying to start a mysql vs postgres war so please dont\n> misunderstand me .... I tried to look around for mysql vs postgres\n> articles, but most of them said mysql is better in speed. \n\nAlso I'll bet most of those articles were based on either website use or \nsingle-threaded simple-sql tests. Not a read data warehousing situatiion.\n\nIt's been my personal experience that MySQL does not scale well beyond about \n75GB without extensive support from MySQL AB. PostgreSQL more easily scales \nup to 200GB, and to as much as 1TB with tuning expertise.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 24 May 2005 10:37:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "\n\tIt's common knowledge, it seems, that MySQL without transactions will be \na lot faster than Postgres on Inserts. And on Updates too, that is, unless \nyou have more than a few concurrent concurrent connections, at which point \nthe MySQL full table lock will just kill everything. And you don't have \ntransactions, of course, and if something goes wrong, bye bye data, or \nfunky stuff happens, like half-commited transactions if a constraint is \nviolated in an INSERT SELECT, or you get 0 January 0000 or 31 February, \netc.\n\tI heard it said that MySQL with transactions (InnoDB) is slower than \npostgres. I'd believe it... and you still get 00-00-0000 as a date for \nfree.\n\tBut from your use case postgres doesn't sound like a problem, yours \nsounds like a few big batched COPY's which are really really fast.\n\n\tAnd about SELECTs, this is really from an experience I had a few months \nago, from a e-commerce site... well, to put it nicely, MySQL's planner \ndon't know shit when it comes to doing anything a bit complicated. I had \nthis query to show the \"also purchased\" products on a page, and also a few \nother queries, best buys in this category, related products, etc..., \nnothing very complicated really, at worst they were 4-table joins... and \nwith 50K products MySQL planned it horrendously and it took half a second \n! Seq scans every times... I had to split the query in two, one to get the \nproduct id's, another one to get the products.\n\tI took the sql, put it in postgres with the usual changes (typenames, \netc...) but same indexes, same data... the query took half a millisecond. \nWell... what can I say ?\n\n\tAlso when you sit in front of the psql or mysql command line, it's an \nentirely different experience. One is a pleasure to work with... the other \none is just a pain.\n\n",
"msg_date": "Tue, 24 May 2005 23:17:46 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "In article <[email protected]>,\nJosh Berkus <[email protected]> wrote:\n\n>So transaction integrity is not a real concern?\n\nI know of all too many people that consider that to be\ntrue. <sigh> They simply don't understand the problem.\n\n--\nhttp://www.spinics.net/linux/\n\n",
"msg_date": "Sun, 29 May 2005 04:28:25 -0000",
"msg_from": "[email protected] ()",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
[
{
"msg_contents": "> - Most of the DB usage is Selects. We would have some inserts but that\n> would be like a nightly or a monthly process\n\nSo transaction integrity is not a real concern? This sounds like a data \nwarehouse; wanna try Bizgres? (www.bizgres.org)\n\nI took a look at this. I have a few concerns with bizgres though -- I am\nusing jetspeed portal engine and Hibernate as my O/R Mapping layer. I know\nfor sure that they dont support bizgres. Now the question is what difference\nis there between bizgres and postgres ... I guess I will try to look around\nthe website more and find out, but if there is something you would like to\ncomment, that would be very helpful ...\n\nThanks,\nAmit\n\n",
"msg_date": "Tue, 24 May 2005 13:56:54 -0400",
"msg_from": "Amit V Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On Tue, May 24, 2005 at 01:56:54PM -0400, Amit V Shah wrote:\n> I took a look at this. I have a few concerns with bizgres though -- I am\n> using jetspeed portal engine and Hibernate as my O/R Mapping layer.\n\nIf you have problems with performance, you might want to look into using JDBC\ndirectly instead of using Hibernate. I know groups of people who are rather\nless-than-happy with it performance-wise :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 24 May 2005 20:01:57 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Amit,\n\n> I took a look at this. I have a few concerns with bizgres though -- I am\n> using jetspeed portal engine and Hibernate as my O/R Mapping layer. I know\n> for sure that they dont support bizgres. Now the question is what\n> difference is there between bizgres and postgres ... I guess I will try to\n> look around the website more and find out, but if there is something you\n> would like to comment, that would be very helpful ...\n\nBizgres is PostgreSQL. Just a different packaging of it, with some patches \nwhich are not yet in the main PostgreSQL. Also, it's currently beta.\n\n--Josh\n\n-- \n__Aglio Database Solutions_______________\nJosh Berkus\t\t Consultant\[email protected]\t www.agliodbs.com\nPh: 415-752-2500\tFax: 415-752-2387\n2166 Hayes Suite 200\tSan Francisco, CA\n",
"msg_date": "Tue, 24 May 2005 11:17:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
[
{
"msg_contents": "I'm far from an expert, so this may be off-base... but\nperhaps a suggestion would be to allow a hint to be\nsent to the optimizer if the user doesn't care that\nthe result is \"approximate\" maybe then this wouldn't\nrequire adding more overhead to the indexes.\n\nMSSQL has something like this with (nolock) \ni.e. select count(*) from customers (nolock) where\nname like 'Mark%' \n\nRegardless, I'm very impressed with PostgreSQL and I\nthink we're moving ahead with it.\n\nMark\n\n--- Bruno Wolff III <[email protected]> wrote:\n> On Tue, May 24, 2005 at 08:36:36 -0700,\n> mark durrant <[email protected]> wrote:\n> > \n> > --MSSQL's ability to hit the index only and not\n> having\n> > to go to the table itself results in a _big_\n> > performance/efficiency gain. If someone who's in\n> > development wants to pass this along, it would be\n> a\n> > nice addition to PostgreSQL sometime in the\n> future.\n> > I'd suspect that as well as making one query\n> faster,\n> > it would make everything else faster/more scalable\n> as\n> > the server load is so much less.\n> \n> This gets brought up a lot. The problem is that the\n> index doesn't include\n> information about whether the current transaction\n> can see the referenced\n> row. Putting this information in the index will add\n> significant overhead\n> to every update and the opinion of the developers is\n> that this would be\n> a net loss overall.\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Small Business - Try our new Resources site\nhttp://smallbusiness.yahoo.com/resources/\n",
"msg_date": "Tue, 24 May 2005 12:13:27 -0700 (PDT)",
"msg_from": "mark durrant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "Folks,\n\n> > This gets brought up a lot. The problem is that the\n> > index doesn't include\n> > information about whether the current transaction\n> > can see the referenced\n> > row. Putting this information in the index will add\n> > significant overhead\n> > to every update and the opinion of the developers is\n> > that this would be\n> > a net loss overall.\n\nPretty much. There has been discussion about allowing index-only access to \n\"frozen\" tables, i.e. archive partitions. But it all sort of hinges on \nsomeone implementing it and testing ....\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 24 May 2005 16:35:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "\n> Pretty much. There has been discussion about allowing index-only access \n> to\n> \"frozen\" tables, i.e. archive partitions. But it all sort of hinges on\n> someone implementing it and testing ....\n\n\tWould be interesting as a parameter to set at index creation (ie. if you \nknow this table will have a lot of reads and few writes)... like create an \nindex on columns X,Y keeping data on columns X,Y and Z...\n\tBut in this case do you still need the table ?\n\tOr even create a table type where the table and the index are one, like \nan auto-clustered table...\n\tI don't know if it would be used that often, though ;)\n\n",
"msg_date": "Wed, 25 May 2005 02:14:01 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:\n>Pretty much. There has been discussion about allowing index-only access to \n>\"frozen\" tables, i.e. archive partitions. But it all sort of hinges on \n>someone implementing it and testing ....\n\nIs there any way to expose the planner estimate? For some purposes it's\nenough to just give a rough ballpark (e.g., a google-esque \"results 1-10\nof approximately 10000000\") so a user knows whether its worth even\nstarting to page through.\n\nMike Stone\n",
"msg_date": "Tue, 24 May 2005 20:20:39 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
},
{
"msg_contents": "Michael Stone wrote:\n\n> On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:\n>\n>> Pretty much. There has been discussion about allowing index-only\n>> access to \"frozen\" tables, i.e. archive partitions. But it all sort\n>> of hinges on someone implementing it and testing ....\n>\n>\n> Is there any way to expose the planner estimate? For some purposes it's\n> enough to just give a rough ballpark (e.g., a google-esque \"results 1-10\n> of approximately 10000000\") so a user knows whether its worth even\n> starting to page through.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nWell, you could always do:\n\nEXPLAIN SELECT ...\n\nAnd then parse out the rows= in the first line.\n\nJohn\n=:->",
"msg_date": "Tue, 24 May 2005 19:38:08 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select performance vs. mssql"
}
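To make John's suggestion concrete: the planner's guess sits in the top node of the EXPLAIN output, and a similarly rough table-wide count is kept in the catalog by VACUUM/ANALYZE. The table name is invented for illustration; both figures are estimates, not exact counts:

    EXPLAIN SELECT * FROM customers WHERE name LIKE 'Mark%';
    --  Seq Scan on customers  (cost=0.00..1234.00 rows=4200 width=52)
    --  The "rows=" figure is what a google-style "about N results" display
    --  could use, without actually running the query.

    SELECT reltuples FROM pg_class WHERE relname = 'customers';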
] |
[
{
"msg_contents": "Wondering if someone could explain a pecularity for me:\n\nWe have a database which takes 1000ms to perform a certain query on.\n\nIf I pg_dump that database then create a new database (e.g. \"tempdb\") and upload the dump file (thus making a duplicate) then the same query only takes 190ms !!\n\nVacuum, vacuum analyse, and vacuum full analyse does not seem to have an impact on these times.\n\nCan anyone explain why this may be occurring and how I might be able to keep the original database running at the same speed as \"tempdb\"?\n\nThanks in advance,\n\nDave.\n\n\n\n\n\n\n\nWondering if someone could explain a pecularity for \nme:We have a database which takes 1000ms to perform a certain query \non.If I pg_dump that database then create a new database (e.g. \"tempdb\") \nand upload the dump file (thus making a duplicate) then the same query only \ntakes 190ms !!\nVacuum, vacuum analyse, and vacuum full analyse \ndoes not seem to have an impact on these times.Can anyone explain why \nthis may be occurring and how I might be able to keep the original database \nrunning at the same speed as \"tempdb\"?Thanks in \nadvance,Dave.",
"msg_date": "Wed, 25 May 2005 10:07:49 +0800",
"msg_from": "\"SpaceBallOne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can anyone explain this: duplicate dbs."
},
{
"msg_contents": "> Can anyone explain why this may be occurring and how I might be able to \n> keep the original database running at the same speed as \"tempdb\"?\n\nYou're not vacuuming anywhere near often enough. Read up the database \nmaintenance section of the manual. Then, set up contrib/pg_autovacuum \nto vacuum your database regularly, or make a cron job to run \"vacuumdb \n-a -z -q\" once an hour, say.\n\nYou can fix for the case when you haven't been vacuuming enough by a \nonce off VACUUM FULL ANALYZE command, but this will lock tables \nexclusively as it does its work.\n\nChris\n",
"msg_date": "Wed, 25 May 2005 10:21:46 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
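The maintenance Chris describes comes down to the statements below; the hourly cron job he mentions just runs "vacuumdb -a -z -q", which issues the first form against every database. A sketch only -- how often you need it depends on the update rate:

    -- Routine maintenance: marks dead-row space for reuse and refreshes
    -- planner statistics, without taking exclusive locks.
    VACUUM ANALYZE;

    -- One-off recovery after vacuuming far too rarely; locks each table
    -- exclusively while rewriting it.
    VACUUM FULL ANALYZE;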
{
"msg_contents": "> If I pg_dump that database then create a new database (e.g. \"tempdb\") \n> and upload the dump file (thus making a duplicate) then the same query \n> only takes 190ms !!\n> Vacuum, vacuum analyse, and vacuum full analyse does not seem to have an \n> impact on these times.\n\nDamn, for some reason I didn't read that you had already tried vacuum \nfull. In that case, I can't explain it except perhaps you aren't \nvacuuming properly, or the right thing, or it's a disk cache thing.\n\nChris\n",
"msg_date": "Wed, 25 May 2005 10:22:53 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
{
"msg_contents": "SpaceBallOne wrote:\n\n> Wondering if someone could explain a pecularity for me:\n>\n> We have a database which takes 1000ms to perform a certain query on.\n>\n> If I pg_dump that database then create a new database (e.g. \"tempdb\")\n> and upload the dump file (thus making a duplicate) then the same query\n> only takes 190ms !!\n> Vacuum, vacuum analyse, and vacuum full analyse does not seem to have\n> an impact on these times.\n>\n> Can anyone explain why this may be occurring and how I might be able\n> to keep the original database running at the same speed as \"tempdb\"?\n>\n> Thanks in advance,\n>\n> Dave.\n\nWhat version of postgres?\n\nThere are a few possibilities. If you are having a lot of updates to the\ntable, you can get index bloat. And vacuum doesn't fix indexes. You have\nto \"REINDEX\" to do that. Though REINDEX has the same lock that VACUUM\nFULL has, so you need to be a little careful with it.\n\nProbably better is to do CLUSTER, as it does a REINDEX and a sort, so\nyour table ends up nicer when you are done.\n\nAlso, older versions of postgres had a worse time with index bloat. One\nthing that caused a lot of problem is a table that you insert into over\ntime, so that all the values are incrementing. If you are deleting older\nentries, that area won't be re-used because they fall at the back end. I\nbelieve newer versions have been fixed.\n\nBy the way, I think doing:\n\nCREATE DATABASE tempdb WITH TEMPLATE = originaldb;\n\nIs a much faster way of doing dump and load. I *think* it would recreate\nindexes, etc. If it just does a copy it may not show the dump/restore\nimprovement.\n\nJohn\n=:->",
"msg_date": "Tue, 24 May 2005 21:39:15 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
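For reference, the commands John refers to, with placeholder object names since the real schema never appears in this thread. REINDEX and CLUSTER both take strong locks (CLUSTER is shown in its pre-8.3 spelling, matching the 8.0.x server discussed), and the template copy is the fast duplication he mentions:

    -- Rebuild all indexes on a bloated table:
    REINDEX TABLE mytable;

    -- Rewrite the table in the order of one index, rebuilding its indexes too:
    CLUSTER mytable_pkey ON mytable;

    -- Fast physical copy of a whole database (copies bloat along with the data):
    CREATE DATABASE tempdb WITH TEMPLATE = originaldb;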
{
"msg_contents": "> What version of postgres?\n\n8.0.2 ... but I think I've seen this before on 7.3 ...\n\n> There are a few possibilities. If you are having a lot of updates to the\n> table, you can get index bloat. And vacuum doesn't fix indexes. You have\n> to \"REINDEX\" to do that. Though REINDEX has the same lock that VACUUM\n> FULL has, so you need to be a little careful with it.\n\n> Probably better is to do CLUSTER, as it does a REINDEX and a sort, so\n> your table ends up nicer when you are done.\n\nThanks, will try those next time this problem crops up (i just deleted / \nrecreated the database to speed things for its users in the office ... \nprobably should have held off to see if I could find a solution first!).\n\nYes, the database / table-in-question does have a lot of updates, deletes, \nand new rows (relatively speaking for a small business).\n\nWould CLUSTER / REINDEX still have an effect if our queries were done via \nsequential scan? This is a old database (as in built by me when i was just \nstarting to learn unix / postgres) so the database design is pretty horrible \n(little normalisation, no indexes).\n\nHave taken Chris's advice onboard too and setup cron to do a vacuumdb hourly \ninstead of my weekly vacuum.\n\nCheers,\n\nDave.\n\n\n\n",
"msg_date": "Wed, 25 May 2005 10:53:07 +0800",
"msg_from": "\"SpaceBallOne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
{
"msg_contents": "SpaceBallOne wrote:\n\n>> What version of postgres?\n>\n>\n> 8.0.2 ... but I think I've seen this before on 7.3 ...\n>\n>> There are a few possibilities. If you are having a lot of updates to the\n>> table, you can get index bloat. And vacuum doesn't fix indexes. You have\n>> to \"REINDEX\" to do that. Though REINDEX has the same lock that VACUUM\n>> FULL has, so you need to be a little careful with it.\n>\n>\n>> Probably better is to do CLUSTER, as it does a REINDEX and a sort, so\n>> your table ends up nicer when you are done.\n>\n>\n> Thanks, will try those next time this problem crops up (i just deleted\n> / recreated the database to speed things for its users in the office\n> ... probably should have held off to see if I could find a solution\n> first!).\n>\n> Yes, the database / table-in-question does have a lot of updates,\n> deletes, and new rows (relatively speaking for a small business).\n>\n> Would CLUSTER / REINDEX still have an effect if our queries were done\n> via sequential scan? This is a old database (as in built by me when i\n> was just starting to learn unix / postgres) so the database design is\n> pretty horrible (little normalisation, no indexes).\n\nWell, my first recommendation is to put in some indexes. :) They are\nrelatively easy to setup and can drastically improve select performance.\n\nWhat version of postgres are you using?\nWhat does it say at the end of \"VACUUM FULL ANALYZE VERBOSE\", that\nshould tell you how many free pages were reclaimed and how big your free\nspace map should be.\n\nIf you only did 1 VACUUM FULL, you might try another, as it sounds like\nyour tables aren't properly filled. I'm pretty sure vacuum only removes\nempty pages/marks locations for the free space map so they can be\nre-used, while vacuum full will move entries around to create free pages.\n\nIt sounds like it didn't do it properly.\n\nBut even so, CLUSTER is still your friend, as it allows you to \"presort\"\nthe rows in your tables.\n\n>\n> Have taken Chris's advice onboard too and setup cron to do a vacuumdb\n> hourly instead of my weekly vacuum.\n>\n> Cheers,\n>\n> Dave.\n>\n>\nJohn\n=:->",
"msg_date": "Tue, 24 May 2005 22:00:34 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
{
"msg_contents": "> Would CLUSTER / REINDEX still have an effect if our queries were done \n> via sequential scan? \n\nSELECTS don't write to the database, so they have no effect at all on \nvacuuming/analyzing. You only need to worry about that with writes.\n\n> This is a old database (as in built by me when i \n> was just starting to learn unix / postgres) so the database design is \n> pretty horrible (little normalisation, no indexes).\n\nNo indexes? Bloody hell :D\n\nUse EXPLAIN ANALYZE SELECT ... ; on all of your selects to see where \nthey are slow and where you can add indexes...\n\nChris\n",
"msg_date": "Wed, 25 May 2005 11:00:54 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
},
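A sketch of the workflow Chris suggests, with invented table and column names since the schema isn't shown: index the column the slow WHERE clauses filter on, refresh statistics, and compare the plans.

    CREATE INDEX jobs_customer_id_idx ON jobs (customer_id);
    ANALYZE jobs;

    EXPLAIN ANALYZE
    SELECT * FROM jobs WHERE customer_id = 42;
    -- Before: "Seq Scan on jobs ...".  After: "Index Scan using
    -- jobs_customer_id_idx ...", with a correspondingly smaller actual time.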
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> If I pg_dump that database then create a new database (e.g. \"tempdb\") \n>> and upload the dump file (thus making a duplicate) then the same query \n>> only takes 190ms !!\n>> Vacuum, vacuum analyse, and vacuum full analyse does not seem to have an \n>> impact on these times.\n\n> Damn, for some reason I didn't read that you had already tried vacuum \n> full.\n\nI'm thinking index bloat, and a PG version too old for vacuum full to\nrecover any index space. But without any information about PG version\nor EXPLAIN ANALYZE results, we're all just guessing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 May 2005 23:01:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs. "
},
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> By the way, I think doing:\n\n> CREATE DATABASE tempdb WITH TEMPLATE = originaldb;\n\n> Is a much faster way of doing dump and load. I *think* it would recreate\n> indexes, etc. If it just does a copy it may not show the dump/restore\n> improvement.\n\nCREATE DATABASE just does a physical copy, so it won't do anything at\nall for bloat issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 May 2005 23:02:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs. "
},
{
"msg_contents": "On Tue, May 24, 2005 at 21:39:15 -0500,\n John A Meinel <[email protected]> wrote:\n> \n> By the way, I think doing:\n> \n> CREATE DATABASE tempdb WITH TEMPLATE = originaldb;\n> \n> Is a much faster way of doing dump and load. I *think* it would recreate\n> indexes, etc. If it just does a copy it may not show the dump/restore\n> improvement.\n\nYou need to be careful when doing this. See section 18.3 of the 8.0 docs\nfor caveats.\n",
"msg_date": "Tue, 24 May 2005 22:32:46 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this: duplicate dbs."
}
] |
[
{
"msg_contents": "64-bit PG 8.0.2. is up and running on AIX5.3/power5\n\nYES! ! !\n\nThe major thing: setting some quirky LDFLAGS. \n\nAnyone interested in details, please ping. \n\nThanks to Nick Addington, Vincent Vanwynsberghe, my SA, Sergey, \nand Tom Lane (for good-natured nudging)\n\n\nMy Next Task: Finding a Stress Test Harness to Load, and Query Data. \n\nAnyone have ideas? \n\nI am eagerly awaiting the DESTRUCTION of Oracle around here, and\n\"yes\" I am an oracle DBA and think it's very good technology. \n\nSmiling, \n\nRoss Mohan\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Mohan, Ross\nSent: Wednesday, May 25, 2005 1:11 PM\nTo: [email protected]\nSubject: Re: [PORTS] Which library has these symbols? \n\n\nTom, \n\nthey're all over the place, repeated in different\nlibraries, kind of a pain. Didn't realize that. I'll\njust give linker a bunch of LIBPATH and LIBNAME directives\nand have it run around. \n\n\n# ar -t ./postgresql-8.0.2/src/interfaces/ecpg/ecpglib/libecpg.a | egrep 'dirmod|path|pgstr|pgsleep' \npath.o\n\n# ar -t ./postgresql-8.0.2/src/interfaces/ecpg/pgtypeslib/libpgtypes.a | egrep 'dirmod|path|pgstr|pgsleep' pgstrcasecmp.o\n\n# ar -t ./postgresql-8.0.2/src/interfaces/libpq/libpq.a | egrep 'dirmod|path|pgstr|pgsleep' \npgstrcasecmp.o \n\n# ar -t ./postgresql-8.0.2/src/port/libpgport.a | egrep 'dirmod|path|pgstr|pgsleep' \ndirmod.o\npath.o \npgsleep.o\npgstrcasecmp.o\n\n# ar -t ./postgresql-8.0.2/src/port/libpgport_srv.a | egrep 'dirmod|path|pgstr|pgsleep' dirmod_srv.o path.o pgsleep.o pgstrcasecmp.o\n\n\n\nI **really** want this in 64bit......funny this problem only shows up in 64, not 32 mode. <sigh> \n\n\nThanks for commenting --- That's ALWAYS welcome!\n\n-- Ross\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, May 24, 2005 10:53 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PORTS] Which library has these symbols? \n\n\n\"Mohan, Ross\" <[email protected]> writes:\n> So Close, Yet So Far!\n\nThe specific symbols being complained of should be in libpgport_srv (see src/port). Dunno why your platform is ignoring that library. When you find out, let us know ;-)\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n\n\nRE: [PORTS] Which library has these symbols? -- Eureka\n\n\n\n64-bit PG 8.0.2. is up and running on AIX5.3/power5\n\nYES! ! !\n\nThe major thing: setting some quirky LDFLAGS. \n\nAnyone interested in details, please ping. \n\nThanks to Nick Addington, Vincent Vanwynsberghe, my SA, Sergey, \nand Tom Lane (for good-natured nudging)\n\n\nMy Next Task: Finding a Stress Test Harness to Load, and Query Data. \n\nAnyone have ideas? \n\nI am eagerly awaiting the DESTRUCTION of Oracle around here, and\n\"yes\" I am an oracle DBA and think it's very good technology. \n\nSmiling, \n\nRoss Mohan\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Mohan, Ross\nSent: Wednesday, May 25, 2005 1:11 PM\nTo: [email protected]\nSubject: Re: [PORTS] Which library has these symbols? \n\n\nTom, \n\nthey're all over the place, repeated in different\nlibraries, kind of a pain. Didn't realize that. I'll\njust give linker a bunch of LIBPATH and LIBNAME directives\nand have it run around. 
\n\n\n# ar -t ./postgresql-8.0.2/src/interfaces/ecpg/ecpglib/libecpg.a | egrep 'dirmod|path|pgstr|pgsleep' \npath.o\n\n# ar -t ./postgresql-8.0.2/src/interfaces/ecpg/pgtypeslib/libpgtypes.a | egrep 'dirmod|path|pgstr|pgsleep' pgstrcasecmp.o\n# ar -t ./postgresql-8.0.2/src/interfaces/libpq/libpq.a | egrep 'dirmod|path|pgstr|pgsleep' \npgstrcasecmp.o \n\n# ar -t ./postgresql-8.0.2/src/port/libpgport.a | egrep 'dirmod|path|pgstr|pgsleep' \ndirmod.o\npath.o \npgsleep.o\npgstrcasecmp.o\n\n# ar -t ./postgresql-8.0.2/src/port/libpgport_srv.a | egrep 'dirmod|path|pgstr|pgsleep' dirmod_srv.o path.o pgsleep.o pgstrcasecmp.o\n\n\nI **really** want this in 64bit......funny this problem only shows up in 64, not 32 mode. <sigh> \n\n\nThanks for commenting --- That's ALWAYS welcome!\n\n-- Ross\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, May 24, 2005 10:53 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PORTS] Which library has these symbols? \n\n\n\"Mohan, Ross\" <[email protected]> writes:\n> So Close, Yet So Far!\n\nThe specific symbols being complained of should be in libpgport_srv (see src/port). Dunno why your platform is ignoring that library. When you find out, let us know ;-)\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq",
"msg_date": "Wed, 25 May 2005 18:06:05 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] Which library has these symbols? -- Eureka"
},
{
"msg_contents": "Oops! [email protected] (\"Mohan, Ross\") was seen spray-painting on a wall:\n> 64-bit PG 8.0.2. is up and running on AIX5.3/power5\n>\n> YES! ! !\n>\n> The major thing:� setting some quirky LDFLAGS.\n>\n> Anyone interested in details, please ping.\n\nThis is definitely a matter worthy of interest.\n\nIt would be well worth taking a peek at $PG_SOURCE_HOME/doc/FAQ_AIX\nand seeing if there are notes worth adding to it.\n\nAlternatively, bounce the details over to myself and/or Andrew\nHammond, and we can see about submitting the documentation patch ;-).\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nhttp://cbbrowne.com/info/spreadsheets.html\n\"I visited a company that was doing programming in BASIC in Panama\nCity and I asked them if they resented that the BASIC keywords were in\nEnglish. The answer was: ``Do you resent that the keywords for\ncontrol of actions in music are in Italian?''\" -- Kent M Pitman\n",
"msg_date": "Wed, 25 May 2005 17:30:41 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which library has these symbols? -- Eureka"
},
{
"msg_contents": "> My Next Task: Finding a Stress Test Harness to Load, and Query Data.\n> \n> Anyone have ideas?\n> \n> I am eagerly awaiting the * DESTRUCTION* ** of Oracle around here, and\n> \"yes\" I am an oracle DBA and think it's */ very /*// good technology.\n\nHave you tried the simple 'gmake test'?\n\nOther than that, try http://osdb.sourceforge.net/ perhaps...\n\nChris\n",
"msg_date": "Thu, 26 May 2005 09:40:08 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Which library has these symbols? -- Eureka"
}
] |
[
{
"msg_contents": "Hi all\ni dont know if this is normal, but if yes i would like to know why and\nhow I could do it another way other than using unions.\n\n(I tried on postgresql 7.4 and 8.0.3, made my vacuum analyse just before)\nHere is my simple query:\n\nselect * \nfrom rt_node n, rt_edge e\nwhere node_id = 2 \nand e.start_node_id = n.node_id;\n\nwhich give me the following query plan:\n\n Nested Loop (cost=0.00..79.46 rows=24 width=60)\n -> Index Scan using rt_node_pkey on rt_node n (cost=0.00..5.94\nrows=1 width=36)\n Index Cond: (node_id = 2)\n -> Index Scan using rt_edge_start_node on rt_edge e \n(cost=0.00..73.28 rows=24 width=24)\n Index Cond: (start_node_id = 2)\n\n\nBut if I plug another condition with a OR like this:\nselect * \nfrom rt_node n, rt_edge e\nwhere node_id = 2 \nand (e.start_node_id = n.node_id or e.end_node_id = n.node_id);\n\nI get this plan, it stop using the index!:\n\n Nested Loop (cost=0.00..158.94 rows=4 width=60)\n Join Filter: ((\"inner\".start_node_id = \"outer\".node_id) OR\n(\"inner\".end_node_id = \"outer\".node_id))\n -> Index Scan using rt_node_pkey on rt_node n (cost=0.00..5.94\nrows=1 width=36)\n Index Cond: (node_id = 2)\n -> Seq Scan on rt_edge e (cost=0.00..81.60 rows=4760 width=24)\n\nI tried SET enable_seqscan = OFF and it give me this (woah) :\n\n Nested Loop (cost=100000000.00..100000158.94 rows=4 width=60)\n Join Filter: ((\"inner\".start_node_id = \"outer\".node_id) OR\n(\"inner\".end_node_id = \"outer\".node_id))\n -> Index Scan using rt_node_pkey on rt_node n (cost=0.00..5.94\nrows=1 width=36)\n Index Cond: (node_id = 2)\n -> Seq Scan on rt_edge e (cost=100000000.00..100000081.60\nrows=4760 width=24)\n\nThese are my tables definitions:\nCREATE TABLE rt_node (\n node_id INTEGER PRIMARY KEY\n);\n\nCREATE TABLE rt_edge (\n edge_id INTEGER PRIMARY KEY,\n start_node_id INTEGER NOT NULL,\n end_node_id INTEGER NOT NULL,\n CONSTRAINT start_node_ref FOREIGN KEY (start_node_id) REFERENCES\nrt_node(node_id),\n CONSTRAINT end_node_ref FOREIGN KEY (end_node_id) REFERENCES\nrt_node(node_id)\n );\n \n CREATE INDEX rt_edge_start_node ON rt_edge(start_node_id);\n CREATE INDEX rt_edge_end_node ON rt_edge(end_node_id);\n\n\nI cant figure why it cant use my index\nI know I can use a UNION instead on two query like the first one only\ndifferent on \"start_node_id\"/\"end_node_id\", and it works,\nbut this is a part of a bigger query which is already ugly and I hate\nusing 5 lines for something I could in 5 words.\n\nthank you!\n",
"msg_date": "Wed, 25 May 2005 16:12:00 -0400",
"msg_from": "Jocelyn Turcotte <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inner join on two OR conditions dont use index"
},
{
"msg_contents": "Jocelyn Turcotte wrote:\n\n>Hi all\n>i dont know if this is normal, but if yes i would like to know why and\n>how I could do it another way other than using unions.\n>\n>\n\nThe only thing that *might* work is if you used an index on both keys.\nSo if you did:\n\nCREATE INDEX rt_edge_start_end_node ON rt_edge(start_node_id,end_node_id);\n\nThe reason is that in an \"OR\" construct, you have to check both for being true. So in the general case where you don't know the correlation between the columns, you have to check all of the entries, because even if you know the status of one side of the OR, you don't know the other.\n\nAnother possibility would be to try this index:\n\nCREATE INDEX rt_edge_stare_or_end ON rt_edge(start_node_id OR end_node_id);\n\nI'm not sure how smart the planner can be, though.\n\nJohn\n=:->",
"msg_date": "Wed, 25 May 2005 15:18:52 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner join on two OR conditions dont use index"
},
{
"msg_contents": "Thanks John\n\nit dont seems to work, but in my context I only needed data from the\nrt_node table so I tried this:\n\nselect *\nfrom rt_node n\nwhere node_id = 2\nand exists (select edge_id from rt_edge where start_node_id =\nn.node_id or end_node_id = n.node_id)\n\nand it gave me this plan (even if I remove the stupid node_id = 2 condition):\n\n Index Scan using rt_node_pkey on rt_node n (cost=0.00..6.15 rows=1 width=25)\n Index Cond: (node_id = 2)\n Filter: (subplan)\n SubPlan\n -> Index Scan using rt_edge_start_node, rt_edge_end_node on\nrt_edge (cost=0.00..12.56 rows=4 width=4)\n Index Cond: ((start_node_id = $0) OR (end_node_id = $0))\n\n\nthis time it use my two indexes, maybe because he know that the same\nvalue is compared in the two condition... I should ask my mother if\nshe got an idea, mothers know a lot of stuff!\n\nOn 5/25/05, John A Meinel <[email protected]> wrote:\n> Jocelyn Turcotte wrote:\n> \n> >Hi all\n> >i dont know if this is normal, but if yes i would like to know why and\n> >how I could do it another way other than using unions.\n> >\n> >\n> \n> The only thing that *might* work is if you used an index on both keys.\n> So if you did:\n> \n> CREATE INDEX rt_edge_start_end_node ON rt_edge(start_node_id,end_node_id);\n> \n> The reason is that in an \"OR\" construct, you have to check both for being true. So in the general case where you don't know the correlation between the columns, you have to check all of the entries, because even if you know the status of one side of the OR, you don't know the other.\n> \n> Another possibility would be to try this index:\n> \n> CREATE INDEX rt_edge_stare_or_end ON rt_edge(start_node_id OR end_node_id);\n> \n> I'm not sure how smart the planner can be, though.\n> \n> John\n> =:->\n> \n> \n> \n> \n>\n",
"msg_date": "Wed, 25 May 2005 17:22:10 -0400",
"msg_from": "Jocelyn Turcotte <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner join on two OR conditions dont use index"
},
{
"msg_contents": "Jocelyn Turcotte <[email protected]> writes:\n> But if I plug another condition with a OR like this:\n> select * \n> from rt_node n, rt_edge e\n> where node_id = 2 \n> and (e.start_node_id = n.node_id or e.end_node_id = n.node_id);\n\n> I get this plan, it stop using the index!:\n\nI'm afraid you're stuck with faking it with a UNION for now; the current\nplanner is incapable of recognizing that a join OR condition can be\nhandled with OR indexscans.\n\nFWIW, this is fixed for 8.1 --- in CVS tip, I get this from your\nexample:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.06..17.57 rows=2 width=16)\n -> Index Scan using rt_node_pkey on rt_node n (cost=0.00..4.82 rows=1 width=4)\n Index Cond: (node_id = 2)\n -> Bitmap Heap Scan on rt_edge e (cost=2.06..12.47 rows=18 width=12)\n Recheck Cond: ((e.start_node_id = \"outer\".node_id) OR (e.end_node_id = \"outer\".node_id))\n -> BitmapOr (cost=2.06..2.06 rows=18 width=0)\n -> Bitmap Index Scan on rt_edge_start_node (cost=0.00..1.03 rows=9 width=0)\n Index Cond: (e.start_node_id = \"outer\".node_id)\n -> Bitmap Index Scan on rt_edge_end_node (cost=0.00..1.03 rows=9 width=0)\n Index Cond: (e.end_node_id = \"outer\".node_id)\n(10 rows)\n\n(This is with no data in the tables, so the cost estimates are small,\nbut it does show that the planner knows how to generate this kind of\nquery plan now.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 May 2005 17:43:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner join on two OR conditions dont use index "
}
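Written out against Jocelyn's tables, the UNION workaround Tom refers to might look like the sketch below (it is not from the original thread). Plain UNION rather than UNION ALL keeps an edge whose start and end nodes both match from being returned twice:

    SELECT n.node_id, e.edge_id, e.start_node_id, e.end_node_id
      FROM rt_node n JOIN rt_edge e ON e.start_node_id = n.node_id
     WHERE n.node_id = 2
    UNION
    SELECT n.node_id, e.edge_id, e.start_node_id, e.end_node_id
      FROM rt_node n JOIN rt_edge e ON e.end_node_id = n.node_id
     WHERE n.node_id = 2;

    -- Each branch can use one of the two single-column indexes; the 8.1
    -- bitmap-OR plan shown above achieves the same effect automatically.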
] |
[
{
"msg_contents": "You just couldn't help yourself, could you? :-)\n\n",
"msg_date": "Wed, 25 May 2005 20:36:54 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "test - pls delete and ignore"
}
] |
[
{
"msg_contents": "\nHi:\n\n I have a table called sensors:\n\n Table \"public.sensor\"\n Column | Type | Modifiers\n-----------------+--------------------------+-------------------------------------------------\n sensor_id | integer | not null default \nnextval('sensor_id_seq'::text)\n sensor_model_id | integer | not null\n serial_number | character varying(50) | not null\n purchase_date | timestamp with time zone | not null\n variable_id | integer | not null\n datalink_id | integer | not null\n commentary | text |\nIndexes:\n \"sensor_pkey\" PRIMARY KEY, btree (sensor_id)\nForeign-key constraints:\n \"datalink_id_exists\" FOREIGN KEY (datalink_id) REFERENCES \ndatalink(datalink_id) ON DELETE RESTRICT\n \"sensor_model_id_exists\" FOREIGN KEY (sensor_model_id) REFERENCES \nsensor_model(sensor_model_id) ON DELETE RESTRICT\n \"variable_id_exists\" FOREIGN KEY (variable_id) REFERENCES \nvariable(variable_id) ON DELETE RESTRICT\n\n\nCurrently, it has only 19 rows. But when I try to delete a row, it takes\nforever. I tried restarting the server. I tried a full vacuum to no \navail. I tried the following:\n\nexplain analyze delete from sensor where sensor_id = 12;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual \ntime=0.055..0.068 rows=1 loops=1)\n Filter: (sensor_id = 12)\n Total runtime: 801641.333 ms\n(3 rows)\n\nCan anybody help me out? Thanks so much!\n",
"msg_date": "Thu, 26 May 2005 10:57:53 -0400 (EDT)",
"msg_from": "Colton A Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "poor performance involving a small table"
},
{
"msg_contents": "Colton A Smith wrote:\n> \n> Hi:\n> \n> I have a table called sensors:\n> \n> Table \"public.sensor\"\n> Column | Type | Modifiers\n> -----------------+--------------------------+------------------------------------------------- \n> \n> sensor_id | integer | not null default \n> nextval('sensor_id_seq'::text)\n> sensor_model_id | integer | not null\n> serial_number | character varying(50) | not null\n> purchase_date | timestamp with time zone | not null\n> variable_id | integer | not null\n> datalink_id | integer | not null\n> commentary | text |\n> Indexes:\n> \"sensor_pkey\" PRIMARY KEY, btree (sensor_id)\n> Foreign-key constraints:\n> \"datalink_id_exists\" FOREIGN KEY (datalink_id) REFERENCES \n> datalink(datalink_id) ON DELETE RESTRICT\n> \"sensor_model_id_exists\" FOREIGN KEY (sensor_model_id) REFERENCES \n> sensor_model(sensor_model_id) ON DELETE RESTRICT\n> \"variable_id_exists\" FOREIGN KEY (variable_id) REFERENCES \n> variable(variable_id) ON DELETE RESTRICT\n> \n> \n> Currently, it has only 19 rows. But when I try to delete a row, it takes\n> forever. I tried restarting the server. I tried a full vacuum to no \n> avail. I tried the following:\n> \n> explain analyze delete from sensor where sensor_id = 12;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------ \n> \n> Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual \n> time=0.055..0.068 rows=1 loops=1)\n> Filter: (sensor_id = 12)\n> Total runtime: 801641.333 ms\n> (3 rows)\n> \n> Can anybody help me out? Thanks so much!\n> \n\nI'd say the obvious issue would be your foreign keys slowing things down. Have \nyou analyzed the referenced tables, and indexed the columns on the referenced \ntables?\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n",
"msg_date": "Mon, 30 May 2005 15:00:37 -0700",
"msg_from": "Bricklen Anderson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance involving a small table"
},
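The usual cause of a multi-minute DELETE on a 19-row table is exactly what Bricklen points at: each delete from sensor has to look for referencing rows in every table whose foreign key points at sensor_id, and without an index on those referencing columns that means a sequential scan per child table. The child table below is hypothetical; apply the idea to whichever tables actually reference sensor:

    -- e.g. a large measurement table with a FK on sensor(sensor_id):
    CREATE INDEX readings_sensor_id_idx ON readings (sensor_id);
    ANALYZE readings;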
{
"msg_contents": "> Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual \n> time=0.055..0.068 rows=1 loops=1)\n> Filter: (sensor_id = 12)\n> Total runtime: 801641.333 ms\n> (3 rows)\n> \n> Can anybody help me out? Thanks so much!\n\nDoes your table have millions of dead rows? Do you vacuum once an hour? \n Run VACUUM FULL ANALYE sensor;\n\nChris\n\n",
"msg_date": "Tue, 31 May 2005 09:52:26 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance involving a small table"
}
] |
[
{
"msg_contents": "Hi,\n\nI've got a query that I think the query optimiser should be able\nto work it's magic on but it doesn't! I've had a look around and\nasked on the IRC channel and found that the current code doesn't\nattempt to optimise for what I'm asking it to do at the moment.\nHere's a bad example:\n\n SELECT u.txt\n FROM smalltable t, (\n SELECT id, txt FROM largetable1\n UNION ALL\n SELECT id, txt FROM largetable2) u\n WHERE t.id = u.id\n AND t.foo = 'bar';\n\nI was hoping that \"smalltable\" would get moved up into the union,\nbut it doesn't at the moment and the database does a LOT of extra\nwork. In this case, I can manually do quite a couple of transforms\nto move things around and it does the right thing:\n\n SELECT txt\n FROM (\n SELECT l.id as lid, r.id as rid, r.foo, l.txt\n FROM largetable1 l, smalltable r\n UNION ALL\n SELECT l.id as lid, r.id as rid, r.foo, l.txt\n FROM largetable1 l, smalltable r)\n WHERE foo = 'bar';\n AND lid = rid\n\nThe optimiser is intelligent enough to move the where clauses up\ninto the union and end end up with a reasonably optimal query.\nUnfortunatly, in real life, the query is much larger and reorganising\neverything manually isn't really feasible!\n\nIs this a good place to ask about this or is it more in the realm\nof the hackers mailing list?\n\nThanks,\n Sam\n",
"msg_date": "Thu, 26 May 2005 16:22:03 +0100",
"msg_from": "Sam Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimising queries involving unions"
},
{
"msg_contents": "Sam Mason <[email protected]> writes:\n> Here's a bad example:\n\n> SELECT u.txt\n> FROM smalltable t, (\n> SELECT id, txt FROM largetable1\n> UNION ALL\n> SELECT id, txt FROM largetable2) u\n> WHERE t.id = u.id\n> AND t.foo = 'bar';\n\n> I was hoping that \"smalltable\" would get moved up into the union,\n> but it doesn't at the moment and the database does a LOT of extra\n> work.\n\nI'm afraid we're a long way away from being able to do that; the\nparse/plan representation of UNION wasn't chosen with an eye to\nbeing able to optimize it at all :-(. We can push restriction\nclauses down into a union, but we can't do much with join clauses,\nbecause they necessarily refer to tables that don't even exist\nwithin the sub-query formed by the UNION.\n\nIt'd be nice to fix this someday, but don't hold your breath ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 May 2005 12:53:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising queries involving unions "
},
{
"msg_contents": "Tom Lane wrote:\n>It'd be nice to fix this someday, but don't hold your breath ...\n\nThanks for the response!\n\nIs it even worth me thinking about trying to figure out how to make\nthe current code do this sort of thing? or is it just not going to\nhappen with the code as it is?\n\n\n Sam\n",
"msg_date": "Thu, 26 May 2005 18:42:03 +0100",
"msg_from": "Sam Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimising queries involving unions"
},
{
"msg_contents": "Sam Mason <[email protected]> writes:\n> Tom Lane wrote:\n>> It'd be nice to fix this someday, but don't hold your breath ...\n\n> Is it even worth me thinking about trying to figure out how to make\n> the current code do this sort of thing?\n\nProbably not :-(. What we need is to integrate UNION (and the other\nset-ops) into the normal querytree structure so that the planner can\nconsider alternative plans within its existing framework. That requires\nsome fundamental changes in the Query structure --- in particular, we\nhave to get rid of the current situation that there is exactly one\ntargetlist per rangetable. Decoupling targetlists and rangetables would\nhave some other benefits too (INSERT ... SELECT would get a lot cleaner)\nbut it's a wide-ranging change, and I think could only usefully be\ntackled by someone who is already pretty familiar with the code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 May 2005 14:02:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising queries involving unions "
},
{
"msg_contents": "Hi,\n\nI'm using a workaround for this kind of issues:\n\n\nconsider:\n\n\tselect A from \n\n\t (select B from T1 where C \n\t union\n\t select B from T2 where C \n\t union\n\t select B from T3 where C \n\t ) foo\n\twhere D\n\t\n\t\nin your case:\n\nSELECT u.txt\n FROM (\n SELECT id, txt FROM largetable1,smalltable t WHERE t.id = u.id AND\nt.foo = 'bar'\n UNION ALL\n SELECT id, txt FROM largetable2,smalltable t WHERE t.id = u.id AND\nt.foo = 'bar'\n ) u\n \n\n\n\nand\n\n\tselect A from foo where C and D\n\n(A, B, C, D being everything you want, C and D may also include \"GROUP\nBY,ORDER...)\n\nThe first version will be handled correctly by the optimiser, whereas in the\nsecond version, \nPostgres will first build the UNION and then run the query on it.\n\n\n\n\nI'm having large tables with identical structure, one per day.\nInstead of defining a view on all tables, \nI' using functions that \"distribute\" my query on all tables.\n\nThe only issue if that I need to define a type that match the result\nstructure and each return type needs its own function.\n\n\nExample:\n(The first parameter is a schema name, the four next corresponds to A, B, C,\nD\n\n\n\n\n\n---------------------\ncreate type T_i2_vc1 as (int_1 int,int_2 int,vc_1 varchar);\n\nCREATE OR REPLACE FUNCTION\nvq_T_i2_vc1(varchar,varchar,varchar,varchar,varchar) RETURNS setof T_i2_vc1\nAS $$\n\n\nDECLARE\n result T_i2_vc1%rowtype;\n mviews RECORD;\n sql varchar;\n counter int;\nBEGIN\n select into counter 1;\n \n\t -- loop on all daily tables\n\t FOR mviews IN SELECT distinct this_day FROM daylist order by plainday\ndesc LOOP\n\n\t\tIF counter =1 THEN\n\t\t select INTO sql 'SELECT '||mviews.this_day||' AS plainday, '||$2||'\nFROM '||$3||'_'||mviews.plainday||' WHERE '||$4;\n\t\tELSE\n\t\t select INTO sql sql||' UNION ALL SELECT '||mviews.this_day||' AS\nplainday, '||$2||' FROM '||$3||'_'||mviews.plainday||' WHERE '||$4;\n\t\tEND IF;\n\n\t select into counter counter+1;\n\t END LOOP;\n\t \n\t select INTO sql 'SELECT '||$1||' FROM ('||sql||')foo '||$5;\n \n for result in EXECUTE (sql) LOOP\n return NEXT result; \n end loop;\n return ;\n\nEND;\n$$ LANGUAGE plpgsql;\n\n\n\nNote: in your case the function shoud have a further parameter to join\nlargetable(n) to smalltable in the \"sub queries\"\n\nHTH,\n\nMarc\n\n\n\n\n\n> I've got a query that I think the query optimiser should be able\n> to work it's magic on but it doesn't! I've had a look around and\n> asked on the IRC channel and found that the current code doesn't\n> attempt to optimise for what I'm asking it to do at the moment.\n> Here's a bad example:\n> \n> SELECT u.txt\n> FROM smalltable t, (\n> SELECT id, txt FROM largetable1\n> UNION ALL\n> SELECT id, txt FROM largetable2) u\n> WHERE t.id = u.id\n> AND t.foo = 'bar';\n> \n> I was hoping that \"smalltable\" would get moved up into the union,\n> but it doesn't at the moment and the database does a LOT of extra\n> work. 
In this case, I can manually do quite a couple of transforms\n> to move things around and it does the right thing:\n> \n> SELECT txt\n> FROM (\n> SELECT l.id as lid, r.id as rid, r.foo, l.txt\n> FROM largetable1 l, smalltable r\n> UNION ALL\n> SELECT l.id as lid, r.id as rid, r.foo, l.txt\n> FROM largetable1 l, smalltable r)\n> WHERE foo = 'bar';\n> AND lid = rid\n> \n> The optimiser is intelligent enough to move the where clauses up\n> into the union and end end up with a reasonably optimal query.\n> Unfortunatly, in real life, the query is much larger and reorganising\n> everything manually isn't really feasible!\n\n-- \nWeitersagen: GMX DSL-Flatrates mit Tempo-Garantie!\nAb 4,99 Euro/Monat: http://www.gmx.net/de/go/dsl\n",
"msg_date": "Fri, 27 May 2005 09:40:31 +0200 (MEST)",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising queries involving unions"
}
] |
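A minimal sketch of the hand-rewritten form discussed in the thread above, pushing the small-table join into each UNION ALL branch. It only uses the illustrative names from the example (smalltable, largetable1/largetable2, id, txt, foo); the join inside each branch is assumed to be on the large table's own id column, and whether the rewrite actually wins on real data still has to be confirmed with EXPLAIN ANALYZE:

  SELECT u.txt
  FROM (
    SELECT l.id, l.txt
    FROM largetable1 l, smalltable t
    WHERE t.id = l.id
      AND t.foo = 'bar'
    UNION ALL
    SELECT l.id, l.txt
    FROM largetable2 l, smalltable t
    WHERE t.id = l.id
      AND t.foo = 'bar'
  ) u;

The point of pushing the join down by hand is that each UNION ALL branch can then use an index on its own large table's id column, which the planner of that era could not arrange on its own.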
[
{
"msg_contents": "I am seeing vastly different performance characteristics for almost the\nexact same query. \nCan someone help me break this down and figure out why the one query\ntakes so much longer than the other?\n\nLooking at the explain analyze output, I see that the loops value on the\ninnermost index scan when bucket = 3 is way out of wack with the others.\n\n\nHere's the query...the only thing that changes from run to run is the\nbucket number.\n\nFor some strange reason the id and bucket types are bigint although they\ndo not need to be. \n\nShared buffers is 48000 \nsort_mem is 32767\n\nThis is on 7.4.2 I'm seeing the same thing on 7.4.7 as well.\n\n\nexplain analyze\n select \nt0.filename,\nt2.filename as parentname,\nt0.st_atime,\nt0.size,\nt0.ownernameid,\nt0.filetypeid,\nt0.groupnameid,\nt0.groupnameid,\nt0.id,\nt0.filename \nfrom Nodes_215335885080_1114059806 as t0 inner join \nfileftypebkt_215335885080_1114059806 as t1 on t0.id=t1.fileid inner join\n\ndirs_215335885080_1114059806 as t2 on t0.parentnameid=t2.filenameid \nwhere t1.bucket=3 order by t0.filename asc offset 0 limit 25\n\n\nHere's the bucket distribution..i have clustered the index on the bucket\nvalue.\n\n bucket | count \n--------+---------\n 9 | 13420\n 8 | 274053\n 7 | 2187261\n 6 | 1395\n 5 | 45570\n 4 | 2218830\n 3 | 16940\n 2 | 818405\n 1 | 4092\n(9 rows)\n\n\nAnd the explain analyzes for bucket values of 3 7 and 8\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------------------------------------------------\n Limit (cost=0.00..18730.19 rows=25 width=112) (actual\ntime=89995.190..400863.350 rows=25 loops=1)\n -> Nested Loop (cost=0.00..48333634.41 rows=64513 width=112)\n(actual time=89995.172..400863.043 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=64513 width=69)\n(actual time=89971.894..400484.701 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.074..319084.540 rows=713193 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.101..0.101 rows=0 loops=713193)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 3)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=15.096..15.103 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid)\n Total runtime: 400863.747 ms\n(10 rows)\n\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------------------------------------------\n Limit (cost=0.00..785.15 rows=25 width=112) (actual\ntime=173.935..552.075 rows=25 loops=1)\n -> Nested Loop (cost=0.00..59327691.44 rows=1889045 width=112)\n(actual time=173.917..551.763 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=1889045 width=69)\n(actual time=151.198..303.463 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.225..82.328 rows=6930 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid 
on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.019..0.019 rows=0 loops=6930)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 7)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=9.894..9.901 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid)\n Total runtime: 552.519 ms\n(10 rows)\n\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------------------------------------------------\n Limit (cost=0.00..18730.19 rows=25 width=112) (actual\ntime=81.271..330.404 rows=25 loops=1)\n -> Nested Loop (cost=0.00..48333634.41 rows=64513 width=112)\n(actual time=81.254..330.107 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=64513 width=69)\n(actual time=4.863..8.164 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.204..2.576 rows=75 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.054..0.057 rows=0 loops=75)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 8)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=12.841..12.847 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid)\n Total runtime: 330.835 ms\n(10 rows)\n\n\nThanks,\n\nbrad\n",
"msg_date": "Thu, 26 May 2005 10:36:51 -0500",
"msg_from": "\"Brad Might\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specific query performance problem help requested - postgresql 7.4"
}
] |
[
{
"msg_contents": " I am seeing vastly different performance characteristics for almost the\nexact same query. \nCan someone help me break this down and figure out why the one query\ntakes so much longer than the other?\n\nLooking at the explain analyze output, I see that the loops value on the\ninnermost index scan when bucket = 3 is way out of wack with the others.\n\n\nHere's the query...the only thing that changes from run to run is the\nbucket number.\n\nFor some strange reason the id and bucket types are bigint although they\ndo not need to be. \n\nShared buffers is 48000\nsort_mem is 32767\n\nThis is on 7.4.2 I'm seeing the same thing on 7.4.7 as well.\n\n\nexplain analyze\n select\nt0.filename,\nt2.filename as parentname,\nt0.st_atime,\nt0.size,\nt0.ownernameid,\nt0.filetypeid,\nt0.groupnameid,\nt0.groupnameid,\nt0.id,\nt0.filename\nfrom Nodes_215335885080_1114059806 as t0 inner join\nfileftypebkt_215335885080_1114059806 as t1 on t0.id=t1.fileid inner join\ndirs_215335885080_1114059806 as t2 on t0.parentnameid=t2.filenameid\nwhere t1.bucket=3 order by t0.filename asc offset 0 limit 25\n\n\nHere's the bucket distribution..i have clustered the index on the bucket\nvalue.\n\n bucket | count \n--------+---------\n 9 | 13420\n 8 | 274053\n 7 | 2187261\n 6 | 1395\n 5 | 45570\n 4 | 2218830\n 3 | 16940\n 2 | 818405\n 1 | 4092\n(9 rows)\n\n\nAnd the explain analyzes for bucket values of 3 7 and 8\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------------------------------------------------\n Limit (cost=0.00..18730.19 rows=25 width=112) (actual\ntime=89995.190..400863.350 rows=25 loops=1)\n -> Nested Loop (cost=0.00..48333634.41 rows=64513 width=112)\n(actual time=89995.172..400863.043 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=64513 width=69)\n(actual time=89971.894..400484.701 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.074..319084.540 rows=713193 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.101..0.101 rows=0 loops=713193)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 3)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=15.096..15.103 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid) Total\nruntime: 400863.747 ms (10 rows)\n\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------------------------------------------\n Limit (cost=0.00..785.15 rows=25 width=112) (actual\ntime=173.935..552.075 rows=25 loops=1)\n -> Nested Loop (cost=0.00..59327691.44 rows=1889045 width=112)\n(actual time=173.917..551.763 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=1889045 width=69)\n(actual time=151.198..303.463 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.225..82.328 rows=6930 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid 
on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.019..0.019 rows=0 loops=6930)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 7)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=9.894..9.901 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid) Total\nruntime: 552.519 ms (10 rows)\n\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------------------------------------------------\n Limit (cost=0.00..18730.19 rows=25 width=112) (actual\ntime=81.271..330.404 rows=25 loops=1)\n -> Nested Loop (cost=0.00..48333634.41 rows=64513 width=112)\n(actual time=81.254..330.107 rows=25 loops=1)\n -> Nested Loop (cost=0.00..47944899.32 rows=64513 width=69)\n(actual time=4.863..8.164 rows=25 loops=1)\n -> Index Scan using\nxnodes_215335885080_1114059806_filename on nodes_215335885080_1114059806\nt0 (cost=0.00..19090075.03 rows=4790475 width=69) (actual\ntime=0.204..2.576 rows=75 loops=1)\n -> Index Scan using\nxfileftypebkt_215335885080_1114059806_fileid on\nfileftypebkt_215335885080_1114059806 t1 (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.054..0.057 rows=0 loops=75)\n Index Cond: (\"outer\".id = t1.fileid)\n Filter: (bucket = 8)\n -> Index Scan using xdirs_215335885080_1114059806_filenameid\non dirs_215335885080_1114059806 t2 (cost=0.00..6.01 rows=1 width=59)\n(actual time=12.841..12.847 rows=1 loops=25)\n Index Cond: (\"outer\".parentnameid = t2.filenameid) Total\nruntime: 330.835 ms (10 rows)\n\n\nThanks,\n\nbrad\n",
"msg_date": "Thu, 26 May 2005 12:55:32 -0500",
"msg_from": "\"Brad Might\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specific query performance problem help requested - postgresql 7.4"
},
{
"msg_contents": "\"Brad Might\" <[email protected]> writes:\n> Can someone help me break this down and figure out why the one query\n> takes so much longer than the other?\n\nIt looks to me like there's a correlation between filename and bucket,\nsuch that the indexscan in filename order takes much longer to run\nacross the first 25 rows with bucket = 3 than it does to run across\nthe first 25 with bucket = 7 or bucket = 8. It's not just a matter of\nthere being fewer rows with bucket = 3 ... the cost differential is much\nlarger than is explained by the count ratios. The bucket = 3 rows have\nto be lurking further to the back of the filename order than the others.\n\n> Here's the bucket distribution..i have clustered the index on the bucket\n> value.\n\nIf you have an index on bucket, it's not doing you any good here anyway,\nsince you wrote the constraint as a crosstype operator (\"3\" is int4 not\nint8). It might help to explicitly cast the constant to int8.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 May 2005 14:31:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specific query performance problem help requested - postgresql\n\t7.4"
}
] |
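A sketch of the explicit-cast fix suggested above; the select list is abbreviated here, and whether the planner then drives the join from the small bucket = 3 set still needs to be verified with EXPLAIN ANALYZE on the real tables:

  explain analyze
  select t0.filename, t2.filename as parentname, t0.id
  from Nodes_215335885080_1114059806 as t0
    inner join fileftypebkt_215335885080_1114059806 as t1 on t0.id = t1.fileid
    inner join dirs_215335885080_1114059806 as t2 on t0.parentnameid = t2.filenameid
  where t1.bucket = 3::int8
  order by t0.filename asc offset 0 limit 25;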
[
{
"msg_contents": "Looks like I modified that constraint since the original has '3' and\nexplaining that shows the one I ended up running and posting has 3. \nWhn I explain on the original version it shows filter: (bucket =\n3::bigint)\n\nCan you elaborate on what you mean by:\n> The bucket = 3 rows have to be lurking further to the back of the\nfilename order than the others\n\nHow does this apply to the index on filename?\n\nIt is possible that the data values are skewed, is there any way I can\ngracefully handle this condition?\nThis query is being used to extract data for interactive display and the\ntime for bucket 3 is so out of \nwhack with all the others (I've run this across all buckets and only\nbucket 3 has the horrendous excecution\ntimes)\n\n\nAny suggestions for working around this problem to speed up execution?\n\n\nThanks for the help\n\nbrad\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, May 26, 2005 1:32 PM\nTo: Brad Might\nCc: [email protected]\nSubject: Re: [PERFORM] Specific query performance problem help requested\n- postgresql 7.4 \n\n\"Brad Might\" <[email protected]> writes:\n> Can someone help me break this down and figure out why the one query \n> takes so much longer than the other?\n\nIt looks to me like there's a correlation between filename and bucket,\nsuch that the indexscan in filename order takes much longer to run\nacross the first 25 rows with bucket = 3 than it does to run across the\nfirst 25 with bucket = 7 or bucket = 8. It's not just a matter of there\nbeing fewer rows with bucket = 3 ... the cost differential is much\nlarger than is explained by the count ratios. The bucket = 3 rows have\nto be lurking further to the back of the filename order than the others.\n\n> Here's the bucket distribution..i have clustered the index on the \n> bucket value.\n\nIf you have an index on bucket, it's not doing you any good here anyway,\nsince you wrote the constraint as a crosstype operator (\"3\" is int4 not\nint8). It might help to explicitly cast the constant to int8.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 26 May 2005 13:41:44 -0500",
"msg_from": "\"Brad Might\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific query performance problem help requested - postgresql\n\t7.4"
}
] |
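One possible follow-up, sketched here as an assumption rather than something taken from the thread: combine the explicit int8 cast with an index that lets the planner start from the small bucket = 3 set instead of walking the whole filename index. The index name is invented, and any benefit would have to be confirmed with EXPLAIN ANALYZE against this schema:

  -- hypothetical composite index covering the join column per bucket
  CREATE INDEX fileftypebkt_bucket_fileid
    ON fileftypebkt_215335885080_1114059806 (bucket, fileid);
  ANALYZE fileftypebkt_215335885080_1114059806;
  -- in the query itself, write the constant as 3::int8 (as suggested above)
  -- so an index on the int8 bucket column is actually considered.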
[
{
"msg_contents": "I have some queries that have significan't slowed down in the last\ncouple days. It's gone from 10 seconds to over 2 mins.\n\nThe cpu has never gone over 35% in the servers lifetime, but the load\naverage is over 8.0 right now. I'm assuming this is probably due to\ndisk io.\n\nI need some help setting up postgres so that it doesn't need to go to\ndisk. I think the shared_buffers and effective_cache_size values are\nthe one's I need to look at.\n\nWould setting shmmax and smmall to 90% or so of available mem and\nputting a lot for postgres be helpful?\n\nEffective cach size says this: \nSets the planner's assumption about the effective size of the disk\ncache (that is, the portion of the kernel's disk cache that will be\nused for PostgreSQL data files).\n\nDoes that mean the total available ram? Or what's left over from shared_buffers?\n\nI've tried different things and not much has been working. Is there a\ngood way to ensure that most of the tables accessed in postgres will\nbe cached in mem and not have to go to disk?\n\nIf I'm joining a lot of tables, should the sort_mem be set high also?\nDo shared_buffers, effective_cache_size, and sort_mem all use\ndifferent mem? Or are they seperate?\n\nI've looked for information and haven't found any useful pages about this.\n\nAny help would be greatly appreciated.\n\nThanks.\n\n-Josh\n",
"msg_date": "Thu, 26 May 2005 15:24:16 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow queries, possibly disk io"
},
{
"msg_contents": "Josh Close wrote:\n\n>I have some queries that have significan't slowed down in the last\n>couple days. It's gone from 10 seconds to over 2 mins.\n>\n>The cpu has never gone over 35% in the servers lifetime, but the load\n>average is over 8.0 right now. I'm assuming this is probably due to\n>disk io.\n>\n>I need some help setting up postgres so that it doesn't need to go to\n>disk. I think the shared_buffers and effective_cache_size values are\n>the one's I need to look at.\n>\n>Would setting shmmax and smmall to 90% or so of available mem and\n>putting a lot for postgres be helpful?\n>\n>\nSetting shared buffers above something like 10-30% of memory is counter\nproductive.\n\n>Effective cach size says this:\n>Sets the planner's assumption about the effective size of the disk\n>cache (that is, the portion of the kernel's disk cache that will be\n>used for PostgreSQL data files).\n>\n>Does that mean the total available ram? Or what's left over from shared_buffers?\n>\n>I've tried different things and not much has been working. Is there a\n>good way to ensure that most of the tables accessed in postgres will\n>be cached in mem and not have to go to disk?\n>\n>If I'm joining a lot of tables, should the sort_mem be set high also?\n>Do shared_buffers, effective_cache_size, and sort_mem all use\n>different mem? Or are they seperate?\n>\n>\n>\nIncreasing sort_mem can help with various activities, but increasing it\ntoo much can cause you to swap, which kills performance. The caution is\nthat you will likely use at least 1 sort_mem per connection, and can\nlikely use more than one if the query is complicated.\n\neffective_cache_size changes how Postgres plans queries, but given the\nsame query plan, it doesn't change performance at all.\n\n>I've looked for information and haven't found any useful pages about this.\n>\n>Any help would be greatly appreciated.\n>\n>Thanks.\n>\n>-Josh\n>\n>\n\nJohn\n=:->",
"msg_date": "Thu, 26 May 2005 16:14:39 -0500",
"msg_from": "John Arbash Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "On 5/26/05, Josh Close <[email protected]> wrote:\n> I have some queries that have significan't slowed down in the last\n> couple days. It's gone from 10 seconds to over 2 mins.\n> \n> The cpu has never gone over 35% in the servers lifetime, but the load\n> average is over 8.0 right now. I'm assuming this is probably due to\n> disk io.\n> \n> I need some help setting up postgres so that it doesn't need to go to\n> disk. I think the shared_buffers and effective_cache_size values are\n> the one's I need to look at.\n\nFew \"mandatory\" questions:\n\n1. Do you vacuum your db on regular basis? :)\n\n2. Perhaps statistics for tables in question are out of date, did you\n try alter table set statistics?\n\n3. explain analyze of the slow query?\n\n4. if you for some reason cannot give explain analyze, please try to\ndescribe the type of query (what kind of join(s)) and amount of data\nfound in the tables.\n\n2 minutes from 10 seconds is a huge leap, and it may mean that\nPostgreSQL for some reason is not planning as well as it could.\nThrowing more RAM at the problem can help, but it would be better\nto hint the planner to do the right thing. It may be a good time to\nplay with planner variables. :)\n\n Regards,\n Dawid\n",
"msg_date": "Thu, 26 May 2005 23:23:15 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "> I have some queries that have significan't slowed down in the last\n> couple days. It's gone from 10 seconds to over 2 mins.\n> \n> The cpu has never gone over 35% in the servers lifetime, but the load\n> average is over 8.0 right now. I'm assuming this is probably due to\n> disk io.\n\nYou sure it's not a severe lack of vacuuming that's the problem?\n\nChris\n",
"msg_date": "Fri, 27 May 2005 09:29:52 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "> Setting shared buffers above something like 10-30% of memory is counter\n> productive.\n\nWhat is the reason behind it being counter productive? If shared\nbuffers are at 30%, should effective cache size be at 70%? How do\nthose two relate?\n\n> \n> Increasing sort_mem can help with various activities, but increasing it\n> too much can cause you to swap, which kills performance. The caution is\n> that you will likely use at least 1 sort_mem per connection, and can\n> likely use more than one if the query is complicated.\n\nI have a max of 100 connections and 2 gigs of mem. Right now the sort\nmem is a 4 megs. How much higher could I put that?\n\n-Josh\n",
"msg_date": "Fri, 27 May 2005 07:52:16 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "> Few \"mandatory\" questions:\n>\n> 1. Do you vacuum your db on regular basis? :)\n\nIt's vacuumed once every hour. The table sizes and data are constantly changing.\n\n>\n> 2. Perhaps statistics for tables in question are out of date, did you\n> try alter table set statistics?\n\nNo I haven't. What would that do for me?\n\n>\n> 3. explain analyze of the slow query?\n\nHere is the function that is ran:\n\nCREATE OR REPLACE FUNCTION adaption.funmsgspermin()\n RETURNS int4 AS\n'\nDECLARE\n this_rServerIds RECORD;\n this_sQuery TEXT;\n this_iMsgsPerMin INT;\n this_rNumSent RECORD;\n\nBEGIN\n this_iMsgsPerMin := 0;\n FOR this_rServerIds IN\n SELECT iId\n FROM adaption.tblServers\n LOOP\n this_sQuery := \\'\n SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n WHERE tStamp > now() - interval \\'\\'5 mins\\'\\';\n \\';\n FOR this_rNumSent IN EXECUTE this_sQuery LOOP\n this_iMsgsPerMin := this_iMsgsPerMin + this_rNumSent.iNumSent;\n END LOOP;\n END LOOP;\n\n this_iMsgsPerMin := this_iMsgsPerMin / 5;\n\n RETURN this_iMsgsPerMin;\nEND;\n'\nLANGUAGE 'plpgsql' VOLATILE;\n\nHere is the explain analyze of one loops of the sum:\n\nAggregate (cost=31038.04..31038.04 rows=1 width=4) (actual\ntime=14649.602..14649.604 rows=1 loops=1)\n -> Seq Scan on tblbatchhistory_1 (cost=0.00..30907.03 rows=52401\nwidth=4) (actual time=6339.223..14648.433 rows=919 loops=1)\n Filter: (tstamp > (now() - '00:05:00'::interval))\nTotal runtime: 14649.709 ms\n\n>\n> 4. if you for some reason cannot give explain analyze, please try to\n> describe the type of query (what kind of join(s)) and amount of data\n> found in the tables.\n>\n> 2 minutes from 10 seconds is a huge leap, and it may mean that\n> PostgreSQL for some reason is not planning as well as it could.\n> Throwing more RAM at the problem can help, but it would be better\n> to hint the planner to do the right thing. It may be a good time to\n> play with planner variables. :)\n\nIs there any documentation on planner vars? And how would I throw more\nram at it? It has 2 gigs right now. How do I know if postgres is using\nthat?\n\n-Josh\n",
"msg_date": "Fri, 27 May 2005 08:04:39 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow queries, possibly disk io"
},
{
"msg_contents": "On 5/26/05, Christopher Kings-Lynne <[email protected]> wrote:\n> > I have some queries that have significan't slowed down in the last\n> > couple days. It's gone from 10 seconds to over 2 mins.\n> >\n> > The cpu has never gone over 35% in the servers lifetime, but the load\n> > average is over 8.0 right now. I'm assuming this is probably due to\n> > disk io.\n> \n> You sure it's not a severe lack of vacuuming that's the problem?\n> \n\nIt's vacuumed hourly. If it needs to be more than that I could do it I\nguess. But from everything I've been told, hourly should be enough.\n\n-Josh\n",
"msg_date": "Fri, 27 May 2005 08:05:47 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "Josh Close <[email protected]> writes:\n> this_sQuery := \\'\n> SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n> FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n> WHERE tStamp > now() - interval \\'\\'5 mins\\'\\';\n> \\';\n\n> Here is the explain analyze of one loops of the sum:\n\n> Aggregate (cost=31038.04..31038.04 rows=1 width=4) (actual\n> time=14649.602..14649.604 rows=1 loops=1)\n> -> Seq Scan on tblbatchhistory_1 (cost=0.00..30907.03 rows=52401\n> width=4) (actual time=6339.223..14648.433 rows=919 loops=1)\n> Filter: (tstamp > (now() - '00:05:00'::interval))\n> Total runtime: 14649.709 ms\n\nI think you really want that seqscan to be an indexscan, instead.\nI'm betting this is PG 7.4.something? If so, probably the only\nway to make it happen is to simplify the now() expression to a constant:\n\n SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n WHERE tStamp > \\\\\\'' || (now() - interval \\'5 mins\\')::text ||\n \\'\\\\\\'\\';\n\nbecause pre-8.0 the planner won't realize that the inequality is\nselective enough to favor an indexscan, unless it's comparing to\na simple constant.\n\n(BTW, 8.0's dollar quoting makes this sort of thing a lot less painful)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 May 2005 10:29:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io "
},
{
"msg_contents": "> I think you really want that seqscan to be an indexscan, instead.\n> I'm betting this is PG 7.4.something? If so, probably the only\n> way to make it happen is to simplify the now() expression to a constant:\n> \n> SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n> FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n> WHERE tStamp > \\\\\\'' || (now() - interval \\'5 mins\\')::text ||\n> \\'\\\\\\'\\';\n\nThe dollar sign thing would be a lot easier. I can't get this to work.\nI'm using a db manager where I can just use ' instead of \\'. How would\nit look for that? In other words, it doesn't have the \"create or\nreplace function as ' --stuff ' language 'plpgsql'\" it just has the\nactual function. Makes things a little easier. I'm getting an error at\nor near \"5\".\n\n> \n> because pre-8.0 the planner won't realize that the inequality is\n> selective enough to favor an indexscan, unless it's comparing to\n> a simple constant.\n> \n> (BTW, 8.0's dollar quoting makes this sort of thing a lot less painful)\n> \n> regards, tom lane\n> \n\n\n-- \n-Josh\n",
"msg_date": "Fri, 27 May 2005 09:54:52 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "Doing the query\n\nexplain\nSELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\nFROM adaption.tblBatchHistory_1\nWHERE tStamp > ( now() - interval '5 mins' )::text\n\ngives me this:\n\nAggregate (cost=32138.33..32138.33 rows=1 width=4)\n-> Seq Scan on tblbatchhistory_1 (cost=0.00..31996.10 rows=56891 width=4)\nFilter: ((tstamp)::text > ((now() - '00:05:00'::interval))::text)\n\nStill not an index scan.\n\nOn 5/27/05, Tom Lane <[email protected]> wrote:\n> Josh Close <[email protected]> writes:\n> > this_sQuery := \\'\n> > SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n> > FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n> > WHERE tStamp > now() - interval \\'\\'5 mins\\'\\';\n> > \\';\n> \n> > Here is the explain analyze of one loops of the sum:\n> \n> > Aggregate (cost=31038.04..31038.04 rows=1 width=4) (actual\n> > time=14649.602..14649.604 rows=1 loops=1)\n> > -> Seq Scan on tblbatchhistory_1 (cost=0.00..30907.03 rows=52401\n> > width=4) (actual time=6339.223..14648.433 rows=919 loops=1)\n> > Filter: (tstamp > (now() - '00:05:00'::interval))\n> > Total runtime: 14649.709 ms\n> \n> I think you really want that seqscan to be an indexscan, instead.\n> I'm betting this is PG 7.4.something? If so, probably the only\n> way to make it happen is to simplify the now() expression to a constant:\n> \n> SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent\n> FROM adaption.tblBatchHistory_\\' || this_rServerIds.iId || \\'\n> WHERE tStamp > \\\\\\'' || (now() - interval \\'5 mins\\')::text ||\n> \\'\\\\\\'\\';\n> \n> because pre-8.0 the planner won't realize that the inequality is\n> selective enough to favor an indexscan, unless it's comparing to\n> a simple constant.\n> \n> (BTW, 8.0's dollar quoting makes this sort of thing a lot less painful)\n> \n> regards, tom lane\n> \n\n\n-- \n-Josh\n",
"msg_date": "Fri, 27 May 2005 10:00:23 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "Hi,\n\nI had some disk io issues recently with NFS, I found the command 'iostat \n-x 5' to be a great help when using Linux.\n\nFor example here is the output when I do a 10GB file transfer onto hdc\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nhdc 0.00 875.95 0.00 29.66 0.00 7244.89 0.00 3622.44 \n244.27 3.07 103.52 1.78 5.27\n\nThe last field show the disk is 5.27% busy.\n\nI have seen this value at 100%, adding more server brought it under 100%.\nIt seems that if you hit 100% problems sort of cascade all over that \nplace. For example Apache connections went right up and hit their max.\n\nI am not sure how accurate the % is but it has work pretty well for me.\n\nPerhaps use this command in another window with you run your SQL and see \nwhat it shows.\n\nHTH.\nKind regards,\nRudi.\n",
"msg_date": "Mon, 30 May 2005 15:16:49 -0700",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "I didn't see iostat as available to install, but I'm using dstat to see this.\n\nThe server has constant disk reads averaging around 50M and quite a\nfew in the 60M range. This is when selects are being done, which is\nalmost always. I would think if postgres is grabbing everything from\nmemory that this wouldn't happen. This is why I think there must be\nsome way to allocate more mem to postgres.\n\nThere is 2 gigs of mem in this server. Here are my current settings.\n\nmax_connections = 100\nshared_buffers = 50000\nsort_mem = 4096\nvacuum_mem = 32768\neffective_cache_size = 450000\n\nShared buffers is set to 10% of total mem. Effective cache size is 90% of mem.\n\nIs there anything that can be done to have postgres grab more from\nmemory rather than disk?\n\n\nOn 5/30/05, Rudi Starcevic <[email protected]> wrote:\n> Hi,\n> \n> I had some disk io issues recently with NFS, I found the command 'iostat\n> -x 5' to be a great help when using Linux.\n> \n> For example here is the output when I do a 10GB file transfer onto hdc\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\n> avgrq-sz avgqu-sz await svctm %util\n> hdc 0.00 875.95 0.00 29.66 0.00 7244.89 0.00 3622.44\n> 244.27 3.07 103.52 1.78 5.27\n> \n> The last field show the disk is 5.27% busy.\n> \n> I have seen this value at 100%, adding more server brought it under 100%.\n> It seems that if you hit 100% problems sort of cascade all over that\n> place. For example Apache connections went right up and hit their max.\n> \n> I am not sure how accurate the % is but it has work pretty well for me.\n> \n> Perhaps use this command in another window with you run your SQL and see\n> what it shows.\n> \n> HTH.\n> Kind regards,\n> Rudi.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n\n-- \n-Josh\n",
"msg_date": "Tue, 31 May 2005 08:17:31 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "Josh Close <[email protected]> writes:\n> There is 2 gigs of mem in this server. Here are my current settings.\n\n> max_connections = 100\n> shared_buffers = 50000\n> sort_mem = 4096\n> vacuum_mem = 32768\n> effective_cache_size = 450000\n\n> Shared buffers is set to 10% of total mem. Effective cache size is 90% of mem.\n\nUh, shared_buffers and effective_cache_size are both measured in pages,\nwhich are 8K apiece unless you built with a nondefault BLCKSZ. So the\nabove calculations are off ...\n\n> Is there anything that can be done to have postgres grab more from\n> memory rather than disk?\n\nIt's not so much a matter of what Postgres will do as what the kernel\nwill do. Check to see if there is some limit on how much memory the\nkernel will set aside for disk buffers. Plain old \"top\" will generally\ntell you what is going on, though interpreting its output sometimes\nrequires some wizardry.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 May 2005 10:33:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io "
},
{
"msg_contents": "On 5/31/05, Martin Fandel <[email protected]> wrote:\n> In the documentation of\n> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n> is the shared_buffers set to 1/3 of the availble RAM. You're set\n> 50000*8/1024=391 MB SHMEM. The effective_cache_size in your\n> configuration is 450000*8/1024=3516 MB SHMEM. That's 3907MB\n> of RAM but you have less than 2048MB availble.\n\n\nI wrote that wrong, there is actually 4 gigs of ram available.\n\n\n> \n> What value do you have in /proc/sys/kernel/shmmax?\n> \n> I'm really new at using postgres and i have not many experience\n> but maybe you can try to use 1/3 (682MB/87424)for shared_buffers\n> and 2/3 (1365MB/174720) for the effective_cache_size? But i these\n> settings are to high too.\n> \n> best regards\n> Martin\n",
"msg_date": "Tue, 31 May 2005 11:36:49 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": ">On 5/31/05, Martin Fandel <[email protected]> wrote:\n>> In the documentation of\n>> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n>> is the shared_buffers set to 1/3 of the availble RAM.\n\nWell, it says \"you should never use more than 1/3 of your available RAM\"\nwhich is not quite the same as \"it is set to 1/3.\" I'd even say, never\nset it higher than 1/10 of your available RAM, unless you know what\nyou're doing and why you're doing it.\n\nServus\n Manfred\n",
"msg_date": "Tue, 31 May 2005 20:18:13 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "On Fri, 2005-05-27 at 07:52 -0500, Josh Close wrote:\n> > Setting shared buffers above something like 10-30% of memory is counter\n> > productive.\n> \n> What is the reason behind it being counter productive? If shared\n> buffers are at 30%, should effective cache size be at 70%? How do\n> those two relate?\n\nThey don't relate. \n\nshared_buffers = 50000 is enough. More than that will give bgwriter\nissues.\n\neffective_cache_size changes whether indexes are selected or not. Higher\nsettings favour indexed access.\n\n> > \n> > Increasing sort_mem can help with various activities, but increasing it\n> > too much can cause you to swap, which kills performance. The caution is\n> > that you will likely use at least 1 sort_mem per connection, and can\n> > likely use more than one if the query is complicated.\n> \n> I have a max of 100 connections and 2 gigs of mem. Right now the sort\n> mem is a 4 megs. How much higher could I put that?\n> \n\nPlease post your server hardware config all in one go. You have more\nthan 2 CPUs, yes?\n\nAlso, mention what bgwriter settings are. You may need to turn those\ndown a bit.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 01 Jun 2005 10:06:19 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries, possibly disk io"
},
{
"msg_contents": "TIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\nBut INT2, INT4, INT8 and \"SERIAL\" are considered to be a unique datatype.\nAm I Right?\n\nThanks,\n\nMarc\n\n-- \nGeschenkt: 3 Monate GMX ProMail gratis + 3 Ausgaben stern gratis\n++ Jetzt anmelden & testen ++ http://www.gmx.net/de/go/promail ++\n",
"msg_date": "Wed, 1 Jun 2005 11:45:06 +0200 (MEST)",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "TIP 9: the planner will ignore... & datatypes "
},
{
"msg_contents": "On Wed, Jun 01, 2005 at 11:45:06AM +0200, Marc Mamin wrote:\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> \n> But INT2, INT4, INT8 and \"SERIAL\" are considered to be a unique datatype.\n> Am I Right?\n\nNo, they weren't when this tip was written. As of 8.0 however this tip\nis no longer the complete truth; we do allow cross-type index scans.\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n",
"msg_date": "Wed, 1 Jun 2005 09:40:09 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TIP 9: the planner will ignore... & datatypes"
}
] |
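To make the advice above concrete, here is a minimal sketch; the index name is invented and the literal timestamp stands in for a value computed by the caller. The important points are that the indexed column itself is left uncast and that the comparison value is a plain constant, which is what lets the 7.4 planner choose an indexscan:

  -- assumed index on the timestamp column (name is illustrative)
  CREATE INDEX tblbatchhistory_1_tstamp ON adaption.tblBatchHistory_1 (tStamp);

  -- compare against a pre-computed literal instead of an expression over now()
  SELECT COALESCE(SUM(iNumSent), 0) AS iNumSent
  FROM adaption.tblBatchHistory_1
  WHERE tStamp > '2005-05-27 09:55:00';  -- i.e. (now() - interval '5 mins') evaluated beforehand

Casting the tStamp column itself to text, as in one of the attempts above, keeps a plain index on tStamp from being used at all.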
[
{
"msg_contents": "hi-\n\ni would like to see if someone could recommend something\nto make my query run faster.\n\nSystem specs:\nPostgreSQL 7.4.2 on RedHat 9\ndual AMD Athlon 2GHz processors\n1 gig memory\nmirrored 7200 RPM IDE disks\n\nValues in postgresql.conf:\nshared_buffers = 1000\nsort_mem is commented out\neffective_cache_size is commented out\nrandom_page_cost is commented out\n\nRelevant tables:\nproduct\n-------\n id serial\n productlistid integer\n vendorid integer\n item varchar(32)\n descrip varchar(256)\n price double\n\nvendor\n------\n id serial\n vendorname varchar(64)\n\nA view i made in order to easily retrieve the vendor name:\ncreate view productvendorview as select p.id, p.productlistid, \nv.vendorname, p.item, p.descrip, p.price from product p, vendor v where \np.vendorid = v.id;\n\nHere are some indices i have created:\ncreate index product_plid on product (productlistid); \ncreate index product_plidloweritem on product (productlistid, lower(item) varchar_pattern_ops);\ncreate index product_plidlowerdescrip on product (productlistid, lower(descrip) varchar_pattern_ops);\n\nHere is the query in question:\nselect * from productvendorview where (productlistid=3 or productlistid=5 \nor productlistid=4) and (lower(item) like '9229%' or lower(descrip) like \n'toner%') order by vendorname,item limit 100;\n\nThis query scans 412,457 records.\n\nHere is the EXPLAIN ANALYZE for the query:\n\n Limit (cost=45718.83..45719.08 rows=100 width=108) (actual time=39093.636..39093.708 rows=100 loops=1)\n -> Sort (cost=45718.83..45727.48 rows=3458 width=108) (actual time=39093.629..39093.655 rows=100 loops=1)\n Sort Key: v.vendorname, p.item\n -> Hash Join (cost=22.50..45515.57 rows=3458 width=108) (actual time=95.490..39062.927 rows=2440 loops=1)\n Hash Cond: (\"outer\".vendorid = \"inner\".id)\n -> Seq Scan on test p (cost=0.00..45432.57 rows=3457 width=62) (actual time=89.066..39041.654 rows=2444 loops=1)\n Filter: (((productlistid = 3) OR (productlistid = 5) OR (productlistid = 4)) AND\n ((lower((item)::text) ~~ '9229%'::text) OR (lower((descrip)::text) ~~ 'toner%'::text)))\n -> Hash (cost=20.00..20.00 rows=1000 width=54) (actual time=6.289..6.289 rows=0 loops=1)\n -> Seq Scan on vendor v (cost=0.00..20.00 rows=1000 width=54) (actual time=0.060..3.653 rows=2797 loops=1)\n Total runtime: 39094.713 ms\n(10 rows)\n\n\nThanks!\n-Clark\n",
"msg_date": "Thu, 26 May 2005 16:54:45 -0400 (EDT)",
"msg_from": "list <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning"
},
{
"msg_contents": "list wrote:\n> hi-\n> \n> i would like to see if someone could recommend something\n> to make my query run faster.\n> \n> \n> Values in postgresql.conf:\n> shared_buffers = 1000\n> sort_mem is commented out\n> effective_cache_size is commented out\n> random_page_cost is commented out\n> \n\nI would increase shared_buffers (say 5000 - 10000), and also \neffective_cache_size (say around 20000 - 50000 - but work out how much \nmemory this box has free or cached and adjust accordingly).\n\n From your explain output, it looks like sorting is not too much of a \nproblem - so you can leave it unchanged (for this query anyway).\n\n> Here is the query in question:\n> select * from productvendorview where (productlistid=3 or \n> productlistid=5 or productlistid=4) and (lower(item) like '9229%' or \n> lower(descrip) like 'toner%') order by vendorname,item limit 100;\n>\n\nYou might want to break this into 2 queries and union them, so you can \n(potentially) use the indexes on productlistid,lower(item) and \nproductlistid, lower(descrip) separately.\n\n\n> This query scans 412,457 records.\n> \n> Here is the EXPLAIN ANALYZE for the query:\n> \n> Limit (cost=45718.83..45719.08 rows=100 width=108) (actual \n> time=39093.636..39093.708 rows=100 loops=1)\n> -> Sort (cost=45718.83..45727.48 rows=3458 width=108) (actual \n> time=39093.629..39093.655 rows=100 loops=1)\n> Sort Key: v.vendorname, p.item\n> -> Hash Join (cost=22.50..45515.57 rows=3458 width=108) \n> (actual time=95.490..39062.927 rows=2440 loops=1)\n> Hash Cond: (\"outer\".vendorid = \"inner\".id)\n> -> Seq Scan on test p (cost=0.00..45432.57 rows=3457 \n> width=62) (actual time=89.066..39041.654 rows=2444 loops=1)\n> Filter: (((productlistid = 3) OR (productlistid = \n> 5) OR (productlistid = 4)) AND\n> ((lower((item)::text) ~~ '9229%'::text) OR \n> (lower((descrip)::text) ~~ 'toner%'::text)))\n> -> Hash (cost=20.00..20.00 rows=1000 width=54) (actual \n> time=6.289..6.289 rows=0 loops=1)\n> -> Seq Scan on vendor v (cost=0.00..20.00 \n> rows=1000 width=54) (actual time=0.060..3.653 rows=2797 loops=1)\n> Total runtime: 39094.713 ms\n> (10 rows)\n> \n\nI guess the relation 'test' is a copy of product (?)\n\nCheers\n\nMark\n\n\n",
"msg_date": "Tue, 31 May 2005 10:41:59 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning"
}
] |
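A rough sketch of the "break this into 2 queries and union them" suggestion above, using the view and predicates from this thread. Whether each branch then picks up the (productlistid, lower(item)) and (productlistid, lower(descrip)) indexes still has to be checked with EXPLAIN ANALYZE; plain UNION rather than UNION ALL is used so rows matching both conditions are not returned twice:

  SELECT * FROM (
    SELECT * FROM productvendorview
    WHERE (productlistid = 3 OR productlistid = 5 OR productlistid = 4)
      AND lower(item) LIKE '9229%'
    UNION
    SELECT * FROM productvendorview
    WHERE (productlistid = 3 OR productlistid = 5 OR productlistid = 4)
      AND lower(descrip) LIKE 'toner%'
  ) u
  ORDER BY vendorname, item
  LIMIT 100;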
[
{
"msg_contents": "Hi @ all,\n\ni'm trying to tune my postgresql-db but i don't know if the values are\nright \nset. \n\nI use the following environment for the postgres-db:\n\n######### Hardware ############\ncpu: 2x P4 3Ghz \nram: 1024MB DDR 266Mhz\n\npartitions:\n/dev/sda3 23G 9,6G 13G 44% /\n/dev/sda1 11G 156M 9,9G 2% /var\n/dev/sdb1 69G 13G 57G 19% /var/lib/pgsql\n\n/dev/sda is in raid 1 (2x 35GB / 10000upm / sca)\n/dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)\n######### /Hardware ############\n\n######### Config ############\n/etc/sysctl.conf:\nkernel.shmall = 786432000\nkernel.shmmax = 786432000\n\n/etc/fstab:\n/dev/sdb1 /var/lib/pgsql reiserfs acl,user_xattr,noatime,data=writeback\n1 2\n\n/var/lib/pgsql/data/postgresql.conf\nsuperuser_reserved_connections = 2\nshared_buffers = 3000\nwork_mem = 131072\nmaintenance_work_mem = 131072\nmax_stack_depth = 2048\nmax_fsm_pages = 20000\nmax_fsm_relations = 1000\nmax_files_per_process = 1000\nvacuum_cost_delay = 10\nvacuum_cost_page_hit = 1\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\nvacuum_cost_limit = 200\nbgwriter_delay = 200\nbgwriter_percent = 1\nbgwriter_maxpages = 100\nfsync = true\nwal_sync_method = fsync\nwal_buffers = 64\ncommit_delay = 0\ncommit_siblings = 5\ncheckpoint_segments = 256\ncheckpoint_timeout = 900\ncheckpoint_warning = 30\neffective_cache_size = 10000\nrandom_page_cost = 4\ncpu_tuple_cost = 0.01\ncpu_index_tuple_cost = 0.001\ncpu_operator_cost = 0.0025\ngeqo = true\ngeqo_threshold = 12\ngeqo_effort = 5\ngeqo_pool_size = 0\ngeqo_generations = 0\ngeqo_selection_bias = 2.0\ndeadlock_timeout = 1000\nmax_locks_per_transaction = 64\n######### /Config ############\n\n######### Transactions ############\nwe have about 115-300 transactions/min in about 65 tables.\n######### /Transactions ############\n\nI'm really new at using postgres. So i need some experience to set this \nparameters in the postgresql- and the system-config. I can't find\nstandard\ncalculations for this. :/ The postgresql-documentation doesn't help me\nto\nset the best values for this.\n\nThe database must be high-availble. I configured rsync to sync the\ncomplete \n/var/lib/pgsql-directory to my hot-standby. On the hotstandby i will\nmake the\ndumps of the database to improve the performance of the master-db. \n\nIn my tests the synchronization works fine. I synchronised the hole\ndirectory\nand restarted the database of the hotstandby. While restarting,\npostgresql turned\nback the old (not archived) wals and the database of my hotstandby was \nconsistent. Is this solution recommended? Or must i use archived wal's\nwith \nreal system-snapshots?\n\nbest regards,\n\nMartin Fandel\n\n\n\n\n\n\n\nHi @ all,\n\ni'm trying to tune my postgresql-db but i don't know if the values are right \nset. 
\n\nI use the following environment for the postgres-db:\n\n######### Hardware ############\ncpu: 2x P4 3Ghz \nram: 1024MB DDR 266Mhz\n\npartitions:\n/dev/sda3 23G 9,6G 13G 44% /\n/dev/sda1 11G 156M 9,9G 2% /var\n/dev/sdb1 69G 13G 57G 19% /var/lib/pgsql\n\n/dev/sda is in raid 1 (2x 35GB / 10000upm / sca)\n/dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)\n######### /Hardware ############\n\n######### Config ############\n/etc/sysctl.conf:\nkernel.shmall = 786432000\nkernel.shmmax = 786432000\n\n/etc/fstab:\n/dev/sdb1 /var/lib/pgsql reiserfs acl,user_xattr,noatime,data=writeback 1 2\n\n/var/lib/pgsql/data/postgresql.conf\nsuperuser_reserved_connections = 2\nshared_buffers = 3000\nwork_mem = 131072\nmaintenance_work_mem = 131072\nmax_stack_depth = 2048\nmax_fsm_pages = 20000\nmax_fsm_relations = 1000\nmax_files_per_process = 1000\nvacuum_cost_delay = 10\nvacuum_cost_page_hit = 1\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\nvacuum_cost_limit = 200\nbgwriter_delay = 200\nbgwriter_percent = 1\nbgwriter_maxpages = 100\nfsync = true\nwal_sync_method = fsync\nwal_buffers = 64\ncommit_delay = 0\ncommit_siblings = 5\ncheckpoint_segments = 256\ncheckpoint_timeout = 900\ncheckpoint_warning = 30\neffective_cache_size = 10000\nrandom_page_cost = 4\ncpu_tuple_cost = 0.01\ncpu_index_tuple_cost = 0.001\ncpu_operator_cost = 0.0025\ngeqo = true\ngeqo_threshold = 12\ngeqo_effort = 5\ngeqo_pool_size = 0\ngeqo_generations = 0\ngeqo_selection_bias = 2.0\ndeadlock_timeout = 1000\nmax_locks_per_transaction = 64\n######### /Config ############\n\n######### Transactions ############\nwe have about 115-300 transactions/min in about 65 tables.\n######### /Transactions ############\n\nI'm really new at using postgres. So i need some experience to set this \nparameters in the postgresql- and the system-config. I can't find standard\ncalculations for this. :/ The postgresql-documentation doesn't help me to\nset the best values for this.\n\nThe database must be high-availble. I configured rsync to sync the complete \n/var/lib/pgsql-directory to my hot-standby. On the hotstandby i will make the\ndumps of the database to improve the performance of the master-db. \n\nIn my tests the synchronization works fine. I synchronised the hole directory\nand restarted the database of the hotstandby. While restarting, postgresql turned\nback the old (not archived) wals and the database of my hotstandby was \nconsistent. Is this solution recommended? Or must i use archived wal's with \nreal system-snapshots?\n\nbest regards,\n\nMartin Fandel",
"msg_date": "Fri, 27 May 2005 15:41:52 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Martin Fandel wrote:\n\n> Hi @ all,\n>\n> i'm trying to tune my postgresql-db but i don't know if the values are\n> right\n> set.\n>\n> I use the following environment for the postgres-db:\n>\n> ######### Hardware ############\n> cpu: 2x P4 3Ghz\n> ram: 1024MB DDR 266Mhz\n>\n> partitions:\n> /dev/sda3 23G 9,6G 13G 44% /\n> /dev/sda1 11G 156M 9,9G 2% /var\n> /dev/sdb1 69G 13G 57G 19% /var/lib/pgsql\n>\n> /dev/sda is in raid 1 (2x 35GB / 10000upm / sca)\n> /dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)\n> ######### /Hardware ############\n\nYou probably want to put the pg_xlog file onto /dev/sda rather than\nhaving it in /dev/sdb. Having it separate from the data usually boosts\nperformance a lot. I believe you can just mv it to a different\ndirectory, and then recreate it as a symlink. (Stop the database first :)\n\n>\n> ######### Config ############\n> /etc/sysctl.conf:\n> kernel.shmall = 786432000\n> kernel.shmmax = 786432000\n>\nNot really sure about these two.\n\n> /etc/fstab:\n> /dev/sdb1 /var/lib/pgsql reiserfs\n> acl,user_xattr,noatime,data=writeback 1 2\n>\nSeems decent.\n\n> /var/lib/pgsql/data/postgresql.conf\n> superuser_reserved_connections = 2\n> shared_buffers = 3000\n> work_mem = 131072\n> maintenance_work_mem = 131072\n\nThese both seem pretty large. But it depends on how many concurrent\nconnections doing sorting/hashing/etc you expect. If you are only\nexpecting 1 connection, these are probably fine. Otherwise with 1GB of\nRAM I would probably make work_mem more like 4096/8192.\nRemember, running out of work_mem means postgres will spill to disk,\nslowing that query. Running out of RAM causes the system to swap, making\neverything slow.\n\n> max_stack_depth = 2048\n> max_fsm_pages = 20000\n> max_fsm_relations = 1000\n> max_files_per_process = 1000\n> vacuum_cost_delay = 10\n> vacuum_cost_page_hit = 1\n> vacuum_cost_page_miss = 10\n> vacuum_cost_page_dirty = 20\n> vacuum_cost_limit = 200\n> bgwriter_delay = 200\n> bgwriter_percent = 1\n> bgwriter_maxpages = 100\n> fsync = true\n> wal_sync_method = fsync\n> wal_buffers = 64\n> commit_delay = 0\n> commit_siblings = 5\n> checkpoint_segments = 256\n> checkpoint_timeout = 900\n> checkpoint_warning = 30\n> effective_cache_size = 10000\n> random_page_cost = 4\n> cpu_tuple_cost = 0.01\n> cpu_index_tuple_cost = 0.001\n> cpu_operator_cost = 0.0025\n> geqo = true\n> geqo_threshold = 12\n> geqo_effort = 5\n> geqo_pool_size = 0\n> geqo_generations = 0\n> geqo_selection_bias = 2.0\n> deadlock_timeout = 1000\n> max_locks_per_transaction = 64\n> ######### /Config ############\n>\n> ######### Transactions ############\n> we have about 115-300 transactions/min in about 65 tables.\n> ######### /Transactions ############\n>\n> I'm really new at using postgres. So i need some experience to set this\n> parameters in the postgresql- and the system-config. I can't find standard\n> calculations for this. :/ The postgresql-documentation doesn't help me to\n> set the best values for this.\n>\n> The database must be high-availble. I configured rsync to sync the\n> complete\n> /var/lib/pgsql-directory to my hot-standby. On the hotstandby i will\n> make the\n> dumps of the database to improve the performance of the master-db.\n>\nI didn't think an rsync was completely valid. Probably you should look\nmore into Slony.\nhttp://slony.info\n\nIt is a single-master asynchronous replication system. I believe it is\npretty easy to setup, and does what you really want.\n\n> In my tests the synchronization works fine. 
I synchronised the hole\n> directory\n> and restarted the database of the hotstandby. While restarting,\n> postgresql turned\n> back the old (not archived) wals and the database of my hotstandby was\n> consistent. Is this solution recommended? Or must i use archived wal's\n> with\n> real system-snapshots?\n>\n> best regards,\n>\n> Martin Fandel\n\nJohn\n=:->",
"msg_date": "Tue, 31 May 2005 13:46:33 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Martin Fandel wrote:\n\n> i'm trying to tune my postgresql-db but i don't know if the values are \n> I use the following environment for the postgres-db:\n\nI assumed you're running Linux here, you don't mention it.\n\n> ######### Hardware ############\n> cpu: 2x P4 3Ghz\n> ram: 1024MB DDR 266Mhz\n\nI think 1Gb RAM is quite minimal, nowadays.\nRead below.\n\n> partitions:\n> /dev/sda3 23G 9,6G 13G 44% /\n> /dev/sda1 11G 156M 9,9G 2% /var\n> /dev/sdb1 69G 13G 57G 19% /var/lib/pgsql\n> \n> /dev/sda is in raid 1 (2x 35GB / 10000upm / sca)\n> /dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)\n\nI've seen good performance boost (and machine load lowered)\nswitching to 15k rpm disks.\n\n> ######### Config ############\n> /etc/sysctl.conf:\n> kernel.shmall = 786432000\n> kernel.shmmax = 786432000\n\nI think you have a problem here.\nkernel.shmmax should *not* be set to an amount of RAM, but\nto maximum number of shared memory pages, which on a typical linux system\nis 4kb. Google around:\n\n http://www.google.com/search?q=kernel.shmall+tuning+postgresql+shared+memory\n\n> /etc/fstab:\n> /dev/sdb1 /var/lib/pgsql reiserfs acl,user_xattr,noatime,data=writeback 1 2\n\nI use similar settings on ext3 (which I'm told it is slower than reiser\nor xfs or jfs).\n\nI indicate the values I use for a machine with 4Gb RAM\nand more 15 krpm disks but layout similar to yours.\n(3 x RAID1 arrays for os, logs, ... and 1 x RAID10 array with 12 disks)\n\nFor Pg configuration (others please comment on these values,\nit is invaluable to have feedback from this list).\n\n> /var/lib/pgsql/data/postgresql.conf\n> superuser_reserved_connections = 2\n> shared_buffers = 3000\n16384\n\n> work_mem = 131072\n32768\n\n> maintenance_work_mem = 131072\n262144\n\n> max_fsm_pages = 20000\n200000\n\n> fsync = true\nfalse\n\n> commit_delay = 0\n> commit_siblings = 5\nIf you have an high transactions volume, you should\nreally investigate on these ones.\n\n> effective_cache_size = 10000\n40000\n\n> random_page_cost = 4\nCheck out for unwanted \"seq scans\". If you have really fast\ndisks, you should experiment lowering a little this parameter.\n\n> max_locks_per_transaction = 64\n512\n\n> I'm really new at using postgres. So i need some experience to set this\n> parameters in the postgresql- and the system-config. I can't find standard\n> calculations for this. :/ The postgresql-documentation doesn't help me to\n> set the best values for this.\n\nThere's no such thing as \"standard calculations\" :-)\n\n> The database must be high-availble. I configured rsync to sync the complete\n> /var/lib/pgsql-directory to my hot-standby\n > [...]\n> In my tests the synchronization works fine. I synchronised the hole \n> consistent.\n > [...]\n > Is this solution recommended? Or must i use archived wal's with\n> real system-snapshots?\n\nIn some situations, I also used rsync to do the job.\nObviously, always stop the postmaster before syncing.\n\nMaybe you can look at \"slony\", if you haven't yet.\n\n http://www.slony.info\n\n-- \nCosimo\n",
"msg_date": "Wed, 01 Jun 2005 07:30:37 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
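One way to sanity-check a max_fsm_pages figure like the 200000 suggested above is a database-wide VACUUM VERBOSE on the 8.0 cluster; if memory serves, the last few INFO lines summarise how many relations and pages the free space map actually needs, which can then be compared with the configured limit. A minimal sketch (run as a superuser):

    -- the tail of the output reports free space map usage,
    -- along the lines of "... pages stored; ... total pages needed"
    VACUUM VERBOSE;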
{
"msg_contents": "Cosimo Streppone wrote:\n> ######### Config ############\n>> /etc/sysctl.conf:\n>> kernel.shmall = 786432000\n>> kernel.shmmax = 786432000\n> \n> \n> I think you have a problem here.\n> kernel.shmmax should *not* be set to an amount of RAM, but\n> to maximum number of shared memory pages, which on a typical linux system\n> is 4kb. Google around:\n> \n> \n>\n\nThis is somewhat confusing :\n\nkernel.shmmax is in bytes (max single segment size)\nkernel.shmall is in (4k) pages (max system wide allocated segment pages)\n\ncheers\n\nMark\n\n\n",
"msg_date": "Wed, 01 Jun 2005 18:33:22 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
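To make the units concrete, here is a small illustrative /etc/sysctl.conf fragment expressing the same 128 MB limit in each parameter's own unit (the figure is an example only, not a recommendation from this thread):

    kernel.shmmax = 134217728    # bytes        (128 * 1024 * 1024)
    kernel.shmall = 32768        # 4 kB pages   (134217728 / 4096)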
{
"msg_contents": "Mark Kirkwood ha scritto:\n\n> Cosimo Streppone wrote:\n> \n>> ######### Config ############\n>>\n>>> /etc/sysctl.conf:\n>>> kernel.shmall = 786432000\n>>> kernel.shmmax = 786432000\n>>\n>> I think you have a problem here.\n>> kernel.shmmax should *not* be set to an amount of RAM, but\n\nSorry, I thought \"shmall\" but written \"shmmax\".\nThanks Mark!\n\n-- \nCosimo\n",
"msg_date": "Wed, 01 Jun 2005 08:44:18 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Cosimo Streppone wrote:\n> Mark Kirkwood ha scritto:\n> \n>> Cosimo Streppone wrote:\n>>\n>>> ######### Config ############\n>>>\n>>>> /etc/sysctl.conf:\n>>>> kernel.shmall = 786432000\n>>>> kernel.shmmax = 786432000\n>>>\n>>>\n>>> I think you have a problem here.\n>>> kernel.shmmax should *not* be set to an amount of RAM, but\n> \n> \n> Sorry, I thought \"shmall\" but written \"shmmax\".\n> Thanks Mark!\n> \n\nHehe - happens to me all the time!\n\nOn the shmall front - altho there is *probably* no real performance \nimpact setting it to the same as shmmax (i.e. allowing 4096 allocations \nof size shmmax!), it is overkill. In addition it does allow for a DOS by \na program that allocates thousands of segments (or somehow starts \nthousands of Pg servers on different ports...)!\n\nFor a dedicated Pg server I would size shmall using a calculation along \nthe lines of:\n\nshmall = (no. of postgresql servers) * (shmmax/4096)\n\n\nIf there are other daemons on the box that need to use shared memory, \nthen add their likely requirements to shmall too!\n\ncheers\n\nMark\n",
"msg_date": "Wed, 01 Jun 2005 19:22:42 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Hi John,\n\nthank you very much for the answer :). I moved the pg_xlog to another\npartition and made a symlink to it. Know the database is much more\nfaster than before. A sample select which was finished in 68seconds \nbefore, is now finished in 58seconds :).\n\nI will test the other changes today also and will write a feedback\nafter testing. :) \n\nThanks a lot. I'm very confusing to tuning the postgresql-db. #:-)\n\nbest regards\nMartin\n\n\nAm Dienstag, den 31.05.2005, 13:46 -0500 schrieb John A Meinel:\n> Martin Fandel wrote:\n> \n> > Hi @ all,\n> >\n> > i'm trying to tune my postgresql-db but i don't know if the values are\n> > right\n> > set.\n> >\n> > I use the following environment for the postgres-db:\n> >\n> > ######### Hardware ############\n> > cpu: 2x P4 3Ghz\n> > ram: 1024MB DDR 266Mhz\n> >\n> > partitions:\n> > /dev/sda3 23G 9,6G 13G 44% /\n> > /dev/sda1 11G 156M 9,9G 2% /var\n> > /dev/sdb1 69G 13G 57G 19% /var/lib/pgsql\n> >\n> > /dev/sda is in raid 1 (2x 35GB / 10000upm / sca)\n> > /dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)\n> > ######### /Hardware ############\n> \n> You probably want to put the pg_xlog file onto /dev/sda rather than\n> having it in /dev/sdb. Having it separate from the data usually boosts\n> performance a lot. I believe you can just mv it to a different\n> directory, and then recreate it as a symlink. (Stop the database first :)\n> \n> >\n> > ######### Config ############\n> > /etc/sysctl.conf:\n> > kernel.shmall = 786432000\n> > kernel.shmmax = 786432000\n> >\n> Not really sure about these two.\n> \n> > /etc/fstab:\n> > /dev/sdb1 /var/lib/pgsql reiserfs\n> > acl,user_xattr,noatime,data=writeback 1 2\n> >\n> Seems decent.\n> \n> > /var/lib/pgsql/data/postgresql.conf\n> > superuser_reserved_connections = 2\n> > shared_buffers = 3000\n> > work_mem = 131072\n> > maintenance_work_mem = 131072\n> \n> These both seem pretty large. But it depends on how many concurrent\n> connections doing sorting/hashing/etc you expect. If you are only\n> expecting 1 connection, these are probably fine. Otherwise with 1GB of\n> RAM I would probably make work_mem more like 4096/8192.\n> Remember, running out of work_mem means postgres will spill to disk,\n> slowing that query. Running out of RAM causes the system to swap, making\n> everything slow.\n> \n> > max_stack_depth = 2048\n> > max_fsm_pages = 20000\n> > max_fsm_relations = 1000\n> > max_files_per_process = 1000\n> > vacuum_cost_delay = 10\n> > vacuum_cost_page_hit = 1\n> > vacuum_cost_page_miss = 10\n> > vacuum_cost_page_dirty = 20\n> > vacuum_cost_limit = 200\n> > bgwriter_delay = 200\n> > bgwriter_percent = 1\n> > bgwriter_maxpages = 100\n> > fsync = true\n> > wal_sync_method = fsync\n> > wal_buffers = 64\n> > commit_delay = 0\n> > commit_siblings = 5\n> > checkpoint_segments = 256\n> > checkpoint_timeout = 900\n> > checkpoint_warning = 30\n> > effective_cache_size = 10000\n> > random_page_cost = 4\n> > cpu_tuple_cost = 0.01\n> > cpu_index_tuple_cost = 0.001\n> > cpu_operator_cost = 0.0025\n> > geqo = true\n> > geqo_threshold = 12\n> > geqo_effort = 5\n> > geqo_pool_size = 0\n> > geqo_generations = 0\n> > geqo_selection_bias = 2.0\n> > deadlock_timeout = 1000\n> > max_locks_per_transaction = 64\n> > ######### /Config ############\n> >\n> > ######### Transactions ############\n> > we have about 115-300 transactions/min in about 65 tables.\n> > ######### /Transactions ############\n> >\n> > I'm really new at using postgres. 
So i need some experience to set this\n> > parameters in the postgresql- and the system-config. I can't find standard\n> > calculations for this. :/ The postgresql-documentation doesn't help me to\n> > set the best values for this.\n> >\n> > The database must be high-availble. I configured rsync to sync the\n> > complete\n> > /var/lib/pgsql-directory to my hot-standby. On the hotstandby i will\n> > make the\n> > dumps of the database to improve the performance of the master-db.\n> >\n> I didn't think an rsync was completely valid. Probably you should look\n> more into Slony.\n> http://slony.info\n> \n> It is a single-master asynchronous replication system. I believe it is\n> pretty easy to setup, and does what you really want.\n> \n> > In my tests the synchronization works fine. I synchronised the hole\n> > directory\n> > and restarted the database of the hotstandby. While restarting,\n> > postgresql turned\n> > back the old (not archived) wals and the database of my hotstandby was\n> > consistent. Is this solution recommended? Or must i use archived wal's\n> > with\n> > real system-snapshots?\n> >\n> > best regards,\n> >\n> > Martin Fandel\n> \n> John\n> =:->\n> \n\n",
"msg_date": "Wed, 1 Jun 2005 10:50:31 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
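For reference, a rough sketch of the pg_xlog relocation described above, assuming the data directory is /var/lib/pgsql/data and that /var/pg_xlog lives on the RAID1 pair; the paths are assumptions, and the server must be stopped first:

    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /var/pg_xlog        # target directory assumed
    ln -s /var/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start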
{
"msg_contents": "On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n>>fsync = true\n> false\n\nJust setting fsync=false without considering the implications is a _bad_\nidea...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 1 Jun 2005 11:57:54 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Steinar wrote:\n\n> On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> \n> > > fsync = true\n> > false\n>\n> Just setting fsync=false without considering the implications is a _bad_\n> idea...\n\nI totally agree on that.\n\n-- \nCosimo\n\n",
"msg_date": "Wed, 01 Jun 2005 12:22:08 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Yes, i think also that this setting should be enabled :). \n\nAm Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> >>fsync = true\n> > false\n> \n> Just setting fsync=false without considering the implications is a _bad_\n> idea...\n> \n> /* Steinar */\n\n",
"msg_date": "Wed, 1 Jun 2005 12:26:29 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Hi,\n\nhmmm i don't understand which are the best values for shmmax and shmall.\nI've googled around but every site says something different.\n\nI've 2GB of RAM now and set it to:\n\nkernel.shmmax=715827882\nkernel.shmall=2097152\n\nIs that value ok for 2GB of RAM?\n\nI've set the shared_buffers in my postgresql.conf to 87381 \n(87381*8*1024 = ~715827882). \n\nCan I use www.powerpostgresql.com as reference to set this \nparameters? Or which site can i use?\n\nBest regards,\nMartin\n\nAm Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> >>fsync = true\n> > false\n> \n> Just setting fsync=false without considering the implications is a\n_bad_\n> idea...\n> \n> /* Steinar */\n\n\nAm Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> >>fsync = true\n> > false\n> \n> Just setting fsync=false without considering the implications is a _bad_\n> idea...\n> \n> /* Steinar */\n\n",
"msg_date": "Thu, 2 Jun 2005 14:50:00 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "Ups,\ni'm sorry. i've set the following values:\n\npostgresql.conf:\nshared_buffers = 70000\neffective_cache_size = 1744762\nwork_mem = 32768\nmaintenance_work_mem = 262144\nmax_fsm_pages = 200000\n\nsysctl.conf:\nvm.swappiness=10\nkernel.shmmax=715827882\nkernel.shmall=2097152\n\nAre the values ok for a 2 GB machine? I'm testing these settings\nwith contrib/pgbench. With this configuration i become up to 200tps\nincluding connection establishing. Is that value ok for this hardware?:\n\n1xP4 3Ghz (hyperthreading enabled)\n2GB 266 Mhz RAM CL2.5\n\npg_xlog is on sda (raid1 with two 10k discs) and the database on\nsdb(raid10 with four 10k discs).\n\nMy Linux distribution is Suse Linux 9.3 with postgresql 8.0.1.\n\nbest regards,\nMartin\n\nAm Donnerstag, den 02.06.2005, 14:50 +0200 schrieb Martin Fandel:\n> Hi,\n> \n> hmmm i don't understand which are the best values for shmmax and shmall.\n> I've googled around but every site says something different.\n> \n> I've 2GB of RAM now and set it to:\n> \n> kernel.shmmax=715827882\n> kernel.shmall=2097152\n> \n> Is that value ok for 2GB of RAM?\n> \n> I've set the shared_buffers in my postgresql.conf to 87381 \n> (87381*8*1024 = ~715827882). \n> \n> Can I use www.powerpostgresql.com as reference to set this \n> parameters? Or which site can i use?\n> \n> Best regards,\n> Martin\n> \n> Am Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> > On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> > >>fsync = true\n> > > false\n> > \n> > Just setting fsync=false without considering the implications is a\n> _bad_\n> > idea...\n> > \n> > /* Steinar */\n> \n> \n> Am Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> > On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> > >>fsync = true\n> > > false\n> > \n> > Just setting fsync=false without considering the implications is a _bad_\n> > idea...\n> > \n> > /* Steinar */\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Thu, 2 Jun 2005 15:10:03 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
{
"msg_contents": "I've forgotten the settings for the pgbench-tests. I use 150 clients\nwith 5 transactions each.\n\n\nAm Donnerstag, den 02.06.2005, 15:10 +0200 schrieb Martin Fandel:\n> Ups,\n> i'm sorry. i've set the following values:\n> \n> postgresql.conf:\n> shared_buffers = 70000\n> effective_cache_size = 1744762\n> work_mem = 32768\n> maintenance_work_mem = 262144\n> max_fsm_pages = 200000\n> \n> sysctl.conf:\n> vm.swappiness=10\n> kernel.shmmax=715827882\n> kernel.shmall=2097152\n> \n> Are the values ok for a 2 GB machine? I'm testing these settings\n> with contrib/pgbench. With this configuration i become up to 200tps\n> including connection establishing. Is that value ok for this hardware?:\n> \n> 1xP4 3Ghz (hyperthreading enabled)\n> 2GB 266 Mhz RAM CL2.5\n> \n> pg_xlog is on sda (raid1 with two 10k discs) and the database on\n> sdb(raid10 with four 10k discs).\n> \n> My Linux distribution is Suse Linux 9.3 with postgresql 8.0.1.\n> \n> best regards,\n> Martin\n> \n> Am Donnerstag, den 02.06.2005, 14:50 +0200 schrieb Martin Fandel:\n> > Hi,\n> > \n> > hmmm i don't understand which are the best values for shmmax and shmall.\n> > I've googled around but every site says something different.\n> > \n> > I've 2GB of RAM now and set it to:\n> > \n> > kernel.shmmax=715827882\n> > kernel.shmall=2097152\n> > \n> > Is that value ok for 2GB of RAM?\n> > \n> > I've set the shared_buffers in my postgresql.conf to 87381 \n> > (87381*8*1024 = ~715827882). \n> > \n> > Can I use www.powerpostgresql.com as reference to set this \n> > parameters? Or which site can i use?\n> > \n> > Best regards,\n> > Martin\n> > \n> > Am Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> > > On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> > > >>fsync = true\n> > > > false\n> > > \n> > > Just setting fsync=false without considering the implications is a\n> > _bad_\n> > > idea...\n> > > \n> > > /* Steinar */\n> > \n> > \n> > Am Mittwoch, den 01.06.2005, 11:57 +0200 schrieb Steinar H. Gunderson:\n> > > On Wed, Jun 01, 2005 at 07:30:37AM +0200, Cosimo Streppone wrote:\n> > > >>fsync = true\n> > > > false\n> > > \n> > > Just setting fsync=false without considering the implications is a _bad_\n> > > idea...\n> > > \n> > > /* Steinar */\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n\n",
"msg_date": "Thu, 2 Jun 2005 15:12:10 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-8.0.1 performance tuning"
},
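For anyone reproducing the ~200 tps figure, a contrib/pgbench run along those lines might look like the sketch below. The scale factor is an assumption (it is not stated in the thread); as a rule of thumb it should be at least as large as the client count, otherwise many clients contend for the same rows in the branches table.

    createdb pgbench
    pgbench -i -s 150 pgbench       # initialise test tables; scale factor assumed
    pgbench -c 150 -t 5 pgbench     # 150 clients, 5 transactions each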
{
"msg_contents": "On 6/1/05, Mark Kirkwood <[email protected]> wrote:\n> Cosimo Streppone wrote:\n> > ######### Config ############\n> >> /etc/sysctl.conf:\n> >> kernel.shmall = 786432000\n> >> kernel.shmmax = 786432000\n> >\n> > I think you have a problem here.\n> > kernel.shmmax should *not* be set to an amount of RAM, but\n> > to maximum number of shared memory pages, which on a typical linux system\n> > is 4kb. Google around:\n> >\n> This is somewhat confusing :\n> \n> kernel.shmmax is in bytes (max single segment size)\n> kernel.shmall is in (4k) pages (max system wide allocated segment pages)\n\nCan someone resummarize the situation with these linux parameters for\nthe dummies? I thought I had my calculations all sorted out but now\nI've confused myself again.\n\nThe documentation at\nhttp://www.postgresql.org/docs/8.0/interactive/kernel-resources.html\nputs the same figure into both values but the posts here seem to\nsuggest that is wrong?\nOr is it different on a 2.4 kernel and the documentation needs updating?\n\nIn my specific case I have about 800meg of memory on a linux 2.4 kernel box.\n\nBased on the powerpostgresql.com Performance Checklist [1] and\nAnnotated Postgresql.conf [2] I understand that:\n-I should have less than 1/3 of my total memory as shared_buffers\n-For my server 15000 is a fairly reasonable starting point for\nshared_buffers which is ~120MB\n-I have 100 max_connections.\n\nSo I was going to set SHMMAX to 134217728 (ie 128 Meg)\n\nWhat should SHMALL be?\n\nThe current system values are\npostgres@localhost:~/data$ cat /proc/sys/kernel/shmmax\n33554432\npostgres@localhost:~/data$ cat /proc/sys/kernel/shmall\n2097152\n\nie SHMALL seems to be 1/16 of SHMMAX\n\n\nPaul\n\n[1] http://www.powerpostgresql.com/PerfList/\n[2] http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n",
"msg_date": "Fri, 3 Jun 2005 13:45:38 +1000",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": false,
"msg_subject": "SHMMAX / SHMALL Was (Re: postgresql-8.0.1 performance tuning)"
},
{
"msg_contents": "Paul McGarry wrote:\n\n> Based on the powerpostgresql.com Performance Checklist [1] and\n> Annotated Postgresql.conf [2] I understand that:\n> -I should have less than 1/3 of my total memory as shared_buffers\n> -For my server 15000 is a fairly reasonable starting point for\n> shared_buffers which is ~120MB\n> -I have 100 max_connections.\n> \n> So I was going to set SHMMAX to 134217728 (ie 128 Meg)\n> \n> What should SHMALL be?\n> \n> The current system values are\n> postgres@localhost:~/data$ cat /proc/sys/kernel/shmmax\n> 33554432\n> postgres@localhost:~/data$ cat /proc/sys/kernel/shmall\n> 2097152\n> \n> ie SHMALL seems to be 1/16 of SHMMAX\n> \n\nNo - shmall is in 4k pages _ so this amounts to 8G! This is fine - \nunless you wish to decrease it in order to prevent too many shared \nmemory applications running.\n\nBTW - the docs have been amended for 8.1 to suggest shmmax=134217728 and \nshmall=2097152 (was going to point you at them - but I cannot find them \non the Postgresql site anymore...).\n\nThere seems to be some longstanding confusion in the Linux community \nabout the units for shmall (some incorrect documentation from Oracle on \nthe issue does not help I am sure....) - to the point where I downloaded \nkernel source to check (reproducing here):\n\n\nlinux-2.6.11.1/include/linux/shm.h:13->\n\n#define SHMMAX 0x2000000 /* max shared seg size (bytes) */\n#define SHMMIN 1 /* min shared seg size (bytes) */\n#define SHMMNI 4096 /* max num of segs system wide */\n#define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide \n(pages) */\n#define SHMSEG SHMMNI\n\n\nHope that helps\n\nBest wishes\n\nMark\n",
"msg_date": "Fri, 03 Jun 2005 18:11:54 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SHMMAX / SHMALL Was (Re: postgresql-8.0.1 performance"
},
{
"msg_contents": "Aah ok :) \n\nI've set my values now as follow (2GB RAM):\n\nSHMMAX=`cat /proc/meminfo | grep MemTotal | cut -d: -f 2 | awk '{print\n$1*1024/3}'`\necho kernel.shmmax=${SHMMAX} >> /etc/sysctl.conf\nSHMALL=`expr ${SHMALL} / 4096 \\* \\( 4096 / 16 \\)`\necho kernel.shmall=${SHMALL} >> /etc/sysctl.conf\n\nsysctl.conf:\nkernel.shmmax=708329472\nkernel.shmall=44270592\n\npostgresql.conf:\nmax_connections=500\nshared_buffers=40000 # ~312MB, min. 1000, max ~ 83000\n\nbest regards,\nMartin\n\n\nAm Freitag, den 03.06.2005, 18:11 +1200 schrieb Mark Kirkwood:\n> Paul McGarry wrote:\n> \n> > Based on the powerpostgresql.com Performance Checklist [1] and\n> > Annotated Postgresql.conf [2] I understand that:\n> > -I should have less than 1/3 of my total memory as shared_buffers\n> > -For my server 15000 is a fairly reasonable starting point for\n> > shared_buffers which is ~120MB\n> > -I have 100 max_connections.\n> > \n> > So I was going to set SHMMAX to 134217728 (ie 128 Meg)\n> > \n> > What should SHMALL be?\n> > \n> > The current system values are\n> > postgres@localhost:~/data$ cat /proc/sys/kernel/shmmax\n> > 33554432\n> > postgres@localhost:~/data$ cat /proc/sys/kernel/shmall\n> > 2097152\n> > \n> > ie SHMALL seems to be 1/16 of SHMMAX\n> > \n> \n> No - shmall is in 4k pages _ so this amounts to 8G! This is fine - \n> unless you wish to decrease it in order to prevent too many shared \n> memory applications running.\n> \n> BTW - the docs have been amended for 8.1 to suggest shmmax=134217728 and \n> shmall=2097152 (was going to point you at them - but I cannot find them \n> on the Postgresql site anymore...).\n> \n> There seems to be some longstanding confusion in the Linux community \n> about the units for shmall (some incorrect documentation from Oracle on \n> the issue does not help I am sure....) - to the point where I downloaded \n> kernel source to check (reproducing here):\n> \n> \n> linux-2.6.11.1/include/linux/shm.h:13->\n> \n> #define SHMMAX 0x2000000 /* max shared seg size (bytes) */\n> #define SHMMIN 1 /* min shared seg size (bytes) */\n> #define SHMMNI 4096 /* max num of segs system wide */\n> #define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide \n> (pages) */\n> #define SHMSEG SHMMNI\n> \n> \n> Hope that helps\n> \n> Best wishes\n> \n> Mark\n\n",
"msg_date": "Fri, 3 Jun 2005 10:49:04 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SHMMAX / SHMALL Was (Re: postgresql-8.0.1 performance tuning)"
},
{
"msg_contents": "Martin Fandel wrote:\n> Aah ok :) \n> \n> I've set my values now as follow (2GB RAM):\n> \n> SHMMAX=`cat /proc/meminfo | grep MemTotal | cut -d: -f 2 | awk '{print\n> $1*1024/3}'`\n> echo kernel.shmmax=${SHMMAX} >> /etc/sysctl.conf\n> SHMALL=`expr ${SHMALL} / 4096 \\* \\( 4096 / 16 \\)`\n> echo kernel.shmall=${SHMALL} >> /etc/sysctl.conf\n> \n> sysctl.conf:\n> kernel.shmmax=708329472\n> kernel.shmall=44270592\n> \n> postgresql.conf:\n> max_connections=500\n> shared_buffers=40000 # ~312MB, min. 1000, max ~ 83000\n> \n\nHmmm - shmall set to 168G... err why? Apologies for nit picking a little \n- but shmall seems unreasonably high. I can't see much reason for \nsetting it bigger than (physical RAM in bytes)/4096 myself. So in your \ncase this is 2*(1024*1024*1024)/4096 = 524288\n\nCheers\n\nMark\n\n",
"msg_date": "Fri, 03 Jun 2005 21:10:24 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SHMMAX / SHMALL Was (Re: postgresql-8.0.1 performance"
},
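A corrected version of the earlier shell snippet, following the rule of thumb above: derive shmall from physical RAM in 4 kB pages instead of from an unset variable. This is only a sketch; the 1/3-of-RAM heuristic for shmmax is the one already used in this thread.

    MEM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
    SHMMAX=$((MEM_BYTES / 3))        # max single segment, in bytes
    SHMALL=$((MEM_BYTES / 4096))     # system-wide limit, in 4 kB pages
    echo "kernel.shmmax=$SHMMAX" >> /etc/sysctl.conf
    echo "kernel.shmall=$SHMALL" >> /etc/sysctl.conf
    sysctl -p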
{
"msg_contents": "ok i set it to 524288. ;)\n\nAm Freitag, den 03.06.2005, 21:10 +1200 schrieb Mark Kirkwood:\n> Martin Fandel wrote:\n> > Aah ok :) \n> > \n> > I've set my values now as follow (2GB RAM):\n> > \n> > SHMMAX=`cat /proc/meminfo | grep MemTotal | cut -d: -f 2 | awk '{print\n> > $1*1024/3}'`\n> > echo kernel.shmmax=${SHMMAX} >> /etc/sysctl.conf\n> > SHMALL=`expr ${SHMALL} / 4096 \\* \\( 4096 / 16 \\)`\n> > echo kernel.shmall=${SHMALL} >> /etc/sysctl.conf\n> > \n> > sysctl.conf:\n> > kernel.shmmax=708329472\n> > kernel.shmall=44270592\n> > \n> > postgresql.conf:\n> > max_connections=500\n> > shared_buffers=40000 # ~312MB, min. 1000, max ~ 83000\n> > \n> \n> Hmmm - shmall set to 168G... err why? Apologies for nit picking a little \n> - but shmall seems unreasonably high. I can't see much reason for \n> setting it bigger than (physical RAM in bytes)/4096 myself. So in your \n> case this is 2*(1024*1024*1024)/4096 = 524288\n> \n> Cheers\n> \n> Mark\n> \n\n",
"msg_date": "Fri, 3 Jun 2005 11:19:03 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SHMMAX / SHMALL Was (Re: postgresql-8.0.1 performance tuning)"
}
] |
[
{
"msg_contents": "What are the effect of having a table with arround 500 insert/update/delete on two to eight table in a time frame of 2 minutes 24/24h, when you have oid enabled versus the same setup when you dont have oid?\n\nThat deployment is done on a postgres with 8 to 9 databases, each having those 2 to 8 high load tables with oid enabled.\n\nWould the oid colum slow down table scan when you have over 20 millions row?\n\nWould the cost of maintaining the oid column inside thoses high load tables when there is no oid reference used for data seeking costy for postgres ressources!?\n\n\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858 \n",
"msg_date": "Fri, 27 May 2005 13:05:57 -0400",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "OID vs overall system performances on high load databases."
},
{
"msg_contents": "On Fri, 2005-05-27 at 13:05 -0400, Eric Lauzon wrote:\n> What are the effect of having a table with arround 500\n> insert/update/delete on two to eight table in a time frame of 2\n> minutes 24/24h, when you have oid enabled versus the same setup when\n> you dont have oid?\n> \n> That deployment is done on a postgres with 8 to 9 databases, each\n> having those 2 to 8 high load tables with oid enabled.\n> \n> Would the oid colum slow down table scan when you have over 20\n> millions row?\n> \n> Would the cost of maintaining the oid column inside thoses high load\n> tables when there is no oid reference used for data seeking costy for\n> postgres ressources!?\n\nThe OID column is an extra few bytes on each row. If you don't have any\nuse for it (and let's face it: most of us don't), then create your\ntables \"without OID\".\n\nThe amount of impact that it makes will depend on what the general row\nsize is. If they are rows with a couple of integers then the size of an\nOID column will be a significant portion of the size of each row, and\nremoving it will make the physical on-disk data size significantly\nsmaller. If the size of the average row is (e.g.) 2k then the OID will\nonly be a very small fraction of the data, and removing it will only\nmake a small difference.\n\nRegards,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n ... I want a COLOR T.V. and a VIBRATING BED!!!\n-------------------------------------------------------------------------",
"msg_date": "Sat, 28 May 2005 17:08:10 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID vs overall system performances on high load"
},
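As a concrete illustration of the "without OID" advice (the table and column names below are invented, not from the poster's schema):

    CREATE TABLE event_log (
        id       bigserial PRIMARY KEY,
        created  timestamp NOT NULL DEFAULT now(),
        payload  text
    ) WITHOUT OIDS;

    -- in 8.0 an existing table can reportedly also be converted with
    -- ALTER TABLE ... SET WITHOUT OIDS (check the ALTER TABLE docs)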
{
"msg_contents": "\n\n> The OID column is an extra few bytes on each row. If you don't have any\n> use for it (and let's face it: most of us don't), then create your\n> tables \"without OID\".\n\n\n\tAlso there are some useful hacks using the oid which don't work if it \nwraps around, thus preventing it from wrapping around by not using on \nevery table could be useful in some cases...\n",
"msg_date": "Sat, 28 May 2005 19:36:42 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID vs overall system performances on high load"
}
] |
[
{
"msg_contents": "Would I be correct in assuming that the following two indexes are \ncompletely redundant except for the fact that one complains about \nuniqueness constraint violations and the other does not?\n\nOr is there are legitimate use for having BOTH indexes?\n\nI'm trying to figure out if it's okay to delete the non-unique index.\n(I have a bunch of tables suffering this malady from some problematic \napplication code).\n\n Table \"public.erf\"\n Column | Type | Modifiers\n --------+---------+-----------\n rid | integer | not null\n cid | integer | not null\n Indexes: erf_rid_key unique btree (rid),\n\t erf_rid_idx btree (rid)\n\n Index \"public.erf_rid_idx\"\n Column | Type\n --------+---------\n rid | integer\n btree, for table \"public.erf\"\n\n Index \"public.erf_rid_key\"\n Column | Type\n --------+---------\n rid | integer\n unique, btree, for table \"public.erf\"\n\n\n",
"msg_date": "Fri, 27 May 2005 17:28:01 -0400",
"msg_from": "Jeffrey Tenny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Redundant indexes?"
},
{
"msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> Would I be correct in assuming that the following two indexes are \n> completely redundant except for the fact that one complains about \n> uniqueness constraint violations and the other does not?\n\nYup ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 May 2005 20:09:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant indexes? "
}
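Given that confirmation, the cleanup for the example table is presumably just to drop the non-unique index; wrapping it in a transaction keeps the option of rolling back:

    BEGIN;
    DROP INDEX erf_rid_idx;    -- erf_rid_key (unique) already covers rid
    COMMIT;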
] |
[
{
"msg_contents": "Hi -\n\nI have a table of about 3 million rows of city \"aliases\" that I need \nto query using LIKE - for example:\n\nselect * from city_alias where city_name like '%FRANCISCO'\n\n\nWhen I do an EXPLAIN ANALYZE on the above query, the result is:\n\n Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) \n(actual time=73.369..3330.281 rows=407 loops=1)\n Filter: ((name)::text ~~ '%FRANCISCO'::text)\nTotal runtime: 3330.524 ms\n(3 rows)\n\n\nthis is a query that our system needs to do a LOT. Is there any way \nto improve the performance on this either with changes to our query \nor by configuring the database deployment? We have an index on \ncity_name but when using the % operator on the front of the query \nstring postgresql can't use the index .\n\nThanks for any help.\n\nMike\n",
"msg_date": "Sun, 29 May 2005 08:27:26 -0500",
"msg_from": "Michael Engelhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequential scan performance"
},
{
"msg_contents": "On Sun, May 29, 2005 at 08:27:26AM -0500, Michael Engelhart wrote:\n> this is a query that our system needs to do a LOT. Is there any way \n> to improve the performance on this either with changes to our query \n> or by configuring the database deployment? We have an index on \n> city_name but when using the % operator on the front of the query \n> string postgresql can't use the index .\n\nTry tsearch2 from contrib, it might help you.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 29 May 2005 15:47:13 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan performance"
},
{
"msg_contents": "> When I do an EXPLAIN ANALYZE on the above query, the result is:\n> \n> Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) (actual \n> time=73.369..3330.281 rows=407 loops=1)\n> Filter: ((name)::text ~~ '%FRANCISCO'::text)\n> Total runtime: 3330.524 ms\n> (3 rows)\n> \n> \n> this is a query that our system needs to do a LOT. Is there any way \n> to improve the performance on this either with changes to our query or \n> by configuring the database deployment? We have an index on city_name \n> but when using the % operator on the front of the query string \n> postgresql can't use the index .\n\nOf course not. There really is now way to make your literal query above \nfast. You could try making a functional index on the reverse() of the \nstring and querying for the reverse() of 'francisco'.\n\nOr, if you want a general full text index, you should absolutely be \nusing contrib/tsearch2.\n\nChris\n",
"msg_date": "Sun, 29 May 2005 22:43:08 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan performance"
},
{
"msg_contents": "Michael,\n\nI'd recommend our contrib/pg_trgm module, which provides\ntrigram based fuzzy search and return results ordered by similarity\nto your query. Read http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm\nfor more details.\n\nOleg\nOn Sun, 29 May 2005, Michael Engelhart wrote:\n\n> Hi -\n>\n> I have a table of about 3 million rows of city \"aliases\" that I need to query \n> using LIKE - for example:\n>\n> select * from city_alias where city_name like '%FRANCISCO'\n>\n>\n> When I do an EXPLAIN ANALYZE on the above query, the result is:\n>\n> Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) (actual \n> time=73.369..3330.281 rows=407 loops=1)\n> Filter: ((name)::text ~~ '%FRANCISCO'::text)\n> Total runtime: 3330.524 ms\n> (3 rows)\n>\n>\n> this is a query that our system needs to do a LOT. Is there any way to \n> improve the performance on this either with changes to our query or by \n> configuring the database deployment? We have an index on city_name but when \n> using the % operator on the front of the query string postgresql can't use \n> the index .\n>\n> Thanks for any help.\n>\n> Mike\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Sun, 29 May 2005 23:44:32 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan performance"
},
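For completeness, a sketch of what the pg_trgm approach might look like, loosely following the README linked above; the opclass, operator and function names are taken from that module and should be double-checked against the installed version:

    -- after loading contrib/pg_trgm into the database:
    CREATE INDEX city_name_trgm_idx ON city_alias
        USING gist (city_name gist_trgm_ops);

    SELECT city_name, similarity(city_name, 'FRANCISCO') AS sml
    FROM city_alias
    WHERE city_name % 'FRANCISCO'     -- trigram similarity match
    ORDER BY sml DESC;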
{
"msg_contents": "Thanks everyone for all the suggestions. I'll check into those \ncontrib modules.\n\nMichael\nOn May 29, 2005, at 2:44 PM, Oleg Bartunov wrote:\n\n> Michael,\n>\n> I'd recommend our contrib/pg_trgm module, which provides\n> trigram based fuzzy search and return results ordered by similarity\n> to your query. Read http://www.sai.msu.su/~megera/postgres/gist/ \n> pg_trgm/README.pg_trgm\n> for more details.\n>\n> Oleg\n> On Sun, 29 May 2005, Michael Engelhart wrote:\n>\n>\n>> Hi -\n>>\n>> I have a table of about 3 million rows of city \"aliases\" that I \n>> need to query using LIKE - for example:\n>>\n>> select * from city_alias where city_name like '%FRANCISCO'\n>>\n>>\n>> When I do an EXPLAIN ANALYZE on the above query, the result is:\n>>\n>> Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) \n>> (actual time=73.369..3330.281 rows=407 loops=1)\n>> Filter: ((name)::text ~~ '%FRANCISCO'::text)\n>> Total runtime: 3330.524 ms\n>> (3 rows)\n>>\n>>\n>> this is a query that our system needs to do a LOT. Is there any \n>> way to improve the performance on this either with changes to our \n>> query or by configuring the database deployment? We have an \n>> index on city_name but when using the % operator on the front of \n>> the query string postgresql can't use the index .\n>>\n>> Thanks for any help.\n>>\n>> Mike\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to \n>> [email protected])\n>>\n>>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 30 May 2005 11:33:28 -0500",
"msg_from": "Michael Engelhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequential scan performance"
},
{
"msg_contents": "On Sun, May 29, 2005 at 08:27:26AM -0500, Michael Engelhart wrote:\n> Hi -\n> \n> I have a table of about 3 million rows of city \"aliases\" that I need \n> to query using LIKE - for example:\n> \n> select * from city_alias where city_name like '%FRANCISCO'\n> \n> \n> When I do an EXPLAIN ANALYZE on the above query, the result is:\n> \n> Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42) \n> (actual time=73.369..3330.281 rows=407 loops=1)\n> Filter: ((name)::text ~~ '%FRANCISCO'::text)\n> Total runtime: 3330.524 ms\n> (3 rows)\n> \n> \n> this is a query that our system needs to do a LOT. Is there any way \n> to improve the performance on this either with changes to our query \n> or by configuring the database deployment? We have an index on \n> city_name but when using the % operator on the front of the query \n> string postgresql can't use the index .\n\nIf that's really what you're doing (the wildcard is always at the beginning)\nthen something like this\n\n create index city_name_idx on foo (reverse(city_name));\n\n select * from city_alias where reverse(city_name) like reverse('%FRANCISCO');\n\nshould do just what you need.\n\nI use this, with a plpgsql implementation of reverse, and it works nicely.\n\nCREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS '\nDECLARE\n original alias for $1;\n reverse_str text;\n i int4;\nBEGIN\n reverse_str = '''';\n FOR i IN REVERSE LENGTH(original)..1 LOOP\n reverse_str = reverse_str || substr(original,i,1);\n END LOOP;\n return reverse_str;\nEND;'\nLANGUAGE 'plpgsql' IMMUTABLE;\n\n\nSomeone will no doubt suggest using tsearch2, and you might want to\ntake a look at it if you actually need full-text search, but my\nexperience has been that it's too slow to be useful in production, and\nit's not needed for the simple \"leading wildcard\" case.\n\nCheers,\n Steve\n",
"msg_date": "Mon, 30 May 2005 09:53:40 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan performance"
}
] |
[
{
"msg_contents": "I am still in the dark due to my lack of knowledge on internal OID management,but\ni would presume that a table with OID enable and that has high load would require\nsome more work from pgsql internal to maintain the OID index for the database.\n \nSo OID can be beneficial on static tables, or tables that you want to be able to manipulate\nwith pgadmin X , but can a table without OID increase performances on insert,delete,update,COPY?\n \nI am not really worried about disk space that an OID collumn can take, but i was wandering if an \ninsert in a table of 20 millions and more that has oid would slow the insert process. Since OID seem\nto act as a global index mabey maintaning that index can become costy over high table load by postgresql\nbackend.\n \n-Eric Lauzon\n \n \n \n \n \n \n",
"msg_date": "Sun, 29 May 2005 16:17:11 -0400",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "OID vs overall system performances on high load"
},
{
"msg_contents": "On Sun, 2005-05-29 at 16:17 -0400, Eric Lauzon wrote:\n> I am still in the dark due to my lack of knowledge on internal OID management,but\n> i would presume that a table with OID enable and that has high load would require\n> some more work from pgsql internal to maintain the OID index for the database.\n> \n> So OID can be beneficial on static tables, or tables that you want to be able to manipulate\n> with pgadmin X , but can a table without OID increase performances on insert,delete,update,COPY?\n> \n> I am not really worried about disk space that an OID collumn can take, but i was wandering if an \n> insert in a table of 20 millions and more that has oid would slow the insert process. Since OID seem\n> to act as a global index mabey maintaning that index can become costy over high table load by postgresql\n> backend.\n\nThere is no OID index, unless you create one.\n\nThe disk space that an OID column can take has an effect on performance:\nreducing the amount of physical disk reads will mean that more of your\nreal data is cached, and so forth. How much effect it will have will\ndepend on the relative size of the OID column and the other columns in\nyour data.\n\nRegards,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n http://survey.net.nz/ - any more questions?\n-------------------------------------------------------------------------",
"msg_date": "Mon, 30 May 2005 09:46:43 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID vs overall system performances on high load"
},
{
"msg_contents": "\"Eric Lauzon\" <[email protected]> writes:\n> I am still in the dark due to my lack of knowledge on internal OID management,but\n> i would presume that a table with OID enable and that has high load would require\n> some more work from pgsql internal to maintain the OID index for the database.\n\nThere is no \"OID index\"; at least not unless you choose to create one\nfor a given table. The only thing particularly special about OID is\nthat there is an internal database-wide sequence generator for assigning\nnew values. Otherwise it works a whole lot like a serial column.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 May 2005 17:47:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID vs overall system performances on high load "
},
{
"msg_contents": "On Sun, 2005-05-29 at 16:17 -0400, Eric Lauzon wrote:\n> So OID can be beneficial on static tables\n\nOIDs aren't beneficial on \"static tables\"; unless you have unusual\nrequirements[1], there is no benefit to having OIDs on user-created\ntables (see the default_with_oids GUC var, which will default to \"false\"\nin 8.1)\n\n-Neil\n\n[1] Such as a column that references a column in the system catalogs.\n\n",
"msg_date": "Mon, 30 May 2005 14:02:44 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OID vs overall system performances on high load"
}
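In 8.0 that default can also be flipped per session or in postgresql.conf instead of spelling WITHOUT OIDS on every CREATE TABLE; a tiny sketch:

    SET default_with_oids = false;
    CREATE TABLE t_no_oids (x int);   -- created without an OID column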
] |
[
{
"msg_contents": "We have a production database with transaction-style data, in most of the\ntables we have a timestamp attribute \"created\" telling the creation time of\nthe table row. Naturally, this attribute is always increasing.\n\nBy now we are hitting the limit where the table data does not fit in caches\nanymore. We have a report section where there are constantly requests for\nthings like \"sum up all transactions for the last two weeks\", and those\nrequests seem to do a full table scan, even though only the last parts of\nthe table is needed - so by now those reports have started to cause lots of\niowait.\n\nIs there any way to avoid this, apart from adding memory linearly with\ndatabase growth, make adjunct tables for historical rows, or build a\nseparate data warehousing system? There must be some simpler solutions,\nright?\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Mon, 30 May 2005 17:19:51 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "timestamp indexing"
},
{
"msg_contents": "On Mon, May 30, 2005 at 05:19:51PM +0800, Tobias Brox wrote:\n>\n> We have a production database with transaction-style data, in most of the\n> tables we have a timestamp attribute \"created\" telling the creation time of\n> the table row. Naturally, this attribute is always increasing.\n\nThe message subject is \"timestamp indexing\" but you don't mention\nwhether you have an index on the timestamp column. Do you?\n\n> By now we are hitting the limit where the table data does not fit in caches\n> anymore. We have a report section where there are constantly requests for\n> things like \"sum up all transactions for the last two weeks\", and those\n> requests seem to do a full table scan, even though only the last parts of\n> the table is needed - so by now those reports have started to cause lots of\n> iowait.\n\nCould you post an example query and its EXPLAIN ANALYZE output? If\nthe query uses a sequential scan then it might also be useful to see\nthe EXPLAIN ANALYZE output with enable_seqscan turned off. Since\ncaching can cause a query to be significantly faster after being run\nseveral times, it might be a good idea to run EXPLAIN ANALYZE three\ntimes and post the output of the last run -- that should put the\nqueries under comparison on a somewhat equal footing (i.e., we don't\nwant to be misled about how much faster one query is than another\nsimply because one query happened to use more cached data on a\nparticular run).\n\nHow many records are in the tables you're querying? Are you regularly\nvacuuming and analyzing the database or the individual tables? Are\nany of the tables clustered? If so, on what indexes and how often\nare you re-clustering them? What version of PostgreSQL are you using?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Mon, 30 May 2005 07:54:29 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp indexing"
},
{
"msg_contents": "[Michael Fuhr - Mon at 07:54:29AM -0600]\n> The message subject is \"timestamp indexing\" but you don't mention\n> whether you have an index on the timestamp column. Do you?\n\nYes. Sorry for not beeing explicit on that.\n\n> Could you post an example query and its EXPLAIN ANALYZE output? If\n> the query uses a sequential scan then it might also be useful to see\n> the EXPLAIN ANALYZE output with enable_seqscan turned off. Since\n> caching can cause a query to be significantly faster after being run\n> several times, it might be a good idea to run EXPLAIN ANALYZE three\n> times and post the output of the last run -- that should put the\n> queries under comparison on a somewhat equal footing (i.e., we don't\n> want to be misled about how much faster one query is than another\n> simply because one query happened to use more cached data on a\n> particular run).\n\nThe actual statement was with 6 or 7 joins and very lengthy. I reduced\nit to a simple single join query which still did a sequential scan\nrather than an index scan (as expected), and I believe I already did a\nfollow-up mail including \"explain analyze\". All \"explain analyze\" in my\nprevious mail was run until the resulting execution time had stabilized,\nrelatively. I will try with \"set enable_seqscan off\" when I get back to\nthe office.\n\n> How many records are in the tables you're querying? \n\nAlso answered on in my follow-up.\n\n> Are you regularly\n> vacuuming and analyzing the database or the individual tables?\n\nVacuum is run nightly, and I also did a manual \"vacuum analyze table\" on\nthe table in question.\n\n> Are\n> any of the tables clustered? If so, on what indexes and how often\n> are you re-clustering them?\n\nHuh? :)\n\n> What version of PostgreSQL are you using?\n\nAlso answered in my follow-up - \"not yet pg8\" :)\n\n",
"msg_date": "Mon, 30 May 2005 19:08:16 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp indexing"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n>> What version of PostgreSQL are you using?\n\n> Also answered in my follow-up - \"not yet pg8\" :)\n\nYour followup hasn't shown up here yet, but if the query is written like\n\tWHERE timestampcol >= now() - interval 'something'\nthen the pre-8.0 planner is not capable of making a good estimate of the\nselectivity of the WHERE clause. One solution is to fold the timestamp\ncomputation to a constant on the client side.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 May 2005 13:57:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp indexing "
},
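A sketch of the rewrite Tom describes, using the ticket table from this thread: the client computes the cutoff timestamp and sends it as a literal, so the pre-8.0 planner can estimate the range from the column statistics. The two-week window and the aggregate are only illustrative.

    -- instead of:  ... WHERE created > now() - interval '14 days'
    SELECT count(*)
    FROM ticket
    WHERE created > '2005-05-16 00:00:00';   -- literal computed client-side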
{
"msg_contents": "[Tom Lane - Mon at 01:57:54PM -0400]\n> Your followup hasn't shown up here yet, \n\nI'll check up on that and resend it.\n\n> but if the query is written like\n> \tWHERE timestampcol >= now() - interval 'something'\n> then the pre-8.0 planner is not capable of making a good estimate of the\n> selectivity of the WHERE clause.\n\n> One solution is to fold the timestamp\n> computation to a constant on the client side.\n\nI don't think there are any of that in the production; we always make the\ntimestamps on the client side.\n\nAs to my original problem, I looked up on table clustering on google.\nRight, for report performance, we store some aggregates in the table which\nare updated several times. If I've understood it correctly, the row will\nphysically be moved to the tail of the table every time the attribute is\nupdated. I understand that it may make sense to do a full table scan if a\nrandom 10% of the rows should be selected. Forcing the usage of the index\ncaused a tiny improvement of performance, but only after running it some few\ntimes to be sure the index got buffered :-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 10:06:25 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp indexing"
},
{
"msg_contents": "[Tobias Brox - Tue at 10:06:25AM +0800]\n> [Tom Lane - Mon at 01:57:54PM -0400]\n> > Your followup hasn't shown up here yet, \n> \n> I'll check up on that and resend it.\n\nHrm ... messed-up mail configuration I suppose. Here we go:\n\nPaul McGarry unintentionally sent a request for more details off the list,\nsince it was intended for the list I'll send my reply here.\n\nWhile writing up the reply, and doing research, I discovered that this is\nnot a problem with indexing timestamps per se, but more with a query of the\nkind \"give me 5% of the table\"; it seems like it will often prefer to do a\nfull table scan instead of going via the index.\n\nI think that when I had my university courses on databases, we also learned\nabout flat indexes, where the whole index has to be rebuilt whenever a field\nis updated or inserted in the middle, and I also think we learned that the\ntable usually would be sorted physically by the primary key on the disk. As\nlong as we have strictly incrementing primary keys and timestamps, such a\nsetup would probably be more efficient for queries of the kind \"give me all\nactivity for the last two weeks\"?\n\nHere follows my reply to Paul, including some gory details:\n\n[Paul McGarry - Mon at 07:59:35PM +1000]\n> What version of postgresql are you using and what are the exact\n> datatypes and queries?\n\nWe are still using 7.4.6, but I suppose that if our issues are completely or\npartially solved in pg 8, that would make a good case for upgrading :-)\n\nThe datatypes I'm indexing are timestamp without time zone.\n\nActually I may be on the wrong hunting ground now - the production system\nfroze completely some days ago basically due to heavy iowait and load on the\ndatabase server, rendering postgresql completely unresponsive - and back\nthen we had too poor logging to find out what queries that was causing it to\ngrind to a halt, and since we've never such a bad problem before, we didn't\nknow how to handle the situation (we just restarted the entire postgresql;\nif we had been just killing the processes running the rogue database\nqueries, we would have had very good tracks of it in the logs).\n\nI digress. The last days I've looked through profiling logs, and I'm\nchecking if the accumulatively worst queries can be tuned somehow. Most of\nthem are big joins, but I'm a bit concerned of the amounts of \"Seq Scan\"\nreturned by \"explain\" despite the fact that only a small fraction of the\ntables are queried. 
I reduced the problem to a simple \"select * from table\nwhere created>xxx\" and discovered that it still won't use index, and still\nwill be costly (though of course not much compared to the big joined query).\n\nThe \"ticket\" table have less than a million rows, around 50k made the last\nten days:\n\nNBET=> explain analyze select * from ticket where created>'2005-05-20';\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------\n Seq Scan on ticket (cost=0.00..19819.91 rows=89553 width=60) (actual time=535.884..1018.268 rows=53060 loops=1)\n Filter: (created > '2005-05-20 00:00:00'::timestamp without time zone)\n Total runtime: 1069.514 ms\n(3 rows)\n\nAnyway, it seems to me that \"indexing on timestamp\" is not the real issue\nhere, because when restricting by primary key (numeric, sequential ID) the\nexecution time is the same or worse, still doing a sequence scan:\n\nNBET=> explain analyze select * from ticket where id>711167;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------\n Seq Scan on ticket (cost=0.00..19819.91 rows=92273 width=60) (actual\ntime=550.855..1059.843 rows=53205 loops=1)\n Filter: (id > 711167)\n Total runtime: 1110.469 ms\n(3 rows)\n \n\nI've tried running equivalent queries on a table with twice as many rows and\nwidth=180, it will pull from the index both when querying by ID and\ntimestamp, and it will usually spend less time.\n\nRunning \"select * from ticket\" seems to execute ~2x slower than when having\nthe restriction.\n\n> I have a 7.3 database with a \"timestamp with time zone\" field and we\n> have to be very careful to explicitly cast values as that in queries\n> if it is to use the index correctly. I believe it's an issue that is\n> cleared up in newer versions though.\n\nI suppose so - as said, restricting by primary key didn't improve the\nperformance significantly, so I was clearly wrong indicating that this is a\nspecial issue with indexing a timestamp. \n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 10:20:11 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp indexing"
},
{
"msg_contents": "What does \n\nSET enable_seqscan = false;\nEXPLAIN ANALYZE SELECT * FROM ...\n\nget you? Is it faster?\n\nBTW, I suspect this behavior is because the estimates for the cost of an\nindex scan don't give an appropriate weight to the correlation of the\nindex. The 'sort and index' thread on this list from a few months ago\nhas more info.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 9 Jun 2005 13:04:53 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp indexing"
},
{
"msg_contents": "[Jim C. Nasby - Thu at 01:04:53PM -0500]\n> What does \n> \n> SET enable_seqscan = false;\n> EXPLAIN ANALYZE SELECT * FROM ...\n> \n> get you? Is it faster?\n\nI was experimenting with this some weeks ago, by now our database server has\nquite low load numbers and I haven't gotten any complaints about anything\nthat is too slow, so I have temporary stopped working with this issue - so I\nwill not contribute with any more gory details at the moment. :-)\n\nI concluded with that our \"problem\" is that we (for performance reasons)\nstore aggregated statistics in the \"wrong\" tables, and since updating a row\nin pg effectively means creating a new physical row in the database, the\nrows in the table are not in chronological order. If \"last months activity\"\npresents like 7% of the rows from the table is to be fetched, the planner\nwill usually think that a seq scan is better. As time pass by and the table\ngrows, it will jump to index scans.\n\nThe \"old\" stuff in the database eventually grow historical, so the\naggregated statistics will not be updated for most of those rows. Hence a\nforced index scan will often be a bit faster than a suggested table scan. I\nexperimented, and doing an index scan for the 3rd time would usually be\nfaster than doing a full table scan for the 3rd time, but with things not\nbeeing in cache, the planner was right to suggest that seq scan was faster\ndue to less disk seeks.\n\nThe long term solution for this problem is to build a separate data\nwarehouse system. The short time solution is to not care at all\n(eventually, buy more memory).\n\nAs long as the queries is on the form \"give me everything since last\nmonday\", it is at least theoretically possible to serve this through partial\nindices, and have a cronjob dropping the old indices and creating new every\nweek.\n\nDoing table clustering night time would probably also be a solution, but I\nhaven't cared to test it out yet. I'm a bit concerned about\nperformance/locking issues.\n\n-- \nTobias Brox, +47-91700050\nTallinn, Europe\n",
"msg_date": "Thu, 9 Jun 2005 22:43:56 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp indexing"
}
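A minimal sketch of the partial-index idea mentioned above, assuming the ticket table from earlier in the thread; the cutoff constant is hard-coded here and would have to be refreshed by the same cron job that drops and recreates the index each week:

-- recreated weekly so the predicate stays close to "now"
CREATE INDEX ticket_created_recent ON ticket (created)
    WHERE created > '2005-05-20';

-- a query whose restriction implies the predicate should be able to use it
EXPLAIN ANALYZE SELECT * FROM ticket WHERE created > '2005-05-23';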
] |
[
{
"msg_contents": "What about xeon and postgresql, i have been told that \npostgresql wouldn't perform as well when running\nunder xeon processors due to some cache trick that postgresql\nuses? \n\nWhy? Any fix? Rumors? AMD Advocates? Problems with HT??\n\nWould that problems only be true for 7.4.x? I didin't found\nany comprehensive analysis/explanation for this matters beside\npeople saying , stop using xeon and postgresql.\n\nEnlightment please...\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858 \n",
"msg_date": "Mon, 30 May 2005 09:43:12 -0400",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql and xeon."
},
{
"msg_contents": "Eric,\n\n> What about xeon and postgresql, i have been told that\n> postgresql wouldn't perform as well when running\n> under xeon processors due to some cache trick that postgresql\n> uses?\n\nSearch the archives of this list. This has been discussed ad nauseum.\nwww.pgsql.ru\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 30 May 2005 09:19:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and xeon."
},
{
"msg_contents": "On Mon, May 30, 2005 at 09:19:40AM -0700, Josh Berkus wrote:\n> Search the archives of this list. This has been discussed ad nauseum.\n> www.pgsql.ru\n\nI must admit I still haven't really understood it -- I know that it appears\non multiple operating systems, on multiple architectures, but most with Xeon\nCPUs, and that it's probably related to the poor memory bandwidth between the\nCPUs, but that's about it. I've read the threads I could find on the list\narchives, but I've yet to see somebody pinpoint exactly what in PostgreSQL is\ncausing this.\n\nLast time someone claimed this was bascially understood and \"just a lot of\nwork to fix\", I asked for pointers to a more detailed analysis, but nobody\nanswered. Care to explain? :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 May 2005 18:54:47 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and xeon."
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> I must admit I still haven't really understood it -- I know that it appears\n> on multiple operating systems, on multiple architectures, but most with Xeon\n> CPUs, and that it's probably related to the poor memory bandwidth between the\n> CPUs, but that's about it. I've read the threads I could find on the list\n> archives, but I've yet to see somebody pinpoint exactly what in PostgreSQL is\n> causing this.\n\nThe problem appears to be that heavy contention for a spinlock is\nextremely expensive on multiprocessor Xeons --- apparently, the CPUs\nwaste tremendous amounts of time passing around exclusive ownership\nof the memory cache line containing the spinlock. While any SMP system\nis likely to have some issues here, the Xeons seem to be particularly\nbad at it.\n\nIn the case that was discussed extensively last spring, the lock that\nwas causing the problem was the BufMgrLock. Since 8.0 we've rewritten\nthe buffer manager in hopes of reducing contention, but I don't know\nif the problem is really gone or not. The buffer manager is hardly the\nonly place with the potential for heavy contention...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 May 2005 13:15:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and xeon. "
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm having another problem with a query that takes to long, because \nthe appropriate index is not used.\n\nI found some solutions to this problem, but I think Postgres should do \nan index scan in all cases.\n\nTo show the problem I've attached a small script with a testcase.\n\nThanks in advance\n\nSebastian",
"msg_date": "Mon, 30 May 2005 17:54:28 +0200",
"msg_from": "=?ISO-8859-1?Q?Sebastian_B=F6ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index not used on join with inherited tables"
},
{
"msg_contents": "Sebastian,\n\n> I'm having another problem with a query that takes to long, because\n> the appropriate index is not used.\n\nPostgreSQL is not currently able to push down join criteria into UNIONed \nsubselects. It's a TODO. \n\nAlso, if you're using inherited tables, it's unnecessary to use UNION; just \nselect from the parent.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 30 May 2005 09:21:46 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not used on join with inherited tables"
},
{
"msg_contents": "Josh Berkus wrote:\n> Sebastian,\n> \n> \n>>I'm having another problem with a query that takes to long, because\n>>the appropriate index is not used.\n> \n> \n> PostgreSQL is not currently able to push down join criteria into UNIONed \n> subselects. It's a TODO.\n\nAnd the appends in a \"SELECT * from parent\" are UNIONs, aren't they?\n\n> Also, if you're using inherited tables, it's unnecessary to use UNION; just \n> select from the parent.\n\nYes, but then no index is used...\n\nSebastian\n\n\n",
"msg_date": "Mon, 30 May 2005 18:36:39 +0200",
"msg_from": "=?UTF-8?B?U2ViYXN0aWFuIELDtmNr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not used on join with inherited tables"
}
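For what it's worth, a minimal sketch of the inheritance layout being discussed (table and column names are invented, since the attached script is not shown). A plain restriction on the parent is applied to each child and may use the per-child indexes; as Josh notes, it is the join criteria that the planner cannot yet push down into the appended/UNIONed branches:

CREATE TABLE parent (id int, payload text);
CREATE TABLE child_a () INHERITS (parent);
CREATE TABLE child_b () INHERITS (parent);
CREATE INDEX child_a_id ON child_a (id);
CREATE INDEX child_b_id ON child_b (id);

-- simple restriction: checked against the parent and every child
EXPLAIN SELECT * FROM parent WHERE id = 42;

-- join qual: not pushed down, so the children end up sequentially scanned
CREATE TABLE lookup (parent_id int);
EXPLAIN SELECT p.* FROM parent p JOIN lookup l ON l.parent_id = p.id;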
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Steinar H. Gunderson\n> Sent: 30 mai 2005 12:55\n> To: [email protected]\n> Subject: Re: [PERFORM] Postgresql and xeon.\n> \n> On Mon, May 30, 2005 at 09:19:40AM -0700, Josh Berkus wrote:\n> > Search the archives of this list. This has been discussed \n> ad nauseum.\n> > www.pgsql.ru\n> \n> I must admit I still haven't really understood it -- I know \n> that it appears on multiple operating systems, on multiple \n> architectures, but most with Xeon CPUs, and that it's \n> probably related to the poor memory bandwidth between the \n> CPUs, but that's about it. I've read the threads I could find \n> on the list archives, but I've yet to see somebody pinpoint \n> exactly what in PostgreSQL is causing this.\n> \n> Last time someone claimed this was bascially understood and \n> \"just a lot of work to fix\", I asked for pointers to a more \n> detailed analysis, but nobody answered. Care to explain? :-)\n\nSame here archives references are just overview but no real data....\nto where and why, i would state pg 7.4.8 and kernel 2.6 with preemptive scheduler\nand dual xeon 3.2 ghz 6 gig of ram.\n\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858 \n",
"msg_date": "Mon, 30 May 2005 13:02:12 -0400",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql and xeon."
}
] |
[
{
"msg_contents": "Hi,\n\nDoes it make a difference in performance and/or disc space if I\n\n1) drop index / vacuumdb -zf / create index\nor\n2) drop index / create index / vacuumdb -zf\n\nI guess it makes a diff for the --analyze, not ?\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Tue, 31 May 2005 00:06:49 +0200",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Drop / create indexes and vacuumdb"
}
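A sketch of the ordering that is usually suggested (variant 1 in the question): VACUUM FULL has to maintain whatever indexes exist while it moves rows around, so dropping the indexes first and recreating them after the vacuum tends to be faster and leaves freshly built, compact indexes; the --analyze step only samples table data, so its position matters little. Table and index names here are invented:

DROP INDEX some_index;
VACUUM FULL ANALYZE some_table;   -- roughly what "vacuumdb -zf" runs per table
CREATE INDEX some_index ON some_table (some_column);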
] |
[
{
"msg_contents": "This is a multi-part message in MIME format.\n\n--bound1117506666\nContent-Type: text/plain\nContent-Transfer-Encoding: 7bit\n\nColton A Smith <[email protected]> wrote ..\n\n------------------------------------------------------------------------------------------------\n> Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual \n> time=0.055..0.068 rows=1 loops=1)\n> Filter: (sensor_id = 12)\n> Total runtime: 801641.333 ms\n> (3 rows)\n\n\nDo you have some foreign keys pointing in the other direction? In other words, is there another table such that a delete on sensors causing a delete (or a check of some key) in another table? EXPLAIN doesn't show these. And that might be a big table missing an index.\n\n--bound1117506666--\n",
"msg_date": "Mon, 30 May 2005 19:31:06 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: poor performance involving a small table"
}
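To make the point above concrete, a hedged sketch -- the referencing table is invented, since the real schema was not posted. If a table like this references sensor and its sensor_id column has no index, every DELETE on sensor has to scan it to check (or cascade to) matching rows, and none of that work shows up in the EXPLAIN output:

-- hypothetical referencing table
CREATE TABLE reading (
    reading_id serial PRIMARY KEY,
    sensor_id  integer REFERENCES sensor (sensor_id),
    value      numeric
);

-- indexing the referencing column lets the FK checks use an index scan
CREATE INDEX reading_sensor_id_idx ON reading (sensor_id);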
] |
[
{
"msg_contents": "I read in the manual today:\n\n Indexes are not used for IS NULL clauses by default. The best way to use\n indexes in such cases is to create a partial index using an IS NULL\n predicate.\n \nThis is from the documentation for PostgreSQL 8. I did not find anything\nequivalent in the 7.4.8-documentation.\n \nI wasn't aware of this until it became an issue :-) Well, so I follow the\ntip but in vain. Reduced and reproduced like this in PostgreSQL 7.4.7:\n\ntest=# create table mock(a int, b int);\nCREATE TABLE\ntest=# create index b_is_null on mock((b IS NULL));\nCREATE INDEX\ntest=# insert into mock values (10,20);\nINSERT 70385040 1\ntest=# insert into mock values (20,30);\nINSERT 70385041 1\ntest=# insert into mock values (30, NULL);\nINSERT 70385042 1\ntest=# set enable_seqscan=off; \nSET\ntest=# explain select * from mock where b is NULL;\n QUERY PLAN \n--------------------------------------------------------------------\n Seq Scan on mock (cost=100000000.00..100000020.00 rows=6 width=8)\n Filter: (b IS NULL)\n(2 rows)\n\nvacuum analyze also didn't help to recognize the index ;-)\n\nAny tips? Rewrite the application to not use NULL-values? Hide under\nbedclothes and hope the problem goes away? Install more memory in the\nserver? :-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 11:02:07 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index on a NULL-value"
},
{
"msg_contents": "On Tue, May 31, 2005 at 11:02:07 +0800,\n Tobias Brox <[email protected]> wrote:\n> I read in the manual today:\n> \n> Indexes are not used for IS NULL clauses by default. The best way to use\n> indexes in such cases is to create a partial index using an IS NULL\n> predicate.\n> \n> This is from the documentation for PostgreSQL 8. I did not find anything\n> equivalent in the 7.4.8-documentation.\n> \n> I wasn't aware of this until it became an issue :-) Well, so I follow the\n> tip but in vain. Reduced and reproduced like this in PostgreSQL 7.4.7:\n> \n> test=# create table mock(a int, b int);\n> CREATE TABLE\n> test=# create index b_is_null on mock((b IS NULL));\n> CREATE INDEX\n> test=# insert into mock values (10,20);\n> INSERT 70385040 1\n> test=# insert into mock values (20,30);\n> INSERT 70385041 1\n> test=# insert into mock values (30, NULL);\n> INSERT 70385042 1\n> test=# set enable_seqscan=off; \n> SET\n> test=# explain select * from mock where b is NULL;\n> QUERY PLAN \n> --------------------------------------------------------------------\n> Seq Scan on mock (cost=100000000.00..100000020.00 rows=6 width=8)\n> Filter: (b IS NULL)\n> (2 rows)\n> \n> vacuum analyze also didn't help to recognize the index ;-)\n\nIt isn't surprising that an index wasn't used since a sequential scan is\ngoing to be faster in your test case.\n\nIf you want to test this out, you to want use realistically sized tables.\n",
"msg_date": "Mon, 30 May 2005 22:16:53 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "[Tobias Brox - Tue at 11:02:07AM +0800]\n> test=# explain select * from mock where b is NULL;\n> QUERY PLAN \n> --------------------------------------------------------------------\n> Seq Scan on mock (cost=100000000.00..100000020.00 rows=6 width=8)\n> Filter: (b IS NULL)\n> (2 rows)\n\n(...)\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\nThat tip helped me :-)\n\ntest=# explain select * from mock where (b IS NULL)=true;\n QUERY PLAN \n\n----------------------------------------------------------------------\n Index Scan using b_is_null on mock (cost=0.00..4.68 rows=1 width=8)\n Index Cond: ((b IS NULL) = true)\n(2 rows)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 11:21:20 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "[Tobias Brox]\n> test=# set enable_seqscan=off; \n\n[Bruno Wolff III - Mon at 10:16:53PM -0500]\n> It isn't surprising that an index wasn't used since a sequential scan is\n> going to be faster in your test case.\n> \n> If you want to test this out, you to want use realistically sized tables.\n\nWrong. In this case I was not wondering about the planners choise of not\nusing the index, but the fact that the planner could not find the index at\nall. Reproducing it on a simple table in a test environment was a valid\nstrategy to solve this specific problem.\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 11:31:58 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "On Tue, May 31, 2005 at 11:21:20 +0800,\n Tobias Brox <[email protected]> wrote:\n> [Tobias Brox - Tue at 11:02:07AM +0800]\n> > test=# explain select * from mock where b is NULL;\n> > QUERY PLAN \n> > --------------------------------------------------------------------\n> > Seq Scan on mock (cost=100000000.00..100000020.00 rows=6 width=8)\n> > Filter: (b IS NULL)\n> > (2 rows)\n> \n> (...)\n> \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n> \n> That tip helped me :-)\n> \n> test=# explain select * from mock where (b IS NULL)=true;\n> QUERY PLAN \n> \n> ----------------------------------------------------------------------\n> Index Scan using b_is_null on mock (cost=0.00..4.68 rows=1 width=8)\n> Index Cond: ((b IS NULL) = true)\n> (2 rows)\n\nLooked back at your first example and saw that you didn't use a partial\nindex which is why you had to contort things to make it possible to\nuse an indexed search. (Though the planner really should have done this\nsince all of the rows should be in one disk block and doing an index\nscan should require doing more disk reads than a sequential scan for\nthe test case you used.)\n\nYou want something like this:\nCREATE INDEX b_is_null ON mock(b) WHERE b IS NULL;\n\nThe advantage is that the index can be a lot smaller than an index over all\nof the rows in the case where only a small fraction of rows have a null value\nfor b. (If this isn't the case you probably don't want the index.)\n",
"msg_date": "Mon, 30 May 2005 22:36:33 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "[Bruno Wolff III - Mon at 10:36:33PM -0500]\n> You want something like this:\n> CREATE INDEX b_is_null ON mock(b) WHERE b IS NULL;\n\nOh, cool. I wasn't aware that this is possible. This would probably help\nus a lot of places. :-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 11:45:29 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "On Tue, May 31, 2005 at 11:31:58 +0800,\n Tobias Brox <[email protected]> wrote:\n> [Tobias Brox]\n> > test=# set enable_seqscan=off; \n> \n> [Bruno Wolff III - Mon at 10:16:53PM -0500]\n> > It isn't surprising that an index wasn't used since a sequential scan is\n> > going to be faster in your test case.\n> > \n> > If you want to test this out, you to want use realistically sized tables.\n> \n> Wrong. In this case I was not wondering about the planners choise of not\n> using the index, but the fact that the planner could not find the index at\n> all. Reproducing it on a simple table in a test environment was a valid\n> strategy to solve this specific problem.\n\nI missed that you turned sequential scans off for your test.\n",
"msg_date": "Mon, 30 May 2005 23:08:01 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> writes:\n> Looked back at your first example and saw that you didn't use a partial\n> index which is why you had to contort things to make it possible to\n> use an indexed search.\n\nFWIW, there is code in CVS tip that recognizes the connection between\nan index on a boolean expression and a WHERE clause testing that\nexpression. It's not quite perfect --- using Tobias' example I see\n\nregression=# explain select * from mock where b is NULL;\n QUERY PLAN \n------------------------------------------------------------------------\n Index Scan using b_is_null on mock (cost=0.00..51.67 rows=10 width=8)\n Index Cond: ((b IS NULL) = true)\n Filter: (b IS NULL)\n(3 rows)\n\nso there's a useless filter condition still being generated. But it\ngets the job done as far as using the index, anyway.\n\n> You want something like this:\n> CREATE INDEX b_is_null ON mock(b) WHERE b IS NULL;\n\nI think best practice for something like this is to make the partial\nindex's columns be something different from what the partial condition\ntests. Done as above, every actual index entry will be a null, so the\nentry contents are just dead weight. Instead do, say,\n\nCREATE INDEX b_is_null ON mock(a) WHERE b IS NULL;\n\nwhere a is chosen as a column that you frequently also test in\nconjunction with \"b IS NULL\". That is, the above index can efficiently\nhandle queries like\n\n\t... WHERE a = 42 AND b IS NULL ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 May 2005 00:18:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value "
},
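A small worked example of the pattern Tom describes, assuming the same mock table as above and that column a is what usually gets filtered on together with the NULL test:

CREATE INDEX b_is_null ON mock (a) WHERE b IS NULL;

-- the WHERE clause implies the index predicate, and the index contents (a)
-- do useful work instead of storing nothing but NULLs
EXPLAIN SELECT * FROM mock WHERE a = 42 AND b IS NULL;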
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n\n> [Bruno Wolff III - Mon at 10:36:33PM -0500]\n> > You want something like this:\n> > CREATE INDEX b_is_null ON mock(b) WHERE b IS NULL;\n> \n> Oh, cool. I wasn't aware that this is possible. This would probably help\n> us a lot of places. :-)\n\nYeah it's a cool feature.\n\nI'm not 100% sure but I think it still won't consider this index unless the\ncolumn being indexed is used in some indexable operation. So for example if\nyou had \n\nCREATE INDEX b_null on mock(other) WHERE b IS NULL;\n\nand something like\n\n SELECT * FROM b WHERE b IS NULL ORDER BY other\nor\n SELECT * FROM b where other > 0 AND b IS NULL\n\nthen it would be a candidate because the ORDER BY or the other > 0 make the\nindex look relevant. But I don't think (again I'm not 100% sure) that the\npartial index WHERE clause is considered in picking which indexes to consider.\n\nIt *is* considered in evaluating which index is the best one to use and\nwhether it's better than a sequential scan. Just not in the initial choice of\nwhich indexes to look at at all.\n\n-- \ngreg\n\n",
"msg_date": "31 May 2005 00:21:25 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> then it would be a candidate because the ORDER BY or the other > 0 make the\n> index look relevant. But I don't think (again I'm not 100% sure) that the\n> partial index WHERE clause is considered in picking which indexes to consider.\n\nNope, the partial index will be considered simply on the strength of its\npredicate matching the WHERE clause.\n\nOf course, if you can get some additional mileage by having the index\ncontents be useful, that's great --- but it's not necessary.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 May 2005 01:12:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a NULL-value "
},
{
"msg_contents": "> CREATE INDEX b_is_null ON mock(a) WHERE b IS NULL;\n> \n> where a is chosen as a column that you frequently also test in\n> conjunction with \"b IS NULL\". That is, the above index can efficiently\n> handle queries like\n> \n> \t... WHERE a = 42 AND b IS NULL ...\n\nThis is wonderful, it seems like most of our problems (probably also\nregarding the \"index on timestamp\"-thread I started separately) can be\nsolved with partial indexing on expressions. No need to hide under\nbedclothes anymore ;-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Tue, 31 May 2005 13:59:32 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a NULL-value"
},
{
"msg_contents": "[Tobias Brox - Tue at 11:02:07AM +0800]\n> I read in the manual today:\n> \n> Indexes are not used for IS NULL clauses by default. The best way to use\n> indexes in such cases is to create a partial index using an IS NULL\n> predicate.\n\nI have summarized this thread in a postgresql doc user comment, posted at\nhttp://www.postgresql.org/docs/current/interactive/sql-createindex.html\n\nI think it's a good thing to do, since it can be difficult to search the\nmailing list archives :-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Wed, 1 Jun 2005 13:42:51 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a NULL-value"
}
] |
[
{
"msg_contents": "Hi\n\n \n\nI'm trying to move an existing solution from MySQL to PostgreSQL. As it\nis now the solution has 4 tables where data in inserted by an\napplication. At regular intervals (10min) data from these tables is\nconsolidated and moved to another table for reporting purposes. There\nexist many instances of these reporting tables and in total they are\nexpected to hold about 500 million rows. There are about 200 of these\nreporting tables at the moment with data split among them. When a\nrequest comes in all these tables are searched. While moving to\nPostgreSQL is it a good idea to move from using multiple tables to one\ntable for so many rows? \n\n\n\n\n\n\n\n\n\n\nHi\n \nI’m trying to move an existing solution from MySQL\nto PostgreSQL. As it is now the solution has 4 tables where data in inserted by\nan application. At regular intervals (10min) data from these tables is\nconsolidated and moved to another table for reporting purposes. There exist\nmany instances of these reporting tables and in total they are expected to hold\nabout 500 million rows. There are about 200 of these reporting tables at the\nmoment with data split among them. When a request comes in all these tables are\nsearched. While moving to PostgreSQL is it a good idea to move from using\nmultiple tables to one table for so many rows?",
"msg_date": "Tue, 31 May 2005 11:37:48 +0200",
"msg_from": "\"Praveen Raja\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "very large table"
},
{
"msg_contents": "\"Praveen Raja\" <[email protected]> writes:\n> I'm trying to move an existing solution from MySQL to PostgreSQL. As it\n> is now the solution has 4 tables where data in inserted by an\n> application. At regular intervals (10min) data from these tables is\n> consolidated and moved to another table for reporting purposes. There\n> exist many instances of these reporting tables and in total they are\n> expected to hold about 500 million rows. There are about 200 of these\n> reporting tables at the moment with data split among them. When a\n> request comes in all these tables are searched. While moving to\n> PostgreSQL is it a good idea to move from using multiple tables to one\n> table for so many rows? \n\nIf the multiple tables represent a partitioning scheme that makes sense\nto your application (ie, you can tell a priori which tables to look in\nfor a given query) then it's probably worth keeping. But it sounds like\nthey don't make that much sense, since you mention searching all the tables.\nIn that case you should think about consolidating.\n\nThere is lots of stuff in the recent list archives about partitioned\ntables; might be worth reading, even though much of it is talking about\nfeatures we don't yet have. It would point out the issues you need\nto think about --- for example, do you periodically discard some of the\ndata, and if so do the tables correspond to the discard units? DROP\nTABLE is a lot quicker than trying to DELETE and then VACUUM a portion\nof a very large table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 May 2005 10:06:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very large table "
},
{
"msg_contents": "On Tue, 2005-05-31 at 11:37 +0200, Praveen Raja wrote:\n> I’m trying to move an existing solution from MySQL to PostgreSQL. As\n> it is now the solution has 4 tables where data in inserted by an\n> application. At regular intervals (10min) data from these tables is\n> consolidated and moved to another table for reporting purposes. There\n> exist many instances of these reporting tables and in total they are\n> expected to hold about 500 million rows. There are about 200 of these\n> reporting tables at the moment with data split among them. When a\n> request comes in all these tables are searched. \n\n> While moving to PostgreSQL is it a good idea to move from using\n> multiple tables to one table for so many rows? \n\nNo. All of the same reasoning applies. \n\nTry to keep each table small enough to fit easily in RAM.\n\nMake sure you specify WITHOUT OIDS on the main data tables.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 01 Jun 2005 09:57:19 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very large table"
}
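Putting Tom's and Simon's points together, a rough sketch of what the reporting layout could look like if data is discarded by calendar month (table and column names are invented). Each unit stays small enough to cache well, and retiring old data becomes a cheap DROP TABLE instead of DELETE plus VACUUM:

CREATE TABLE report_2005_05 (
    logged    timestamp,
    metric_id integer,
    value     numeric
) WITHOUT OIDS;
CREATE INDEX report_2005_05_logged ON report_2005_05 (logged);

-- when the month ages out of the retention window:
DROP TABLE report_2005_05;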
] |
[
{
"msg_contents": "Hi,\n\nI would like to start a little survey who is running postgresql on an \n8way or more machine (Intel, Sparc, AMD no matter). Purpose: find out \nhow postgresql runs in high performance areas.\n\nPlease fillout:\n\nMachine (Vendor, Product):\nArchitecture (Intel/Sparc/AMD/IBM):\nProcessors (Type/Number/GHz):\nRAM:\nOperating System:\nPostgreSQL Version:\nDatabase size (GB):\nDisk system:\nType of application:\nYour email contact:\nWilling to answer questions in this group:\nComments:\n\n\nPlease answer here or to me. I compile the results and feed them back here.\n\nRegards,\n\nDirk\n\n",
"msg_date": "Tue, 31 May 2005 15:16:37 +0200",
"msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "SURVEY: who is running postgresql on 8 or more CPUs?"
},
{
"msg_contents": "Hi,\n\nI just got one reply for this survey. Is almost nobody using postgresql \non 8+ machines?\n\nRegards,\n\nDirk\n\nDirk Lutzeb�ck wrote:\n\n> Hi,\n>\n> I would like to start a little survey who is running postgresql on an \n> 8way or more machine (Intel, Sparc, AMD no matter). Purpose: find out \n> how postgresql runs in high performance areas.\n>\n> Please fillout:\n>\n> Machine (Vendor, Product):\n> Architecture (Intel/Sparc/AMD/IBM):\n> Processors (Type/Number/GHz):\n> RAM:\n> Operating System:\n> PostgreSQL Version:\n> Database size (GB):\n> Disk system:\n> Type of application:\n> Your email contact:\n> Willing to answer questions in this group:\n> Comments:\n>\n>\n> Please answer here or to me. I compile the results and feed them back \n> here.\n>\n> Regards,\n>\n> Dirk\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n-- \nDirk Lutzeb�ck <[email protected]> Tel +49.30.5362.1635 Fax .1638\nCTO AEC/communications GmbH, Berlin, Germany, http://www.aeccom.com\n\n",
"msg_date": "Thu, 02 Jun 2005 10:28:03 +0200",
"msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SURVEY: who is running postgresql on 8 or more CPUs?"
},
{
"msg_contents": "On 6/2/05, Dirk Lutzebäck <[email protected]> wrote:\n> I just got one reply for this survey. Is almost nobody using postgresql\n> on 8+ machines?\n\nMy guess is when someone is using PostgreSQL on 8+ machine, she's\nin highly competitive (or sensitive) market and either cannot give\ncompany's work details to everyone or simply doesn't want to.\n\nProbably if you asked 'I am thinking about buying 8-way Opteron\nbox, does PostgreSQL have problems with such hardware' you\nwould get a response.\n\nBut surveys are awfully close to statistics and many people simply\ndoesn't like them. (They say that 46.7% of statisticts are just made\nup ;-)).\n\n Regards,\n Dawid\n",
"msg_date": "Thu, 2 Jun 2005 14:12:01 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SURVEY: who is running postgresql on 8 or more CPUs?"
},
{
"msg_contents": "Hi Dawid,\n\npostgresql is open source and we also want it to be used in high \nperformance areas. What's wrong with people telling on which machines \nthey use it? I don't care about business details but techinal details \nwould be quite interesting. In the end it is interesting to know how you \nneed to tune postgresql on high end machines and how well they perform \non the different highend platforms. This is meant to be more a field \nstudy and not a benchmark. We know that Opteron performs well but what \nare people actually using in high performance areas? Does postgresql run \non an E10000? Who did it?\n\nRegards,\n\nDirk\n\nDawid Kuroczko wrote:\n\n>On 6/2/05, Dirk Lutzeb�ck <[email protected]> wrote:\n> \n>\n>>I just got one reply for this survey. Is almost nobody using postgresql\n>>on 8+ machines?\n>> \n>>\n>\n>My guess is when someone is using PostgreSQL on 8+ machine, she's\n>in highly competitive (or sensitive) market and either cannot give\n>company's work details to everyone or simply doesn't want to.\n>\n>Probably if you asked 'I am thinking about buying 8-way Opteron\n>box, does PostgreSQL have problems with such hardware' you\n>would get a response.\n>\n>But surveys are awfully close to statistics and many people simply\n>doesn't like them. (They say that 46.7% of statisticts are just made\n>up ;-)).\n>\n> Regards,\n> Dawid\n> \n>\n\n-- \nDirk Lutzeb�ck <[email protected]> Tel +49.30.5362.1635 Fax .1638\nCTO AEC/communications GmbH, Berlin, Germany, http://www.aeccom.com\n\n",
"msg_date": "Thu, 02 Jun 2005 14:28:12 +0200",
"msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SURVEY: who is running postgresql on 8 or more CPUs?"
},
{
"msg_contents": "On Tue, 31 May 2005, Dirk Lutzeb�ck wrote:\n\n> Date: Tue, 31 May 2005 15:16:37 +0200\n> From: Dirk Lutzeb�ck <[email protected]>\n> To: [email protected]\n> Subject: [PERFORM] SURVEY: who is running postgresql on 8 or more CPUs?\n>\n> Hi,\n>\n> I would like to start a little survey who is running postgresql on an\n> 8way or more machine (Intel, Sparc, AMD no matter). Purpose: find out\n> how postgresql runs in high performance areas.\n>\n> Please fillout:\n>\n> Machine (Vendor, Product): TX200 Fujitsu siemens\n> Architecture (Intel/Sparc/AMD/IBM): Intel\n> Processors (Type/Number/GHz): bi-Xeon 2.8G\n> RAM: 3g\n> Operating System: Unixware 714\n> PostgreSQL Version: 8.0.3\n> Database size (GB): 6G\n> Disk system: 6xU320 36G SCSI (software raid)\n> Type of application: from accounting to game\n> Your email contact: [email protected]\n> Willing to answer questions in this group: yes\n> Comments:\n>\n>\n> Please answer here or to me. I compile the results and feed them back here.\n>\n> Regards,\n>\n> Dirk\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n",
"msg_date": "Thu, 2 Jun 2005 17:57:04 +0200 (MET DST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: SURVEY: who is running postgresql on 8 or more CPUs?"
}
] |
[
{
"msg_contents": "I have five PC's accessing a PG database that is mounted on a Dell Windows\n2003 server. The PC's are accessing the database with a Fujitsu cobol\nprogram via ODBC (all machines have same (newest) ODBC driver from PG). 2\nof the machines are the newest I have and both pretty identically configured\nbut are very slow by comparison to the others. My colleagues and I are\nstill in the exploration / decision process, we have been working with and\nlearning the database about 2 months.\n \nI'm looking to see if anyone knows of O/S or hardware issues right off the\nbat or can recommend a debug method, log checking, etc. path we might\nfollow.\n \nThe program in question reads the PG database and displays matching query\nresults on a cobol screen, for the point of this topic that is all it is\ndoing. We run the same query from each PC which returns 15 records out of a\n6,000 record customer DB.\n \nThe machines:\n \n- 2 are 2.0 Ghz Dells with 512 Ram & XP SP2 - they take just over 2 minutes\n- 1 AMD 2.4 with 256 Ram & XP SP2 - just under 2 secs.\n- 1 AMD 900 Mhz with 256 Ram & XP SP 1 - just under 2 secs\n- 1 Intel 266 Mhz with 256 Ram & Windows 2000 - 11-13 secs\n \n \nThanks,\n \nJustin Davis\nRapid Systems, Inc.\n800.356.8952\n \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.322 / Virus Database: 267.3.0 - Release Date: 5/30/2005\n \n\n\n\n\n\nI have five PC's \naccessing a PG database that is mounted on a Dell Windows 2003 \nserver. The PC's are accessing the database with a Fujitsu cobol program \nvia ODBC (all machines have same (newest) ODBC driver from \nPG). 2 of the machines are the newest I have and both pretty \nidentically configured but are very slow by comparison to the others. My \ncolleagues and I are still in the exploration / decision process, we have been \nworking with and learning the database about 2 months.\n \nI'm looking to see \nif anyone knows of O/S or hardware issues right off the bat or can recommend a \ndebug method, log checking, etc. path we might follow.\n \nThe program in \nquestion reads the PG database and displays matching query results on a cobol \nscreen, for the point of this topic that is all it is doing. We run the \nsame query from each PC which returns 15 records out of a 6,000 record customer \nDB.\n \nThe \nmachines:\n \n- 2 are 2.0 Ghz \nDells with 512 Ram & XP SP2 - they take just over 2 \nminutes\n- 1 AMD 2.4 with 256 \nRam & XP SP2 - just under 2 secs.\n- 1 AMD 900 Mhz with \n256 Ram & XP SP 1 - just under 2 secs\n- 1 Intel 266 Mhz \nwith 256 Ram & Windows 2000 - 11-13 secs\n \n \nThanks,\n \nJustin Davis\nRapid Systems, Inc.\n800.356.8952",
"msg_date": "Tue, 31 May 2005 13:02:12 -0400",
"msg_from": "\"Justin Davis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "'Fastest' PC's are slowest in the house"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJustin Davis wrote:\n| I have five PC's accessing a PG database that is mounted on a\n| Dell Windows 2003 server. The PC's are accessing the database with a\n| Fujitsu cobol program via ODBC (all machines have same (newest) ODBC\n| driver from PG). 2 of the machines are the newest I have and both\n| pretty identically configured but are very slow by comparison to the\n| others. My colleagues and I are still in the exploration / decision\n| process, we have been working with and learning the database about 2\nmonths.\n|\n| I'm looking to see if anyone knows of O/S or hardware issues right off\n| the bat or can recommend a debug method, log checking, etc. path we\n| might follow.\n|\n| The program in question reads the PG database and displays matching\n| query results on a cobol screen, for the point of this topic that is all\n| it is doing. We run the same query from each PC which returns 15\n| records out of a 6,000 record customer DB.\n|\n| The machines:\n|\n| - 2 are 2.0 Ghz Dells with 512 Ram & XP SP2 - they take just over 2\nminutes\n| - 1 AMD 2.4 with 256 Ram & XP SP2 - just under 2 secs.\n| - 1 AMD 900 Mhz with 256 Ram & XP SP 1 - just under 2 secs\n| - 1 Intel 266 Mhz with 256 Ram & Windows 2000 - 11-13 secs\n|\n\nHello, Justin.\n\nWhile re-reading your post, I was (still) under the impression that\nthose machines are all client machines and that there is only one\ndatabase they are all accessing. Is my impression true?\n\nIf so, then I'm afraid there must be some other issue you've been\nhitting, because from the viewpoint of a postmaster, it is completely\nirrelevant who the client is. Unless so, can you please provide some\nevidence that the issue at hand really has to do with the PostgreSQL\nquery shipping to those Dells (profiling, for example), so we have\nsomething to work from?\n\nMy assertion though is that there's either an issue in the ODBC layer,\nor the COBOL program you're running (be it your code or the runtime).\n\nWhile at it, and completely unrelated, I'm not sure that, both from the\nperformance and reliability viewpoint, running production PostgreSQL on\na Windows machine may be the best possible decision. If you have the\nluxury of experimenting, and unless your side-goal is to run-proof the\nWindows version of PostgreSQL, I'd suggest you try a couple of\nalternatives, such as Linux, BSD or even Solaris, whichever you feel\nwill offer you better future support.\n\nIf you choose to run it on Windows afterall, I'd kindly advise you to do\nyour best to stay on the safe side of the story with a double-checked\nbackup strategy, solely because the Windows version of PostgreSQL is a\nnew product and not widely used in production environments, so there is\nnot much expertise yet in the specifics of keeping it performant, stable\nand most of all, how to tackle things after the worst has happened.\n\nKind regards,\n- --\nGrega Bremec\ngregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFCnqfgfu4IwuB3+XoRApSRAJ0aJYEIEnJZlw2TeLtSO/1+qmoLHACbBAjS\nLahS3A/YMgVthkvnQ3AJcXg=\n=Cl6f\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 02 Jun 2005 08:32:00 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Fastest' PC's are slowest in the house"
}
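One cheap way to gather the kind of evidence Grega asks for is to time the statements on the server itself: if the backend reports a few milliseconds while the Dells wait two minutes, the time is being lost in the ODBC/COBOL layer or the network rather than in PostgreSQL. Assuming the server runs 8.0, a single postgresql.conf setting logs every statement together with its run time:

# postgresql.conf -- 0 means "log all statements and their durations"
log_min_duration_statement = 0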
] |
[
{
"msg_contents": "\nDo to moderator error (namely, mine), several hundred messages (spread \nacross all the lists) were just approved ...\n\nSorry for all the incoming junk :(\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n",
"msg_date": "Tue, 31 May 2005 14:45:56 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Major flood of mail to lists ..."
}
] |
[
{
"msg_contents": "\n Hello,\n\n Our database increases in size 2.5 times during the day.\nWhat to do to avoid this? Autovacuum running with quite\naggressive settings, FSM settings are high enough.\n\n Database size should be more or less constant but it\nhas high turnover rate (100+ insert/update/delete per second).\n\n Below is \"du -sk\" of database dir during the day. On 4:05\nfull vacuum+reindex runs and database size is once again\nreduced.\n\n Thanks,\n\n Mindaugas\n\nTue May 31 11:00:01 EEST 2005\n533808 /ora/pgsql/base/465436/\nTue May 31 11:30:01 EEST 2005\n567344 /ora/pgsql/base/465436/\nTue May 31 12:00:01 EEST 2005\n578632 /ora/pgsql/base/465436/\nTue May 31 12:30:01 EEST 2005\n586336 /ora/pgsql/base/465436/\nTue May 31 13:00:01 EEST 2005\n594716 /ora/pgsql/base/465436/\nTue May 31 13:30:01 EEST 2005\n604932 /ora/pgsql/base/465436/\nTue May 31 14:00:01 EEST 2005\n613668 /ora/pgsql/base/465436/\nTue May 31 14:30:01 EEST 2005\n625752 /ora/pgsql/base/465436/\nTue May 31 15:00:01 EEST 2005\n637704 /ora/pgsql/base/465436/\nTue May 31 15:30:01 EEST 2005\n649700 /ora/pgsql/base/465436/\nTue May 31 16:00:01 EEST 2005\n657392 /ora/pgsql/base/465436/\nTue May 31 16:30:02 EEST 2005\n668228 /ora/pgsql/base/465436/\nTue May 31 17:00:01 EEST 2005\n676332 /ora/pgsql/base/465436/\nTue May 31 17:30:01 EEST 2005\n686376 /ora/pgsql/base/465436/\nTue May 31 18:00:01 EEST 2005\n694080 /ora/pgsql/base/465436/\nTue May 31 18:30:02 EEST 2005\n705876 /ora/pgsql/base/465436/\nTue May 31 19:00:01 EEST 2005\n713916 /ora/pgsql/base/465436/\nTue May 31 19:30:01 EEST 2005\n725460 /ora/pgsql/base/465436/\nTue May 31 20:00:01 EEST 2005\n733892 /ora/pgsql/base/465436/\nTue May 31 20:30:01 EEST 2005\n745344 /ora/pgsql/base/465436/\nTue May 31 21:00:01 EEST 2005\n753048 /ora/pgsql/base/465436/\nTue May 31 21:30:02 EEST 2005\n768228 /ora/pgsql/base/465436/\nTue May 31 22:00:01 EEST 2005\n804796 /ora/pgsql/base/465436/\nTue May 31 22:30:01 EEST 2005\n858840 /ora/pgsql/base/465436/\nTue May 31 23:00:02 EEST 2005\n902684 /ora/pgsql/base/465436/\nTue May 31 23:30:01 EEST 2005\n939796 /ora/pgsql/base/465436/\nWed Jun 1 00:00:02 EEST 2005\n990840 /ora/pgsql/base/465436/\nWed Jun 1 00:30:11 EEST 2005\n1005316 /ora/pgsql/base/465436/\nWed Jun 1 01:00:02 EEST 2005\n1011408 /ora/pgsql/base/465436/\nWed Jun 1 01:30:01 EEST 2005\n1010888 /ora/pgsql/base/465436/\nWed Jun 1 02:00:01 EEST 2005\n1010872 /ora/pgsql/base/465436/\nWed Jun 1 02:30:01 EEST 2005\n1010784 /ora/pgsql/base/465436/\nWed Jun 1 03:00:02 EEST 2005\n1003260 /ora/pgsql/base/465436/\nWed Jun 1 03:30:02 EEST 2005\n1003372 /ora/pgsql/base/465436/\nWed Jun 1 04:00:01 EEST 2005\n1003380 /ora/pgsql/base/465436/\nWed Jun 1 04:30:01 EEST 2005\n426508 /ora/pgsql/base/465436/\nWed Jun 1 05:00:01 EEST 2005\n429036 /ora/pgsql/base/465436/\nWed Jun 1 05:30:01 EEST 2005\n432156 /ora/pgsql/base/465436/\nWed Jun 1 06:00:01 EEST 2005\n433332 /ora/pgsql/base/465436/\nWed Jun 1 06:30:01 EEST 2005\n435052 /ora/pgsql/base/465436/\nWed Jun 1 07:00:02 EEST 2005\n439908 /ora/pgsql/base/465436/\nWed Jun 1 07:30:01 EEST 2005\n450144 /ora/pgsql/base/465436/\nWed Jun 1 08:00:01 EEST 2005\n471120 /ora/pgsql/base/465436/\nWed Jun 1 08:30:02 EEST 2005\n490712 /ora/pgsql/base/465436/\nWed Jun 1 09:00:01 EEST 2005\n501652 /ora/pgsql/base/465436/\nWed Jun 1 09:30:01 EEST 2005\n530128 /ora/pgsql/base/465436/\nWed Jun 1 10:00:01 EEST 2005\n541580 /ora/pgsql/base/465436/\nWed Jun 1 10:30:01 EEST 2005\n571204 /ora/pgsql/base/465436/\n\n",
"msg_date": "Wed, 1 Jun 2005 10:43:06 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to avoid database bloat"
},
{
"msg_contents": "Mindaugas Riauba wrote:\n> Hello,\n> \n> Our database increases in size 2.5 times during the day.\n> What to do to avoid this? Autovacuum running with quite\n> aggressive settings, FSM settings are high enough.\n> \n> Database size should be more or less constant but it\n> has high turnover rate (100+ insert/update/delete per second).\n> \n> Below is \"du -sk\" of database dir during the day. On 4:05\n> full vacuum+reindex runs and database size is once again\n> reduced.\n> \n> Thanks,\n> \n> Mindaugas\n> \n> Tue May 31 11:00:01 EEST 2005\n> 533808 /ora/pgsql/base/465436/\n> Tue May 31 11:30:01 EEST 2005\n> 567344 /ora/pgsql/base/465436/\n> Tue May 31 12:00:01 EEST 2005\n> 578632 /ora/pgsql/base/465436/\n> Tue May 31 12:30:01 EEST 2005\n> 586336 /ora/pgsql/base/465436/\n> Tue May 31 13:00:01 EEST 2005\n> 594716 /ora/pgsql/base/465436/\n> Tue May 31 13:30:01 EEST 2005\n> 604932 /ora/pgsql/base/465436/\n> Tue May 31 14:00:01 EEST 2005\n> 613668 /ora/pgsql/base/465436/\n> Tue May 31 14:30:01 EEST 2005\n> 625752 /ora/pgsql/base/465436/\n> Tue May 31 15:00:01 EEST 2005\n> 637704 /ora/pgsql/base/465436/\n> Tue May 31 15:30:01 EEST 2005\n> 649700 /ora/pgsql/base/465436/\n> Tue May 31 16:00:01 EEST 2005\n> 657392 /ora/pgsql/base/465436/\n> Tue May 31 16:30:02 EEST 2005\n> 668228 /ora/pgsql/base/465436/\n> Tue May 31 17:00:01 EEST 2005\n> 676332 /ora/pgsql/base/465436/\n> Tue May 31 17:30:01 EEST 2005\n> 686376 /ora/pgsql/base/465436/\n> Tue May 31 18:00:01 EEST 2005\n> 694080 /ora/pgsql/base/465436/\n> Tue May 31 18:30:02 EEST 2005\n> 705876 /ora/pgsql/base/465436/\n> Tue May 31 19:00:01 EEST 2005\n> 713916 /ora/pgsql/base/465436/\n> Tue May 31 19:30:01 EEST 2005\n> 725460 /ora/pgsql/base/465436/\n> Tue May 31 20:00:01 EEST 2005\n> 733892 /ora/pgsql/base/465436/\n> Tue May 31 20:30:01 EEST 2005\n> 745344 /ora/pgsql/base/465436/\n> Tue May 31 21:00:01 EEST 2005\n> 753048 /ora/pgsql/base/465436/\n> Tue May 31 21:30:02 EEST 2005\n> 768228 /ora/pgsql/base/465436/\n> Tue May 31 22:00:01 EEST 2005\n> 804796 /ora/pgsql/base/465436/\n> Tue May 31 22:30:01 EEST 2005\n> 858840 /ora/pgsql/base/465436/\n> Tue May 31 23:00:02 EEST 2005\n> 902684 /ora/pgsql/base/465436/\n> Tue May 31 23:30:01 EEST 2005\n> 939796 /ora/pgsql/base/465436/\n> Wed Jun 1 00:00:02 EEST 2005\n> 990840 /ora/pgsql/base/465436/\n> Wed Jun 1 00:30:11 EEST 2005\n> 1005316 /ora/pgsql/base/465436/\n> Wed Jun 1 01:00:02 EEST 2005\n> 1011408 /ora/pgsql/base/465436/\n> Wed Jun 1 01:30:01 EEST 2005\n> 1010888 /ora/pgsql/base/465436/\n> Wed Jun 1 02:00:01 EEST 2005\n> 1010872 /ora/pgsql/base/465436/\n> Wed Jun 1 02:30:01 EEST 2005\n> 1010784 /ora/pgsql/base/465436/\n> Wed Jun 1 03:00:02 EEST 2005\n> 1003260 /ora/pgsql/base/465436/\n> Wed Jun 1 03:30:02 EEST 2005\n> 1003372 /ora/pgsql/base/465436/\n> Wed Jun 1 04:00:01 EEST 2005\n> 1003380 /ora/pgsql/base/465436/\n> Wed Jun 1 04:30:01 EEST 2005\n> 426508 /ora/pgsql/base/465436/\n> Wed Jun 1 05:00:01 EEST 2005\n> 429036 /ora/pgsql/base/465436/\n> Wed Jun 1 05:30:01 EEST 2005\n> 432156 /ora/pgsql/base/465436/\n> Wed Jun 1 06:00:01 EEST 2005\n> 433332 /ora/pgsql/base/465436/\n> Wed Jun 1 06:30:01 EEST 2005\n> 435052 /ora/pgsql/base/465436/\n> Wed Jun 1 07:00:02 EEST 2005\n> 439908 /ora/pgsql/base/465436/\n> Wed Jun 1 07:30:01 EEST 2005\n> 450144 /ora/pgsql/base/465436/\n> Wed Jun 1 08:00:01 EEST 2005\n> 471120 /ora/pgsql/base/465436/\n> Wed Jun 1 08:30:02 EEST 2005\n> 490712 /ora/pgsql/base/465436/\n> Wed Jun 1 09:00:01 EEST 2005\n> 501652 /ora/pgsql/base/465436/\n> Wed Jun 1 09:30:01 
EEST 2005\n> 530128 /ora/pgsql/base/465436/\n> Wed Jun 1 10:00:01 EEST 2005\n> 541580 /ora/pgsql/base/465436/\n> Wed Jun 1 10:30:01 EEST 2005\n> 571204 /ora/pgsql/base/465436/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\nrun autovacuum more often.\n",
"msg_date": "Wed, 01 Jun 2005 10:13:02 +0200",
"msg_from": "stig erikson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat"
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> Our database increases in size 2.5 times during the day.\n> What to do to avoid this? Autovacuum running with quite\n> aggressive settings, FSM settings are high enough.\n\nFirst thing I'd suggest is to get a more detailed idea of exactly\nwhat is bloating --- which tables/indexes are the problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2005 10:39:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat "
},
{
"msg_contents": "\n> > Our database increases in size 2.5 times during the day.\n> > What to do to avoid this? Autovacuum running with quite\n> > aggressive settings, FSM settings are high enough.\n>\n> First thing I'd suggest is to get a more detailed idea of exactly\n> what is bloating --- which tables/indexes are the problem?\n\n I think the most problematic table is this one. After vacuum full/reindex\nit was 20MB in size now (after 6 hours) it is already 70MB and counting.\n\n vacuum verbose output below. msg_id is integer, next_retry - timestamp,\nrecipient - varchar(20). max_fsm_pages = 200000. Another table has foregn\nkey which referenced msg_id in this one.\n\n Thanks,\n\n Mindaugas\n\n$ vacuumdb -v -z -U postgres -t queue database\nINFO: vacuuming \"queue\"\nINFO: index \"queue_msg_id_pk\" now contains 110531 row versions in 5304\npages\nDETAIL: 31454 index row versions were removed.\n95 index pages have been deleted, 63 are currently reusable.\nCPU 0.03s/0.07u sec elapsed 2.50 sec.\nINFO: index \"queue_next_retry\" now contains 110743 row versions in 3551\npages\nDETAIL: 31454 index row versions were removed.\n1163 index pages have been deleted, 560 are currently reusable.\nCPU 0.04s/0.06u sec elapsed 4.93 sec.\nINFO: index \"queue_recipient_idx\" now contains 111596 row versions in 1802\npages\nDETAIL: 31454 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.05u sec elapsed 0.16 sec.\nINFO: \"queue\": removed 31454 row versions in 1832 pages\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.27 sec.\nINFO: \"queue\": found 31454 removable, 110096 nonremovable row versions in\n9133 pages\nDETAIL: 119 dead row versions cannot be removed yet.\nThere were 258407 unused item pointers.\n0 pages are entirely empty.\nCPU 0.12s/0.25u sec elapsed 8.20 sec.\nINFO: analyzing \"queue\"\nINFO: \"queue\": scanned 3000 of 9133 pages, containing 34585 live rows and\n1808 dead rows; 3000 rows in sample, 105288 estimated total rows\nVACUUM\n\n",
"msg_date": "Thu, 2 Jun 2005 10:28:03 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to avoid database bloat "
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n>> First thing I'd suggest is to get a more detailed idea of exactly\n>> what is bloating --- which tables/indexes are the problem?\n\n> I think the most problematic table is this one. After vacuum full/reindex\n> it was 20MB in size now (after 6 hours) it is already 70MB and counting.\n\nAFAICT the vacuum is doing what it is supposed to, and the problem has\nto be just that it's not being done often enough. Which suggests either\nan autovacuum bug or your autovacuum settings aren't aggressive enough.\n\nWhich PG version is this exactly? Some of the earlier autovacuum\nreleases do have known bugs, so it'd be worth your while to update\nif you're not on the latest point release of your series.\n\nI don't know much about autovacuum settings, but if you'll show what\nyou're using someone can probably comment on them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jun 2005 09:45:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat "
},
{
"msg_contents": "\n> >> First thing I'd suggest is to get a more detailed idea of exactly\n> >> what is bloating --- which tables/indexes are the problem?\n>\n> > I think the most problematic table is this one. After vacuum\nfull/reindex\n> > it was 20MB in size now (after 6 hours) it is already 70MB and counting.\n>\n> AFAICT the vacuum is doing what it is supposed to, and the problem has\n> to be just that it's not being done often enough. Which suggests either\n> an autovacuum bug or your autovacuum settings aren't aggressive enough.\n\n -D -d 1 -v 1000 -V 0.5 -a 1000 -A 0.1 -s 10\n\n That is autovacuum settings. Should be aggressive enough I think?\n\n> Which PG version is this exactly? Some of the earlier autovacuum\n> releases do have known bugs, so it'd be worth your while to update\n> if you're not on the latest point release of your series.\n\n 8.0.3\n\n> I don't know much about autovacuum settings, but if you'll show what\n> you're using someone can probably comment on them.\n\n And what in vacuum verbose output suggests that vacuum is not done\noften enough? Current output (table is 100MB already) is below.\n\n Thanks,\n\n Mindaugas\n\n$ vacuumdb -v -z -U postgres -t queue database\nINFO: vacuuming \"queue\"\nINFO: index \"queue_msg_id_pk\" now contains 302993 row versions in 18129\npages\nDETAIL: 102763 index row versions were removed.\n1 index pages have been deleted, 1 are currently reusable.\nCPU 0.87s/0.46u sec elapsed 76.40 sec.\nINFO: index \"queue_next_retry\" now contains 310080 row versions in 9092\npages\nDETAIL: 102763 index row versions were removed.\n675 index pages have been deleted, 658 are currently reusable.\nCPU 0.38s/0.31u sec elapsed 79.47 sec.\nINFO: index \"queue_recipient_idx\" now contains 323740 row versions in 2900\npages\nDETAIL: 102763 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/0.27u sec elapsed 9.06 sec.\nINFO: \"queue\": removed 102763 row versions in 9623 pages\nDETAIL: CPU 0.16s/0.39u sec elapsed 29.26 sec.\nINFO: \"queue\": found 102763 removable, 292342 nonremovable row versions in\n12452 pages\nDETAIL: 14 dead row versions cannot be removed yet.\nThere were 183945 unused item pointers.\n0 pages are entirely empty.\nCPU 1.56s/1.51u sec elapsed 194.39 sec.\nINFO: analyzing \"queue\"\nINFO: \"queue\": scanned 3000 of 12452 pages, containing 72850 live rows and\n7537 dead rows; 3000 rows in sample, 302376 estimated total rows\nVACUUM\n\n",
"msg_date": "Thu, 2 Jun 2005 17:24:47 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to avoid database bloat "
},
{
"msg_contents": "Mindaugas Riauba wrote:\n\n>>AFAICT the vacuum is doing what it is supposed to, and the problem has\n>>to be just that it's not being done often enough. Which suggests either\n>>an autovacuum bug or your autovacuum settings aren't aggressive enough.\n>> \n>>\n>\n> -D -d 1 -v 1000 -V 0.5 -a 1000 -A 0.1 -s 10\n>\n> That is autovacuum settings. Should be aggressive enough I think?\n> \n>\n\nMight e aggressive enough, but might not. I have seen some people run \n-V 0.1. Also you probably don't need -A that low. This could an issue \nwhere analyze results in an inaccurate reltuples value which is \npreventing autovacuum from doing it's job. Could you please run it with \n-d 2 and show us the relevant log output.\n\n>>Which PG version is this exactly? Some of the earlier autovacuum\n>>releases do have known bugs, so it'd be worth your while to update\n>>if you're not on the latest point release of your series.\n>> \n>>\n>\n> 8.0.3\n> \n>\n\nThat should be fine.\n",
"msg_date": "Thu, 02 Jun 2005 12:10:27 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat"
},
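For reference, a sketch of a more aggressive invocation along the lines Matthew suggests -- the same flags as the original, with the vacuum scaling factor lowered and the analyze factor relaxed (the exact numbers are guesses and would need tuning against this workload):

pg_autovacuum -D -d 2 -v 1000 -V 0.1 -a 1000 -A 0.5 -s 10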
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> And what in vacuum verbose output suggests that vacuum is not done\n> often enough? Current output (table is 100MB already) is below.\n\nThe output shows vacuum cleaning up about a third of the table. Usually\npeople like to keep the overhead down to 10% or so ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jun 2005 12:10:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat "
},
{
"msg_contents": "> \"Mindaugas Riauba\" <[email protected]> writes:\n>> And what in vacuum verbose output suggests that vacuum is not done\n>> often enough? Current output (table is 100MB already) is below.\n>\n> The output shows vacuum cleaning up about a third of the table. Usually\n> people like to keep the overhead down to 10% or so ...\n\n\nHe was running with -V 0.5 which should transalate to roughly 50% of the\ntable being touched before a vacuum is issued.j\n\nMatt\n",
"msg_date": "Thu, 2 Jun 2005 23:38:18 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat"
},
{
"msg_contents": "> >>AFAICT the vacuum is doing what it is supposed to, and the problem has\n> >>to be just that it's not being done often enough. Which suggests either\n> >>an autovacuum bug or your autovacuum settings aren't aggressive enough.\n> >\n> > -D -d 1 -v 1000 -V 0.5 -a 1000 -A 0.1 -s 10\n> >\n> > That is autovacuum settings. Should be aggressive enough I think?\n>\n> Might e aggressive enough, but might not. I have seen some people run\n> -V 0.1. Also you probably don't need -A that low. This could an issue\n> where analyze results in an inaccurate reltuples value which is\n> preventing autovacuum from doing it's job. Could you please run it with\n> -d 2 and show us the relevant log output.\n\n Relevant parts are below. And we had to set so aggressive analyze because\notherwise planer statistics were getting old too fast. As I said table has\nvery\nhigh turnover most of the records live here only for a few seconds.\n\n And one more question - anyway why table keeps growing? It is shown that\nit occupies\n<10000 pages and max_fsm_pages = 200000 so vacuum should keep up with the\nchanges?\nOr is it too low according to pg_class system table? What should be the\nreasonable value?\n\nselect sum(relpages) from pg_class;\n sum\n-------\n 77994\n(1 row)\n\n Thanks,\n\n Mindaugas\n\n[2005-06-03 09:30:31 EEST] DEBUG: Performing: ANALYZE \"queue\"\n[2005-06-03 09:30:31 EEST] INFO: table name: database.\"queue\"\n[2005-06-03 09:30:31 EEST] INFO: relid: 465440; relisshared: 0\n[2005-06-03 09:30:31 EEST] INFO: reltuples: 98615.000000; relpages:\n6447\n[2005-06-03 09:30:31 EEST] INFO: curr_analyze_count: 39475111;\ncurr_vacuum_count: 30\n953987\n[2005-06-03 09:30:31 EEST] INFO: last_analyze_count: 39475111;\nlast_vacuum_count: 30\n913733\n[2005-06-03 09:30:31 EEST] INFO: analyze_threshold: 10861;\nvacuum_threshold: 43700\n\n[2005-06-03 09:31:11 EEST] DEBUG: Performing: VACUUM ANALYZE \"queue\"\n[2005-06-03 09:31:12 EEST] INFO: table name: database.\"queue\"\n[2005-06-03 09:31:12 EEST] INFO: relid: 465440; relisshared: 0\n[2005-06-03 09:31:12 EEST] INFO: reltuples: 99355.000000; relpages:\n6447\n[2005-06-03 09:31:12 EEST] INFO: curr_analyze_count: 39480332;\ncurr_vacuum_count: 30\n957872\n[2005-06-03 09:31:12 EEST] INFO: last_analyze_count: 39480332;\nlast_vacuum_count: 30\n957872\n[2005-06-03 09:31:12 EEST] INFO: analyze_threshold: 10935;\nvacuum_threshold: 50677\n\n\n",
"msg_date": "Fri, 3 Jun 2005 11:41:08 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to avoid database bloat"
},
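A quick arithmetic check on the thresholds in the log above, for readers following the numbers. If I'm reading contrib/pg_autovacuum correctly, each threshold is recomputed from the current reltuples as base_value + scaling_factor * reltuples, so with -v 1000 -V 0.5 -a 1000 -A 0.1 the second log entry works out to:

    vacuum_threshold  = 1000 + 0.5 * 99355 = 50677 (rounded down)
    analyze_threshold = 1000 + 0.1 * 99355 = 10935 (rounded down)

Both match the logged values, which is consistent with the reply below: pg_autovacuum is applying the configured settings, and the open question is whether those settings are aggressive enough for this table.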
{
"msg_contents": "Mindaugas Riauba wrote:\n\n>>Might e aggressive enough, but might not. I have seen some people run\n>>-V 0.1. Also you probably don't need -A that low. This could an issue\n>>where analyze results in an inaccurate reltuples value which is\n>>preventing autovacuum from doing it's job. Could you please run it with\n>>-d 2 and show us the relevant log output.\n>> \n>>\n>\n> Relevant parts are below. And we had to set so aggressive analyze because\n>otherwise planer statistics were getting old too fast. As I said table has\n>very\n>high turnover most of the records live here only for a few seconds.\n> \n>\n\nLooked like pg_autovacuum is operating as expected. One of the annoying \nlimitations of pg_autovacuum in current releases is that you can't set \nthresholds on a per table basis. It looks like this table might require \nan even more aggressive vacuum threshold. Couple of thoughts, are you \nsure it's the table that is growing and not the indexes? (assuming this \ntable has indexes on it). \n\n> And one more question - anyway why table keeps growing? It is shown that\n>it occupies\n><10000 pages and max_fsm_pages = 200000 so vacuum should keep up with the\n>changes?\n>Or is it too low according to pg_class system table? What should be the\n>reasonable value?\n> \n>\n\nDoes the table keep growing? Or does it grow to a point an then stop \ngrowing? It's normal for a table to operate at a steady state size that \nis bigger that it's fresly \"vacuum full\"'d size. And with -V set at 0.5 \nit should be at a minimum 50% larger than it's minimum size. Your email \nbefore said that this table went from 20M to 70M but does it keep \ngoing? Perhaps it would start leveling off at this point, or some point \nshortly there-after.\n\nAnyway, I'm not sure if there is something else going on here, but from \nthe log it looks as though pg_autovacuum is working as advertised. \n\n",
"msg_date": "Fri, 03 Jun 2005 12:43:06 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid database bloat"
},
{
"msg_contents": "\n> Looked like pg_autovacuum is operating as expected. One of the annoying\n> limitations of pg_autovacuum in current releases is that you can't set\n> thresholds on a per table basis. It looks like this table might require\n> an even more aggressive vacuum threshold. Couple of thoughts, are you\n> sure it's the table that is growing and not the indexes? (assuming this\n> table has indexes on it).\n\n Yes I am sure (oid2name :) ).\n\n> > And one more question - anyway why table keeps growing? It is shown\nthat\n> >it occupies\n> ><10000 pages and max_fsm_pages = 200000 so vacuum should keep up with the\n> >changes?\n> >Or is it too low according to pg_class system table? What should be the\n> >reasonable value?\n> >\n> >\n>\n> Does the table keep growing? Or does it grow to a point an then stop\n> growing? It's normal for a table to operate at a steady state size that\n> is bigger that it's fresly \"vacuum full\"'d size. And with -V set at 0.5\n> it should be at a minimum 50% larger than it's minimum size. Your email\n> before said that this table went from 20M to 70M but does it keep\n> going? Perhaps it would start leveling off at this point, or some point\n> shortly there-after.\n\n Yes it keeps growing. And the main problem is that performance starts to\nsuffer from that. Do not forget that we are talking about 100+ insert/\nupdate/select/delete cycles per second.\n\n> Anyway, I'm not sure if there is something else going on here, but from\n> the log it looks as though pg_autovacuum is working as advertised.\n\n Something is out there :). But how to fix that bloat? More aggressive\nautovacuum settings? Even larger FSM?\n Do not know if that matters but database has very many connections to\nit (400-600) and clients are doing mostly asynchronous operations.\n\n How to find out where this extra space gone?\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Mon, 6 Jun 2005 17:58:10 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to avoid database bloat"
}
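For anyone else landing on the "where did the extra space go" question at the end of this thread, two low-tech checks, sketched with nothing beyond the standard 7.4/8.0 catalogs and commands:

    -- largest relations by pages, as of the last VACUUM/ANALYZE
    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    WHERE relkind IN ('r', 'i', 't')
    ORDER BY relpages DESC
    LIMIT 10;

Comparing the table's relpages against its indexes and its TOAST table shows whether the growth is in the heap or in index bloat (REINDEX handles the latter). A database-wide VACUUM VERBOSE should also print, at the very end, how many free-space-map pages are actually needed, which is the direct way to sanity-check max_fsm_pages = 200000.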
] |
[
{
"msg_contents": "Hi All,\n\nI have been reading about increasing PostgreSQL performance by relocating the\npg_xlog to a disk other than the one where the database resides. I have the\nfollowing pg_xlogs on my system.\n\n/raid02/databases/pg_xlog\n/raid02/rhdb_databases/pg_xlog\n/raid02/databases-8.0.0/pg_xlog\n/var/lib/pgsql/data/pg_xlog\n\nThe second and third entries are from backups that were made before major\nupgrades so I am expecting that I can blow them away.\n\nThe first entry is in the directory where my databases are located.\n\nI have no idea why the forth entry is there. It is in the PostgreSQL\ninstallation directory.\n\nHere is my filesystem.\n# df -k\nFilesystem 1K-blocks Used Available Use% Mounted on\n/dev/sda6 9052552 2605292 5987404 31% /\n/dev/sda1 101089 32688 63182 35% /boot\nnone 1282880 0 1282880 0% /dev/shm\n/dev/sdb2 16516084 32836 15644256 1% /raid01\n/dev/sdb3 16516084 1156160 14520932 8% /raid02\n/dev/sda5 2063504 32916 1925768 2% /tmp\n/dev/sda3 4127108 203136 3714324 6% /var\n/dev/cdrom 494126 494126 0 100% /mnt/cdrom\n\nCan I\n\n1) stop the postmaster\n2) rm -rf /var/lib/pgsql/data/pg_xlog\n3) mv /raid02/databases/pg_xlog /var/lib/pgsql/data/pg_xlog\n4) ln -s /var/lib/pgsql/data/pg_xlog /raid02/databases/pg_xlog\n5) start postmaster\n\nIf I can do that and place the pg_xlog in the installation directory will I\ncreate any installation issues the next time I upgrade PostgreSQL?\n\nTIA\n\nKind Regards,\nKeith\n",
"msg_date": "Wed, 1 Jun 2005 11:31:13 -0400",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Moving pg_xlog"
},
{
"msg_contents": "\"Keith Worthington\" <[email protected]> writes:\n> I have been reading about increasing PostgreSQL performance by relocating the\n> pg_xlog to a disk other than the one where the database resides. I have the\n> following pg_xlogs on my system.\n\n> /raid02/databases/pg_xlog\n> /raid02/rhdb_databases/pg_xlog\n> /raid02/databases-8.0.0/pg_xlog\n> /var/lib/pgsql/data/pg_xlog\n\n> I have no idea why the forth entry is there. It is in the PostgreSQL\n> installation directory.\n\nIt's there because the RPM sets up a database under /var/lib/pgsql/data.\n\n> 1) stop the postmaster\n> 2) rm -rf /var/lib/pgsql/data/pg_xlog\n> 3) mv /raid02/databases/pg_xlog /var/lib/pgsql/data/pg_xlog\n> 4) ln -s /var/lib/pgsql/data/pg_xlog /raid02/databases/pg_xlog\n> 5) start postmaster\n\nPut the xlog anywhere BUT there!!!!!!!!!\n\n> If I can do that and place the pg_xlog in the installation directory will I\n> create any installation issues the next time I upgrade PostgreSQL?\n\nOh, the installation will be just fine ... but your database will not\nbe after the upgrade wipes out your WAL. Put the xlog under some\nnon-system-defined directory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2005 12:19:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog "
},
{
"msg_contents": "On Wed, 01 Jun 2005 12:19:40 -0400, Tom Lane wrote\n> \"Keith Worthington\" <[email protected]> writes:\n> > I have been reading about increasing PostgreSQL performance\n> > by relocating the pg_xlog to a disk other than the one\n> > where the database resides. I have the following pg_xlogs\n> > on my system.\n> >\n> > /raid02/databases/pg_xlog\n> > /raid02/rhdb_databases/pg_xlog\n> > /raid02/databases-8.0.0/pg_xlog\n> > /var/lib/pgsql/data/pg_xlog\n> >\n> > I have no idea why the forth entry is there. It is in the PostgreSQL\n> > installation directory.\n> \n> It's there because the RPM sets up a database under /var/lib/pgsql/data.\n> \n> > 1) stop the postmaster\n> > 2) rm -rf /var/lib/pgsql/data/pg_xlog\n> > 3) mv /raid02/databases/pg_xlog /var/lib/pgsql/data/pg_xlog\n> > 4) ln -s /var/lib/pgsql/data/pg_xlog /raid02/databases/pg_xlog\n> > 5) start postmaster\n> \n> Put the xlog anywhere BUT there!!!!!!!!!\n> \n> > If I can do that and place the pg_xlog in the installation\n> > directory will I create any installation issues the next\n> > time I upgrade PostgreSQL?\n> \n> Oh, the installation will be just fine ... but your database will not\n> be after the upgrade wipes out your WAL. Put the xlog under some\n> non-system-defined directory.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\nThanks Tom. I am glad I asked before I leaped. 8-0\n\nIs there a convention that most people follow. It would seem that anywhere in\nthe installation directory is a bad idea. From what I have read on other\nthreads it does not want to be in the database directory since in most cases\nthat would put it on the same disk as the database.\n\nI am assuming due to lack of reaction that the symbolic link is not an issue.\n Is there a cleaner or more appropriate way of moving the pg_xlog.\n\nFinally, am I correct in assuming that as long as the postmaster is shut down\nmoving the log is safe?\n\nKind Regards,\nKeith\n",
"msg_date": "Wed, 1 Jun 2005 16:11:43 -0400",
"msg_from": "\"Keith Worthington\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving pg_xlog "
},
{
"msg_contents": "\"Keith Worthington\" <[email protected]> writes:\n> On Wed, 01 Jun 2005 12:19:40 -0400, Tom Lane wrote\n>> Put the xlog anywhere BUT there!!!!!!!!!\n\n> Is there a convention that most people follow. It would seem that\n> anywhere in the installation directory is a bad idea. From what I\n> have read on other threads it does not want to be in the database\n> directory since in most cases that would put it on the same disk as\n> the database.\n\nI don't know of any fixed convention. The way I would be inclined to do\nit, given that I wanted the data on disk 1 (with mount point /disk1) and\nxlog on disk 2 (with mount point /disk2) is to create postgres-owned\ndirectories /disk1/postgres/ and /disk2/postgres/, and then within those\nput the data directory (thus, /disk1/postgres/data/) and xlog directory\n(/disk2/postgres/pg_xlog/). Having an extra level of postgres-owned\ndirectory is handy since it makes it easier to do database admin work\nwithout being root --- once you've created those two directories and\nchown'd them to postgres, everything else can be done as the postgres user.\n\nNow that I think about it, you were (if I understood your layout\ncorrectly) proposing to put the xlog on your system's root disk.\nThis is probably a bad idea for performance, because there will always\nbe other traffic to the root disk. What you are really trying to\naccomplish is to make sure the xlog is on a disk spindle that has no\nother traffic besides xlog, so that the disk heads never have to move\noff the current xlog file. The xlog traffic is 100% sequential writes\nand so you can cut the seeks involved to near nil if you can dedicate\na spindle to it.\n\n> I am assuming due to lack of reaction that the symbolic link is not an issue.\n> Is there a cleaner or more appropriate way of moving the pg_xlog.\n\nNo, that's exactly the way to do it.\n\n> Finally, am I correct in assuming that as long as the postmaster is shut down\n> moving the log is safe?\n\nRight. You can move the data directory too if you want. AFAIR the only\nposition-dependent stuff in there is (if you are using tablespaces in\n8.0) the tablespace symlinks under data/pg_tblspc/. You can fix those\nby hand if you have a need to move a tablespace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2005 16:30:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog "
},
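Putting Tom's layout advice together with the steps from the start of the thread, the move could look roughly like this - a sketch only, where /disk2/postgres stands in for whatever dedicated xlog spindle is available and /raid02/databases is the data directory from the original post:

    # as root: create a postgres-owned parent directory on the xlog disk
    mkdir -p /disk2/postgres
    chown postgres /disk2/postgres

    # as postgres, with the postmaster stopped
    pg_ctl -D /raid02/databases stop
    mv /raid02/databases/pg_xlog /disk2/postgres/pg_xlog
    ln -s /disk2/postgres/pg_xlog /raid02/databases/pg_xlog
    pg_ctl -D /raid02/databases start

The symlink step is the same one already agreed on above; the only change from the original plan is that the link target lives on a dedicated, non-system disk.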
{
"msg_contents": "Keith Worthington wrote:\n\n>On Wed, 01 Jun 2005 12:19:40 -0400, Tom Lane wrote\n> \n>\n>>\"Keith Worthington\" <[email protected]> writes:\n>> \n>>\n>>>I have been reading about increasing PostgreSQL performance\n>>>by relocating the pg_xlog to a disk other than the one\n>>>where the database resides. I have the following pg_xlogs\n>>>on my system.\n>>>\n>>>/raid02/databases/pg_xlog\n>>>/raid02/rhdb_databases/pg_xlog\n>>>/raid02/databases-8.0.0/pg_xlog\n>>>/var/lib/pgsql/data/pg_xlog\n>>>\n>>>I have no idea why the forth entry is there. It is in the PostgreSQL\n>>>installation directory.\n>>> \n>>>\n>>It's there because the RPM sets up a database under /var/lib/pgsql/data.\n>>\n>> \n>>\n>>>1) stop the postmaster\n>>>2) rm -rf /var/lib/pgsql/data/pg_xlog\n>>>3) mv /raid02/databases/pg_xlog /var/lib/pgsql/data/pg_xlog\n>>>4) ln -s /var/lib/pgsql/data/pg_xlog /raid02/databases/pg_xlog\n>>>5) start postmaster\n>>> \n>>>\n>>Put the xlog anywhere BUT there!!!!!!!!!\n>>\n>> \n>>\n>>>If I can do that and place the pg_xlog in the installation\n>>>directory will I create any installation issues the next\n>>>time I upgrade PostgreSQL?\n>>> \n>>>\n>>Oh, the installation will be just fine ... but your database will not\n>>be after the upgrade wipes out your WAL. Put the xlog under some\n>>non-system-defined directory.\n>>\n>>\t\t\tregards, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 7: don't forget to increase your free space map settings\n>> \n>>\n>\n>Thanks Tom. I am glad I asked before I leaped. 8-0\n>\n>Is there a convention that most people follow. It would seem that anywhere in\n>the installation directory is a bad idea. From what I have read on other\n>threads it does not want to be in the database directory since in most cases\n>that would put it on the same disk as the database.\n>\n> \n>\nWe tend to use somthing that associates the WAL with the appropriate \ncluster, like\n\n/var/lib/CLUSTER for the data\n/var/lib/CLUSTER_WAL for WAL files.\n\n>I am assuming due to lack of reaction that the symbolic link is not an issue.\n> Is there a cleaner or more appropriate way of moving the pg_xlog.\n>\n> \n>\n\nA symbolic link is the standard way to do it.\n\n>Finally, am I correct in assuming that as long as the postmaster is shut down\n>moving the log is safe?\n> \n>\n\nYou are correct. Moving the WAL files with the postmaster running would \nbe a very bad thing.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp. \n\n",
"msg_date": "Wed, 01 Jun 2005 16:31:38 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog"
},
{
"msg_contents": "Tom Lane wrote:\n...\n\n>Now that I think about it, you were (if I understood your layout\n>correctly) proposing to put the xlog on your system's root disk.\n>This is probably a bad idea for performance, because there will always\n>be other traffic to the root disk. What you are really trying to\n>accomplish is to make sure the xlog is on a disk spindle that has no\n>other traffic besides xlog, so that the disk heads never have to move\n>off the current xlog file. The xlog traffic is 100% sequential writes\n>and so you can cut the seeks involved to near nil if you can dedicate\n>a spindle to it.\n>\n>\nI certainly agree with what you wrote. But my understanding is that if\nyou only have 2 arrays, then moving xlog onto the array not on the\ndatabase is better than having it with the database. It isn't optimum,\nbut it is better. Because that way there isn't as much contention\nbetween the database and xlog.\n\nJohn\n=:->",
"msg_date": "Wed, 01 Jun 2005 16:27:48 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog"
},
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> Tom Lane wrote:\n>> Now that I think about it, you were (if I understood your layout\n>> correctly) proposing to put the xlog on your system's root disk.\n>> This is probably a bad idea for performance, ...\n\n> I certainly agree with what you wrote. But my understanding is that if\n> you only have 2 arrays, then moving xlog onto the array not on the\n> database is better than having it with the database. It isn't optimum,\n> but it is better. Because that way there isn't as much contention\n> between the database and xlog.\n\nIf the machine isn't doing much else than running the database, then\nyeah, that may be the best available option. If there are other things\ngoing on then you have to think about how much competition there is for\nthe root disk.\n\nBut the impression I had from the OP's df listing is that he has several\ndisks available ... so he ought to be able to find one that doesn't need\nto do anything except xlog.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jun 2005 02:06:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog "
}
] |
[
{
"msg_contents": "Hello,\n\nI'm the fellow who was interviewed in the fall about using PostgreSQL on\n1-800-Save-A-Pet.com:\nhttp://techdocs.postgresql.org/techdocs/interview-stosberg.php\n\nThe site traffic continues to grow, and we are now seeing parts of the\nday where the CPU load (according to MRTG graphs) on the database server\nis stuck at 100%. I would like to improve this, and I'm not sure where\nto look first. The machine is a dedicated PostgreSQL server which two\nweb server boxes talk to. \n\nI've used PQA to analyze my queries and happy overall with how they are\nrunning. About 55% of the query time is going to variations of the pet\nsearching query, which seems like where it should be going. The query is\nfrequent and complex. It has already been combed over for appropriate\nindexing.\n\nI'm more interested at this point in tuning the software and hardware\ninfrastructure, but would like to get a sense about which choices will\nbring the greatest reward. \n\nLet me explain some avenues I'm considering. \n\n - We are currently running 7.4. If I upgrade to 8.0 and DBD::Pg 1.42,\n then the \"server side prepare\" feature will be available for use. \n We do run the same queries a number of times. \n\n - PhpPgAds seems to sucking up about 7.5% of our query time and is\n unrelated to the core application. We could move this work to another\n machine. The queries it generates seem like they have some room to\n optimized, or simply don't need to be run in some cases. However, I\n would like to stay out of modifying third-party code and PHP if\n possible.\n\n - I saw the hardware tip to \"Separate the Transaction Log from the\n Database\". We have about 60% SELECT statements and 14% UPDATE\n statements. Focusing more on SELECT performance seems more important\n for us.\n\n - We have tried to tune 'shared_buffers' some, but haven't seen a\n noticeable performance improvement. \n\n Our hardware: Dual 3 Ghz processors 3 GB RAM, running on FreeBSD. \n\n I'm not quite sure how to check our average connection usage, but\n maybe this is helpful: When I do:\n select count(*) from pg_stat_activity ;\n I get values around 170. \n\n We have these values:\n max_connections = 400\n shared_buffers = 4096\n\nMost other values in postgresql.conf are still at the their defaults.\n\nAny suggestions are which avenues might offer the most bang for the buck\nare appreciated!\n\n( I have already found: http://www.powerpostgresql.com/PerfList/ and it has\nbeen a very helpful source of suggestions. )\n\n Mark\n\n\n",
"msg_date": "Wed, 1 Jun 2005 19:19:03 +0000 (UTC)",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Most effective tuning choices for busy website?"
},
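On the "server side prepare" avenue above: independent of the driver upgrade, the same idea can be tried at the SQL level with PREPARE/EXECUTE, which parses and plans a statement once per session and then re-executes it with new parameters. A minimal sketch - the table and column names are invented for illustration and are not the real pet-search schema:

    PREPARE pet_search (text, int) AS
        SELECT pet_id, pet_name
        FROM pets
        WHERE species = $1
          AND shelter_id = $2;

    EXECUTE pet_search('dog', 42);

The caveat is that the plan is built without seeing the actual parameter values, so it is worth timing the prepared and unprepared forms of the real query before assuming it is a win.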
{
"msg_contents": "Mark Stosberg wrote:\n> I've used PQA to analyze my queries and happy overall with how they are\n> running. About 55% of the query time is going to variations of the pet\n> searching query, which seems like where it should be going. The query is\n> frequent and complex. It has already been combed over for appropriate\n> indexing.\n\nIt might be worth posting the EXPLAIN ANALYZE and relevant schema \ndefinitions for this query, in case there is additional room for \noptimization.\n\n> Our hardware: Dual 3 Ghz processors 3 GB RAM, running on FreeBSD.\n\nDisk?\n\nYou are presumably using Xeon processors, right? If so, check the list \narchives for information on the infamous \"context switching storm\" that \ncauses performance problems for some people using SMP Xeons.\n\n-Neil\n",
"msg_date": "Mon, 06 Jun 2005 12:04:29 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most effective tuning choices for busy website?"
},
{
"msg_contents": "Neil Conway wrote:\n\n> Mark Stosberg wrote:\n>> I've used PQA to analyze my queries and happy overall with how they are\n>> running. About 55% of the query time is going to variations of the pet\n>> searching query, which seems like where it should be going. The query is\n>> frequent and complex. It has already been combed over for appropriate\n>> indexing.\n> \n> It might be worth posting the EXPLAIN ANALYZE and relevant schema\n> definitions for this query, in case there is additional room for\n> optimization.\n> \n>> Our hardware: Dual 3 Ghz processors 3 GB RAM, running on FreeBSD.\n> \n> Disk?\n> \n> You are presumably using Xeon processors, right? If so, check the list\n> archives for information on the infamous \"context switching storm\" that\n> causes performance problems for some people using SMP Xeons.\n\nI wanted to follow-up to report a positive outcome to tuning this Xeon\nSMP machine on FreeBSD. We applied the following techniques, and saw the\naverage CPU usage drop by about 25%.\n\n- in /etc/sysctl.conf, we set it to use raw RAM for shared memory:\nkern.ipc.shm_use_phys=1\n\n- We updated our kernel config and postmaster.conf to set\n shared_buffers to about 8000.\n\n- We disabled hyperthreading in the BIOS, which had a label like\n \"Logical Processors? : Disabled\".\n\nI recall there was tweak my co-worker made that's not on my list.\n\nI realize it's not particularly scientific because we changed several things\nat once...but at least it is working well enough for now. \n\n Mark\n \n\n",
"msg_date": "Tue, 14 Jun 2005 17:34:52 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Most effective tuning choices for busy website?"
},
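For scale, shared_buffers is counted in 8 kB pages, so the value used above works out to:

    8000 buffers * 8 kB = 64,000 kB, roughly 62 MB

i.e. a small slice of the 3 GB of RAM, in line with the usual advice for this era of PostgreSQL to keep shared_buffers modest and let the operating system's filesystem cache use the rest.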
{
"msg_contents": "On 06/01/2005-07:19PM, Mark Stosberg wrote:\n> \n> - I saw the hardware tip to \"Separate the Transaction Log from the\n> Database\". We have about 60% SELECT statements and 14% UPDATE\n> statements. Focusing more on SELECT performance seems more important\n> for us.\n> \n\nI would think that would help SELECT If the spindle isn't busy writing\nTransaction log it can be reading for your SELECTs. \n\nYou did say you were CPU bound though.\n\n-- \n------------------------------------------------------------\nChristopher Weimann\nhttp://www.k12usa.com\nK12USA.com Cool Tools for Schools!\n------------------------------------------------------------\n",
"msg_date": "Tue, 14 Jun 2005 19:59:37 -0400",
"msg_from": "Christopher Weimann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most effective tuning choices for busy website?"
}
] |
[
{
"msg_contents": "We've seen PostgreSQL performance as a dspam database be simply stellar on \nsome machines with absolutely no tuning to the postgres.conf, and no \nstatistics target altering.\n\nSome months ago, I moved my domains from a crusty old generic PIII 733 to a \nbrand new Athlon 3000+ server that I was leasing. The load was very high, \nand it was all PostgreSQL. I cried and screamed on #postgresql for hours, \nand eventually it was discovered that the following command fixed everything \nand suddenly performance was lightning fast again:\n\nalter table \"dspam_token_data\" alter \"token\" set statistics 200; analyze;\n\nWe had set up about 200 domains on a SuperMicro P4 2.4GHz server, and it was \nworking great too (without the above tweak!), but then the motherboard \nstarted having issues and the machine would lock up every few weeks. So we \nmoved everything to a brand new SuperMicro P4 3.0GHz server last week, and \nnow performance is simply appalling. Whereas before the load average was \nsomething around 0.02, it's now regularly at 4 (all postgres), and there's \nhundreds of messages in the queue waiting. Lots of people are complaining \nabout slow mail delivery, and I've been up for 2 days trying to fix this with \nno success.\n\nOriginally, the problem was a lot worse, but I spent a lot of time tuning the \npostgresql.conf, and changed the statistics target shown above, and this made \nthings okay (by okay I mean that it's okay at night, but during the day \nseveral hundred messages will regularly be waiting for delivery).\n\nI found this response to my original post, and tried every single suggestion \nin it, which has not helped:\n\nhttp://archives.postgresql.org/pgsql-performance/2004-11/msg00416.php\n\nI'm sorry to come begging for help, but this is a MAJOR problem with no \nlogical explanation, and is almost certainly the fault of PostgreSQL, because \nthe database and contents have been identical across all the hosts, and some \nwork beautifully with no tuning whatsoever; so I don't feel I'm wrong in \nplacing blame...\n\nAll machines run Gentoo Linux. All have the same package versions. Disk I/O \ndoesn't seem to be related - the 733MHz server had a 33MB/s IDE drive, the \n2.4GHz server had a RAID 5 with 3 ultra320 drives: neither of those required \nany tuning. The new 3.0GHz has a mirror raid with 2 ultra320 drives, and the \n3000+ that tuning fixed had an ultra160 disk not in a RAID.\n\nI really like PostgreSQL, and really don't want to use MySQL for dspam, but if \nI can't get this worked out ASAP I'm going to have to change for the sake of \nour customers. Any help is GREATLY appreciated!\n\nI'm online on instant messengers (contact IDs shown below), monitoring my \nemail, and will be on #postgresql on Freenode.\n\nCheers,\n-- \nCasey Allen Shobe | http://casey.shobe.info\[email protected] | cell 425-443-4653\nAIM & Yahoo: SomeLinuxGuy | ICQ: 1494523\nSeattleServer.com, Inc. | http://www.seattleserver.com\n",
"msg_date": "Wed, 1 Jun 2005 20:19:13 +0000",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance nightmare with dspam (urgent)"
},
{
"msg_contents": "On Wednesday 01 June 2005 20:19, Casey Allen Shobe wrote:\n> We've seen PostgreSQL performance as a dspam database be simply stellar on\n> some machines with absolutely no tuning to the postgres.conf, and no\n> statistics target altering.\n\nWow. That took a phenomenally long time to post. I asked on IRC, and they \nsaid it is \"normal\" for the PG lists to bee so horribly slow. What gives? I \nthink you guys really need to stop using majordomo, but I'll avoid blaming \nthat for the time being. Maybe a good time for the performance crew to look \nat the mailing list software instead of just PG.\n\n> We had set up about 200 domains on a SuperMicro P4 2.4GHz server, and it was \n> working great too (without the above tweak!), but then the motherboard \n> started having issues and the machine would lock up every few weeks. So we \n> moved everything to a brand new SuperMicro P4 3.0GHz server last week, and \n> now performance is simply appalling.\n\nWell, we actually added about 10 more domains right around the time of the \nmove, not thinking anything of it. Turns out that simply set the disk usage \nover the threshhold of what the drive could handle. At least, that's the \nbest guess of the situation - I don't really know whether to believe that \nbecause the old machine had a 3-disk RAID5 so it should have been half the \nspeed of the new machine. However, analyzing the statements showed that they \nwere all using index scans as they should, and no amount of tuning managed to \nreduce the I/O to an acceptable level.\n\nAfter lots of tuning, we moved pg_xlog onto a separate disk, and switched \ndspam from TEFT to TOE mode (which reduces the number of inserts). By doing \nthis, the immediate problem was alleviated.\n\nIndeed the suggestion in link in my previous email to add an extra index was a \nBAD idea, since it increased the amount of work that had to be done per \nwrite, and didn't help anything.\n\nLong-term, whenever we hit the I/O limit again, it looks like we really don't \nhave much of a solution except to throw more hardware (mainly lots of disks \nin RAID0's) at the problem. :( Fortunately, with the above two changes I/O \nusage on the PG data disk is a quarter of what it was, so theoretically we \nshould be able to quadruple the number of users on current hardware.\n\nOur plan forward is to increase the number of disks in the two redundant mail \nservers, so that each has a single ultra320 disk for O/S and pg_xlog, and a \n3-disk RAID0 for the data. This should triple our current capacity.\n\nThe general opinion of the way dspam uses the database among people I've \ntalked to on #postgresql is not very good, but of course the dspam folk blame \nPostgreSQL and say to use MySQL if you want reasonable performance. Makes it \nreal fun to be a DSpam+PostgreSQL user when limits are reached, since \neveryone denies responsibility. Fortunately, PostgreSQL people are pretty \nhelpful even if they think the client software sucks. :)\n\nCheers,\n-- \nCasey Allen Shobe | http://casey.shobe.info\[email protected] | cell 425-443-4653\nAIM & Yahoo: SomeLinuxGuy | ICQ: 1494523\nSeattleServer.com, Inc. | http://www.seattleserver.com\n",
"msg_date": "Mon, 6 Jun 2005 14:54:47 +0000",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
{
"msg_contents": "Casey Allen Shobe wrote:\n> On Wednesday 01 June 2005 20:19, Casey Allen Shobe wrote:\n> \n...\n> Long-term, whenever we hit the I/O limit again, it looks like we really don't \n> have much of a solution except to throw more hardware (mainly lots of disks \n> in RAID0's) at the problem. :( Fortunately, with the above two changes I/O \n> usage on the PG data disk is a quarter of what it was, so theoretically we \n> should be able to quadruple the number of users on current hardware.\n> \n\nBe very careful in this situation. If any disks in a RAID0 fails, the\nentire raid is lost. You *really* want a RAID10. It takes more drives,\nbut then if anything dies you don't lose everything.\n\nIf you are running RAID0 and you *really* want performance, and aren't\nconcerned about safety (at all), you could also set fsync=false. That\nshould also speed things up. But you are really risking corruption/data\nloss on your system.\n\n> Our plan forward is to increase the number of disks in the two redundant mail \n> servers, so that each has a single ultra320 disk for O/S and pg_xlog, and a \n> 3-disk RAID0 for the data. This should triple our current capacity.\n\nI don't know if you can do it, but it would be nice to see this be 1\nRAID1 for OS, 1 RAID10 for pg_xlog, and another RAID10 for data. That is\nthe recommended performance layout. It takes quite a few drives (minimum\nof 10). But it means your data is safe, and your performance should be\nvery good.\n\n> \n> The general opinion of the way dspam uses the database among people I've \n> talked to on #postgresql is not very good, but of course the dspam folk blame \n> PostgreSQL and say to use MySQL if you want reasonable performance. Makes it \n> real fun to be a DSpam+PostgreSQL user when limits are reached, since \n> everyone denies responsibility. Fortunately, PostgreSQL people are pretty \n> helpful even if they think the client software sucks. :)\n> \n\nI can't say how dspam uses the database. But they certainly could make\nassumptions about how certain actions are done by the db, which are not\nquite true with postgres. (For instance MySQL can use an index to return\ninformation, because Postgres supports transactions, it cannot, because\neven though a row is in the index, it may not be visible to the current\ntransaction.)\n\nThey also might be doing stuff like \"select max(row)\" instead of \"select\nrow ORDER BY row DESC LIMIT 1\". In postgres the former will be a\nsequential scan, the latter will be an index scan. Though I wonder about\n\"select max(row) ORDER BY row DESC LIMIT 1\". to me, that should still\nreturn the right answer, but I'm not sure.\n\n> Cheers,\n\nGood luck,\nJohn\n=:->",
"msg_date": "Mon, 06 Jun 2005 10:08:23 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
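To make the aggregate point concrete, this is the usual rewrite. The table and column come from the dspam_token_data example earlier in the thread, but whether dspam actually issues a query of this shape is not established here - it is only an illustration:

    -- 7.4/8.0 plan this as a full sequential scan plus aggregate
    SELECT max(token) FROM dspam_token_data;

    -- this form can walk an index on (token) backwards and stop after one row
    SELECT token FROM dspam_token_data
    WHERE token IS NOT NULL
    ORDER BY token DESC LIMIT 1;

EXPLAIN on both forms shows the difference immediately; the IS NOT NULL filter keeps the two equivalent, since max() ignores NULLs while a plain descending sort would return them first.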
{
"msg_contents": "\n\n> PostgreSQL and say to use MySQL if you want reasonable performance.\n\n\tIf you want MySQL performance and reliability with postgres, simply run \nit with fsync deactivated ;)\n\tI'd suggest a controller with battery backed up cache to get rid of the 1 \ncommit = 1 seek boundary.\n\n> Makes it\n> real fun to be a DSpam+PostgreSQL user when limits are reached, since\n> everyone denies responsibility. Fortunately, PostgreSQL people are \n> pretty\n> helpful even if they think the client software sucks. :)\n>\n> Cheers,\n\n\n",
"msg_date": "Mon, 06 Jun 2005 17:11:31 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
{
"msg_contents": "On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:\n>I don't know if you can do it, but it would be nice to see this be 1\n>RAID1 for OS, 1 RAID10 for pg_xlog, \n\nThat's probably overkill--it's a relatively small sequential-write\npartition with really small writes; I don't see how pg_xlog would\nbenefit from raid10 as opposed to raid1. \n\nMike Stone\n\n",
"msg_date": "Mon, 06 Jun 2005 11:19:11 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
{
"msg_contents": "Michael Stone wrote:\n> On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:\n> \n>> I don't know if you can do it, but it would be nice to see this be 1\n>> RAID1 for OS, 1 RAID10 for pg_xlog, \n> \n> \n> That's probably overkill--it's a relatively small sequential-write\n> partition with really small writes; I don't see how pg_xlog would\n> benefit from raid10 as opposed to raid1.\n> Mike Stone\n> \n\npg_xlog benefits from being super fast. Because it has to be fully\nsynced before the rest of the data can be committed. Yes they are small,\nbut if you can make it fast, you eliminate that overhead. It also\nbenefits from having it's own spindle, because you eliminate the seek\ntime. (Since it is always appending)\n\nAnyway, my point is that pg_xlog isn't necessarily tiny. Many people\nseem to set it as high as 100-200, and each one is 16MB.\n\nBut one other thing to consider is to make pg_xlog on a battery backed\nramdisk. Because it really *can* use the extra speed. I can't say that a\nramdisk is more cost effective than faster db disks. But if you aren't\nusing many checkpoint_segments, it seems like you could get a 1GB\nramdisk, and probably have a pretty good performance boost. (I have not\ntested this personally, though).\n\nSince he is using the default settings (mostly) for dspam, he could\nprobably get away with something like a 256MB ramdisk.\n\nThe only prices I could find with a few minutes of googleing was:\nhttp://www.cenatek.com/store/category.cfm?Category=15\nWhich is $1.6k for 2GB.\n\nBut there is also a product that is being developed, which claims $60\nfor the PCI card, you supply the memory. It has 4 DDR slots\nhttp://www.engadget.com/entry/1234000227045399/\nAnd you can get a 128MB SDRAM ECC module for around $22\nhttp://www.newegg.com/Product/Product.asp?Item=N82E16820998004\nSo that would put the total cost of a 512MB battery backed ramdisk at\n$60 + 4*22 = $150.\n\nThat certainly seems less than what you would pay for the same speed in\nhard-drives.\nUnfortunately the Giga-byte iRam seems to just be in the demo stage. But\nif they aren't lying in the press releases, it would certainly be\nsomething to keep an eye on.\n\nJohn\n=:->",
"msg_date": "Mon, 06 Jun 2005 10:52:09 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
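Some rough sizing arithmetic for the ramdisk idea, hedged because it leans on the 8.0-era rule of thumb that pg_xlog normally holds no more than about 2 * checkpoint_segments + 1 segment files of 16 MB each:

    checkpoint_segments = 3 (default)  ->  (2*3 + 1) * 16 MB ~= 112 MB
    checkpoint_segments = 100          ->  (2*100 + 1) * 16 MB ~= 3.2 GB

So a 256 MB device is plausible for a near-default configuration, but not for the 100-200 segment settings mentioned above.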
{
"msg_contents": "On Monday 06 June 2005 15:08, John A Meinel wrote:\n> Be very careful in this situation. If any disks in a RAID0 fails, the\n> entire raid is lost. You *really* want a RAID10. It takes more drives,\n> but then if anything dies you don't lose everything.\n\nWe have redundancy at the machine level using DRBD, so this is not a concern.\n\n> I don't know if you can do it, but it would be nice to see this be 1\n> RAID1 for OS, 1 RAID10 for pg_xlog, and another RAID10 for data. That is\n> the recommended performance layout. It takes quite a few drives (minimum\n> of 10). But it means your data is safe, and your performance should be\n> very good.\n\nThe current servers have 4 drive bays, and we can't even afford to fill them \nall right now...we just invested what amounts to \"quite a lot\" on our budget \nfor these 2 servers, so replacing them is not an option at all right now.\n\nI think the most cost-effective road forward is to add 2 more drives to each \nof the existing servers (which currently have 2 each).\n\nCheers,\n-- \nCasey Allen Shobe | http://casey.shobe.info\[email protected] | cell 425-443-4653\nAIM & Yahoo: SomeLinuxGuy | ICQ: 1494523\nSeattleServer.com, Inc. | http://www.seattleserver.com\n",
"msg_date": "Mon, 6 Jun 2005 16:04:39 +0000",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
{
"msg_contents": "On Mon, Jun 06, 2005 at 10:52:09AM -0500, John A Meinel wrote:\n>pg_xlog benefits from being super fast. Because it has to be fully\n>synced before the rest of the data can be committed. Yes they are small,\n>but if you can make it fast, you eliminate that overhead. It also\n>benefits from having it's own spindle, because you eliminate the seek\n>time. (Since it is always appending)\n\nEliminating the seeks is definately a win. \n\n>Anyway, my point is that pg_xlog isn't necessarily tiny. Many people\n>seem to set it as high as 100-200, and each one is 16MB.\n\nIt's not the size of the xlog, it's the size of the write. Unless you're\nwriting out a stripe size of data at once you're only effectively\nwriting to one disk pair at a time anyway. (Things change if you have a\nbig NVRAM cache to aggregate the writes, but you'd need a *lot* of\ntransaction activity to exceed the 50MB/s or so you could get from the\nsingle raid1 pair in that scenario.)\n\nMike Stone\n",
"msg_date": "Mon, 06 Jun 2005 13:27:40 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent) (resolved)"
},
{
"msg_contents": "On Thu, 2 Jun 2005 06:19 am, Casey Allen Shobe wrote:\n> I found this response to my original post, and tried every single suggestion \n> in it, which has not helped:\n> \n> http://archives.postgresql.org/pgsql-performance/2004-11/msg00416.php\n> \n> I'm sorry to come begging for help, but this is a MAJOR problem with no \n> logical explanation, and is almost certainly the fault of PostgreSQL, because \n> the database and contents have been identical across all the hosts, and some \n> work beautifully with no tuning whatsoever; so I don't feel I'm wrong in \n> placing blame...\n\nI would personally strongly suggest turing on logging\non the PG server for about an hour, sifting through the runtimes for the queries and\nfinding which ones are taking all the time. I'd then run explain analyze and see what\nis happening. I have heard you could get much better performance by rewriting some of\nthe dspam queries to use PG features. But I've never used dspam, so I can't verify that.\n\nBut a quick look through the dspam pg driver source...\n\n /* Declare Cursor */\n#ifdef VIRTUAL_USERS\n strcpy (query, \"DECLARE dscursor CURSOR FOR SELECT DISTINCT username FROM dspam_virtual_uids\");\n#else\n strcpy (query, \"DECLARE dscursor CURSOR FOR SELECT DISTINCT uid FROM dspam_stats\");\n#endif\n\nIf that's run often, it probably won't give the best performance, but that's a guess.\nAgain I'd suggest turning up the logging.\n\n\n> \n> All machines run Gentoo Linux. All have the same package versions. Disk I/O \n> doesn't seem to be related - the 733MHz server had a 33MB/s IDE drive, the \n> 2.4GHz server had a RAID 5 with 3 ultra320 drives: neither of those required \n> any tuning. The new 3.0GHz has a mirror raid with 2 ultra320 drives, and the \n> 3000+ that tuning fixed had an ultra160 disk not in a RAID.\n> \n> I really like PostgreSQL, and really don't want to use MySQL for dspam, but if \n> I can't get this worked out ASAP I'm going to have to change for the sake of \n> our customers. Any help is GREATLY appreciated!\nAgain I'd suggest turning up the logging.\n\n\n> \n> I'm online on instant messengers (contact IDs shown below), monitoring my \n> email, and will be on #postgresql on Freenode.\n> \n> Cheers,\n",
"msg_date": "Tue, 7 Jun 2005 19:43:16 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance nightmare with dspam (urgent)"
}
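For 7.4/8.0, the logging Russell suggests can be switched on with something like the following in postgresql.conf - a sketch, and the 250 ms cutoff is an arbitrary starting point rather than a value recommended in the thread:

    log_min_duration_statement = 250   # log statements taking 250 ms or more, with their runtime
    stats_command_string = true        # lets pg_stat_activity show the currently running query

Reload the postmaster (pg_ctl reload; the stats setting may need a restart), collect an hour of traffic, pick out the worst statements, and run EXPLAIN ANALYZE on those.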
] |
[
{
"msg_contents": "Is it any way to attempt to force the planner to use some specific index\nwhile creating the plan? Other than eventually dropping all the other\nindices (which is obiously not a solution in production setting anyway)?\n\nI have one case where I have added 16 indices to a table, many of them\nbeeing partial indices. The table itself has only 50k of rows, but are\nfrequently used in heavy joins. I imagine there can be exponential order on\nthe number of alternative paths the planner must examinate as function of\nthe number of indices?\n\nIt seems to me that the planner is quite often not choosing the \"best\"\nindex, so I wonder if there is any easy way for me to check out what the\nplanner think about a specific index :-)\n\n-- \nTobias Brox, Beijing\n",
"msg_date": "Thu, 2 Jun 2005 10:05:28 +0800",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing use of specific index"
},
{
"msg_contents": "A pretty awful way is to mangle the sql statement so the other field \nlogical statements are like so:\n\nselect * from mytable where 0+field = 100\n\n\n\n\nTobias Brox wrote:\n> Is it any way to attempt to force the planner to use some specific index\n> while creating the plan? Other than eventually dropping all the other\n> indices (which is obiously not a solution in production setting anyway)?\n> \n> I have one case where I have added 16 indices to a table, many of them\n> beeing partial indices. The table itself has only 50k of rows, but are\n> frequently used in heavy joins. I imagine there can be exponential order on\n> the number of alternative paths the planner must examinate as function of\n> the number of indices?\n> \n> It seems to me that the planner is quite often not choosing the \"best\"\n> index, so I wonder if there is any easy way for me to check out what the\n> planner think about a specific index :-)\n> \n",
"msg_date": "Fri, 03 Jun 2005 16:22:49 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing use of specific index"
},
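Spelling the trick out - mytable and field are from the example above, while wanted_field is invented for illustration - the idea is to wrap the column whose index the planner should ignore in a no-op expression, so that index no longer matches the WHERE clause and the remaining index becomes the obvious candidate:

    -- planner may choose the index on mytable(field)
    SELECT * FROM mytable WHERE field = 100 AND wanted_field = 5;

    -- 0+field is an expression rather than a plain column reference, so the
    -- index on field cannot be used and the index on wanted_field is left
    SELECT * FROM mytable WHERE 0 + field = 100 AND wanted_field = 5;

For text columns the equivalent no-op is something like field || ''. As noted, it is an awful hack: it has to be revisited whenever the data distribution or the planner changes.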
{
"msg_contents": "HI all,\nI also would like to know if there is a way to force a use of a\nspecific index for a specific query. I am currently using Postgresql\n7.4.6\n\nIn my case I have a relatively big table (several millions of records)\nthat are frequently used to join with other tables (explicit join or\nthrough view).\nThe table has several indices, some are single column and some are multi column.\nSome queries are faster if using single colum index while other are\nfaster using multi column indexes.\nI have play around with SET STATISTICS, but it doesn't seem to make\nany differences (I tried to set it to 1000 one time, but still the\nsame). I did analyze and vacuum after SET STATISTICS.\nAny pointer on how to do this is greatly appreciated.\nThank you in advance,\n\n\nJ\n\n\n\nOn 6/1/05, Tobias Brox <[email protected]> wrote:\n> Is it any way to attempt to force the planner to use some specific index\n> while creating the plan? Other than eventually dropping all the other\n> indices (which is obiously not a solution in production setting anyway)?\n> \n> I have one case where I have added 16 indices to a table, many of them\n> beeing partial indices. The table itself has only 50k of rows, but are\n> frequently used in heavy joins. I imagine there can be exponential order on\n> the number of alternative paths the planner must examinate as function of\n> the number of indices?\n> \n> It seems to me that the planner is quite often not choosing the \"best\"\n> index, so I wonder if there is any easy way for me to check out what the\n> planner think about a specific index :-)\n> \n> --\n> Tobias Brox, Beijing\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Fri, 3 Jun 2005 18:52:27 -0700",
"msg_from": "Junaili Lie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing use of specific index"
},
{
"msg_contents": "\"Tobias Brox\" <[email protected]> writes\n> Is it any way to attempt to force the planner to use some specific index\n> while creating the plan? Other than eventually dropping all the other\n> indices (which is obiously not a solution in production setting anyway)?\n>\n\nI don't think currently PG supports this but \"SQL hints\" is planned ...\n\nRegards,\nQingqing\n\n\n",
"msg_date": "Sat, 4 Jun 2005 22:14:26 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing use of specific index"
}
] |
[
{
"msg_contents": "We're in the process of buying another Opteron server to run Postgres, and\nbased on the suggestions in this list I've asked our IT director to get an\nLSI MegaRaid controller rather than one of the Adaptecs.\n\nBut when we tried to place our order, our vendor (Penguin Computing) advised\nus:\n\n\"we find LSI does not work well with 4GB of RAM. Our engineering find that\nLSI card could cause system crashes. One of our customer ... has found that\nAdaptec cards works well on PostGres SQL -- they're using it as a preforce\nserver with xfs and post-gress.\"\n\nAny comments? Suggestions for other RAID controllers?\n\n",
"msg_date": "Wed, 1 Jun 2005 20:42:58 -0700",
"msg_from": "\"Stacy White\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adaptec/LSI/?? RAID"
},
{
"msg_contents": "\n\nStacy White presumably uttered the following on 06/01/05 23:42:\n> We're in the process of buying another Opteron server to run Postgres, and\n> based on the suggestions in this list I've asked our IT director to get an\n> LSI MegaRaid controller rather than one of the Adaptecs.\n> \n> But when we tried to place our order, our vendor (Penguin Computing) advised\n> us:\n> \n> \"we find LSI does not work well with 4GB of RAM. Our engineering find that\n> LSI card could cause system crashes. One of our customer ... has found that\n> Adaptec cards works well on PostGres SQL -- they're using it as a preforce\n> server with xfs and post-gress.\"\n> \n> Any comments? Suggestions for other RAID controllers?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n\nWe use the LSI MegaRaid 320-2x with the battery-backed cache on a dual \nopteron system that uses 8G of RAM. OS is FreeBSD amd64 (5.4) and runs \nwithout hesitation. Database currently over 100GB and it performs \nadmirably. So chalk one anecdotal item towards the LSI column. To be \nfair I have not tried an Adaptec card with this setup so I cannot \ncomment positively or negatively on that card. As a side note, we did \nhave issues with this setup with Linux (2.6 kernel - 64bit) and XFS file \nsystem (we generally use FreeBSD but I wanted to try other 64bit OSes \nbefore committing). Whether the linux issues were due to the LSI, \nmemory, Tyan mobo, or something else was never determined -- FreeBSD ran \nit and did so without flinching so our choice was easy.\n\nHTH\n\nSven\n",
"msg_date": "Thu, 02 Jun 2005 00:35:51 -0400",
"msg_from": "Sven Willenberger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "I've used LSI MegaRAIDs successfully in the following systems with both \nRedhat 9 and FC3 64bit.\n\nArima HDAMA/8GB RAM\nTyan S2850/4GB RAM\nTyan S2881/4GB RAM\n\nI've previously stayed away from Adaptec because we used to run Solaris \nx86 and the driver was somewhat buggy. For Linux and FreeBSD, I'd be \nless worried as open source development of drivers usually lead to \nbetter testing & bug-fixing.\n\n\nStacy White wrote:\n> We're in the process of buying another Opteron server to run Postgres, and\n> based on the suggestions in this list I've asked our IT director to get an\n> LSI MegaRaid controller rather than one of the Adaptecs.\n> \n> But when we tried to place our order, our vendor (Penguin Computing) advised\n> us:\n> \n> \"we find LSI does not work well with 4GB of RAM. Our engineering find that\n> LSI card could cause system crashes. One of our customer ... has found that\n> Adaptec cards works well on PostGres SQL -- they're using it as a preforce\n> server with xfs and post-gress.\"\n> \n> Any comments? Suggestions for other RAID controllers?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n",
"msg_date": "Wed, 01 Jun 2005 22:00:09 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "On Wed, 2005-06-01 at 20:42 -0700, Stacy White wrote:\n> We're in the process of buying another Opteron server to run Postgres, and\n> based on the suggestions in this list I've asked our IT director to get an\n> LSI MegaRaid controller rather than one of the Adaptecs.\n> \n> But when we tried to place our order, our vendor (Penguin Computing) advised\n> us:\n> \n> \"we find LSI does not work well with 4GB of RAM. Our engineering find that\n> LSI card could cause system crashes. One of our customer ... has found that\n> Adaptec cards works well on PostGres SQL -- they're using it as a preforce\n> server with xfs and post-gress.\"\n> \n> Any comments? Suggestions for other RAID controllers?\n\nHi,\n\nWe're using the Megaraid (Intel branded model) on a dual Opteron system\nwith 8G RAM very happily. The motherboard is a RioWorks one, the OS is\nDebian \"Sarge\" AMD64 with kernel 2.6.11.8 and PostgreSQL 7.4.7.\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n\n\n-------------------------------------------------------------------------",
"msg_date": "Thu, 02 Jun 2005 21:15:13 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "On Wed, 01 Jun 2005 22:00:09 -0700 William Yu <[email protected]> wrote:\n> I've previously stayed away from Adaptec because we used to run Solaris \n> x86 and the driver was somewhat buggy. For Linux and FreeBSD, I'd be \n> less worried as open source development of drivers usually lead to \n> better testing & bug-fixing.\n\nAdaptec is in the doghouse in some corners of the community because they\nhave behaved badly about releasing documentation on some of their\ncurrent RAID controllers to *BSD developers. FreeBSD has a not-quite-free\ndriver for those latest Adaptecs. OpenBSD wants nothing to do with them.\n\nrichard\n-- \nRichard Welty [email protected]\nAverill Park Networking\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n \"Well, if you're not going to expect unexpected flames,\n what's the point of going anywhere?\" -- Truckle the Uncivil\n",
"msg_date": "Thu, 2 Jun 2005 07:50:22 -0400 (EDT)",
"msg_from": "Richard Welty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "\nI've got a bunch of mission-critical Postgres servers on \nOpterons, all with no less than 4GB RAM, running Linux + \nXFS, and most with LSI MegaRAID cards. We've never had a \nsingle system crash or failure on our postgres servers, \nand some of them are well-used and with uptimes in excess \nof a year.\n\nIt may be anecdotal, but LSI MegaRAID cards generally seem \nto work pretty well with Linux. The only problem I've \never seen was a BIOS problem between the LSI and the \nmotherboard, which was solved by flashing the BIOS on the \nmotherboard with the latest version (it was grossly out of \ndate anyway).\n\n\nJ. Andrew Rogers\n",
"msg_date": "Thu, 02 Jun 2005 10:10:14 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "\n> It may be anecdotal, but LSI MegaRAID cards generally seem to work \n> pretty well with Linux. The only problem I've ever seen was a BIOS \n> problem between the LSI and the motherboard, which was solved by \n> flashing the BIOS on the motherboard with the latest version (it was \n> grossly out of date anyway).\n\nAt Command Prompt we have also had some great success with the LSI \ncards. The only thing we didn't like is the obscure way you have to \nconfigure RAID 10.\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> \n> J. Andrew Rogers\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Thu, 02 Jun 2005 10:42:46 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "On Jun 1, 2005, at 11:42 PM, Stacy White wrote:\n\n> \"we find LSI does not work well with 4GB of RAM. Our engineering \n> find that\n> LSI card could cause system crashes. One of our customer ... has \n> found that\n> Adaptec cards works well on PostGres SQL -- they're using it as a \n> preforce\n> server with xfs and post-gress.\"\n>\n> Any comments? Suggestions for other RAID controllers?\n>\n\nI have twin dual opteron, 4GB RAM, LSI MegaRAID-2X cards with 8 disks \n(2@RAID0 system+pg_xlog, 6@RAID10 data) running FreeBSD 5.4-RELEASE.\n\nWorks just perfectly fine under some very heavy insert/update/delete \nload. Database + indexes hovers at about 50Gb.\n\nI don't use the adaptec controllers because they don't support \nFreeBSD well (and vice versa) and the management tools are not there \nfor FreeBSD in a supported fashion like they are for LSI.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Thu, 2 Jun 2005 13:47:49 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "I have a similar question about what to choose (either LSI or Adaptec U320), but\nplan to use them just for JBOD drivers. I expect to be using either net or\nfreebsd. The system CPU will be Opteron. My impression is that both the ahd\nand mpt drivers (for U320 Adaptec and LSI, respectively) are quite stable, but\nnot from personal experience. Like I said, I don't plan to have the cards doing\nRAID in hardware. Should I be pretty safe with either choice of HBA then?\n\nThanks (and sorry for the semi-hijack).\n\n\nQuoting Vivek Khera <[email protected]>:\n\n> \n> On Jun 1, 2005, at 11:42 PM, Stacy White wrote:\n> \n> > \"we find LSI does not work well with 4GB of RAM. Our engineering \n> > find that\n> > LSI card could cause system crashes. One of our customer ... has \n> > found that\n> > Adaptec cards works well on PostGres SQL -- they're using it as a \n> > preforce\n> > server with xfs and post-gress.\"\n> >\n> > Any comments? Suggestions for other RAID controllers?\n> >\n> \n> I have twin dual opteron, 4GB RAM, LSI MegaRAID-2X cards with 8 disks \n> (2@RAID0 system+pg_xlog, 6@RAID10 data) running FreeBSD 5.4-RELEASE.\n> \n> Works just perfectly fine under some very heavy insert/update/delete \n> load. Database + indexes hovers at about 50Gb.\n> \n> I don't use the adaptec controllers because they don't support \n> FreeBSD well (and vice versa) and the management tools are not there \n> for FreeBSD in a supported fashion like they are for LSI.\n> \n> \n> Vivek Khera, Ph.D.\n> +1-301-869-4449 x806\n> \n> \n> \n\n\n",
"msg_date": "Thu, 2 Jun 2005 14:02:03 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID (what about JBOD?)"
},
{
"msg_contents": "On Thu, 2005-06-02 at 14:02 -0700, [email protected] wrote:\n> I have a similar question about what to choose (either LSI or Adaptec U320), but\n> plan to use them just for JBOD drivers. I expect to be using either net or\n> freebsd. The system CPU will be Opteron. My impression is that both the ahd\n> and mpt drivers (for U320 Adaptec and LSI, respectively) are quite stable, but\n> not from personal experience. Like I said, I don't plan to have the cards doing\n> RAID in hardware. Should I be pretty safe with either choice of HBA then?\n\nOn the machine I mentioned earlier in this thread we use the Megaraid\nfor JBOD, but the card setup to use the disks that way was somewhat\nconfusing, requiring us to configure logical drives that in fact matched\nthe physical ones. The card still wanted to write that information onto\nthe disks, reducing the total disk space available by some amount, but\nalso meaning that we were unable to migrate our system from a previous\nnon-RAID card cleanly.\n\nRegards,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\nWhereof one cannot speak, thereon one must remain silent. -- Wittgenstein\n-------------------------------------------------------------------------",
"msg_date": "Fri, 03 Jun 2005 11:19:28 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID (what about JBOD?)"
},
{
"msg_contents": "Thanks, Andrew. I expect to choose between HBAs with no RAID functionality or\nwith the option to completely bypass RAID functionality--meaning that I'll\nhopefully avoid the situation that you've described. I'm mostly curious as to\nwhether the driver problems described for U320 Adaptec RAID controllers also\napply to the regular SCSI drivers.\n\nThanks.\n\nQuoting Andrew McMillan <[email protected]>:\n\n> On Thu, 2005-06-02 at 14:02 -0700, [email protected] wrote:\n> > I have a similar question about what to choose (either LSI or Adaptec\n> U320), but\n> > plan to use them just for JBOD drivers. I expect to be using either net\n> or\n> > freebsd. The system CPU will be Opteron. My impression is that both the\n> ahd\n> > and mpt drivers (for U320 Adaptec and LSI, respectively) are quite stable,\n> but\n> > not from personal experience. Like I said, I don't plan to have the cards\n> doing\n> > RAID in hardware. Should I be pretty safe with either choice of HBA\n> then?\n> \n> On the machine I mentioned earlier in this thread we use the Megaraid\n> for JBOD, but the card setup to use the disks that way was somewhat\n> confusing, requiring us to configure logical drives that in fact matched\n> the physical ones. The card still wanted to write that information onto\n> the disks, reducing the total disk space available by some amount, but\n> also meaning that we were unable to migrate our system from a previous\n> non-RAID card cleanly.\n> \n> Regards,\n> \t\t\t\t\tAndrew.\n> \n> -------------------------------------------------------------------------\n> Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\n> WEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\n> DDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n> Whereof one cannot speak, thereon one must remain silent. -- Wittgenstein\n> -------------------------------------------------------------------------\n> \n> \n\n\n",
"msg_date": "Thu, 2 Jun 2005 17:30:30 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID (what about JBOD?)"
},
{
"msg_contents": "\nHello,\n\n\nI've split my data into daily tables to keep them at an acceptable size.\n\nNow I have quite complex queries which may be very long if I need to query a\nlarge number of daily tables.\n\n\nI've just made a first test which resulted in a query being 15KB big and\ncontaining 63 UNIONs.\n\nThe query plan in PGAdmin is about 100KB big with 800 lines :-)\n\n\nThe performance is not that bad, but I'm wondering if there are some\nPOSTGRES limitations I should take care of with this strategy.\n\n\nThanks,\n\nMarc\n\n-- \nGeschenkt: 3 Monate GMX ProMail gratis + 3 Ausgaben stern gratis\n++ Jetzt anmelden & testen ++ http://www.gmx.net/de/go/promail ++\n",
"msg_date": "Fri, 3 Jun 2005 08:35:28 +0200 (MEST)",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Query limitations (size, number of UNIONs ...)"
},
{
"msg_contents": "Stacy White wrote:\n\n> We're in the process of buying another Opteron server to run Postgres, and\n> based on the suggestions in this list I've asked our IT director to get an\n> LSI MegaRaid controller rather than one of the Adaptecs.\n> \n> But when we tried to place our order, our vendor (Penguin Computing) advised\n> \"we find LSI does not work well with 4GB of RAM. Our engineering find that\n> LSI card could cause system crashes. One of our customer ... has found that\n> Adaptec cards works well on PostGres SQL\n\nProbably, your vendor is trying to avoid problems at all, but\n\"one of our customers\" is not a pretty general case, and\n\"we find LSI does not work well\", but is there a documented reason?\n\nAnyway, my personal experience has been with an Acer Altos R701 + S300\nexternal storage unit, equipped with LSI Logic Megaraid U320 aka\nAMI Megaraid aka LSI Elite 1600\n(honestly, these cards come with zillions of names and subnames, that\nI don't know exactly how to call them).\n\nThis system was configured in various ways. The final layout is\n3 x RAID1 arrays (each of 2 disks) and 1 x RAID10 array (12 disks).\nThis configuration is only available when you use 2 LSI cards (one\nfor each S300 scsi bus).\n\nThe system behaves pretty well, with a sustained sequential write rate\nof 80Mb/s, and more importantly, a quite high load in our environment\nof 10 oltp transactions per second, without any problems and\n`cat /proc/loadavg` < 1.\n\nI don't like the raid configuration system of LSI, that is\ncounter-intuitive for raid 10 arrays. It got me 4 hours and\na tech support call to figure out how to do it right.\n\nAlso, I think LSI cards don't behave well with particular\nraid configurations, like RAID 0 with 4 disks, or RAID 10\nwith also 4 disks. It seemed that these configurations put\nthe controller under heavy load, thus behaving unreasonably\nworse than, for example, 6-disks-RAID0 or 6-disks-RAID1.\nSorry, I can't be more \"scientific\" on this.\n\nFor Adaptec, I don't have any direct experience.\n\n-- \nCosimo\n\n",
"msg_date": "Fri, 03 Jun 2005 09:37:17 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptec/LSI/?? RAID"
},
{
"msg_contents": "* Marc Mamin ([email protected]) wrote:\n> I've just made a first test wich resulted in a query being 15KB big annd\n> containing 63 UNION.\n\nIf the data is distinct from each other or you don't mind duplicate\nrecords you might try using 'union all' instead of 'union'. Just a\nthought.\n\n\tStephen",
"msg_date": "Fri, 3 Jun 2005 09:27:20 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query limitations (size, number of UNIONs ...)"
}
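As an illustration of Stephen's point, here is a minimal sketch assuming two hypothetical daily tables, events_20050601 and events_20050602, with identical column layouts (the real table and column names from Marc's schema are not shown in the thread). UNION must sort and de-duplicate the combined rows, while UNION ALL simply concatenates them:

    -- UNION removes duplicate rows, which adds a sort/unique step to the plan
    SELECT id, payload FROM events_20050601
    UNION
    SELECT id, payload FROM events_20050602;

    -- UNION ALL keeps every row and skips the de-duplication work entirely
    SELECT id, payload FROM events_20050601
    UNION ALL
    SELECT id, payload FROM events_20050602;

If rows cannot be duplicated across the daily tables, UNION ALL returns the same result with noticeably less work, which matters when 60+ branches are being combined.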
] |
[
{
"msg_contents": "My database has two scsi disks....\nmy current configuration has pg_xlog on disk1 and data on disk2....\nthe machine is used for the database only....\nnow did some logging and came to the conclusion that my disk2 (data disk) is getting used around 3 times more than disk1 (pg_xlog)....\n \nso what is recommended... move some of the data to disk1 so that both disks are equally used (by creating tablespaces), or leave my configuration as it currently is... iowait is one of the bottlenecks in my application performance.....\n \nthx\nHimanshu\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com",
"msg_date": "Thu, 2 Jun 2005 00:02:09 -0700 (PDT)",
"msg_from": "Himanshu Baweja <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving pg_xlog"
},
{
"msg_contents": "Himanshu Baweja <[email protected]> writes:\n> My database has two scsi disks....\n> my current configuration has pg_xlog on disk1 and data on disk2....\n> the machine is used for database only....\n> now did some logging and came to a conclusion that my disk2(data disk) is getting used around 3 times more than disk1(pg_xlog)....\n \n> so wht is recommended... move some of the data to disk1 so that both disks are equally used... by creating tablespaces or let my configuration be whts its currently... iowait is one of the bottlenecks in my application performance.....\n\nIt seems highly unlikely that putting more stuff on the xlog disk will\nimprove performance --- at least not if your bottleneck is update speed.\nIf it's a read-mostly workload then optimizing xlog writes may not be\nthe most important thing to you. In that case you might want to ignore\nxlog and try something along the lines of tables on one disk, indexes\non the other.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jun 2005 10:23:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving pg_xlog "
},
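A sketch of one way to act on Tom's tables-on-one-disk, indexes-on-the-other suggestion using 8.0 tablespaces; the paths and object names here are hypothetical and would need to match the real mount points and schema:

    -- create a tablespace on the disk that currently holds only pg_xlog
    CREATE TABLESPACE disk1_idx LOCATION '/disk1/pg_indexes';

    -- move an existing index onto it
    ALTER INDEX some_big_index SET TABLESPACE disk1_idx;

    -- or place new indexes there directly
    CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE disk1_idx;

This only works on 8.0 and later; on 7.x the rough equivalent was symlinking individual index files out of the data directory, which is far more fragile.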
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n>It seems highly unlikely that putting more stuff on the xlog disk will\n>improve performance --- at least not if your bottleneck is update speed.\n \nTom you are right.. i did some testing...\n1) default config--- xlog on disk1 and data on disk2=>\n 27 mins and 22 secs\n2) xlog and some tables on disk1 and rest of tables on disk2=>\n 28 mins and 38 secs\n \nbut the most startling of the results is....\n3) xlog on disk1 and half the tables on partition 1 of disk2 and other half on partition 2 of disk2\n 24 mins and 14 secs\n \n ??????????\nshouldnt moving data to diff partitions degrade performance instead of enhancing it....\n \nalso in configuration 1, my heap_blks_hit/heap_blks_fetched was good enough....\nbut in configuration 3, its was really low.. something of the order of 1/15...\nstill the performance improved....\n \nany ideas.....\ndoes moving across partitions help...\n \nRegards\nHimanshu\n \n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com",
"msg_date": "Thu, 2 Jun 2005 07:47:21 -0700 (PDT)",
"msg_from": "Himanshu Baweja <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving pg_xlog "
}
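The cache-hit figures Himanshu refers to come from the block-level statistics views; a quick way to watch them per table (this assumes stats_block_level = on in postgresql.conf, otherwise the counters stay at zero):

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;

(The stock view exposes heap_blks_read and heap_blks_hit rather than heap_blks_fetched; the ratio above is buffer-cache hits over total block requests.)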
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 1697\nLogged by: Bahadur Singh\nEmail address: [email protected]\nPostgreSQL version: 8.0\nOperating system: Windows 2000 server\nDescription: Select getting slower on continously updating data\nDetails: \n\nHello,\n\nI found situtation that, when I am selecting data from a table of 200\nrecords, getting slower as I do continous update to the same existing data.\n\n\n\nCREATE TABLE salesarticle\n(\n articlenumber char(20) NOT NULL,\n price int4 NOT NULL,\n eodid int4 NOT NULL,\n departmentnumber char(4) NOT NULL,\n keycounter int4 NOT NULL,\n scancounter int4 NOT NULL,\n grosssalescounter int8 NOT NULL,\n grosssalesamount int8 NOT NULL,\n discountcounter int8 NOT NULL,\n discountamount int8 NOT NULL,\n reductioncounter int8 NOT NULL,\n reductionamount int8 NOT NULL,\n transactioncounter int4 NOT NULL,\n promotionamount int8 NOT NULL,\n promotioncounter int8 NOT NULL,\n datelastsale char(14) NOT NULL,\n CONSTRAINT salesarticle_pkey PRIMARY KEY (articlenumber, price, eodid),\n CONSTRAINT salesarticle_eodid_fkey FOREIGN KEY (eodid) REFERENCES eodinfo\n(eodid) ON UPDATE NO ACTION ON DELETE NO ACTION\n) \nWITH OIDS;\n\nThis is my select statement:\n\nEXPLAIN ANALYZE\nSELECT ArticleNumber, Price, EodId FROM SalesArticle WHERE ArticleNumber IN\n(' 9502',\n' 9500',' 9501',' 9505',' \n 9506',' 9507',' 9515',\n' 9516',' 9518',' 9520',' \n 9472',' 9508',' 9546',\n' 3322',' 9521' ) AND EodId = 12\n\"Index Scan using salesarticle_pkey, salesarticle_pkey, salesarticle_pkey,\nsalesarticle_pkey, salesarticle_pkey, salesarticle_pkey, salesarticle_pkey,\nsalesarticle_pkey, salesarticle_pkey, salesarticle_pkey, salesarticle_pkey,\nsalesarticle_pkey, salesarticl (..)\"\n\" Index Cond: ((articlenumber = ' 9502'::bpchar) OR\n(articlenumber = ' 9500'::bpchar) OR (articlenumber = ' \n 9501'::bpchar) OR (articlenumber = ' 9505'::bpchar)\nOR (articlenumber = ' (..)\"\n\" Filter: (eodid = 12)\"\n\"Total runtime: 47.000 ms\"\n\nThe first iteration(400 times selects and update that selected data ) say\n400 are within 2 sec, then it keep on increasing at the end, it take 9\nseconds to execute 100 selects and updates on the database. No new records\nare added during this operation.\n\nperfromace of above select degrade as follows\n = 16 ms ==> yealds 1600 ms for 100 iteration.\n = 32 ms ==> yealds 3200 ms for 100 it...\n = 47 ms ==> yealds 4700 ms for 100 it...\n = 80 ms ==> yealds 80000 ms for 100 it...\n = 104 ms ==> yealds 10400 ms for 100 it...\n\nwhen I create an index on PK of this table, it boosts select performance to\n16 ms, but update stmts are slowing down. I do insert only once in begining\nand then update them continously as long I recieve same input data. (means\nno insert take place in between on this salesArticle table.)\n\nPlease advice me some solution or any trick.\n\n\nThanks in Advance,\nBahadur\n",
"msg_date": "Thu, 2 Jun 2005 12:05:00 +0100 (BST)",
"msg_from": "\"Bahadur Singh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #1697: Select getting slower on continously updating data"
},
{
"msg_contents": "This does not belong on the pgsql-bugs list. The pgsql-novice or\npgsql-performance lists seem more appropiate. I have set followups\nto the pgsql-novice list.\n\nOn Thu, Jun 02, 2005 at 12:05:00 +0100,\n Bahadur Singh <[email protected]> wrote:\n> \n> Hello,\n> \n> I found situtation that, when I am selecting data from a table of 200\n> records, getting slower as I do continous update to the same existing data.\n\nYou need to be vacuuming (and possibly analyzing) the table more often as the\nupdates will leave dead rows in the table which will bloat the table size and\nslow down access, particularly sequential scans. If the updates modify the\ndata value distributions significantly, then you will also need to\nreanalyze the table to help the planner make good decisions.\n",
"msg_date": "Thu, 2 Jun 2005 08:02:59 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #1697: Select getting slower on continously updating data"
},
{
"msg_contents": "\n\n--- Bruno Wolff III <[email protected]> wrote:\n\n> This does not belong on the pgsql-bugs list. The\n> pgsql-novice or\n> pgsql-performance lists seem more appropiate. I have\n> set followups\n> to the pgsql-novice list.\n> \n> On Thu, Jun 02, 2005 at 12:05:00 +0100,\n> Bahadur Singh <[email protected]> wrote:\n> > \n> > Hello,\n> > \n> > I found situtation that, when I am selecting data\n> from a table of 200\n> > records, getting slower as I do continous update\n> to the same existing data.\n> \n> You need to be vacuuming (and possibly analyzing)\n> the table more often as the\n> updates will leave dead rows in the table which will\n> bloat the table size and\n> slow down access, particularly sequential scans. If\n> the updates modify the\n> data value distributions significantly, then you\n> will also need to\n> reanalyze the table to help the planner make good\n> decisions.\n> \n\nMany thanks for this tip !\nBut is this good idea to analyse/vacuuming the\ndatabase tables while updates are taking place..\nSince, I update continuously say (100,000 ) times or\nmore the same data set.\n\nThis is the result of analyze command.\n\nINFO: analyzing \"public.salesarticle\"\nINFO: \"salesarticle\": scanned 3000 of 20850 pages,\ncontaining 62 live rows and 134938 dead rows; 62 rows\nin sample, 431 estimated total rows\n\nGesamtlaufzeit der Abfrage: 5531 ms.\nTotal Time Taken : 5531 ms.\n\nCan you suggest me some clever way to so, because I\nwould prefer to do vaccumming while database is not\nloaded with queries/transactions.\n\nRegards\nBahadur\n\n\n\n\t\t\n__________________________________ \nDiscover Yahoo! \nFind restaurants, movies, travel and more fun for the weekend. Check it out! \nhttp://discover.yahoo.com/weekend.html \n\n",
"msg_date": "Fri, 3 Jun 2005 00:09:00 -0700 (PDT)",
"msg_from": "Bahadur Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #1697: Select getting slower on continously updating data"
},
{
"msg_contents": "On Fri, Jun 03, 2005 at 00:09:00 -0700,\n Bahadur Singh <[email protected]> wrote:\n> \n> Many thanks for this tip !\n> But is this good idea to analyse/vacuuming the\n> database tables while updates are taking place..\n> Since, I update continuously say (100,000 ) times or\n> more the same data set.\n> \n> This is the result of analyze command.\n> \n> INFO: analyzing \"public.salesarticle\"\n> INFO: \"salesarticle\": scanned 3000 of 20850 pages,\n> containing 62 live rows and 134938 dead rows; 62 rows\n> in sample, 431 estimated total rows\n> \n> Gesamtlaufzeit der Abfrage: 5531 ms.\n> Total Time Taken : 5531 ms.\n> \n> Can you suggest me some clever way to so, because I\n> would prefer to do vaccumming while database is not\n> loaded with queries/transactions.\n\nWhile that may be a nice preference, under your usage pattern that does\nnot appear to be a good idea. As long as your disk I/O isn't saturated\nyou want to be running vacuums a lot more often than you are. (Analyze should\nonly be needed if the distrution of values is changing constantly. An example\nwould be timestamps indicating when an update occured.)\n",
"msg_date": "Fri, 3 Jun 2005 09:49:51 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #1697: Select getting slower on continously updating data"
}
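A concrete form of Bruno's advice, using the table name from the original report; a plain VACUUM (without FULL) does not take an exclusive lock, so it can run while the update workload continues:

    -- reclaim the dead row versions left behind by the updates
    VACUUM salesarticle;

    -- occasionally refresh planner statistics at the same time
    VACUUM ANALYZE salesarticle;

Scheduling this every few thousand updates, or running the contrib pg_autovacuum daemon that ships with 8.0, should keep the dead-row count (and therefore the scan cost) from growing without bound.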
] |
[
{
"msg_contents": " \nHow is it that the index scan has such poor performance? Shouldn't index\nlookups be quicker?\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, May 26, 2005 1:32 PM\nTo: Brad Might\nCc: [email protected]\nSubject: Re: [PERFORM] Specific query performance problem help requested\n- postgresql 7.4 \n\n\"Brad Might\" <[email protected]> writes:\n> Can someone help me break this down and figure out why the one query \n> takes so much longer than the other?\n\nIt looks to me like there's a correlation between filename and bucket,\nsuch that the indexscan in filename order takes much longer to run\nacross the first 25 rows with bucket = 3 than it does to run across the\nfirst 25 with bucket = 7 or bucket = 8. It's not just a matter of there\nbeing fewer rows with bucket = 3 ... the cost differential is much\nlarger than is explained by the count ratios. The bucket = 3 rows have\nto be lurking further to the back of the filename order than the others.\n\n> Here's the bucket distribution..i have clustered the index on the \n> bucket value.\n\nIf you have an index on bucket, it's not doing you any good here anyway,\nsince you wrote the constraint as a crosstype operator (\"3\" is int4 not\nint8). It might help to explicitly cast the constant to int8.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 2 Jun 2005 10:03:43 -0500",
"msg_from": "\"Brad Might\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specific query performance problem help requested - postgresql\n\t7.4"
}
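Tom's casting suggestion in concrete form. On 7.4 a comparison such as bucket = 3, where bucket is int8 and the unadorned literal is taken as int4, is a cross-type comparison that can keep the planner from using an index on bucket. The table name below is a guess; only the column names bucket and filename appear in the quoted discussion, and the point is purely the ::int8 cast on the constant:

    -- original form: int8 column compared against an int4 literal
    SELECT filename FROM files WHERE bucket = 3 ORDER BY filename LIMIT 25;

    -- cast the constant so the comparison is int8 = int8 and the index can be used
    SELECT filename FROM files WHERE bucket = 3::int8 ORDER BY filename LIMIT 25;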
] |
[
{
"msg_contents": "\nHi,\n\nI am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). \nThe schema consists of about 650 relations. One particular query (also generated \nautomatically) consists of left joining approximately 350 tables. At this \nstage, most tables are empty and those with values have less than 50 entries. \nThe query takes about 90 seconds to execute (on a P4, 2.6Ghz).\n\nAll of the relations have a primary key which is indexed and all of the joins \nare on foreign keys which are explicitly declared. I've checked the obvious \ntunables (effective_cache_size, shared_memory and sort_buffer) but changing \nthese has had no effect. The system has a total of 750MB RAM, I've varied \nthe shared memory up to 256MB and the sort buffer up to 128MB without affecting \nthe performance. \n\nRunning the query as a JDBC prepared statement indicates that the query optimiser \nis spending a negligable amount of time on the task (~ 36 ms) compared to \nthe executor (~ 90 seconds). The output of EXPLAIN indicates (AFAICT) that \nall of the joins are of type \"Nested Loop Left Join\" and all of the scans \nare of type \"Seq Scan\". I have refrained from posting the query and the \nquery plan since these are 80K and 100K apiece but if anyone wants to see \nthem I can certainly forward them on. \n\nMy (uninformed) suspicion is that the optimiser has failed over to the default \nplan on the basis of the number of tables in the join. My question is, is \nthere anyone out there using PostgreSQL with this size of schema? Is there \nanything that can be done to bring about the order of magnitude increase \nin speed that I need? \n\nThanks for your help,\n -phil \n\n\nI'm using Vodafone Mail - to get your free mobile email account go to http://www.vodafone.ie\nUse of Vodafone Mail is subject to Terms and Conditions http://www.vodafone.ie/terms/website\n\n\n",
"msg_date": "Thu, 2 Jun 2005 16:25:25 +0100 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan for very large number of joins"
},
{
"msg_contents": "[email protected] wrote:\n> Hi,\n> \n> I am using PostgreSQL (7.4) with a schema that was generated\n> automatically (using hibernate). The schema consists of about 650\n> relations. One particular query (also generated automatically)\n> consists of left joining approximately 350 tables.\n\nMay I be the first to offer an \"ouch\"!\n\n> At this stage, most tables are empty and those with values have less\n> than 50 entries. The query takes about 90 seconds to execute (on a\n> P4, 2.6Ghz).\n> \n> All of the relations have a primary key which is indexed and all of\n> the joins are on foreign keys which are explicitly declared. I've\n> checked the obvious tunables (effective_cache_size, shared_memory and\n> sort_buffer) but changing these has had no effect. The system has a\n> total of 750MB RAM, I've varied the shared memory up to 256MB and the\n> sort buffer up to 128MB without affecting the performance.\n\nThe sort-mem is the only thing I can see helping with a single query.\n\n> Running the query as a JDBC prepared statement indicates that the\n> query optimiser is spending a negligable amount of time on the task\n> (~ 36 ms) compared to the executor (~ 90 seconds). The output of\n> EXPLAIN indicates (AFAICT) that all of the joins are of type \"Nested\n> Loop Left Join\" and all of the scans are of type \"Seq Scan\". I have\n> refrained from posting the query and the query plan since these are\n> 80K and 100K apiece but if anyone wants to see them I can certainly\n> forward them on.\n\nWell, if most tables are small then a seq-scan makes sense. Does it look \nlike it's estimating the number of rows badly anywhere? I'm not sure the \nlist will accept attachments that large - is it possible to upload them \nsomewhere accessible?\n\n> My (uninformed) suspicion is that the optimiser has failed over to\n> the default plan on the basis of the number of tables in the join. My\n> question is, is there anyone out there using PostgreSQL with this\n> size of schema? Is there anything that can be done to bring about the\n> order of magnitude increase in speed that I need?\n\nWell - the genetic planner must surely be kicking in here (see the \nrun-time configuration chapter of the manuals, query-planning, \ngeqo_threshold). However, I'm not sure how much leeway there is in \nplanning a largely left-joined query.\n\nIt could be there's some overhead in the executor that's only noticable \nwith hundreds of tables involved, you're running at about 0.25 secs per \njoin.\n\nI take it you have no control over the schema or query, so there's not \nmuch fiddling you can do. You've tried sort_mem, so there are only two \nthings I can think of:\n1. Try the various enable_xxx config settings and see if disabling \nseq-scan or the relevant join-type does anything (I'm not sure it will)\n2. Try against 8.0 - there may be some improvement there.\n\nOther people on this list have experience on larger systems than me, so \nthey may be able to help more.\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 02 Jun 2005 17:02:28 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> [email protected] wrote:\n>> I am using PostgreSQL (7.4) with a schema that was generated\n>> automatically (using hibernate). The schema consists of about 650\n>> relations. One particular query (also generated automatically)\n>> consists of left joining approximately 350 tables.\n\n> May I be the first to offer an \"ouch\"!\n\nSeconded.\n\n> However, I'm not sure how much leeway there is in \n> planning a largely left-joined query.\n\nNot much. The best hope for a better result is to order the LEFT JOIN\nclauses in a way that will produce a good plan.\n\nOne thought is that I am not sure I believe the conclusion that planning\nis taking only 36 ms; even realizing that the exclusive use of left\njoins eliminates options for join order, there are still quite a lot of\nplans to consider. You should try both EXPLAIN and EXPLAIN ANALYZE\nfrom psql and see how long each takes. It'd also be interesting to keep\nan eye on how large the backend process grows while doing this --- maybe\nit's being driven into swap.\n\nAlso: I'm not sure there *is* such a thing as a good plan for a 350-way\njoin. It may be time to reconsider your data representation. If\nHibernate really forces this on you, it may be time to reconsider your\nchoice of tool.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Jun 2005 12:26:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins "
},
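A minimal sketch of the experiment Tom suggests, done per-session so nothing changes globally; the SELECT is a stand-in for the generated 350-table query:

    SET enable_mergejoin = off;   -- the planner ends up choosing nested loops anyway,
    SET enable_hashjoin = off;    -- so stop it from costing the other join methods
    EXPLAIN ANALYZE SELECT ... ;  -- the problem query goes here

    RESET enable_mergejoin;
    RESET enable_hashjoin;

If the total time drops sharply with these settings off, that confirms most of the 90 seconds is being spent comparing join strategies rather than executing the query.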
{
"msg_contents": "\n\n> I am using PostgreSQL (7.4) with a schema that was generated \n> automatically (using hibernate).\n> The schema consists of about 650 relations. One particular query (also \n> generated\n> automatically) consists of left joining approximately 350 tables. At this\n\n\tJust out of curiosity, what application is this ?\n\tAnd what are the reasons for so many tables ...and especially such a \nquery ?\n\tNot criticizing, but curious.\n",
"msg_date": "Thu, 02 Jun 2005 22:15:36 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Richard Huxton <[email protected]> writes:\n>>[email protected] wrote:\n>>>I am using PostgreSQL (7.4) with a schema that was generated\n>>>automatically (using hibernate). The schema consists of about 650\n>>>relations. One particular query (also generated automatically)\n>>>consists of left joining approximately 350 tables.\n>\n>>May I be the first to offer an \"ouch\"!\n>\n>Seconded.\n>\n>>However, I'm not sure how much leeway there is in \n>>planning a largely left-joined query.\n>\n>Not much. The best hope for a better result is to order the LEFT JOIN\n>clauses in a way that will produce a good plan.\n>\nIf this is the best way, you should consider using a native SQL query rather\nthan the Hibernate query language in this case. This is possible with Hibernate!\nI suppose you could also define a view in PostgreSQL and let Hibernate\nread from this view. This is also possible.\n\n>One thought is that I am not sure I believe the conclusion that planning\n>is taking only 36 ms; even realizing that the exclusive use of left\n>joins eliminates options for join order, there are still quite a lot of\n>plans to consider. You should try both EXPLAIN and EXPLAIN ANALYZE\n>from psql and see how long each takes. It'd also be interesting to keep\n>an eye on how large the backend process grows while doing this --- maybe\n>it's being driven into swap.\n>\n>Also: I'm not sure there *is* such a thing as a good plan for a 350-way\n>join. It may be time to reconsider your data representation. If\n>Hibernate really forces this on you, it may be time to reconsider your\n>choice of tool.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \nKind Regards / Viele Grüße\n\nSebastian Hennebrueder\n\n-----\nhttp://www.laliluna.de/tutorials.html\nTutorials for Java, Struts, JavaServer Faces, JSP, Hibernate, EJB and more.\n\n",
"msg_date": "Fri, 03 Jun 2005 00:23:55 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins"
}
] |
[
{
"msg_contents": "Hi All,\n\nWe are testing PostgreSQL 8.0.3 on MS Windows for porting an OLTP system \nfrom MS SqlServer.\n\nWe got a major performance issue which seems to boil down to the following \ntype of query:\n\nselect DISTINCT ON (PlayerID) PlayerID,AtDate from Player where \nPlayerID='22220' order by PlayerID desc, AtDate desc;\nThe Player table has primary key (PlayerID, AtDate) representing data over \ntime and the query gets the latest data for a player.\n\nWith enable_seqscan forced off (which I'm not sure if that should be done \nfor a production system), the average query still takes a very long time to \nreturn a record:\n\nesdt=> explain analyze select DISTINCT ON (PlayerID) PlayerID,AtDate from \nPlayer\n where PlayerID='22220' order by PlayerID desc, AtDate desc;\n Unique (cost=0.00..2507.66 rows=1 width=23) (actual time=0.000..187.000 \nrows=1 loops=1)\n -> Index Scan Backward using pk_player on player (cost=0.00..2505.55 \nrows=8\n43 width=23) (actual time=0.000..187.000 rows=1227 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 187.000 ms\n\nIt appears that all the 1227 data records for that player were searched, \neven when doing a backward index scan. I would presume that, after locating \nthe index for the highest AtDate, only the first data record needs to be \nretrieved.\n\nThe following summary of tests seems to confirm my observation. They were \ndone on a quiet system (MS Windows 2000 Server, P4 3.0GHz with \nHyperthreading, 1GB Memory, PostgreSQL shared_buffers = 50000), starting \nwith a test database before doing a vacuum:\n\nset enable_seqscan = off;\nselect\t\tTotal runtime: 187.000 ms\nagain:\t\tTotal runtime: 78.000 ms\nvacuum analyze verbose player;\nselect\t\tTotal runtime: 47.000 ms\nagain:\t\tTotal runtime: 47.000 ms\nreindex table player;\nselect\t\tTotal runtime: 78.000 ms\nagain:\t\tTotal runtime: 63.000 ms\ncluster pk_player on player;\nselect\t\tTotal runtime: 16.000 ms\nagain:\t\tTotal runtime: 0.000 ms\nset enable_seqscan = on;\nanalyze verbose player;\nselect\t\tTotal runtime: 62.000 ms\nagain:\t\tTotal runtime: 78.000 ms\n\nPreviously, we have also tried to use LIMIT 1 instead of DISTINCT, but the \nperformance was no better:\nselect PlayerID,AtDate from Player where PlayerID='22220' order by PlayerID \ndesc, AtDate desc LIMIT 1\n\nAny clue or suggestions would be most appreciated. If you need further info \nor the full explain logs, please let me know.\n\nRegards,\nKC.\n \n\n",
"msg_date": "Fri, 03 Jun 2005 09:56:49 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT DISTINCT Performance Issue"
},
{
"msg_contents": "\n\n> Previously, we have also tried to use LIMIT 1 instead of DISTINCT, but \n> the performance was no better:\n> select PlayerID,AtDate from Player where PlayerID='22220' order by \n> PlayerID desc, AtDate desc LIMIT 1\n\n\tThe DISTINCT query will pull out all the rows and keep only one, so the \none with LIMIT should be faster. Can you post explain analyze of the LIMIT \nquery ?\n",
"msg_date": "Mon, 06 Jun 2005 13:45:39 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT DISTINCT Performance Issue"
},
{
"msg_contents": "At 19:45 05/06/06, PFC wrote:\n\n\n>>Previously, we have also tried to use LIMIT 1 instead of DISTINCT, but\n>>the performance was no better:\n>>select PlayerID,AtDate from Player where PlayerID='22220' order by\n>>PlayerID desc, AtDate desc LIMIT 1\n>\n> The DISTINCT query will pull out all the rows and keep only one, \n> so the\n>one with LIMIT should be faster. Can you post explain analyze of the LIMIT\n>query ?\n\nActually the problem with LIMIT 1 query is when we use views with the LIMIT \n1 construct. The direct SQL is ok:\n\nesdt=> explain analyze select PlayerID,AtDate from Player where \nPlayerID='22220'\n order by PlayerID desc, AtDate desc LIMIT 1;\n\n Limit (cost=0.00..1.37 rows=1 width=23) (actual time=0.000..0.000 rows=1 \nloops\n=1)\n -> Index Scan Backward using pk_player on player (cost=0.00..16074.23 \nrows=\n11770 width=23) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 0.000 ms\n\nesdt=> create or replace view VCurPlayer3 as select * from Player a\nwhere AtDate = (select b.AtDate from Player b where a.PlayerID = b.PlayerID\norder by b.PlayerID desc, b.AtDate desc LIMIT 1);\n\nesdt=> explain analyze select PlayerID,AtDate,version from VCurPlayer3 \nwhere Pla\nyerID='22220';\n Index Scan using pk_player on player a (cost=0.00..33072.78 rows=59 \nwidth=27)\n(actual time=235.000..235.000 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = ((subplan))::text)\n SubPlan\n -> Limit (cost=0.00..1.44 rows=1 width=23) (actual \ntime=0.117..0.117 rows\n=1 loops=1743)\n -> Index Scan Backward using pk_player on player \nb (cost=0.00..1402\n3.67 rows=9727 width=23) (actual time=0.108..0.108 rows=1 loops=1743)\n Index Cond: (($0)::text = (playerid)::text)\n Total runtime: 235.000 ms\n\nThe problem appears to be in the loops=1743 scanning all 1743 data records \nfor that player.\n\nRegards, KC.\n\n\n",
"msg_date": "Mon, 06 Jun 2005 22:54:44 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT DISTINCT Performance Issue"
},
{
"msg_contents": "On 6/2/05, K C Lau <[email protected]> wrote:\n...\n> \n> select DISTINCT ON (PlayerID) PlayerID,AtDate from Player where\n> PlayerID='22220' order by PlayerID desc, AtDate desc;\n> The Player table has primary key (PlayerID, AtDate) representing data over\n> time and the query gets the latest data for a player.\n> \n>\n... \n> esdt=> explain analyze select DISTINCT ON (PlayerID) PlayerID,AtDate from\n> Player\n> where PlayerID='22220' order by PlayerID desc, AtDate desc;\n> Unique (cost=0.00..2507.66 rows=1 width=23) (actual time=0.000..187.000\n> rows=1 loops=1)\n> -> Index Scan Backward using pk_player on player (cost=0.00..2505.55\n> rows=8\n> 43 width=23) (actual time=0.000..187.000 rows=1227 loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Total runtime: 187.000 ms\n> \n\nIs PlayerID an integer datatype or a text datatype. It seems like\nPlayerID should be an integer data type, but postgres treats PlayerID\nas a text data type. This is because the value '22220' is quoted in\nyour query. Also, the explain analyze output shows \"Index Cond:\n((playerid)::text = '22220'::text\".\n\nGeorge Essig\n",
"msg_date": "Wed, 8 Jun 2005 08:34:29 -0500",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT DISTINCT Performance Issue"
},
{
"msg_contents": "Both keys are text fields. Does it make any difference if PlayerID were \ninteger?\n\nBTW, I think the real performance problem is when we use SELECT ... ORDER \nBY PlayerID DESC, AtDate DESC LIMIT 1 in a VIEW. Please see my subsequent \nemail http://archives.postgresql.org/pgsql-performance/2005-06/msg00110.php \non this show-stopper problem for which I still have no clue how to get \naround. Suggestions are much appreciated.\n\nThanks and regards, KC.\n\nAt 21:34 05/06/08, George Essig wrote:\n>On 6/2/05, K C Lau <[email protected]> wrote:\n>...\n> >\n> > select DISTINCT ON (PlayerID) PlayerID,AtDate from Player where\n> > PlayerID='22220' order by PlayerID desc, AtDate desc;\n> > The Player table has primary key (PlayerID, AtDate) representing data over\n> > time and the query gets the latest data for a player.\n> >\n> >\n>...\n> > esdt=> explain analyze select DISTINCT ON (PlayerID) PlayerID,AtDate from\n> > Player\n> > where PlayerID='22220' order by PlayerID desc, AtDate desc;\n> > Unique (cost=0.00..2507.66 rows=1 width=23) (actual time=0.000..187.000\n> > rows=1 loops=1)\n> > -> Index Scan Backward using pk_player on player (cost=0.00..2505.55\n> > rows=8\n> > 43 width=23) (actual time=0.000..187.000 rows=1227 loops=1)\n> > Index Cond: ((playerid)::text = '22220'::text)\n> > Total runtime: 187.000 ms\n> >\n>\n>Is PlayerID an integer datatype or a text datatype. It seems like\n>PlayerID should be an integer data type, but postgres treats PlayerID\n>as a text data type. This is because the value '22220' is quoted in\n>your query. Also, the explain analyze output shows \"Index Cond:\n>((playerid)::text = '22220'::text\".\n>\n>George Essig\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Wed, 08 Jun 2005 22:25:16 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT DISTINCT Performance Issue"
},
{
"msg_contents": "On 6/8/05, K C Lau <[email protected]> wrote:\n> Both keys are text fields. Does it make any difference if PlayerID were\n> integer?\n> \n\nIt can make a difference in speed and integrity. If the column is an\ninteger, the storage on disk could be smaller for the column and the\nrelated indexes. If the the column is an integer, it would not be\npossible to have a value like 'arbitrary value that looks nothing like\nan integer'.\n\nGeorge Essig\n",
"msg_date": "Wed, 8 Jun 2005 09:51:11 -0500",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT DISTINCT Performance Issue"
}
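If the PlayerID values really are all numeric, a sketch of the conversion George is hinting at (8.0 syntax; this rewrites the table and its indexes, and it assumes nothing else, such as foreign keys or views, depends on the column staying text):

    ALTER TABLE Player ALTER COLUMN PlayerID TYPE integer USING PlayerID::integer;

Afterwards the literal should be written unquoted (WHERE PlayerID = 22220) so both sides of the comparison are integers; whether that changes the plan for the problematic view is a separate question, but it removes the ::text comparisons visible in the EXPLAIN output.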
] |
[
{
"msg_contents": "Hi @ all,\n\ni have only a little question. Which filesystem is preferred for \npostgresql? I'm plan to use xfs (before i used reiserfs). The reason\nis the xfs_freeze Tool to make filesystem-snapshots. \n\nIs the performance better than reiserfs, is it reliable?\n\nbest regards,\nMartin\n\n",
"msg_date": "Fri, 3 Jun 2005 09:06:41 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Filesystem"
},
{
"msg_contents": "Martin Fandel wrote:\n> Hi @ all,\n> \n> i have only a little question. Which filesystem is preferred for \n> postgresql? I'm plan to use xfs (before i used reiserfs). The reason\n> is the xfs_freeze Tool to make filesystem-snapshots. \n> \n> Is the performance better than reiserfs, is it reliable?\n> \n\nI used postgresql with xfs on mandrake 9.0/9.1 a while ago - \nreliability was great, performance seemed better than ext3. I didn't \ncompare with reiserfs - the only time I have ever lost data from a Linux \nbox has been when I used reiserfs, hence I am not a fan :-(\n\nbest wishes\n\nMark\n",
"msg_date": "Fri, 03 Jun 2005 22:41:22 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "We have been using XFS for about 6 months now and it has even tolerated a \ncontroller card crash. So far we have mostly good things to report about \nXFS. I benchmarked raw throughputs at various stripe sizes, and XFS came out \non top for us against reiser and ext3. I also used it because of it's \nsupposed good support for large files, which was verified somewhat by the \nbenchmarks.\n\nI have noticed a problem though - if you have 800000 files in a directory, \nit seems that XFS chokes on simple operations like 'ls' or 'chmod -R ...' \nwhere ext3 doesn't, don't know about reiser, I went straight back to default \nafter that problem (that partition is not on a DB server though).\n\nAlex Turner\nnetEconomist\n\nOn 6/3/05, Martin Fandel <[email protected]> wrote:\n> \n> Hi @ all,\n> \n> i have only a little question. Which filesystem is preferred for\n> postgresql? I'm plan to use xfs (before i used reiserfs). The reason\n> is the xfs_freeze Tool to make filesystem-snapshots.\n> \n> Is the performance better than reiserfs, is it reliable?\n> \n> best regards,\n> Martin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Fri, 3 Jun 2005 09:18:10 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "Hi\n\ni have tested a xfs+LVM installation with the scalix (HP OpenMail)\nMailserver (it's a little time ago). I had at that time some problems \nusing xfs_freeze. I used a script for freezing the fs and making storing\nthe snapshots. Sometimes the complete Server hangs (no blinking cursor,\nno possible logins, no network). I don't know if it was a hardware\nproblem or if it was the xfs-software. I installed/compiled the newest \nkernel for this system (i think it was a 2.6.9) to check out if it's \nmaybe a kernel-problem. But on the next days, the system hangs\nagain. After that i used reiserfs again. \n\nI tested it with Suse Linux Enterprise Server 8.\n\nHas someone heared about such problems? That is the only reason that\ni have a bit fear to use xfs for a critical database :/. \n\nBest regards,\nMartin\n \nAm Freitag, den 03.06.2005, 09:18 -0400 schrieb Alex Turner:\n> We have been using XFS for about 6 months now and it has even\n> tolerated a controller card crash. So far we have mostly good things\n> to report about XFS. I benchmarked raw throughputs at various stripe\n> sizes, and XFS came out on top for us against reiser and ext3. I also\n> used it because of it's supposed good support for large files, which\n> was verified somewhat by the benchmarks.\n> \n> I have noticed a problem though - if you have 800000 files in a\n> directory, it seems that XFS chokes on simple operations like 'ls' or\n> 'chmod -R ...' where ext3 doesn't, don't know about reiser, I went\n> straight back to default after that problem (that partition is not on\n> a DB server though).\n> \n> Alex Turner\n> netEconomist\n> \n> On 6/3/05, Martin Fandel <[email protected]> wrote:\n> Hi @ all,\n> \n> i have only a little question. Which filesystem is preferred\n> for\n> postgresql? I'm plan to use xfs (before i used reiserfs). The\n> reason\n> is the xfs_freeze Tool to make filesystem-snapshots.\n> \n> Is the performance better than reiserfs, is it reliable? \n> \n> best regards,\n> Martin\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an\n> appropriate\n> subscribe-nomail command to [email protected] so\n> that your\n> message can get through to the mailing list cleanly\n> \n\n",
"msg_date": "Fri, 3 Jun 2005 15:52:00 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "On Fri, 3 Jun 2005 09:06:41 +0200\n \"Martin Fandel\" <[email protected]> wrote:\n> i have only a little question. Which filesystem is \n>preferred for postgresql? I'm plan to use xfs\n>(before i used reiserfs). The reason\n> is the xfs_freeze Tool to make filesystem-snapshots. \n\n\nXFS has worked great for us, and has been both reliable \nand fast. Zero problems and currently our standard server \nfilesystem. Reiser, on the other hand, has on rare \noccasion eaten itself on the few systems where someone was \nrunning a Reiser partition, though none were running \nPostgres at the time. We have deprecated the use of \nReiser on all systems where it is not already running.\n\nIn terms of performance for Postgres, the rumor is that \nXFS and JFS are at the top of the heap, definitely better \nthan ext3 and somewhat better than Reiser. I've never \nused JFS, but I've seen a few benchmarks that suggest it \nis at least as fast as XFS for Postgres.\n\nSince XFS is more mature than JFS on Linux, I go with XFS \nby default. If some tragically bad problems develop with \nXFS I may reconsider that position, but we've been very \nhappy with it so far. YMMV.\n\ncheers,\n\nJ. Andrew Rogers\n",
"msg_date": "Fri, 03 Jun 2005 10:18:45 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "Hi,\n\nI've installed the same installation of my reiser-fs-postgres-8.0.1\nwith xfs.\n\nNow my pgbench shows the following results:\n\npostgres@ramses:~> pgbench -h 127.0.0.1 -p 5432 -c150 -t5 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 150\nnumber of transactions per client: 5\nnumber of transactions actually processed: 750/750\ntps = 133.719348 (including connections establishing)\ntps = 151.670315 (excluding connections establishing)\n\nWith reiserfs my pgbench results are between 230-280 (excluding\nconnections establishing) and 199-230 (including connections \nestablishing). I'm using Suse Linux 9.3.\n\nI can't see better performance with xfs. :/ Must I enable special \nfstab-settings?\n\nBest regards,\nMartin\n\nAm Freitag, den 03.06.2005, 10:18 -0700 schrieb J. Andrew Rogers:\n> On Fri, 3 Jun 2005 09:06:41 +0200\n> \"Martin Fandel\" <[email protected]> wrote:\n> > i have only a little question. Which filesystem is \n> >preferred for postgresql? I'm plan to use xfs\n> >(before i used reiserfs). The reason\n> > is the xfs_freeze Tool to make filesystem-snapshots. \n> \n> \n> XFS has worked great for us, and has been both reliable \n> and fast. Zero problems and currently our standard server \n> filesystem. Reiser, on the other hand, has on rare \n> occasion eaten itself on the few systems where someone was \n> running a Reiser partition, though none were running \n> Postgres at the time. We have deprecated the use of \n> Reiser on all systems where it is not already running.\n> \n> In terms of performance for Postgres, the rumor is that \n> XFS and JFS are at the top of the heap, definitely better \n> than ext3 and somewhat better than Reiser. I've never \n> used JFS, but I've seen a few benchmarks that suggest it \n> is at least as fast as XFS for Postgres.\n> \n> Since XFS is more mature than JFS on Linux, I go with XFS \n> by default. If some tragically bad problems develop with \n> XFS I may reconsider that position, but we've been very \n> happy with it so far. YMMV.\n> \n> cheers,\n> \n> J. Andrew Rogers\n\n",
"msg_date": "Wed, 8 Jun 2005 09:36:31 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "On Wed, Jun 08, 2005 at 09:36:31AM +0200, Martin Fandel wrote:\n>I've installed the same installation of my reiser-fs-postgres-8.0.1\n>with xfs.\n\nDo you have pg_xlog on a seperate partition? I've noticed that ext2\nseems to have better performance than xfs for the pg_xlog workload (with\nall the syncs).\n\nMike Stone\n",
"msg_date": "Wed, 08 Jun 2005 08:10:10 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "Hi,\n\nah you're right. :) I forgot to symlink the pg_xlog-dir to another\npartition. Now it's a bit faster than before. But not faster than \nthe same installation with reiserfs:\n\npostgres@ramses:~> pgbench -h 127.0.0.1 -p 5432 -c150 -t5 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 150\nnumber of transactions per client: 5\nnumber of transactions actually processed: 750/750\ntps = 178.831543 (including connections establishing)\ntps = 213.931383 (excluding connections establishing)\n\nI've tested dump's and copy's with the xfs-installation. It's\nfaster than before. But the transactions-query's are still slower\nthan the reiserfs-installation.\n\nAre any fstab-/mount-options recommended for xfs?\n\nbest regards,\nMartin\n\nAm Mittwoch, den 08.06.2005, 08:10 -0400 schrieb Michael Stone:\n> [email protected]\n\n",
"msg_date": "Wed, 8 Jun 2005 14:25:52 +0200",
"msg_from": "\"Martin Fandel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filesystem"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMartin Fandel wrote:\n|\n| I've tested dump's and copy's with the xfs-installation. It's\n| faster than before. But the transactions-query's are still slower\n| than the reiserfs-installation.\n|\n| Are any fstab-/mount-options recommended for xfs?\n|\n\nHello, Martin.\n\nI'm afraid that, unless you planned for your typical workload and\ndatabase cluster configuration at filesystem creation time, there is not\nmuch you can do solely by using different mount options. As you don't\nmention how you configured the filesystem though, here are some thoughts\non that (everybody is more than welcome to comment on this, of course).\n\nDepending on the underlying array, the block size should be set as high\nas possible (page size) to get as close as possible to single stripe\nunit size, provided that the stripe unit is a multiple of block size.\nFor now, x86 unfortunately doesn't allow blocks of multiple pages (yet).\nIf possible, try to stick as close to the PostgreSQL page size as well,\nwhich is 8kB, if I recall correctly. 4k blocks may hence be a good\nchoice here.\n\nA higher allocation group count (agcount/agsize) allows for better\nparallelism when allocating blocks and inodes. From your perspective,\nthis may not necessarily be needed (or desired), as allocation and block\nreorganization may be \"implicitly forced\" to being performed internally\nas often as possible (depending on how frequently you run VACUUM FULL;\nif you can afford it, try running it as seldom as possible). What you\ndo want here though, is a high enough allocation group count to\nprevent one group from occupying too much of one single disk in the\narray, thus smothering other applicants trying to obtain an extent (this\nwould imply $agsize = ($desired_agsize - ($sunit * n)), where n <\n($swidth / $sunit)).\n\nIf the stripe unit for the underlying RAID device is x kB, the \"sunit\"\nsetting is (2 * x), as it is in 512-byte blocks (do not be misled by\nthe rather confusing manpage). If you have RAID10/RAID01 in place,\n\"swidth\" may be four times the size of \"sunit\", depending on how your\nRAID controller (or software driver) understands it (I'm not 100% sure\non this, comments, anyone?).\n\n\"unwritten\" (for unwritten extent markings) can be set to 0 if all of\nthe files are predominantly preallocated - again, if you VACUUM FULL\nextremely seldom, and delete/update a lot, this may be useful as it\nsaves I/O and CPU time. YMMV.\n\nInode size can be set using the \"size\" parameter set to maximum, which\nis currently 2048 bytes on x86, if you're using page-sized blocks. As\nthe filesystem will probably be rather big, as well as the files that\nlive on it, you probably won't be using much of it for inodes, so you\ncan set \"maxpct\" to a safe minimum of 1%, which would yield apprx.\n200,000 file slots in a 40GB filesystem (with inode size of 2kB).\n\nThe log can, of course, be either \"internal\", with a \"sunit\" that fits the\nlogical configuration of the array, or any other option, if you want to\nmove the book-keeping overhead away from your data. Do mind that typical\njournal size is usually rather small though, so you probably want to be\nusing one partitioned disk drive for a number of journals, especially\nsince there usually isn't much journalling to be done on a typical\ndatabase cluster filesystem (compared to, for example, a mail server).\n\nThe naming (a.k.a. directory) area of the filesystem is also rather poorly\nutilized, as there are few directories, and they only contain small\nnumbers of files, so you can try optimizing in this area too: \"size\" may\nbe set to maximum, 64k, although this probably doesn't buy you much\nbesides a couple of kilobytes' worth of space.\n\nNow finally, the most common options you could play with at mount time.\nThey would most probably include \"noatime\", as it is of course rather\nundesirable to update inodes upon each and every read access, attribute\nor directory lookup, etc. I would be surprised if you were running the\nfilesystem without noatime and without a good reason to do so. :) Do mind\nthat this is a generic option available for all filesystems that support\nthe atime attribute and is not xfs-specific in any way.\n\nAs for XFS, biosize=n can be used, where n = log2(${swidth} * ${sunit}),\nor a multiple thereof. This is, _if_ you planned for your workload by\nusing an array configuration and stripe sizes befitting of biosize, as\nwell as configuring the filesystem appropriately, the setting where you can\ngain by making the operating system cache in a slightly read-ahead manner.\n\nAnother useful option might be osyncisosync, which implements a true\nO_SYNC on files opened with that option, instead of the default Linux\nbehaviour where O_SYNC, O_DSYNC and O_RSYNC are synonymous. It may hurt\nyour performance though, so beware.\n\nIf you decided to externalize the log journal to another disk drive, and you\nhave several contenders for that storage unit, you may also want to\nrelease contention a bit by using larger logbufs and logbsize settings,\nto provide for more slack in the others when a particular one needs to spill\nbuffers to disk.\n\nAll of these ideas share one common thought: you can tune a filesystem\nso it helps in reducing the amount of iowait. The filesystem itself can\nhelp prevent unnecessary work performed by the disk and eliminate\ncontention for the bandwidth of the transport subsystem. This can be\nachieved by improving the internal organization of the filesystem to better\nsuit the requirements of a typical database workload, and by eliminating\nthe (undesired part of the) book-keeping work in your filesystem.\n\nHope to have helped.\n\nKind regards,\n- --\nGrega Bremec\ngregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFCpvdcfu4IwuB3+XoRAiQQAJ4rnnFYGW42U/SnYz4LGmgEsF0s1gCfXikL\nHT6EHWeTvQfd+s+9DkvOQpI=\n=V+E2\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 08 Jun 2005 15:49:16 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem"
}
] |
[
{
"msg_contents": "\n\n>>> I am using PostgreSQL (7.4) with a schema that was generated\n>>> automatically (using hibernate). The schema consists of about 650\n>>> relations. One particular query (also generated automatically)\n>>> consists of left joining approximately 350 tables.\n\n[snip]\n\n>One thought is that I am not sure I believe the conclusion that planning\n>is taking only 36 ms; even realizing that the exclusive use of left\n>joins eliminates options for join order, there are still quite a lot of\n>plans to consider. You should try both EXPLAIN and EXPLAIN ANALYZE\n>from psql and see how long each takes. It'd also be interesting to keep\n>an eye on how large the backend process grows while doing this --- maybe\n>it's being driven into swap.\n\n\nThanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN ANALYZE operations. \nBoth operations took 1m 37s. The analyze output indicates that the query \nexecution time was 950ms. This doesn't square with the JDBC prepareStatement \nexecuting in 36ms. My guess is that the prepare was actually a no-op but \nI haven't found anything about this yet. \n\nSo, is it correct to interpret this as the query planner taking an awful long \ntime? Is it possible to force the query planner to adopt a specific strategy \nand not search for alternatives (I'm aware of the noXX options, it's the \nreverse logic that I'm thinking of here). Alternatively, is there some way \nto check if the query planner is bottlenecking on a specific resource? \n\nFinally, PFC was asking about the nature of the application, it's not a \nspecific application just a generic bit of infrastructure consisting of \na transformation of the UBL schema. Despite being fairly restricted in scope, \nthe schema is highly denormalized hence the large number of tables. \n\nThanks for all your help.\n -phil\n\n\n\n\n\n\n\nI'm using Vodafone Mail - to get your free mobile email account go to http://www.vodafone.ie\nUse of Vodafone Mail is subject to Terms and Conditions http://www.vodafone.ie/terms/website\n\n\n",
"msg_date": "Fri, 3 Jun 2005 13:22:41 +0100 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for very large number of joins"
},
{
"msg_contents": "<[email protected]> writes:\n> Thanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN ANALYZE operations. \n> Both operations took 1m 37s. The analyze output indicates that the query \n> execution time was 950ms. This doesn't square with the JDBC prepareStatement \n> executing in 36ms. My guess is that the prepare was actually a no-op but \n> I haven't found anything about this yet. \n\nOnly in very recent JDBCs does prepareStatement do much of anything.\n\n> So, is it correct to interpret this as the query planner taking an\n> awful long time?\n\nLooks that way.\n\n> Is it possible to force the query planner to adopt a specific strategy \n> and not search for alternatives (I'm aware of the noXX options, it's the \n> reverse logic that I'm thinking of here).\n\nThere's no positive forcing method. But you could probably save some\ntime by disabling both mergejoin and hashjoin, now that you know it's\ngoing to end up picking nestloop for each join anyway. Even more\nimportant: are you sure that *every* one of the joins is a LEFT JOIN?\nEven a couple of regular joins will let it fool around choosing\ndifferent join orders.\n\n> Alternatively, is there some way to check if the query planner is\n> bottlenecking on a specific resource?\n\nI think it would be interesting to try profiling it. I'm not really\nexpecting to find anything easily-fixable, but you never know. From\nwhat you said before, the database is not all that large --- would\nyou be willing to send me a database dump and the text of the query\noff-list?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Jun 2005 09:15:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins "
},
{
"msg_contents": "On Fri, 2005-06-03 at 13:22 +0100, [email protected] wrote:\n> \n> >>> I am using PostgreSQL (7.4) with a schema that was generated\n> >>> automatically (using hibernate). The schema consists of about 650\n> >>> relations. One particular query (also generated automatically)\n> >>> consists of left joining approximately 350 tables.\n\n> Despite being fairly restricted in scope, \n> the schema is highly denormalized hence the large number of tables. \n\nDo you mean normalized? Or do you mean you've pushed the superclass\ndetails down onto each of the leaf classes?\n\nI guess I'm interested in what type of modelling led you to have so many\ntables in the first place?\n\nGotta say, never seen 350 table join before in a real app. \n\nWouldn't it be possible to smooth out the model and end up with less\ntables? Or simply break things up somewhere slightly down from the root\nof the class hierarchy?\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Sat, 04 Jun 2005 00:23:57 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for very large number of joins"
}
] |
[
{
"msg_contents": "\nAnyone following this thread might be interested to know that disabling \nthe merge and hash joins (as suggested below) resulted in the execution \ntime dropping from ~90 seconds to ~35 seconds. Disabling GEQO has brought \nabout a marginal reduction (~1 second, pretty much within the the margin \nof error)\n\nTom, a quick grep indicates that all of the joins are left joins so there's no \nscope for tweaking there. I'll send you the schema + query offlist, anyone \nelse curious about it, let me know. \n\nThanks again, \n -phil\n\n\n\n><[email protected]> writes:\n>> Thanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN \nANALYZE operations.\n>> Both operations took 1m 37s. The analyze output indicates that the query\n>> execution time was 950ms. This doesn't square with the JDBC prepareStatement\n>> executing in 36ms. My guess is that the prepare was actually a no-op \nbut\n>> I haven't found anything about this yet.\n>\n>Only in very recent JDBCs does prepareStatement do much of anything.\n>\n>> So, is it correct to interpret this as the query planner taking an\n>> awful long time?\n>\n>Looks that way.\n>\n>> Is it possible to force the query planner to adopt a specific strategy\n>> and not search for alternatives (I'm aware of the noXX options, it's \nthe\n>> reverse logic that I'm thinking of here).\n>\n>There's no positive forcing method. But you could probably save some\n>time by disabling both mergejoin and hashjoin, now that you know it's\n>going to end up picking nestloop for each join anyway. Even more\n>important: are you sure that *every* one of the joins is a LEFT JOIN?\n>Even a couple of regular joins will let it fool around choosing\n>different join orders.\n>\n>> Alternatively, is there some way to check if the query planner is\n>> bottlenecking on a specific resource?\n>\n>I think it would be interesting to try profiling it. I'm not really\n>expecting to find anything easily-fixable, but you never know. From\n>what you said before, the database is not all that large --- would\n>you be willing to send me a database dump and the text of the query\n>off-list?\n\n\n\nI'm using Vodafone Mail - to get your free mobile email account go to http://www.vodafone.ie\nUse of Vodafone Mail is subject to Terms and Conditions http://www.vodafone.ie/terms/website\n\n\n",
"msg_date": "Fri, 3 Jun 2005 15:24:02 +0100 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for very large number of joins"
}
] |
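A minimal SQL sketch of the session-level settings discussed in the thread above, for anyone wanting to reproduce the effect by hand. The tables order_hdr, order_line and line_note are placeholders standing in for the poster's generated 350-table LEFT JOIN, not anything from the actual schema:

SET enable_mergejoin = off;   -- the big left-join plan ends up as nested loops anyway
SET enable_hashjoin  = off;
SET geqo = off;               -- disabling GEQO gave only a marginal gain in the thread

EXPLAIN ANALYZE
SELECT *
  FROM order_hdr o
  LEFT JOIN order_line l ON l.order_id = o.id
  LEFT JOIN line_note  n ON n.line_id  = l.id;   -- stands in for the generated query

RESET enable_mergejoin;
RESET enable_hashjoin;
RESET geqo;

The SET/RESET pair keeps the change local to the session, so other queries are still planned with the normal set of join methods.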
[
{
"msg_contents": "<[email protected]> writes:\n> I've attached the schema and query text, hopefully it will be of some use \n> to you. Note that both are taken from the HyperUBL project\n> (https://hyperubl.dev.java.net/). Sadly, at this stage I think it's\n> time for me to try alternatives to either Hibernate or Postgresql. \n\nThanks. Profiling on 7.4 I get this for an EXPLAIN (after vacuum\nanalyzing the database):\n\n % cumulative self self total \n time seconds seconds calls Ks/call Ks/call name \n 61.66 618.81 618.81 2244505819 0.00 0.00 compare_path_costs\n 15.01 769.44 150.63 1204882 0.00 0.00 add_path\n 8.08 850.57 81.13 772077 0.00 0.00 nth\n 3.76 888.27 37.70 1113598 0.00 0.00 nconc\n 2.59 914.30 26.03 233051 0.00 0.00 find_joininfo_node\n 2.23 936.70 22.40 30659124 0.00 0.00 bms_equal\n 1.14 948.14 11.44 39823463 0.00 0.00 equal\n 0.77 955.84 7.70 83300 0.00 0.00 find_base_rel\n\nThis is with no special planner settings. Obviously the problem is that\nit's considering way too many different paths. We did do something\nabout that in 8.0 (basically, throw away paths with \"nearly the same\"\ncost) ... but the bottom line didn't improve a whole lot. CVS tip\nprofile for the same case is\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 38.37 176.41 176.41 53344348 0.00 0.00 list_nth_cell\n 35.26 338.52 162.11 196481 0.00 0.00 get_rte_attribute_is_dropped\n 5.42 363.44 24.92 233051 0.00 0.00 find_joininfo_node\n 4.72 385.14 21.70 30659416 0.00 0.00 bms_equal\n 4.09 403.95 18.81 53344348 0.00 0.00 list_nth\n 2.31 414.58 10.63 37347920 0.00 0.00 equal\n 1.40 421.03 6.45 83299 0.00 0.00 find_base_rel\n 1.08 426.01 4.98 617917 0.00 0.00 SearchCatCache\n 0.90 430.13 4.12 5771640 0.00 0.00 AllocSetAlloc\n\nThe get_rte_attribute_is_dropped calls (and list_nth/list_nth_cell,\nwhich are mostly being called from there) arise from a rather hastily\nadded patch that prevents failure when a JOIN list in a stored view\nrefers to a since-dropped column of an underlying relation. I had not\nrealized that that check could have O(N^2) behavior in deeply nested\njoins, but it does. Obviously we'll have to rethink that.\n\nAfter that it looks like the next hotspot is find_joininfo_node\n(and bms_equal which is mostly getting called from there). We could\nmaybe fix that by rethinking the joininfo data structure --- right now\nit's a collection of simple Lists, which betrays the planner's Lispy\nheritage ;-). Again, that's not something I've ever seen at the top\nof a profile before --- there may be some O(N^2) behavior involved\nhere too, but I've not analyzed it in detail.\n\nIt does look like 8.0 would be about a factor of 2 faster for you\nthan 7.4, but the real fix will probably have to wait for 8.1.\n\nAlso: the 8.0 problem is definitely an O(N^2) type of deal, which means\nif you could reduce the depth of nesting by a factor of 2 the cost would\ngo down 4x. You said this was an automatically generated query, so\nthere may not be much you can do about it, but if you could parenthesize\nthe FROM list a bit more intelligently the problem would virtually go\naway. What you have is effectively\n\n\tFROM ((((a left join b) left join c) left join d) ....\n\nso the nesting goes all the way down. With something like\n\n\tFROM ((a left join b) left join c ...)\n left join\n ((d left join e) left join f ...)\n\nthe max nesting depth would be halved. 
I don't understand your schema\nat all so I'm not sure what an appropriate nesting might look like, but\nmaybe there is a short-term workaround to be found there. (This will\n*not* help on 7.4, as the bottleneck there is completely different.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Jun 2005 13:29:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for very large number of joins"
}
] |
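To make the parenthesization suggestion concrete, here is a hedged sketch using placeholder tables a..f (not the UBL schema). The first form is the left-deep nesting a generator typically emits, where the depth grows with every join; the second joins two half-depth subtrees at the top, roughly halving the maximum nesting depth as described above:

-- left-deep: nesting depth grows with every join
SELECT *
  FROM ((((a LEFT JOIN b ON b.a_id = a.id)
           LEFT JOIN c ON c.a_id = a.id)
           LEFT JOIN d ON d.a_id = a.id)
           LEFT JOIN e ON e.a_id = a.id)
           LEFT JOIN f ON f.a_id = a.id;

-- balanced: two shallower subtrees joined once at the top
SELECT *
  FROM ((a LEFT JOIN b ON b.a_id = a.id)
         LEFT JOIN c ON c.a_id = a.id)
       LEFT JOIN
       ((d LEFT JOIN e ON e.d_id = d.id)
         LEFT JOIN f ON f.d_id = d.id)
       ON d.a_id = a.id;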
[
{
"msg_contents": "I have a small business client that cannot afford high-end/high quality\nRAID cards for their next server. That's a seperate argument/issue right\nthere for me, but what the client wants is what the client wants.\n\nHas anyone ran Postgres with software RAID or LVM on a production box?\nWhat have been your experience?\n\nI don't forsee more 10-15 concurrent sessions running for an their OLTP\napplication.\n\nThanks.\n\nSteve Poe\n\n",
"msg_date": "Fri, 03 Jun 2005 11:45:29 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql and Software RAID/LVM"
},
{
"msg_contents": "Steve Poe wrote:\n> I have a small business client that cannot afford high-end/high quality\n> RAID cards for their next server. That's a seperate argument/issue right\n> there for me, but what the client wants is what the client wants.\n> \n> Has anyone ran Postgres with software RAID or LVM on a production box?\n> What have been your experience?\n\nI would not run RAID + LVM in a software scenario. Software RAID is fine \nhowever.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> I don't forsee more 10-15 concurrent sessions running for an their OLTP\n> application.\n> \n> Thanks.\n> \n> Steve Poe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Fri, 03 Jun 2005 15:03:17 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and Software RAID/LVM"
},
{
"msg_contents": "On Fri, 2005-06-03 at 11:45 -0700, Steve Poe wrote:\n> I have a small business client that cannot afford high-end/high quality\n> RAID cards for their next server. That's a seperate argument/issue right\n> there for me, but what the client wants is what the client wants.\n> \n> Has anyone ran Postgres with software RAID or LVM on a production box?\n> What have been your experience?\n\nHi,\n\nWe regularly run LVM on top of software raid for our PostgreSQL servers\n(and our other servers, for that matter).\n\nAs far as I can see these systems have not had issues related to either\nsoftware RAID or LVM - that's around 30 systems all up, maybe 8 running\nPostgreSQL, in production. The database servers are a variety of\ndual-Xeon (older) and dual-Opteron (newer) systems.\n\nThe Xeons are all running Debian \"Woody\" with 2.4.xx kernels and the\nOpterons are all running \"Sarge\" with 2.6.x kernels.\n\nRegards,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n The secret of being a bore is to say everything -- Voltaire\n-------------------------------------------------------------------------",
"msg_date": "Mon, 06 Jun 2005 18:48:45 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and Software RAID/LVM"
}
] |
[
{
"msg_contents": "\nA consultant did a project for us and chose MySQL. We thought it was\ncool that MySQL was free.\n\nTurns out, MySQL costs over $500 (USD) if you are a commercial\norganization like us! Even worse, we have to formally transfer\nlicenses to customers and any further transfers must include\ninvolvement of the MySQL organization.\n\nSince we are a reputable organization, we diligently track the license\nnumbers- I make my mfg group log them, print them and include them in\nthe BOM of systems we ship. Occassionally, I audit them to make sure\nwe are staying legal. I spent many hours studying the MySQL license\nagreements, I found ambiguitites and questions and called their rep\nseveral times. As usual, licenses punish the honest people. What a\nPITA.\n\nThe cost for us to do that work and tracking is hard to measure but is\ncertainly not free.\n\nThis prompted me to look around and find another open source database\nthat did not go over to the dark side and turn greedy. Since Postgres\nhas true foreign key integrity enforcement and truly has a reputation\nfor being hardened and robust, it got our attention.\n\nWe are pretty close to choosing PostgreSQL 8.x. Since we know and use\nonly Windows, there's still some learning curve and pain we are going\nthrough. \n\nFortunately, there is a simple installer for windows. The PGAdmin tool\nthat comes with PG looks decent and a company named EMS makes a decent\nlooking tool for about $195.\n\nTrouble is, we are not DB admins. We're programmers who love and know\njava, JDBC and a few other languages.\n\nSo, our problem in installing is we don't know a cluster or SSL from a\nhole in the ground. Things get confusing about contexts- are we\ntalking about a user of the system or the database? Yikes, do I need\nto write down the 30+ character autogenerated password? \n\nWe just want to use JDBC, code SQL queries and essentially not care\nwhat database is under us. We would love to find a good tool that runs\nas an Eclipse plug-in that lets us define our database, generate a\nscript file to create it and perhaps also help us concoct queries.\n\nOur experience is that the many UNIX-ish thing about postgres are there\nand we don't know UNIX. This makes you realize how much you take for\ngranted about the OS you do know. Of course, we'll learn, but postgres\npeople, if you're listening: good job, now take us a little farther and\nwe will be your most ardent supporters.\n\n==Bill==\n\n\n\n--\nBill Ewing\n------------------------------------------------------------------------\nPosted via http://www.codecomments.com\n------------------------------------------------------------------------\n \n",
"msg_date": "Fri, 3 Jun 2005 15:01:29 -0500",
"msg_from": "Bill Ewing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "> So, our problem in installing is we don't know a cluster or SSL from a\n> hole in the ground. Things get confusing about contexts- are we\n> talking about a user of the system or the database? Yikes, do I need\n> to write down the 30+ character autogenerated password? \n\nNo you don't need to write it down :)\n\n> We just want to use JDBC, code SQL queries and essentially not care\n> what database is under us. We would love to find a good tool that runs\n> as an Eclipse plug-in that lets us define our database, generate a\n> script file to create it and perhaps also help us concoct queries.\n\nDunno if such a thing exists?\n\n> Our experience is that the many UNIX-ish thing about postgres are there\n> and we don't know UNIX. This makes you realize how much you take for\n> granted about the OS you do know. Of course, we'll learn, but postgres\n> people, if you're listening: good job, now take us a little farther and\n> we will be your most ardent supporters.\n\nJust ask questions on the lists, or get instant answers on #postgresql \non irc.freenode.org.\n\nCheers,\n\nChris\n",
"msg_date": "Mon, 06 Jun 2005 09:57:38 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "\n\n\n\tIf you want something more \"embedded\" in your application, you could \nconsider :\n\nhttp://firebird.sourceforge.net/\nhttp://hsqldb.sourceforge.net/\nhttp://sqlite.org/\n",
"msg_date": "Mon, 06 Jun 2005 13:54:03 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
[
{
"msg_contents": "Hi,\n\nI am having a problem with inserting a large amount of data with my libpqxx\nprogram into an initially empty database. It appears to be the EXACT same\nproblem discussed here:\n\nhttp://archives.postgresql.org/pgsql-bugs/2005-03/msg00183.php\n\nIn fact my situation is nearly identical, with roughly 5 major tables, with\nforeign keys between each other. All the tables are being loaded into\nsimiltaneously with about 2-3 million rows each. It seems that the problem\nis caused by the fact that I am using prepared statments, that cause the\nquery planner to choose sequential scans for the foreign key checks due to\nthe table being initially empty. As with the post above, if I dump my\nconnection after about 4000 inserts, and restablish it the inserts speed up\nby a couple of orders of magnitude and remain realtively constant through\nthe whole insertion.\n\nAt first I was using straight insert statments, and although they were a bit\nslower than the prepared statments(after the restablished connection) they\nnever ran into this problem with the database being initially empty. I only\nchanged to the prepared statements because it was suggested in the\ndocumentation for advice on bulk data loads =).\n\nI can work around this problem, and I am sure somebody is working on fixing\nthis, but I thought it might be good to reaffirm the problem.\n\nThanks,\nMorgan Kita\n\n\n",
"msg_date": "Fri, 3 Jun 2005 16:13:21 -0700",
"msg_from": "\"Morgan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert slow down on empty database"
},
{
"msg_contents": "In an attempt to throw the authorities off his trail, \"Morgan\" <[email protected]> transmitted:\n> At first I was using straight insert statments, and although they\n> were a bit slower than the prepared statments(after the restablished\n> connection) they never ran into this problem with the database being\n> initially empty. I only changed to the prepared statements because\n> it was suggested in the documentation for advice on bulk data loads\n> =).\n\nI remember encountering this with Oracle, and the answer being \"do\nsome loading 'til it slows down, then update statistics and restart.\"\n\nI don't know that there's an obvious alternative outside of perhaps\nsome variation on pg_autovacuum...\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://linuxdatabases.info/info/spreadsheets.html\nSo long and thanks for all the fish.\n",
"msg_date": "Fri, 03 Jun 2005 21:20:08 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert slow down on empty database"
}
] |
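A hedged sketch of one thing to try before resorting to dropping the connection: load an initial batch, refresh the statistics on the tables involved, and rebuild the application's own prepared statements against them. The table names (orders, customers) and the statement are placeholders, and the internally cached foreign-key-check plans may not be affected on 7.4/8.0, so treat this as an idea to test rather than a guaranteed fix:

-- first batch runs with the plans built while the tables were empty
PREPARE ins_order (int, int, text) AS
    INSERT INTO orders (id, customer_id, note) VALUES ($1, $2, $3);
-- ... EXECUTE ins_order(...) a few thousand times ...

-- refresh statistics so newly built plans see the real table sizes
ANALYZE orders;
ANALYZE customers;

-- rebuild the application's prepared statement against the new statistics
DEALLOCATE ins_order;
PREPARE ins_order (int, int, text) AS
    INSERT INTO orders (id, customer_id, note) VALUES ($1, $2, $3);

-- note: the plans used internally for the foreign key checks may still be the
-- ones built while the referenced tables were empty; in the thread above,
-- reconnecting was what cleared those.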
[
{
"msg_contents": "hi\nfirst let me draw the outline.\nwe have a database which stores \"adverts\".\neach advert is in one category, and one or more \"region\".\nregions and categories form (each) tree structure.\nassume category tree:\n\n a\n / \\\n b c\n / \\\n d e\n\nif any given advert is in category \"e\". it means it is also in \"b\" and\n\"a\".\nsame goes for regions.\n\nas for now we have approx. 400 categories, 1300 regions, and 1000000\nadverts.\n\nsince checking always over the tress of categories and regions we\ncreated acr_cache table (advert/category/region)\nwhich stores information on all adverts and all categories and regions\nthis particular region is in.\nplus some more information for sorting purposes.\n\nthis table is ~ 11 milion records.\n\nnow.\nwe query this in more or less this manner:\n\nselect advert_id from acr_cache where category_id = ? and region_id = ?\norder by XXX {asc|desc} limit 20;\n\nwhere XXX is one of 5 possible fields,\ntimestamp,\ntimestamp,\ntext,\ntext,\nnumeric\n\nwe created index on acr_cache (category_id, region_id) \nand it works rather well.\nusually.\nif a given \"crossing\" (category + region) has small amount of ads (less\nthen 10000) - the query is good enough (up to 300 miliseconds).\nbut when we enter the crossings which result in 50000 ads - the query\ntakes up to 10 seconds.\nwhich is almost \"forever\".\n\nwe thought about creating indices like this:\nindex on acr_cache (effective_date);\nwhere effective_dateis on of the timestamp fields.\nit worked well for the crossings with lots of ads, but when we asked for\nsmall crossing (like 1000 ads) it took > 120 seconds!\nit appears that postgresql was favorizing this new advert instead of\nusing much better index on category_id and region_id.\n\nactually - i'm not sure what to do next.\ni am even thinkinh about createing special indices (partial) for big\ncrossings, but that's just weird. plus the fact that already the\nacr_cache vacuum time exceeds 3 hours!.\n\n\nany suggestions?\nhardware is dual xeon 3 ghz, 4G ram, hardware scsi raid put into raid 1.\nsettings in postgresql.conf:\nlisten_addresses = '*'\nport = 5800\nmax_connections = 300\nsuperuser_reserved_connections = 50\nshared_buffers = 131072\nwork_mem = 4096\nmaintenance_work_mem = 65536\nfsync = false\ncommit_delay = 100\ncommit_siblings = 5\ncheckpoint_segments = 10\neffective_cache_size = 10000\nrandom_page_cost = 1.1\nlog_destination = 'stderr'\nredirect_stderr = true\nlog_directory = '/home/pgdba/logs'\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_truncate_on_rotation = false\nlog_rotation_age = 1440\nlog_rotation_size = 502400\nlog_min_duration_statement = -1\nlog_connections = true\nlog_duration = true\nlog_line_prefix = '[%t] [%p] <%u@%d> '\nlog_statement = 'all'\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\n\nactual max numer of connection is 120 plus some administrative connections (psql sessions).\npostgresql version 8.0.2 on linux debian sarge.\n\nbest regards,\n\ndepesz\n\n-- \nhubert lubaczewski\nNetwork Operations Center\neo Networks Sp. z o.o.",
"msg_date": "Sat, 4 Jun 2005 10:17:42 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "strategies for optimizing read on rather large tables"
},
{
"msg_contents": "Without reading too hard, I suggest having a quick look at contrib/ltree \nmodule in the PostgreSQL distribution. It may or may not help you.\n\nChris\n\nhubert lubaczewski wrote:\n> hi\n> first let me draw the outline.\n> we have a database which stores \"adverts\".\n> each advert is in one category, and one or more \"region\".\n> regions and categories form (each) tree structure.\n> assume category tree:\n> \n> a\n> / \\\n> b c\n> / \\\n> d e\n> \n> if any given advert is in category \"e\". it means it is also in \"b\" and\n> \"a\".\n> same goes for regions.\n> \n> as for now we have approx. 400 categories, 1300 regions, and 1000000\n> adverts.\n> \n> since checking always over the tress of categories and regions we\n> created acr_cache table (advert/category/region)\n> which stores information on all adverts and all categories and regions\n> this particular region is in.\n> plus some more information for sorting purposes.\n> \n> this table is ~ 11 milion records.\n> \n> now.\n> we query this in more or less this manner:\n> \n> select advert_id from acr_cache where category_id = ? and region_id = ?\n> order by XXX {asc|desc} limit 20;\n> \n> where XXX is one of 5 possible fields,\n> timestamp,\n> timestamp,\n> text,\n> text,\n> numeric\n> \n> we created index on acr_cache (category_id, region_id) \n> and it works rather well.\n> usually.\n> if a given \"crossing\" (category + region) has small amount of ads (less\n> then 10000) - the query is good enough (up to 300 miliseconds).\n> but when we enter the crossings which result in 50000 ads - the query\n> takes up to 10 seconds.\n> which is almost \"forever\".\n> \n> we thought about creating indices like this:\n> index on acr_cache (effective_date);\n> where effective_dateis on of the timestamp fields.\n> it worked well for the crossings with lots of ads, but when we asked for\n> small crossing (like 1000 ads) it took > 120 seconds!\n> it appears that postgresql was favorizing this new advert instead of\n> using much better index on category_id and region_id.\n> \n> actually - i'm not sure what to do next.\n> i am even thinkinh about createing special indices (partial) for big\n> crossings, but that's just weird. 
plus the fact that already the\n> acr_cache vacuum time exceeds 3 hours!.\n> \n> \n> any suggestions?\n> hardware is dual xeon 3 ghz, 4G ram, hardware scsi raid put into raid 1.\n> settings in postgresql.conf:\n> listen_addresses = '*'\n> port = 5800\n> max_connections = 300\n> superuser_reserved_connections = 50\n> shared_buffers = 131072\n> work_mem = 4096\n> maintenance_work_mem = 65536\n> fsync = false\n> commit_delay = 100\n> commit_siblings = 5\n> checkpoint_segments = 10\n> effective_cache_size = 10000\n> random_page_cost = 1.1\n> log_destination = 'stderr'\n> redirect_stderr = true\n> log_directory = '/home/pgdba/logs'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_truncate_on_rotation = false\n> log_rotation_age = 1440\n> log_rotation_size = 502400\n> log_min_duration_statement = -1\n> log_connections = true\n> log_duration = true\n> log_line_prefix = '[%t] [%p] <%u@%d> '\n> log_statement = 'all'\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n> lc_messages = 'en_US.UTF-8'\n> lc_monetary = 'en_US.UTF-8'\n> lc_numeric = 'en_US.UTF-8'\n> lc_time = 'en_US.UTF-8'\n> \n> actual max numer of connection is 120 plus some administrative connections (psql sessions).\n> postgresql version 8.0.2 on linux debian sarge.\n> \n> best regards,\n> \n> depesz\n> \n",
"msg_date": "Sat, 04 Jun 2005 19:17:17 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
},
{
"msg_contents": "\n\n> select advert_id from acr_cache where category_id = ? and region_id = ?\n> order by XXX {asc|desc} limit 20;\n>\n> where XXX is one of 5 possible fields,\n> timestamp,\n> timestamp,\n> text,\n> text,\n> numeric\n\n\tCreate 5 indexes on ( category_id, region_id, a field )\n\twhere \"a field\" is one of your 5 fields.\n\nThen write your query as :\n\nselect advert_id from acr_cache where category_id = ? and region_id = ?\norder by category_id, region_id, XXX limit 20;\n\nselect advert_id from acr_cache where category_id = ? and region_id = ?\norder by category_id desc, region_id desc, XXX desc limit 20;\n\nThis should put your query down to a millisecond. It will use the index \nfor the lookup, the sort and the limit, and hence only retrieve 20 rows \nfor the table. Downside is you have 5 indexes, but that's not so bad.\n\nIf your categories and regions form a tree, you should definitely use a \nltree datatype, which enables indexed operators like \"is contained in\" \nwhich would probably allow you to reduce the size of your cache table a \nlot.\n\n\n\n>\n> we created index on acr_cache (category_id, region_id)\n> and it works rather well.\n> usually.\n> if a given \"crossing\" (category + region) has small amount of ads (less\n> then 10000) - the query is good enough (up to 300 miliseconds).\n> but when we enter the crossings which result in 50000 ads - the query\n> takes up to 10 seconds.\n> which is almost \"forever\".\n>\n> we thought about creating indices like this:\n> index on acr_cache (effective_date);\n> where effective_dateis on of the timestamp fields.\n> it worked well for the crossings with lots of ads, but when we asked for\n> small crossing (like 1000 ads) it took > 120 seconds!\n> it appears that postgresql was favorizing this new advert instead of\n> using much better index on category_id and region_id.\n>\n> actually - i'm not sure what to do next.\n> i am even thinkinh about createing special indices (partial) for big\n> crossings, but that's just weird. plus the fact that already the\n> acr_cache vacuum time exceeds 3 hours!.\n>\n>\n> any suggestions?\n> hardware is dual xeon 3 ghz, 4G ram, hardware scsi raid put into raid 1.\n> settings in postgresql.conf:\n> listen_addresses = '*'\n> port = 5800\n> max_connections = 300\n> superuser_reserved_connections = 50\n> shared_buffers = 131072\n> work_mem = 4096\n> maintenance_work_mem = 65536\n> fsync = false\n> commit_delay = 100\n> commit_siblings = 5\n> checkpoint_segments = 10\n> effective_cache_size = 10000\n> random_page_cost = 1.1\n> log_destination = 'stderr'\n> redirect_stderr = true\n> log_directory = '/home/pgdba/logs'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_truncate_on_rotation = false\n> log_rotation_age = 1440\n> log_rotation_size = 502400\n> log_min_duration_statement = -1\n> log_connections = true\n> log_duration = true\n> log_line_prefix = '[%t] [%p] <%u@%d> '\n> log_statement = 'all'\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n> lc_messages = 'en_US.UTF-8'\n> lc_monetary = 'en_US.UTF-8'\n> lc_numeric = 'en_US.UTF-8'\n> lc_time = 'en_US.UTF-8'\n>\n> actual max numer of connection is 120 plus some administrative \n> connections (psql sessions).\n> postgresql version 8.0.2 on linux debian sarge.\n>\n> best regards,\n>\n> depesz\n>\n\n\n",
"msg_date": "Sat, 04 Jun 2005 13:18:04 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
},
{
"msg_contents": "On Sat, Jun 04, 2005 at 07:17:17PM +0800, Christopher Kings-Lynne wrote:\n> Without reading too hard, I suggest having a quick look at contrib/ltree \n> module in the PostgreSQL distribution. It may or may not help you.\n\nacr_cache doesn't care about trees. and - since i have acr_cache - i\ndont have to worry about trees when selecting from acr_cache.\n\nltree - is known to me. yet i decided not to use it to have the ability\nto move to another database engines without rewriting something that is\nhavily used.\n\ndepesz",
"msg_date": "Sat, 4 Jun 2005 13:40:23 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
},
{
"msg_contents": "On Sat, Jun 04, 2005 at 01:18:04PM +0200, PFC wrote:\n> Then write your query as :\n> select advert_id from acr_cache where category_id = ? and region_id = ?\n> order by category_id, region_id, XXX limit 20;\n\nthis is great idea - i'll check it out definitelly.\n\ndepesz",
"msg_date": "Sat, 4 Jun 2005 13:41:32 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
},
{
"msg_contents": "\n>> select advert_id from acr_cache where category_id = ? and region_id = ?\n>> order by category_id, region_id, XXX limit 20;\n\n\tdon't forget to mention all the index columns in the order by, or the \nplanner won't use it.\n",
"msg_date": "Sat, 04 Jun 2005 14:07:52 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
},
{
"msg_contents": "On Sat, Jun 04, 2005 at 02:07:52PM +0200, PFC wrote:\n> \tdon't forget to mention all the index columns in the order by, or \n> \tthe planner won't use it.\n\nof course.\ni understand the concept. actually i find kind of ashamed i did not try\nit before. \nanyway - thanks for great tip.\n\ndepesz",
"msg_date": "Sat, 4 Jun 2005 14:13:19 +0200",
"msg_from": "hubert lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strategies for optimizing read on rather large tables"
}
] |
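A hedged sketch of the index/ORDER BY pairing suggested in the thread above, using effective_date as the example sort column (one such index per sort field would be needed). The column names follow the thread; the literal values and index name are placeholders:

CREATE INDEX acr_cache_cat_reg_eff
    ON acr_cache (category_id, region_id, effective_date);

-- ascending
SELECT advert_id
  FROM acr_cache
 WHERE category_id = 42 AND region_id = 7
 ORDER BY category_id, region_id, effective_date
 LIMIT 20;

-- descending: every column in the ORDER BY must flip direction together,
-- otherwise the index cannot be walked backwards for this query
SELECT advert_id
  FROM acr_cache
 WHERE category_id = 42 AND region_id = 7
 ORDER BY category_id DESC, region_id DESC, effective_date DESC
 LIMIT 20;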
[
{
"msg_contents": ">> Despite being fairly restricted in scope,\n>> the schema is highly denormalized hence the large number of tables.\n>\n>Do you mean normalized? Or do you mean you've pushed the superclass\n>details down onto each of the leaf classes?\n\nSorry, I meant normalized, typing faster than I'm thinking here:) The schema \nwas generated by hyperjaxb, a combination of Hibernate and JAXB. This allows \none to go from XSD -> Object model -> Persistance in a single step. I'm \njust getting the hang of Hibernate so I don't know how flexible its' strategy \nis. Obviously though, the emphasis is on correctness first so while the \nsame result could possibly be achieved more quickly with many smaller queries, \nit probably considers that it's up to the DBMS to handle optimisation (not \nunreasonably either I guess) \n\nSince the entire process from the XSD onwards is automated, there's no scope \nfor tweaking either the OR mapping code or the DB schema itself except for \nisolated troubleshooting purposes. The XSD set in question is the UBL schema \npublished by OASIS which has about 650 relations, I thought it would be \nnice to have this as a standard component in future development. \n\nRegards,\n -phil\n\n\n>\n>I guess I'm interested in what type of modelling led you to have so many\n>tables in the first place?\n>\n>Gotta say, never seen 350 table join before in a real app.\n>\n>Wouldn't it be possible to smooth out the model and end up with less\n>tables? Or simply break things up somewhere slightly down from the root\n>of the class hierarchy?\n>\n>Best Regards, Simon Riggs\n\n\nI'm using Vodafone Mail - to get your free mobile email account go to http://www.vodafone.ie\nUse of Vodafone Mail is subject to Terms and Conditions http://www.vodafone.ie/terms/website\n\n\n",
"msg_date": "Sat, 4 Jun 2005 11:04:10 +0100 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for very large number of joins"
}
] |
[
{
"msg_contents": "Hi there,\n\nAnd sorry for bringing this up again, but I couldn't find any recent\ndiscussion on the best hardware, and I know it actually depends on what you\nare doing...\n\nSo this is what I had in mind:\n\nOur database is going to consist of about 100 tables or so of which only a\nhand full will be really big, say in the 100 of million rows, fully indexed\nand we are going to add a lot of entries (n* 100 000, n<100) on a daily\nbases (24/5). So from my experience with MySql I know that it is somewhat\nhard on the I/O, and that the speed of the head of the HD is actually\nlimitiing. Also, I only experimented with RAID5, and heard that RAID10 will\nbe good for reading but not writing.\n\nSo I wanted to go whith RAIDKing. They have a 16 bay Raid box that they fill\nwith Raptors (10krpm,73 GB, SATA), connected via FC. Now I am not sure what\nserver would be good or if I should go with redundant servers. Are Quad CPUs\nany good? I heard that the IBM quad system is supposed to be 40% faster than\nHP or Dell???. And how much RAM should go for: are 8GB enough? Oh, of course\nI wanted to run it under RedHat...\n\nI would appreciate any sugestions and comments or if you are too bored with\nthis topic, just send me a link where I can read up on this....\n\nThanks a lot for your kind replies.\n\nBernd\n\n\nBernd Jagla, PhD\nAssociate Research Scientist\nColumbia University\n \n\n",
"msg_date": "Sat, 4 Jun 2005 09:30:57 -0400",
"msg_from": "\"Bernd Jagla\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best hardware"
},
{
"msg_contents": "Quoting Bernd Jagla <[email protected]>:\n\n> ... the speed of the head of the HD is actually\n> limitiing. Also, I only experimented with RAID5, and heard that\n> RAID10 will be good for reading but not writing.\n\nAu contraire. RAID5 is worse than RAID10 for writing, because it has the\nextra implicit read (parity stripe) for every write. I've switched all\nmy perftest boxes over from RAID5 to RAID10, and the smallest\nperformance increase was x1.6 . This is in an update-intensive system;\nthe WAL log's disk write rate was the controlling factor.\n\n> Are Quad CPUs any good? I heard that the IBM quad system is supposed\nto be 40%\n> faster than HP or Dell???. \nCheck out the other threads for negative experiences with Xeon 2x2 and\nperhaps quad CPU's. Me, I'm looking forward to my first Opteron box\narriving next week.\n\n> And how much RAM should go for: are 8GB enough? Oh, of course I wanted\nto run it under RedHat...\n\nFirst off, you need enough RAM to hold all your connections. Run your\napp, watch the RSS column of \"ps\". For my own simpler apps (that pump\ndata into the db) I allow 20MB/connection.\n\nNext, if you are heavy on inserts, your tables will never fit in RAM,\nand you really just need enough to hold the top levels of the indexes.\nLook at the disk space used in your $PGDATA/base/<dboid>/<tableoid>\nfiles, and you can work out whether holding ALL your indexes in memory\nis feasible. \n\nIf you are heavy on updates, the above holds, but ymmv depending on\nlocality of reference, you have to run your own tests. \n\nIf you have concurrent big queries, all bets are off --- ask not how\nmuch RAM you need, but how much you can afford :-)\n\n\n",
"msg_date": "Sat, 4 Jun 2005 16:23:37 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best hardware"
},
{
"msg_contents": "Hello Bernd Jagla,\n\nAre you the Bernd from Berlin? I am looking for you and found your name on the internet. Would you please contact me?\n\nMirjam Tilstra\n--\nSent from the PostgreSQL - performance forum at Nabble.com:\nhttp://www.nabble.com/Best-hardware-t49131.html#a1797831\n\nHello Bernd Jagla,\n\nAre you the Bernd from Berlin? I am looking for you and found your name on the internet. Would you please contact me?\n\nMirjam Tilstra\n\nSent from the PostgreSQL - performance forum at Nabble.com:\nRe: Best hardware",
"msg_date": "Mon, 5 Dec 2005 08:24:06 -0800 (PST)",
"msg_from": "\"Mirjam (sent by Nabble.com)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best hardware"
}
] |
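For the "will my tables and indexes fit in RAM" question above, a hedged alternative to measuring files under $PGDATA by hand is to read the page counts that VACUUM/ANALYZE leave in pg_class. The figures are only as fresh as the last vacuum or analyze, and 8 kB pages are assumed:

SELECT relname,
       relkind,                        -- 'r' = table, 'i' = index
       relpages,
       relpages * 8 / 1024 AS approx_mb
  FROM pg_class
 WHERE relkind IN ('r', 'i')
 ORDER BY relpages DESC
 LIMIT 20;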
[
{
"msg_contents": "Re: your JDBC wishes: Consider IBM Cloudscape (now Apache Derby) too, \nwhich has an apache license. It's all pure java and it's easy to get going.\n\n\nAs to MySql vs Postgres: license issues aside, if you have \ntransactionally complex needs (multi-table updates, etc), PostgreSQL \nwins hands down in my experience. There are a bunch of things about \nMySQL that just suck for high end SQL needs. (I like my subqueries,\nand I absolutely demand transactional integrity).\n\nThere are some pitfalls to pgsql though, especially for existing SQL \ncode using MAX and some other things which can really be blindsided \n(performance-wise) by pgsql if you don't use the workarounds.\n\n\nMySQL is nice for what I call \"raw read speed\" applications. But that \nlicense is an issue for me, as it is for you apparently.\n\n\nSome cloudscape info:\nhttp://www-306.ibm.com/software/data/cloudscape/\n\nSome info on pitfalls of MySQL and PostgreSQL, an interesting contrast:\nhttp://sql-info.de/postgresql/postgres-gotchas.html\nhttp://sql-info.de/mysql/gotchas.html\n\n",
"msg_date": "Mon, 06 Jun 2005 11:51:22 -0400",
"msg_from": "Jeffrey Tenny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "[Jeffrey Tenny - Mon at 11:51:22AM -0400]\n> There are some pitfalls to pgsql though, especially for existing SQL \n> code using MAX and some other things which can really be blindsided \n> (performance-wise) by pgsql if you don't use the workarounds.\n\nYes, I discovered that - \"select max(num_attr)\" does a full table scan even\nif the figure can be found easily through an index.\n\nThere exists a workaround:\n\n select num_attr from my_table order by num_attr desc limit 1;\n\nwill find the number through the index.\n\n-- \nTobias Brox, Tallinn\n",
"msg_date": "Mon, 6 Jun 2005 20:25:08 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On Mon, Jun 06, 2005 at 08:25:08PM +0300, Tobias Brox wrote:\n> [Jeffrey Tenny - Mon at 11:51:22AM -0400]\n> > There are some pitfalls to pgsql though, especially for existing SQL \n> > code using MAX and some other things which can really be blindsided \n> > (performance-wise) by pgsql if you don't use the workarounds.\n> \n> Yes, I discovered that - \"select max(num_attr)\" does a full table scan even\n> if the figure can be found easily through an index.\n\nPostgreSQL 8.1 will be able to use indexes for MIN and MAX.\n\nhttp://archives.postgresql.org/pgsql-committers/2005-04/msg00163.php\nhttp://archives.postgresql.org/pgsql-committers/2005-04/msg00168.php\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Mon, 6 Jun 2005 12:33:40 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
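A hedged sketch of the MAX workaround mentioned above, reusing the thread's placeholder names my_table and num_attr; an ordinary b-tree index is what lets the ORDER BY ... LIMIT 1 form avoid the full table scan on releases before 8.1:

CREATE INDEX my_table_num_attr_idx ON my_table (num_attr);

-- instead of: SELECT max(num_attr) FROM my_table;   (sequential scan before 8.1)
SELECT num_attr FROM my_table ORDER BY num_attr DESC LIMIT 1;

-- MIN equivalent
SELECT num_attr FROM my_table ORDER BY num_attr ASC LIMIT 1;

-- if num_attr can be NULL, add WHERE num_attr IS NOT NULL to the DESC form,
-- since NULLs sort first in descending order and would be returned instead
-- of the true maximum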
[
{
"msg_contents": "Hi all,\n\nThanks for your replies. \n\nI ran a very prelimnary test, and found following results. I feel they are\nwierd and I dont know what I am doing wrong !!!\n\nI made a schema with 5 tables. I have a master data table with foreign keys\npointing to other 4 tables. Master data table has around 4 million records.\nWhen I run a select joining it with the baby tables, \n\npostgres -> returns results in 2.8 seconds\nmysql -> takes around 16 seconds !!!! (This is with myisam ... with innodb\nit takes 220 seconds)\n\nI am all for postgres at this point, however just want to know why I am\ngetting opposite results !!! Both DBs are on the same machine\n\nThanks,\nAmit\n\n-----Original Message-----\nFrom: Jeffrey Tenny [mailto:[email protected]]\nSent: Monday, June 06, 2005 11:51 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n\n\nRe: your JDBC wishes: Consider IBM Cloudscape (now Apache Derby) too, \nwhich has an apache license. It's all pure java and it's easy to get going.\n\n\nAs to MySql vs Postgres: license issues aside, if you have \ntransactionally complex needs (multi-table updates, etc), PostgreSQL \nwins hands down in my experience. There are a bunch of things about \nMySQL that just suck for high end SQL needs. (I like my subqueries,\nand I absolutely demand transactional integrity).\n\nThere are some pitfalls to pgsql though, especially for existing SQL \ncode using MAX and some other things which can really be blindsided \n(performance-wise) by pgsql if you don't use the workarounds.\n\n\nMySQL is nice for what I call \"raw read speed\" applications. But that \nlicense is an issue for me, as it is for you apparently.\n\n\nSome cloudscape info:\nhttp://www-306.ibm.com/software/data/cloudscape/\n\nSome info on pitfalls of MySQL and PostgreSQL, an interesting contrast:\nhttp://sql-info.de/postgresql/postgres-gotchas.html\nhttp://sql-info.de/mysql/gotchas.html\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Mon, 6 Jun 2005 12:00:08 -0400 ",
"msg_from": "Amit V Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On Mon, Jun 06, 2005 at 12:00:08PM -0400, Amit V Shah wrote:\n\n> I made a schema with 5 tables. I have a master data table with foreign keys\n> pointing to other 4 tables. Master data table has around 4 million records.\n> When I run a select joining it with the baby tables, \n> \n> postgres -> returns results in 2.8 seconds\n> mysql -> takes around 16 seconds !!!! (This is with myisam ... with innodb\n> it takes 220 seconds)\n\nPostgreSQL has an excellent query optimizer, so if you get a much better\nexecution time than MySQL in complex queries this isn't at all unexpected.\n\nI assume the MySQL guys would tell you to rewrite the queries in certain\nways to make it go faster (just like the Postgres guys tell people to\nrewrite certain things when they hit Postgres limitations.)\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"I would rather have GNU than GNOT.\" (ccchips, lwn.net/Articles/37595/)\n",
"msg_date": "Mon, 6 Jun 2005 12:15:37 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On Mon, 2005-06-06 at 12:00 -0400, Amit V Shah wrote:\n> Hi all,\n> \n> Thanks for your replies. \n> \n> I ran a very prelimnary test, and found following results. I feel they are\n> wierd and I dont know what I am doing wrong !!!\n> \n> I made a schema with 5 tables. I have a master data table with foreign keys\n> pointing to other 4 tables. Master data table has around 4 million records.\n> When I run a select joining it with the baby tables, \n> \n> postgres -> returns results in 2.8 seconds\n> mysql -> takes around 16 seconds !!!! (This is with myisam ... with innodb\n> it takes 220 seconds)\n\nWe said MySQL was faster for simple selects and non-transaction inserts\non a limited number of connections.\n\nAssuming you rebuilt statistics in MySQL (myisamchk -a), I would presume\nthat PostgreSQLs more mature optimizer has come into play in the above 5\ntable join test by finding a better (faster) way of executing the query.\n\nIf you post EXPLAIN ANALYZE output for the queries, we might be able to\ntell you what they did differently.\n\n> I am all for postgres at this point, however just want to know why I am\n> getting opposite results !!! Both DBs are on the same machine\n\nIf possible, it would be wise to run a performance test with the\nexpected load you will receive. If you expect to have 10 clients perform\noperation X at a time, then benchmark that specific scenario.\n\nBoth PostgreSQL and MySQL will perform differently in a typical real\nload situation than with a single user, single query situation.\n\n> -----Original Message-----\n> From: Jeffrey Tenny [mailto:[email protected]]\n> Sent: Monday, June 06, 2005 11:51 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n> \n> \n> Re: your JDBC wishes: Consider IBM Cloudscape (now Apache Derby) too, \n> which has an apache license. It's all pure java and it's easy to get going.\n> \n> \n> As to MySql vs Postgres: license issues aside, if you have \n> transactionally complex needs (multi-table updates, etc), PostgreSQL \n> wins hands down in my experience. There are a bunch of things about \n> MySQL that just suck for high end SQL needs. (I like my subqueries,\n> and I absolutely demand transactional integrity).\n> \n> There are some pitfalls to pgsql though, especially for existing SQL \n> code using MAX and some other things which can really be blindsided \n> (performance-wise) by pgsql if you don't use the workarounds.\n> \n> \n> MySQL is nice for what I call \"raw read speed\" applications. But that \n> license is an issue for me, as it is for you apparently.\n> \n> \n> Some cloudscape info:\n> http://www-306.ibm.com/software/data/cloudscape/\n> \n> Some info on pitfalls of MySQL and PostgreSQL, an interesting contrast:\n> http://sql-info.de/postgresql/postgres-gotchas.html\n> http://sql-info.de/mysql/gotchas.html\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n-- \n\n",
"msg_date": "Mon, 06 Jun 2005 12:22:29 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "\n> postgres -> returns results in 2.8 seconds\n\n\tWhat kind of plan does it do ? seq scan on the big tables and hash join \non the small tables ?\n\n> mysql -> takes around 16 seconds !!!! (This is with myisam ... with \n> innodb it takes 220 seconds)\n\n\tI'm not surprised at all.\n\tTry the same Join query but with a indexed where + order by / limit on \nthe big table and you should get even worse for MySQL.\n\tI found 3 tables in a join was the maximum the MySQL planner was able to \ncope with before blowing up just like you experienced.\n\n> I am all for postgres at this point, however just want to know why I am\n> getting opposite results !!! Both DBs are on the same machine\n\n\tWhy do you say \"opposite results\" ?\n",
"msg_date": "Mon, 06 Jun 2005 18:41:13 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
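What the list asked for at this point was the EXPLAIN ANALYZE output from both databases. A hedged sketch of the PostgreSQL side, with master_data and the dim_* tables standing in for the poster's 5-table schema (the real table and column names were never posted):

EXPLAIN ANALYZE
SELECT m.id, a.label, b.label, c.label, d.label
  FROM master_data m
  JOIN dim_a a ON a.id = m.a_id
  JOIN dim_b b ON b.id = m.b_id
  JOIN dim_c c ON c.id = m.c_id
  JOIN dim_d d ON d.id = m.d_id;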
[
{
"msg_contents": "> I am all for postgres at this point, however just want to know why I am\n> getting opposite results !!! Both DBs are on the same machine\n\n>\tWhy do you say \"opposite results\" ?\n\nPlease pardon my ignorance, but from whatever I had heard, mysql was\nsupposedly always faster than postgres !!!! Thats why I was so surprised !!\nI will definately post the \"analyze query\" thing by end of today ...\n\nThanks for all your helps !!\nAmit\n\n",
"msg_date": "Mon, 6 Jun 2005 12:45:51 -0400 ",
"msg_from": "Amit V Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "In the last exciting episode, [email protected] (Amit V Shah) wrote:\n>> I am all for postgres at this point, however just want to know why I am\n>> getting opposite results !!! Both DBs are on the same machine\n>\n>>\tWhy do you say \"opposite results\" ?\n>\n> Please pardon my ignorance, but from whatever I had heard, mysql was\n> supposedly always faster than postgres !!!! Thats why I was so\n> surprised !! I will definately post the \"analyze query\" thing by\n> end of today ...\n\nThere is a common \"use case\" where MySQL(tm) using the \"MyISAM\"\nstorage manager tends to be quicker than PostgreSQL, namely where you\nare submitting a lot of more-or-less serial requests of the form:\n\n select * from some_table where id='some primary key value';\n\nIf your usage patterns differ from that, then \"what you heard\" won't\nnecessarily apply to your usage.\n-- \noutput = (\"cbbrowne\" \"@\" \"acm.org\")\nhttp://linuxdatabases.info/info/rdbms.html\nThe difference between a child and a hacker is the amount he flames\nabout his toys. -- Ed Schwalenberg\n",
"msg_date": "Mon, 06 Jun 2005 13:01:23 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "\n\n> Please pardon my ignorance, but from whatever I had heard, mysql was\n> supposedly always faster than postgres !!!! Thats why I was so surprised \n> !!\n\n\tI heard a lot of this too, so much it seems common wisdom that postgres \nis slow... well maybe some old version was, but it's getting better at \nevery release, and the 8.0 really delivers... I get the feeling that the \nPG team is really working and delivering improvements every few months, \ncompare this to MySQL 5 which has been in beta for as long as I can \nremember.\n\tAlso, yes, definitely mysql is faster when doing simple selects like \nSELECT * FROM table WHERE id=constant, or on updates with few users, but \nonce you start digging... it can get a thousand times slower on some joins \njust because the optimizer is dumb... and then suddenly 0.2 ms for MySQL \nversus 0.3 ms for postgres on a simple query doesn't seem that attractive \nwhen it's 2 ms on postgres versus 2 seconds on mysql for a not so \ncomplicated one like pulling the first N rows from a join ordered by...\n\tPG is considered slower than mysql also because many people don't use \npersistent connections, and connecting postgres is a lot slower than \nconnecting MySQL... But well, persistent connections are easy to use and \nmandatory for performance on any database anyway so I don't understand why \nthe fuss.\n\n\n> I will definately post the \"analyze query\" thing by end of today ...\n>\n> Thanks for all your helps !!\n> Amit\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n\n\n",
"msg_date": "Mon, 06 Jun 2005 20:12:11 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On 6/6/2005 2:12 PM, PFC wrote:\n\n> \n>> Please pardon my ignorance, but from whatever I had heard, mysql was\n>> supposedly always faster than postgres !!!! Thats why I was so surprised \n>> !!\n> \n> \tI heard a lot of this too, so much it seems common wisdom that postgres \n> is slow... well maybe some old version was, but it's getting better at \n> every release, and the 8.0 really delivers...\n\nThe harder it is to evaluate software, the less often people reevaluate \nit and the more often people just \"copy\" opinions instead of doing an \nevaluation at all.\n\nToday there are a gazillion people out there who \"know\" that MySQL is \nfaster than PostgreSQL. They don't know under what circumstances it is, \nor what the word \"circumstances\" means in this context anyway. When you \nask them when was the last time they actually tested this you get in \nabout 99% of the cases an answer anywhere between 3 years and infinity \n(for all those who never did). The remaining 1% can then be reduced to \nan insignificant minority by asking how many concurrent users their test \nsimulated.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Mon, 06 Jun 2005 14:55:23 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Christopher Browne wrote:\n> \n> There is a common \"use case\" where MySQL(tm) ...\n> \n> select * from some_table where id='some primary key value';\n> \n> If your usage patterns differ from that...\n\nHowever this is a quite common use-case; and I wonder what the\nbest practices for postgresql is for applications like that.\n\nI'm guessing the answer is PGMemcache?\n(http://people.freebsd.org/~seanc/pgmemcache/pgmemcache.pdf)\n... with triggers and listen/notify to manage deletes&updates\nand tweaks to the application code to look to memcached for\nthose primary_key=constant queries?\n\nIf that is the answer, I'm curious if anyone's benchmarked\nor even has qualitative \"yeah, feels very fast\" results for\nsuch an application for the common mysql use case.\n",
"msg_date": "Mon, 06 Jun 2005 12:02:27 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
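As a rough sketch of the trigger-plus-LISTEN/NOTIFY idea raised above (purely illustrative -- the table, function and notification names are invented here, not taken from the thread):

-- Whenever a cached row changes, send a NOTIFY so a listening process
-- can evict the matching memcached entry.
CREATE OR REPLACE FUNCTION notify_cache_invalidate() RETURNS trigger AS '
BEGIN
    NOTIFY cache_invalidate;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
' LANGUAGE plpgsql;

CREATE TRIGGER some_table_cache_inval
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE notify_cache_invalidate();

-- The caching client issues LISTEN cache_invalidate; and clears the
-- affected keys when a notification arrives.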
{
"msg_contents": "I did my own evaluation a few months back, because postgres was not cutting\nit for me.\nI found that postgres 8.0 (was what I was using at the time, now on 8.0.2)\nout performed mysql on a optiplex with 2gig meg of memory. I had postgres\nand mysql loaded and would run one server at a time doing testing. \nMy tests included using aqua studios connection to both databases and .asp\npage using odbc connections. There was not a huge difference, but I had\nsignificant time in postgres and it was a little faster, so I just took new\napproaches (flattened views,eliminated outer joins etc) to fixing the\nissues.\n \nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jan Wieck\nSent: Monday, June 06, 2005 1:55 PM\nTo: PFC\nCc: Amit V Shah; [email protected]\nSubject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n\nOn 6/6/2005 2:12 PM, PFC wrote:\n\n> \n>> Please pardon my ignorance, but from whatever I had heard, mysql was\n>> supposedly always faster than postgres !!!! Thats why I was so surprised\n\n>> !!\n> \n> \tI heard a lot of this too, so much it seems common wisdom that\npostgres \n> is slow... well maybe some old version was, but it's getting better at \n> every release, and the 8.0 really delivers...\n\nThe harder it is to evaluate software, the less often people reevaluate \nit and the more often people just \"copy\" opinions instead of doing an \nevaluation at all.\n\nToday there are a gazillion people out there who \"know\" that MySQL is \nfaster than PostgreSQL. They don't know under what circumstances it is, \nor what the word \"circumstances\" means in this context anyway. When you \nask them when was the last time they actually tested this you get in \nabout 99% of the cases an answer anywhere between 3 years and infinity \n(for all those who never did). The remaining 1% can then be reduced to \nan insignificant minority by asking how many concurrent users their test \nsimulated.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Mon, 06 Jun 2005 15:12:50 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Ron Mayer <[email protected]> writes:\n> Christopher Browne wrote:\n>> There is a common \"use case\" where MySQL(tm) ...\n>> select * from some_table where id='some primary key value';\n\n> However this is a quite common use-case; and I wonder what the\n> best practices for postgresql is for applications like that.\n\nSetting up a prepared statement should be a noticeable win for that sort\nof thing. Also of course there are the usual tuning issues: have you\npicked an appropriate shared_buffers setting, etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jun 2005 00:54:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres "
},
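For what it's worth, a minimal sketch of that prepared-statement approach (table and column names are made up for the example; a PREPARE lives only for the life of the connection):

-- Prepare the hot primary-key lookup once per connection...
PREPARE get_item (int) AS
    SELECT * FROM some_table WHERE id = $1;

-- ...then each request skips the parse/plan overhead:
EXECUTE get_item(42);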
{
"msg_contents": "\n> My tests included using aqua studios connection to both databases and \n> .asp\n> page using odbc connections.\n\n\tPerformance also depends a lot on the driver.\n\tFor instance, the PHP driver for MySQL is very very fast. It is also very \ndumb, as it returns everything as a string and doesn't know about quoting.\n\tFor Python it's the reverse : the MySQL driver is slow and dumb, and the \npostgres driver (psycopg 2) is super fast, handles all quoting, and knows \nabout type conversions, it will automatically convert a Python List into a \npostgres Array and do the right thing with quoting, and it works both ways \n(ie you select a TEXT[] you get a list of strings all parsed for you). It \nknows about all the postgres types (yes even numeric <=> python Decimal) \nand you can even add your own types. That's really cool, plus the \ndeveloper is a friendly guy.\n\n------------------ in psql :\ntest=> CREATE TABLE typetests ( id SERIAL PRIMARY KEY, iarray INTEGER[] \nNULL, narray NUMERIC[] NULL, tarray TEXT[] NULL,vnum NUMERIC NULL, vint \nINTEGER NULL, vtext TEXT NULL) WITHOUT OIDS;\nNOTICE: CREATE TABLE will create implicit sequence \"typetests_id_seq\" for \nserial column \"typetests.id\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n\"typetests_pkey\" for table \"typetests\"\nCREATE TABLE\n\n------------------- in Python :\ndata = {\n\t'myiarray' : [1,5,8,6],\n\t'mytarray' : ['hello','world'],\n\t'mynarray' : [Decimal(\"1.23\"),Decimal(\"6.58\")],\n\t'mynum' : Decimal(\"66.66\"),\n\t'myint' : 555,\n\t'mytext' :u \"This is an Unicode String Портал по изучению иностранных\"\n}\ncursor.execute( \"\"\"INSERT INTO typetests \n(iarray,narray,tarray,vnum,vint,vtext)\n\tVALUES \n(%(myiarray)s,%(mynarray)s,%(mytarray)s,%(mynum)s,%(myint)s,%(mytext)s)\"\"\", \ndata );\n\n------------------ in psql :\ntest=> SELECT * FROM typetests;\n id | iarray | narray | tarray | vnum | vint | vtext\n----+-----------+-------------+---------------+-------+------+-----------\n 4 | {1,5,8,6} | {1.23,6.58} | {hello,world} | 66.66 | 555 | This is an \nUnicode String Портал по изучению иностранных\n(1 ligne)\n\n------------------- in Python :\n\ncursor.execute( \"SELECT * FROM typetests\" )\nfor row in cursor.fetchall():\n\tfor elem in row:\n\t\tprint type(elem), elem\n\n------------------- output :\n\n<type 'int'> 4\n<type 'list'> [1, 5, 8, 6]\n<type 'list'> [Decimal(\"1.23\"), Decimal(\"6.58\")]\n<type 'list'> ['hello', 'world']\n<class 'decimal.Decimal'> 66.66\n<type 'int'> 555\n<type 'str'> This is an Unicode String Портал по изучению иностранных\n\n------------------- in Python :\n\ncursor = db.cursor(cursor_factory = psycopg.extras.DictCursor)\ncursor.execute( \"SELECT * FROM typetests\" )\nfor row in cursor.fetchall():\n\tfor key, value in row.items():\n\t\tprint key, \":\", type(value), value\n\n------------------- output :\n\niarray : <type 'list'> [1, 5, 8, 6]\ntarray : <type 'list'> ['hello', 'world']\nvtext : <type 'str'> This is an Unicode String Портал по изучению \nиностранных\nid : <type 'int'> 4\nvnum : <class 'decimal.Decimal'> 66.66\nvint : <type 'int'> 555\nnarray : <type 'list'> [Decimal(\"1.23\"), Decimal(\"6.58\")]\n\n------------------- Timings :\n\nTime to execute SELECT * FROM typetests and fetch the results, including \ntype conversions :\n\nPlain query : 0.279 ms / request\nPrepared query : 0.252 ms / request\n\n(not that bad ! 
Pentium-M 1600 MHz laptop with local postgres).\n\nJust doing SELECT id FROM typetests gives 0.1 ms for executing query and \nfetching the result.\n\n\nWho said Postgres was slow on small queries ?\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 07 Jun 2005 14:53:54 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "* PFC <[email protected]> wrote:\n\n<snip>\n> \tFor Python it's the reverse : the MySQL driver is slow and dumb, \n> \tand the postgres driver (psycopg 2) is super fast, handles all quoting, \n> and knows about type conversions, it will automatically convert a \n> Python List into a postgres Array and do the right thing with quoting, \n> and it works both ways (ie you select a TEXT[] you get a list of \n> strings all parsed for you). It knows about all the postgres types (yes \n> even numeric <=> python Decimal) and you can even add your own types. \n> That's really cool, plus the developer is a friendly guy.\n\nIs there anything similar for java ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Fri, 8 Jul 2005 16:43:36 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "On Fri, 2005-07-08 at 16:43 +0200, Enrico Weigelt wrote:\n> * PFC <[email protected]> wrote:\n> \n> <snip>\n> > \tFor Python it's the reverse : the MySQL driver is slow and dumb, \n> > \tand the postgres driver (psycopg 2) is super fast, handles all quoting, \n> > and knows about type conversions, it will automatically convert a \n> > Python List into a postgres Array and do the right thing with quoting, \n> > and it works both ways (ie you select a TEXT[] you get a list of \n> > strings all parsed for you). It knows about all the postgres types (yes \n> > even numeric <=> python Decimal) and you can even add your own types. \n> > That's really cool, plus the developer is a friendly guy.\n> \n> Is there anything similar for java ?\n> \n\nThe postgres JDBC driver is very good-- refer to pgsql-jdbc mailing list\nor look at jdbc.postgresql.org. I've had only limited experience with\nthe mysql jdbc driver, but it seemed servicable enough, if you can live\nwith their licensing and feature set.\n\n\n\n",
"msg_date": "Fri, 08 Jul 2005 09:08:10 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Linux(Debian) + Java + PostgreSQL = Fastest\n\n2005/7/8, Mark Lewis <[email protected]>:\n> On Fri, 2005-07-08 at 16:43 +0200, Enrico Weigelt wrote:\n> > * PFC <[email protected]> wrote:\n> >\n> > <snip>\n> > > For Python it's the reverse : the MySQL driver is slow and dumb,\n> > > and the postgres driver (psycopg 2) is super fast, handles all quoting,\n> > > and knows about type conversions, it will automatically convert a\n> > > Python List into a postgres Array and do the right thing with quoting,\n> > > and it works both ways (ie you select a TEXT[] you get a list of\n> > > strings all parsed for you). It knows about all the postgres types (yes\n> > > even numeric <=> python Decimal) and you can even add your own types.\n> > > That's really cool, plus the developer is a friendly guy.\n> >\n> > Is there anything similar for java ?\n> >\n> \n> The postgres JDBC driver is very good-- refer to pgsql-jdbc mailing list\n> or look at jdbc.postgresql.org. I've had only limited experience with\n> the mysql jdbc driver, but it seemed servicable enough, if you can live\n> with their licensing and feature set.\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \nAtte\n\nMoises Alberto Lindo Gutarra\nConsultor y Desarrollador Java / Open Source\nTUMI Solutions SAC\nTel: +51.13481104\nCel: +51.197366260 \nMSN : [email protected]\n",
"msg_date": "Fri, 8 Jul 2005 11:22:38 -0500",
"msg_from": "Moises Alberto Lindo Gutarra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
[
{
"msg_contents": "HI!\n\nI have a table that I use for about a month. As the month progresses,\nCOPYs performed to this table get much much slower than they were at\nthe beginning, for the same number of rows (about 100,000 and\ngrowing).\n\nI'm essentially doing a delete for a given day, then a COPY as a big\ntransaction. This is done about 12 times a day.\n\nWhen the table is new it's very fast, towards the end of the month\nit's taking almost 10 times longer, yet I'm deleting and COPYing in\nthe same amount of data. Other operations on this table slow down,\ntoo, that were fast before using the same criteria.\n\nI do a VACUUM ANALYZE after each delete / COPY process, I tried\nexperimenting with CLUSTER but saw no real difference.\n\nthis is psql 7.45 on Linux server, dedicated for this purpose. About 5\nindexes, no FKs on this table.\n\nhappy to provide any other info might need, suggestions appreciated\n\nall my best,\nJone\n",
"msg_date": "Mon, 6 Jun 2005 09:48:26 -0700",
"msg_from": "Jone C <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow growing table"
},
{
"msg_contents": "On Mon, Jun 06, 2005 at 09:48:26AM -0700, Jone C wrote:\n> When the table is new it's very fast, towards the end of the month\n> it's taking almost 10 times longer, yet I'm deleting and COPYing in\n> the same amount of data. Other operations on this table slow down,\n> too, that were fast before using the same criteria.\n\nYou might have a problem with index bloat. Could you try REINDEXing the\nindexes on the table and see if that makes a difference?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 6 Jun 2005 19:00:37 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow growing table"
},
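One rough way to check for index bloat (the names below are placeholders): compare relpages for the table and its indexes before and after a REINDEX. relpages is counted in 8 kB blocks and is refreshed by VACUUM/ANALYZE and by the index rebuild itself.

SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('my_table', 'my_table_pkey');

REINDEX TABLE my_table;   -- note: holds an exclusive lock while it runs

SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('my_table', 'my_table_pkey');

If the index page counts drop sharply after the REINDEX, bloat was the problem.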
{
"msg_contents": "On Mon, Jun 06, 2005 at 07:00:37PM +0200, Steinar H. Gunderson wrote:\n> You might have a problem with index bloat. Could you try REINDEXing the\n> indexes on the table and see if that makes a difference?\n\nOn second thought... Does a VACUUM FULL help? If so, you might want to\nincrease your FSM settings.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 6 Jun 2005 19:12:53 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow growing table"
},
{
"msg_contents": "On Mon, 2005-06-06 at 09:48 -0700, Jone C wrote:\n> HI!\n> \n> I have a table that I use for about a month. As the month progresses,\n> COPYs performed to this table get much much slower than they were at\n> the beginning, for the same number of rows (about 100,000 and\n> growing).\n> \n> I'm essentially doing a delete for a given day, then a COPY as a big\n> transaction. This is done about 12 times a day.\n> \n> When the table is new it's very fast, towards the end of the month\n> it's taking almost 10 times longer, yet I'm deleting and COPYing in\n> the same amount of data. Other operations on this table slow down,\n> too, that were fast before using the same criteria.\n> \n> I do a VACUUM ANALYZE after each delete / COPY process, I tried\n> experimenting with CLUSTER but saw no real difference.\n> \n> this is psql 7.45 on Linux server, dedicated for this purpose. About 5\n> indexes, no FKs on this table.\n> \n> happy to provide any other info might need, suggestions appreciated\n> \n\nSearch the archives for details within 4 months of a similar issue.\n\nThe consensus was that this was because the indexes had become too big\nto fit in memory, hence the leap in response times.\n\nThe workaround is to split the table into smaller pieces.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 07 Jun 2005 23:21:48 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow growing table"
},
{
"msg_contents": "> On second thought... Does a VACUUM FULL help? If so, you might want to\n> increase your FSM settings.\n\nThank you for the reply, sorry for delay I was on holiday.\n\nI tried that it had no effect. I benchmarked 2x before, peformed\nVACUUM FULL on the table in question post inserts, then benchmarked 2x\nafter. Same results...\n\nShould I try your suggestion on deleting the indexes? This table needs\nto be accessible for reads at all times however though...\n\nthank you kindly\n\n\nOn 6/6/05, Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Jun 06, 2005 at 07:00:37PM +0200, Steinar H. Gunderson wrote:\n> > You might have a problem with index bloat. Could you try REINDEXing the\n> > indexes on the table and see if that makes a difference?\n> \n\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Tue, 21 Jun 2005 08:05:17 -0700",
"msg_from": "Jone C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow growing table"
},
{
"msg_contents": "Jone C wrote:\n\n>>On second thought... Does a VACUUM FULL help? If so, you might want to\n>>increase your FSM settings.\n>>\n>>\n>\n>Thank you for the reply, sorry for delay I was on holiday.\n>\n>I tried that it had no effect. I benchmarked 2x before, peformed\n>VACUUM FULL on the table in question post inserts, then benchmarked 2x\n>after. Same results...\n>\n>Should I try your suggestion on deleting the indexes? This table needs\n>to be accessible for reads at all times however though...\n>\n>thank you kindly\n>\n>\n\nI believe dropping an index inside a transaction is only visible to that\ntransaction. (Can someone back me up on this?)\nWhich means if you did:\n\nBEGIN;\nDROP INDEX <index in question>;\nCREATE INDEX <same index> ON <same stuff>;\nCOMMIT;\n\nThe only problem is that if you are using a unique or primary key index,\na foreign key which is referencing that index would have to be dropped\nand re-created as well. So you could have a pretty major cascade effect.\n\nA better thing to do if your table only has one (or at least only a few)\nindexes, would be to CLUSTER, which is effectively a VACUUM FULL + a\nREINDEX (plus sorting the rows so that they are in index order). It\nholds a full lock on the table, and takes a while, but when you are\ndone, things are cleaned up quite a bit.\n\nYou might also try just a REINDEX on the indexes in question, but this\nalso holds a full lock on the table. (My DROP + CREATE might also as\nwell, I'm not really sure, I just think of it as a way to recreate\nwithout losing it for other transactions)\n\nJohn\n=:->",
"msg_date": "Tue, 21 Jun 2005 10:19:49 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow growing table"
}
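A sketch of that CLUSTER approach, with invented table and index names (using the syntax of the 7.4/8.0 releases discussed in this thread):

-- Rewrites the table in the order of the given index and rebuilds
-- its indexes; the table is exclusively locked for the duration.
CLUSTER my_table_pkey ON my_table;

-- Later maintenance runs can simply repeat the previous clustering:
CLUSTER my_table;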
] |
[
{
"msg_contents": "I'm not sure if this is the appropriate list to post this question to\nbut i'm starting with this one because it is related to the performance\nof Postgresql server. I have a Penguin Computing dual AMD 64 bit\nopteron machine with 8 Gigs of memory. In my attempt to increase the\nnumber of shared_buffers from the default to 65000 i was running into a\nsemget error when trying to start Postgresql. After reading the\ndocumentation I adjusted the semaphore settings in the kernel to allow\nPostgresql to start successfully. With this configuration running if I\ndo a ipcs -u i get the following.\n\n------ Shared Memory Status --------\nsegments allocated 1\npages allocated 30728\npages resident 30626\npages swapped 0\nSwap performance: 0 attempts 0 successes\n\n------ Semaphore Status --------\nused arrays = 1880\nallocated semaphores = 31928\n\n------ Messages: Status --------\nallocated queues = 0\nused headers = 0\nused space = 0 bytes\n\nI'm questioning the number of semaphores being used. In order for\npostgresql to start I had to set the maximum number of semaphores system\nwide to 6000000. This seems to be an abnormal amount of semaphores. I'm\ncurious if this is a bug in the amd64 postgresql port. Is anyone else\nusing postgresql on an AMD64 machine without similar issues?\n\nTIA\nMark\n\n\n",
"msg_date": "06 Jun 2005 12:53:40 -0500",
"msg_from": "Mark Rinaudo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql on an AMD64 machine"
},
{
"msg_contents": "On Jun 6, 2005, at 1:53 PM, Mark Rinaudo wrote:\n\n> I'm questioning the number of semaphores being used. In order for\n> postgresql to start I had to set the maximum number of semaphores \n> system\n> wide to 6000000. This seems to be an abnormal amount of \n> semaphores. I'm\n> curious if this is a bug in the amd64 postgresql port. Is anyone else\n> using postgresql on an AMD64 machine without similar issues?\n>\n\nNo such nonsense required for me under FreeBSD 5.4/amd64. I used the \nsame settings I had under i386 OS. Postgres uses very few \nsemaphores, from what I recall. My system shows 13 active semaphores.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Mon, 6 Jun 2005 14:32:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> wrote:\n> I'm not sure if this is the appropriate list to post this question to\n> but i'm starting with this one because it is related to the performance\n> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n> opteron machine with 8 Gigs of memory. In my attempt to increase the\n> number of shared_buffers from the default to 65000 i was running into a\n> semget error when trying to start Postgresql. After reading the\n> documentation I adjusted the semaphore settings in the kernel to allow\n> Postgresql to start successfully. With this configuration running if I\n> do a ipcs -u i get the following.\n\n\nOn my HP-585, 4xOpteron, 16G RAM, Gentoo Linux (2.6.9):\n\n$ ipcs -u i\n\n------ Shared Memory Status --------\nsegments allocated 1\npages allocated 34866\npages resident 31642\npages swapped 128\nSwap performance: 0 attempts 0 successes\n\n------ Semaphore Status --------\nused arrays = 7\nallocated semaphores = 119\n\n------ Messages: Status --------\nallocated queues = 0\nused headers = 0\nused space = 0 bytes\n\n\nDid you perhaps disable spinlocks when compiling PG?\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Mon, 6 Jun 2005 16:59:21 -0400",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Mike Rylander <[email protected]> writes:\n> On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> wrote:\n>> I'm not sure if this is the appropriate list to post this question to\n>> but i'm starting with this one because it is related to the performance\n>> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n>> opteron machine with 8 Gigs of memory. In my attempt to increase the\n>> number of shared_buffers from the default to 65000 i was running into a\n>> semget error when trying to start Postgresql.\n\n> Did you perhaps disable spinlocks when compiling PG?\n\nThat sure looks like it must be the issue --- in a normal build the\nnumber of semaphores needed does not vary with shared_buffers, but\nit will if Postgres is falling back to semaphore-based spinlocks.\nWhich is a really bad idea from a performance standpoint, so you\nwant to fix the build.\n\nWhich PG version is this exactly, and what configure options did\nyou use? What compiler was used?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jun 2005 17:15:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine "
},
{
"msg_contents": "I'm running the Redhat Version of Postgresql which came pre-installed\nwith Redhat ES. It's version number is 7.3.10-1. I'm not sure what\noptions it was compiled with. Is there a way for me to tell? Should i\njust compile my own postgresql for this platform?\n\nThanks\nMark\n\nOn Mon, 2005-06-06 at 16:15, Tom Lane wrote:\n> Mike Rylander <[email protected]> writes:\n> > On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> wrote:\n> >> I'm not sure if this is the appropriate list to post this question to\n> >> but i'm starting with this one because it is related to the performance\n> >> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n> >> opteron machine with 8 Gigs of memory. In my attempt to increase the\n> >> number of shared_buffers from the default to 65000 i was running into a\n> >> semget error when trying to start Postgresql.\n> \n> > Did you perhaps disable spinlocks when compiling PG?\n> \n> That sure looks like it must be the issue --- in a normal build the\n> number of semaphores needed does not vary with shared_buffers, but\n> it will if Postgres is falling back to semaphore-based spinlocks.\n> Which is a really bad idea from a performance standpoint, so you\n> want to fix the build.\n> \n> Which PG version is this exactly, and what configure options did\n> you use? What compiler was used?\n> \n> \t\t\tregards, tom lane\n> \n-- \nMark Rinaudo\n318-213-8780 ext 111\nBowman Systems\n\n",
"msg_date": "06 Jun 2005 17:28:20 -0500",
"msg_from": "Mark Rinaudo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Mark Rinaudo wrote:\n> I'm running the Redhat Version of Postgresql which came pre-installed\n> with Redhat ES. It's version number is 7.3.10-1. I'm not sure what\n> options it was compiled with. Is there a way for me to tell?\n\n`pg_config --configure` in recent releases.\n\n> Should i just compile my own postgresql for this platform?\n\nYes, I would. 7.4 was the first release to include support for proper \nspinlocks on AMD64.\n\n(From a Redhat POV, it would probably be a good idea to patch 7.3 to \ninclude the relatively trivial changes needed for decent AMD64 \nperformance, assuming that shipping a more recent version of PG with ES \nisn't an option.)\n\n-Neil\n",
"msg_date": "Tue, 07 Jun 2005 09:48:48 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> (From a Redhat POV, it would probably be a good idea to patch 7.3 to \n> include the relatively trivial changes needed for decent AMD64 \n> performance,\n\nHow embarrassing :-( Will see about fixing it. However, this certainly\nwon't ship before the next RHEL3 quarterly update, so in the meantime if\nMark feels like building locally, it wouldn't be a bad idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jun 2005 00:28:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine "
},
{
"msg_contents": "Get FATAL when starting up (64 bit) with large shared_buffers setting\n\nI built a 64 bit for Sparc/Solaris easily but I found that the\nstartup of postmaster generates a FATAL diagnostic due to going\nover the 2GB limit (3.7 GB).\n\nWhen building for 64 bit is there some other\nthings that must change in order to size UP the shared_buffers?\n\nThanks.\n\nDon C.\n\nP.S. A severe checkpoint problem I was having was fixed with\n\"checkpoint_segments=200\".\n\n\nMessage:\n\nFATAL: 460000 is outside the valid range for parameter \"shared_buffers\" \n(16 .. 262143)\nLOG: database system was shut down at 2005-06-07 15:20:28 EDT\n\nMike Rylander wrote:\n\n>On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> wrote:\n> \n>\n>>I'm not sure if this is the appropriate list to post this question to\n>>but i'm starting with this one because it is related to the performance\n>>of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n>>opteron machine with 8 Gigs of memory. In my attempt to increase the\n>>number of shared_buffers from the default to 65000 i was running into a\n>>semget error when trying to start Postgresql. After reading the\n>>documentation I adjusted the semaphore settings in the kernel to allow\n>>Postgresql to start successfully. With this configuration running if I\n>>do a ipcs -u i get the following.\n>> \n>>\n>\n>\n>On my HP-585, 4xOpteron, 16G RAM, Gentoo Linux (2.6.9):\n>\n>$ ipcs -u i\n>\n>------ Shared Memory Status --------\n>segments allocated 1\n>pages allocated 34866\n>pages resident 31642\n>pages swapped 128\n>Swap performance: 0 attempts 0 successes\n>\n>------ Semaphore Status --------\n>used arrays = 7\n>allocated semaphores = 119\n>\n>------ Messages: Status --------\n>allocated queues = 0\n>used headers = 0\n>used space = 0 bytes\n>\n>\n>Did you perhaps disable spinlocks when compiling PG?\n>\n> \n>\n\n",
"msg_date": "Tue, 07 Jun 2005 15:38:30 -0400",
"msg_from": "Donald Courtney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "According to my research, you only need a 64 bit image if you are going \nto be doing intensive floating point operations (which most db servers \ndon't do). Some benchmarking results I've found on the internet \nindicate that 64 bit executables can be slower than 32 bit versions. \nI've been running 32 bit compiles on solaris for several years.\n\nHow much memory do you have on that sparc box? Allocating more than \nabout 7-12% to shared buffers has proven counter productive for us (it \nslows down).\n\nKernel buffers are another animal. :)\n\nDonald Courtney wrote:\n> Get FATAL when starting up (64 bit) with large shared_buffers setting\n> \n> I built a 64 bit for Sparc/Solaris easily but I found that the\n> startup of postmaster generates a FATAL diagnostic due to going\n> over the 2GB limit (3.7 GB).\n> \n> When building for 64 bit is there some other\n> things that must change in order to size UP the shared_buffers?\n> \n> Thanks.\n> \n> Don C.\n> \n> P.S. A severe checkpoint problem I was having was fixed with\n> \"checkpoint_segments=200\".\n> \n> \n> Message:\n> \n> FATAL: 460000 is outside the valid range for parameter \"shared_buffers\" \n> (16 .. 262143)\n> LOG: database system was shut down at 2005-06-07 15:20:28 EDT\n> \n> Mike Rylander wrote:\n> \n>> On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> \n>> wrote:\n>> \n>>\n>>> I'm not sure if this is the appropriate list to post this question to\n>>> but i'm starting with this one because it is related to the performance\n>>> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n>>> opteron machine with 8 Gigs of memory. In my attempt to increase the\n>>> number of shared_buffers from the default to 65000 i was running into a\n>>> semget error when trying to start Postgresql. After reading the\n>>> documentation I adjusted the semaphore settings in the kernel to allow\n>>> Postgresql to start successfully. With this configuration running if I\n>>> do a ipcs -u i get the following.\n>>> \n>>\n>>\n>>\n>> On my HP-585, 4xOpteron, 16G RAM, Gentoo Linux (2.6.9):\n>>\n>> $ ipcs -u i\n>>\n>> ------ Shared Memory Status --------\n>> segments allocated 1\n>> pages allocated 34866\n>> pages resident 31642\n>> pages swapped 128\n>> Swap performance: 0 attempts 0 successes\n>>\n>> ------ Semaphore Status --------\n>> used arrays = 7\n>> allocated semaphores = 119\n>>\n>> ------ Messages: Status --------\n>> allocated queues = 0\n>> used headers = 0\n>> used space = 0 bytes\n>>\n>>\n>> Did you perhaps disable spinlocks when compiling PG?\n>>\n>> \n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n> \n",
"msg_date": "Tue, 07 Jun 2005 12:54:24 -0700",
"msg_from": "Tom Arthurs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Tom Arthurs wrote:\n\n> According to my research, you only need a 64 bit image if you are \n> going to be doing intensive floating point operations (which most db \n> servers don't do). Some benchmarking results I've found on the \n> internet indicate that 64 bit executables can be slower than 32 bit \n> versions. I've been running 32 bit compiles on solaris for several years.\n>\n> How much memory do you have on that sparc box? Allocating more than \n> about 7-12% to shared buffers has proven counter productive for us (it \n> slows down).\n>\nThe system has 8 CPUs w/ 32 GB - I'm hoping to see some benefit to large \ncaches -\nAm I missing something key with postgreSQL? \n\nYes - we have seen with oracle 64 bit that there can be as much as a 10% \nhit moving\nfrom 32 - but we make it up big time with large db-buffer sizes that \ndrastically\nreduce I/O and allow for other things (like more connections). Maybe\nthe expectation of less I/O is not correct?\n\nDon\n\nP.S. built with the Snapshot from two weeks ago.\n\n> Kernel buffers are another animal. :)\n>\n> Donald Courtney wrote:\n>\n>> Get FATAL when starting up (64 bit) with large shared_buffers setting\n>>\n>> I built a 64 bit for Sparc/Solaris easily but I found that the\n>> startup of postmaster generates a FATAL diagnostic due to going\n>> over the 2GB limit (3.7 GB).\n>>\n>> When building for 64 bit is there some other\n>> things that must change in order to size UP the shared_buffers?\n>>\n>> Thanks.\n>>\n>> Don C.\n>>\n>> P.S. A severe checkpoint problem I was having was fixed with\n>> \"checkpoint_segments=200\".\n>>\n>>\n>> Message:\n>>\n>> FATAL: 460000 is outside the valid range for parameter \n>> \"shared_buffers\" (16 .. 262143)\n>> LOG: database system was shut down at 2005-06-07 15:20:28 EDT\n>>\n>> Mike Rylander wrote:\n>>\n>>> On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> \n>>> wrote:\n>>> \n>>>\n>>>> I'm not sure if this is the appropriate list to post this question to\n>>>> but i'm starting with this one because it is related to the \n>>>> performance\n>>>> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n>>>> opteron machine with 8 Gigs of memory. In my attempt to increase the\n>>>> number of shared_buffers from the default to 65000 i was running \n>>>> into a\n>>>> semget error when trying to start Postgresql. After reading the\n>>>> documentation I adjusted the semaphore settings in the kernel to allow\n>>>> Postgresql to start successfully. With this configuration running \n>>>> if I\n>>>> do a ipcs -u i get the following.\n>>>> \n>>>\n>>>\n>>>\n>>>\n>>> On my HP-585, 4xOpteron, 16G RAM, Gentoo Linux (2.6.9):\n>>>\n>>> $ ipcs -u i\n>>>\n>>> ------ Shared Memory Status --------\n>>> segments allocated 1\n>>> pages allocated 34866\n>>> pages resident 31642\n>>> pages swapped 128\n>>> Swap performance: 0 attempts 0 successes\n>>>\n>>> ------ Semaphore Status --------\n>>> used arrays = 7\n>>> allocated semaphores = 119\n>>>\n>>> ------ Messages: Status --------\n>>> allocated queues = 0\n>>> used headers = 0\n>>> used space = 0 bytes\n>>>\n>>>\n>>> Did you perhaps disable spinlocks when compiling PG?\n>>>\n>>> \n>>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>>\n>>\n\n",
"msg_date": "Tue, 07 Jun 2005 16:19:24 -0400",
"msg_from": "Donald Courtney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": ">>\n> The system has 8 CPUs w/ 32 GB - I'm hoping to see some benefit to large \n> caches -\n> Am I missing something key with postgreSQL?\n> Yes - we have seen with oracle 64 bit that there can be as much as a 10% \n> hit moving\n> from 32 - but we make it up big time with large db-buffer sizes that \n> drastically\n\nWell for Opteron you should also gain from the very high memory \nbandwidth and the fact that it has I believe \"3\" FP units per CPU.\n\nSincerely,\n\nJoshua D. Drake\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Tue, 07 Jun 2005 13:27:54 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "On Tue, Jun 07, 2005 at 04:19:24PM -0400, Donald Courtney wrote:\n\n> The system has 8 CPUs w/ 32 GB - I'm hoping to see some benefit to large \n> caches -\n> Am I missing something key with postgreSQL? \n\nYeah. Postgres makes extensive use of the kernel's cache (or, more\nprecisely, assumes that the kernel is doing some caching on its own).\nSo the bulk of the memory should be left to the kernel to handle, and\nshared_buffers be set relatively slow.\n\nThis was the standard wisdom with releases previous to 8.0; I'm not sure\nif anyone confirmed to still hold after the buffer manager changes in\n8.0 and later in 8.1 -- we saw extensive redesign of the bufmgr on both,\nso the behavior may have changed. If you wanna test, I'm sure lots of\npeople here will be interested in the results.\n\n-- \nAlvaro Herrera (<alvherre[a]surnet.cl>)\n\"This is a foot just waiting to be shot\" (Andrew Dunstan)\n",
"msg_date": "Tue, 7 Jun 2005 16:27:55 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Yes, shared buffers in postgres are not used for caching -- unlike \nOracle. Every time we hire an Oracle dba, I have to break them of the \nnotion (which I shared when I started with postgres -- Josh Berkus and \nJosh Drake helped burst that bubble for me) :)\n\nYou should gain i/o reduction through increasing kernel buffers -- \nPostgresql counts on read/write caching through that, so increasing that \nshould get your performance improvemnets -- though I haven't found the \nsweet spot there yet, for solaris. My biggest challenge with \nsolaris/sparc is trying to reduce context switching.\n\nDonald Courtney wrote:\n> Tom Arthurs wrote:\n> \n>> According to my research, you only need a 64 bit image if you are \n>> going to be doing intensive floating point operations (which most db \n>> servers don't do). Some benchmarking results I've found on the \n>> internet indicate that 64 bit executables can be slower than 32 bit \n>> versions. I've been running 32 bit compiles on solaris for several years.\n>>\n>> How much memory do you have on that sparc box? Allocating more than \n>> about 7-12% to shared buffers has proven counter productive for us (it \n>> slows down).\n>>\n> The system has 8 CPUs w/ 32 GB - I'm hoping to see some benefit to large \n> caches -\n> Am I missing something key with postgreSQL?\n> Yes - we have seen with oracle 64 bit that there can be as much as a 10% \n> hit moving\n> from 32 - but we make it up big time with large db-buffer sizes that \n> drastically\n> reduce I/O and allow for other things (like more connections). Maybe\n> the expectation of less I/O is not correct?\n> \n> Don\n> \n> P.S. built with the Snapshot from two weeks ago.\n> \n>> Kernel buffers are another animal. :)\n>>\n>> Donald Courtney wrote:\n>>\n>>> Get FATAL when starting up (64 bit) with large shared_buffers setting\n>>>\n>>> I built a 64 bit for Sparc/Solaris easily but I found that the\n>>> startup of postmaster generates a FATAL diagnostic due to going\n>>> over the 2GB limit (3.7 GB).\n>>>\n>>> When building for 64 bit is there some other\n>>> things that must change in order to size UP the shared_buffers?\n>>>\n>>> Thanks.\n>>>\n>>> Don C.\n>>>\n>>> P.S. A severe checkpoint problem I was having was fixed with\n>>> \"checkpoint_segments=200\".\n>>>\n>>>\n>>> Message:\n>>>\n>>> FATAL: 460000 is outside the valid range for parameter \n>>> \"shared_buffers\" (16 .. 262143)\n>>> LOG: database system was shut down at 2005-06-07 15:20:28 EDT\n>>>\n>>> Mike Rylander wrote:\n>>>\n>>>> On 06 Jun 2005 12:53:40 -0500, Mark Rinaudo <[email protected]> \n>>>> wrote:\n>>>> \n>>>>\n>>>>> I'm not sure if this is the appropriate list to post this question to\n>>>>> but i'm starting with this one because it is related to the \n>>>>> performance\n>>>>> of Postgresql server. I have a Penguin Computing dual AMD 64 bit\n>>>>> opteron machine with 8 Gigs of memory. In my attempt to increase the\n>>>>> number of shared_buffers from the default to 65000 i was running \n>>>>> into a\n>>>>> semget error when trying to start Postgresql. After reading the\n>>>>> documentation I adjusted the semaphore settings in the kernel to allow\n>>>>> Postgresql to start successfully. 
With this configuration running \n>>>>> if I\n>>>>> do a ipcs -u i get the following.\n>>>>> \n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> On my HP-585, 4xOpteron, 16G RAM, Gentoo Linux (2.6.9):\n>>>>\n>>>> $ ipcs -u i\n>>>>\n>>>> ------ Shared Memory Status --------\n>>>> segments allocated 1\n>>>> pages allocated 34866\n>>>> pages resident 31642\n>>>> pages swapped 128\n>>>> Swap performance: 0 attempts 0 successes\n>>>>\n>>>> ------ Semaphore Status --------\n>>>> used arrays = 7\n>>>> allocated semaphores = 119\n>>>>\n>>>> ------ Messages: Status --------\n>>>> allocated queues = 0\n>>>> used headers = 0\n>>>> used space = 0 bytes\n>>>>\n>>>>\n>>>> Did you perhaps disable spinlocks when compiling PG?\n>>>>\n>>>> \n>>>>\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 5: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/docs/faq\n>>>\n>>>\n>>>\n> \n> \n> \n> \n",
"msg_date": "Tue, 07 Jun 2005 13:39:04 -0700",
"msg_from": "Tom Arthurs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "On Tue, Jun 07, 2005 at 01:39:04PM -0700, Tom Arthurs wrote:\n>Yes, shared buffers in postgres are not used for caching \n\nThat begs the question of what they are used for. :)\n\nMike Stone\n",
"msg_date": "Tue, 07 Jun 2005 17:04:39 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> This was the standard wisdom with releases previous to 8.0; I'm not sure\n> if anyone confirmed to still hold after the buffer manager changes in\n> 8.0 and later in 8.1 -- we saw extensive redesign of the bufmgr on both,\n> so the behavior may have changed. If you wanna test, I'm sure lots of\n> people here will be interested in the results.\n\nQuite. The story at the moment is that we haven't bothered to create\nsupport for shared memory exceeding 2Gb, because there's never been any\nevidence that pushing shared_buffers up even close to that, much less\nabove it, was a good idea. Most people have found the \"sweet spot\" to\nbe in the range of 10K to 50K shared buffers, with performance dropping\noff above that.\n\nObviously we'd be willing to do this work if there were convincing\nevidence it'd be worth the time. A benchmark showing performance\ncontinuing to climb with increasing shared_buffers right up to the 2Gb\nlimit would be reasonably convincing. I think there is 0 chance of\ndrawing such a graph with a pre-8.1 server, because of internal\ninefficiencies in the buffer manager ... but with CVS tip the story\nmight be different.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jun 2005 17:11:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine "
},
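For reference, shared_buffers is counted in 8 kB pages, so the numbers being discussed translate roughly as below (a quick sanity check, not a tuning recommendation):

SHOW shared_buffers;                              -- e.g. 50000
SELECT 50000 * 8192 / (1024 * 1024) AS approx_mb; -- about 390 MB
-- the 262143-buffer ceiling in the FATAL message above works out to ~2 GB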
{
"msg_contents": "Tom Arthurs wrote:\n> Yes, shared buffers in postgres are not used for caching\n\nShared buffers in Postgres _are_ used for caching, they just form a \nsecondary cache on top of the kernel's IO cache. Postgres does IO \nthrough the filesystem, which is then cached by the kernel. Increasing \nshared_buffers means that less memory is available for the kernel to \ncache IO -- increasing shared_buffers has been shown to be a net \nperformance loss beyond a certain point. Still, there is value in \nshared_buffers as it means we can avoid a read() system call for hot \npages. We can also do better buffer replacement in the PG shared buffer \nthan the kernel can do (e.g. treating IO caused by VACUUM specially).\n\n> My biggest challenge with solaris/sparc is trying to reduce context\n> switching.\n\nIt would be interesting to see if this is improved with current sources, \nas Tom's bufmgr rewrite should have hopefully have reduced this problem.\n\n-Neil\n",
"msg_date": "Wed, 08 Jun 2005 10:20:19 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Neil Conway wrote:\n> Tom Arthurs wrote:\n> \n>> Yes, shared buffers in postgres are not used for caching\n> \n> \n> Shared buffers in Postgres _are_ used for caching, they just form a\n> secondary cache on top of the kernel's IO cache. Postgres does IO\n> through the filesystem, which is then cached by the kernel. Increasing\n> shared_buffers means that less memory is available for the kernel to\n> cache IO -- increasing shared_buffers has been shown to be a net\n> performance loss beyond a certain point. Still, there is value in\n> shared_buffers as it means we can avoid a read() system call for hot\n> pages. We can also do better buffer replacement in the PG shared buffer\n> than the kernel can do (e.g. treating IO caused by VACUUM specially).\n> \n\nAs I recall, one of the performance problems with a large shared_buffers\nis that there are some commands which require looking at *all* of the\nshared buffer space. So the larger it gets, the longer those functions take.\n\n>> My biggest challenge with solaris/sparc is trying to reduce context\n>> switching.\n> \n> \n> It would be interesting to see if this is improved with current sources,\n> as Tom's bufmgr rewrite should have hopefully have reduced this problem.\n> \n\nThese might be what was fixed with Tom's rewrite. I don't really know.\n\nJohn\n=:->\n\n> -Neil\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "Tue, 07 Jun 2005 19:44:18 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Tom,\n\n> Obviously we'd be willing to do this work if there were convincing\n> evidence it'd be worth the time. A benchmark showing performance\n> continuing to climb with increasing shared_buffers right up to the 2Gb\n> limit would be reasonably convincing. I think there is 0 chance of\n> drawing such a graph with a pre-8.1 server, because of internal\n> inefficiencies in the buffer manager ... but with CVS tip the story\n> might be different.\n\nNot that I've seen in testing so far. Your improvements have, fortunately, \neliminated the penalty for allocating too much shared buffers as far as I can \ntell (at least, allocating 70,000 when gains stopped at 15,000 doesn't seem \nto carry a penalty), but I don't see any progressive gain with increased \nbuffers above the initial ideal. In fact, with clock-sweep the shared_buffer \ncurve is refreshingly flat once it reaches the required level, which will \ntake a lot of the guesswork out of allocating buffers.\n\nRegarding 2GB memory allocation, though, we *could* really use support for \nwork_mem and maintenance_mem of > 2GB. \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 7 Jun 2005 20:05:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Not that I've seen in testing so far. Your improvements have, fortunately, \n> eliminated the penalty for allocating too much shared buffers as far as I can\n> tell (at least, allocating 70,000 when gains stopped at 15,000 doesn't seem \n> to carry a penalty),\n\nCool, that's definitely a step forward ;-)\n\n> Regarding 2GB memory allocation, though, we *could* really use support for \n> work_mem and maintenance_mem of > 2GB. \n\nAgain, let's see some evidence that it's worth putting effort into that.\n(Offhand it seems this is probably an easier fix than changing the\nshared-memory allocation code; but conventional wisdom is that really\nlarge values of work_mem are a bad idea, and I'm not sure I see the case\nfor maintenance_work_mem above 2Gb either.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Jun 2005 23:50:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine "
},
{
"msg_contents": "On Tue, 2005-06-07 at 23:50 -0400, Tom Lane wrote:\n> > Regarding 2GB memory allocation, though, we *could* really use support for \n> > work_mem and maintenance_mem of > 2GB. \n> \n> Again, let's see some evidence that it's worth putting effort into that.\n> (Offhand it seems this is probably an easier fix than changing the\n> shared-memory allocation code; but conventional wisdom is that really\n> large values of work_mem are a bad idea, and I'm not sure I see the case\n> for maintenance_work_mem above 2Gb either.)\n\nWe have strong evidence that an in-memory sort is better than an\nexternal sort. And strong evidence that a hash-join/aggregate is faster\nthan a sort-merge or sort-aggregate.\n\nWhat other evidence do you need?\n\nThe idea that work_mem is bad is a workload dependent thing. It assumes\nthat using the memory for other things is useful. That isn't the case\nfor apps with large tables, which just churn through memory with zero\ngain.\n\nIn 8.2, I imagine a workload management feature that would limit the\nallocation of work_mem and maintenance_work_mem, so that they can be\nmore safely allocated to very high values in production. That would open\nthe door to the use of very high work_mem values.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 08 Jun 2005 07:16:14 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
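A small illustration of raising it only where it pays off rather than globally (names and numbers here are arbitrary; in 8.0 the value is an integer number of kilobytes and applies per sort or hash operation, per backend):

BEGIN;
SET LOCAL work_mem = 524288;   -- roughly 512 MB, for this transaction only
SELECT customer_id, sum(amount)
  FROM big_sales
 GROUP BY customer_id
 ORDER BY 2 DESC;
COMMIT;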
{
"msg_contents": "On Tue, Jun 07, 2005 at 11:50:33PM -0400, Tom Lane wrote:\n>Again, let's see some evidence that it's worth putting effort into that.\n>(Offhand it seems this is probably an easier fix than changing the\n>shared-memory allocation code; but conventional wisdom is that really\n>large values of work_mem are a bad idea, and I'm not sure I see the case\n>for maintenance_work_mem above 2Gb either.)\n\nHmm. That would be a fairly hard thing to test, no? I wouldn't expect to\nsee a smooth curve as the value is increased--I'd expect it to remain\nfairly flat until you hit the sweet spot where you can fit the whole\nworking set into RAM. When you say \"2Gb\", does that imply that the\nmemory allocation limit in 8.1 has been increased from 1G-1?\n\nMike Stone\n",
"msg_date": "Wed, 08 Jun 2005 05:51:50 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "I just puhsd 8.0.3 to production on Sunday, and haven't had a time to \nreally monitor it under load, so I can't tell if it's helped the context \nswitch problem yet or not.\n\nNeil Conway wrote:\n> Tom Arthurs wrote:\n> \n>> Yes, shared buffers in postgres are not used for caching\n> \n> \n> Shared buffers in Postgres _are_ used for caching, they just form a \n> secondary cache on top of the kernel's IO cache. Postgres does IO \n> through the filesystem, which is then cached by the kernel. Increasing \n> shared_buffers means that less memory is available for the kernel to \n> cache IO -- increasing shared_buffers has been shown to be a net \n> performance loss beyond a certain point. Still, there is value in \n> shared_buffers as it means we can avoid a read() system call for hot \n> pages. We can also do better buffer replacement in the PG shared buffer \n> than the kernel can do (e.g. treating IO caused by VACUUM specially).\n> \n>> My biggest challenge with solaris/sparc is trying to reduce context\n>> switching.\n> \n> \n> It would be interesting to see if this is improved with current sources, \n> as Tom's bufmgr rewrite should have hopefully have reduced this problem.\n> \n> -Neil\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n",
"msg_date": "Wed, 08 Jun 2005 09:43:51 -0700",
"msg_from": "Tom Arthurs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Hi,\n\n> I just puhsd 8.0.3 to production on Sunday, and haven't had a time to \n> really monitor it under load, so I can't tell if it's helped the context \n> switch problem yet or not.\n\nAttached is a \"vmstat 5\" output from one of our machines. This is a dual \nXeon 3,2 Ghz with EM64T and 8 GB RAM, running postgresql 8.0.3 on Debian \nSarge 64bit. Connection count is about 350.\n\nLargest amount of cs per second is nearly 10000 which is high, yes, but \nnot too high.\n\nRegards,\nBjoern",
"msg_date": "Wed, 08 Jun 2005 18:53:57 +0200",
"msg_from": "Bjoern Metzdorf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Joshua D. Drake wrote:\n>> Yes - we have seen with oracle 64 bit that there can be as much as a \n>> 10% hit moving\n>> from 32 - but we make it up big time with large db-buffer sizes that \n>> drastically\n> Well for Opteron you should also gain from the very high memory \n> bandwidth and the fact that it has I believe \"3\" FP units per CPU.\n\nSure. But you get those benefits in 32 or 64-bit mode.\n\nSam.\n",
"msg_date": "Thu, 09 Jun 2005 09:14:02 +1200",
"msg_from": "Sam Vilain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
},
{
"msg_contents": "Tom Arthurs wrote:\n> I just puhsd 8.0.3 to production on Sunday, and haven't had a time to \n> really monitor it under load, so I can't tell if it's helped the context \n> switch problem yet or not.\n\n8.0 is unlikely to make a significant difference -- by \"current sources\" \nI meant the current CVS HEAD sources (i.e. 8.1devel).\n\n-Neil\n",
"msg_date": "Thu, 09 Jun 2005 10:19:51 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on an AMD64 machine"
}
] |
[
{
"msg_contents": " > Has anyone ran Postgres with software RAID or LVM on a production box?\n > What have been your experience?\n\nYes, we have run for a couple years Pg with software LVM (mirroring) \nagainst two hardware RAID5 arrays. We host a production Sun box that \nruns 24/7.\n\nMy experience:\n* Software RAID (other than mirroring) is a disaster waiting to happen. \n If the metadata for the RAID set gives out for any reason (CMOS \nscrambles, card dies, power spike, etc.) then you are hosed beyond \nbelief. In most cases it is almost impossible to recover. With \nmirroring, however, you can always boot and operate on a single mirror, \npretending that no LVM/RAID is underway. In other words, each mirror is \na fully functional copy of the data which will operate your server.\n\n* Hardware RAID5 is a terrific way to boost performance via write \ncaching and spreading I/O across multiple spindles. Each of our \nexternal arrays operates 14 drives (12 data, 1 parity and 1 hot spare). \n While RAID5 protects against single spindle failure, it will not hedge \nagainst multiple failures in a short time period, SCSI contoller \nfailure, SCSI cable problems or even wholesale failure of the RAID \ncontroller. All of these things happen in a 24/7 operation. Using \nsoftware RAID1 against the hardware RAID5 arrays hedges against any \nsingle failure.\n\n* Software mirroring gives you tremendous ability to change the system \nwhile it is running, by taking offline the mirror you wish to change and \nthen synchronizing it after the change.\n\nOn a fully operational production server, we have:\n* restriped the RAID5 array\n* replaced all RAID5 media with higher capacity drives\n* upgraded RAID5 controller\n* moved all data from an old RAID5 array to a newer one\n* replaced host SCSI controller\n* uncabled and physically moved storage to a different part of data center\n\nAgain, all of this has taken place (over the years) while our machine \nwas fully operational.\n\n",
"msg_date": "Mon, 06 Jun 2005 13:13:51 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql and Software RAID/LVM"
},
{
"msg_contents": "Marty Scholes wrote:\n>> Has anyone ran Postgres with software RAID or LVM on a production box?\n>> What have been your experience?\n> \n> Yes, we have run for a couple years Pg with software LVM (mirroring)\n> against two hardware RAID5 arrays. We host a production Sun box that\n> runs 24/7.\n> \n> My experience:\n> * Software RAID (other than mirroring) is a disaster waiting to happen.\n> If the metadata for the RAID set gives out for any reason (CMOS\n> scrambles, card dies, power spike, etc.) then you are hosed beyond\n> belief. In most cases it is almost impossible to recover. With\n> mirroring, however, you can always boot and operate on a single mirror,\n> pretending that no LVM/RAID is underway. In other words, each mirror is\n> a fully functional copy of the data which will operate your server.\n\nIsn't this actually more of a problem for the meta-data to give out in a\nhardware situation? I mean, if the card you are using dies, you can't\njust get another one.\nWith software raid, because the meta-data is on the drives, you can pull\nit out of that machine, and put it into any machine that has a\ncontroller which can read the drives, and a similar kernel, and you are\nback up and running.\n> \n> * Hardware RAID5 is a terrific way to boost performance via write\n> caching and spreading I/O across multiple spindles. Each of our\n> external arrays operates 14 drives (12 data, 1 parity and 1 hot spare).\n> While RAID5 protects against single spindle failure, it will not hedge\n> against multiple failures in a short time period, SCSI contoller\n> failure, SCSI cable problems or even wholesale failure of the RAID\n> controller. All of these things happen in a 24/7 operation. Using\n> software RAID1 against the hardware RAID5 arrays hedges against any\n> single failure.\n\nNo, it hedges against *more* than one failure. But you can also do a\nRAID1 over a RAID5 in software. But if you are honestly willing to\ncreate a full RAID1, just create a RAID1 over RAID0. The performance is\nmuch better. And since you have a full RAID1, as long as both drives of\na pairing don't give out, you can lose half of your drives.\n\nIf you want the space, but you feel that RAID5 isn't redundant enough,\ngo to RAID6, which uses 2 parity locations, each with a different method\nof storing parity, so not only is it more redundant, you have a better\nchance of finding problems.\n\n> \n> * Software mirroring gives you tremendous ability to change the system\n> while it is running, by taking offline the mirror you wish to change and\n> then synchronizing it after the change.\n>\n\nThat certainly is a nice ability. But remember that LVM also has the\nidea of \"snapshot\"ing a running system. I don't know the exact details,\njust that there is a way to have some processes see the filesystem as it\nexisted at an exact point in time. Which is also a great way to handle\nbackups.\n\n> On a fully operational production server, we have:\n> * restriped the RAID5 array\n> * replaced all RAID5 media with higher capacity drives\n> * upgraded RAID5 controller\n> * moved all data from an old RAID5 array to a newer one\n> * replaced host SCSI controller\n> * uncabled and physically moved storage to a different part of data center\n> \n> Again, all of this has taken place (over the years) while our machine\n> was fully operational.\n> \nSo you are saying that you were able to replace the RAID controller\nwithout turning off the machine? 
I realize there does exist\nhot-swappable PCI cards, but I think you are overstating what you mean\nby \"fully operational\". For instance, it's not like you can access your\ndata while it is being physically moved.\n\nI do think you had some nice hardware. But I know you can do all of this\nin software as well. It is usually a price/performance tradeoff. You\nspend quite a bit to get a hardware RAID card that can keep up with a\nmodern CPU. I know we have an FC raid box at work which has a full 512MB\nof cache on it, but it wasn't that much cheaper than buying a dedicated\nserver.\n\nJohn\n=:->",
"msg_date": "Mon, 06 Jun 2005 23:36:53 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql and Software RAID/LVM"
},
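[Editor's note on the LVM snapshot feature mentioned in the preceding message] An LVM snapshot gives a crash-consistent, point-in-time view of a running volume that can then be backed up at the file level. A minimal sketch, assuming a volume group vg00 with a logical volume pgdata holding the database cluster (all names here are hypothetical, not taken from the thread):

    # reserve 1 GB of copy-on-write space for the snapshot volume
    lvcreate --size 1G --snapshot --name pgsnap /dev/vg00/pgdata
    mount -o ro /dev/vg00/pgsnap /mnt/pgsnap
    tar czf /backup/pgdata-snapshot.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap && lvremove -f /dev/vg00/pgsnap

Restoring such a copy behaves like recovering from a crash, so it only works if the entire data directory, including WAL, lives on the snapshotted volume; pg_dump remains the simpler option for logical backups.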
{
"msg_contents": "John A Meinel wrote:\n> \n> Isn't this actually more of a problem for the meta-data to give out in a\n> hardware situation? I mean, if the card you are using dies, you can't\n> just get another one.\n> With software raid, because the meta-data is on the drives, you can pull\n> it out of that machine, and put it into any machine that has a\n> controller which can read the drives, and a similar kernel, and you are\n> back up and running.\n\nProbably true. If you have a similar kernel and hardware and if you can \nrecover the state information, knowing where the state information is \nstored. Those are some very big \"ifs\" during a hectic disaster.\n\n> No, it hedges against *more* than one failure. But you can also do a\n> RAID1 over a RAID5 in software. But if you are honestly willing to\n> create a full RAID1, just create a RAID1 over RAID0. The performance is\n> much better. And since you have a full RAID1, as long as both drives of\n> a pairing don't give out, you can lose half of your drives.\n\nTrue as well. The problem with RAID1 over RAID0 is that, during a drive \nfailure, you are one bad sector from disaster. Further, RAID5 does \nautomatic rebuild, whereas most RAID1 setups do not. RAID5 reduces the \namount of time that things are degraded, reducing the time that your \ndata is in danger.\n\n> If you want the space, but you feel that RAID5 isn't redundant enough,\n> go to RAID6, which uses 2 parity locations, each with a different method\n> of storing parity, so not only is it more redundant, you have a better\n> chance of finding problems.\n\nAgreed, RAID6 is the future, but still won't keep the server running \nwhen the RAID controller dies, or the SCSI/FC host adapter goes, or you \nwant to upgrade controller firmware, or you want to replace the media, or...\n\n> So you are saying that you were able to replace the RAID controller\n> without turning off the machine? I realize there does exist\n> hot-swappable PCI cards, but I think you are overstating what you mean\n> by \"fully operational\". For instance, it's not like you can access your\n> data while it is being physically moved.\n\nDetach mirror 1, uncable and move, recable and resync. Detach mirror 2, \nuncable and move, recable and resync.\n\n> \n> I do think you had some nice hardware. But I know you can do all of this\n> in software as well. It is usually a price/performance tradeoff. You\n> spend quite a bit to get a hardware RAID card that can keep up with a\n> modern CPU. I know we have an FC raid box at work which has a full 512MB\n> of cache on it, but it wasn't that much cheaper than buying a dedicated\n> server.\n\nWe run two Nexsan ATABoy2 arrays. These can be found in 1 TB \nconfigurations for about $3,000 each, putting mirrored RAID5 storage at \n$6 per GB. Is that a lot of money for storage? Maybe. In our case, \nthat's dirt cheap protection against storage-related downtime.\n\nMarty\n\n",
"msg_date": "Tue, 07 Jun 2005 09:30:13 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql and Software RAID/LVM"
},
{
"msg_contents": "\nDebian Stable has gone from Woody to Sarge.\n\nHooray!\n\nThat means the normal package installed goes from 7.2.1 to 7.4.7.\n\nThanks to the folks who told me about backports.org, but I didn't follow\nthrough and load it, though. Maybe when backports has 8.x I'll go that\nroute.\n\nThanks Oliver for the work you did (I'm assuming) on getting the Sarge\npostgreSQL package ready over the various incarnations of Testing.\n\nNow I'm off to upgrade the rest of my machines, prudently saving my\nproduction server for last.\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Tue, 7 Jun 2005 11:51:32 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Debian Stable goes from Woody to Sarge!!"
},
{
"msg_contents": "[email protected] wrote:\n> Thanks Oliver for the work you did (I'm assuming) on getting the\n> Sarge postgreSQL package ready over the various incarnations of\n> Testing.\n\nMartin Pitt maintains the Debian packages of PostgreSQL these days.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Tue, 7 Jun 2005 18:15:21 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian Stable goes from Woody to Sarge!!"
}
] |
[
{
"msg_contents": "\nHi Everyone,\n\nIm having a performance issue with version 7.3.4 which i first thought was Disk IO\nrelated, however now it seems like the problem is caused by really slow commits, this\nis running on Redhat 8.\n\nBasically im taking a .sql file with insert of about 15,000 lines and <'ing straight\ninto psql DATABASENAME, the Disk writes never gets over about 2000 on this machine\nwith a RAID5 SCSI setup, this happens in my PROD and DEV environment.\n\nIve installed the latest version on RedHat ES3 and copied the configs across however\nthe inserts are really really fast..\n\nWas there a performce change from 7.3.4 to current to turn of autocommits by default\nor is buffering handled differently ?\n\nI have ruled out Disk IO issues as a siple 'cp' exceeds Disk writes to 60000 (using vmstat)\n\nIf i do this with a BEGIN; and COMMIT; its really fast, however not practical as im setting\nup a cold-standby server for automation.\n\nHave been trying to debug for a few days now and see nothing.. here is some info :\n\n::::::::::::::\n/proc/sys/kernel/shmall\n::::::::::::::\n2097152\n::::::::::::::\n/proc/sys/kernel/shmmax\n::::::::::::::\n134217728\n::::::::::::::\n/proc/sys/kernel/shmmni\n::::::::::::::\n4096\n\n\nshared_buffers = 51200\nmax_fsm_relations = 1000\nmax_fsm_pages = 10000\nmax_locks_per_transaction = 64\nwal_buffers = 64\neffective_cache_size = 65536\n\nMemTotal: 1547608 kB\nMemFree: 47076 kB\nMemShared: 0 kB\nBuffers: 134084 kB\nCached: 1186596 kB\nSwapCached: 544 kB\nActive: 357048 kB\nActiveAnon: 105832 kB\nActiveCache: 251216 kB\nInact_dirty: 321020 kB\nInact_laundry: 719492 kB\nInact_clean: 28956 kB\nInact_target: 285300 kB\nHighTotal: 655336 kB\nHighFree: 1024 kB\nLowTotal: 892272 kB\nLowFree: 46052 kB\nSwapTotal: 1534056 kB\nSwapFree: 1526460 kB\n\nThis is a real doosey for me, please provide any advise possible.\n\nSteve\n",
"msg_date": "Wed, 8 Jun 2005 18:39:09 +0930",
"msg_from": "\"Steve Pollard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Importing from pg_dump slow, low Disk IO"
}
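[Editor's note] The poster reports that an explicit BEGIN/COMMIT makes the import fast but is awkward to automate; the transaction wrapper can instead be added on the fly when feeding the dump to psql. A minimal sketch (file and database names are placeholders, not from the thread):

    ( echo 'BEGIN;'; cat standby_dump.sql; echo 'COMMIT;' ) | psql standby_db

With autocommit behaviour, each of the ~15,000 INSERTs commits and flushes WAL on its own, which is what keeps the disk-write rate so low; batching them into one transaction (or into chunks of a few thousand rows) removes most of that per-statement commit cost.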
] |
[
{
"msg_contents": "Hi,\n\nI'm having problems with the query optimizer and FULL OUTER JOIN on \nPostgreSQL 7.4. I cannot get it to use my indexes with full outer joins. \nI might be naive, but I think that it should be possible?\n\nI have two BIG tables (virtually identical) with 3 NOT NULL columns \nStation_id, TimeObs, Temp_XXXX, with unique indexes on (Station_id, \nTimeObs) and valid ANALYSE (set statistics=100). I want to join the two \ntables with a FULL OUTER JOIN.\n\nWhen I specify the query as:\n\nSELECT station_id, timeobs,temp_grass, temp_dry_at_2m\n FROM temp_dry_at_2m a\n FULL OUTER JOIN temp_grass b \n USING (station_id, timeobs)\n WHERE station_id = 52981\n AND timeobs = '2004-1-1 0:0:0'\n\nI get the correct results\n\n station_id | timeobs | temp_grass | temp_dry_at_2m\n------------+---------------------+------------+----------------\n 52944 | 2004-01-01 00:10:00 | | -1.1\n(1 row)\n\nBUT LOUSY performance, and the following EXPLAIN:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Full Join (cost=1542369.83..1618958.58 rows=6956994 width=32) (actual time=187176.408..201436.264 rows=1 loops=1)\n Merge Cond: ((\"outer\".station_id = \"inner\".station_id) AND (\"outer\".timeobs = \"inner\".timeobs))\n Filter: ((COALESCE(\"outer\".station_id, \"inner\".station_id) = 52981) AND (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 00:00:00'::timestamp without time zone))\n -> Sort (cost=1207913.44..1225305.93 rows=6956994 width=16) (actual time=145748.253..153851.607 rows=6956994 loops=1)\n Sort Key: a.station_id, a.timeobs\n -> Seq Scan on temp_dry_at_2m a (cost=0.00..117549.94 rows=6956994 width=16) (actual time=0.049..54226.770 rows=6956994 loops=1)\n -> Sort (cost=334456.38..340472.11 rows=2406292 width=16) (actual time=31668.876..34491.123 rows=2406292 loops=1)\n Sort Key: b.station_id, b.timeobs\n -> Seq Scan on temp_grass b (cost=0.00..40658.92 rows=2406292 width=16) (actual time=0.052..5484.489 rows=2406292 loops=1)\n Total runtime: 201795.989 ms\n(10 rows)\n\nIf I change the query (note the \"b.\"s)\n\nexplain analyse SELECT b.station_id, b.timeobs,temp_grass, temp_dry_at_2m\n FROM temp_dry_at_2m a\n FULL OUTER JOIN temp_grass b\n USING (station_id, timeobs)\n WHERE b.station_id = 52981\n AND b.timeobs = '2004-1-1 0:0:0'\n\nI seem to destroy the FULL OUTER JOIN and get wrong results (nothing)\nIf I had happend to use \"a.\", and not \"b.\", I would have gotten correct \nresults (by accident).\nThe \"a.\" variant gives this EXPLAIN:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..11.97 rows=1 width=20) (actual time=0.060..0.067 rows=1 loops=1)\n -> Index Scan using temp_dry_at_2m_idx on temp_dry_at_2m a (cost=0.00..5.99 rows=1 width=16) (actual time=0.033..0.036 rows=1 loops=1)\n Index Cond: ((station_id = 52981) AND (timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n -> Index Scan using temp_grass_idx on temp_grass b (cost=0.00..5.96 rows=1 width=16) (actual time=0.018..0.021 rows=1 loops=1)\n Index Cond: ((\"outer\".station_id = b.station_id) AND (\"outer\".timeobs = b.timeobs))\n Total runtime: 0.140 ms\n(6 rows)\n\nWhy will PostgreSQL not use the same plan for both these queries - they \nare virtually identical??\n\nI have tried to formulate the problem with 
left joins, but this demands \nfrom me that I know which table has all the values (and thus has to go \nfirst), and in practice no such table exists.\n\nTIA,\nKim Bisgaard.\n\n",
"msg_date": "Wed, 08 Jun 2005 11:37:40 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "full outer performance problem"
},
{
"msg_contents": "On Wed, Jun 08, 2005 at 11:37:40 +0200,\n Kim Bisgaard <[email protected]> wrote:\n> Hi,\n> \n> I'm having problems with the query optimizer and FULL OUTER JOIN on \n> PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins. \n> I might be naive, but I think that it should be possible?\n> \n> I have two BIG tables (virtually identical) with 3 NOT NULL columns \n> Station_id, TimeObs, Temp_XXXX, with unique indexes on (Station_id, \n> TimeObs) and valid ANALYSE (set statistics=100). I want to join the two \n> tables with a FULL OUTER JOIN.\n> \n> When I specify the query as:\n> \n> SELECT station_id, timeobs,temp_grass, temp_dry_at_2m\n> FROM temp_dry_at_2m a\n> FULL OUTER JOIN temp_grass b \n> USING (station_id, timeobs)\n> WHERE station_id = 52981\n> AND timeobs = '2004-1-1 0:0:0'\n> \n> I get the correct results\n> \n> station_id | timeobs | temp_grass | temp_dry_at_2m\n> ------------+---------------------+------------+----------------\n> 52944 | 2004-01-01 00:10:00 | | -1.1\n> (1 row)\n> \n> BUT LOUSY performance, and the following EXPLAIN:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Full Join (cost=1542369.83..1618958.58 rows=6956994 width=32) \n> (actual time=187176.408..201436.264 rows=1 loops=1)\n> Merge Cond: ((\"outer\".station_id = \"inner\".station_id) AND \n> (\"outer\".timeobs = \"inner\".timeobs))\n> Filter: ((COALESCE(\"outer\".station_id, \"inner\".station_id) = 52981) AND \n> (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 \n> 00:00:00'::timestamp without time zone))\n> -> Sort (cost=1207913.44..1225305.93 rows=6956994 width=16) (actual \n> time=145748.253..153851.607 rows=6956994 loops=1)\n> Sort Key: a.station_id, a.timeobs\n> -> Seq Scan on temp_dry_at_2m a (cost=0.00..117549.94 \n> rows=6956994 width=16) (actual time=0.049..54226.770 rows=6956994 \n> loops=1)\n> -> Sort (cost=334456.38..340472.11 rows=2406292 width=16) (actual \n> time=31668.876..34491.123 rows=2406292 loops=1)\n> Sort Key: b.station_id, b.timeobs\n> -> Seq Scan on temp_grass b (cost=0.00..40658.92 rows=2406292 \n> width=16) (actual time=0.052..5484.489 rows=2406292 loops=1)\n> Total runtime: 201795.989 ms\n> (10 rows)\n\nSomeone else will need to comment on why Postgres can't use a more\nefficient plan. What I think will work for you is to restrict\nthe station_id and timeobs on each side and then do a full join.\nYou can try something like the sample query below (which hasn't been tested):\nSELECT station_id, timeobs, temp_grass, temp_dry_at_2m\n FROM\n (SELECT station_id, timeobs, temp_dry_at_2m\n FROM temp_dry_at_2m\n WHERE\n station_id = 52981\n AND\n timeobs = '2004-1-1 0:0:0') a\n FULL OUTER JOIN\n (SELECT station_id, timeobs, temp_grass\n FROM temp_grass\n WHERE\n station_id = 52981\n AND\n timeobs = '2004-1-1 0:0:0') b\n USING (station_id, timeobs)\n",
"msg_date": "Wed, 8 Jun 2005 07:17:55 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem"
},
{
"msg_contents": "Hi Bruno,\n\nThanks for the moral support! I feel so too - but I am confident it will \nshow up soon.\n\nW.r.t. your rewrite of the query, I get this \"ERROR: could not devise a \nquery plan for the given query\" but no further details - I will try google\n\nRegards,\nKim.\n\nBruno Wolff III wrote:\n\n>On Wed, Jun 08, 2005 at 11:37:40 +0200,\n> Kim Bisgaard <[email protected]> wrote:\n> \n>\n>>Hi,\n>>\n>>I'm having problems with the query optimizer and FULL OUTER JOIN on \n>>PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins. \n>>I might be naive, but I think that it should be possible?\n>>\n>>I have two BIG tables (virtually identical) with 3 NOT NULL columns \n>>Station_id, TimeObs, Temp_XXXX, with unique indexes on (Station_id, \n>>TimeObs) and valid ANALYSE (set statistics=100). I want to join the two \n>>tables with a FULL OUTER JOIN.\n>>\n>>When I specify the query as:\n>>\n>>SELECT station_id, timeobs,temp_grass, temp_dry_at_2m\n>> FROM temp_dry_at_2m a\n>> FULL OUTER JOIN temp_grass b \n>> USING (station_id, timeobs)\n>> WHERE station_id = 52981\n>> AND timeobs = '2004-1-1 0:0:0'\n>>\n>>I get the correct results\n>>\n>>station_id | timeobs | temp_grass | temp_dry_at_2m\n>>------------+---------------------+------------+----------------\n>> 52944 | 2004-01-01 00:10:00 | | -1.1\n>>(1 row)\n>>\n>>BUT LOUSY performance, and the following EXPLAIN:\n>>\n>> QUERY PLAN\n>>------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>Merge Full Join (cost=1542369.83..1618958.58 rows=6956994 width=32) \n>>(actual time=187176.408..201436.264 rows=1 loops=1)\n>> Merge Cond: ((\"outer\".station_id = \"inner\".station_id) AND \n>> (\"outer\".timeobs = \"inner\".timeobs))\n>> Filter: ((COALESCE(\"outer\".station_id, \"inner\".station_id) = 52981) AND \n>> (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 \n>> 00:00:00'::timestamp without time zone))\n>> -> Sort (cost=1207913.44..1225305.93 rows=6956994 width=16) (actual \n>> time=145748.253..153851.607 rows=6956994 loops=1)\n>> Sort Key: a.station_id, a.timeobs\n>> -> Seq Scan on temp_dry_at_2m a (cost=0.00..117549.94 \n>> rows=6956994 width=16) (actual time=0.049..54226.770 rows=6956994 \n>> loops=1)\n>> -> Sort (cost=334456.38..340472.11 rows=2406292 width=16) (actual \n>> time=31668.876..34491.123 rows=2406292 loops=1)\n>> Sort Key: b.station_id, b.timeobs\n>> -> Seq Scan on temp_grass b (cost=0.00..40658.92 rows=2406292 \n>> width=16) (actual time=0.052..5484.489 rows=2406292 loops=1)\n>>Total runtime: 201795.989 ms\n>>(10 rows)\n>> \n>>\n>\n>Someone else will need to comment on why Postgres can't use a more\n>efficient plan. 
What I think will work for you is to restrict\n>the station_id and timeobs on each side and then do a full join.\n>You can try something like the sample query below (which hasn't been tested):\n>SELECT station_id, timeobs, temp_grass, temp_dry_at_2m\n> FROM\n> (SELECT station_id, timeobs, temp_dry_at_2m\n> FROM temp_dry_at_2m\n> WHERE\n> station_id = 52981\n> AND\n> timeobs = '2004-1-1 0:0:0') a\n> FULL OUTER JOIN\n> (SELECT station_id, timeobs, temp_grass\n> FROM temp_grass\n> WHERE\n> station_id = 52981\n> AND\n> timeobs = '2004-1-1 0:0:0') b\n> USING (station_id, timeobs)\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n",
"msg_date": "Wed, 08 Jun 2005 14:32:28 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: full outer performance problem"
},
{
"msg_contents": "Kim Bisgaard <[email protected]> writes:\n> SELECT station_id, timeobs,temp_grass, temp_dry_at_2m\n> FROM temp_dry_at_2m a\n> FULL OUTER JOIN temp_grass b \n> USING (station_id, timeobs)\n> WHERE station_id = 52981\n> AND timeobs = '2004-1-1 0:0:0'\n\n> explain analyse SELECT b.station_id, b.timeobs,temp_grass, temp_dry_at_2m\n> FROM temp_dry_at_2m a\n> FULL OUTER JOIN temp_grass b\n> USING (station_id, timeobs)\n> WHERE b.station_id = 52981\n> AND b.timeobs = '2004-1-1 0:0:0'\n\n> Why will PostgreSQL not use the same plan for both these queries - they \n> are virtually identical??\n\nBecause they're semantically completely different. The second query is\neffectively a RIGHT JOIN, because join rows in which b is all-null will\nbe thrown away by the WHERE. The optimizer sees this (note your second\nplan doesn't use a Full Join step anywhere) and is able to produce a\nmuch better plan. Full outer join is difficult to optimize, in part\nbecause we have no choice but to use a merge join for it --- the other\njoin types don't support full join.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 10:03:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem "
},
{
"msg_contents": "Kim Bisgaard <[email protected]> writes:\n> W.r.t. your rewrite of the query, I get this \"ERROR: could not devise a \n> query plan for the given query\" but no further details - I will try google\n\nWhich PG version are you using again? That should be fixed in 7.4.3\nand later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 12:23:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem "
},
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Kim Bisgaard <[email protected]> writes:\n> > SELECT station_id, timeobs,temp_grass, temp_dry_at_2m\n> > FROM temp_dry_at_2m a\n> > FULL OUTER JOIN temp_grass b\n> > USING (station_id, timeobs)\n> > WHERE station_id = 52981\n> > AND timeobs = '2004-1-1 0:0:0'\n>\n> > explain analyse SELECT b.station_id, b.timeobs,temp_grass, temp_dry_at_2m\n> > FROM temp_dry_at_2m a\n> > FULL OUTER JOIN temp_grass b\n> > USING (station_id, timeobs)\n> > WHERE b.station_id = 52981\n> > AND b.timeobs = '2004-1-1 0:0:0'\n>\n> > Why will PostgreSQL not use the same plan for both these queries - they\n> > are virtually identical??\n>\n> Because they're semantically completely different. The second query is\n> effectively a RIGHT JOIN, because join rows in which b is all-null will\n> be thrown away by the WHERE. The optimizer sees this (note your second\n> plan doesn't use a Full Join step anywhere) and is able to produce a\n> much better plan. Full outer join is difficult to optimize, in part\n> because we have no choice but to use a merge join for it --- the other\n> join types don't support full join.\n>\n> \t\t\tregards, tom lane\n>\n\n\nYes I am aware that they are not \"identical\", they also give different results,\nbut the data nessesary to compute the results is (0-2 rows, 0-1 row from each\ntable), and thus ideally have the potential to have similar performance - to my\nhead anyway, but I may not have grasped the complete picture yet :-)\n\nRegards,\nKim.\n",
"msg_date": "Wed, 8 Jun 2005 20:46:35 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: full outer performance problem "
},
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Kim Bisgaard <[email protected]> writes:\n> > W.r.t. your rewrite of the query, I get this \"ERROR: could not devise a\n> > query plan for the given query\" but no further details - I will try google\n>\n> Which PG version are you using again? That should be fixed in 7.4.3\n> and later.\n>\n> \t\t\tregards, tom lane\n>\n\nIts 7.4.1. I am in the process (may take a while yet) of installing 8.0.3 on the\nsame hardware in order to have a parallel system. Time is a finite meassure :-)\n\nI must admit I would rather have the first query perform, that have this\nworkaround function ;-)\n\nRegards,\nKim.\n",
"msg_date": "Wed, 8 Jun 2005 20:53:52 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: full outer performance problem "
}
] |
[
{
"msg_contents": "It seems that Postgres is estimating that all rows in a 50k row table\nwill be returned, but only one should match. The query runs slow because\nof the seqscan. When I set enable_seqscan to off, then it does an index\nscan and it runs quickly.\n\nI've set the statistics target on the index to 100 and 1000, and they\ndon't make a difference in the plan. I've also ran VACUUM ANALYZE right\nbefore the query.\n\nHere is my query, output of EXPLAIN ANALYZE, and my tables:\nI'm not sure how wrapping will make this look, so I've put it into a\npastebin also, if it makes it easier to read:\nhttp://rafb.net/paste/results/RqeyX523.nln.html\n\ntalluria=# explain analyze SELECT t.*, p.name AS owner, c.name FROM\ntiles AS t LEFT JOIN cities AS c USING (cityid) LEFT JOIN players p\nUSING (playerid) WHERE box(t.coord, t.coord) ~= box(point (4,3), point\n(4,3));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=119.07..122.13 rows=52 width=55) (actual time=232.777..232.780 rows=1 loops=1)\n Merge Cond: (\"outer\".playerid = \"inner\".playerid)\n -> Index Scan using users_pkey on players p (cost=0.00..4138.82 rows=56200 width=8) (actual time=0.017..122.409 rows=56200 loops=1)\n -> Sort (cost=119.07..119.20 rows=52 width=55) (actual time=0.070..0.072 rows=1 loops=1)\n Sort Key: c.playerid\n -> Hash Left Join (cost=1.03..117.59 rows=52 width=55) (actual time=0.045..0.059 rows=1 loops=1)\n Hash Cond: (\"outer\".cityid = \"inner\".cityid)\n -> Index Scan using tiles_coord_key on tiles t (cost=0.00..116.29 rows=52 width=37) (actual time=0.014..0.026 rows=1 loops=1)\n Index Cond: (box(coord, coord) ~= '(4,3),(4,3)'::box)\n -> Hash (cost=1.02..1.02 rows=2 width=22) (actual time=0.017..0.017 rows=0 loops=1)\n -> Seq Scan on cities c (cost=0.00..1.02 rows=2 width=22) (actual time=0.008..0.012 rows=2 loops=1)\n Total runtime: 232.893 ms\n(12 rows)\n \ntalluria=# set enable_seqscan = false;\nSET\ntalluria=# explain analyze SELECT t.*, p.name AS owner, c.name FROM tiles AS t LEFT JOIN cities AS c USING (cityid) LEFT JOIN players p USING (playerid) WHERE box(t.coord, t.coord) ~= box(point (4,3), point (4,3));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=121.07..124.14 rows=52 width=55) (actual time=0.102..0.105 rows=1 loops=1)\n Merge Cond: (\"outer\".playerid = \"inner\".playerid)\n -> Sort (cost=121.07..121.20 rows=52 width=55) (actual time=0.076..0.077 rows=1 loops=1)\n Sort Key: c.playerid\n -> Hash Left Join (cost=3.03..119.59 rows=52 width=55) (actual time=0.053..0.066 rows=1 loops=1)\n Hash Cond: (\"outer\".cityid = \"inner\".cityid)\n -> Index Scan using tiles_coord_key on tiles t (cost=0.00..116.29 rows=52 width=37) (actual time=0.014..0.026 rows=1 loops=1)\n Index Cond: (box(coord, coord) ~= '(4,3),(4,3)'::box)\n -> Hash (cost=3.02..3.02 rows=2 width=22) (actual time=0.026..0.026 rows=0 loops=1)\n -> Index Scan using cities_userid_key on cities c (cost=0.00..3.02 rows=2 width=22) (actual time=0.016..0.021 rows=2 loops=1)\n -> Index Scan using users_pkey on players p (cost=0.00..4138.82 rows=56200 width=8) (actual time=0.012..0.012 rows=1 loops=1)\n Total runtime: 0.200 ms\n(12 rows)\n \ntalluria=# \\d tiles\n Table \"public.tiles\"\n Column | Type | 
Modifiers\n--------+-------------------+----------------------------------------------------------------------\n tileid | integer | not null default nextval('tiles_tileid_seq'::text)\n mapid | integer | not null default 1\n tile | character varying | not null default 'field'::character varying\n coord | point | not null default point((0)::double precision, (0)::double precision)\n cityid | integer |\nIndexes:\n \"times_pkey\" PRIMARY KEY, btree (tileid) CLUSTER\n \"tiles_cityid_key\" btree (cityid)\n \"tiles_coord_key\" rtree (box(coord, coord))\nForeign-key constraints:\n \"tiles_cityid_fkey\" FOREIGN KEY (cityid) REFERENCES cities(cityid) ON UPDATE CASCADE ON DELETE SET NULL\n \ntalluria=# \\d cities\n Table \"public.cities\"\n Column | Type | Modifiers\n-------------+-----------------------+-----------------------------------------------------\n cityid | integer | not null default nextval('cities_cityid_seq'::text)\n playerid | integer | not null default 0\n bordercolor | character(6) | not null default '0000ff'::bpchar\n citystatus | smallint | not null default 0\n name | character varying(30) | not null\nIndexes:\n \"cities_pkey\" PRIMARY KEY, btree (cityid)\n \"cities_cityname_uikey\" UNIQUE, btree (lower(name::text))\n \"cities_userid_key\" btree (playerid)\nForeign-key constraints:\n \"cities_userid_fkey\" FOREIGN KEY (playerid) REFERENCES players(playerid) ON UPDATE CASCADE ON DELETE CASCADE\n \ntalluria=# \\d players\n Table \"public.players\"\n Column | Type | Modifiers\n-----------------+------------------------+----------------------------------------------------------------------\n playerid | integer | not null default nextval('players_playerid_seq'::text)\n username | character varying(30) | not null default ''::character varying\n md5password | character(32) | not null default (''::bpchar)::character(1)\n name | character varying(100) | not null default ''::character varying\n email | character varying(50) | not null default ''::character varying\n(snipped a few irrelavent columns)\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (playerid)\n \"players_username_key\" UNIQUE, btree (username, md5password)\n \"users_username_lkey\" UNIQUE, btree (lower(username::text))\n \"users_coord_key\" rtree (box(coord, coord))\nForeign-key constraints:\n \"players_stylesheet_fkey\" FOREIGN KEY (stylesheet) REFERENCES stylesheets(stylesheetid) ON UPDATE CASCADE ON DELETE SET DEFAULT\n \"users_arm\" FOREIGN KEY (arm) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_activefight_pkey\" FOREIGN KEY (activefight) REFERENCES monsters(monsterid) ON UPDATE CASCADE ON DELETE SET NULL\n \"players_map_fkey\" FOREIGN KEY (map) REFERENCES maps(mapid) ON UPDATE CASCADE ON DELETE SET DEFAULT\n \"users_belt\" FOREIGN KEY (belt) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_body\" FOREIGN KEY (body) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_head\" FOREIGN KEY (head) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_lefthand\" FOREIGN KEY (lefthand) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_leg\" FOREIGN KEY (leg) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n \"users_righthand\" FOREIGN KEY (righthand) REFERENCES items(itemid) ON UPDATE CASCADE ON DELETE SET NULL\n\nThanks in advance for any help,\nAllan Wang\n\n",
"msg_date": "Wed, 08 Jun 2005 11:04:04 -0400",
"msg_from": "Allan Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problems, bad estimates and plan"
},
{
"msg_contents": "Allan Wang <[email protected]> writes:\n> It seems that Postgres is estimating that all rows in a 50k row table\n> will be returned, but only one should match.\n\nI think this is the same issue fixed here:\n\n2005-04-03 21:43 tgl\n\n\t* src/backend/optimizer/path/: costsize.c (REL7_4_STABLE),\n\tcostsize.c (REL8_0_STABLE), costsize.c: In cost_mergejoin, the\n\tearly-exit effect should not apply to the outer side of an outer\n\tjoin. Per andrew@supernews.\n\nAre you running 7.4.8 or 8.0.2 or later?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 13:02:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems, bad estimates and plan "
},
{
"msg_contents": "Allan Wang <[email protected]> writes:\n> On Wed, 2005-06-08 at 13:02 -0400, Tom Lane wrote:\n>> Are you running 7.4.8 or 8.0.2 or later?\n\n> I'm running 8.0.2 on Gentoo.\n\nOh, OK [ looks again ... ] I read the join backward, the issue I was\nconcerned about would've applied to a right join there not left.\n\nThe seqscan vs indexscan difference is a red herring: if you look at the\nexplain output, the only thing that changes to an indexscan is the scan\non cities, which is only two rows and is not taking any time anyway.\nThe thing that is taking a long time (or not) is the indexscan over\nplayers. The planner is expecting that to stop short of completion\n(presumably based on comparing the maximum values of playerid in\nthe two tables) --- and in one plan it does so, so the planner's\nlogic is apparently correct.\n\nAre there any NULLs in c.playerid? We found an executor issue recently\nthat it would not figure out it could stop the scan if there was a NULL\ninvolved.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 13:39:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems, bad estimates and plan "
},
{
"msg_contents": "[ Please cc your responses to the list; other people may be interested\n in the same problem ]\n\nAllan Wang <[email protected]> writes:\n> On Wed, 2005-06-08 at 13:39 -0400, Tom Lane wrote:\n>> Are there any NULLs in c.playerid?\n\n> Here is the contents of cities:\n\nI'm sorry, what I should've said is \"are there any NULLs in c.playerid\nin the output of the first LEFT JOIN?\" In practice that means \"does\nthe selected row of tiles actually join to cities?\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 13:57:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems, bad estimates and plan "
},
{
"msg_contents": "Allan Wang <[email protected]> writes:\n> No, the tiles row doesn't join with cities:\n\nUh-huh, so it's the same issue described here:\nhttp://archives.postgresql.org/pgsql-performance/2005-05/msg00219.php\n\nThis is fixed in CVS tip but the change was large enough that I'm\ndisinclined to try to back-port it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2005 14:06:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems, bad estimates and plan "
}
] |
[
{
"msg_contents": "I'm tasked with specifying a new machine to run a web application\nprototype. The machine will be serving web pages with a Postgresql\nbackend; we will be making extensive use of plpgsql functions. No\ndatabase tables are likely to go over a million rows during the\nprototype period.\n\nWe are considering two RAID1 system disks, and two RAID1 data disks.\nWe've avoided buying Xeons. The machine we are looking at looks like\nthis:\n\n Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays\n S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots\n 2x - (Dual) AMD Opteron 246 Processors (2.0GHz) - 1MB L2 Cache/core (single core)\n 2GB (2x 1024MB) DDR-400 (PC3200) ECC Registered SDRAM (single rank)\n 4 Port AMCC/3Ware 9500-4LP PCI SATA RAID Controller\n 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n Slimline 8x DVD / 24x CD-ROM Drive\n Standard 3yr (UK) Next Business Day On-site Warranty \n\nI would be grateful for any comments about this config.\n\nKind regards,\nRory\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n",
"msg_date": "Wed, 8 Jun 2005 17:34:18 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help specifying new web server/database machine"
},
{
"msg_contents": "Hi,\n\nRory Campbell-Lange wrote:\n\n > We are considering two RAID1 system disks, and two RAID1 data disks.\n> We've avoided buying Xeons. The machine we are looking at looks like\n> this:\n> \n> Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays\n> S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots\n> 2x - (Dual) AMD Opteron 246 Processors (2.0GHz) - 1MB L2 Cache/core (single core)\n> 2GB (2x 1024MB) DDR-400 (PC3200) ECC Registered SDRAM (single rank)\n\nMake that 4 or 8 GB total. We have seen a huge boost in performance when \nwe upgraded from 4 to 8 GB. Make sure to use a decent 64bit Linux.\n\n> 4 Port AMCC/3Ware 9500-4LP PCI SATA RAID Controller\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n\nThree options:\n\n9500-4LP with Raptor drives 10k rpm, raid 1 + raid 1\n9500-8LP with Raptor drives 10k rpm, raid 10 + raid 1\nGo for SCSI (LSI Megaraid or ICP Vortex) and take 10k drives\n\nBBU option is always nice.\n\nRegards,\nBjoern\n",
"msg_date": "Wed, 08 Jun 2005 18:41:01 +0200",
"msg_from": "Bjoern Metzdorf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "\n> Three options:\n> \n> 9500-4LP with Raptor drives 10k rpm, raid 1 + raid 1\n> 9500-8LP with Raptor drives 10k rpm, raid 10 + raid 1\n> Go for SCSI (LSI Megaraid or ICP Vortex) and take 10k drives\n\nIf you are going with Raptor drives use the LSI 150-6 SATA RAID\nwith the BBU.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> BBU option is always nice.\n> \n> Regards,\n> Bjoern\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Wed, 08 Jun 2005 09:48:12 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "On 6/8/05, Rory Campbell-Lange <[email protected]> wrote:\n> I'm tasked with specifying a new machine to run a web application\n> prototype. The machine will be serving web pages with a Postgresql\n> backend; we will be making extensive use of plpgsql functions. No\n> database tables are likely to go over a million rows during the\n> prototype period.\n...\n> 2GB (2x 1024MB) DDR-400 (PC3200) ECC Registered SDRAM (single rank)\n> 4 Port AMCC/3Ware 9500-4LP PCI SATA RAID Controller\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n\nIf your app is select heavy, especially the types of things that do\nsequential scans, you will enjoy having enough ram to easily load all\nof your tables and indexes in ram. If your database will exceed 1GB on\ndisk consider more ram than 2GB.\n\nIf your database will be write heavy choosing good controllers and\ndisks is essential. Reading through the archives you will see that\nthere are some important disk configurations you can choose for\noptimizing disk writes such as using the outer portions of the disks\nexclusively. If data integrity is not an issue, choose a controller\nthat allows caching of writes (usually IDE and cheaper SATA systems\ncache writes regardless of what you want).\n\nIf it were my application, and if I had room in the budget, I'd double\nthe RAM. I don't know anything about your application though so use\nthe guidlines above.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 8 Jun 2005 13:53:30 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "> We are considering two RAID1 system disks, and two RAID1 data disks.\n> We've avoided buying Xeons. The machine we are looking at looks like\n> this:\n> \n> Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays\n> S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots\n> 2x - (Dual) AMD Opteron 246 Processors (2.0GHz) - 1MB L2 Cache/core (single core)\n\nFor about $1500 more, you could go 2x270 (dual core 2ghz) and get a 4X \nSMP system. (My DC 2x265 system just arrived -- can't wait to start \ntesting it!!!)\n\n> 2GB (2x 1024MB) DDR-400 (PC3200) ECC Registered SDRAM (single rank)\n\nThis is a wierd configuration. For a 2x Opteron server to operate at max \nperformance, it needs 4 DIMMs minimum. Opterons use a 128-bit memory \ninterface and hence requires 2 DIMMs per CPU to run at full speed. With \nonly 2 DIMMS, you either have both CPUs run @ 64-bit (this may not even \nbe possible) or populate only 1 CPU bank -- the other CPU must then \nrequest all memory access through the other CPU which is a significant \npenalty. If you went 4x512MB, you'd limit your future update options by \nhaving less slots available to add more memory. I'd definitely out of \nthe chute get 4x1GB,\n\n> 4 Port AMCC/3Ware 9500-4LP PCI SATA RAID Controller\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 80GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n> 250GB SATA-150 7200RPM Hard Disk / 8MB Cache\n\nNow this is comes to the interesting part. We've had huge, gigantic \nthreads (check archives for the $7K server threads) about SCSI versus \nSATA in the past. 7200 SATAs just aren't fast/smart enough to cut it for \nmost production uses in regular configs. If you are set on SATA, you \nwill have to consider the following options: (1) use 10K Raptors for TCQ \ngoodness, (2) put a huge amount of memory onto the SATA RAID card -- 1GB \nminimum, (3) use a ton of SATA drives to make a RAID10 array -- 8 drives \nminimum.\n\nOr you could go SCSI. SCSI is cost prohibitive though at the larger disk \nsizes -- this is why I'm considering option #3 for my data processing \nserver.\n",
"msg_date": "Wed, 08 Jun 2005 16:33:21 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "Hi All. Thanks very much for Joshua, William, Bjoern and Matthew's\nreplies.\n\nI've now looked at the famous \"Server for 7K\" thread. In my case we are\nlooking for a server for around 3000 pounds (UK); the server is to be an\nall-purpose web and database server.\n\nProcessor:\n\nFirst of all I noted that we were intending to use Opteron processors. I\nguess this isn't a straightforward choice because I believe Debian (our\nLinux of choice) doesn't have a stable AMD64 port. However some users on\nthis list suggest that Opterons work very well even in a 32 bit\nenvironment. Some have suggested that a single dual core processor is\nthe way to go. The RAM needs to fit the CPU arrangement too; William\npoints out that one needs 2 DIMMS per CPU.\n\nDisks:\n\nI'm somewhat confused here. I've followed the various notes about SATA\nvs SCSI and it seems that SCSI is the way to go. On a four-slot 1U\nserver, would one do a single RAID10 over 4 disks 10000rpm U320 disks?\nI would run the database in its own partition, separate from the rest of\nthe OS, possible on LVM. An LSI-Megaraid-2 appears to be the card of\nchoice.\n\nThe following (without RAID card) breaks my budget by about 200 pounds:\n\n System : Armari Opteron AM-2138-A8 1U Base PCI-X (BEI)\n Case Accessories : IPMI 2.0 module for AM Series Opteron Servers\n CPU : AMD Opteron 265 - Dual Core 1.8GHz CPU (940pin)\n Memory : 2GB 400MHz DDR SDRAM (4 x 512MB (PC3200) ECC REG.s)\n Hard drive : Maxtor Atlas 10K V 147.1GB 10K U320/SCA - 8D147J0\n Additional Drives : 3 x Maxtor Atlas 10K V 147.1GB 10K U320/SCA - 8D147J0\n CD/DVD Drive : AM series Server 8x Slimline DVD-ROM\n Warranty : 3 Year Return to base Warranty (Opteron Server)\n Carriage : PC System Carriage (UK only) for 1U Server\n\nThanks for any further comments,\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n",
"msg_date": "Thu, 9 Jun 2005 17:44:20 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "On 6/9/05, Rory Campbell-Lange <[email protected]> wrote:\n> Disks:\n> \n> I'm somewhat confused here. I've followed the various notes about SATA\n> vs SCSI and it seems that SCSI is the way to go. On a four-slot 1U\n> server, would one do a single RAID10 over 4 disks 10000rpm U320 disks?\n> I would run the database in its own partition, separate from the rest of\n> the OS, possible on LVM. An LSI-Megaraid-2 appears to be the card of\n> choice.\n> \n\nCan you tell us about your application? How much data will you have,\nwhat is your ratio of reads to writes, how tollerant to data loss are\nyou? (for example, some people load their data in batches and if they\nloose their data its no big deal, others would have heart failure if a\nfew transactions were lost)\n\nIf your application is 95% writes then people will suggest drastically\ndifferent hardware than if your application is 95% selects.\n\nHere is an example of one of my servers:\napplication is 95+% selects, has 15GB of data (counting indexes), low\ntollerance for data loss, runs on a 1 GHz P3 Compaq server with\nmirrored 35 GB IDE disks and 1.6GB of RAM. Application response time\nis aproximately .1 second to serve a request on a moderately loaded\nserver.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Thu, 9 Jun 2005 12:18:15 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "On Thu, 2005-06-09 at 17:44 +0100, Rory Campbell-Lange wrote:\n> Hi All. Thanks very much for Joshua, William, Bjoern and Matthew's\n> replies.\n> \n> I've now looked at the famous \"Server for 7K\" thread. In my case we are\n> looking for a server for around 3000 pounds (UK); the server is to be an\n> all-purpose web and database server.\n> \n> Processor:\n> \n> First of all I noted that we were intending to use Opteron processors. I\n> guess this isn't a straightforward choice because I believe Debian (our\n> Linux of choice) doesn't have a stable AMD64 port.\n\nYes it does. Now sarge has become the new stable release, the amd64\nversion has also become stable. It doesn't have as many packages as the\ni386 port, but those it has will be supported by the Debian security\nteam. Look at the debian-amd64 mailing list for more information.\n\nIt only has PostgreSQL 7.4. To run 8.0, download the source packages\nfrom unstable and build them yourself. You need postgresql-8.0 and\npostgresql-common; if you also have an existing database to upgrade you\nneed postgresql and postgresql-7.4.\n\n> However some users on\n> this list suggest that Opterons work very well even in a 32 bit\n> environment. \n\nYou can treat the machine as a 32bit machine and install the i386\nversion of Debian; it will run rather slower than with 64 bit software.\n\nOliver Elphick\n\n",
"msg_date": "Thu, 09 Jun 2005 19:00:19 +0100",
"msg_from": "Oliver Elphick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "Rory Campbell-Lange wrote:\n> Processor:\n> \n> First of all I noted that we were intending to use Opteron processors. I\n> guess this isn't a straightforward choice because I believe Debian (our\n> Linux of choice) doesn't have a stable AMD64 port. However some users on\n> this list suggest that Opterons work very well even in a 32 bit\n> environment. Some have suggested that a single dual core processor is\n> the way to go. The RAM needs to fit the CPU arrangement too; William\n> points out that one needs 2 DIMMS per CPU.\n\n\nYour summary here just pointed out the obvious to me. Start with a 2P MB \nbut only populate a single DC Opteron. That'll give you 2P system with \nroom to expand to 4P in the future. Plus you only need to populate 1 \nmemory bank so you can do 2x1GB.\n\n> Disks:\n> \n> I'm somewhat confused here. I've followed the various notes about SATA\n> vs SCSI and it seems that SCSI is the way to go. On a four-slot 1U\n> server, would one do a single RAID10 over 4 disks 10000rpm U320 disks?\n> I would run the database in its own partition, separate from the rest of\n> the OS, possible on LVM. An LSI-Megaraid-2 appears to be the card of\n> choice.\n\nWith only 4 disks, a MegaRAID U320-1 is good enough. It's quite a \npremium to go to the 2x channel MegaRAID. With 4 drives, I'd still do 2 \nbig drives mirrored for the DB partition and 2 small drives for OS+WAL.\n",
"msg_date": "Thu, 09 Jun 2005 12:53:38 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "On 09/06/05, William Yu ([email protected]) wrote:\n> Rory Campbell-Lange wrote:\n\n> > ... Some have suggested that a single dual core processor is the way\n> > to go. The RAM needs to fit the CPU arrangement too; William points\n> > out that one needs 2 DIMMS per CPU.\n\n> Your summary here just pointed out the obvious to me. Start with a 2P MB \n> but only populate a single DC Opteron. That'll give you 2P system with \n> room to expand to 4P in the future. Plus you only need to populate 1 \n> memory bank so you can do 2x1GB.\n\nThat makes sense. I should by a board with support for 2 Dual-core\nOpterons, but only use one Opteron for the moment. Then I should buy\n2x1GB RAM sticks to service that processor.\n\n> > ... On a four-slot 1U server, would one do a single RAID10 over 4\n> > disks 10000rpm U320 disks? I would run the database in its own\n> > partition, separate from the rest of the OS, possible on LVM. An\n> > LSI-Megaraid-2 appears to be the card of choice.\n> \n> With only 4 disks, a MegaRAID U320-1 is good enough. It's quite a \n> premium to go to the 2x channel MegaRAID. With 4 drives, I'd still do 2 \n> big drives mirrored for the DB partition and 2 small drives for OS+WAL.\n\nShould these all RAID1? \n\nI'm a bit worried about how to partition up my system if it is strictly\ndivided between a system RAID1 disk entity and a DB disk entity, as the\nproportion of web server content (images, movies, sounds) to actual\ndatabase data is an unknown quantity at the moment. \n\nI typically keep all the database stuff in a /var logical partition and\nfor this project would expect to keep the web stuff under a /web logical\npartition. I was thinking of using LVM to be able to shift around space\non a large (4 x 147GB RAID 1 or RAID10) raided volume. I appreciate that\nthis may not be optimal for the WAL transaction log.\n\nThanks for your comments;\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n",
"msg_date": "Thu, 9 Jun 2005 22:01:07 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help specifying new web server/database machine"
},
{
"msg_contents": "On 09/06/05, Matthew Nuzum ([email protected]) wrote:\n> On 6/9/05, Rory Campbell-Lange <[email protected]> wrote:\n> > Disks:\n> > \n> > I'm somewhat confused here. I've followed the various notes about SATA\n> > vs SCSI and it seems that SCSI is the way to go. On a four-slot 1U\n> > server, would one do a single RAID10 over 4 disks 10000rpm U320 disks?\n> > I would run the database in its own partition, separate from the rest of\n> > the OS, possible on LVM. An LSI-Megaraid-2 appears to be the card of\n> > choice.\n\n> Can you tell us about your application? How much data will you have,\n> what is your ratio of reads to writes, how tollerant to data loss are\n> you? (for example, some people load their data in batches and if they\n> loose their data its no big deal, others would have heart failure if a\n> few transactions were lost)\n\nThe application is a web-based prototype system for kids to make their\nown galleries based on content found in museums and galleries. They will\nlink to content provided by curators, and be able to add in their own\nmaterial, including movies, sounds and pictures. All the content,\nhowever, will be restricted in size. I also do not intend to store the\nmovies, sounds or pictures in the database (although I have happily done\nthe latter in the past).\n\nUp to the data will be uploaded from 3G handsets. The rest will be done\non a per-user, per-pc basis through the web interface.\n\nThe service is expected to be used by about 50000 users over 18 months.\nOf these around half will be content creators, so will account for say\nhalf a million rows in the main content table and under 2 million rows\nin the commentary table. The most used table will probably be a\n'history' function required by the contract, tracking use through the\nsite. I imagine this will account for something like 20 million rows\n(with very little data in them). \n\nThe main tables will have something like 80% read, 20% write (thumb\nsuck). The history table will be read by an automated process at 3 in\nthe morning, to pick up some stats on how people are using the system.\n\nIt wouldn't be a problem to very occasionally (once a month) lose a tiny\npiece of data (i.e a record). Losing any significant amounts of data is\nentirely out of the question. \n\n> If your application is 95% writes then people will suggest drastically\n> different hardware than if your application is 95% selects.\n> \n> Here is an example of one of my servers:\n> application is 95+% selects, has 15GB of data (counting indexes), low\n> tollerance for data loss, runs on a 1 GHz P3 Compaq server with\n> mirrored 35 GB IDE disks and 1.6GB of RAM. Application response time\n> is aproximately .1 second to serve a request on a moderately loaded\n> server.\n\nYeah. Maybe the machine I'm speccing up is total overkill for this\nproject? I'm just worried that if it is a big success, or if we have 400\nkids pounding the server at once over high-speed school lines, the thing\nwill grind to a halt.\n\nThanks very much for your comments.\n\nRegards,\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n",
"msg_date": "Thu, 9 Jun 2005 22:27:28 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help specifying new web server/database machine"
}
] |
[
{
"msg_contents": "Hi,\nI have the following table:\nperson - primary key id, and some attributes\nfood - primary key id, foreign key p_id reference to table person.\n\ntable food store all the food that a person is eating. The more recent\nfood is indicated by the higher food.id.\n\nI need to find what is the most recent food a person ate for every person.\nThe query:\nselect f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\nby f.p_id will work.\nBut I understand this is not the most efficient way. Is there another\nway to rewrite this query? (maybe one that involves order by desc\nlimit 1)\n\nThank you in advance.\n",
"msg_date": "Wed, 8 Jun 2005 12:34:32 -0700",
"msg_from": "Junaili Lie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with rewriting query"
},
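For the record, PostgreSQL also has a non-standard DISTINCT ON clause that expresses 'latest food row per person' directly; a sketch using the column names from the question (untested, and like the other variants it still wants an index on (p_id, id) to be cheap):

    SELECT DISTINCT ON (f.p_id) f.p_id, f.id
      FROM food f
     ORDER BY f.p_id, f.id DESC;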
{
"msg_contents": "[Junaili Lie - Wed at 12:34:32PM -0700]\n> select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> by f.p_id will work.\n> But I understand this is not the most efficient way. Is there another\n> way to rewrite this query? (maybe one that involves order by desc\n> limit 1)\n\neventually, try something like\n\n select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n from person p\n \nnot tested, no warranties.\n\nSince subqueries can be inefficient, use \"explain analyze\" to see which one\nis actually better.\n\nThis issue will be solved in future versions of postgresql.\n\n-- \nTobias Brox, +47-91700050\nTallinn\n",
"msg_date": "Wed, 8 Jun 2005 22:56:25 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
},
{
"msg_contents": "How about\n SELECT p_id, f_id\n FROM\n person as p\n LEFT JOIN\n (SELECT f.p_id, max(f.id), f_item\n FROM food) as f\n ON p.p_id = f.p_id\n\nCreate an index on Food (p_id, seq #)\n\nThis may not gain any performance, but worth a try. I don't have any \ndata similar to this to test it on. Let us know.\n\nI assume that the food id is a sequential number across all people. \nHave you thought of a date field and a number representing what meal was \nlast eaten, i.e. 1= breakfast, 2 = mid morning snack etc. Or a date \nfield and the food id code?\n\n\n\nJunaili Lie wrote:\n\n>Hi,\n>The suggested query below took forever when I tried it.\n>In addition, as suggested by Tobias, I also tried to create index on\n>food(p_id, id), but still no goal (same query plan).\n>Here is the explain:\n>TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where\n>(f.p_id = p.id) group by p.id;\n> QUERY PLAN\n>----------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..214585.51 rows=569 width=16)\n> -> Merge Join (cost=0.00..200163.50 rows=2884117 width=16)\n> Merge Cond: (\"outer\".id = \"inner\".p_id)\n> -> Index Scan using person_pkey on person p\n>(cost=0.00..25.17 rows=569 width=8)\n> -> Index Scan using person_id_food_index on food f\n>(cost=0.00..164085.54 rows=2884117 width=16)\n>(5 rows)\n>\n>\n>\n>\n>TEST1=# explain select p.id, (Select f.id from food f where\n>f.p_id=p.id order by f.id desc limit 1) from person p;\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------\n> Seq Scan on Person p (cost=100000000.00..100007015.24 rows=569 width=8)\n> SubPlan\n> -> Limit (cost=0.00..12.31 rows=1 width=8)\n> -> Index Scan Backward using food_pkey on food f\n>(cost=0.00..111261.90 rows=9042 width=8)\n> Filter: (p_id = $0)\n>(5 rows)\n>\n>any ideas or suggestions is appreciate.\n>\n>\n>On 6/8/05, Tobias Brox <[email protected]> wrote:\n> \n>\n>>[Junaili Lie - Wed at 12:34:32PM -0700]\n>> \n>>\n>>>select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n>>>by f.p_id will work.\n>>>But I understand this is not the most efficient way. Is there another\n>>>way to rewrite this query? (maybe one that involves order by desc\n>>>limit 1)\n>>> \n>>>\n>>eventually, try something like\n>>\n>> select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n>> from person p\n>>\n>>not tested, no warranties.\n>>\n>>Since subqueries can be inefficient, use \"explain analyze\" to see which one\n>>is actually better.\n>>\n>>This issue will be solved in future versions of postgresql.\n>>\n>>--\n>>Tobias Brox, +47-91700050\n>>Tallinn\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n>\n> \n>\n\n",
"msg_date": "Wed, 08 Jun 2005 20:48:05 +0000",
"msg_from": "Jim Johannsen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
},
{
"msg_contents": "Hi,\nThe suggested query below took forever when I tried it.\nIn addition, as suggested by Tobias, I also tried to create index on\nfood(p_id, id), but still no goal (same query plan).\nHere is the explain:\nTEST1=# explain select f.p_id, max(f.id) from Food f, Person p where\n(f.p_id = p.id) group by p.id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..214585.51 rows=569 width=16)\n -> Merge Join (cost=0.00..200163.50 rows=2884117 width=16)\n Merge Cond: (\"outer\".id = \"inner\".p_id)\n -> Index Scan using person_pkey on person p\n(cost=0.00..25.17 rows=569 width=8)\n -> Index Scan using person_id_food_index on food f\n(cost=0.00..164085.54 rows=2884117 width=16)\n(5 rows)\n\n\n\n\nTEST1=# explain select p.id, (Select f.id from food f where\nf.p_id=p.id order by f.id desc limit 1) from person p;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Seq Scan on Person p (cost=100000000.00..100007015.24 rows=569 width=8)\n SubPlan\n -> Limit (cost=0.00..12.31 rows=1 width=8)\n -> Index Scan Backward using food_pkey on food f\n(cost=0.00..111261.90 rows=9042 width=8)\n Filter: (p_id = $0)\n(5 rows)\n\nany ideas or suggestions is appreciate.\n\n\nOn 6/8/05, Tobias Brox <[email protected]> wrote:\n> [Junaili Lie - Wed at 12:34:32PM -0700]\n> > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> > by f.p_id will work.\n> > But I understand this is not the most efficient way. Is there another\n> > way to rewrite this query? (maybe one that involves order by desc\n> > limit 1)\n> \n> eventually, try something like\n> \n> select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n> from person p\n> \n> not tested, no warranties.\n> \n> Since subqueries can be inefficient, use \"explain analyze\" to see which one\n> is actually better.\n> \n> This issue will be solved in future versions of postgresql.\n> \n> --\n> Tobias Brox, +47-91700050\n> Tallinn\n>\n",
"msg_date": "Wed, 8 Jun 2005 15:48:27 -0700",
"msg_from": "Junaili Lie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with rewriting query"
},
{
"msg_contents": "On Wed, Jun 08, 2005 at 15:48:27 -0700,\n Junaili Lie <[email protected]> wrote:\n> Hi,\n> The suggested query below took forever when I tried it.\n> In addition, as suggested by Tobias, I also tried to create index on\n> food(p_id, id), but still no goal (same query plan).\n> Here is the explain:\n> TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where\n> (f.p_id = p.id) group by p.id;\n\nThe above is going to require reading all the food table (assuming no\norphaned records), so the plan below seems reasonable.\n\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..214585.51 rows=569 width=16)\n> -> Merge Join (cost=0.00..200163.50 rows=2884117 width=16)\n> Merge Cond: (\"outer\".id = \"inner\".p_id)\n> -> Index Scan using person_pkey on person p\n> (cost=0.00..25.17 rows=569 width=8)\n> -> Index Scan using person_id_food_index on food f\n> (cost=0.00..164085.54 rows=2884117 width=16)\n> (5 rows)\n> \n> \n> \n> \n> TEST1=# explain select p.id, (Select f.id from food f where\n> f.p_id=p.id order by f.id desc limit 1) from person p;\n\nUsing a subselect seems to be the best hope of getting better performance.\nI think you almost got it right, but in order to use the index on\n(p_id, id) you need to order by f.p_id desc, f.id desc. Postgres won't\ndeduce this index can be used because f.p_id is constant in the subselect,\nyou need to give it some help.\n\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------\n> Seq Scan on Person p (cost=100000000.00..100007015.24 rows=569 width=8)\n> SubPlan\n> -> Limit (cost=0.00..12.31 rows=1 width=8)\n> -> Index Scan Backward using food_pkey on food f\n> (cost=0.00..111261.90 rows=9042 width=8)\n> Filter: (p_id = $0)\n> (5 rows)\n> \n> any ideas or suggestions is appreciate.\n> \n> \n> On 6/8/05, Tobias Brox <[email protected]> wrote:\n> > [Junaili Lie - Wed at 12:34:32PM -0700]\n> > > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> > > by f.p_id will work.\n> > > But I understand this is not the most efficient way. Is there another\n> > > way to rewrite this query? (maybe one that involves order by desc\n> > > limit 1)\n> > \n> > eventually, try something like\n> > \n> > select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n> > from person p\n> > \n> > not tested, no warranties.\n> > \n> > Since subqueries can be inefficient, use \"explain analyze\" to see which one\n> > is actually better.\n> > \n> > This issue will be solved in future versions of postgresql.\n> > \n> > --\n> > Tobias Brox, +47-91700050\n> > Tallinn\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n",
"msg_date": "Wed, 8 Jun 2005 21:59:07 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
},
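Spelled out, the rewrite Bruno is describing (order by both columns so the (p_id, id) index can satisfy the subselect) would look roughly like this; a sketch only, column names as used in the thread:

    SELECT p.id,
           (SELECT f.id
              FROM food f
             WHERE f.p_id = p.id
             ORDER BY f.p_id DESC, f.id DESC
             LIMIT 1) AS latest_food_id
      FROM person p;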
{
"msg_contents": "Hi Bruno,\nI followed your suggestion.\nThe query plan shows that it uses the index (id, person_id). However,\nthe execution time is still slow. I have to do ctl-C to stop it.\nMaybe something is wrong with my postgresql config.\nIt's running Solaris on dual Opteron, 4GB.\nI allocated around 128MB for sorting and more than 80% for\neffective_cache_size and shared_buffers = 32768.\nAny further ideas is much appreciated.\n\n\n\n\nOn 6/8/05, Bruno Wolff III <[email protected]> wrote:\n> On Wed, Jun 08, 2005 at 15:48:27 -0700,\n> Junaili Lie <[email protected]> wrote:\n> > Hi,\n> > The suggested query below took forever when I tried it.\n> > In addition, as suggested by Tobias, I also tried to create index on\n> > food(p_id, id), but still no goal (same query plan).\n> > Here is the explain:\n> > TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where\n> > (f.p_id = p.id) group by p.id;\n> \n> The above is going to require reading all the food table (assuming no\n> orphaned records), so the plan below seems reasonable.\n> \n> > QUERY PLAN\n> > ----------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=0.00..214585.51 rows=569 width=16)\n> > -> Merge Join (cost=0.00..200163.50 rows=2884117 width=16)\n> > Merge Cond: (\"outer\".id = \"inner\".p_id)\n> > -> Index Scan using person_pkey on person p\n> > (cost=0.00..25.17 rows=569 width=8)\n> > -> Index Scan using person_id_food_index on food f\n> > (cost=0.00..164085.54 rows=2884117 width=16)\n> > (5 rows)\n> >\n> >\n> >\n> >\n> > TEST1=# explain select p.id, (Select f.id from food f where\n> > f.p_id=p.id order by f.id desc limit 1) from person p;\n> \n> Using a subselect seems to be the best hope of getting better performance.\n> I think you almost got it right, but in order to use the index on\n> (p_id, id) you need to order by f.p_id desc, f.id desc. Postgres won't\n> deduce this index can be used because f.p_id is constant in the subselect,\n> you need to give it some help.\n> \n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------------------\n> > Seq Scan on Person p (cost=100000000.00..100007015.24 rows=569 width=8)\n> > SubPlan\n> > -> Limit (cost=0.00..12.31 rows=1 width=8)\n> > -> Index Scan Backward using food_pkey on food f\n> > (cost=0.00..111261.90 rows=9042 width=8)\n> > Filter: (p_id = $0)\n> > (5 rows)\n> >\n> > any ideas or suggestions is appreciate.\n> >\n> >\n> > On 6/8/05, Tobias Brox <[email protected]> wrote:\n> > > [Junaili Lie - Wed at 12:34:32PM -0700]\n> > > > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> > > > by f.p_id will work.\n> > > > But I understand this is not the most efficient way. Is there another\n> > > > way to rewrite this query? 
(maybe one that involves order by desc\n> > > > limit 1)\n> > >\n> > > eventually, try something like\n> > >\n> > > select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n> > > from person p\n> > >\n> > > not tested, no warranties.\n> > >\n> > > Since subqueries can be inefficient, use \"explain analyze\" to see which one\n> > > is actually better.\n> > >\n> > > This issue will be solved in future versions of postgresql.\n> > >\n> > > --\n> > > Tobias Brox, +47-91700050\n> > > Tallinn\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n>\n",
"msg_date": "Thu, 9 Jun 2005 18:26:09 -0700",
"msg_from": "Junaili Lie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with rewriting query"
},
{
"msg_contents": "[Junaili Lie - Thu at 06:26:09PM -0700]\n> Hi Bruno,\n> I followed your suggestion.\n> The query plan shows that it uses the index (id, person_id). However,\n> the execution time is still slow. I have to do ctl-C to stop it.\n\nWhat is the estimate planner cost?\n\n> Maybe something is wrong with my postgresql config.\n> It's running Solaris on dual Opteron, 4GB.\n> I allocated around 128MB for sorting and more than 80% for\n> effective_cache_size and shared_buffers = 32768.\n> Any further ideas is much appreciated.\n\nSounds a bit excessive. Compare with the vanilla configuration, and see\nwhat is faster.\n\n-- \nTobias Brox, +47-91700050\n",
"msg_date": "Fri, 10 Jun 2005 11:49:25 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
},
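Tobias's question about the estimated planner cost can be answered without waiting for the slow query to finish: plain EXPLAIN (without ANALYZE) prints only the estimates and does not execute the statement. For example, against the subselect variant above:

    EXPLAIN
    SELECT p.id,
           (SELECT f.id FROM food f
             WHERE f.p_id = p.id
             ORDER BY f.p_id DESC, f.id DESC
             LIMIT 1)
      FROM person p;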
{
"msg_contents": "On Thu, Jun 09, 2005 at 18:26:09 -0700,\n Junaili Lie <[email protected]> wrote:\n> Hi Bruno,\n> I followed your suggestion.\n> The query plan shows that it uses the index (id, person_id). However,\n> the execution time is still slow. I have to do ctl-C to stop it.\n> Maybe something is wrong with my postgresql config.\n> It's running Solaris on dual Opteron, 4GB.\n> I allocated around 128MB for sorting and more than 80% for\n> effective_cache_size and shared_buffers = 32768.\n> Any further ideas is much appreciated.\n\nIt might be useful to see that plan and the actual query you used. There were\nonly 569 entries in the people table, so I find it hard to believe that an\nindex look up per person is taking so long that you need to cancel the query.\n\n> \n> \n> \n> \n> On 6/8/05, Bruno Wolff III <[email protected]> wrote:\n> > On Wed, Jun 08, 2005 at 15:48:27 -0700,\n> > Junaili Lie <[email protected]> wrote:\n> > > Hi,\n> > > The suggested query below took forever when I tried it.\n> > > In addition, as suggested by Tobias, I also tried to create index on\n> > > food(p_id, id), but still no goal (same query plan).\n> > > Here is the explain:\n> > > TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where\n> > > (f.p_id = p.id) group by p.id;\n> > \n> > The above is going to require reading all the food table (assuming no\n> > orphaned records), so the plan below seems reasonable.\n> > \n> > > QUERY PLAN\n> > > ----------------------------------------------------------------------------------------------------------------\n> > > GroupAggregate (cost=0.00..214585.51 rows=569 width=16)\n> > > -> Merge Join (cost=0.00..200163.50 rows=2884117 width=16)\n> > > Merge Cond: (\"outer\".id = \"inner\".p_id)\n> > > -> Index Scan using person_pkey on person p\n> > > (cost=0.00..25.17 rows=569 width=8)\n> > > -> Index Scan using person_id_food_index on food f\n> > > (cost=0.00..164085.54 rows=2884117 width=16)\n> > > (5 rows)\n> > >\n> > >\n> > >\n> > >\n> > > TEST1=# explain select p.id, (Select f.id from food f where\n> > > f.p_id=p.id order by f.id desc limit 1) from person p;\n> > \n> > Using a subselect seems to be the best hope of getting better performance.\n> > I think you almost got it right, but in order to use the index on\n> > (p_id, id) you need to order by f.p_id desc, f.id desc. Postgres won't\n> > deduce this index can be used because f.p_id is constant in the subselect,\n> > you need to give it some help.\n> > \n> > > QUERY PLAN\n> > > -----------------------------------------------------------------------------------------------------------\n> > > Seq Scan on Person p (cost=100000000.00..100007015.24 rows=569 width=8)\n> > > SubPlan\n> > > -> Limit (cost=0.00..12.31 rows=1 width=8)\n> > > -> Index Scan Backward using food_pkey on food f\n> > > (cost=0.00..111261.90 rows=9042 width=8)\n> > > Filter: (p_id = $0)\n> > > (5 rows)\n> > >\n> > > any ideas or suggestions is appreciate.\n> > >\n> > >\n> > > On 6/8/05, Tobias Brox <[email protected]> wrote:\n> > > > [Junaili Lie - Wed at 12:34:32PM -0700]\n> > > > > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> > > > > by f.p_id will work.\n> > > > > But I understand this is not the most efficient way. Is there another\n> > > > > way to rewrite this query? 
(maybe one that involves order by desc\n> > > > > limit 1)\n> > > >\n> > > > eventually, try something like\n> > > >\n> > > > select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc limit 1)\n> > > > from person p\n> > > >\n> > > > not tested, no warranties.\n> > > >\n> > > > Since subqueries can be inefficient, use \"explain analyze\" to see which one\n> > > > is actually better.\n> > > >\n> > > > This issue will be solved in future versions of postgresql.\n> > > >\n> > > > --\n> > > > Tobias Brox, +47-91700050\n> > > > Tallinn\n> > > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > > joining column's datatypes do not match\n> >\n",
"msg_date": "Fri, 10 Jun 2005 07:06:38 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
}
] |
[
{
"msg_contents": "We have had four databases serving our web site, but due to licensing\nissues, we have had to take two out of production, and we are looking to\nbring those two onto PostgreSQL very quickly, with an eye toward moving\neverything in the longer term. The central web DBs are all copies of\nthe same data, drawn from 72 servers at remote locations. We replicate\nmodifications made at these 72 remote sites real-time to all central\nservers. \n \nOn each central server, there are 352 tables and 412 indexes holding\nabout 700 million rows, taking almost 200 GB of disk space. The largest\ntable has about 125 million of those rows, with several indexes. There\nare about 3 million database transactions modifying each central\ndatabase every day, with each transaction typically containing many\ninserts and/or updates -- deletes are sparse. During idle time the\nreplication process compares tables in the source databases to the\ncentral databases to log any differences and correct the central copies.\n To support the 2 million browser and SOAP hits per day, the web sites\nspread about 6 million SELECT statements across available central\nservers, using load balancing. Many of these queries involve a 10 or\nmore tables with many subqueries; some involve unions. \n \nThe manager of the DBA team is reluctant to change both the OS and the\nDBMS at the same time, so unless I can make a strong case for why it is\nimportant to run postgresql under Linux, we will be running this on\nWindows. Currently, there are two Java-based middle tier processes\nrunning on each central database server, one for the replication and one\nfor the web. We expect to keep it that way, so the database needs to\nplay well with these processes. \n \nI've been reading everything I can find on postgresql configuration, but\nwould welcome any specific suggestions for this environment. I'd also\nbe really happy to hear that we're not the first to use postgresql with\nthis much data and load. \n \nThanks for any info you can provide. \n \n-Kevin \n \n \n\n\n\n\n\n\n\n\n We have had four databases serving our web site, but due to licensing issues, we have had to take two out of production, and we are looking to bring those two onto PostgreSQL very quickly, with an eye toward moving everything in the longer term. The central web DBs are all copies of the same data, drawn from 72 servers at remote locations. We replicate modifications made at these 72 remote sites real-time to all central servers.\n \n\n \n \n\n On each central server, there are 352 tables and 412 indexes holding about 700 million rows, taking almost 200 GB of disk space. The largest table has about 125 million of those rows, with several indexes. There are about 3 million database transactions modifying each central database every day, with each transaction typically containing many inserts and/or updates -- deletes are sparse. During idle time the replication process compares tables in the source databases to the central databases to log any differences and correct the central copies. To support the 2 million browser and SOAP hits per day, the web sites spread about 6 million SELECT statements across available central servers, using load balancing. 
Many of these queries involve a 10 or more tables with many subqueries; some involve unions.\n \n\n \n \n\n The manager of the DBA team is reluctant to change both the OS and the DBMS at the same time, so unless I can make a strong case for why it is important to run postgresql under Linux, we will be running this on Windows. Currently, there are two Java-based middle tier processes running on each central database server, one for the replication and one for the web. We expect to keep it that way, so the database needs to play well with these processes.\n \n\n \n \n\n I've been reading everything I can find on postgresql configuration, but would welcome any specific suggestions for this environment. I'd also be really happy to hear that we're not the first to use postgresql with this much data and load.\n \n\n \n \n\n Thanks for any info you can provide.\n \n\n \n \n\n -Kevin",
"msg_date": "Wed, 08 Jun 2005 16:04:36 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommendations for configuring a 200 GB database"
},
{
"msg_contents": "Kevin Grittner wrote:\n> \n> The manager of the DBA team is reluctant to change both the OS and the\n> DBMS at the same time, so unless I can make a strong case for why it is\n> important to run postgresql under Linux, we will be running this on\n> Windows. Currently, there are two Java-based middle tier processes\n> running on each central database server, one for the replication and one\n> for the web. We expect to keep it that way, so the database needs to\n> play well with these processes. \n\nWell, there's a lot more experience running PG on various *nix systems \nand a lot more help available. Also, I don't think performance on \nWindows is as good as on Linux/*BSD yet.\n\nAgainst switching OS is the fact that you presumably don't have the \nskills in-house for it, and the hardware was chosen for Windows \ncompatibility/performance.\n\nSpeaking of which, what sort of hardware are we talking about?\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 09 Jun 2005 09:06:44 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommendations for configuring a 200 GB database"
}
] |
[
{
"msg_contents": "As a follow up to this ive installed on another test Rehat 8 machine\nwith\n7.3.4 and slow inserts are present, however on another machine with ES3\nthe same 15,000 inserts is about 20 times faster, anyone know of a\nchange\nthat would effect this, kernel or rehat release ?\n\nSteve\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Steve\nPollard\nSent: Wednesday, 8 June 2005 6:39 PM\nTo: [email protected]\nSubject: [PERFORM] Importing from pg_dump slow, low Disk IO\n\n\nHi Everyone,\n\nIm having a performance issue with version 7.3.4 which i first thought\nwas Disk IO related, however now it seems like the problem is caused by\nreally slow commits, this is running on Redhat 8.\n\nBasically im taking a .sql file with insert of about 15,000 lines and\n<'ing straight into psql DATABASENAME, the Disk writes never gets over\nabout 2000 on this machine with a RAID5 SCSI setup, this happens in my\nPROD and DEV environment.\n\nIve installed the latest version on RedHat ES3 and copied the configs\nacross however the inserts are really really fast..\n\nWas there a performce change from 7.3.4 to current to turn of\nautocommits by default or is buffering handled differently ?\n\nI have ruled out Disk IO issues as a siple 'cp' exceeds Disk writes to\n60000 (using vmstat)\n\nIf i do this with a BEGIN; and COMMIT; its really fast, however not\npractical as im setting up a cold-standby server for automation.\n\nHave been trying to debug for a few days now and see nothing.. here is\nsome info :\n\n::::::::::::::\n/proc/sys/kernel/shmall\n::::::::::::::\n2097152\n::::::::::::::\n/proc/sys/kernel/shmmax\n::::::::::::::\n134217728\n::::::::::::::\n/proc/sys/kernel/shmmni\n::::::::::::::\n4096\n\n\nshared_buffers = 51200\nmax_fsm_relations = 1000\nmax_fsm_pages = 10000\nmax_locks_per_transaction = 64\nwal_buffers = 64\neffective_cache_size = 65536\n\nMemTotal: 1547608 kB\nMemFree: 47076 kB\nMemShared: 0 kB\nBuffers: 134084 kB\nCached: 1186596 kB\nSwapCached: 544 kB\nActive: 357048 kB\nActiveAnon: 105832 kB\nActiveCache: 251216 kB\nInact_dirty: 321020 kB\nInact_laundry: 719492 kB\nInact_clean: 28956 kB\nInact_target: 285300 kB\nHighTotal: 655336 kB\nHighFree: 1024 kB\nLowTotal: 892272 kB\nLowFree: 46052 kB\nSwapTotal: 1534056 kB\nSwapFree: 1526460 kB\n\nThis is a real doosey for me, please provide any advise possible.\n\nSteve\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n",
"msg_date": "Thu, 9 Jun 2005 13:26:57 +0930",
"msg_from": "\"Steve Pollard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Importing from pg_dump slow, low Disk IO"
}
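As the quoted message notes, the dramatic difference comes from committing each INSERT separately versus once for the whole batch. One way to get the single-transaction behaviour without editing the generated file is to wrap the \i from psql; a sketch, with a hypothetical file name, assuming the dump file itself contains no BEGIN/COMMIT or \connect commands:

    BEGIN;
    \i /tmp/15000_inserts.sql
    COMMIT;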
] |
[
{
"msg_contents": "This is a pattern which I've seen many of times. I call it a \"best\nchoice\" query -- you can easily match a row from one table against any\nof a number of rows in another, the trick is to pick the one that\nmatters most. I've generally found that I want the query results to\nshow more than the columns used for making the choice (and there can be\nmany), which rules out the min/max technique. What works in a pretty\nstraitforward way, and generally optimizes at least as well as the\nalternatives, is to join to the set of candidate rows and add a \"not\nexists\" test to eliminate all but the best choice.\n \nFor your example, I've taken some liberties and added hypothetical\ncolumns from both tables to the result set, to demonstrate how that\nworks. Feel free to drop them or substitute actual columns as you see\nfit. This will work best if there is an index for the food table on\np_id and id. Please let me know whether this works for you.\n \nselect p.id as p_id, p.fullname, f.id, f.foodtype, f.ts\nfrom food f join person p\non f.p_id = p.id\nand not exists (select * from food f2 where f2.p_id = f.p_id and f2.id >\nf.id)\norder by p_id\n \nNote that this construct works for inner or outer joins and works\nregardless of how complex the logic for picking the best choice is. I\nthink one reason this tends to optimize well is that an EXISTS test can\nfinish as soon as it finds one matching row.\n \n-Kevin\n \n \n>>> Junaili Lie <[email protected]> 06/08/05 2:34 PM >>>\nHi,\nI have the following table:\nperson - primary key id, and some attributes\nfood - primary key id, foreign key p_id reference to table person.\n\ntable food store all the food that a person is eating. The more recent\nfood is indicated by the higher food.id.\n\nI need to find what is the most recent food a person ate for every\nperson.\nThe query:\nselect f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\nby f.p_id will work.\nBut I understand this is not the most efficient way. Is there another\nway to rewrite this query? (maybe one that involves order by desc\nlimit 1)\n\nThank you in advance.\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if\nyour\n joining column's datatypes do not match\n\n",
"msg_date": "Wed, 08 Jun 2005 23:01:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with rewriting query"
},
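The index Kevin mentions ('an index for the food table on p_id and id') would be created along these lines; the index name is made up:

    CREATE INDEX food_p_id_id_idx ON food (p_id, id);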
{
"msg_contents": "Hi Kevin,\nThanks for the reply.\nI tried that query. It definately faster, but not fast enough (took\naround 50 second to complete).\nI have around 2.5 million on food and 1000 on person.\nHere is the query plan:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..11662257.52 rows=1441579 width=16)\n Merge Cond: (\"outer\".id = \"inner\".p_id)\n -> Index Scan using person_pkey on person p (cost=0.00..25.17\nrows=569 width=8)\n -> Index Scan using p_id_food_index on food f \n(cost=0.00..11644211.28 rows=1441579 width=16)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using p_id_food_index on food f2 \n(cost=0.00..11288.47 rows=2835 width=177)\n Index Cond: (p_id = $0)\n Filter: (id > $1)\n(9 rows)\n\nI appreciate if you have further ideas to troubleshoot this issue.\nThank you!\n\nOn 6/8/05, Kevin Grittner <[email protected]> wrote:\n> This is a pattern which I've seen many of times. I call it a \"best\n> choice\" query -- you can easily match a row from one table against any\n> of a number of rows in another, the trick is to pick the one that\n> matters most. I've generally found that I want the query results to\n> show more than the columns used for making the choice (and there can be\n> many), which rules out the min/max technique. What works in a pretty\n> straitforward way, and generally optimizes at least as well as the\n> alternatives, is to join to the set of candidate rows and add a \"not\n> exists\" test to eliminate all but the best choice.\n> \n> For your example, I've taken some liberties and added hypothetical\n> columns from both tables to the result set, to demonstrate how that\n> works. Feel free to drop them or substitute actual columns as you see\n> fit. This will work best if there is an index for the food table on\n> p_id and id. Please let me know whether this works for you.\n> \n> select p.id as p_id, p.fullname, f.id, f.foodtype, f.ts\n> from food f join person p\n> on f.p_id = p.id\n> and not exists (select * from food f2 where f2.p_id = f.p_id and f2.id >\n> f.id)\n> order by p_id\n> \n> Note that this construct works for inner or outer joins and works\n> regardless of how complex the logic for picking the best choice is. I\n> think one reason this tends to optimize well is that an EXISTS test can\n> finish as soon as it finds one matching row.\n> \n> -Kevin\n> \n> \n> >>> Junaili Lie <[email protected]> 06/08/05 2:34 PM >>>\n> Hi,\n> I have the following table:\n> person - primary key id, and some attributes\n> food - primary key id, foreign key p_id reference to table person.\n> \n> table food store all the food that a person is eating. The more recent\n> food is indicated by the higher food.id.\n> \n> I need to find what is the most recent food a person ate for every\n> person.\n> The query:\n> select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group\n> by f.p_id will work.\n> But I understand this is not the most efficient way. Is there another\n> way to rewrite this query? (maybe one that involves order by desc\n> limit 1)\n> \n> Thank you in advance.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\n> your\n> joining column's datatypes do not match\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n>\n",
"msg_date": "Thu, 9 Jun 2005 18:30:37 -0700",
"msg_from": "Junaili Lie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with rewriting query"
}
] |
[
{
"msg_contents": "Dear Group!\n Thank you for all the support you all have been \nproviding from time to time. I have a small question: How do I find the \nactual size of the Database? Awaiting you replies,\n\nShan.\n",
"msg_date": "Thu, 09 Jun 2005 10:10:42 +0530",
"msg_from": "Shanmugasundaram Doraisamy <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to find the size of a database - reg."
},
{
"msg_contents": "contrib/dbsize in the postgresql distribution.\n\nShanmugasundaram Doraisamy wrote:\n> Dear Group!\n> Thank you for all the support you all have been \n> providing from time to time. I have a small question: How do I find the \n> actual size of the Database? Awaiting you replies,\n> \n> Shan.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Thu, 09 Jun 2005 13:01:35 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to find the size of a database - reg."
}
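Besides contrib/dbsize (and simply running du against the data directory), a rough per-relation breakdown is available from the catalogs; a sketch that assumes the default 8 kB block size and a reasonably recent VACUUM/ANALYZE so that relpages is current:

    SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
      FROM pg_class
     ORDER BY relpages DESC
     LIMIT 10;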
] |
[
{
"msg_contents": "Greetings all,\nI am continously encountering an issue with query plans that changes after \na pg_dump / pg_restore operation has been performed.\nOn the production database, PostGre refuses to use the defined indexes in \nseveral queries however once the database has been dumped and restored \neither on another server or on the same database server it suddenly \n\"magically\" changes the query plan to utilize the indexes thereby cutting \nthe query cost down to 10% of the original.\nDatabases are running on the same PostGre v7.3.9 on RH Enterprise 3.1 \nserver.\n\nA VACUUM FULL runs regularly once a day and VACUUM ANALYZE every other \nhour.\nThe data in the tables affected by this query doesn't change very often\nEven doing a manual VACUUM FULL, VACUUM ANALYZE or REINDEX before the \nquery is run on the production database changes nothing.\nHave tried to drop the indexes completely and re-create them as well, all \nto no avail.\n\nIf the queries are run with SET ENABLE_SEQSCAN TO OFF, the live database \nuses the correct indexes as expected.\n\nHave placed an export of the query, query plan etc. online at: \nhttp://213.173.234.215:8080/plan.htm in order to ensure it's still \nreadable.\nFor the plans, the key tables are marked with bold.\n\nAny insight into why PostGre behaves this way as well as a possible \nsolution (other than performing a pg_dump / pg_restore on the live \ndatabase) would be very much appreciated?\n\nCheers\nJona\n",
"msg_date": "Thu, 9 Jun 2005 02:02:02 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "On Thu, 9 Jun 2005 [email protected] wrote:\n\n> I am continously encountering an issue with query plans that changes after \n> a pg_dump / pg_restore operation has been performed.\n> \n> Have placed an export of the query, query plan etc. online at: \n> http://213.173.234.215:8080/plan.htm in order to ensure it's still \n> readable.\n\nThere is not a major difference in time, so pg is at least not way off \n(225ms vs. 280ms). The estimated cost is however not very related to the \nruntime (117 vs 1389).\n\nWhat you have not showed is if the database is properly tuned. The output\nof SHOW ALL; could help explain a lot together with info of how much\nmemory your computer have.\n\nThe first thing that comes to mind to me is that you probably have not \ntuned shared_buffers and effective_cache_size properly (SHOW ALL would \ntell).\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Thu, 9 Jun 2005 09:33:07 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
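Rather than posting the whole SHOW ALL output, the settings Dennis singles out (plus the sort memory) can be checked individually; note the sort setting is called sort_mem before 8.0 and work_mem from 8.0 onwards:

    SHOW shared_buffers;
    SHOW effective_cache_size;
    SHOW sort_mem;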
{
"msg_contents": "Thank you for the swift reply, the following is the output of the SHOW \nALL for shared_buffers and effective_cache_size.\nshared_buffers: 13384\neffective_cache_size: 4000\nserver memory: 2GB\n\nPlease note, the databases are on the same server, it's merely 2 \ninstances of the same database in order to figure out why there's a \ndifference in the query plan before and after a dump / restore.\n\nWhat worries me is that the plan is different, in the bad plan it makes \na seq scan of a table with 6.5k recods in (fairly silly) and another of \na table with 50k records in (plan stupid).\nIn the good plan it uses the indexes available as expected.\n\nThe estimated cost is obviously way off in the live database, even \nthough statistics etc should be up to date. Any insight into this?\n\nAppreciate the help here...\n\nCheers\nJona\n\nDennis Bjorklund wrote:\n\n>On Thu, 9 Jun 2005 [email protected] wrote:\n>\n> \n>\n>>I am continously encountering an issue with query plans that changes after \n>>a pg_dump / pg_restore operation has been performed.\n>>\n>>Have placed an export of the query, query plan etc. online at: \n>>http://213.173.234.215:8080/plan.htm in order to ensure it's still \n>>readable.\n>> \n>>\n>\n>There is not a major difference in time, so pg is at least not way off \n>(225ms vs. 280ms). The estimated cost is however not very related to the \n>runtime (117 vs 1389).\n>\n>What you have not showed is if the database is properly tuned. The output\n>of SHOW ALL; could help explain a lot together with info of how much\n>memory your computer have.\n>\n>The first thing that comes to mind to me is that you probably have not \n>tuned shared_buffers and effective_cache_size properly (SHOW ALL would \n>tell).\n>\n> \n>\n\n\n\n\n\n\n\nThank you for the swift reply, the following is the output of the SHOW\nALL for shared_buffers and effective_cache_size.\nshared_buffers: 13384\neffective_cache_size: 4000\nserver memory: 2GB\n\nPlease note, the databases are on the same server, it's merely 2\ninstances of the same database in order to figure out why there's a\ndifference in the query plan before and after a dump / restore.\n\nWhat worries me is that the plan is different, in the bad plan it makes\na seq scan of a table with 6.5k recods in (fairly silly) and another of\na table with 50k records in (plan stupid).\nIn the good plan it uses the indexes available as expected.\n\nThe estimated cost is obviously way off in the live database, even\nthough statistics etc should be up to date. Any insight into this?\n\nAppreciate the help here...\n\nCheers\nJona\n\nDennis Bjorklund wrote:\n\nOn Thu, 9 Jun 2005 [email protected] wrote:\n\n \n\nI am continously encountering an issue with query plans that changes after \na pg_dump / pg_restore operation has been performed.\n\nHave placed an export of the query, query plan etc. online at: \nhttp://213.173.234.215:8080/plan.htm in order to ensure it's still \nreadable.\n \n\n\nThere is not a major difference in time, so pg is at least not way off \n(225ms vs. 280ms). The estimated cost is however not very related to the \nruntime (117 vs 1389).\n\nWhat you have not showed is if the database is properly tuned. The output\nof SHOW ALL; could help explain a lot together with info of how much\nmemory your computer have.\n\nThe first thing that comes to mind to me is that you probably have not \ntuned shared_buffers and effective_cache_size properly (SHOW ALL would \ntell).",
"msg_date": "Thu, 09 Jun 2005 10:12:09 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "> Thank you for the swift reply, the following is the output of the SHOW \n> ALL for shared_buffers and effective_cache_size.\n> shared_buffers: 13384\n> effective_cache_size: 4000\n> server memory: 2GB\n\neffective_cache_size should be 10-100x larger perhaps...\n\nChris\n\n",
"msg_date": "Thu, 09 Jun 2005 16:25:54 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
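For scale: effective_cache_size is counted in 8 kB pages in this release, so the 4000 shown is only about 32 MB on a machine with 2 GB of RAM. Something in the region of 1 GB of assumed OS cache could be tried per session before touching postgresql.conf; the figure below is only illustrative:

    SET effective_cache_size = 150000;   -- roughly 1.2 GB at 8 kB pages
    -- then re-run EXPLAIN ANALYZE on the problem query in the same session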
{
"msg_contents": "Thanks... have notified our sys admin of that so he can make the correct \nchanges.\n\nIt still doesn't explain the difference in query plans though?\n\nI mean, it's the same database server the two instances of the same \ndatabase is running on.\nOne instance (the live) just insists on doing the seq scan of the 50k \nrecords in Price_Tbl and the 6.5k records in SCT2SubCatType_Tbl.\nSeems weird....\n\nCheers\nJona\n\nChristopher Kings-Lynne wrote:\n\n>> Thank you for the swift reply, the following is the output of the \n>> SHOW ALL for shared_buffers and effective_cache_size.\n>> shared_buffers: 13384\n>> effective_cache_size: 4000\n>> server memory: 2GB\n>\n>\n> effective_cache_size should be 10-100x larger perhaps...\n>\n> Chris\n\n\n",
"msg_date": "Thu, 09 Jun 2005 10:54:32 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "Is effective_cache_size set the same on the test and live?\n\nJona wrote:\n> Thanks... have notified our sys admin of that so he can make the correct \n> changes.\n> \n> It still doesn't explain the difference in query plans though?\n> \n> I mean, it's the same database server the two instances of the same \n> database is running on.\n> One instance (the live) just insists on doing the seq scan of the 50k \n> records in Price_Tbl and the 6.5k records in SCT2SubCatType_Tbl.\n> Seems weird....\n> \n> Cheers\n> Jona\n> \n> Christopher Kings-Lynne wrote:\n> \n>>> Thank you for the swift reply, the following is the output of the \n>>> SHOW ALL for shared_buffers and effective_cache_size.\n>>> shared_buffers: 13384\n>>> effective_cache_size: 4000\n>>> server memory: 2GB\n>>\n>>\n>>\n>> effective_cache_size should be 10-100x larger perhaps...\n>>\n>> Chris\n> \n> \n\n",
"msg_date": "Thu, 09 Jun 2005 16:54:58 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "It's the same (physical) server as well as the same PostGreSQL daemon, \nso yes.\n\nThe only difference is the actual database, the test database is made \nfrom a backup of the live database and restored onto the same PostGreSQL \nserver.\nSo if I run \"show databases\" in psql i get:\n- test\n- live\n\nMakes sense??\n\n/Jona\n\nChristopher Kings-Lynne wrote:\n\n> Is effective_cache_size set the same on the test and live?\n>\n> Jona wrote:\n>\n>> Thanks... have notified our sys admin of that so he can make the \n>> correct changes.\n>>\n>> It still doesn't explain the difference in query plans though?\n>>\n>> I mean, it's the same database server the two instances of the same \n>> database is running on.\n>> One instance (the live) just insists on doing the seq scan of the 50k \n>> records in Price_Tbl and the 6.5k records in SCT2SubCatType_Tbl.\n>> Seems weird....\n>>\n>> Cheers\n>> Jona\n>>\n>> Christopher Kings-Lynne wrote:\n>>\n>>>> Thank you for the swift reply, the following is the output of the \n>>>> SHOW ALL for shared_buffers and effective_cache_size.\n>>>> shared_buffers: 13384\n>>>> effective_cache_size: 4000\n>>>> server memory: 2GB\n>>>\n>>>\n>>>\n>>>\n>>> effective_cache_size should be 10-100x larger perhaps...\n>>>\n>>> Chris\n>>\n>>\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Thu, 09 Jun 2005 11:23:14 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "On Thu, 9 Jun 2005, Jona wrote:\n\n> It's the same (physical) server as well as the same PostGreSQL daemon, \n> so yes.\n\nThe only thing that can differ then is the statistics collected and the\namount of dead space in tables and indexes (but since you both reindex and\nrun vacuum full that should not be it).\n\nSo comparing the statistics in the system tables is the only thing I can \nthink of that might bring some light on the issue. Maybe someone else have \nsome ideas.\n\nAnd as KL said, the effective_cache_size looked like it was way to small. \nWith that setting bigger then pg should select index scans more often. It \ndoesn't explain why the databases behave like they do now, but it might \nmake pg select the same plan nevertheless.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Thu, 9 Jun 2005 12:48:07 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "Thank you for the insight, any suggestion as to what table / columns I \nshould compare between the databases?\n\nCheers\nJona\n\nDennis Bjorklund wrote:\n\n>On Thu, 9 Jun 2005, Jona wrote:\n>\n> \n>\n>>It's the same (physical) server as well as the same PostGreSQL daemon, \n>>so yes.\n>> \n>>\n>\n>The only thing that can differ then is the statistics collected and the\n>amount of dead space in tables and indexes (but since you both reindex and\n>run vacuum full that should not be it).\n>\n>So comparing the statistics in the system tables is the only thing I can \n>think of that might bring some light on the issue. Maybe someone else have \n>some ideas.\n>\n>And as KL said, the effective_cache_size looked like it was way to small. \n>With that setting bigger then pg should select index scans more often. It \n>doesn't explain why the databases behave like they do now, but it might \n>make pg select the same plan nevertheless.\n>\n> \n>\n\n\n\n\n\n\n\nThank you for the insight, any suggestion as to what table / columns I\nshould compare between the databases?\n\nCheers\nJona\n\nDennis Bjorklund wrote:\n\nOn Thu, 9 Jun 2005, Jona wrote:\n\n \n\nIt's the same (physical) server as well as the same PostGreSQL daemon, \nso yes.\n \n\n\nThe only thing that can differ then is the statistics collected and the\namount of dead space in tables and indexes (but since you both reindex and\nrun vacuum full that should not be it).\n\nSo comparing the statistics in the system tables is the only thing I can \nthink of that might bring some light on the issue. Maybe someone else have \nsome ideas.\n\nAnd as KL said, the effective_cache_size looked like it was way to small. \nWith that setting bigger then pg should select index scans more often. It \ndoesn't explain why the databases behave like they do now, but it might \nmake pg select the same plan nevertheless.",
"msg_date": "Thu, 09 Jun 2005 15:42:24 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
},
{
"msg_contents": "Jona <[email protected]> writes:\n> What worries me is that the plan is different,\n\nGiven that the estimated costs are close to the same, this is probably\njust the result of small differences in the ANALYZE statistics leading\nto small differences in cost estimates and thus choice of different\nplans. I'll bet if you re-ANALYZE a few times on the source database\nyou'll see it flipping between plan choices too. This is normal because\nANALYZE takes a random sample of rows rather than being exhaustive.\n\nSo the interesting question is not \"why are the plan choices different\"\nit is \"how do I get the cost estimates closer to reality\". That's the\nonly way in the long run to ensure the planner makes the right choice.\nIncreasing the statistics targets or fooling with planner cost\nparameters are the basic tools you have available here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jun 2005 10:54:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore "
},
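The two kinds of adjustment Tom mentions look roughly like this in SQL; the table and column names are taken from the plans earlier in the thread, and the values are only examples to experiment with:

    ALTER TABLE price_tbl ALTER COLUMN sctid SET STATISTICS 100;
    ALTER TABLE price_tbl ALTER COLUMN affid SET STATISTICS 100;
    ANALYZE price_tbl;

    SET random_page_cost = 2;   -- planner cost parameter, default is 4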
{
"msg_contents": "Hi Tom,\nThank you for the input, you're absolutely right.\nHave just executed like 10 VACUUM ANALYZE on the Price_Tbl in both \ndatabases and now both queries use the same plan.... the bad one, GREAT!\nWho said ignorance is bliss?? ;-)\n\nHave just messed around with ALTER TABLE ... ALTER .... SET STATISTICS \n.... for both tables to no effect.\nHave tried setting both high number (100 and 200) and a low number (1) \nand run several VACUUM ANALYZE afterwards.\nIt still insists on the bad plan...\n\nFurthermore I've played around with the RANDOM_PAGE_COST runtime parameter.\nSeems that when I set it to 2.2 it switch to using the aff_price_uq \nindex on Price_Tbl, however it needs to be set to 0.7 before it uses \nthe subcat_uq index on SCT2SubCatType_Tbl.\nHas no effect wether the statistics is set to 1 or a 100 for this behaviour.\nThe overall plan remains the same though, and even when it uses both \nindexes the total cost is roughly 5.5 times higher than the good plan.\n\nNew plan:\nUnique (cost=612.29..612.65 rows=3 width=75) (actual \ntime=255.88..255.89 rows=3 loops=1)\n -> Hash Join (cost=158.26..596.22 rows=288 width=75) (actual \ntime=60.91..99.69 rows=2477 loops=1)\n Hash Cond: (\"outer\".sctid = \"inner\".sctid)\n -> Index Scan using aff_price_uq on price_tbl \n(cost=0.00..409.24 rows=5025 width=4) (actual time=0.03..17.81 rows=5157 \nloops=1)\n Index Cond: (affid = 8)\n -> Hash (cost=157.37..157.37 rows=355 width=71) \n(actual time=60.77..60.77 rows=0 loops=1)\n -> Merge Join (cost=10.26..157.37 rows=355 \nwidth=71) (actual time=14.42..53.79 rows=2493 loops=1)\n Merge Cond: (\"outer\".subcattpid = \n\"inner\".id)\n -> Index Scan using subcat_uq on \nsct2subcattype_tbl (cost=0.00..126.28 rows=6536 width=8) (actual \ntime=0.03..23.25 rows=6527 loops=1)\n -> Sort (cost=10.26..10.28 rows=9 \nwidth=63) (actual time=2.46..5.66 rows=2507 loops=1)\n\n\"Total runtime: 257.49 msec\"\n\nOld \"good\" plan:\nUnique (cost=117.18..117.20 rows=1 width=147) (actual \ntime=224.62..224.63 rows=3 loops=1)\n -> Index Scan using subcat_uq on sct2subcattype_tbl \n(cost=0.00..100.47 rows=33 width=8) (actual time=0.01..0.20 rows=46 \nloops=54)\n Index Cond: (\"outer\".id = sct2subcattype_tbl.subcattpid) \t\n\t\n -> Index Scan using aff_price_uq on price_tbl \n(cost=0.00..7.11 rows=1 width=4) (actual time=0.01..0.01 rows=1 \nloops=2493) \t\n Index Cond: ((price_tbl.affid = 8) AND (\"outer\".sctid = \nprice_tbl.sctid)) \t\n\nTotal runtime: 225.14 msec\n\nIt seems that the more it knows about\n\nCould you provide some input on how to make it realise that the plan it \nselects is not the optimal?\n\nCheers\nJona\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>What worries me is that the plan is different,\n>> \n>>\n>\n>Given that the estimated costs are close to the same, this is probably\n>just the result of small differences in the ANALYZE statistics leading\n>to small differences in cost estimates and thus choice of different\n>plans. I'll bet if you re-ANALYZE a few times on the source database\n>you'll see it flipping between plan choices too. This is normal because\n>ANALYZE takes a random sample of rows rather than being exhaustive.\n>\n>So the interesting question is not \"why are the plan choices different\"\n>it is \"how do I get the cost estimates closer to reality\". 
That's the\n>only way in the long run to ensure the planner makes the right choice.\n>Increasing the statistics targets or fooling with planner cost\n>parameters are the basic tools you have available here.\n>\n>\t\t\tregards, tom lane\n> \n>\n\n\n\n\n\n\n\nHi Tom,\nThank you for the input, you're absolutely right.\nHave just executed like 10 VACUUM ANALYZE on the Price_Tbl in both\ndatabases and now both queries use the same plan.... the bad one, GREAT!\nWho said ignorance is bliss?? ;-)\n\nHave just messed around with ALTER TABLE ... ALTER .... SET STATISTICS\n.... for both tables to no effect.\nHave tried setting both high number (100 and 200) and a low number (1)\nand run several VACUUM ANALYZE afterwards.\nIt still insists on the bad plan...\n\nFurthermore I've played around with the RANDOM_PAGE_COST runtime\nparameter.\nSeems that when I set it to 2.2 it switch to using the aff_price_uq\nindex on Price_Tbl, however it needs to be set to 0.7 before it uses\nthe subcat_uq index on SCT2SubCatType_Tbl.\nHas no effect wether the statistics is set to 1 or a 100 for this\nbehaviour.\nThe overall plan remains the same though, and even when it uses both\nindexes the total cost is roughly 5.5 times higher than the good plan.\n\nNew plan:\nUnique (cost=612.29..612.65 rows=3 width=75) (actual\ntime=255.88..255.89 rows=3 loops=1)\n -> Hash Join (cost=158.26..596.22 rows=288 width=75) (actual\ntime=60.91..99.69 rows=2477 loops=1)\n Hash Cond: (\"outer\".sctid = \"inner\".sctid)\n -> Index Scan using aff_price_uq on price_tbl \n(cost=0.00..409.24 rows=5025 width=4) (actual time=0.03..17.81\nrows=5157 loops=1)\n Index Cond: (affid = 8)\n -> Hash (cost=157.37..157.37 rows=355\nwidth=71) (actual time=60.77..60.77 rows=0 loops=1)\n -> Merge Join (cost=10.26..157.37\nrows=355 width=71) (actual time=14.42..53.79 rows=2493 loops=1)\n Merge Cond: (\"outer\".subcattpid =\n\"inner\".id)\n -> Index Scan using subcat_uq on\nsct2subcattype_tbl (cost=0.00..126.28 rows=6536 width=8) (actual\ntime=0.03..23.25 rows=6527 loops=1)\n -> Sort (cost=10.26..10.28 rows=9\nwidth=63) (actual time=2.46..5.66 rows=2507 loops=1)\n\n\"Total runtime: 257.49 msec\"\n\nOld \"good\" plan:\nUnique (cost=117.18..117.20 rows=1 width=147)\n(actual time=224.62..224.63 rows=3 loops=1)\n\n\n\n \n-> Index Scan using subcat_uq on\nsct2subcattype_tbl (cost=0.00..100.47 rows=33\nwidth=8) (actual time=0.01..0.20 rows=46 loops=54)\n\n\n Index Cond: (\"outer\".id =\nsct2subcattype_tbl.subcattpid)\n\n\n\n\n\n\n -> Index Scan\nusing aff_price_uq on price_tbl (cost=0.00..7.11\nrows=1 width=4) (actual time=0.01..0.01 rows=1 loops=2493)\n\n\n\n\n Index Cond: ((price_tbl.affid = 8)\nAND (\"outer\".sctid = price_tbl.sctid))\n\n\n\n\n\nTotal runtime: 225.14 msec\n\nIt seems that the more it knows about\n\nCould you provide some input on how to make it realise that the plan it\nselects is not the optimal?\n\nCheers\nJona\n\nTom Lane wrote:\n\nJona <[email protected]> writes:\n \n\nWhat worries me is that the plan is different,\n \n\n\nGiven that the estimated costs are close to the same, this is probably\njust the result of small differences in the ANALYZE statistics leading\nto small differences in cost estimates and thus choice of different\nplans. I'll bet if you re-ANALYZE a few times on the source database\nyou'll see it flipping between plan choices too. 
This is normal because\nANALYZE takes a random sample of rows rather than being exhaustive.\n\nSo the interesting question is not \"why are the plan choices different\"\nit is \"how do I get the cost estimates closer to reality\". That's the\nonly way in the long run to ensure the planner makes the right choice.\nIncreasing the statistics targets or fooling with planner cost\nparameters are the basic tools you have available here.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 09 Jun 2005 18:15:04 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changes after pg_dump / pg_restore"
}
] |
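A minimal SQL sketch of the two planner-tuning knobs discussed in the thread above, using the table and column names that appear in the posted plans (price_tbl.affid, sct2subcattype_tbl.subcattpid — the column choices are a guess based on the join and filter columns); the statistics target of 100 and the random_page_cost of 2.2 are simply the values Jona experimented with, not recommendations:

-- Raise the per-column statistics target so ANALYZE samples more rows,
-- then re-ANALYZE so the new target actually takes effect.
ALTER TABLE price_tbl ALTER COLUMN affid SET STATISTICS 100;
ALTER TABLE sct2subcattype_tbl ALTER COLUMN subcattpid SET STATISTICS 100;
ANALYZE price_tbl;
ANALYZE sct2subcattype_tbl;

-- Planner cost parameters can be tried per session before touching
-- postgresql.conf; re-run the problem query under EXPLAIN ANALYZE afterwards
-- and compare estimated row counts against the actual ones.
SET random_page_cost = 2.2;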
[
{
"msg_contents": "Hi, \n\n \n\nMy Secenario :\n\n \n\nP4 with 1G of memory on Fedora Core\n\nabout 100 inserts/update per hour\n\nabout 100 query per minute\n\n20 concurrent connections\n\n \n\n \n\n1. What is the best parameter setting in the pg_autovacuum for my scenario ?\n\n \n\n2. what will be my sleep setting if i want to execute pg_autovacuum only\nafter 10 hrs after the last execution.\n\n \n\n \n\nSorry for asking those stupid question but is this my first time to use\npg_autovacuum and i can't understand the \n\nREADME.pg_autovacuum file :)\n\n \n\nThe default parameters in pg_autovacuum.h makes my box suffer some resources\nproblem.\n\n \n\nThanks\n\n \n\n\n\n\n\n\n\n\n\n\n \nHi, \n \nMy Secenario :\n \nP4 with 1G of memory on Fedora Core\nabout 100 inserts/update per hour\nabout 100 query per minute\n20 concurrent connections\n \n \n1. What is the best parameter setting in the pg_autovacuum\nfor my scenario ?\n \n2. what will be my sleep setting if i want to execute\npg_autovacuum only after 10 hrs after the last execution.\n \n \nSorry for asking those stupid question but is this my first\ntime to use pg_autovacuum and i can’t understand the \nREADME.pg_autovacuum file :)\n \nThe default parameters in pg_autovacuum.h makes my box\nsuffer some resources problem.\n \nThanks",
"msg_date": "Thu, 9 Jun 2005 17:19:55 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_autovacuum settings"
}
] |
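The contrib/pg_autovacuum daemon of that era decides when to vacuum or analyze a table by comparing the statistics collector's per-table activity counters against thresholds of the form base + scaling_factor * reltuples. As a rough way to judge whether the defaults are too aggressive for a workload of ~100 inserts/updates per hour, those counters can be inspected directly; this is only a sketch using the standard statistics views:

-- Per-table activity since the counters were last reset; pg_autovacuum
-- compares numbers like these against its vacuum/analyze thresholds.
SELECT s.relname,
       s.n_tup_ins AS inserts,
       s.n_tup_upd AS updates,
       s.n_tup_del AS deletes,
       c.reltuples AS estimated_rows
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY s.n_tup_upd + s.n_tup_del DESC;

As for the ten-hour pause: the time pg_autovacuum sleeps between loops is set on its command line (a base value in seconds via -s plus a scaling factor via -S, per the contrib README), so something in the neighbourhood of -s 36000 should approximate it — but check the README for the exact sleep formula before relying on that.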
[
{
"msg_contents": "Hi,\n\nafter having migrated a 7.2 pg-database to 7.4 while upgrdaing from\ndebian woody to debian sarge there are some more conf-Parameters to\nevaluate. \nWe are running a small but continuously growing datawarehouse which has\nrecently around 40 million fact entries. \n\nTo my question: I found the parameter \"stats_reset_on_server_start\"\nwhich is set to true by default. Why did you choose this (and not false)\nand what are the impacts of changeing it to false? I mean, as long as I\nunderstood it, each query or statements generates some statistic data\nwhich is used by the optimizer (or anything equal) later on. So in my\noppinion, wouldn't it be better so set this parameter to false and to\nenable a kind of a \"startup reset_stats\" option?\n\nRegards,\nYann\n",
"msg_date": "Thu, 9 Jun 2005 12:08:52 +0200",
"msg_from": "Yann Michel <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql.conf runtime statistics default"
},
{
"msg_contents": "Yann Michel wrote:\n> \n> To my question: I found the parameter \"stats_reset_on_server_start\"\n> which is set to true by default. Why did you choose this (and not false)\n> and what are the impacts of changeing it to false? I mean, as long as I\n> understood it, each query or statements generates some statistic data\n> which is used by the optimizer (or anything equal) later on. So in my\n> oppinion, wouldn't it be better so set this parameter to false and to\n> enable a kind of a \"startup reset_stats\" option?\n\nThis is administrator statistics (e.g. number of disk blocks read from \nthis index) not planner statistics. You're right - it would be foolish \nto throw away planner stats.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 09 Jun 2005 14:11:22 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf runtime statistics default"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jun 09, 2005 at 02:11:22PM +0100, Richard Huxton wrote:\n> >\n> >To my question: I found the parameter \"stats_reset_on_server_start\"\n> >which is set to true by default. Why did you choose this (and not false)\n> >and what are the impacts of changeing it to false? I mean, as long as I\n> >understood it, each query or statements generates some statistic data\n> >which is used by the optimizer (or anything equal) later on. So in my\n> >oppinion, wouldn't it be better so set this parameter to false and to\n> >enable a kind of a \"startup reset_stats\" option?\n> \n> This is administrator statistics (e.g. number of disk blocks read from \n> this index) not planner statistics. You're right - it would be foolish \n> to throw away planner stats.\n\nSo what is best to set this parameter to and when? As I read this\nparameter is documented within the section \"16.4.7.2. Query and Index\nStatistics Collector\" so I guess it is better to set it to false as\ndescribed above. Or am I wrong?\n\nRegards,\nYann\n",
"msg_date": "Fri, 10 Jun 2005 06:57:02 +0200",
"msg_from": "Yann Michel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf runtime statistics default"
},
{
"msg_contents": "Yann Michel wrote:\n> Hi,\n> \n> On Thu, Jun 09, 2005 at 02:11:22PM +0100, Richard Huxton wrote:\n> \n>>>To my question: I found the parameter \"stats_reset_on_server_start\"\n>>>which is set to true by default. Why did you choose this (and not false)\n>>>and what are the impacts of changeing it to false? I mean, as long as I\n>>>understood it, each query or statements generates some statistic data\n>>>which is used by the optimizer (or anything equal) later on. So in my\n>>>oppinion, wouldn't it be better so set this parameter to false and to\n>>>enable a kind of a \"startup reset_stats\" option?\n>>\n>>This is administrator statistics (e.g. number of disk blocks read from \n>>this index) not planner statistics. You're right - it would be foolish \n>>to throw away planner stats.\n> \n> \n> So what is best to set this parameter to and when? As I read this\n> parameter is documented within the section \"16.4.7.2. Query and Index\n> Statistics Collector\" so I guess it is better to set it to false as\n> described above. Or am I wrong?\n\nIt depends on whether you want to know how much activity your \ntables/indexes have received *ever* or since you last restarted. If you \naltered your database schema, added/removed indexes or changed \nhardware/configuration then you might want to reset the counts to zero \nto more easily see the effect of the new setup.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 10 Jun 2005 08:17:37 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf runtime statistics default"
}
] |
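A sketch of the two kinds of statistics being distinguished in this thread; the table name 'fact_table' is only a placeholder. stats_reset_on_server_start governs the activity counters shown by the first query, while the planner's estimates come from the second set of views, which ANALYZE populates and which survive restarts regardless of that setting:

-- Collector (administrator) statistics: activity counters, kept or reset
-- across restarts depending on stats_reset_on_server_start.
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables;

-- Planner statistics: produced by ANALYZE, stored in pg_statistic and
-- exposed through pg_stats; not affected by the setting above.
SELECT tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'fact_table';  -- placeholder name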
[
{
"msg_contents": "On 6/8/05, Francisco Figueiredo Jr. <[email protected]> wrote:\n> \n> --- Josh Close <[email protected]> escreveu:\n> \n> > Well, that would make total sense. I was kinda curious how the data\n> > provider differentianted between :a and casting like now()::text.\n> >\n> \n> Hi Josh!\n> \n> Npgsql uses the info found in NpgsqlCommand.Parameters collection. We do check\n> if a parameter in Parameters collection isn't found in query string. The other\n> way around isn't done yet. So, you can safely put something like: :a::text and\n> it will send the text 5::text for example.\n> \n> I hope it helps.\n\nYes, that does help. Thanks.\n\n-Josh\n",
"msg_date": "Thu, 9 Jun 2005 08:35:29 -0500",
"msg_from": "Josh Close <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Npgsql-general] index out of range"
}
] |
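To illustrate Francisco's point about how the parameter marker and the cast interact, here is roughly what the server ends up receiving once Npgsql has substituted a parameter value of 5 into a query text containing :a::text (the literal 5 is just the example used in the thread):

-- Parameter value followed by an explicit cast to text.
SELECT 5::text;

-- A plain cast with no parameter involved, as in the earlier now()::text case.
SELECT now()::text;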
[
{
"msg_contents": "Thanks for your reply. Besides your post regarding *nix vs. Windows I\ngot a few which didn't go to the group. Words like \"bold move\" and\n\"disaster waiting to happen\" tended to feature prominently in these\nmessages (regarding putting something this big on PostgreSQL under\nWindows), and management is considering deploying one under Windows and\none under Linux, or possibly even both under Linux -- so please pass\nalong advice for either environment.\n \nThe four web servers are not all identical -- we have two \"large\" and\ntwo \"small\". They are split between sites, and even one of the small\nones is capable of keeping our apps running, although with significantly\nimpaired performance. The initial PostgreSQL implementation will be on\none large and one small, unless we decide to do one each of Windows and\nLinux; in that case we'd want identical hardware to better compare the\nOS issues, so it would probably be the two small servers.\n \nThe small servers are IBM 8686-9RX servers with 4 xeon processors at 2\nghz, 6 gig of ram. The internal drives are set as a 67 gig raid 5 array\nwith three drives. We have an external storage arry attached. This has\na 490 gig raid 5 array on it. The drives are 15K drives.\n\nhttp://www-307.ibm.com/pc/support/site.wss/quickPath.do?quickPathEntry=86869rx\nfor more info.\n\nThe large servers are also IBM, although I don't have a model number\nhandy. I know the xeons are 3 ghz and the bus is faster; otherwise they\nare similar. I know the large servers can go to 64 GB RAM, and\nmanagement has said they are willing to add a lot more RAM if it will\nget used. (Our current, commercial database product can't use it under\nWindows.) There is also the possibility of adding additional CPUs.\n \nLike I said, with the current hardware and Sybase 12.5.1, one small\nmachine can keep the applications limping along, although data\nreplication falls behind during the day and catches up at night, and we\nget complaints from web users about slow response and some requests\ntiming out. One large machine handles the load with little degradation,\nand using any two machines keeps everyone happy. We have four so that\nwe can have two each at two different sites, and so we can take one out\nfor maintenance and still tolerate a singe machine failure.\n \nWe're hoping PostgreSQL can match or beat Sybase performance, and\npreliminary tests look good. We should be able to get some load testing\ngoing within a week, and we're shooting for slipping these machines into\nthe mix around the end of this month. (We've gone to some lengths to\nkeep our code portable.)\n \n-Kevin\n \n \n>>> Richard Huxton <[email protected]> 06/09/05 3:06 AM >>>\nKevin Grittner wrote:\n> \n> The manager of the DBA team is reluctant to change both the OS and the\n> DBMS at the same time, so unless I can make a strong case for why it\nis\n> important to run postgresql under Linux, we will be running this on\n> Windows. Currently, there are two Java-based middle tier processes\n> running on each central database server, one for the replication and\none\n> for the web. We expect to keep it that way, so the database needs to\n> play well with these processes. \n\nWell, there's a lot more experience running PG on various *nix systems \nand a lot more help available. 
Also, I don't think performance on \nWindows is as good as on Linux/*BSD yet.\n\nAgainst switching OS is the fact that you presumably don't have the \nskills in-house for it, and the hardware was chosen for Windows \ncompatibility/performance.\n\nSpeaking of which, what sort of hardware are we talking about?\n\n--\n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Thu, 09 Jun 2005 09:55:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommendations for configuring a 200 GB"
},
{
"msg_contents": "> We're hoping PostgreSQL can match or beat Sybase performance, and\n> preliminary tests look good. We should be able to get some load testing\n> going within a week, and we're shooting for slipping these machines into\n> the mix around the end of this month. (We've gone to some lengths to\n> keep our code portable.)\n\nJust make sure to set up and run the contrib/pg_autovacuum daemon, or \nmake sure you fully read 'regular database maintenance' in the manual.\n\nChris\n",
"msg_date": "Thu, 09 Jun 2005 23:01:33 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommendations for configuring a 200 GB"
}
] |
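A minimal sketch of the "regular database maintenance" the manual chapter covers, for the case where the pg_autovacuum daemon is not run; some_busy_table is a placeholder name and the right frequency depends entirely on update volume:

-- Reclaim dead row versions and refresh planner statistics database-wide.
VACUUM ANALYZE;

-- The same for a single heavily updated table, with progress output.
VACUUM VERBOSE ANALYZE some_busy_table;

-- Occasionally worth considering for indexes on heavily churned tables,
-- depending on PostgreSQL version and index bloat.
REINDEX TABLE some_busy_table;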