[ { "msg_contents": "Recently I've wrote few pgSql procedures that generates invoices and \nstore it in postgres table. Small test has shown that there is \nperformance problem. I've thought that string operation in pgsql are not \nperfect but it has appeared that 90% of time program waste on very \nsimple update.\nBelow is my simplified procedures:\n\nCREATE TABLE group_fin_account_tst (\n group_fin_account_tst_id BIGSERIAL PRIMARY KEY,\n credit NUMERIC(8,2) DEFAULT 0.00 NOT NULL\n) ; ALTER TABLE group_fin_account_tst OWNER TO freeconetadm;\n\nINSERT INTO group_fin_account_tst\n(credit) VALUES (4);\n\nCREATE OR REPLACE FUNCTION test()\nRETURNS void AS\n$BODY$\nDECLARE\nBEGIN \nFOR v_i IN 1..4000 LOOP\n UPDATE group_fin_account_tst SET\n credit = v_i\n WHERE group_fin_account_tst_id = 1; -- for real procedure I \nupdate different rows\n\nEND LOOP;\nEND; $BODY$ LANGUAGE 'plpgsql' VOLATILE;\nALTER FUNCTION test() OWNER TO freeconetadm;\nselect test();\n\nThe strange thing is how program behave when I increase number of \niteration.\nBelow my results (where u/s is number of updates per second)\n\nOn windows\n500 - 0.3s(1666u/s)\n1000 - 0.7s (1428u/s)\n2000 - 2.3s (869u/s)\n4000 - 9s (444u/s)\n8000 -29s (275u/s)\n16000-114s (14u/s)\n\nOn linux:\n500 - 0.5s(1000u/s)\n1000 - 1.8s (555u/s)\n2000 - 7.0s (285u/s)\n4000 - 26s (153u/s)\n8000 -101s (79u/s)\n16000-400s (40u/s)\n\nOn both systems relation between number of iteration and time is \nstrongly nonlinear! Do you know what is a problem? Is it possible to \ncommit transaction inside pgsql procedure because I think that maybe \ntransaction is too long?\n\nRegards\nMichal Szymanski\nhttp://blog.szymanskich.net\n", "msg_date": "Fri, 25 May 2007 10:57:04 +0200", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": true, "msg_subject": "Big problem with sql update operation" }, { "msg_contents": "Michal Szymanski <[email protected]> writes:\n> CREATE OR REPLACE FUNCTION test()\n> RETURNS void AS\n> $BODY$\n> DECLARE\n> BEGIN \n> FOR v_i IN 1..4000 LOOP\n> UPDATE group_fin_account_tst SET\n> credit = v_i\n> WHERE group_fin_account_tst_id = 1; -- for real procedure I \n> update different rows\n\nDoes updating the *same* record 4000 times per transaction reflect the\nreal behavior of your application? If not, this is not a good\nbenchmark. If so, consider redesigning your app to avoid so many\nredundant updates.\n\n(For the record, the reason you see nonlinear degradation is the\naccumulation of tentatively-dead versions of the row, each of which has\nto be rechecked by each later update.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 May 2007 10:28:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big problem with sql update operation " }, { "msg_contents": "Tom Lane wrote:\n> Michal Szymanski <[email protected]> writes:\n> \n>> CREATE OR REPLACE FUNCTION test()\n>> RETURNS void AS\n>> $BODY$\n>> DECLARE\n>> BEGIN \n>> FOR v_i IN 1..4000 LOOP\n>> UPDATE group_fin_account_tst SET\n>> credit = v_i\n>> WHERE group_fin_account_tst_id = 1; -- for real procedure I \n>> update different rows\n>> \n>\n> Does updating the *same* record 4000 times per transaction reflect the\n> real behavior of your application? If not, this is not a good\n> benchmark. 
If so, consider redesigning your app to avoid so many\n> redundant updates.\n>\n> \nReal application modifiy every time modify different row.\n\n> (For the record, the reason you see nonlinear degradation is the\n> accumulation of tentatively-dead versions of the row, each of which has\n> to be rechecked by each later update.)\n>\n> \nThere is another strange thing. We have two versions of our test \nenvironment one with production DB copy and second genereated with \nminimal data set and it is odd that update presented above on copy of \nproduction is executing 170ms but on small DB it executing 6s !!!!\n\nMichal Szymanski\nhttp://blog.szymanskich.net\n", "msg_date": "Sat, 26 May 2007 00:27:02 +0200", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big problem with sql update operation" }, { "msg_contents": "Michal Szymanski wrote:\n> Tom Lane wrote:\n\n> >(For the record, the reason you see nonlinear degradation is the\n> >accumulation of tentatively-dead versions of the row, each of which has\n> >to be rechecked by each later update.)\n> > \n> There is another strange thing. We have two versions of our test \n> environment one with production DB copy and second genereated with \n> minimal data set and it is odd that update presented above on copy of \n> production is executing 170ms but on small DB it executing 6s !!!!\n\nHow are you vacuuming the tables?\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"El conflicto es el camino real hacia la uni�n\"\n", "msg_date": "Fri, 25 May 2007 18:38:28 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big problem with sql update operation" }, { "msg_contents": "Michal Szymanski wrote:\n> There is another strange thing. We have two versions of our test\n> >>environment one with production DB copy and second genereated with \n> >>minimal data set and it is odd that update presented above on copy of \n> >>production is executing 170ms but on small DB it executing 6s !!!!\n> >\n> >How are you vacuuming the tables?\n> > \n> Using pgAdmin (DB is installed on my laptop) and I use this tool for \n> vaccuminh, I do not think that vaccuming can help because I've tested on \n> both database just after importing.\n\nI think you are misunderstanding the importance of vacuuming the table.\nTry this: on a different terminal from the one running the test, run a\nVACUUM on the updated table with vacuum_cost_delay set to 20, on an\ninfinite loop. Keep this running while you do your update test. Vary\nthe vacuum_cost_delay and measure the average/min/max UPDATE times.\nAlso try putting a short sleep on the infinite VACUUM loop and see how\nits length affects the UPDATE times.\n\nOne thing not clear to me is if your table is in a clean state. Before\nrunning this test, do a TRUNCATE and import the data again. This will\nget rid of any dead space that may be hurting your measurements.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n", "msg_date": "Tue, 29 May 2007 16:41:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big problem with sql update operation" } ]
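A quick way to see the effect Tom describes, and to try Alvaro's throttled-vacuum experiment, is sketched below; it is only a sketch, assuming the group_fin_account_tst table from the test case and using the vacuum_cost_delay value (20) suggested in the thread.

-- Session 1: run the update loop from the test case.
SELECT test();

-- Session 2 (afterwards, or between runs): VACUUM VERBOSE reports how many
-- dead row versions the loop left behind; each later UPDATE in the loop has
-- to step over all of them, which is what makes the runtime nonlinear.
VACUUM VERBOSE group_fin_account_tst;

-- Alvaro's experiment: while the loop runs, vacuum the table repeatedly from
-- another session, throttled so it does not saturate the disk.
SET vacuum_cost_delay = 20;
VACUUM group_fin_account_tst;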
[ { "msg_contents": "\"Also sprach Kenneth Marshall:\"\n> > Surprise, ... I got a speed up of hundreds of times. The same application\n> > that crawled under my original rgdbm implementation and under PG now\n> > maxed out the network bandwidth at close to a full 10Mb/s and 1200\n> > pkts/s, at 10% CPU on my 700MHz client, and a bit less on the 1GHz\n> > server.\n> > \n> > So\n> > \n> > * Is that what is holding up postgres over the net too? Lots of tiny\n> > packets?\n> \n> \n> This effect is very common, but you are in effect altering the query/\n\nI imagined so, but no, I am not changing the behaviour - I believe you\nare imagining something different here. Let me explain.\n\nIt is usually the case that drivers and the network layer conspire to\nemit packets when they are otherwise idle, since they have nothing\nbetter to do. That is, if the transmission unit is the normal 1500B and\nthere is 200B in the transmission buffer and nothing else is frisking\nthem about the chops, something along the line will shrug and say, OK,\nI'll just send out a 200B fragment now, apologize, and send out another\nfragment later if anything else comes along for me to chunter out.\n\nIt is also the case that drivers do the opposite .. that is, they do\nNOT send out packets when the transmission buffer is full, even if they\nhave 1500B worth. Why? Well, on Ge for sure, and on 100BT most of the\ntime, it doesn't pay to send out individual packets because the space\nrequired between packets is relatively too great to permit the network\nto work at that speed given the speed of light as it is, and the\nspacing it implies between packets (I remember when I advised the\nnetworking protocol people that Ge was a coming thing about 6 years\nago, they all protested and said it was _physically_ impossible. It is.\nIf you send packets one by one!). An ethernet line is fundamentally\nonly electrical and only signals up or down (relative) and needs time to\nquiesce. And then there's the busmastering .. a PCI bus is only about\n33MHz, and 32 bits wide (well, or 16 on portables, or even 64, but\nyou're getting into heavy server equipment then). That's 128MB/s in\none direction, and any time one releases the bus there's a re-setup time\nthat costs the earth and will easily lower bandwidth by 75%. So drivers\nlike to take the bus for a good few packets at a time. Even a single\npacket (1500B) will take 400 multi-step bus cycles to get to the\ncard, and then it's a question of how much onboard memory it has or\nwhether one has to drive it synchronously. Most cards have something\nlike a 32-unit ring buffer, and I think each unit is considerable.\n\nNow, if a driver KNOWS what's coming then it can alter its behavior in\norder to mesh properly with the higher level layers. What I did was\n_tell_ the driver and the protocol not to send any data until I well\nand truly tell it to, and then told it to, when I was ready. The result\nis that a full communication unit (start, header, following data, and\nstop codon) was sent in one blast.\n\nThat meant that there were NO tiny fragments blocking up the net, being\nsent wily-nily. And it also meant that the driver was NOT waiting for\nmore info to come in before getting bored and sending out what it had.\nIt did as I told it to.\n\nThe evidence from monitoring the PG network thruput is that 75% of its\npackets are in the 64-128B range, including tcp header. 
That's hitting\nthe 100Kb/s (10KB/s) bandwidth regime on my network at the lower end.\nIt will be even _worse_ on a faster net, I think (feel free to send me a\nfaster net to compare with :). \n\nI also graphed latency, but I haven't taken into account the results as\nthe bandwidth measurements were so striking.\n\n> response behavior of the database. Most applications expect an answer\n> from the database after every query.\n\nWell of course. Nothing else would work! (I imagine you have some kind\nof async scheme, but I haven't investigated). I ask, the db replies. I\nask, the db replies. What I did was\n\n 1) made the ASK go out as one lump.\n 2) made the REPLY go out as one lump\n 3) STOPPED the card waiting for several replies or asks to accumulate\n before sending out anything at all.\n\n> If it could manage retrying failed\n> queries later, you could use the typical sliding window/delayed ack\n> that is so useful in improving the bandwidth utilization of many network\n\nThat is not what is going on (though that's not a bad idea). See\nabove for the explanation. One has to take into account the physical\nhardware involved and its limitations, and arrange the communications\naccordingly. All I did was send EACH query and EACH response as a\nsingle unit, at the hardware level. \n\nOne could do better still by managing _several_ threads communications\nat once.\n\n> programs. Maybe an option in libpq to tell it to use delayed \"acks\". I\n> do not know what would be involved.\n\nNothing spectacular is required to see a considerable improvement, I\nthink,. apart from a little direction from the high level protocol down\nto the driver about where the communication boundaries are. 1000%\nspeedup in my case.\n\nNow, where is the actual socket send done in the pg code? I'd like to\ncheck what's happening in there.\n\n\n\n\nPeter\n", "msg_date": "Fri, 25 May 2007 15:23:18 +0200 (MET DST)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: general PG network slowness (possible cure) (repost)" } ]
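The coalescing Peter describes happens below libpq, in the driver and TCP stack, so it cannot be reproduced from SQL; what can be measured from the client side is how much of the elapsed time is per-round-trip overhead rather than server work. A rough psql sketch of that comparison (the scratch table and row count are arbitrary, not from the thread):

\timing on
CREATE TEMP TABLE t (i int);

-- Row-at-a-time: one round trip, and at least one small packet in each
-- direction, per statement.
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... repeated for as many rows as you want to time ...

-- The same rows sent as a single statement: one round trip for the whole batch.
INSERT INTO t SELECT generate_series(1, 1000);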
[ { "msg_contents": "TOASTed means storage outside of the main table. But AFAIK, only rows bigger 2K are considered for toasting.\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\tRe: My quick and dirty \"solution\" (Re: [PERFORM] Performance Problem with Vacuum of bytea table (PG 8.0.13))\nVon:\tBastian Voigt <[email protected]>\nDatum:\t\t25.05.2007 14:13\n\nRichard Huxton wrote:\n> Could you check the output of vacuum verbose on that table and see how \n> much work it's doing? I'd have thought the actual bytea data would be \n> TOASTed away to a separate table for storage, leaving the vacuum with \n> very little work to do.\nI'm quite new to postgres (actually I just ported our running \napplication from MySQL...), so I don't know what toast means. But I \nnoticed that vacuum also tried to cleanup some \"toast\" relations or so. \nThis was what took so long.\n\n> It might well be your actual problem is your disk I/O is constantly \n> saturated and the vacuum just pushes it over the edge. In which case \n> you'll either need more/better disks or to find a quiet time once a \n> day to vacuum and just do so then.\nYes, that was definitely the case. But now everything runs smoothly \nagain, so I don't think I need to buy new disks.\n\nRegards\nBastian\n\n\n-- \nBastian Voigt\nNeumünstersche Straße 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Fri, 25 May 2007 17:20:23 +0200", "msg_from": "\"Andreas Kostyrka\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance P\n\troblem with Vacuum of bytea table (PG 8.0.13))" } ]
[ { "msg_contents": "Greetings,\n\nWe have two servers running pgsql -- an older server running 8.2.3, \nand a newer (far superior) one running 8.2.4. One of our reporting \nqueries is running painfully slowly on 8.2.4, but it executes in a \nreasonable timeframe on 8.2.3. Below, I've included a contrived, \nstripped down query which still exhibits the same unintuitively poor \nperformance, as well as its explain analyze output from both \nservers. In particular, 8.2.4 opts for filters in a couple places \nwhere we would expect index conds. Also, we've noticed that the \n8.2.4 box (in other similar queries) consistently underestimates \ncosts, whereas the 8.2.3 box consistently overestimates.\n\nAll columns involved in this query are indexed (btrees), and there is \na functional index on mm_date_trunc('day', created_at)...where \nmm_date_trunc is simply an immutable version of date_trunc (fine for \nour purposes). The only configuration differences between the \nservers are various memory settings... work_mem and temp_buffers are \n8mb / 16mb, shared buffers 128mb / 512mb on the 8.2.3 and 8.2.4 \nservers, respectively. Stats targets are 10 on both, for \nconsistency... but it is worth mentioning that performance was still \nabysmal under 8.2.4 with 250 as the target.\n\nAny insight would be most appreciated, as we're a bit stumped. Thanks!\n\nCheers,\n\nDave Pirotte\nDirector of Technology\nMedia Matters for America\n\n===============================================================\n\nselect h.day, h.c as total,\n\t(select count(*) as c\n\tfrom hits h2\n\t\tjoin uri_qstrings uq on (h2.uri_qstring_id = uq.id)\n\t\tjoin referrer_paths rp on (h2.referrer_path_id = rp.id)\n\t\tjoin referrer_domains rd on (rp.referrer_domain_id = rd.id)\n\twhere mm_date_trunc('day', created_at) = h.day\n\t\tand site_id = 3\n\t\tand uq.qstring = '?f=h_top'\n\t\tand rd.domain = 'mediamatters.org'\n\t) as h_top\nfrom (\n\tselect mm_date_trunc('day', h.created_at) as day,\n\t\tcount(*) as c\n\tfrom hits h\n\twhere created_at > date_trunc('day', now() - interval '2 days')\n\tgroup by mm_date_trunc('day', h.created_at)\n) h\norder by h.day asc;\n\n \n QUERY PLAN (8.2.4)\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----------------------------------------\nSort (cost=204012.65..204012.66 rows=3 width=16) (actual \ntime=83012.885..83012.885 rows=3 loops=1)\n Sort Key: \"day\" -> Subquery Scan h (cost=149811.02..204012.62 \nrows=3 width=16) (actual time=28875.251..83012.868 rows=3 loops=1)\n -> HashAggregate (cost=149811.02..149811.06 rows=3 \nwidth=8) (actual time=1602.787..1602.794 rows=3 loops=1)\n -> Bitmap Heap Scan on hits h \n(cost=6485.90..148079.18 rows=346368 width=8) (actual \ntime=48.222..1358.196 rows=391026 loops=1)\n Recheck Cond: (created_at > date_trunc \n('day'::text, (now() - '2 days'::interval)))\n -> Bitmap Index Scan on hits_created_idx \n(cost=0.00..6399.31 rows=346368 width=0) (actual time=47.293..47.293 \nrows=391027 loops=1)\n Index Cond: (created_at > date_trunc \n('day'::text, (now() - '2 days'::interval)))\n SubPlan\n -> Aggregate (cost=18067.17..18067.18 rows=1 width=0) \n(actual time=27136.681..27136.681 rows=1 loops=3)\n -> Nested Loop (cost=40.66..18067.16 rows=1 \nwidth=0) (actual time=1105.396..27135.496 rows=3394 loops=3)\n -> Nested Loop (cost=40.66..18063.56 rows=9 \nwidth=8) (actual time=32.132..26837.394 rows=50537 loops=3)\n -> Nested Loop (cost=40.66..5869.35 \nrows=47 
width=8) (actual time=20.482..276.889 rows=121399 loops=3)\n -> Index Scan using \nreferrer_domains_domains_idx on referrer_domains rd (cost=0.00..8.27 \nrows=1 width=8) (actual time=0.024..0.026 rows=1 loops=3)\n Index Cond: \n((\"domain\")::text = 'mediamatters.org'::text)\n -> Bitmap Heap Scan on \nreferrer_paths rp (cost=40.66..5834.77 rows=2105 width=16) (actual \ntime=20.402..210.440 rows=121399 loops=3)\n Recheck Cond: \n(rp.referrer_domain_id = rd.id)\n -> Bitmap Index Scan on \nreferrer_paths_domains_idx (cost=0.00..40.13 rows=2105 width=0) \n(actual time=17.077..17.077 rows=121399 loops=3)\n Index Cond: \n(rp.referrer_domain_id = rd.id)\n -> Index Scan using hits_refer_idx on \nhits h2 (cost=0.00..257.59 rows=149 width=16) (actual \ntime=0.167..0.218 rows=0 loops=364197)\n Index Cond: (h2.referrer_path_id \n= rp.id)\n Filter: ((mm_date_trunc \n('day'::text, created_at) = $0) AND (site_id = 3))\n -> Index Scan using uri_qstrings_pkey on \nuri_qstrings uq (cost=0.00..0.39 rows=1 width=8) (actual \ntime=0.005..0.005 rows=0 loops=151611)\n Index Cond: (h2.uri_qstring_id = uq.id)\n Filter: ((qstring)::text = '? \nf=h_top'::text)\n Total runtime: 83013.098 ms\n\n\n \n QUERY PLAN (8.2.3)\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------------------------\nSort (cost=270110.73..270110.74 rows=1 width=16) (actual \ntime=2116.106..2116.107 rows=3 loops=1)\n Sort Key: \"day\" -> Subquery Scan h (cost=118726.46..270110.72 \nrows=1 width=16) (actual time=1763.504..2116.090 rows=3 loops=1)\n -> HashAggregate (cost=118726.46..118726.47 rows=1 \nwidth=8) (actual time=1678.462..1678.467 rows=3 loops=1)\n -> Bitmap Heap Scan on hits h \n(cost=1827.68..118382.45 rows=68802 width=8) (actual \ntime=56.346..1496.264 rows=334231 loops=1)\n Recheck Cond: (created_at > date_trunc \n('day'::text, (now() - '2 days'::interval)))\n -> Bitmap Index Scan on hits_created_idx \n(cost=0.00..1810.48 rows=68802 width=0) (actual time=55.225..55.225 \nrows=334231 loops=1)\n Index Cond: (created_at > date_trunc \n('day'::text, (now() - '2 days'::interval)))\n SubPlan\n -> Aggregate (cost=151384.23..151384.24 rows=1 width=0) \n(actual time=145.865..145.865 rows=1 loops=3)\n -> Hash Join (cost=4026.42..151384.23 rows=1 \nwidth=0) (actual time=30.663..145.271 rows=2777 loops=3)\n Hash Cond: (rp.referrer_domain_id = rd.id)\n -> Nested Loop (cost=4018.13..151375.82 \nrows=30 width=8) (actual time=30.585..143.498 rows=3174 loops=3)\n -> Hash Join (cost=4018.13..151149.21 \nrows=30 width=8) (actual time=30.550..93.357 rows=3174 loops=3)\n Hash Cond: (h2.uri_qstring_id = \nuq.id)\n -> Bitmap Heap Scan on hits h2 \n(cost=3857.37..150325.60 rows=176677 width=16) (actual \ntime=19.710..60.881 rows=108568 loops=3)\n Recheck Cond: (mm_date_trunc \n('day'::text, created_at) = $0)\n Filter: (site_id = 3)\n -> Bitmap Index Scan on \nhits_date_trunc_day_idx (cost=0.00..3813.20 rows=178042 width=0) \n(actual time=19.398..19.398 rows=111410 loops=3)\n Index Cond: \n(mm_date_trunc('day'::text, created_at) = $0)\n -> Hash (cost=160.24..160.24 \nrows=42 width=8) (actual time=32.417..32.417 rows=141 loops=1)\n -> Bitmap Heap Scan on \nuri_qstrings uq (cost=4.69..160.24 rows=42 width=8) (actual \ntime=31.502..32.352 rows=141 loops=1)\n Recheck Cond: \n((qstring)::text = '?f=h_top'::text)\n -> Bitmap Index Scan \non uri_qstrings_qstring_idx (cost=0.00..4.68 rows=42 width=0) \n(actual time=31.482..31.482 rows=141 loops=1)\n 
Index Cond: \n((qstring)::text = '?f=h_top'::text)\n -> Index Scan using \nreferrer_paths_pkey on referrer_paths rp (cost=0.00..7.54 rows=1 \nwidth=16) (actual time=0.014..0.015 rows=1 loops=9521)\n Index Cond: (h2.referrer_path_id \n= rp.id)\n -> Hash (cost=8.27..8.27 rows=1 width=8) \n(actual time=0.062..0.062 rows=1 loops=1)\n -> Index Scan using \nreferrer_domains_domains_idx on referrer_domains rd (cost=0.00..8.27 \nrows=1 width=8) (actual time=0.058..0.059 rows=1 loops=1)\n Index Cond: ((\"domain\")::text = \n'mediamatters.org'::text)\n Total runtime: 2116.266 ms\n\n\n\n\n\n\nGreetings,We have two servers running pgsql -- an older server running 8.2.3, and a newer (far superior) one running 8.2.4.  One of our reporting queries is running painfully slowly on 8.2.4, but it executes in a reasonable timeframe on 8.2.3.  Below, I've included a contrived, stripped down query which still exhibits the same unintuitively poor performance, as well as its explain analyze output from both servers.  In particular, 8.2.4 opts for filters in a couple places where we would expect index conds.  Also, we've noticed that the 8.2.4 box (in other similar queries) consistently underestimates costs, whereas the 8.2.3 box consistently overestimates.All columns involved in this query are indexed (btrees), and there is a functional index on mm_date_trunc('day', created_at)...where mm_date_trunc is simply an immutable version of date_trunc (fine for our purposes).  The only configuration differences between the servers are various memory settings... work_mem and temp_buffers are 8mb / 16mb, shared buffers 128mb / 512mb on the 8.2.3 and 8.2.4 servers, respectively.  Stats targets are 10 on both, for consistency... but it is worth mentioning that performance was still abysmal under 8.2.4 with 250 as the target.Any insight would be most appreciated, as we're a bit stumped.  
Thanks!Cheers,Dave PirotteDirector of TechnologyMedia Matters for America===============================================================select h.day, h.c as total, (select count(*) as c from hits h2 join uri_qstrings uq on (h2.uri_qstring_id = uq.id) join referrer_paths rp on (h2.referrer_path_id = rp.id) join referrer_domains rd on (rp.referrer_domain_id = rd.id) where mm_date_trunc('day', created_at) = h.day and site_id = 3 and uq.qstring = '?f=h_top' and rd.domain = 'mediamatters.org' ) as h_topfrom ( select mm_date_trunc('day', h.created_at) as day, count(*) as c from hits h where created_at > date_trunc('day', now() - interval '2 days') group by mm_date_trunc('day', h.created_at)) horder by h.day asc;                                                                                       QUERY PLAN (8.2.4)                                                                                       ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=204012.65..204012.66 rows=3 width=16) (actual time=83012.885..83012.885 rows=3 loops=1)   Sort Key: \"day\"   ->  Subquery Scan h  (cost=149811.02..204012.62 rows=3 width=16) (actual time=28875.251..83012.868 rows=3 loops=1)         ->  HashAggregate  (cost=149811.02..149811.06 rows=3 width=8) (actual time=1602.787..1602.794 rows=3 loops=1)               ->  Bitmap Heap Scan on hits h  (cost=6485.90..148079.18 rows=346368 width=8) (actual time=48.222..1358.196 rows=391026 loops=1)                     Recheck Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))                     ->  Bitmap Index Scan on hits_created_idx  (cost=0.00..6399.31 rows=346368 width=0) (actual time=47.293..47.293 rows=391027 loops=1)                           Index Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))         SubPlan           ->  Aggregate  (cost=18067.17..18067.18 rows=1 width=0) (actual time=27136.681..27136.681 rows=1 loops=3)                 ->  Nested Loop  (cost=40.66..18067.16 rows=1 width=0) (actual time=1105.396..27135.496 rows=3394 loops=3)                       ->  Nested Loop  (cost=40.66..18063.56 rows=9 width=8) (actual time=32.132..26837.394 rows=50537 loops=3)                             ->  Nested Loop  (cost=40.66..5869.35 rows=47 width=8) (actual time=20.482..276.889 rows=121399 loops=3)                                   ->  Index Scan using referrer_domains_domains_idx on referrer_domains rd  (cost=0.00..8.27 rows=1 width=8) (actual time=0.024..0.026 rows=1 loops=3)                                         Index Cond: ((\"domain\")::text = 'mediamatters.org'::text)                                   ->  Bitmap Heap Scan on referrer_paths rp  (cost=40.66..5834.77 rows=2105 width=16) (actual time=20.402..210.440 rows=121399 loops=3)                                         Recheck Cond: (rp.referrer_domain_id = rd.id)                                         ->  Bitmap Index Scan on referrer_paths_domains_idx  (cost=0.00..40.13 rows=2105 width=0) (actual time=17.077..17.077 rows=121399 loops=3)                                               Index Cond: (rp.referrer_domain_id = rd.id)                             ->  Index Scan using hits_refer_idx on hits h2  (cost=0.00..257.59 rows=149 width=16) (actual time=0.167..0.218 rows=0 loops=364197)                                   Index Cond: (h2.referrer_path_id = rp.id)                                   Filter: 
((mm_date_trunc('day'::text, created_at) = $0) AND (site_id = 3))                       ->  Index Scan using uri_qstrings_pkey on uri_qstrings uq  (cost=0.00..0.39 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=151611)                             Index Cond: (h2.uri_qstring_id = uq.id)                             Filter: ((qstring)::text = '?f=h_top'::text) Total runtime: 83013.098 ms                                                                                     QUERY PLAN (8.2.3)                                                                                    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Sort  (cost=270110.73..270110.74 rows=1 width=16) (actual time=2116.106..2116.107 rows=3 loops=1)   Sort Key: \"day\"   ->  Subquery Scan h  (cost=118726.46..270110.72 rows=1 width=16) (actual time=1763.504..2116.090 rows=3 loops=1)         ->  HashAggregate  (cost=118726.46..118726.47 rows=1 width=8) (actual time=1678.462..1678.467 rows=3 loops=1)               ->  Bitmap Heap Scan on hits h  (cost=1827.68..118382.45 rows=68802 width=8) (actual time=56.346..1496.264 rows=334231 loops=1)                     Recheck Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))                     ->  Bitmap Index Scan on hits_created_idx  (cost=0.00..1810.48 rows=68802 width=0) (actual time=55.225..55.225 rows=334231 loops=1)                           Index Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))         SubPlan           ->  Aggregate  (cost=151384.23..151384.24 rows=1 width=0) (actual time=145.865..145.865 rows=1 loops=3)                 ->  Hash Join  (cost=4026.42..151384.23 rows=1 width=0) (actual time=30.663..145.271 rows=2777 loops=3)                       Hash Cond: (rp.referrer_domain_id = rd.id)                       ->  Nested Loop  (cost=4018.13..151375.82 rows=30 width=8) (actual time=30.585..143.498 rows=3174 loops=3)                             ->  Hash Join  (cost=4018.13..151149.21 rows=30 width=8) (actual time=30.550..93.357 rows=3174 loops=3)                                   Hash Cond: (h2.uri_qstring_id = uq.id)                                   ->  Bitmap Heap Scan on hits h2  (cost=3857.37..150325.60 rows=176677 width=16) (actual time=19.710..60.881 rows=108568 loops=3)                                         Recheck Cond: (mm_date_trunc('day'::text, created_at) = $0)                                         Filter: (site_id = 3)                                         ->  Bitmap Index Scan on hits_date_trunc_day_idx  (cost=0.00..3813.20 rows=178042 width=0) (actual time=19.398..19.398 rows=111410 loops=3)                                               Index Cond: (mm_date_trunc('day'::text, created_at) = $0)                                   ->  Hash  (cost=160.24..160.24 rows=42 width=8) (actual time=32.417..32.417 rows=141 loops=1)                                         ->  Bitmap Heap Scan on uri_qstrings uq  (cost=4.69..160.24 rows=42 width=8) (actual time=31.502..32.352 rows=141 loops=1)                                               Recheck Cond: ((qstring)::text = '?f=h_top'::text)                                               ->  Bitmap Index Scan on uri_qstrings_qstring_idx  (cost=0.00..4.68 rows=42 width=0) (actual time=31.482..31.482 rows=141 loops=1)                                                     Index Cond: ((qstring)::text = '?f=h_top'::text)              
               ->  Index Scan using referrer_paths_pkey on referrer_paths rp  (cost=0.00..7.54 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=9521)                                   Index Cond: (h2.referrer_path_id = rp.id)                       ->  Hash  (cost=8.27..8.27 rows=1 width=8) (actual time=0.062..0.062 rows=1 loops=1)                             ->  Index Scan using referrer_domains_domains_idx on referrer_domains rd  (cost=0.00..8.27 rows=1 width=8) (actual time=0.058..0.059 rows=1 loops=1)                                   Index Cond: ((\"domain\")::text = 'mediamatters.org'::text) Total runtime: 2116.266 ms", "msg_date": "Fri, 25 May 2007 14:08:52 -0400", "msg_from": "Dave Pirotte <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem on 8.2.4, but not 8.2.3" }, { "msg_contents": "Lo!\n\nreferrer_paths seems to have totally wrong stats. try full analyze on \nit.\nhow many records in total do you have in referrer_paths on 8.2.4 server?\nmight be just a problem of usage pattern change from old system to \nnew (1 row vs. 121399 rows) ?\ndoes not seem to be just a plan problem as the data itself seems to \nbe quite different.\n\nKristo\n\nOn 25.05.2007, at 21:08, Dave Pirotte wrote:\n\n> Greetings,\n>\n> We have two servers running pgsql -- an older server running 8.2.3, \n> and a newer (far superior) one running 8.2.4. One of our reporting \n> queries is running painfully slowly on 8.2.4, but it executes in a \n> reasonable timeframe on 8.2.3. Below, I've included a contrived, \n> stripped down query which still exhibits the same unintuitively \n> poor performance, as well as its explain analyze output from both \n> servers. In particular, 8.2.4 opts for filters in a couple places \n> where we would expect index conds. Also, we've noticed that the \n> 8.2.4 box (in other similar queries) consistently underestimates \n> costs, whereas the 8.2.3 box consistently overestimates.\n>\n> All columns involved in this query are indexed (btrees), and there \n> is a functional index on mm_date_trunc('day', created_at)...where \n> mm_date_trunc is simply an immutable version of date_trunc (fine \n> for our purposes). The only configuration differences between the \n> servers are various memory settings... work_mem and temp_buffers \n> are 8mb / 16mb, shared buffers 128mb / 512mb on the 8.2.3 and 8.2.4 \n> servers, respectively. Stats targets are 10 on both, for \n> consistency... but it is worth mentioning that performance was \n> still abysmal under 8.2.4 with 250 as the target.\n>\n> Any insight would be most appreciated, as we're a bit stumped. 
\n> Thanks!\n>\n> Cheers,\n>\n> Dave Pirotte\n> Director of Technology\n> Media Matters for America\n>\n> ===============================================================\n>\n> select h.day, h.c as total,\n> \t(select count(*) as c\n> \tfrom hits h2\n> \t\tjoin uri_qstrings uq on (h2.uri_qstring_id = uq.id)\n> \t\tjoin referrer_paths rp on (h2.referrer_path_id = rp.id)\n> \t\tjoin referrer_domains rd on (rp.referrer_domain_id = rd.id)\n> \twhere mm_date_trunc('day', created_at) = h.day\n> \t\tand site_id = 3\n> \t\tand uq.qstring = '?f=h_top'\n> \t\tand rd.domain = 'mediamatters.org'\n> \t) as h_top\n> from (\n> \tselect mm_date_trunc('day', h.created_at) as day,\n> \t\tcount(*) as c\n> \tfrom hits h\n> \twhere created_at > date_trunc('day', now() - interval '2 days')\n> \tgroup by mm_date_trunc('day', h.created_at)\n> ) h\n> order by h.day asc;\n>\n> \n> QUERY PLAN (8.2.4)\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> --------------------------------------------\n> Sort (cost=204012.65..204012.66 rows=3 width=16) (actual \n> time=83012.885..83012.885 rows=3 loops=1)\n> Sort Key: \"day\" -> Subquery Scan h \n> (cost=149811.02..204012.62 rows=3 width=16) (actual \n> time=28875.251..83012.868 rows=3 loops=1)\n> -> HashAggregate (cost=149811.02..149811.06 rows=3 \n> width=8) (actual time=1602.787..1602.794 rows=3 loops=1)\n> -> Bitmap Heap Scan on hits h \n> (cost=6485.90..148079.18 rows=346368 width=8) (actual \n> time=48.222..1358.196 rows=391026 loops=1)\n> Recheck Cond: (created_at > date_trunc \n> ('day'::text, (now() - '2 days'::interval)))\n> -> Bitmap Index Scan on hits_created_idx \n> (cost=0.00..6399.31 rows=346368 width=0) (actual \n> time=47.293..47.293 rows=391027 loops=1)\n> Index Cond: (created_at > date_trunc \n> ('day'::text, (now() - '2 days'::interval)))\n> SubPlan\n> -> Aggregate (cost=18067.17..18067.18 rows=1 width=0) \n> (actual time=27136.681..27136.681 rows=1 loops=3)\n> -> Nested Loop (cost=40.66..18067.16 rows=1 \n> width=0) (actual time=1105.396..27135.496 rows=3394 loops=3)\n> -> Nested Loop (cost=40.66..18063.56 \n> rows=9 width=8) (actual time=32.132..26837.394 rows=50537 loops=3)\n> -> Nested Loop (cost=40.66..5869.35 \n> rows=47 width=8) (actual time=20.482..276.889 rows=121399 loops=3)\n> -> Index Scan using \n> referrer_domains_domains_idx on referrer_domains rd \n> (cost=0.00..8.27 rows=1 width=8) (actual time=0.024..0.026 rows=1 \n> loops=3)\n> Index Cond: \n> ((\"domain\")::text = 'mediamatters.org'::text)\n> -> Bitmap Heap Scan on \n> referrer_paths rp (cost=40.66..5834.77 rows=2105 width=16) (actual \n> time=20.402..210.440 rows=121399 loops=3)\n> Recheck Cond: \n> (rp.referrer_domain_id = rd.id)\n> -> Bitmap Index Scan on \n> referrer_paths_domains_idx (cost=0.00..40.13 rows=2105 width=0) \n> (actual time=17.077..17.077 rows=121399 loops=3)\n> Index Cond: \n> (rp.referrer_domain_id = rd.id)\n> -> Index Scan using hits_refer_idx on \n> hits h2 (cost=0.00..257.59 rows=149 width=16) (actual \n> time=0.167..0.218 rows=0 loops=364197)\n> Index Cond: (h2.referrer_path_id \n> = rp.id)\n> Filter: ((mm_date_trunc \n> ('day'::text, created_at) = $0) AND (site_id = 3))\n> -> Index Scan using uri_qstrings_pkey on \n> uri_qstrings uq (cost=0.00..0.39 rows=1 width=8) (actual \n> time=0.005..0.005 rows=0 loops=151611)\n> Index Cond: (h2.uri_qstring_id = uq.id)\n> Filter: ((qstring)::text = '? 
\n> f=h_top'::text)\n> Total runtime: 83013.098 ms\n>\n>\n> \n> QUERY PLAN (8.2.3)\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> -----------------------------------------\n> Sort (cost=270110.73..270110.74 rows=1 width=16) (actual \n> time=2116.106..2116.107 rows=3 loops=1)\n> Sort Key: \"day\" -> Subquery Scan h \n> (cost=118726.46..270110.72 rows=1 width=16) (actual \n> time=1763.504..2116.090 rows=3 loops=1)\n> -> HashAggregate (cost=118726.46..118726.47 rows=1 \n> width=8) (actual time=1678.462..1678.467 rows=3 loops=1)\n> -> Bitmap Heap Scan on hits h \n> (cost=1827.68..118382.45 rows=68802 width=8) (actual \n> time=56.346..1496.264 rows=334231 loops=1)\n> Recheck Cond: (created_at > date_trunc \n> ('day'::text, (now() - '2 days'::interval)))\n> -> Bitmap Index Scan on hits_created_idx \n> (cost=0.00..1810.48 rows=68802 width=0) (actual time=55.225..55.225 \n> rows=334231 loops=1)\n> Index Cond: (created_at > date_trunc \n> ('day'::text, (now() - '2 days'::interval)))\n> SubPlan\n> -> Aggregate (cost=151384.23..151384.24 rows=1 \n> width=0) (actual time=145.865..145.865 rows=1 loops=3)\n> -> Hash Join (cost=4026.42..151384.23 rows=1 \n> width=0) (actual time=30.663..145.271 rows=2777 loops=3)\n> Hash Cond: (rp.referrer_domain_id = rd.id)\n> -> Nested Loop (cost=4018.13..151375.82 \n> rows=30 width=8) (actual time=30.585..143.498 rows=3174 loops=3)\n> -> Hash Join \n> (cost=4018.13..151149.21 rows=30 width=8) (actual \n> time=30.550..93.357 rows=3174 loops=3)\n> Hash Cond: (h2.uri_qstring_id = \n> uq.id)\n> -> Bitmap Heap Scan on hits h2 \n> (cost=3857.37..150325.60 rows=176677 width=16) (actual \n> time=19.710..60.881 rows=108568 loops=3)\n> Recheck Cond: \n> (mm_date_trunc('day'::text, created_at) = $0)\n> Filter: (site_id = 3)\n> -> Bitmap Index Scan on \n> hits_date_trunc_day_idx (cost=0.00..3813.20 rows=178042 width=0) \n> (actual time=19.398..19.398 rows=111410 loops=3)\n> Index Cond: \n> (mm_date_trunc('day'::text, created_at) = $0)\n> -> Hash (cost=160.24..160.24 \n> rows=42 width=8) (actual time=32.417..32.417 rows=141 loops=1)\n> -> Bitmap Heap Scan on \n> uri_qstrings uq (cost=4.69..160.24 rows=42 width=8) (actual \n> time=31.502..32.352 rows=141 loops=1)\n> Recheck Cond: \n> ((qstring)::text = '?f=h_top'::text)\n> -> Bitmap Index \n> Scan on uri_qstrings_qstring_idx (cost=0.00..4.68 rows=42 width=0) \n> (actual time=31.482..31.482 rows=141 loops=1)\n> Index Cond: \n> ((qstring)::text = '?f=h_top'::text)\n> -> Index Scan using \n> referrer_paths_pkey on referrer_paths rp (cost=0.00..7.54 rows=1 \n> width=16) (actual time=0.014..0.015 rows=1 loops=9521)\n> Index Cond: (h2.referrer_path_id \n> = rp.id)\n> -> Hash (cost=8.27..8.27 rows=1 width=8) \n> (actual time=0.062..0.062 rows=1 loops=1)\n> -> Index Scan using \n> referrer_domains_domains_idx on referrer_domains rd \n> (cost=0.00..8.27 rows=1 width=8) (actual time=0.058..0.059 rows=1 \n> loops=1)\n> Index Cond: ((\"domain\")::text = \n> 'mediamatters.org'::text)\n> Total runtime: 2116.266 ms\n>\n>\n>\n>\n>\n\n\nLo!referrer_paths seems to have totally wrong stats. try full analyze on it.how many records in total do you have in referrer_paths on 8.2.4 server?might be just a problem of usage pattern change from old system to new (1 row vs. 
121399 rows) ?does not seem to be just a plan problem as the data itself seems to be quite different.KristoOn 25.05.2007, at 21:08, Dave Pirotte wrote:Greetings,We have two servers running pgsql -- an older server running 8.2.3, and a newer (far superior) one running 8.2.4.  One of our reporting queries is running painfully slowly on 8.2.4, but it executes in a reasonable timeframe on 8.2.3.  Below, I've included a contrived, stripped down query which still exhibits the same unintuitively poor performance, as well as its explain analyze output from both servers.  In particular, 8.2.4 opts for filters in a couple places where we would expect index conds.  Also, we've noticed that the 8.2.4 box (in other similar queries) consistently underestimates costs, whereas the 8.2.3 box consistently overestimates.All columns involved in this query are indexed (btrees), and there is a functional index on mm_date_trunc('day', created_at)...where mm_date_trunc is simply an immutable version of date_trunc (fine for our purposes).  The only configuration differences between the servers are various memory settings... work_mem and temp_buffers are 8mb / 16mb, shared buffers 128mb / 512mb on the 8.2.3 and 8.2.4 servers, respectively.  Stats targets are 10 on both, for consistency... but it is worth mentioning that performance was still abysmal under 8.2.4 with 250 as the target.Any insight would be most appreciated, as we're a bit stumped.  Thanks!Cheers,Dave PirotteDirector of TechnologyMedia Matters for America===============================================================select h.day, h.c as total, (select count(*) as c from hits h2 join uri_qstrings uq on (h2.uri_qstring_id = uq.id) join referrer_paths rp on (h2.referrer_path_id = rp.id) join referrer_domains rd on (rp.referrer_domain_id = rd.id) where mm_date_trunc('day', created_at) = h.day and site_id = 3 and uq.qstring = '?f=h_top' and rd.domain = 'mediamatters.org' ) as h_topfrom ( select mm_date_trunc('day', h.created_at) as day, count(*) as c from hits h where created_at > date_trunc('day', now() - interval '2 days') group by mm_date_trunc('day', h.created_at)) horder by h.day asc;                                                                                       QUERY PLAN (8.2.4)                                                                                       ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=204012.65..204012.66 rows=3 width=16) (actual time=83012.885..83012.885 rows=3 loops=1)   Sort Key: \"day\"   ->  Subquery Scan h  (cost=149811.02..204012.62 rows=3 width=16) (actual time=28875.251..83012.868 rows=3 loops=1)         ->  HashAggregate  (cost=149811.02..149811.06 rows=3 width=8) (actual time=1602.787..1602.794 rows=3 loops=1)               ->  Bitmap Heap Scan on hits h  (cost=6485.90..148079.18 rows=346368 width=8) (actual time=48.222..1358.196 rows=391026 loops=1)                     Recheck Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))                     ->  Bitmap Index Scan on hits_created_idx  (cost=0.00..6399.31 rows=346368 width=0) (actual time=47.293..47.293 rows=391027 loops=1)                           Index Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))         SubPlan           ->  Aggregate  (cost=18067.17..18067.18 rows=1 width=0) (actual time=27136.681..27136.681 rows=1 loops=3)                 ->  
Nested Loop  (cost=40.66..18067.16 rows=1 width=0) (actual time=1105.396..27135.496 rows=3394 loops=3)                       ->  Nested Loop  (cost=40.66..18063.56 rows=9 width=8) (actual time=32.132..26837.394 rows=50537 loops=3)                             ->  Nested Loop  (cost=40.66..5869.35 rows=47 width=8) (actual time=20.482..276.889 rows=121399 loops=3)                                   ->  Index Scan using referrer_domains_domains_idx on referrer_domains rd  (cost=0.00..8.27 rows=1 width=8) (actual time=0.024..0.026 rows=1 loops=3)                                         Index Cond: ((\"domain\")::text = 'mediamatters.org'::text)                                   ->  Bitmap Heap Scan on referrer_paths rp  (cost=40.66..5834.77 rows=2105 width=16) (actual time=20.402..210.440 rows=121399 loops=3)                                         Recheck Cond: (rp.referrer_domain_id = rd.id)                                         ->  Bitmap Index Scan on referrer_paths_domains_idx  (cost=0.00..40.13 rows=2105 width=0) (actual time=17.077..17.077 rows=121399 loops=3)                                               Index Cond: (rp.referrer_domain_id = rd.id)                             ->  Index Scan using hits_refer_idx on hits h2  (cost=0.00..257.59 rows=149 width=16) (actual time=0.167..0.218 rows=0 loops=364197)                                   Index Cond: (h2.referrer_path_id = rp.id)                                   Filter: ((mm_date_trunc('day'::text, created_at) = $0) AND (site_id = 3))                       ->  Index Scan using uri_qstrings_pkey on uri_qstrings uq  (cost=0.00..0.39 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=151611)                             Index Cond: (h2.uri_qstring_id = uq.id)                             Filter: ((qstring)::text = '?f=h_top'::text) Total runtime: 83013.098 ms                                                                                     QUERY PLAN (8.2.3)                                                                                    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Sort  (cost=270110.73..270110.74 rows=1 width=16) (actual time=2116.106..2116.107 rows=3 loops=1)   Sort Key: \"day\"   ->  Subquery Scan h  (cost=118726.46..270110.72 rows=1 width=16) (actual time=1763.504..2116.090 rows=3 loops=1)         ->  HashAggregate  (cost=118726.46..118726.47 rows=1 width=8) (actual time=1678.462..1678.467 rows=3 loops=1)               ->  Bitmap Heap Scan on hits h  (cost=1827.68..118382.45 rows=68802 width=8) (actual time=56.346..1496.264 rows=334231 loops=1)                     Recheck Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))                     ->  Bitmap Index Scan on hits_created_idx  (cost=0.00..1810.48 rows=68802 width=0) (actual time=55.225..55.225 rows=334231 loops=1)                           Index Cond: (created_at > date_trunc('day'::text, (now() - '2 days'::interval)))         SubPlan           ->  Aggregate  (cost=151384.23..151384.24 rows=1 width=0) (actual time=145.865..145.865 rows=1 loops=3)                 ->  Hash Join  (cost=4026.42..151384.23 rows=1 width=0) (actual time=30.663..145.271 rows=2777 loops=3)                       Hash Cond: (rp.referrer_domain_id = rd.id)                       ->  Nested Loop  (cost=4018.13..151375.82 rows=30 width=8) (actual time=30.585..143.498 rows=3174 loops=3)                             ->  Hash Join  
(cost=4018.13..151149.21 rows=30 width=8) (actual time=30.550..93.357 rows=3174 loops=3)                                   Hash Cond: (h2.uri_qstring_id = uq.id)                                   ->  Bitmap Heap Scan on hits h2  (cost=3857.37..150325.60 rows=176677 width=16) (actual time=19.710..60.881 rows=108568 loops=3)                                         Recheck Cond: (mm_date_trunc('day'::text, created_at) = $0)                                         Filter: (site_id = 3)                                         ->  Bitmap Index Scan on hits_date_trunc_day_idx  (cost=0.00..3813.20 rows=178042 width=0) (actual time=19.398..19.398 rows=111410 loops=3)                                               Index Cond: (mm_date_trunc('day'::text, created_at) = $0)                                   ->  Hash  (cost=160.24..160.24 rows=42 width=8) (actual time=32.417..32.417 rows=141 loops=1)                                         ->  Bitmap Heap Scan on uri_qstrings uq  (cost=4.69..160.24 rows=42 width=8) (actual time=31.502..32.352 rows=141 loops=1)                                               Recheck Cond: ((qstring)::text = '?f=h_top'::text)                                               ->  Bitmap Index Scan on uri_qstrings_qstring_idx  (cost=0.00..4.68 rows=42 width=0) (actual time=31.482..31.482 rows=141 loops=1)                                                     Index Cond: ((qstring)::text = '?f=h_top'::text)                             ->  Index Scan using referrer_paths_pkey on referrer_paths rp  (cost=0.00..7.54 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=9521)                                   Index Cond: (h2.referrer_path_id = rp.id)                       ->  Hash  (cost=8.27..8.27 rows=1 width=8) (actual time=0.062..0.062 rows=1 loops=1)                             ->  Index Scan using referrer_domains_domains_idx on referrer_domains rd  (cost=0.00..8.27 rows=1 width=8) (actual time=0.058..0.059 rows=1 loops=1)                                   Index Cond: ((\"domain\")::text = 'mediamatters.org'::text) Total runtime: 2116.266 ms", "msg_date": "Fri, 25 May 2007 22:17:50 +0300", "msg_from": "Kristo Kaiv <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3" }, { "msg_contents": "Dave Pirotte <[email protected]> writes:\n> We have two servers running pgsql -- an older server running 8.2.3, \n> and a newer (far superior) one running 8.2.4. 
One of our reporting \n> queries is running painfully slowly on 8.2.4, but it executes in a \n> reasonable timeframe on 8.2.3.\n\nAre you sure you've analyzed all these tables in the 8.2.4 database?\nSome of the rowcount estimates seem a bit far off.\n\nI looked through the CVS logs and didn't find any planner changes\nbetween 8.2.3 and 8.2.4 that seem likely to affect your query, so\nI'm thinking it must be a statistical discrepancy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 May 2007 15:56:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3 " }, { "msg_contents": "On Fri, May 25, 2007 at 03:56:35PM -0400, Tom Lane wrote:\n> I looked through the CVS logs and didn't find any planner changes\n> between 8.2.3 and 8.2.4 that seem likely to affect your query, so\n> I'm thinking it must be a statistical discrepancy.\n\nIt looks like the estimated cost is lower for 8.2.4 -- could it be that the\nfact that he's giving it more memory lead to the planner picking a plan that\nhappens to be worse?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 25 May 2007 22:05:23 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> It looks like the estimated cost is lower for 8.2.4 -- could it be that the\n> fact that he's giving it more memory lead to the planner picking a plan that\n> happens to be worse?\n\nOffhand I don't think so. More work_mem might make a hash join look\ncheaper (or a sort for a mergejoin), but the problem here seems to be\nthat it's switching away from a hash and to a nestloop. Which is a\nloser because there are many more outer-relation rows than it's\nexpecting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 May 2007 16:33:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3 " }, { "msg_contents": "Thanks for the quick responses. :-) The data is almost identical, \nbetween the two servers: 8.2.3 has 882198 records, 8.2.4 has 893121. \nFor background, I pg_dump'ed the data into the 8.2.4 server \nyesterday, and analyzed with the stats target of 250, then reanalyzed \nwith target 10. So, the statistics should theoretically be ok. \nRunning a vacuum full analyze on referrer_paths, per Kristo's \nsuggestion, didn't affect the query plan.\n\nWe downgraded to 8.2.3 just to rule that out, upped stats target to \n100, analyzed, and are still experiencing the same behavior -- it's \nstill coming up with the same bogus rowcount estimates. Over the \nweekend I'll lower the memory and see if that does anything, just to \nrule that out... Any other thoughts? Thanks so much for your time \nand suggestions thus far.\n\nCheers,\nDave\n\nOn May 25, 2007, at 4:33 PM, Tom Lane wrote:\n\n> \"Steinar H. Gunderson\" <[email protected]> writes:\n>> It looks like the estimated cost is lower for 8.2.4 -- could it be \n>> that the\n>> fact that he's giving it more memory lead to the planner picking a \n>> plan that\n>> happens to be worse?\n>\n> Offhand I don't think so. More work_mem might make a hash join look\n> cheaper (or a sort for a mergejoin), but the problem here seems to be\n> that it's switching away from a hash and to a nestloop. 
Which is a\n> loser because there are many more outer-relation rows than it's\n> expecting.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\nDave Pirotte\nDirector of Technology\nMedia Matters for America\[email protected]\nphone: 202-756-4122\n\n\n\nThanks for the quick responses.  :-)  The data is almost identical, between the two servers: 8.2.3 has 882198 records, 8.2.4 has 893121.  For background, I pg_dump'ed the data into the 8.2.4 server yesterday, and analyzed with the stats target of 250, then reanalyzed with target 10.   So, the statistics should theoretically be ok.  Running a vacuum full analyze on referrer_paths, per Kristo's suggestion, didn't affect the query plan.   We downgraded to 8.2.3 just to rule that out, upped stats target to 100, analyzed, and are still experiencing the same behavior -- it's still coming up with the same bogus rowcount estimates.  Over the weekend I'll lower the memory and see if that does anything, just to rule that out...  Any other thoughts?  Thanks so much for your time and suggestions thus far.Cheers,DaveOn May 25, 2007, at 4:33 PM, Tom Lane wrote:\"Steinar H. Gunderson\" <[email protected]> writes: It looks like the estimated cost is lower for 8.2.4 -- could it be that thefact that he's giving it more memory lead to the planner picking a plan thathappens to be worse? Offhand I don't think so.  More work_mem might make a hash join lookcheaper (or a sort for a mergejoin), but the problem here seems to bethat it's switching away from a hash and to a nestloop.  Which is aloser because there are many more outer-relation rows than it'sexpecting. regards, tom lane---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at                http://www.postgresql.org/about/donate Dave PirotteDirector of TechnologyMedia Matters for [email protected]: 202-756-4122", "msg_date": "Fri, 25 May 2007 17:37:36 -0400", "msg_from": "Dave Pirotte <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3 " }, { "msg_contents": "these bogus rowcount estimates are a bit strange. if you have 800K \nrows and select 100K of them the rowcount estimate should most likely \ncome from the histogram for the column. can you check what the \nhistograms are for\nreferrer path tables refferrer_domain where id <= referrer_domain \n'mediamatters.org' both in 8.2.3 and 8.2.4\nI still think this happens because of skewed statistics. More memory \nshould encourage the planner to choose hash join over nested loop afaik.\n\nKristo\n\nOn 26.05.2007, at 0:37, Dave Pirotte wrote:\n\n> Thanks for the quick responses. :-) The data is almost identical, \n> between the two servers: 8.2.3 has 882198 records, 8.2.4 has \n> 893121. For background, I pg_dump'ed the data into the 8.2.4 \n> server yesterday, and analyzed with the stats target of 250, then \n> reanalyzed with target 10. So, the statistics should \n> theoretically be ok. Running a vacuum full analyze on \n> referrer_paths, per Kristo's suggestion, didn't affect the query plan.\n>\n> We downgraded to 8.2.3 just to rule that out, upped stats target to \n> 100, analyzed, and are still experiencing the same behavior -- it's \n> still coming up with the same bogus rowcount estimates. 
Over the \n> weekend I'll lower the memory and see if that does anything, just \n> to rule that out... Any other thoughts? Thanks so much for your \n> time and suggestions thus far.\n>\n> Cheers,\n> Dave\n>\n> On May 25, 2007, at 4:33 PM, Tom Lane wrote:\n>\n>> \"Steinar H. Gunderson\" <[email protected]> writes:\n>>> It looks like the estimated cost is lower for 8.2.4 -- could it \n>>> be that the\n>>> fact that he's giving it more memory lead to the planner picking \n>>> a plan that\n>>> happens to be worse?\n>>\n>> Offhand I don't think so. More work_mem might make a hash join look\n>> cheaper (or a sort for a mergejoin), but the problem here seems to be\n>> that it's switching away from a hash and to a nestloop. Which is a\n>> loser because there are many more outer-relation rows than it's\n>> expecting.\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 7: You can help support the PostgreSQL project by donating at\n>>\n>> http://www.postgresql.org/about/donate\n>>\n>\n> Dave Pirotte\n> Director of Technology\n> Media Matters for America\n> [email protected]\n> phone: 202-756-4122\n>\n>\n\n\nthese bogus rowcount estimates are a bit strange. if you have 800K rows and select 100K of them the rowcount estimate should most likely come from the histogram for the column. can you check what the histograms are for referrer path tables refferrer_domain where id <= referrer_domain 'mediamatters.org' both in 8.2.3 and 8.2.4 I still think this happens because of skewed statistics. More memory should encourage the planner to choose hash join over nested loop afaik.KristoOn 26.05.2007, at 0:37, Dave Pirotte wrote:Thanks for the quick responses.  :-)  The data is almost identical, between the two servers: 8.2.3 has 882198 records, 8.2.4 has 893121.  For background, I pg_dump'ed the data into the 8.2.4 server yesterday, and analyzed with the stats target of 250, then reanalyzed with target 10.   So, the statistics should theoretically be ok.  Running a vacuum full analyze on referrer_paths, per Kristo's suggestion, didn't affect the query plan.   We downgraded to 8.2.3 just to rule that out, upped stats target to 100, analyzed, and are still experiencing the same behavior -- it's still coming up with the same bogus rowcount estimates.  Over the weekend I'll lower the memory and see if that does anything, just to rule that out...  Any other thoughts?  Thanks so much for your time and suggestions thus far.Cheers,DaveOn May 25, 2007, at 4:33 PM, Tom Lane wrote:\"Steinar H. Gunderson\" <[email protected]> writes: It looks like the estimated cost is lower for 8.2.4 -- could it be that thefact that he's giving it more memory lead to the planner picking a plan thathappens to be worse? Offhand I don't think so.  More work_mem might make a hash join lookcheaper (or a sort for a mergejoin), but the problem here seems to bethat it's switching away from a hash and to a nestloop.  Which is aloser because there are many more outer-relation rows than it'sexpecting. regards, tom lane---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at                http://www.postgresql.org/about/donate Dave PirotteDirector of TechnologyMedia Matters for [email protected]: 202-756-4122", "msg_date": "Sat, 26 May 2007 15:41:27 +0300", "msg_from": "Kristo Kaiv <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem on 8.2.4, but not 8.2.3 " } ]
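Kristo's suggestion to compare the histograms can be done directly against pg_stats, and the statistics target can be raised for just the column whose estimate goes wrong (referrer_paths.referrer_domain_id, estimated at 2105 rows against an actual 121399 in the 8.2.4 plan); a minimal sketch:

-- Compare what the planner knows about the join column on both servers.
SELECT n_distinct, most_common_vals, most_common_freqs, histogram_bounds
FROM pg_stats
WHERE tablename = 'referrer_paths'
  AND attname  = 'referrer_domain_id';

-- Raise the target for this one column (independently of
-- default_statistics_target) and refresh the statistics.
ALTER TABLE referrer_paths ALTER COLUMN referrer_domain_id SET STATISTICS 100;
ANALYZE referrer_paths;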
[ { "msg_contents": "\"Also sprach Kenneth Marshall:\"\n> improvement from coalescing the packets. Good luck in your investigations.\n\nWhile I am recompiling stuff, just some stats.\n\nTypical network traffic analysis during the PG runs:\n\n Total Packets Processed 493,499\n Unicast 100.0% 493,417\n Broadcast 0.0% 82\n Multicast 0.0% 0\n pktCast distribution chart\n Shortest 42 bytes\n Average Size 192 bytes\n Longest 1,514 bytes\n <= 64 bytes 0.0% 158\n 64 to 128 bytes 77.3% 381,532\n 129 to 256 bytes 6.8% 33,362\n 257 to 512 bytes 8.6% 42,535\n 513 to 1024 bytes 4.0% 19,577\n 1025 to 1518 bytes 3.3% 16,335\n \nTypical application rusage stats:\n\n time ./c -timeout 12000 -database postgresql://pebbles/d /tmp/tty_io..c\n user system elapsed cpu\n 7.866u 6.038s 5:49.13 3.9% 0+0k 0+0io 0pf+0w\n \nThose stats show the system lost in i/o. It's neither in kernel nor in\nuserspace. Presumably the other side plus networking was the holdup.\n\nFor comparison, against localhost via loopback (\"fake\" networking):\n\n time ./c -timeout 12000 -database postgresql://localhost/d /tmp/tty_io..c\n user system elapsed cpu\n 9.483u 5.321s 2:41.78 9.1% 0+0k 0+0io 0pf+0w\n\nbut in that case postmaster was doing about 54% cpu, so the overall\ncpu for server + client is 63%.\n\nI moved to a unix domain socket and postmaster alone went to 68%.\n\n\n time ./c -timeout 12000 -database postgresql://unix/var/run/postgresql/d /tmp/tty_io..c\n user system elapsed cpu\n 9.569u 3.698s 2:52.41 7.6% 0+0k 0+0io 0pf+0w\n\nThe elapsed time is not much different between unix and localhost. One can\nsee that there is some i/o holdup because the two threads ought to do 100%\nbetween them if handover of info were costless. The difference (the system\nwas queiscent o/w apart from the monitoring software, which shows only a\nfraction of a percent loading). There were no memory shortages and swap\nwas disabled for the test (both sides)\n\nFor comparison, running against gdbm straignt to disk\n\n time ./c -timeout 12000 /tmp/tty_io..c\n user system elapsed cpu\n 2.637u 0.735s 0:05.34 62.9% 0+0k 0+0io 0pf+0w\n\nThrough localhost:\n\n time ./c -timeout 12000 -database gdbm://localhost/ptb/c /tmp/tty_io..c\n user system elapsed cpu\n 2.746u 3.699s 0:16.00 40.1% 0+0k 0+0io 0pf+0w\n\n(the server process was at 35% cpu, for 75% total).\n \nAcross the net:\n\n time ./c -timeout 12000 -database gdbm://pebbles/ptb/c /tmp/tty_io..c\n user system elapsed cpu\n 2.982u 4.430s 1:03.44 7.9% 0+0k 0+0io 0pf+0w\n\n(the server was at 7% cpu)\n\n\nHave to go shopping ....\n \nPeter\n", "msg_date": "Fri, 25 May 2007 20:57:34 +0200 (MET DST)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: general PG network slowness (possible cure) (repost)" } ]
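The loopback-versus-Unix-socket comparison above can also be reproduced with stock psql rather than the custom client; a rough sketch, where queries.sql is a hypothetical file holding the same workload and the socket directory is taken from the connection strings above:

    # TCP over the loopback interface
    $ time psql -h localhost -d d -f queries.sql > /dev/null

    # Unix-domain socket: a -h value starting with / is treated as the socket directory
    $ time psql -h /var/run/postgresql -d d -f queries.sql > /dev/null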
[ { "msg_contents": "I have a busy postgresql server running running on a raid1 of 2 15k rpm\nscsi drives.\n\nI have been running into the problem of maxed out IO bandwidth. I would\nlike to convert my raid1 into a raid10 but that would require a full\nrebuild which is more downtime than I want so I am looking into other\nalternatives.\n\nThe best one I have come up with is moving the xlog/wal (can someone\nconfirm whether these are the same thing?) to another physical drive. I\nalso think it may be beneficial to move some indexes to another drive as\nwell (same one as xlog).\n\nSome questions on this:\n1. Can the database survive loss/corruption of the xlog and indexes in a\nrecoverable way? To save money (and because I won't need the throughput as\nmuch), I am thinking on making this index/wal/xlog drive a single cheap\nsata drive (or maybe a non-raided 15k scsi for 60% more money). However\nwithout the redundancy of a mirror I am concerned about drive failure.\nLoss of several mins of recent transactions in a serious crash is\nacceptable to be, but full/serious database corruption (the likes of fsync\noff) is not.\n\n2. Is there any point using a high performance (ie scsi) disk for this, or\nwould the mirror containing the majority of the data still be the major\nbottleneck causing the disk usage to not exceed sata performance anyway?\n\n3. Is there any easy way to move ALL indexes to another drive? Is this a\ngood performance idea or would they just bottleneck each other seriously?\n\n\nOther info for reference\nRunning postgresql 8.2 on FreeBSD 6.1\nserver is a core2 with 4gb of ram. CPU usage is moderate.\n\n\nAlso, can anyone recommend a good shared_buffers size? The server is\ndedicated to postgres except for half a gig used by memcached. Right now I\nhave it set at 51200 which may be too high (I've read varying suggestions\nwith this and I'm not sure how aggressive FreeBSD6's IO cache is).\n\nAnd any suggestions on what effective_cache_size I should use on this\nhardware and OS? I've been using 384MB but I don't know if this is optimal\nor not.\n", "msg_date": "Fri, 25 May 2007 14:43:41 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Adding disks/xlog & index" }, { "msg_contents": "[email protected] writes:\n> The best one I have come up with is moving the xlog/wal (can someone\n> confirm whether these are the same thing?) to another physical drive.\n\nYeah, two names for same thing.\n\n> I also think it may be beneficial to move some indexes to another drive as\n> well (same one as xlog).\n\nDepends on how the I/O workload works out. On systems that have fairly\nheavy write traffic, the standard advice is that you want WAL on its own\ndedicated spindle, because the less that head needs to move the faster\nyou can write WAL, and WAL output speed is going to determine how fast\nyou can perform updates.\n\nIf it's a read-mostly database then maybe you can ignore that advice and\nworry more about separating indexes from tables.\n\n> 1. Can the database survive loss/corruption of the xlog and indexes in a\n> recoverable way? To save money (and because I won't need the throughput as\n> much), I am thinking on making this index/wal/xlog drive a single cheap\n> sata drive (or maybe a non-raided 15k scsi for 60% more money).\n\nDo not go cheap on the WAL drive --- you lose WAL, you're in serious\ntrouble. Indexes can always be rebuilt with REINDEX, so they're maybe\na bit more expendable.\n\n> 3. 
Is there any easy way to move ALL indexes to another drive?\n\nNo, I think you have to move 'em one at a time :-(. The standard advice\nfor this is to set up a plpgsql function that scans the catalogs and\nissues the commands you want (ALTER INDEX SET TABLESPACE in this case).\n\n> Is this a\n> good performance idea or would they just bottleneck each other seriously?\n\nImpossible to tell without a lot more details than you provided. I'd\nsuggest you try it and see.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 May 2007 19:47:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding disks/xlog & index " }, { "msg_contents": "<[email protected]> writes:\n\n> Some questions on this:\n> 1. Can the database survive loss/corruption of the xlog and indexes in a\n> recoverable way? To save money (and because I won't need the throughput as\n> much), I am thinking on making this index/wal/xlog drive a single cheap\n> sata drive (or maybe a non-raided 15k scsi for 60% more money). However\n> without the redundancy of a mirror I am concerned about drive failure.\n> Loss of several mins of recent transactions in a serious crash is\n> acceptable to be, but full/serious database corruption (the likes of fsync\n> off) is not.\n\nLosing any WAL that the database has fsynced is exactly like having fsync off.\n\n> 2. 
Is there any point using a high performance (ie scsi) disk for this, or\n> would the mirror containing the majority of the data still be the major\n> bottleneck causing the disk usage to not exceed sata performance anyway?\n\nWell that depends on your database traffic. In most databases the volume of\nWAL traffic is substantially less than the i/o traffic to the data drives. So\nyou usually don't need to be able to sustain high i/o bandwidth to the WAL\ndrive.\n\nHowever in some database loads the latency to the WAL drive does matter. This\nis especially true if you're executing a lot of short transactions and\nresponse time is critical. Especially if you aren't executing many such\ntransactions in parallel. So for example if you're processing a serial batch\nof short transactions and committing each one as a separate transaction. In\nthat case you would want a drive that can fsync fast which either means a\nbattery backed cache or 15kRPM drive. It doesn't necessarily mean you need a\nbit raid array though.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Sat, 26 May 2007 01:35:43 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding disks/xlog & index" } ]
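Tom's catalog-scanning suggestion does not strictly need a plpgsql function; one way to sketch the same idea from the shell is to let psql generate the statements and feed them back in. The tablespace name fastdisk, the database name mydb and the public schema are assumptions here, and mixed-case identifiers would additionally need quote_ident():

    # generate one ALTER INDEX ... SET TABLESPACE statement per index, then execute them
    $ psql -At -d mydb -c "SELECT 'ALTER INDEX ' || schemaname || '.' || indexname ||
                                  ' SET TABLESPACE fastdisk;'
                           FROM pg_indexes
                           WHERE schemaname = 'public'" | psql -d mydb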
[ { "msg_contents": "We're thinking of building some new servers. We bought some a while back that have ECC (error correcting) RAM, which is absurdly expensive compared to the same amount of non-ECC RAM. Does anyone have any real-life data about the error rate of non-ECC RAM, and whether it matters or not? In my long career, I've never once had a computer that corrupted memory, or at least I never knew if it did. ECC sound like a good idea, but is it solving a non-problem?\n\nThanks,\nCraig\n", "msg_date": "Fri, 25 May 2007 18:45:15 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "ECC RAM really needed?" }, { "msg_contents": "On Fri, May 25, 2007 at 18:45:15 -0700,\n Craig James <[email protected]> wrote:\n> We're thinking of building some new servers. We bought some a while back \n> that have ECC (error correcting) RAM, which is absurdly expensive compared \n> to the same amount of non-ECC RAM. Does anyone have any real-life data \n> about the error rate of non-ECC RAM, and whether it matters or not? In my \n> long career, I've never once had a computer that corrupted memory, or at \n> least I never knew if it did. ECC sound like a good idea, but is it \n> solving a non-problem?\n\nIn the past when I purchased ECC ram it wasn't that much more expensive\nthan nonECC ram.\n\nWikipedia suggests a rule of thumb of one error per month per gigabyte,\nthough suggests error rates vary widely. They reference a paper that should\nprovide you with more background.\n", "msg_date": "Fri, 25 May 2007 21:15:53 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed?" }, { "msg_contents": "On Fri, 25 May 2007, Bruno Wolff III wrote:\n\n> Wikipedia suggests a rule of thumb of one error per month per gigabyte, \n> though suggests error rates vary widely. They reference a paper that \n> should provide you with more background.\n\nThe paper I would recommend is\n\nhttp://www.tezzaron.com/about/papers/soft_errors_1_1_secure.pdf\n\nwhich is a summary of many other people's papers, and quite informative. \nI know I had no idea before reading it how much error rates go up with \nincreasing altitute.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 26 May 2007 00:01:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed?" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> The paper I would recommend is\n> http://www.tezzaron.com/about/papers/soft_errors_1_1_secure.pdf\n> which is a summary of many other people's papers, and quite informative. \n> I know I had no idea before reading it how much error rates go up with \n> increasing altitute.\n\nNot real surprising if you figure the problem is mostly cosmic rays.\n\nAnyway, this paper says\n\n> Even using a relatively conservative error rate (500 FIT/Mbit), a\n> system with 1 GByte of RAM can expect an error every two weeks;\n\nwhich should pretty much cure any idea that you want to run a server\nwith non-ECC memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 May 2007 00:19:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed? " }, { "msg_contents": "On Fri, May 25, 2007 at 06:45:15PM -0700, Craig James wrote:\n>We're thinking of building some new servers. 
We bought some a while back \n>that have ECC (error correcting) RAM, which is absurdly expensive compared \n>to the same amount of non-ECC RAM. Does anyone have any real-life data \n>about the error rate of non-ECC RAM, and whether it matters or not? In my \n>long career, I've never once had a computer that corrupted memory, or at \n>least I never knew if it did. \n\n...because ECC RAM will correct single bit errors. FWIW, I've seen *a \nlot* of single bit errors over the years. Some systems are much better \nabout reporting than others, but any system will have occasional errors. \nAlso, if a stick starts to go bad you'll generally be told about with \nECC memory, rather than having the system just start to flake out. \n\nMike Stone\n", "msg_date": "Sat, 26 May 2007 08:43:15 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed?" }, { "msg_contents": "On Sat, May 26, 2007 at 08:43:15AM -0400, Michael Stone wrote:\n> On Fri, May 25, 2007 at 06:45:15PM -0700, Craig James wrote:\n> >We're thinking of building some new servers. We bought some a while back \n> >that have ECC (error correcting) RAM, which is absurdly expensive compared \n> >to the same amount of non-ECC RAM. Does anyone have any real-life data \n> >about the error rate of non-ECC RAM, and whether it matters or not? In my \n> >long career, I've never once had a computer that corrupted memory, or at \n> >least I never knew if it did. \n> ...because ECC RAM will correct single bit errors. FWIW, I've seen *a \n> lot* of single bit errors over the years. Some systems are much better \n> about reporting than others, but any system will have occasional errors. \n> Also, if a stick starts to go bad you'll generally be told about with \n> ECC memory, rather than having the system just start to flake out. \n\nFirst: I would use ECC RAM for a server. The memory is not\nsignificantly more expensive.\n\nNow that this is out of the way - I found this thread interesting because\nalthough it talked about RAM bit errors, I haven't seen reference to the\nsignificance of RAM bit errors.\n\nQuite a bit of memory is only rarely used (sent out to swap or flushed\nbefore it is accessed), or used in a read-only capacity in a limited form.\nFor example, if searching table rows - as long as the row is not selected,\nand the bit error is in a field that isn't involved in the selection\ncriteria, who cares if it is wrong?\n\nSo, the question then becomes, what percentage of memory is required\nto be correct all of the time? I believe the estimates for bit error\nare high estimates with regard to actual effect. Stating that a bit\nmay be wrong once every two weeks does not describe effect. In my\nopinion, software defects have a similar estimate for potential for\ndamage to occur.\n\nIn the last 10 years - the only problems with memory I have ever\nsuccessfully diagnosed were with cheap hardware running in a poor\nenvironment, where the problem became quickly obvious, to the point\nthat the system would be unusable or the BIOS would refuse to boot\nwith the broken memory stick. (This paragraph represents the primary\nstate of many of my father's machines :-) ) Replacing the memory\nstick made the problems go away.\n\nIn any case - the word 'cheap' is significant in the above paragraph.\nnon-ECC RAM should be considered 'cheap' memory. It will work fine\nmost of the time and most people will never notice a problem.\n\nDo you want to be the one person who does notice a problem? 
:-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Sat, 26 May 2007 10:52:14 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed?" }, { "msg_contents": "On Sat, May 26, 2007 at 10:52:14AM -0400, [email protected] wrote:\n> Do you want to be the one person who does notice a problem? :-)\n\nRight, and notice that when you notice the problem _may not_ be when\nit happens. The problem with errors in memory (or on disk\ncontrollers, another place not to skimp in your hardware budget for\ndatabase machines) is that the unnoticed failure could well write\ncorrupted data out. It's some time later that you notice you have\nthe problem, when you go to look at the data and discover you have\ngarbage.\n\nIf your data is worth storing, it's worth storing correctly, and so\ndoing things to improve the chances of correct storage is a good\nidea.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nEverything that happens in the world happens at some place.\n\t\t--Jane Jacobs \n", "msg_date": "Sun, 27 May 2007 10:27:08 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECC RAM really needed?" } ]
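As a rough sanity check on the figure Tom quotes: 500 FIT/Mbit means 500 failures per 10^9 device-hours per megabit, and 1 GByte is 8192 Mbit, so the mean time between soft errors works out to roughly ten days, i.e. the same "error every two weeks" ballpark:

    # hours between errors = 1e9 / (500 * 8192); divide by 24 for days
    $ echo 'scale=1; 1000000000 / (500 * 8192) / 24' | bc
    10.1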
[ { "msg_contents": "Since PITR has to enable archiving does this not increase the amount \nof disk I/O required ?\n\nDave\n", "msg_date": "Mon, 28 May 2007 08:45:38 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "PITR performance costs" }, { "msg_contents": "Dave Cramer <[email protected]> wrote:\n\n> Since PITR has to enable archiving does this not increase the amount \n> of disk I/O required ?\n\nIt does increase the required amount of I/O.\n\n-- \nBill Moran\nhttp://www.potentialtech.com\n", "msg_date": "Mon, 28 May 2007 08:53:43 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "am Mon, dem 28.05.2007, um 8:45:38 -0400 mailte Dave Cramer folgendes:\n> Since PITR has to enable archiving does this not increase the amount \n> of disk I/O required ?\n\nYes. But you can use a different hard drive for this log.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 28 May 2007 14:54:28 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "Dave Cramer wrote:\n> Since PITR has to enable archiving does this not increase the amount of \n> disk I/O required ?\n\nThere's no difference in normal DML operations, but some bulk operations \nlike CREATE INDEX that don't otherwise generate WAL, need to be WAL \nlogged when archiving is enabled.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 28 May 2007 17:31:33 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "Heikki,\n\nDon't the archived logs have to be copied as well as the regular WAL \nlogs get recycled ?\n\nDave\nOn 28-May-07, at 12:31 PM, Heikki Linnakangas wrote:\n\n> Dave Cramer wrote:\n>> Since PITR has to enable archiving does this not increase the \n>> amount of disk I/O required ?\n>\n> There's no difference in normal DML operations, but some bulk \n> operations like CREATE INDEX that don't otherwise generate WAL, \n> need to be WAL logged when archiving is enabled.\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 28 May 2007 14:48:55 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "Dave, et al,\n\n* Dave Cramer ([email protected]) wrote:\n> Don't the archived logs have to be copied as well as the regular WAL \n> logs get recycled ?\n\nYes, but I'd expect at the point they're being copied off to some other\nstore (probably a seperate disk, or even over the network to another\nsystem, etc), they're probably in the system cache, so you're probably\nnot going out to disk to get those blocks anyway. 
That might not be the\ncase on a slow-write system, but in those cases it seems at least\nsomewhat unlikely you'll be hit very hard by the occational 16MB copy\noff the disk...\n\n\tThanks,\n\n\t\tStephen\n\n> On 28-May-07, at 12:31 PM, Heikki Linnakangas wrote:\n> \n> >Dave Cramer wrote:\n> >>Since PITR has to enable archiving does this not increase the \n> >>amount of disk I/O required ?\n> >\n> >There's no difference in normal DML operations, but some bulk \n> >operations like CREATE INDEX that don't otherwise generate WAL, \n> >need to be WAL logged when archiving is enabled.\n> >\n> >-- \n> > Heikki Linnakangas\n> > EnterpriseDB http://www.enterprisedb.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq", "msg_date": "Mon, 28 May 2007 15:46:03 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "On Mon, 2007-05-28 at 08:45 -0400, Dave Cramer wrote:\n\n> Since PITR has to enable archiving does this not increase the amount \n> of disk I/O required ?\n\nAs Heikki says, some operations need logging when PITR is on; these are\nnow documented in the performance tips section of the latest dev docs:\nhttp://developer.postgresql.org/pgdocs/postgres/populate.html#POPULATE-PITR\n\nThis isn't additional logging because of PITR, its just that we've had\nto exclude PITR from some recent tuning operations. I'll be looking at\nways of making that optional in certain cases.\n\nOverall, the cost of shipping WAL files away has been measured in large\nscale tests by Mark Wong to be around 1% drop in measured peak\ntransaction throughput, tests about ~2 years ago now on 8.0. It's\npossible that has increased as we have further tuned the server, but I'm\nthinking its still fairly negligible overall.\n\nReplication solutions currently weigh in significantly more than this\noverhead, which is one reason to make me start thinking about log based\nreplication in future releases.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 May 2007 21:40:08 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" }, { "msg_contents": "On 5/28/07, Dave Cramer <[email protected]> wrote:\n> Since PITR has to enable archiving does this not increase the amount\n> of disk I/O required ?\n\nI've set up warm standbys on a few servers (some of them quite\nbusy!)...the additional load is virtually unmeasurable. I usually\ndon't copy the files locally...I scp them off to some other server.\nWhen archived, the WAL files are likely cached but there is some\noverhead to copying them off however.\n\nmerlin\n", "msg_date": "Tue, 29 May 2007 09:06:56 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR performance costs" } ]
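For reference, the archiving being discussed is just the archive_command hook in postgresql.conf; a minimal sketch of the two variants mentioned above, with placeholder host and directory names (a production command would normally also guard against overwriting existing files):

    # postgresql.conf -- %p is the path of the completed 16MB WAL segment, %f its file name
    archive_command = 'scp %p standby:/var/backups/wal/%f'    # ship to another machine
    #archive_command = 'cp %p /mnt/archive_disk/wal/%f'       # or just to a different spindle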
[ { "msg_contents": "Hi \n\nI am currently running a vacuum analyse through PgAdmin on a PostgreSQL\n8.1.9 database that takes forever without doing anything: no\n(noticeable) disk activity or (noticeable) CPU activity. \n\nThe mesage tab in PgAdmin says:\n\n...\nDetail: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 568.16 sec\n....\n\nand lots of entries looking just like this ( 0 % CPU, > 500 secs).\n\nThere are no other connections to the database and the machine does not\ndo anything else than me typing this e-mail and playing Metallica MP3's.\n\nCould this be because of my Cost-Based Vacuum Delay settings ?\n\nvacuum_cost_delay = 200\nvacuum_cost_page_hit = 6\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\nvacuum_cost_limit = 100\n\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n", "msg_date": "Tue, 29 May 2007 19:03:43 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum takes forever" }, { "msg_contents": "\n\n> Could this be because of my Cost-Based Vacuum Delay settings ?\n\n\tYeah. It is supposed to slow down VACUUM so it doesn't kill your server, \nbut it is not aware of the load. It will also slow it down if there is no \nload. That is its purpose after all ;)\n\tIf you want fast vacuum, issue SET vacuum_cost_delay = 0; before.\n\n\n>\n> vacuum_cost_delay = 200\n> vacuum_cost_page_hit = 6\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> vacuum_cost_limit = 100\n>\n>\n>\n\n\n", "msg_date": "Tue, 29 May 2007 19:16:41 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "On Tue, 2007-05-29 at 19:16 +0200, PFC wrote:\n> \n> > Could this be because of my Cost-Based Vacuum Delay settings ?\n> \n> \tYeah. It is supposed to slow down VACUUM so it doesn't kill your server, \n> but it is not aware of the load. It will also slow it down if there is no \n> load. That is its purpose after all ;)\n> \tIf you want fast vacuum, issue SET vacuum_cost_delay = 0; before.\nThanks, I tried it and it worked. I did not know that changing this\nsetting would result in such a performance drop ( I just followed an\nadvise I read on http://www.powerpostgresql.com/PerfList/) which\nmentioned a tripling of the the execution time. Not a change from\n8201819 ms to 17729 ms.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n", "msg_date": "Tue, 29 May 2007 19:56:07 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "Joost Kraaijeveld wrote:\n> Hi \n> \n> I am currently running a vacuum analyse through PgAdmin on a PostgreSQL\n> 8.1.9 database that takes forever without doing anything: no\n> (noticeable) disk activity or (noticeable) CPU activity. 
\n> \n> The mesage tab in PgAdmin says:\n> \n> ...\n> Detail: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.01s/0.00u sec elapsed 568.16 sec\n> ....\n> \n> and lots of entries looking just like this ( 0 % CPU, > 500 secs).\n> \n> There are no other connections to the database and the machine does not\n> do anything else than me typing this e-mail and playing Metallica MP3's.\n\nCliff, Jason or Rob era? Could be important...\n\n:-)\n\nRegards, Dave.\n", "msg_date": "Tue, 29 May 2007 21:43:46 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "Dave Page wrote:\n\n>> and lots of entries looking just like this ( 0 % CPU, > 500 secs).\n>>\n>> There are no other connections to the database and the machine does not\n>> do anything else than me typing this e-mail and playing Metallica MP3's.\n> \n> Cliff, Jason or Rob era? Could be important...\n\nWell Metallica is pretty heavy metal, you might be weighing the machine \ndown....\n\n/me wonders how many groans were collectively heard through the internet.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 29 May 2007 13:49:09 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "On Tue, 2007-05-29 at 21:43 +0100, Dave Page wrote:\n> Cliff, Jason or Rob era? Could be important...\nCliff and Jason.\n\nRob is in my Ozzy collection ;-)\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n", "msg_date": "Wed, 30 May 2007 05:58:31 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "Joost Kraaijeveld wrote:\n> On Tue, 2007-05-29 at 21:43 +0100, Dave Page wrote:\n>> Cliff, Jason or Rob era? Could be important...\n> Cliff and Jason.\n> \n> Rob is in my Ozzy collection ;-)\n\nAnd rightly so imho.\n\n:-)\n\n/D\n", "msg_date": "Wed, 30 May 2007 09:21:43 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "On Tue, May 29, 2007 at 07:56:07PM +0200, Joost Kraaijeveld wrote:\n> Thanks, I tried it and it worked. I did not know that changing this\n> setting would result in such a performance drop ( I just followed an\n\nIt's not a performance drop. It's an on-purpose delay of the\nfunctionality, introduced so that _other_ transactions don't get I/O\nstarved. (\"Make vacuum fast\" isn't in most cases an interesting\ngoal.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. 
Baldwin\n", "msg_date": "Wed, 30 May 2007 10:11:04 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" }, { "msg_contents": "On May 29, 2007, at 12:03 PM, Joost Kraaijeveld wrote:\n> vacuum_cost_delay = 200\n> vacuum_cost_page_hit = 6\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> vacuum_cost_limit = 100\n\nI didn't see anyone else mention this, so...\n\nThose settings are *very* aggressive. I'm not sure why you upped the \ncost of page_hit or dropped the cost_limit, but I can tell you the \neffect: vacuum will sleep at least every 17 pages... even if those \npages were already in shared_buffers and vacuum didn't have to dirty \nthem. I really can't think of any reason you'd want to do that.\n\nI do find vacuum_cost_delay to be an extremely useful tool, but \ntypically I'll set it to between 10 and 20 and leave the other \nparameters alone.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Sun, 10 Jun 2007 21:51:08 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes forever" } ]
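Putting PFC's and Jim's advice together: keep the permanent setting modest and only switch the throttling off for the session doing the one-off manual vacuum. A sketch, with mydb as a placeholder database name:

    # statements piped through psql are sent one at a time, so VACUUM stays outside a transaction block
    $ echo 'SET vacuum_cost_delay = 0; VACUUM VERBOSE ANALYZE;' | psql -d mydb

    # while in postgresql.conf something mild stays in place for routine vacuums, e.g.
    #   vacuum_cost_delay = 10    # and leave the other vacuum_cost_* settings at their defaults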
[ { "msg_contents": "hi,\n\nthis is not really postgresql specific, but any help is appreciated.\ni have read more spindles the better it is for IO performance.\n\nsuppose i have 8 drives , should a stripe (raid0) be created on\n2 mirrors (raid1) of 4 drives each OR should a stripe on 4 mirrors\nof 2 drives each be created ?\n\nalso does single channel or dual channel controllers makes lot\nof difference in raid10 performance ?\n\nregds\nmallah.\n", "msg_date": "Wed, 30 May 2007 02:44:52 +0530", "msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>", "msg_from_op": true, "msg_subject": "setting up raid10 with more than 4 drives" }, { "msg_contents": "Stripe of mirrors is preferred to mirror of stripes for the best balance of\nprotection and performance.\n\nIn the stripe of mirrors you can lose up to half of the disks and still be\noperational. In the mirror of stripes, the most you could lose is two\ndrives. The performance of the two should be similar - perhaps the seek\nperformance would be different for high concurrent use in PG.\n\n- Luke\n\n\nOn 5/29/07 2:14 PM, \"Rajesh Kumar Mallah\" <[email protected]> wrote:\n\n> hi,\n> \n> this is not really postgresql specific, but any help is appreciated.\n> i have read more spindles the better it is for IO performance.\n> \n> suppose i have 8 drives , should a stripe (raid0) be created on\n> 2 mirrors (raid1) of 4 drives each OR should a stripe on 4 mirrors\n> of 2 drives each be created ?\n> \n> also does single channel or dual channel controllers makes lot\n> of difference in raid10 performance ?\n> \n> regds\n> mallah.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n", "msg_date": "Tue, 29 May 2007 14:50:57 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On 5/30/07, Luke Lonergan <[email protected]> wrote:\n> Stripe of mirrors is preferred to mirror of stripes for the best balance of\n> protection and performance.\n\nnooo! i am not aksing raid10 vs raid01 . I am considering stripe of\nmirrors only.\nthe question is how are more number of disks supposed to be\nBEST utilized in terms of IO performance for\n\n1. for adding more mirrored stripes OR\n2. for adding more harddrives to the mirrors.\n\nsay i had 4 drives in raid10 format\n\nD1 raid1 D2 --> MD0\nD3 raid1 D4 --> MD1\nMD0 raid0 MD1 --> MDF (final)\n\nnow i get 2 drives D5 and D6 the i got 2 options\n\n1. create a new mirror\nD5 raid1 D6 --> MD2\nMD0 raid0 MD1 raid0 MD2 --> MDF final\n\n\nOR\n\nD1 raid1 D2 raid1 D5 --> MD0\nD3 raid1 D4 raid1 D6 --> MD1\nMD0 raid0 MD1 --> MDF (final)\n\nthanks , hope my question is clear now.\n\n\nRegds\nmallah.\n\n\n\n\n>\n> In the stripe of mirrors you can lose up to half of the disks and still be\n> operational. In the mirror of stripes, the most you could lose is two\n> drives. 
The performance of the two should be similar - perhaps the seek\n> performance would be different for high concurrent use in PG.\n>\n> - Luke\n>\n>\n> On 5/29/07 2:14 PM, \"Rajesh Kumar Mallah\" <[email protected]> wrote:\n>\n> > hi,\n> >\n> > this is not really postgresql specific, but any help is appreciated.\n> > i have read more spindles the better it is for IO performance.\n> >\n> > suppose i have 8 drives , should a stripe (raid0) be created on\n> > 2 mirrors (raid1) of 4 drives each OR should a stripe on 4 mirrors\n> > of 2 drives each be created ?\n> >\n> > also does single channel or dual channel controllers makes lot\n> > of difference in raid10 performance ?\n> >\n> > regds\n> > mallah.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n>\n>\n>\n", "msg_date": "Wed, 30 May 2007 07:48:02 +0530", "msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Hi Rajesh,\n\nOn 5/29/07 7:18 PM, \"Rajesh Kumar Mallah\" <[email protected]> wrote:\n\n> D1 raid1 D2 raid1 D5 --> MD0\n> D3 raid1 D4 raid1 D6 --> MD1\n> MD0 raid0 MD1 --> MDF (final)\n\nAFAIK you can't RAID1 more than two drives, so the above doesn't make sense\nto me.\n\n- Luke\n\n\n", "msg_date": "Tue, 29 May 2007 20:26:37 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "* Luke Lonergan ([email protected]) wrote:\n> Hi Rajesh,\n> \n> On 5/29/07 7:18 PM, \"Rajesh Kumar Mallah\" <[email protected]> wrote:\n> \n> > D1 raid1 D2 raid1 D5 --> MD0\n> > D3 raid1 D4 raid1 D6 --> MD1\n> > MD0 raid0 MD1 --> MDF (final)\n> \n> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense\n> to me.\n\nIt's just more copies of the same data if it's really a RAID1, for the\nextra, extra paranoid. Basically, in the example above, I'd read it as\n\"D1, D2, D5 have identical data on them\".\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 29 May 2007 23:31:52 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Stephen,\n\nOn 5/29/07 8:31 PM, \"Stephen Frost\" <[email protected]> wrote:\n\n> It's just more copies of the same data if it's really a RAID1, for the\n> extra, extra paranoid. Basically, in the example above, I'd read it as\n> \"D1, D2, D5 have identical data on them\".\n\nIn that case, I'd say it's a waste of disk to add 1+2 redundancy to the\nmirrors.\n\n- Luke \n\n\n", "msg_date": "Tue, 29 May 2007 20:49:56 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On 5/29/07, Luke Lonergan <[email protected]> wrote:\n> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense\n> to me.\n\nYeah, I've never seen a way to RAID-1 more than 2 drives either. It\nwould have to be his first one:\n\nD1 + D2 = MD0 (RAID 1)\nD3 + D4 = MD1 ...\nD5 + D6 = MD2 ...\nMD0 + MD1 + MD2 = MDF (RAID 0)\n\n-- \nJonah H. 
Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 30 May 2007 00:41:46 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, 30 May 2007, Jonah H. Harris wrote:\n\n> On 5/29/07, Luke Lonergan <[email protected]> wrote:\n>> AFAIK you can't RAID1 more than two drives, so the above doesn't make\n>> sense\n>> to me.\n>\n> Yeah, I've never seen a way to RAID-1 more than 2 drives either. It\n> would have to be his first one:\n>\n> D1 + D2 = MD0 (RAID 1)\n> D3 + D4 = MD1 ...\n> D5 + D6 = MD2 ...\n> MD0 + MD1 + MD2 = MDF (RAID 0)\n>\n\nI don't know what the failure mode ends up being, but on linux I had no \nproblems creating what appears to be a massively redundant (but small) array\n\nmd0 : active raid1 sdo1[10](S) sdn1[8] sdm1[7] sdl1[6] sdk1[5] sdj1[4] sdi1[3] sdh1[2] sdg1[9] sdf1[1] sde1[11](S) sdd1[0]\n 896 blocks [10/10] [UUUUUUUUUU]\n\nDavid Lang\n", "msg_date": "Tue, 29 May 2007 22:30:18 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On 30/05/07, [email protected] <[email protected]> wrote:\n>\n> On Wed, 30 May 2007, Jonah H. Harris wrote:\n>\n> > On 5/29/07, Luke Lonergan <[email protected]> wrote:\n> >> AFAIK you can't RAID1 more than two drives, so the above doesn't make\n> >> sense\n> >> to me.\n> >\n> > Yeah, I've never seen a way to RAID-1 more than 2 drives either. It\n> > would have to be his first one:\n> >\n> > D1 + D2 = MD0 (RAID 1)\n> > D3 + D4 = MD1 ...\n> > D5 + D6 = MD2 ...\n> > MD0 + MD1 + MD2 = MDF (RAID 0)\n> >\n>\n> I don't know what the failure mode ends up being, but on linux I had no\n> problems creating what appears to be a massively redundant (but small)\n> array\n>\n> md0 : active raid1 sdo1[10](S) sdn1[8] sdm1[7] sdl1[6] sdk1[5] sdj1[4]\n> sdi1[3] sdh1[2] sdg1[9] sdf1[1] sde1[11](S) sdd1[0]\n> 896 blocks [10/10] [UUUUUUUUUU]\n>\n> David Lang\n>\n>\nGood point, also if you had Raid 1 with 3 drives with some bit errors at\nleast you can take a vote on whats right. Where as if you only have 2 and\nthey disagree how do you know which is right other than pick one and hope...\nBut whatever it will be slower to keep in sync on a heavy write system.\n\nPeter.\n\nOn 30/05/07, [email protected] <[email protected]> wrote:\nOn Wed, 30 May 2007, Jonah H. Harris wrote:> On 5/29/07, Luke Lonergan <[email protected]> wrote:>>  AFAIK you can't RAID1 more than two drives, so the above doesn't make\n>>  sense>>  to me.>> Yeah, I've never seen a way to RAID-1 more than 2 drives either.  It> would have to be his first one:>> D1 + D2 = MD0 (RAID 1)> D3 + D4 = MD1 ...\n> D5 + D6 = MD2 ...> MD0 + MD1 + MD2 = MDF (RAID 0)>I don't know what the failure mode ends up being, but on linux I had noproblems creating what appears to be a massively redundant (but small) array\nmd0 : active raid1 sdo1[10](S) sdn1[8] sdm1[7] sdl1[6] sdk1[5] sdj1[4] sdi1[3] sdh1[2] sdg1[9] sdf1[1] sde1[11](S) sdd1[0]       896 blocks [10/10] [UUUUUUUUUU]David Lang\nGood point, also if you had Raid 1 with 3 drives with some bit errors at least you can take a vote on whats right. Where as if you only have 2 and they disagree how do you know which is right other than pick one and hope... 
But whatever it will be slower to keep in sync on a heavy write system.\nPeter.", "msg_date": "Wed, 30 May 2007 08:29:14 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "* Peter Childs ([email protected]) wrote:\n> Good point, also if you had Raid 1 with 3 drives with some bit errors at\n> least you can take a vote on whats right. Where as if you only have 2 and\n> they disagree how do you know which is right other than pick one and hope...\n> But whatever it will be slower to keep in sync on a heavy write system.\n\nI'm not sure, but I don't think most RAID1 systems do reads against all\ndrives and compare the results before returning it to the caller... I'd\nbe curious if I'm wrong.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 30 May 2007 06:42:36 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "\n\"Jonah H. Harris\" <[email protected]> writes:\n\n> On 5/29/07, Luke Lonergan <[email protected]> wrote:\n>> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense\n>> to me.\n\nSure you can. In fact it's a very common backup strategy. You build a\nthree-way mirror and then when it comes time to back it up you break it into a\ntwo-way mirror and back up the orphaned array at your leisure. When it's done\nyou re-add it and rebuild the degraded array. Good raid controllers can\nrebuild the array at low priority squeezing in the reads in idle cycles.\n\nI don't think you normally do it for performance though since there's more to\nbe gained by using larger stripes. In theory you should get the same boost on\nreads as widening your stripes but of course you get no benefit on writes. And\nI'm not sure raid controllers optimize raid1 accesses well in practice either.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 30 May 2007 13:08:25 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Hi Peter,\n\nOn 5/30/07 12:29 AM, \"Peter Childs\" <[email protected]> wrote:\n\n> Good point, also if you had Raid 1 with 3 drives with some bit errors at least\n> you can take a vote on whats right. Where as if you only have 2 and they\n> disagree how do you know which is right other than pick one and hope... But\n> whatever it will be slower to keep in sync on a heavy write system.\n\nMuch better to get a RAID system that checksums blocks so that \"good\" is\nknown. Solaris ZFS does that, as do high end systems from EMC and HDS.\n\n- Luke\n\n\n", "msg_date": "Wed, 30 May 2007 07:06:54 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:\n>On 5/30/07 12:29 AM, \"Peter Childs\" <[email protected]> wrote:\n>> Good point, also if you had Raid 1 with 3 drives with some bit errors at least\n>> you can take a vote on whats right. Where as if you only have 2 and they\n>> disagree how do you know which is right other than pick one and hope... But\n>> whatever it will be slower to keep in sync on a heavy write system.\n>\n>Much better to get a RAID system that checksums blocks so that \"good\" is\n>known. 
Solaris ZFS does that, as do high end systems from EMC and HDS.\n\nI don't see how that's better at all; in fact, it reduces to exactly the \nsame problem: given two pieces of data which disagree, which is right? \nThe ZFS hashes do a better job of error detection, but that's still not \nthe same thing as a voting system (3 copies, 2 of 3 is correct answer) \nto resolve inconsistencies.\n\nMike Stone\n", "msg_date": "Wed, 30 May 2007 10:25:59 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "> I don't see how that's better at all; in fact, it reduces to \n> exactly the same problem: given two pieces of data which \n> disagree, which is right? \n\nThe one that matches the checksum.\n\n- Luke\n\n", "msg_date": "Wed, 30 May 2007 10:36:48 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:\n>> I don't see how that's better at all; in fact, it reduces to \n>> exactly the same problem: given two pieces of data which \n>> disagree, which is right? \n>\n>The one that matches the checksum.\n\nAnd you know the checksum is good, how?\n\nMike Stone\n", "msg_date": "Wed, 30 May 2007 11:09:30 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "\"Michael Stone\" <[email protected]> writes:\n\n\"Michael Stone\" <[email protected]> writes:\n\n> On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:\n>\n> > Much better to get a RAID system that checksums blocks so that \"good\" is\n> > known. Solaris ZFS does that, as do high end systems from EMC and HDS.\n>\n> I don't see how that's better at all; in fact, it reduces to exactly the same\n> problem: given two pieces of data which disagree, which is right? \n\nWell, the one where the checksum is correct.\n\nIn practice I've never seen a RAID failure due to outright bad data. In my\nexperience when a drive goes bad it goes really bad and you can't read the\nblock at all without i/o errors.\n\nIn every case where I've seen bad data it was due to bad memory (in one case\nbad memory in the RAID controller cache -- that was hell to track down).\nChecksums aren't even enough in that case as you'll happily generate a\nchecksum for the bad data before storing it...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 30 May 2007 16:23:46 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, 30 May 2007 16:36:48 +0200, Luke Lonergan \n<[email protected]> wrote:\n\n>> I don't see how that's better at all; in fact, it reduces to\n>> exactly the same problem: given two pieces of data which\n>> disagree, which is right?\n>\n> The one that matches the checksum.\n\n\t- postgres tells OS \"write this block\"\n\t- OS sends block to drives A and B\n\t- drive A happens to be lucky and seeks faster, writes data\n\t- student intern carrying pizzas for senior IT staff trips over power \ncord*\n\t- boom\n\t- drive B still has old block\n\n\tBoth blocks have correct checksum, so only a version counter/timestamp \ncould tell.\n\tFortunately if fsync() is honored correctly (did you check ?) 
postgres \nwill zap such errors in recovery.\n\n\tSmart RAID1 or 0+1 controllers (including software RAID) will distribute \nrandom reads to both disks (but not writes obviously).\n\n\t* = this happened at my old job, yes they had a very frightening server \nroom, or more precisely \"cave\" ; I never went there, I didn't want to be \nthe one fired for tripping over the wire...\n\n\n\tFrom Linux Software RAID howto :\n\n\t- benchmarking (quite brief !)\n\thttp://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-9.html#ss9.5\n\n\t- read \"Data Scrubbing\" here :\nhttp://gentoo-wiki.com/HOWTO_Install_on_Software_RAID\n\n\t- yeah but does it work ? (scary)\nhttp://bugs.donarmstrong.com/cgi-bin/bugreport.cgi?bug=405919\n\n md/sync_action\n This can be used to monitor and control the resync/recovery\n process of MD. In particular, writing \"check\" here will cause\n the array to read all data block and check that they are\n consistent (e.g. parity is correct, or all mirror replicas are\n the same). Any discrepancies found are NOT corrected.\n\n A count of problems found will be stored in md/mismatch_count.\n\n Alternately, \"repair\" can be written which will cause the same\n check to be performed, but any errors will be corrected.\n", "msg_date": "Wed, 30 May 2007 17:31:58 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "\n\tOh by the way, I saw a nifty patch in the queue :\n\nFind a way to reduce rotational delay when repeatedly writing last WAL page\nCurrently fsync of WAL requires the disk platter to perform a full \nrotation to fsync again.\nOne idea is to write the WAL to different offsets that might reduce the \nrotational delay.\n\n\tThis will not work if the WAL is on RAID1, because two disks never spin \nexactly at the same speed...\n", "msg_date": "Wed, 30 May 2007 17:40:50 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:\n> Yeah, I've never seen a way to RAID-1 more than 2 drives either.\n\npannekake:~> grep -A 1 md0 /proc/mdstat \nmd0 : active raid1 dm-20[2] dm-19[1] dm-18[0]\n 64128 blocks [3/3] [UUU]\n\nIt's not a big device, but I can ensure you it exists :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 1 Jun 2007 00:39:23 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Hi,\n\nOp 1-jun-2007, om 1:39 heeft Steinar H. Gunderson het volgende \ngeschreven:\n> On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:\n>> Yeah, I've never seen a way to RAID-1 more than 2 drives either.\n>\n> pannekake:~> grep -A 1 md0 /proc/mdstat\n> md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]\n> 64128 blocks [3/3] [UUU]\n>\n> It's not a big device, but I can ensure you it exists :-)\n\nI talked to someone yesterday who did a 10 or 11 way RAID1 with Linux \nMD for high performance video streaming. 
Seemed to work very well.\n\n- Sander\n\n", "msg_date": "Fri, 1 Jun 2007 01:54:36 +0300", "msg_from": "Sander Steffann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Apologies for a somewhat off-topic question, but...\n\nThe Linux kernel doesn't properly detect my software RAID1+0 when I boot up. It detects the two RAID1 arrays, the partitions of which are marked properly. But it can't find the RAID0 on top of that, because there's no corresponding device to auto-detect. The result is that it creates /dev/md0 and /dev/md1 and assembles the RAID1 devices on bootup, but /dev/md2 isn't created, so the RAID0 can't be assembled at boot time.\n\nHere's what it looks like:\n\n$ cat /proc/mdstat \nPersonalities : [raid0] [raid1] \nmd2 : active raid0 md0[0] md1[1]\n 234436224 blocks 64k chunks\n \nmd1 : active raid1 sde1[1] sdc1[2]\n 117218176 blocks [2/2] [UU]\n \nmd0 : active raid1 sdd1[1] sdb1[0]\n 117218176 blocks [2/2] [UU]\n\n$ uname -r\n2.6.12-1.1381_FC3\n\nAfter a reboot, I always have to do this:\n\n mknod /dev/md2 b 9 2\n mdadm --assemble /dev/md2 /dev/md0 /dev/md1\n mount /dev/md2\n\nWhat am I missing here?\n\nThanks,\nCraig\n", "msg_date": "Fri, 01 Jun 2007 10:57:56 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Autodetect of software RAID1+0 fails" }, { "msg_contents": "Craig,\n\nto make things working properly here you need to create a config file\nkeeping both raid1 and raid0 information (/etc/mdadm/mdadm.conf).\nHowever if your root filesystem is corrupted, or you loose this file,\nor move disks somewhere else - you are back to the same initial issue\n:))\n\nSo, the solution I've found 100% working in any case is: use mdadm to\ncreate raid1 devices (as you do already) and then use LVM to create\nraid0 volume on it - LVM writes its own labels on every MD devices and\nwill find its volumes peaces automatically! Tested for crash several\ntimes and was surprised by its robustness :))\n\nRgds,\n-Dimitri\n\nOn 6/1/07, Craig James <[email protected]> wrote:\n> Apologies for a somewhat off-topic question, but...\n>\n> The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.\n> It detects the two RAID1 arrays, the partitions of which are marked\n> properly. But it can't find the RAID0 on top of that, because there's no\n> corresponding device to auto-detect. 
The result is that it creates /dev/md0\n> and /dev/md1 and assembles the RAID1 devices on bootup, but /dev/md2 isn't\n> created, so the RAID0 can't be assembled at boot time.\n>\n> Here's what it looks like:\n>\n> $ cat /proc/mdstat\n> Personalities : [raid0] [raid1]\n> md2 : active raid0 md0[0] md1[1]\n> 234436224 blocks 64k chunks\n>\n> md1 : active raid1 sde1[1] sdc1[2]\n> 117218176 blocks [2/2] [UU]\n>\n> md0 : active raid1 sdd1[1] sdb1[0]\n> 117218176 blocks [2/2] [UU]\n>\n> $ uname -r\n> 2.6.12-1.1381_FC3\n>\n> After a reboot, I always have to do this:\n>\n> mknod /dev/md2 b 9 2\n> mdadm --assemble /dev/md2 /dev/md0 /dev/md1\n> mount /dev/md2\n>\n> What am I missing here?\n>\n> Thanks,\n> Craig\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Fri, 1 Jun 2007 20:51:00 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autodetect of software RAID1+0 fails" }, { "msg_contents": "Dimitri,\n\nLVM is great, one thing to watch out for: it is very slow compared to pure\nmd. That will only matter in practice if you want to exceed 1GB/s of\nsequential I/O bandwidth.\n\n- Luke\n\n\nOn 6/1/07 11:51 AM, \"Dimitri\" <[email protected]> wrote:\n\n> Craig,\n> \n> to make things working properly here you need to create a config file\n> keeping both raid1 and raid0 information (/etc/mdadm/mdadm.conf).\n> However if your root filesystem is corrupted, or you loose this file,\n> or move disks somewhere else - you are back to the same initial issue\n> :))\n> \n> So, the solution I've found 100% working in any case is: use mdadm to\n> create raid1 devices (as you do already) and then use LVM to create\n> raid0 volume on it - LVM writes its own labels on every MD devices and\n> will find its volumes peaces automatically! Tested for crash several\n> times and was surprised by its robustness :))\n> \n> Rgds,\n> -Dimitri\n> \n> On 6/1/07, Craig James <[email protected]> wrote:\n>> Apologies for a somewhat off-topic question, but...\n>> \n>> The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.\n>> It detects the two RAID1 arrays, the partitions of which are marked\n>> properly. But it can't find the RAID0 on top of that, because there's no\n>> corresponding device to auto-detect. 
The result is that it creates /dev/md0\n>> and /dev/md1 and assembles the RAID1 devices on bootup, but /dev/md2 isn't\n>> created, so the RAID0 can't be assembled at boot time.\n>> \n>> Here's what it looks like:\n>> \n>> $ cat /proc/mdstat\n>> Personalities : [raid0] [raid1]\n>> md2 : active raid0 md0[0] md1[1]\n>> 234436224 blocks 64k chunks\n>> \n>> md1 : active raid1 sde1[1] sdc1[2]\n>> 117218176 blocks [2/2] [UU]\n>> \n>> md0 : active raid1 sdd1[1] sdb1[0]\n>> 117218176 blocks [2/2] [UU]\n>> \n>> $ uname -r\n>> 2.6.12-1.1381_FC3\n>> \n>> After a reboot, I always have to do this:\n>> \n>> mknod /dev/md2 b 9 2\n>> mdadm --assemble /dev/md2 /dev/md0 /dev/md1\n>> mount /dev/md2\n>> \n>> What am I missing here?\n>> \n>> Thanks,\n>> Craig\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n", "msg_date": "Fri, 01 Jun 2007 12:54:44 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autodetect of software RAID1+0 fails" }, { "msg_contents": "On Fri, Jun 01, 2007 at 10:57:56AM -0700, Craig James wrote:\n> The Linux kernel doesn't properly detect my software RAID1+0 when I boot \n> up. It detects the two RAID1 arrays, the partitions of which are marked \n> properly. But it can't find the RAID0 on top of that, because there's no \n> corresponding device to auto-detect. The result is that it creates \n> /dev/md0 and /dev/md1 and assembles the RAID1 devices on bootup, but \n> /dev/md2 isn't created, so the RAID0 can't be assembled at boot time.\n\nEither do your md discovery in userspace via mdadm (your distribution can\nprobably help you with this), or simply use the raid10 module instead of\nbuilding raid1+0 yourself.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 1 Jun 2007 23:35:01 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autodetect of software RAID1+0 fails" }, { "msg_contents": "> On Fri, Jun 01, 2007 at 10:57:56AM -0700, Craig James wrote:\n> > The Linux kernel doesn't properly detect my software RAID1+0 when I boot \n> > up. It detects the two RAID1 arrays, the partitions of which are marked \n> > properly. But it can't find the RAID0 on top of that, because there's no \n> > corresponding device to auto-detect. The result is that it creates \n> > /dev/md0 and /dev/md1 and assembles the RAID1 devices on bootup, but \n> > /dev/md2 isn't created, so the RAID0 can't be assembled at boot time.\n\nHi Craig:\n\nI had the same problem for a short time. There *is* a device to base the\nRAID0 off, however, it needs to be recursively detected. mdadm will do this\nfor you, however, if the device order isn't optimal, it may need some help\nvia /etc/mdadm.conf. 
For a while, I used something like:\n\nDEVICE partitions\n...\nARRAY /dev/md3 level=raid0 num-devices=2 UUID=10d58416:5cd52161:7703b48e:cd93a0e0\nARRAY /dev/md5 level=raid1 num-devices=2 UUID=1515ac26:033ebf60:fa5930c5:1e1f0f12\nARRAY /dev/md6 level=raid1 num-devices=2 UUID=72ddd3b6:b063445c:d7718865:bb79aad7\n\nMy symptoms were that it worked where started from user space, but failed during\nreboot without the above hints. I believe if I had defined md5 and md6 before\nmd3, it may have worked automatically without hints.\n\nOn Fri, Jun 01, 2007 at 11:35:01PM +0200, Steinar H. Gunderson wrote:\n> Either do your md discovery in userspace via mdadm (your distribution can\n> probably help you with this), or simply use the raid10 module instead of\n> building raid1+0 yourself.\n\nI agree with using the mdadm RAID10 support. RAID1+0 has the\nflexibility of allowing you to fine-control the RAID1 vs RAID0 if you\nwant to add disks later. RAID10 from mdadm has the flexibility that\nyou don't need an even number of disks. As I don't intend to add disks\nto my array - the RAID10 as a single layer, with potentially better\nintelligence in terms of performance, appeals to me.\n\nThey both worked for me - but I am sticking with the single layer now.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Fri, 1 Jun 2007 20:23:34 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Autodetect of software RAID1+0 fails" }, { "msg_contents": "Steinar,\n\nOn 6/1/07 2:35 PM, \"Steinar H. Gunderson\" <[email protected]> wrote:\n\n> Either do your md discovery in userspace via mdadm (your distribution can\n> probably help you with this), or simply use the raid10 module instead of\n> building raid1+0 yourself.\n\nI found md raid10 to be *very* slow compared to raid1+0 on Linux 2.6.9 ->\n2.6.18. Very slow in this case is < 400 MB/s compared to 1,800 MB/s.\n\n- Luke \n\n\n", "msg_date": "Fri, 01 Jun 2007 22:41:07 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autodetect of software RAID1+0 fails" } ]
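The two fixes discussed in the thread above can be sketched as follows; the device names, the volume-group/LV names and the 64 KB stripe size are illustrative assumptions, not values taken from the posts:

    # Option 1: record the nested arrays so the boot scripts can assemble md2
    # (the file is /etc/mdadm.conf on Fedora, /etc/mdadm/mdadm.conf on Debian)
    mdadm --detail --scan >> /etc/mdadm.conf

    # Option 2 (Dimitri's layout): keep only the RAID1 pairs in md and let LVM
    # do the striping; LVM labels the PVs itself, so reassembly is automatic
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_data /dev/md0 /dev/md1
    lvcreate -n lv_pgdata -i 2 -I 64 -l 100%FREE vg_data   # 2 stripes, 64 KB each
    mkfs -t ext3 /dev/vg_data/lv_pgdata

Option 1 keeps the existing RAID0-over-RAID1 stack unchanged; option 2 buys automatic assembly at the cost of the LVM overhead Luke mentions for very high sequential throughput.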
[ { "msg_contents": "Hi All,\n\nI have a very slow left outer join that speeds up by more then 1000\ntimes when I turn set enable_seqscan=off. This is not the query I\nactually do in my application, but is a simplified one that singles out\nthe part that is really slow. All of the columns involved in the query\nhave indexes on them, but unless I set enable_seqscan=off the planner is\ndoing a sequential scan instead of using the indexes. I'm hoping there\nis something simple I am doing wrong that someone can point out to me.\nI am using version 8.1.5.\n\nSo here is the explain analyze output first with enable_seqscan=on, and\nthe second with enable_seqscan=off:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using(record_id) where\nbackup_id = 1071;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----\n Hash Join (cost=361299.50..1054312.92 rows=34805 width=8) (actual\ntime=1446.861..368723.597 rows=2789 loops=1)\n Hash Cond: (\"outer\".record_id = \"inner\".record_id)\n -> Seq Scan on backupobjects (cost=0.00..429929.79 rows=13136779\nwidth=8) (actual time=5.165..359168.216 rows=13136779 loops=1)\n -> Hash (cost=360207.21..360207.21 rows=436915 width=8) (actual\ntime=820.979..820.979 rows=2789 loops=1)\n -> Bitmap Heap Scan on backup_location\n(cost=3831.20..360207.21 rows=436915 width=8) (actual\ntime=797.463..818.269 rows=2789 loops=1)\n Recheck Cond: (backup_id = 1071)\n -> Bitmap Index Scan on backup_location_bid\n(cost=0.00..3831.20 rows=436915 width=0) (actual time=59.592..59.592\nrows=2789 loops=1)\n Index Cond: (backup_id = 1071)\n Total runtime: 368725.122 ms\n(9 rows)\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using(record_id) where\nbackup_id = 1071;\n QUERY\nPLAN\n\n------------------------------------------------------------------------\n-----------------------------------------------------------------------\n Nested Loop (cost=3833.21..1682311.27 rows=34805 width=8) (actual\ntime=103.132..201.808 rows=2789 loops=1)\n -> Bitmap Heap Scan on backup_location (cost=3831.20..360207.21\nrows=436915 width=8) (actual time=94.375..97.688 rows=2789 loops=1)\n Recheck Cond: (backup_id = 1071)\n -> Bitmap Index Scan on backup_location_bid\n(cost=0.00..3831.20 rows=436915 width=0) (actual time=84.239..84.239\nrows=2789 loops=1)\n Index Cond: (backup_id = 1071)\n -> Bitmap Heap Scan on backupobjects (cost=2.00..3.01 rows=1\nwidth=8) (actual time=0.033..0.034 rows=1 loops=2789)\n Recheck Cond: (backupobjects.record_id = \"outer\".record_id)\n -> Bitmap Index Scan on backupobjects_pkey (cost=0.00..2.00\nrows=1 width=0) (actual time=0.021..0.021 rows=1 loops=2789)\n Index Cond: (backupobjects.record_id = \"outer\".record_id)\n Total runtime: 203.378 ms\n(10 rows)\n\nHere are the two tables in the query:\n\nmdsdb=# \\d backup_location\n Table \"public.backup_location\"\n Column | Type | Modifiers\n-----------+---------+-----------\n record_id | bigint | not null\n backup_id | integer | not null\nIndexes:\n \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n \"backup_location_bid\" btree (backup_id)\n \"backup_location_rid\" btree (record_id)\nForeign-key constraints:\n \"backup_location_bfk\" FOREIGN KEY (backup_id) REFERENCES\nbackups(backup_id) ON DELETE CASCADE\n \nmdsdb=# \\d backupobjects\n Table \"public.backupobjects\"\n 
Column | Type | Modifiers\n----------------+-----------------------------+-----------\n record_id | bigint | not null\n dir_record_id | integer |\n name | text |\n extension | character varying(64) |\n hash | character(40) |\n mtime | timestamp without time zone |\n size | bigint |\n user_id | integer |\n group_id | integer |\n meta_data_hash | character(40) |\nIndexes:\n \"backupobjects_pkey\" PRIMARY KEY, btree (record_id)\n \"backupobjects_meta_data_hash_key\" UNIQUE, btree (meta_data_hash)\n \"backupobjects_extension\" btree (extension)\n \"backupobjects_hash\" btree (hash)\n \"backupobjects_mtime\" btree (mtime)\n \"backupobjects_size\" btree (size)\n\nThanks,\nEd\n", "msg_date": "Tue, 29 May 2007 17:16:57 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very slow left outer join" }, { "msg_contents": "\nOn May 29, 2007, at 19:16 , Tyrrill, Ed wrote:\n\n> -----\n> Hash Join (cost=361299.50..1054312.92 rows=34805 width=8) (actual\n> time=1446.861..368723.597 rows=2789 loops=1)\n> Hash Cond: (\"outer\".record_id = \"inner\".record_id)\n> -> Seq Scan on backupobjects (cost=0.00..429929.79 rows=13136779\n> width=8) (actual time=5.165..359168.216 rows=13136779 loops=1)\n> -> Hash (cost=360207.21..360207.21 rows=436915 width=8) (actual\n> time=820.979..820.979 rows=2789 loops=1)\n> -> Bitmap Heap Scan on backup_location\n> (cost=3831.20..360207.21 rows=436915 width=8) (actual\n> time=797.463..818.269 rows=2789 loops=1)\n> Recheck Cond: (backup_id = 1071)\n> -> Bitmap Index Scan on backup_location_bid\n> (cost=0.00..3831.20 rows=436915 width=0) (actual time=59.592..59.592\n> rows=2789 loops=1)\n\nOff the cuff, when was the last time you vacuumed or ran ANALYZE? \nYour row estimates look off by a couple orders of magnitude. With up- \nto-date statistics the planner might do a better job.\n\nAs for any other improvements, I'll leave that to those that know \nmore than I. :)\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 29 May 2007 20:38:24 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow left outer join" }, { "msg_contents": "On Tue, 29 May 2007 17:16:57 -0700, \"Tyrrill, Ed\" <[email protected]> wrote:\n> mdsdb=# explain analyze select backupobjects.record_id from\n> backupobjects left outer join backup_location using(record_id) where\n> backup_id = 1071;\n[...]\n> \n> Here are the two tables in the query:\n> \n> mdsdb=# \\d backup_location\n> Table \"public.backup_location\"\n> Column | Type | Modifiers\n> -----------+---------+-----------\n> record_id | bigint | not null\n> backup_id | integer | not null\n[...]\n> \n> mdsdb=# \\d backupobjects\n> Table \"public.backupobjects\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+-----------\n> record_id | bigint | not null\n> dir_record_id | integer |\n> name | text |\n> extension | character varying(64) |\n> hash | character(40) |\n> mtime | timestamp without time zone |\n> size | bigint |\n> user_id | integer |\n> group_id | integer |\n> meta_data_hash | character(40) |\n\nWhy are you using left join?\n\nThe where condition is going to force the row to exist.\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. 
:\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Wed, 30 May 2007 11:54:21 +1000", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow left outer join" }, { "msg_contents": "Klint Gore <[email protected]> writes:\n> On Tue, 29 May 2007 17:16:57 -0700, \"Tyrrill, Ed\" <[email protected]> wrote:\n>> mdsdb=# explain analyze select backupobjects.record_id from\n>> backupobjects left outer join backup_location using(record_id) where\n>> backup_id = 1071;\n\n> Why are you using left join?\n> The where condition is going to force the row to exist.\n\nWhich indeed the planner figured out (note the lack of any mention of\nleft join in the EXPLAIN result). Michael put his finger on the problem\nthough: there's something way off about the rowcount estimate here:\n\n> -> Bitmap Heap Scan on backup_location (cost=3831.20..360207.21\n> rows=436915 width=8) (actual time=94.375..97.688 rows=2789 loops=1)\n> Recheck Cond: (backup_id = 1071)\n> -> Bitmap Index Scan on backup_location_bid\n> (cost=0.00..3831.20 rows=436915 width=0) (actual time=84.239..84.239\n> rows=2789 loops=1)\n> Index Cond: (backup_id = 1071)\n\nWith such a simple index condition the planner really ought to be able\nto come close to the right rowcount estimate. Check for vacuuming\nproblems, check for lack of ANALYZE, consider whether you need to bump\nup the statistics target ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 May 2007 23:18:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow left outer join " }, { "msg_contents": " \nMichael Glaesemann <[email protected]> writes:\n> Off the cuff, when was the last time you vacuumed or ran ANALYZE? \n> Your row estimates look off by a couple orders of magnitude. With up- \n> to-date statistics the planner might do a better job.\n>\n> As for any other improvements, I'll leave that to those that know \n> more than I. :)\n>\n> Michael Glaesemann\n> grzm seespotcode net\n\nThe script that imports data into these tables runs a vacuum analyze at\nthe end so there has been no new data added to the tables since the last\ntime vacuum analyze was run.\n\n", "msg_date": "Wed, 30 May 2007 09:22:46 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow left outer join" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> Klint Gore <[email protected]> writes:\n>> On Tue, 29 May 2007 17:16:57 -0700, \"Tyrrill, Ed\"\n<[email protected]> wrote:\n>>> mdsdb=# explain analyze select backupobjects.record_id from\n>>> backupobjects left outer join backup_location using(record_id) where\n>>> backup_id = 1071;\n>\n>> Why are you using left join?\n>> The where condition is going to force the row to exist.\n\nThis select is a simplified version of what I am really doing that still\nexhibits the problem I am having. I know this small query doesn't\nreally make sense, but I thought it would be easier to evaluate\nsomething small rather then the entire query.\n\n>\n> Which indeed the planner figured out (note the lack of any mention of\n> left join in the EXPLAIN result). 
Michael put his finger on the\nproblem\n> though: there's something way off about the rowcount estimate here:\n>\n>> -> Bitmap Heap Scan on backup_location (cost=3831.20..360207.21\n>> rows=436915 width=8) (actual time=94.375..97.688 rows=2789 loops=1)\n>> Recheck Cond: (backup_id = 1071)\n>> -> Bitmap Index Scan on backup_location_bid\n>> (cost=0.00..3831.20 rows=436915 width=0) (actual time=84.239..84.239\n>> rows=2789 loops=1)\n>> Index Cond: (backup_id = 1071)\n>\n> With such a simple index condition the planner really ought to be able\n> to come close to the right rowcount estimate. Check for vacuuming\n> problems, check for lack of ANALYZE, consider whether you need to bump\n> up the statistics target ...\n>\n>\t\t\tregards, tom lane\n\nI did a vacuum analyze after inserting all the data. Is there possibly\na bug in analyze in 8.1.5-6? I know it says rows=436915, but the last\ntime the backup_location table has had that little data in it was a\ncouple months ago, and analyze has been run many times since then.\nCurrently it has over 160 million rows.\n\nThanks,\nEd\n", "msg_date": "Wed, 30 May 2007 09:55:10 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow left outer join " }, { "msg_contents": "\"Tyrrill, Ed\" <[email protected]> writes:\n> I did a vacuum analyze after inserting all the data. Is there possibly\n> a bug in analyze in 8.1.5-6? I know it says rows=3D436915, but the last\n> time the backup_location table has had that little data in it was a\n> couple months ago, and analyze has been run many times since then.\n> Currently it has over 160 million rows.\n\nPossibly you need a larger statistics target for that table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 May 2007 12:59:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow left outer join " } ]
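Tom's suggestion translates into something like the following; backup_location, backup_id and the backup_id = 1071 condition come from the plans above, while the target of 1000 (the 8.1 maximum) is just an illustrative choice:

    ALTER TABLE backup_location ALTER COLUMN backup_id SET STATISTICS 1000;
    ANALYZE backup_location;
    -- the row estimate for this condition should now land near the 2789 actual rows
    EXPLAIN ANALYZE SELECT record_id FROM backup_location WHERE backup_id = 1071;

If the estimate stays around 436915 even after this, the statistics are probably stale rather than too coarse, which points back at when (and on which tables) the import script really runs its VACUUM ANALYZE.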
[ { "msg_contents": "Hi,\n\tafter doing the \"dd\" tests for a server we have at work I obtained:\nRead: 47.20 Mb/s\nWrite: 39.82 Mb/s\n\tSome days ago read performance was around 20Mb/s due to no readahead in md0 \nso I modified it using hdparm. However, it seems to me that being it a RAID1 \nread speed could be much better. These are SATA disks with 3Gb of RAM so I \ndid 'time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=786432 && sync\"'. \nFile system is ext3 (if read many times in the list that XFS is faster), but \nI don't want to change the file system right now. Modifing the readahead from \nthe current 1024k to 2048k doesn't make any difference. Are there any other \ntweaks I can make?\n \n", "msg_date": "Wed, 30 May 2007 11:35:32 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Bad RAID1 read performance" }, { "msg_contents": "This sounds like a bad RAID controller - are you using a built-in hardware\nRAID? If so, you will likely want to use Linux software RAID instead.\n\nAlso - you might want to try a 512KB readahead - I've found that is optimal\nfor RAID1 on some RAID controllers.\n\n- Luke \n\n\nOn 5/30/07 2:35 AM, \"Albert Cervera Areny\" <[email protected]> wrote:\n\n> Hi,\n> after doing the \"dd\" tests for a server we have at work I obtained:\n> Read: 47.20 Mb/s\n> Write: 39.82 Mb/s\n> Some days ago read performance was around 20Mb/s due to no readahead in md0\n> so I modified it using hdparm. However, it seems to me that being it a RAID1\n> read speed could be much better. These are SATA disks with 3Gb of RAM so I\n> did 'time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=786432 && sync\"'.\n> File system is ext3 (if read many times in the list that XFS is faster), but\n> I don't want to change the file system right now. Modifing the readahead from\n> the current 1024k to 2048k doesn't make any difference. Are there any other\n> tweaks I can make?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n", "msg_date": "Wed, 30 May 2007 07:09:02 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad RAID1 read performance" }, { "msg_contents": "Hardware isn't very good I believe, and it's about 2-3 years old, but the RAID \nis Linux software, and though not very good the difference between reading \nand writing should probably be greater... (?)\n\nWould you set 512Kb readahead on both drives and RAID? I tried various \nconfigurations and none seemed to make a big difference. It seemed correct to \nme to set 512kb per drive and 1024kb for md0.\n\nA Dimecres 30 Maig 2007 16:09, Luke Lonergan va escriure:\n> This sounds like a bad RAID controller - are you using a built-in hardware\n> RAID? If so, you will likely want to use Linux software RAID instead.\n>\n> Also - you might want to try a 512KB readahead - I've found that is optimal\n> for RAID1 on some RAID controllers.\n>\n> - Luke\n>\n> On 5/30/07 2:35 AM, \"Albert Cervera Areny\" <[email protected]> wrote:\n> > Hi,\n> > after doing the \"dd\" tests for a server we have at work I obtained:\n> > Read: 47.20 Mb/s\n> > Write: 39.82 Mb/s\n> > Some days ago read performance was around 20Mb/s due to no readahead in\n> > md0 so I modified it using hdparm. However, it seems to me that being it\n> > a RAID1 read speed could be much better. 
These are SATA disks with 3Gb of\n> > RAM so I did 'time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=786432\n> > && sync\"'. File system is ext3 (if read many times in the list that XFS\n> > is faster), but I don't want to change the file system right now.\n> > Modifing the readahead from the current 1024k to 2048k doesn't make any\n> > difference. Are there any other tweaks I can make?\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Wed, 30 May 2007 17:00:26 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad RAID1 read performance" }, { "msg_contents": "As there is no 'continuous space' option on ext3/ext2 (or probably \"-f\nfragment_size\" may do a trick?) - I think after some filesystem\nactivity you simply loose continuous space allocation and rather\nexpected sequential reading may be transformed into random seeking of\n'logically' sequentual blocks...\n\nRgds,\n-Dimitri\n\nOn 5/30/07, Albert Cervera Areny <[email protected]> wrote:\n> Hardware isn't very good I believe, and it's about 2-3 years old, but the\n> RAID\n> is Linux software, and though not very good the difference between reading\n> and writing should probably be greater... (?)\n>\n> Would you set 512Kb readahead on both drives and RAID? I tried various\n> configurations and none seemed to make a big difference. 
It seemed correct\n> to\n> me to set 512kb per drive and 1024kb for md0.\n>\n> A Dimecres 30 Maig 2007 16:09, Luke Lonergan va escriure:\n> > This sounds like a bad RAID controller - are you using a built-in hardware\n> > RAID? If so, you will likely want to use Linux software RAID instead.\n> >\n> > Also - you might want to try a 512KB readahead - I've found that is\n> optimal\n> > for RAID1 on some RAID controllers.\n> >\n> > - Luke\n> >\n> > On 5/30/07 2:35 AM, \"Albert Cervera Areny\" <[email protected]> wrote:\n> > > Hi,\n> > > after doing the \"dd\" tests for a server we have at work I obtained:\n> > > Read: 47.20 Mb/s\n> > > Write: 39.82 Mb/s\n> > > Some days ago read performance was around 20Mb/s due to no readahead in\n> > > md0 so I modified it using hdparm. However, it seems to me that being it\n> > > a RAID1 read speed could be much better. These are SATA disks with 3Gb\n> of\n> > > RAM so I did 'time bash -c \"dd if=/dev/zero of=bigfile bs=8k\n> count=786432\n> > > && sync\"'. File system is ext3 (if read many times in the list that XFS\n> > > is faster), but I don't want to change the file system right now.\n> > > Modifing the readahead from the current 1024k to 2048k doesn't make any\n> > > difference. Are there any other tweaks I can make?\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n>\n> --\n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n>\n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n>\n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. El uso del correo electrónico vía Internet no\n> permite asegurar ni la confidencialidad de los mensajes\n> ni su correcta recepción. En el caso de que el\n> destinatario no consintiera la utilización del correo electrónico,\n> deberá ponerlo en nuestro conocimiento inmediatamente.\n> ====================================================================\n> ........................... DISCLAIMER .............................\n> This message and its attachments are intended exclusively for the\n> named addressee. If you receive this message in error, please\n> immediately delete it from your system and notify the sender. You\n> may not use this message or any part of it for any purpose.\n> The message may contain information that is confidential or\n> protected by law, and any opinions expressed are those of the\n> individual sender. 
Internet e-mail guarantees neither the\n> confidentiality nor the proper receipt of the message sent.\n> If the addressee of this message does not consent to the use\n> of internet e-mail, please inform us inmmediately.\n> ====================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Wed, 30 May 2007 20:19:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad RAID1 read performance" }, { "msg_contents": "Albert,\n\nOn 5/30/07 8:00 AM, \"Albert Cervera Areny\" <[email protected]> wrote:\n\n> Hardware isn't very good I believe, and it's about 2-3 years old, but the RAID\n> is Linux software, and though not very good the difference between reading\n> and writing should probably be greater... (?)\n\nNot for one thread/process of I/O. Mirror sets can nearly double the read\nperformance on most RAID adapters or SW RAID when using two or more\nthread/processes, but a single thread will get one drive worth of\nperformance.\n\nYou should try running two simultaneous processes during reading and see\nwhat you get.\n \n> Would you set 512Kb readahead on both drives and RAID? I tried various\n> configurations and none seemed to make a big difference. It seemed correct to\n> me to set 512kb per drive and 1024kb for md0.\n\nShouldn't matter that much, but yes, each drive getting half the readahead\nis a good strategy. Try 256+256 and 512.\n\nThe problem you have is likely not related to the readahead though - I\nsuggest you try read/write to a single disk and see what you get. You\nshould get around 60 MB/s if the drive is a modern 7200 RPM SATA disk. If\nyou aren't getting that on a single drive, there's something wrong with the\nSATA driver or the drive(s).\n\n- Luke \n> A Dimecres 30 Maig 2007 16:09, Luke Lonergan va escriure:\n>> This sounds like a bad RAID controller - are you using a built-in hardware\n>> RAID? If so, you will likely want to use Linux software RAID instead.\n>> \n>> Also - you might want to try a 512KB readahead - I've found that is optimal\n>> for RAID1 on some RAID controllers.\n>> \n>> - Luke\n>> \n>> On 5/30/07 2:35 AM, \"Albert Cervera Areny\" <[email protected]> wrote:\n>>> Hi,\n>>> after doing the \"dd\" tests for a server we have at work I obtained:\n>>> Read: 47.20 Mb/s\n>>> Write: 39.82 Mb/s\n>>> Some days ago read performance was around 20Mb/s due to no readahead in\n>>> md0 so I modified it using hdparm. However, it seems to me that being it\n>>> a RAID1 read speed could be much better. These are SATA disks with 3Gb of\n>>> RAM so I did 'time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=786432\n>>> && sync\"'. File system is ext3 (if read many times in the list that XFS\n>>> is faster), but I don't want to change the file system right now.\n>>> Modifing the readahead from the current 1024k to 2048k doesn't make any\n>>> difference. 
Are there any other tweaks I can make?\n>>> \n>>> \n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 4: Have you searched our list archives?\n>>> \n>>> http://archives.postgresql.org\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n\n\n", "msg_date": "Wed, 30 May 2007 13:13:45 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad RAID1 read performance" }, { "msg_contents": "As you suggested with two threads I get 42.39 Mb/s in one and 40.70 Mb/s in \nthe other one, so that's more than 80Mb/s. That's what I expected with a \nsingle thread, so thanks for the information. It seems I will have to buy \nbetter hard drives if I want increased performance...\n\nA Dimecres 30 Maig 2007 22:13, Luke Lonergan va escriure:\n> Not for one thread/process of I/O.  Mirror sets can nearly double the read\n> performance on most RAID adapters or SW RAID when using two or more\n> thread/processes, but a single thread will get one drive worth of\n> performance.\n>\n> You should try running two simultaneous processes during reading and see\n> what you get.\n\n \n", "msg_date": "Thu, 31 May 2007 13:00:49 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad RAID1 read performance" } ]
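Luke's two checks — baseline one member disk, then read with two concurrent processes — can be sketched like this; /dev/sda is an assumed member device, and bigfile is the 6 GB file written by the dd test above:

    # single-disk baseline: a healthy 7200 RPM SATA drive should stream roughly 60 MB/s
    dd if=/dev/sda of=/dev/null bs=8k count=786432

    # two concurrent readers against the RAID1; each reads a different half of the
    # file, so neither run is served entirely from the page cache
    time bash -c 'dd if=bigfile of=/dev/null bs=8k count=393216 &
                  dd if=bigfile of=/dev/null bs=8k skip=393216 count=393216 &
                  wait'

With 3 GB of RAM each half is about as large as memory, which is what makes the ~40 MB/s per reader that Albert reports a believable aggregate of ~80 MB/s.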
[ { "msg_contents": "It's created when the data is written to both drives.\n\nThis is standard stuff, very well proven: try googling 'self healing zfs'.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tMichael Stone [mailto:[email protected]]\nSent:\tWednesday, May 30, 2007 11:11 AM Eastern Standard Time\nTo:\[email protected]\nSubject:\tRe: [PERFORM] setting up raid10 with more than 4 drives\n\nOn Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:\n>> I don't see how that's better at all; in fact, it reduces to \n>> exactly the same problem: given two pieces of data which \n>> disagree, which is right? \n>\n>The one that matches the checksum.\n\nAnd you know the checksum is good, how?\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n\n\nRe: [PERFORM] setting up raid10 with more than 4 drives\n\n\n\nIt's created when the data is written to both drives.\n\nThis is standard stuff, very well proven: try googling 'self healing zfs'.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Michael Stone [mailto:[email protected]]\nSent:   Wednesday, May 30, 2007 11:11 AM Eastern Standard Time\nTo:     [email protected]\nSubject:        Re: [PERFORM] setting up raid10 with more than 4 drives\n\nOn Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:\n>> I don't see how that's better at all; in fact, it reduces to\n>> exactly the same problem: given two pieces of data which\n>> disagree, which is right? \n>\n>The one that matches the checksum.\n\nAnd you know the checksum is good, how?\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n               http://archives.postgresql.org", "msg_date": "Wed, 30 May 2007 11:21:21 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "> This is standard stuff, very well proven: try googling 'self healing zfs'.\n\nThe first hit on this search is a demo of ZFS detecting corruption of one of\nthe mirror pair using checksums, very cool:\n \nhttp://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508\nD464883F194061E341F58F4E7E1\n\nThe bad drive is pointed out directly using the checksum and the data\nintegrity is preserved.\n\n- Luke\n\n\n\n\nRe: [PERFORM] setting up raid10 with more than 4 drives\n\n\n\n> This is standard stuff, very well proven: try googling 'self healing zfs'.\n\nThe first hit on this search is a demo of ZFS detecting corruption of one of the mirror pair using checksums, very cool:\n  http://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508D464883F194061E341F58F4E7E1\n\nThe bad drive is pointed out directly using the checksum and the data integrity is preserved.\n\n- Luke", "msg_date": "Wed, 30 May 2007 08:51:45 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:\n> > This is standard stuff, very well proven: try googling 'self healing zfs'.\n> The first hit on this search is a demo of ZFS detecting corruption of one of\n> the mirror pair using checksums, very cool:\n> \n> 
http://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508\n> D464883F194061E341F58F4E7E1\n> \n> The bad drive is pointed out directly using the checksum and the data\n> integrity is preserved.\n\nOne part is corruption. Another is ordering and consistency. ZFS represents\nboth RAID-style storage *and* journal-style file system. I imagine consistency\nand ordering is handled through journalling.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 30 May 2007 11:57:34 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Sorry for posting and disappearing.\n\ni am still not clear what is the best way of throwing in more\ndisks into the system.\ndoes more stripes means more performance (mostly) ?\nalso is there any thumb rule about best stripe size ? (8k,16k,32k...)\n\nregds\nmallah\n\n\n\nOn 5/30/07, [email protected] <[email protected]> wrote:\n> On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:\n> > > This is standard stuff, very well proven: try googling 'self healing zfs'.\n> > The first hit on this search is a demo of ZFS detecting corruption of one of\n> > the mirror pair using checksums, very cool:\n> >\n> > http://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508\n> > D464883F194061E341F58F4E7E1\n> >\n> > The bad drive is pointed out directly using the checksum and the data\n> > integrity is preserved.\n>\n> One part is corruption. Another is ordering and consistency. ZFS represents\n> both RAID-style storage *and* journal-style file system. I imagine consistency\n> and ordering is handled through journalling.\n>\n> Cheers,\n> mark\n>\n> --\n> [email protected] / [email protected] / [email protected] __________________________\n> . . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n> |\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ |\n> | | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n>\n> One ring to rule them all, one ring to find them, one ring to bring them all\n> and in the darkness bind them...\n>\n> http://mark.mielke.cc/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Thu, 31 May 2007 01:28:58 +0530", "msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "Mark,\n\nOn 5/30/07 8:57 AM, \"[email protected]\" <[email protected]> wrote:\n\n> One part is corruption. Another is ordering and consistency. ZFS represents\n> both RAID-style storage *and* journal-style file system. I imagine consistency\n> and ordering is handled through journalling.\n\nYep and versioning, which answers PFC's scenario.\n\nShort answer: ZFS has a very reliable model that uses checksumming and\njournaling along with block versioning to implement \"self healing\". 
There\nare others that do some similar things with checksumming on the SAN HW and\ncooperation with the filesystem.\n\n- Luke\n\n\n", "msg_date": "Wed, 30 May 2007 13:18:09 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:\n> i am still not clear what is the best way of throwing in more\n> disks into the system.\n> does more stripes means more performance (mostly) ?\n> also is there any thumb rule about best stripe size ? (8k,16k,32k...)\n\nIt isn't that simple. RAID1 should theoretically give you the best read\nperformance. If all you care about is read, then \"best performance\" would\nbe to add more mirrors to your array.\n\nFor write performance, RAID0 is the best. I think this is what you mean\nby \"more stripes\".\n\nThis is where RAID 1+0/0+1 come in. To reconcile the above. Your question\nseems to be: I have a RAID 1+0/0+1 system. Should I add disks onto the 0\npart of the array? Or the 1 part of the array?\n\nMy conclusion to you would be: Both, unless you are certain that you load\nis scaled heavily towards read, in which case the 1, or if scaled heavily\ntowards write, then 0.\n\nThen comes the other factors. Do you want redundancy? Then you want 1.\nDo you want capacity? Then you want 0.\n\nThere is no single answer for most people.\n\nFor me, stripe size is the last decision to make, and may be heavily\nsensitive to load patterns. This suggests a trial and error / benchmarking\nrequirement to determine the optimal stripe size for your use.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 30 May 2007 16:41:34 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" }, { "msg_contents": "On 5/31/07, [email protected] <[email protected]> wrote:\n> On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:\n> > i am still not clear what is the best way of throwing in more\n> > disks into the system.\n> > does more stripes means more performance (mostly) ?\n> > also is there any thumb rule about best stripe size ? (8k,16k,32k...)\n>\n> It isn't that simple. RAID1 should theoretically give you the best read\n> performance. If all you care about is read, then \"best performance\" would\n> be to add more mirrors to your array.\n>\n> For write performance, RAID0 is the best. I think this is what you mean\n> by \"more stripes\".\n>\n> This is where RAID 1+0/0+1 come in. To reconcile the above. Your question\n> seems to be: I have a RAID 1+0/0+1 system. Should I add disks onto the 0\n> part of the array? Or the 1 part of the array?\n\n> My conclusion to you would be: Both, unless you are certain that you load\n> is scaled heavily towards read, in which case the 1, or if scaled heavily\n> towards write, then 0.\n\nthanks . this answers to my query. all the time i was thinking of 1+0\nonly failing to observe the 0+1 part in it.\n\n>\n> Then comes the other factors. Do you want redundancy? 
Then you want 1.\n> Do you want capacity? Then you want 0.\n\nOk.\n\n>\n> There is no single answer for most people.\n>\n> For me, stripe size is the last decision to make, and may be heavily\n> sensitive to load patterns. This suggests a trial and error / benchmarking\n> requirement to determine the optimal stripe size for your use.\n\nthanks.\nmallah.\n\n>\n> Cheers,\n> mark\n>\n> --\n> [email protected] / [email protected] / [email protected] __________________________\n> . . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n> |\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ |\n> | | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n>\n> One ring to rule them all, one ring to find them, one ring to bring them all\n> and in the darkness bind them...\n>\n> http://mark.mielke.cc/\n>\n>\n", "msg_date": "Thu, 31 May 2007 07:24:40 +0530", "msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setting up raid10 with more than 4 drives" } ]
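Two short sketches of what was described above — the ZFS self-healing check Luke links to, and the trial-and-error chunk-size testing mark recommends. Pool, device and partition names are illustrative:

    # ZFS (Solaris): mirrored pool; a scrub walks every block, verifies the
    # checksums and rewrites a bad copy from the good side of the mirror
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    zpool scrub tank
    zpool status -v tank

    # Linux md: RAID10 with an explicit chunk size; rebuild and re-benchmark with
    # 64, 128, 256, ... and keep whichever chunk size wins on your own workload
    mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=64 /dev/sd[bcde]1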
[ { "msg_contents": "I have a question regarding \"connection for xxyy established\" The situation\nbelow shows records being added to 3 tables which are heavily populated. We\nnever \"update\" any table, only read from them. Or we delete a full-day worth\nof records from them.\n\nThe question is: Is this method of repeatedly establishing and\nre-establishing database connections with the same 3 tables efficient? As\nin, is there a better way to add data?\n\nApr 25 03:38:47 saw alert_mgr[33696]: Database connection for Tbl_A\nestablished\nApr 25 03:38:51 saw alert_mgr[33698]: Database connection for Tbl_B\nestablished\nApr 25 03:38:54 saw alert_mgr[33700]: Database connection for Tbl_A\nestablished\nApr 25 03:38:55 saw alert_mgr[25182]: user_new_locked: added 64 entries\n(66880 total)\nApr 25 03:38:57 saw alert_mgr[33702]: Database connection for Tbl_B\nestablished\nApr 25 03:38:59 saw alert_mgr[33704]: Database connection for Tbl_A\nestablished\nApr 25 03:39:02 saw alert_mgr[33706]: Database connection for Tbl_B\nestablished\nApr 25 03:39:04 saw alert_mgr[33708]: Database connection for Tbl_C\nestablished\nApr 25 03:39:05 saw alert_mgr[33710]: Database connection for Tbl_A\nestablished\nApr 25 03:39:06 saw alert_mgr[25182]: user_new_locked: added 64 entries\n(66944 total)\nApr 25 03:39:06 saw alert_mgr[33712]: Database connection for Tbl_B\nestablished\nApr 25 03:39:08 saw alert_mgr[33714]: Database connection for Tbl_A\nestablished\nApr 25 03:39:11 saw alert_mgr[33716]: Database connection for Tbl_B\nestablished\nApr 25 03:39:13 saw alert_mgr[33718]: Database connection for Tbl_A\nestablished\nApr 25 03:39:15 saw alert_mgr[25182]: user_new_locked: added 64 entries\n(67008 total)\nApr 25 03:39:18 saw alert_mgr[33720]: Database connection for Tbl_B\nestablished\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI have a question regarding \"connection for xxyy established\" The\nsituation below shows records being added to 3 tables which are heavily\npopulated. We never \"update\" any table, only read from them. Or we\ndelete a full-day worth of records from them. 
\n\nThe question is:  Is this method of repeatedly establishing and\nre-establishing database connections with the same 3 tables efficient?\nAs in, is there a better way to add data?\nApr 25 03:38:47 saw alert_mgr[33696]: Database connection for Tbl_A established\nApr 25 03:38:51 saw alert_mgr[33698]: Database connection for Tbl_B established\nApr 25 03:38:54 saw alert_mgr[33700]: Database connection for Tbl_A established\nApr 25 03:38:55 saw alert_mgr[25182]: user_new_locked: added 64 entries (66880 total)\nApr 25 03:38:57 saw alert_mgr[33702]: Database connection for Tbl_B established\nApr 25 03:38:59 saw alert_mgr[33704]: Database connection for Tbl_A established\nApr 25 03:39:02 saw alert_mgr[33706]: Database connection for Tbl_B established\nApr 25 03:39:04 saw alert_mgr[33708]: Database connection for Tbl_C established\nApr 25 03:39:05 saw alert_mgr[33710]: Database connection for Tbl_A established\nApr 25 03:39:06 saw alert_mgr[25182]: user_new_locked: added 64 entries (66944 total)\nApr 25 03:39:06 saw alert_mgr[33712]: Database connection for Tbl_B established\nApr 25 03:39:08 saw alert_mgr[33714]: Database connection for Tbl_A established\nApr 25 03:39:11 saw alert_mgr[33716]: Database connection for Tbl_B established\nApr 25 03:39:13 saw alert_mgr[33718]: Database connection for Tbl_A established\nApr 25 03:39:15 saw alert_mgr[25182]: user_new_locked: added 64 entries (67008 total)\nApr 25 03:39:18 saw alert_mgr[33720]: Database connection for Tbl_B established\n-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Wed, 30 May 2007 14:45:07 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Database connection for Tbl_B established" }, { "msg_contents": "\"Y Sidhu\" <[email protected]> writes:\n> The question is: Is this method of repeatedly establishing and\n> re-establishing database connections with the same 3 tables efficient?\n\nNo. Launching a new backend process is a fairly expensive proposition;\nif you're striving for performance you don't want to do it for just one\nor two queries. Look into connection pooling ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 May 2007 18:32:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database connection for Tbl_B established " }, { "msg_contents": "You are referring to pgpool? BTW, thanks for this insight.\n\nYudhvir\n========\n\nOn 5/30/07, Tom Lane <[email protected]> wrote:\n>\n> \"Y Sidhu\" <[email protected]> writes:\n> > The question is: Is this method of repeatedly establishing and\n> > re-establishing database connections with the same 3 tables efficient?\n>\n> No. Launching a new backend process is a fairly expensive proposition;\n> if you're striving for performance you don't want to do it for just one\n> or two queries. Look into connection pooling ...\n>\n> regards, tom lane\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nYou are referring to pgpool? BTW, thanks for this insight. \n\nYudhvir\n========On 5/30/07, Tom Lane <[email protected]> wrote:\n\"Y Sidhu\" <[email protected]> writes:> The question is:  Is this method of repeatedly establishing and> re-establishing database connections with the same 3 tables efficient?\nNo.  Launching a new backend process is a fairly expensive proposition;if you're striving for performance you don't want to do it for just oneor two queries.  
Look into connection pooling ...\n                        regards,\ntom lane-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Wed, 30 May 2007 16:36:50 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database connection for Tbl_B established" } ]
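A sketch of the pooling Tom points at, in pgpool terms since that is what Yudhvir mentions; the parameter names below follow pgpool-II's pgpool.conf and all values are illustrative — the essential change is that alert_mgr connects to the pooler's port and is handed an already-running backend:

    # pgpool.conf (excerpt)
    listen_addresses = '*'
    port = 9999                   # point alert_mgr here instead of 5432
    backend_hostname0 = 'localhost'
    backend_port0 = 5432
    connection_cache = on         # backends are kept and reused, not re-forked per insert
    num_init_children = 32        # concurrent client connections accepted
    max_pool = 4                  # cached backend connections per child

If all of the inserts come from the single alert_mgr daemon, an even simpler route is to open one connection at startup, keep it for the life of the process, and batch the rows with multi-row INSERTs or COPY.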
[ { "msg_contents": "Can you help me appending two table values into single table without\nperforming INSERT?\nNote that these tables are of same schema.\n\nIs there any sql command is supported?\n\nThanks,\nHanu\n\n\nOn 5/29/07, Alvaro Herrera <[email protected]> wrote:\n>\n> Michal Szymanski wrote:\n> > There is another strange thing. We have two versions of our test\n> > >>environment one with production DB copy and second genereated with\n> > >>minimal data set and it is odd that update presented above on copy of\n> > >>production is executing 170ms but on small DB it executing 6s !!!!\n> > >\n> > >How are you vacuuming the tables?\n> > >\n> > Using pgAdmin (DB is installed on my laptop) and I use this tool for\n> > vaccuminh, I do not think that vaccuming can help because I've tested on\n> > both database just after importing.\n>\n> I think you are misunderstanding the importance of vacuuming the table.\n> Try this: on a different terminal from the one running the test, run a\n> VACUUM on the updated table with vacuum_cost_delay set to 20, on an\n> infinite loop. Keep this running while you do your update test. Vary\n> the vacuum_cost_delay and measure the average/min/max UPDATE times.\n> Also try putting a short sleep on the infinite VACUUM loop and see how\n> its length affects the UPDATE times.\n>\n> One thing not clear to me is if your table is in a clean state. Before\n> running this test, do a TRUNCATE and import the data again. This will\n> get rid of any dead space that may be hurting your measurements.\n>\n> --\n> Alvaro Herrera\n> http://www.advogato.org/person/alvherre\n> \"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n> Quite refreshing in a world of \"weekend drag racer\" developers.\"\n> (Scott Marlowe)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n-- \nWith best regards,\nHanumanthappa Kurubar\nMobile: 98 801 800 65\n\nCan you help me appending two table values into single table without performing INSERT? \nNote that these tables are of same schema.\n \nIs there any sql command is supported?\n \nThanks,\nHanu \nOn 5/29/07, Alvaro Herrera <[email protected]> wrote:\nMichal Szymanski wrote:> There is another strange thing. We have two versions of our test> >>environment one with production DB copy and second genereated with\n> >>minimal data set and it is odd that update presented above on copy of> >>production is executing 170ms but on small DB it executing 6s !!!!> >> >How are you vacuuming the tables?\n> >> Using pgAdmin (DB is installed on my laptop) and I use this tool for> vaccuminh, I do not think that vaccuming can help because I've tested on> both database just after importing.\nI think you are misunderstanding the importance of vacuuming the table.Try this: on a different terminal from the one running the test, run aVACUUM on the updated table with vacuum_cost_delay set to 20, on an\ninfinite loop.  Keep this running while you do your update test.  Varythe vacuum_cost_delay and measure the average/min/max UPDATE times.Also try putting a short sleep on the infinite VACUUM loop and see howits length affects the UPDATE times.\nOne thing not clear to me is if your table is in a clean state.  Beforerunning this test, do a TRUNCATE and import the data again.  
This willget rid of any dead space that may be hurting your measurements.\n--Alvaro Herrera                        http://www.advogato.org/person/alvherre\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\nQuite refreshing in a world of \"weekend drag racer\" developers.\"(Scott Marlowe)---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives?\n              http://archives.postgresql.org-- With best regards,Hanumanthappa KurubarMobile: 98 801 800 65", "msg_date": "Wed, 30 May 2007 23:36:48 -0400", "msg_from": "\"Hanu Kurubar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Append table" }, { "msg_contents": "Any luck on appending two table in PostgreSQL.\nBelow are two table with same schema that have different values. In this\ncase EmpID is unique value.\n\ntabelA\n------------\nEmpId (Int) EmpName (String)\n1 Hanu\n2 Alvaro\n\n\ntabelB\n------------\nEmpId (Int) EmpName (String)\n3 Michal\n4 Tom\n\n\nI would be looking below output after appending tableA with tableB. Is this\npossible in PostgreSQL?\n\n\ntabelA\n------------\nEmpId (Int) EmpName (String)\n1 Hanu\n2 Alvaro\n3 Michal\n4 Tom\n\n\nThanks,\nHanu\n\n\nOn 5/30/07, Hanu Kurubar <[email protected]> wrote:\n>\n> Can you help me appending two table values into single table without\n> performing INSERT?\n> Note that these tables are of same schema.\n>\n> Is there any sql command is supported?\n>\n> Thanks,\n> Hanu\n>\n>\n> On 5/29/07, Alvaro Herrera <[email protected]> wrote:\n> >\n> > Michal Szymanski wrote:\n> > > There is another strange thing. We have two versions of our test\n> > > >>environment one with production DB copy and second genereated with\n> > > >>minimal data set and it is odd that update presented above on copy\n> > of\n> > > >>production is executing 170ms but on small DB it executing 6s !!!!\n> > > >\n> > > >How are you vacuuming the tables?\n> > > >\n> > > Using pgAdmin (DB is installed on my laptop) and I use this tool for\n> > > vaccuminh, I do not think that vaccuming can help because I've tested\n> > on\n> > > both database just after importing.\n> >\n> > I think you are misunderstanding the importance of vacuuming the table.\n> > Try this: on a different terminal from the one running the test, run a\n> > VACUUM on the updated table with vacuum_cost_delay set to 20, on an\n> > infinite loop. Keep this running while you do your update test. Vary\n> > the vacuum_cost_delay and measure the average/min/max UPDATE times.\n> > Also try putting a short sleep on the infinite VACUUM loop and see how\n> > its length affects the UPDATE times.\n> >\n> > One thing not clear to me is if your table is in a clean state. Before\n> > running this test, do a TRUNCATE and import the data again. This will\n> > get rid of any dead space that may be hurting your measurements.\n> >\n> > --\n> > Alvaro Herrera\n> > http://www.advogato.org/person/alvherre\n> > \"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n> > Quite refreshing in a world of \"weekend drag racer\" developers.\"\n> > (Scott Marlowe)\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n>\n> --\n> With best regards,\n> Hanumanthappa Kurubar\n> Mobile: 98 801 800 65\n\n\n\n\n-- \nWith best regards,\nHanumanthappa Kurubar\nMobile: 98 801 800 65\n\nAny luck on appending two table in PostgreSQL. 
\nBelow are two table with same schema that have different values. In this case EmpID is unique value.\n \ntabelA\n------------\nEmpId (Int) EmpName (String)\n1               Hanu\n2               Alvaro\n \n \ntabelB\n------------\nEmpId (Int) EmpName (String)\n3               Michal\n4               Tom\n \n \nI would be looking below output after appending tableA with tableB. Is this possible in PostgreSQL?\n \ntabelA\n------------\nEmpId (Int) EmpName (String)\n1               Hanu\n2               Alvaro\n3               Michal\n4               Tom\n \nThanks,\nHanu\n \nOn 5/30/07, Hanu Kurubar <[email protected]> wrote:\n\nCan you help me appending two table values into single table without performing INSERT? \nNote that these tables are of same schema.\n \nIs there any sql command is supported?\n \nThanks,\nHanu \nOn 5/29/07, Alvaro Herrera <[email protected]\n> wrote:\nMichal Szymanski wrote:> There is another strange thing. We have two versions of our test> >>environment one with production DB copy and second genereated with \n> >>minimal data set and it is odd that update presented above on copy of> >>production is executing 170ms but on small DB it executing 6s !!!!> >> >How are you vacuuming the tables? \n> >> Using pgAdmin (DB is installed on my laptop) and I use this tool for> vaccuminh, I do not think that vaccuming can help because I've tested on> both database just after importing.\nI think you are misunderstanding the importance of vacuuming the table.Try this: on a different terminal from the one running the test, run aVACUUM on the updated table with vacuum_cost_delay set to 20, on an\ninfinite loop.  Keep this running while you do your update test.  Varythe vacuum_cost_delay and measure the average/min/max UPDATE times.Also try putting a short sleep on the infinite VACUUM loop and see howits length affects the UPDATE times. \nOne thing not clear to me is if your table is in a clean state.  Beforerunning this test, do a TRUNCATE and import the data again.  This willget rid of any dead space that may be hurting your measurements. \n--Alvaro Herrera                        http://www.advogato.org/person/alvherre\"The Postgresql hackers have what I call a \"NASA space shot\" mentality. \nQuite refreshing in a world of \"weekend drag racer\" developers.\"(Scott Marlowe)---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives? \n              http://archives.postgresql.org\n-- With best regards,Hanumanthappa KurubarMobile: 98 801 800 65 -- With best regards,Hanumanthappa KurubarMobile: 98 801 800 65", "msg_date": "Sat, 2 Jun 2007 11:52:08 -0400", "msg_from": "\"Hanu Kurubar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Append table" }, { "msg_contents": "There are two solutions:\nYou can insert all data from tableB in tableA using a simple insert \nselect-statement like so:\nINSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;\n\nOr you can visually combine them without actually putting the records in \na single table. 
That can be with a normal select-union statement or with \na view, something like this:\nSELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName FROM tabelB;\n\nYou can use this query as a table-generating subquery in a FROM-clause, \nlike so:\n\nSELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName \nFROM tabelB) as emps WHERE EmpId = 1;\n\nOr with the view:\nCREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION EmpID, \nEmpName FROM tabelB;\n\nAnd then you can use the view as if it was a normal table (altough \ninserts are not possible without applying rules to them, see the manual \nfor that).\n\nSELECT * FROM tabelC WHERE EmpId = 1;\n\nBest regards,\n\nArjen\n\nOn 2-6-2007 17:52 Hanu Kurubar wrote:\n> Any luck on appending two table in PostgreSQL.\n> Below are two table with same schema that have different values. In this \n> case EmpID is unique value.\n> \n> tabelA\n> ------------\n> EmpId (Int) EmpName (String)\n> 1 Hanu\n> 2 Alvaro\n> \n> \n> tabelB\n> ------------\n> EmpId (Int) EmpName (String)\n> 3 Michal\n> 4 Tom\n> \n> \n> I would be looking below output after appending tableA with tableB. Is \n> this possible in PostgreSQL?\n> \n> \n> tabelA\n> ------------\n> EmpId (Int) EmpName (String)\n> 1 Hanu\n> 2 Alvaro\n> 3 Michal\n> 4 Tom\n> \n> \n> \n> Thanks,\n> Hanu\n> \n> \n> On 5/30/07, *Hanu Kurubar* <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Can you help me appending two table values into single table without\n> performing INSERT?\n> Note that these tables are of same schema.\n> \n> Is there any sql command is supported?\n> \n> Thanks,\n> Hanu\n> \n> \n> On 5/29/07, *Alvaro Herrera* <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Michal Szymanski wrote:\n> > There is another strange thing. We have two versions of our test\n> > >>environment one with production DB copy and second\n> genereated with\n> > >>minimal data set and it is odd that update presented above\n> on copy of\n> > >>production is executing 170ms but on small DB it executing\n> 6s !!!!\n> > >\n> > >How are you vacuuming the tables?\n> > >\n> > Using pgAdmin (DB is installed on my laptop) and I use this\n> tool for\n> > vaccuminh, I do not think that vaccuming can help because\n> I've tested on\n> > both database just after importing.\n> \n> I think you are misunderstanding the importance of vacuuming the\n> table.\n> Try this: on a different terminal from the one running the test,\n> run a\n> VACUUM on the updated table with vacuum_cost_delay set to 20, on an\n> infinite loop. Keep this running while you do your update\n> test. Vary\n> the vacuum_cost_delay and measure the average/min/max UPDATE times.\n> Also try putting a short sleep on the infinite VACUUM loop and\n> see how\n> its length affects the UPDATE times.\n> \n> One thing not clear to me is if your table is in a clean\n> state. Before\n> running this test, do a TRUNCATE and import the data\n> again. 
This will\n> get rid of any dead space that may be hurting your measurements.\n> \n> --\n> Alvaro\n> Herrera http://www.advogato.org/person/alvherre\n> \"The Postgresql hackers have what I call a \"NASA space shot\"\n> mentality.\n> Quite refreshing in a world of \"weekend drag racer\" developers.\"\n> (Scott Marlowe)\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> <http://archives.postgresql.org/>\n> \n> \n> \n> \n> -- \n> With best regards,\n> Hanumanthappa Kurubar\n> Mobile: 98 801 800 65 \n> \n> \n> \n> \n> -- \n> With best regards,\n> Hanumanthappa Kurubar\n> Mobile: 98 801 800 65\n", "msg_date": "Sat, 02 Jun 2007 18:04:19 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append table" }, { "msg_contents": "\"Arjen van der Meijden\" <[email protected]> writes:\n\n> There are two solutions:\n...\n> Or you can visually combine them without actually putting the records in a\n> single table. That can be with a normal select-union statement or with a view,\n> something like this:\n> SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName FROM tabelB;\n\nIf you're sure the two sets are distinct or you want to get any duplicates and\nnot eliminate them then if you went with this option you would want to use\n\"UNION ALL\" not just a plain union.\n\nIn SQL UNION has to remove duplicates which often involves gathering all the\nrecords and performing a big sort and lots of extra work. UNION ALL is much\nfaster and can start returning records right away.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Sat, 02 Jun 2007 17:36:20 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append table" }, { "msg_contents": "Thanks for quick answer.\n\nPrevisoly I have exported table records into employee.csv file using COPY\ncommand which has 36,00,0000 records.\n\nAfter that I have added few more entries in database and EmpId is\nincremented.\n\nI want put the exported data back into database with re-generating new\nEmpId. Like importing back all data without harming existing data.\n\nIf I choose INSERT opeartion, it is very time consuming.\n\nI am thinking of creating new table (dummy table) and copying all data (COPY\nfrom command) into that table and maniplate the data so that EmpId is unique\nin parent table and dummy table and then append these two tables.\n\nI feel creating views and joins will make things complex.\n\nDo you have inputs on this?\n\nOn 6/2/07, Arjen van der Meijden <[email protected]> wrote:\n>\n> There are two solutions:\n> You can insert all data from tableB in tableA using a simple insert\n> select-statement like so:\n> INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;\n>\n> Or you can visually combine them without actually putting the records in\n> a single table. 
That can be with a normal select-union statement or with\n> a view, something like this:\n> SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName FROM tabelB;\n>\n> You can use this query as a table-generating subquery in a FROM-clause,\n> like so:\n>\n> SELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName\n> FROM tabelB) as emps WHERE EmpId = 1;\n>\n> Or with the view:\n> CREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION EmpID,\n> EmpName FROM tabelB;\n>\n> And then you can use the view as if it was a normal table (altough\n> inserts are not possible without applying rules to them, see the manual\n> for that).\n>\n> SELECT * FROM tabelC WHERE EmpId = 1;\n>\n> Best regards,\n>\n> Arjen\n>\n> On 2-6-2007 17:52 Hanu Kurubar wrote:\n> > Any luck on appending two table in PostgreSQL.\n> > Below are two table with same schema that have different values. In this\n> > case EmpID is unique value.\n> >\n> > tabelA\n> > ------------\n> > EmpId (Int) EmpName (String)\n> > 1 Hanu\n> > 2 Alvaro\n> >\n> >\n> > tabelB\n> > ------------\n> > EmpId (Int) EmpName (String)\n> > 3 Michal\n> > 4 Tom\n> >\n> >\n> > I would be looking below output after appending tableA with tableB. Is\n> > this possible in PostgreSQL?\n> >\n> >\n> > tabelA\n> > ------------\n> > EmpId (Int) EmpName (String)\n> > 1 Hanu\n> > 2 Alvaro\n> > 3 Michal\n> > 4 Tom\n> >\n> >\n> >\n> > Thanks,\n> > Hanu\n> >\n> >\n> > On 5/30/07, *Hanu Kurubar* <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Can you help me appending two table values into single table without\n> > performing INSERT?\n> > Note that these tables are of same schema.\n> >\n> > Is there any sql command is supported?\n> >\n> > Thanks,\n> > Hanu\n> >\n> >\n> > On 5/29/07, *Alvaro Herrera* <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Michal Szymanski wrote:\n> > > There is another strange thing. We have two versions of our\n> test\n> > > >>environment one with production DB copy and second\n> > genereated with\n> > > >>minimal data set and it is odd that update presented above\n> > on copy of\n> > > >>production is executing 170ms but on small DB it executing\n> > 6s !!!!\n> > > >\n> > > >How are you vacuuming the tables?\n> > > >\n> > > Using pgAdmin (DB is installed on my laptop) and I use this\n> > tool for\n> > > vaccuminh, I do not think that vaccuming can help because\n> > I've tested on\n> > > both database just after importing.\n> >\n> > I think you are misunderstanding the importance of vacuuming the\n> > table.\n> > Try this: on a different terminal from the one running the test,\n> > run a\n> > VACUUM on the updated table with vacuum_cost_delay set to 20, on\n> an\n> > infinite loop. Keep this running while you do your update\n> > test. Vary\n> > the vacuum_cost_delay and measure the average/min/max UPDATE\n> times.\n> > Also try putting a short sleep on the infinite VACUUM loop and\n> > see how\n> > its length affects the UPDATE times.\n> >\n> > One thing not clear to me is if your table is in a clean\n> > state. Before\n> > running this test, do a TRUNCATE and import the data\n> > again. 
This will\n> > get rid of any dead space that may be hurting your measurements.\n> >\n> > --\n> > Alvaro\n> > Herrera\n> http://www.advogato.org/person/alvherre\n> > \"The Postgresql hackers have what I call a \"NASA space shot\"\n> > mentality.\n> > Quite refreshing in a world of \"weekend drag racer\" developers.\"\n> > (Scott Marlowe)\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> > <http://archives.postgresql.org/>\n> >\n> >\n> >\n> >\n> > --\n> > With best regards,\n> > Hanumanthappa Kurubar\n> > Mobile: 98 801 800 65\n> >\n> >\n> >\n> >\n> > --\n> > With best regards,\n> > Hanumanthappa Kurubar\n> > Mobile: 98 801 800 65\n>\n\n\n\n-- \nWith best regards,\nHanumanthappa Kurubar\nMobile: 98 801 800 65\n\nThanks for quick answer.\n \nPrevisoly I have exported table records into employee.csv file using COPY command which has 36,00,0000 records.\n \nAfter that I have added few more entries in database and EmpId is incremented.\n \nI want put the exported data back into database with re-generating new EmpId. Like importing back all data without harming existing data.\n \nIf I choose INSERT opeartion, it is very time consuming.\n \nI am thinking of creating new table (dummy table) and copying all data (COPY from command) into that table and maniplate the data so that EmpId is unique in parent table and dummy table and then append these two tables.\n\n \nI feel creating views and joins will make things complex.\n \nDo you have inputs on this? \nOn 6/2/07, Arjen van der Meijden <[email protected]> wrote:\nThere are two solutions:You can insert all data from tableB in tableA using a simple insertselect-statement like so:\nINSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;Or you can visually combine them without actually putting the records ina single table. That can be with a normal select-union statement or witha view, something like this:\nSELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName FROM tabelB;You can use this query as a table-generating subquery in a FROM-clause,like so:SELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName\nFROM tabelB) as emps WHERE EmpId = 1;Or with the view:CREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION EmpID,EmpName FROM tabelB;And then you can use the view as if it was a normal table (altough\ninserts are not possible without applying rules to them, see the manualfor that).SELECT * FROM tabelC WHERE EmpId = 1;Best regards,ArjenOn 2-6-2007 17:52 Hanu Kurubar wrote:> Any luck on appending two table in PostgreSQL.\n> Below are two table with same schema that have different values. In this> case EmpID is unique value.>> tabelA> ------------> EmpId (Int) EmpName (String)> 1               Hanu\n> 2               Alvaro>>> tabelB> ------------> EmpId (Int) EmpName (String)> 3               Michal> 4               Tom>>> I would be looking below output after appending tableA with tableB. 
Is\n> this possible in PostgreSQL?>>> tabelA> ------------> EmpId (Int) EmpName (String)> 1               Hanu> 2               Alvaro> 3               Michal\n> 4               Tom>>>> Thanks,> Hanu>>> On 5/30/07, *Hanu Kurubar* <[email protected]> <mailto:\[email protected]>> wrote:>>     Can you help me appending two table values into single table without>     performing INSERT?>     Note that these tables are of same schema.>\n>     Is there any sql command is supported?>>     Thanks,>     Hanu>>>     On 5/29/07, *Alvaro Herrera* <[email protected]\n>     <mailto:[email protected]>> wrote:>>         Michal Szymanski wrote:>          > There is another strange thing. We have two versions of our test\n>          > >>environment one with production DB copy and second>         genereated with>          > >>minimal data set and it is odd that update presented above>         on copy of\n>          > >>production is executing 170ms but on small DB it executing>         6s !!!!>          > >>          > >How are you vacuuming the tables?>          > >\n>          > Using pgAdmin (DB is installed on my laptop) and I use this>         tool for>          > vaccuminh, I do not think that vaccuming can help because>         I've tested on\n>          > both database just after importing.>>         I think you are misunderstanding the importance of vacuuming the>         table.>         Try this: on a different terminal from the one running the test,\n>         run a>         VACUUM on the updated table with vacuum_cost_delay set to 20, on an>         infinite loop.  Keep this running while you do your update>         test.  Vary>         the vacuum_cost_delay and measure the average/min/max UPDATE times.\n>         Also try putting a short sleep on the infinite VACUUM loop and>         see how>         its length affects the UPDATE times.>>         One thing not clear to me is if your table is in a clean\n>         state.  Before>         running this test, do a TRUNCATE and import the data>         again.  This will>         get rid of any dead space that may be hurting your measurements.>\n>         -->         Alvaro>         Herrera                        http://www.advogato.org/person/alvherre>         \"The Postgresql hackers have what I call a \"NASA space shot\"\n>         mentality.>         Quite refreshing in a world of \"weekend drag racer\" developers.\">         (Scott Marlowe)>>         ---------------------------(end of>         broadcast)---------------------------\n>         TIP 4: Have you searched our list archives?>>                       http://archives.postgresql.org>         <\nhttp://archives.postgresql.org/>>>>>>     -->     With best regards,>     Hanumanthappa Kurubar>     Mobile: 98 801 800 65>>>>\n> --> With best regards,> Hanumanthappa Kurubar> Mobile: 98 801 800 65-- With best regards,Hanumanthappa KurubarMobile: 98 801 800 65", "msg_date": "Sat, 2 Jun 2007 15:49:00 -0400", "msg_from": "\"Hanu Kurubar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Append table" }, { "msg_contents": "Arjen van der Meijden wrote:\n> There are two solutions:\n> You can insert all data from tableB in tableA using a simple insert \n> select-statement like so:\n> INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;\n>\n> Or you can visually combine them without actually putting the records \n> in a single table. 
That can be with a normal select-union statement or \n> with a view, something like this:\n> SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName FROM tabelB;\nSince they both have the same schema, you could also combine them by \ncreating a parent table and making both tables children. Check out \nPostgreSQL's inheritance features. To make an existing table a child \nyou'll need to be using PostgreSQL 8.2 or newer.\n\ncreate table emp_rollup (like tabelA);\nalter table tabelA inherits emp_rollup;\nalter table tabelB inherits emp_rollup;\n\nNow issue your queries against emp_rollup... You could also just make \ntabelB a child of tabelA:\n\nalter table tabelB inherits tabelA;\n\nBut that would mean that if you wanted to query only tabelA you'd have \nto modify your query syntax.\n\nselect * from ONLY tabelA;\n\nWould only retrieve records from tabelA ...\n\n\nYou could also allow PostgreSQL to limit its index usage based on the \nEmpID field by defining some table constraints and enabling constraint \nexclusion.\n>\n> You can use this query as a table-generating subquery in a \n> FROM-clause, like so:\n>\n> SELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION EmpID, EmpName \n> FROM tabelB) as emps WHERE EmpId = 1;\n>\n> Or with the view:\n> CREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION EmpID, \n> EmpName FROM tabelB;\n>\n> And then you can use the view as if it was a normal table (altough \n> inserts are not possible without applying rules to them, see the \n> manual for that).\n>\n> SELECT * FROM tabelC WHERE EmpId = 1;\n>\n> Best regards,\n>\n> Arjen\n>\n> On 2-6-2007 17:52 Hanu Kurubar wrote:\n>> Any luck on appending two table in PostgreSQL.\n>> Below are two table with same schema that have different values. In \n>> this case EmpID is unique value.\n>> \n>> tabelA\n>> ------------\n>> EmpId (Int) EmpName (String)\n>> 1 Hanu\n>> 2 Alvaro\n>> \n>> \n>> tabelB\n>> ------------\n>> EmpId (Int) EmpName (String)\n>> 3 Michal\n>> 4 Tom\n>> \n>> \n>> I would be looking below output after appending tableA with tableB. \n>> Is this possible in PostgreSQL?\n>>\n>> \n>> tabelA\n>> ------------\n>> EmpId (Int) EmpName (String)\n>> 1 Hanu\n>> 2 Alvaro\n>> 3 Michal\n>> 4 Tom\n>>\n\n\n-- \nChander Ganesan\nThe Open Technology Group\nOne Copley Parkway, Suite 210\nMorrisville, NC 27560\nPhone: 877-258-8987/919-463-0999\nhttp://www.otg-nc.com\nExpert PostgreSQL Training - http://test.otg-nc.com/training-courses/coursedetail.php?courseid=40&cat_id=8\n\n", "msg_date": "Tue, 05 Jun 2007 16:11:51 -0400", "msg_from": "Chander Ganesan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append table" } ]
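To tie the suggestions in this thread together, here is a minimal sketch written against the thread's example objects (tabelA, tabelB, EmpId, EmpName). The staging table emp_staging, the sequence emp_id_seq and the file path are assumptions introduced only for illustration, and note that each branch of a UNION needs its own SELECT keyword, which the shorthand quoted above leaves out after the first branch.

-- 1) Physically append tabelB to tabelA with one set-based statement:
INSERT INTO tabelA (EmpId, EmpName)
SELECT EmpId, EmpName FROM tabelB;

-- 2) Combine the tables without copying rows; as noted above, UNION ALL
--    skips the duplicate-eliminating sort that plain UNION performs:
CREATE VIEW all_emps AS
SELECT EmpId, EmpName FROM tabelA
UNION ALL
SELECT EmpId, EmpName FROM tabelB;

-- 3) Re-import an earlier COPY export while generating fresh EmpId values
--    so they cannot collide with rows added after the export was taken:
CREATE TABLE emp_staging (EmpId integer, EmpName text);
COPY emp_staging FROM '/tmp/employee.csv' WITH CSV;   -- server-side path is an assumption
INSERT INTO tabelA (EmpId, EmpName)
SELECT nextval('emp_id_seq'), EmpName FROM emp_staging;   -- emp_id_seq is assumed
DROP TABLE emp_staging;

The COPY plus INSERT ... SELECT route in (3) addresses the CSV re-import question directly and is normally much faster than issuing row-by-row INSERTs, since the id regeneration happens in a single set-based statement.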
[ { "msg_contents": "Hello,\n\nI am going to build a new PostgreSQL dedicated server, on FreeBSD. Before it goes to production \nservice I need to make some tests and take configuration decisions, focused on my application needs. \nUsual thing. One of them is selection of one of 32 or 64 bit versions of both OS and PG. What I am \ngoing to do is to install both versions on different filesystems of the same machine. As a \nconsequence I would also have to deal with two independent copies of my real databases on which I \nwant to perfrom my tests. However, the databases are rather large, so I am thinking about \npossibilities of not to have to restore two copies of my data, but use just one instead, and sharing \nit between the 32 and 64 versions, across reboots.\n\nWould that scenario work, or I am simply too naive considering it?\n\n\nThanks\n\nIreneusz Pluta\n\nPS.\nOr rather, instead of testing 32/64 bit, I would just simply go with 64 bit, considering that the \nserver has quad core X5355 Xeon and 16GB RAM?\n\n", "msg_date": "Thu, 31 May 2007 15:21:40 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "DB cluster sharing between 32 and 64 bit software versions" }, { "msg_contents": "In response to Ireneusz Pluta <[email protected]>:\n\n> Hello,\n> \n> I am going to build a new PostgreSQL dedicated server, on FreeBSD. Before it goes to production \n> service I need to make some tests and take configuration decisions, focused on my application needs. \n> Usual thing. One of them is selection of one of 32 or 64 bit versions of both OS and PG. What I am \n> going to do is to install both versions on different filesystems of the same machine. As a \n> consequence I would also have to deal with two independent copies of my real databases on which I \n> want to perfrom my tests. However, the databases are rather large, so I am thinking about \n> possibilities of not to have to restore two copies of my data, but use just one instead, and sharing \n> it between the 32 and 64 versions, across reboots.\n> \n> Would that scenario work, or I am simply too naive considering it?\n\nIt won't work, unfortunately. The on-disk representation of the data is\ndifferent between ia32 and amd64.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 31 May 2007 09:36:37 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB cluster sharing between 32 and 64 bit software\n versions" } ]
[ { "msg_contents": "Folks,\n\njust wanted to share some benchmark results from one long performance\nstudy comparing MySQL, PostgreSQL and Oracle transactions throughput\nand engine scalability on T2000 and V890 (under Solaris). Oracle\nresults are removed (of course :), but other are quite interesting...\nFindings are presented as it, following step by step learning and\ntuning curve :)\n\nSo well, you may find:\n - http://dimitrik.free.fr/db_STRESS.html - Benchmark kit description\n - http://dimitrik.free.fr/db_STRESS_BMK_Part1.html -- first main part\n - http://dimitrik.free.fr/db_STRESS_BMK_Part2_ZFS.html -- second\npart including ZFS specific tuning\n\nTests were executed in Mar/Apr.2007 with latest v8.2.3 on that time.\nDue limited spare time I was able to publish results only now...\nAny comments are welcome! :)\n\nBest regards!\n-Dimitri\n", "msg_date": "Thu, 31 May 2007 21:28:19 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Some info to share: db_STRESS Benchmark results" }, { "msg_contents": "On 5/31/07, Dimitri <[email protected]> wrote:\n> just wanted to share some benchmark results from one long performance\n> study comparing MySQL, PostgreSQL and Oracle transactions throughput\n> and engine scalability on T2000 and V890 (under Solaris).\n\nInteresting, if awfully cryptic. The lack of axis labels, the lack of\naxis normalization, and the fact that you put the graphs for different\ndatabases and parameters on separate pages makes it rather hard to\ncompare the various results.\n\nAlexander.\n", "msg_date": "Thu, 31 May 2007 21:48:26 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some info to share: db_STRESS Benchmark results" }, { "msg_contents": "Well, let's say I want to have compact graphs :)\n\nSo, few comments on graphs:\n - Title: compact name of test and execution conditions\n - X-axis: is always representing time scale\n - Y-axis: is showing a value level (whatever)\n - Legend: gives you a value Name and its metric (KB/s, Op/s, TPS, etc)\n\nTPS: (transactions per second)\n - ALL-tps TR_all: all transactions (READ+WRITE) per second level\n - ALL-tps TR_Read: only READ tps level\n - ALL-tps TR_Write: only WRITE tps level\n\nI must say I was more intrested by databases tuning rather documenting\neach my step... But well, without documenting there is no result :)\nAs well I did not think to compare database initially (don't know why\nbut it's always starting a small war between DB vendors :)), but\nresults were so surprising so I just continued until it was possible\n:))\n\nRgds,\n-Dimitri\n\nOn 5/31/07, Alexander Staubo <[email protected]> wrote:\n> On 5/31/07, Dimitri <[email protected]> wrote:\n> > just wanted to share some benchmark results from one long performance\n> > study comparing MySQL, PostgreSQL and Oracle transactions throughput\n> > and engine scalability on T2000 and V890 (under Solaris).\n>\n> Interesting, if awfully cryptic. The lack of axis labels, the lack of\n> axis normalization, and the fact that you put the graphs for different\n> databases and parameters on separate pages makes it rather hard to\n> compare the various results.\n>\n> Alexander.\n>\n", "msg_date": "Thu, 31 May 2007 22:19:15 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some info to share: db_STRESS Benchmark results" } ]
[ { "msg_contents": "Hi,\nI'm having some problems in performance in a simple select count(id)\nfrom.... I have 700 000 records in one table, and when I do:\n\n# explain select (id) from table_name;\n-[ RECORD 1 ]----------------------------------------------------------------\nQUERY PLAN | Seq Scan on table_name (cost=0.00..8601.30 rows=266730 width=4)\n\nI had created an index for id(btree), but still shows \"Seq Scan\".\nWhat I'm doing wrong?\n\nThanks,\nTyler\n", "msg_date": "Fri, 1 Jun 2007 17:48:56 +0100", "msg_from": "\"Tyler Durden\" <[email protected]>", "msg_from_op": true, "msg_subject": "Seq Scan" }, { "msg_contents": "Tyler Durden wrote:\n> Hi,\n> I'm having some problems in performance in a simple select count(id)\n> from.... I have 700 000 records in one table, and when I do:\n> \n> # explain select (id) from table_name;\n> -[ RECORD 1 \n> ]----------------------------------------------------------------\n> QUERY PLAN | Seq Scan on table_name (cost=0.00..8601.30 rows=266730 \n> width=4)\n> \n> I had created an index for id(btree), but still shows \"Seq Scan\".\n> What I'm doing wrong?\n> \n> Thanks,\n> Tyler\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\nselect count(*) will *always* do a sequential scan, due to the MVCC \narchitecture. See archives for much discussion about this.\n\n-Dan\n", "msg_date": "Fri, 01 Jun 2007 10:59:01 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seq Scan" }, { "msg_contents": "\nOn Jun 1, 2007, at 11:48 , Tyler Durden wrote:\n\n> I'm having some problems in performance in a simple select count(id)\n> from....\n\nUnrestricted count() (i.e., no WHERE clause) will perform a \nsequential scan. If you're looking for faster ways to store table row \ncount information, please search the archives, as this has been \ndiscussed many times before.\n\n> # explain select (id) from table_name;\n> -[ RECORD \n> 1 ]----------------------------------------------------------------\n> QUERY PLAN | Seq Scan on table_name (cost=0.00..8601.30 \n> rows=266730 width=4)\n\nThe query returns the id column value for each row in the table. The \nfastest way to do this is visiting every row., i.e., a sequential \nscan. Using an index would require (1) looking in the index and (2) \nlooking up the corresponding row.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Fri, 1 Jun 2007 12:03:30 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seq Scan" } ]
[ { "msg_contents": "Hello List,\n\nWe've been running PostgreSQL as our web application database for\nalmost a year and it has noticeably slowed down over last few months.\n\nOur current setup and pgsql configuration looks like this:\n\n8.1.2 on Ubuntu 4 on Opteron Dual Core with 2 GBytes RAM. This is a\ndedicated DB server.\n\nWe currently have about 3.5 million rows in 91 tables. Besides the\nrequests coming from the web server, we have batch processes running\nevery 15 minutes from another internal machine that do a lot of\nUPDATE, DELETE and INSERT queries on thousands of rows.\n\nMany of the SELECT queries coming from the web server contain large\nJOINS and aggregate calculations.\n\nWe are running a financial application which is very data intensive\nand calculates a lot on the SQL side.\n\nAnyways, watching the system processes we realized that PostgreSQL is\nonly using about 300 Mbytes for itself. Also, both cores are usually\nmaxed out to 100% usage.\n\nAre we expecting too much from our server?\n\nOur non-default configuration settings are:\n\nmax_connections = 100\nshared_buffers = 17500\nwork_mem = 2048\nmaintenance_work_mem = 40000\nmax_fsm_pages = 35000\nautovacuum = on\n\nWhat can I do to make best use of my db server? Is our configuration\nflawed? Or are we already at a point where we need consider clustering\n/ load balancing?\n\nAny ideas and suggestions are welcome.\n\nRegards,\nGregory Stewart\n", "msg_date": "Fri, 1 Jun 2007 14:40:19 -0500", "msg_from": "\"Gregory Stewart\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL not fully utilizing system resources?" }, { "msg_contents": "\"Gregory Stewart\" <[email protected]> wrote:\n>\n> Hello List,\n> \n> We've been running PostgreSQL as our web application database for\n> almost a year and it has noticeably slowed down over last few months.\n\nJust going to go through your email and address each point inline.\n\nFirst off, you say nothing of your vacuum/analyze schedule other than\nto point out that autovacuum is on. If you run \"vacuum verbose\" on the\ndatabase, what does the output say?\n\n> Our current setup and pgsql configuration looks like this:\n> \n> 8.1.2 on Ubuntu 4 on Opteron Dual Core with 2 GBytes RAM. This is a\n> dedicated DB server.\n\nUpgrade. 8.1.2 is old, you should be running 8.1.9 unless you have a\nspecific reason not to.\n\n> We currently have about 3.5 million rows in 91 tables.\n\nHow large is the dataset? What does pg_database_size tell you? 3.5M\ncould be a lot or a little, depending on the size of each row.\n\n> Besides the\n> requests coming from the web server, we have batch processes running\n> every 15 minutes from another internal machine that do a lot of\n> UPDATE, DELETE and INSERT queries on thousands of rows.\n\nHence my concern that your autovacuum settings may not be aggressive\nenough.\n\n> Many of the SELECT queries coming from the web server contain large\n> JOINS and aggregate calculations.\n> \n> We are running a financial application which is very data intensive\n> and calculates a lot on the SQL side.\n> \n> Anyways, watching the system processes we realized that PostgreSQL is\n> only using about 300 Mbytes for itself.\n\nThat's because you told it to. Below, you allocated 143M of RAM to\nshared buffers. Current thinking is to allocate 1/3 of your RAM to\nshared buffers and start fine-tuning from there. 
If you haven't\nalready determined that less is better for your workload, I'd consider\nbumping shared_buffers up to ~70000.\n\n> Also, both cores are usually\n> maxed out to 100% usage.\n\nMaxed out on CPU usage? What's your IO look like?\n\n> Are we expecting too much from our server?\n\nHard to say without more details.\n\n> Our non-default configuration settings are:\n> \n> max_connections = 100\n> shared_buffers = 17500\n> work_mem = 2048\n\nWhile I can't be sure without more details, you may benefit by\nraising the work_mem value. If you've got 2G of RAM, and you\nallocate 600M to shared_buffers, that leaves 1.4G for work_mem.\nDepending on whether or not the large joins you describe need\nit or not, you may benefit from increasing work_mem.\n\nYour description gives the impression that most of the RAM on\nthis system is completely free. If that's the case, you may be\nconstraining PG without need, but there's not enough information in\nyour post to be sure.\n\n> maintenance_work_mem = 40000\n> max_fsm_pages = 35000\n> autovacuum = on\n> \n> What can I do to make best use of my db server? Is our configuration\n> flawed? Or are we already at a point where we need consider clustering\n> / load balancing?\n\nIt's a tough call. Explain of some problematic queries would be\nhelpful. It is entirely possible that you're doing some intensive\nmath and you're simply going to need more CPU horsepower to get it\ndone any faster, but there's just not enough information in your\npost to know for sure.\n\nPost some explains of some problem queries. Let us know more about\nyour IO load. Give us some snapshots of top under load. Find out\nhow large the database is. Provide the output of vacuum verbose.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n", "msg_date": "Mon, 4 Jun 2007 19:50:41 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL not fully utilizing system resources?" }, { "msg_contents": "On Fri, 1 Jun 2007, Gregory Stewart wrote:\n\n> Is our configuration flawed?\n\nFor sure. The bad news is that you have a good chunk of work to do; the \ngood news is that you should quickly see a dramatic improvement as that \nprogresses.\n\n> Anyways, watching the system processes we realized that PostgreSQL is \n> only using about 300 Mbytes for itself. Also, both cores are usually \n> maxed out to 100% usage. Are we expecting too much from our server?\n\nYour server isn't even running close to its capacity right now. Start by \nfollowing the instructions at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm to tune \nyour system so it actually is using much more of your memory. When you \nrun a manual VACUUM ANALYZE as it recommends, you'll probably discover you \nhave to increase max_fsm_pages. 
The follow-up references at the bottom of \nthat page will lead you to several tuning guides that will go into more \ndetail about other things you might do.\n\nThe first obvious thing is that your extremely low work_mem setting is \nlikely throttling all your joins; read \nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html \nto understand how that setting works, then test some of your queries after \nincreasing it and see how things improve (note that you have to be careful \nmaking comparisons here because if you run exactly the same query twice, \nthe second time will usually be better because the data is cached).\n\nNext, if your settings for checkpoint_settings is at the default, that \nwould be a killer with your workload as well.\n\nThat should get you started. If you still aren't happy with performance \nafter all that, post again with some details about your disk configuration \nand an EXPLAIN plan for something that's moving slowly.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 4 Jun 2007 20:57:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL not fully utilizing system resources?" } ]
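For reference, the advice in this thread can be collected into a starting-point configuration for the 2 GB dedicated 8.1 server being discussed. These numbers are illustrative guesses following the rough guidelines above (about a third of RAM for shared_buffers, larger per-sort memory, more FSM room and checkpoint segments), not measured values, and 8.1 takes them in pages/kilobytes rather than the MB notation introduced in 8.2:

shared_buffers = 70000           # 8 kB pages, roughly 550 MB, as suggested above
work_mem = 16384                 # kB; applies per sort/hash operation, so raise with care
maintenance_work_mem = 131072    # kB, for VACUUM and index builds
max_fsm_pages = 200000           # raise further if VACUUM VERBOSE reports more dead pages
checkpoint_segments = 16         # assumption; the default of 3 throttles write-heavy batches
effective_cache_size = 131072    # 8 kB pages, about 1 GB of expected OS cache

The database size check mentioned above can be run in psql as:

SELECT pg_size_pretty(pg_database_size(current_database()));

and the summary at the end of VACUUM VERBOSE output reports how much free-space-map room is actually needed, which is the figure max_fsm_pages should comfortably exceed.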
[ { "msg_contents": "Hello great gurus of performance:\nOur 'esteemed' Engr group recently informed a customer that in their testing, \nupgrading to 8.2.x improved the performance of our J2EE \napplication \"approximately 20%\", so of course, the customer then tasked me \nwith upgrading them. We dumped their db, removed pgsql, installed the 8.2.4 \nrpms from postgresql.org, did an initdb, and the pg_restored their data. It's \nbeen about a week now, and the customer is complaining that in their testing, \nthey are seeing a 30% /decrease/ in general performance. Of course, our Engr \ngroup is being less than responsive, and I have a feeling all they're doing \nis googling for answers, so I'm turning to this group for actual \nassistance :)\nI'd like to start by examining the poistgresql.conf file. Under 7.4.x, we had \nspent the better part of their 2 years as a customer tuning and tweaking \nsetting. I've attached the file that was in place at the time of upgrade. I \ndid some cursory googling of my own, and quickly realized that enough has \nchanged in v8 that I'm not comfortable making the exact same modification to \ntheir new config file as some options are new, some have gone away, etc. I've \nattached the existing v8 conf file as well. \nI'd really like it if someone could assist me in determining which of the v8 \noptions need adjusted to be 'functionally equivalent' to the v7 file. Right \nnow, my goal is to get the customer back to the previous level of \nperformance, and only then pursue further optimization. I can provide any and \nall information needed, but didn't know what to include initially, so I've \nopted to include the minimal :)\nThe DB server in question does nothing else, is running CentOS 4.5, kernel \n2.6.9-55.ELsmp. Hyperthreading is disabled in the BIOS and there are 2 Xeon \n3.4Ghz cpus. There is 8Gb of RAM in the machine, and another 8Gb of swap.\n\nThank you in advance for any and all assistance you can provide.\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nHandy Guide to Modern Science:\n1. If it's green or it wiggles, it's biology.\n2. If it stinks, it's chemistry.\n3. If it doesn't work, it's physics.", "msg_date": "Sat, 2 Jun 2007 09:13:32 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Sat, Jun 02, 2007 at 09:13:32AM -0400, Douglas J Hunley wrote:\n> Our 'esteemed' Engr group recently informed a customer that in their testing, \n> upgrading to 8.2.x improved the performance of our J2EE \n> application \"approximately 20%\", so of course, the customer then tasked me \n> with upgrading them. We dumped their db, removed pgsql, installed the 8.2.4 \n> rpms from postgresql.org, did an initdb, and the pg_restored their data. It's \n> been about a week now, and the customer is complaining that in their testing, \n> they are seeing a 30% /decrease/ in general performance.\n\nAfter the restore, did you ANALYZE the entire database to update\nthe planner's statistics? Have you enabled autovacuum or are you\notherwise vacuuming and analyzing regularly? What kind of queries\nare slower than desired? If you post an example query and the\nEXPLAIN ANALYZE output then we might be able to see if the slowness\nis due to query plans.\n\nA few differences between the configuration files stand out. 
The\n7.4 file has the following settings:\n\n shared_buffers = 25000\n sort_mem = 15000\n effective_cache_size = 196608\n\nThe 8.2 config has:\n\n #shared_buffers = 32MB\n #work_mem = 1MB\n #effective_cache_size = 128MB\n\nTo be equivalent to the 7.4 config the 8.2 config would need:\n\n shared_buffers = 195MB\n work_mem = 15000kB\n effective_cache_size = 1536MB\n\nWith 8GB of RAM you might try increasing shared_buffers to 400MB - 800MB\n(less if the entire database isn't that big) and effective_cache_size\nto 5GB - 6GB. You might have to increase the kernel's shared memory\nsettings before increasing shared_buffers.\n\nSome of the other settings are the same between the configurations\nbut deserve discussion:\n\n fsync = off\n\nDisabling fsync is dangerous -- are all parties aware of the risk\nand willing to accept it? Has the risk been weighed against the\ncost of upgrading to a faster I/O subsystem? How much performance\nbenefit are you realizing by disabling fsync? What kind of activity\nled to the decision to disable fynsc? Are applications doing\nanything like executing large numbers of insert/update/delete\nstatements outside of a transaction block when they could be done\nin a single transaction?\n\n commit_delay = 20000\n commit_siblings = 3\n\nWhat kind of activity led to the above settings? Are they a guess\nor were they determined empirically? How much benefit are they\nproviding and how did you measure that?\n\n enable_mergejoin = off\n geqo = off\n\nI've occasionally had to tweak planner settings but I prefer to do\nso for specific queries instead of changing them server-wide.\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 2 Jun 2007 09:21:41 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "Douglas J Hunley wrote:\n\nHello\n\n> The DB server in question does nothing else, is running CentOS 4.5, kernel \n> 2.6.9-55.ELsmp. Hyperthreading is disabled in the BIOS and there are 2 Xeon \n> 3.4Ghz cpus. There is 8Gb of RAM in the machine, and another 8Gb of swap.\n> \n\nAfter a very quick read of your configuration files, I found some \nparamaters that need to be change if your server has 8GB of RAM. The \nvalues of these parameters depend a lot of how much RAM you have, what \ntype of database you have (reading vs. 
writing) and how big the database is.\n\nI do not have experience with 8.2.x yet, but with 8.1.x we are using as \ndefaults in out 8GB RAM servers these values in some of the paramaters \n(they are not the only ones, but they are the minimum to change):\n\n25% of RAM for shared_buffers\n2/3 of ram for effective_cache_size\n256MB for maintenance_work_mem\n32-64MB for work_mem\n128 checkpoint_segments\n2 random_page_cost\n\nAnd the most important of all:\n\nfsync should be ***ON*** if you appreciate your data.\n\nIt looks like you are using default values ....\n\n> \n> #shared_buffers = 32MB\t\t\t# min 128kB or max_connections*16kB\n> #work_mem = 1MB\t\t\t\t# min 64kB\n> #maintenance_work_mem = 16MB\t\t# min 1MB\n> fsync = off\t\t\t\t# turns forced synchronization on or off\n> #effective_cache_size = 128MB\n[........................]\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/>\n", "msg_date": "Sat, 02 Jun 2007 17:21:54 +0200", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n then 7.4.x" }, { "msg_contents": "Douglas J Hunley <[email protected]> writes:\n> ... We dumped their db, removed pgsql, installed the 8.2.4 \n> rpms from postgresql.org, did an initdb, and the pg_restored their data. It's \n> been about a week now, and the customer is complaining that in their testing, \n> they are seeing a 30% /decrease/ in general performance.\n\nWell, you've definitely blown it on transferring the config-file\nsettings --- a quick look says that shared_buffers, work_mem, and\nmax_fsm_pages are all still default in the 8.2 config file.\nDon't be frightened off by the \"KB/MB\" usages in the 8.2 file ---\nyou can still write \"shared_buffers = 25000\" if you'd rather specify\nit in number of buffers than in megabytes.\n\nThere are some things you *did* transfer that I find pretty\nquestionable, like \"enable_mergejoin = false\". There are very major\ndifferences between the 7.4 and 8.2 planners, so you need to revisit\nthe tests that led you to do that.\n\nAnother thing that seems strange is that the 8.2 config file does not\nseem to have been processed by initdb --- or did you explicitly comment\nout the settings it made?\n\nAnother thing to check is whether you ANALYZEd the new database after\nloading data; a pg_dump/reload sequence doesn't do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Jun 2007 11:25:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x " }, { "msg_contents": "Michael Fuhr wrote:\n> On Sat, Jun 02, 2007 at 09:13:32AM -0400, Douglas J Hunley wrote:\n>> Our 'esteemed' Engr group recently informed a customer that in their testing, \n>> upgrading to 8.2.x improved the performance of our J2EE \n>> application \"approximately 20%\", so of course, the customer then tasked me \n>> with upgrading them. We dumped their db, removed pgsql, installed the 8.2.4 \n>> rpms from postgresql.org, did an initdb, and the pg_restored their data. It's \n>> been about a week now, and the customer is complaining that in their testing, \n>> they are seeing a 30% /decrease/ in general performance.\n> \n> After the restore, did you ANALYZE the entire database to update\n> the planner's statistics? 
Have you enabled autovacuum or are you\n> otherwise vacuuming and analyzing regularly? What kind of queries\n> are slower than desired? If you post an example query and the\n> EXPLAIN ANALYZE output then we might be able to see if the slowness\n> is due to query plans.\n> \n> A few differences between the configuration files stand out. The\n> 7.4 file has the following settings:\n> \n> shared_buffers = 25000\n> sort_mem = 15000\n> effective_cache_size = 196608\n> \n> The 8.2 config has:\n> \n> #shared_buffers = 32MB\n> #work_mem = 1MB\n> #effective_cache_size = 128MB\n> \n> To be equivalent to the 7.4 config the 8.2 config would need:\n> \n> shared_buffers = 195MB\n> work_mem = 15000kB\n> effective_cache_size = 1536MB\n> \n> With 8GB of RAM you might try increasing shared_buffers to 400MB - 800MB\n> (less if the entire database isn't that big) and effective_cache_size\n> to 5GB - 6GB. You might have to increase the kernel's shared memory\n> settings before increasing shared_buffers.\n\nsome testing here has shown that while it is usually a good idea to set\neffective_cache_size rather optimistically in versions <8.2 it is\nadvisable to make it accurate or even a bit less than that in 8.2 and up.\n\n\nStefan\n", "msg_date": "Sat, 02 Jun 2007 17:31:02 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n then 7.4.x" }, { "msg_contents": "On Sat, Jun 02, 2007 at 09:13:32AM -0400, Douglas J Hunley wrote:\n> Hello great gurus of performance:\n> Our 'esteemed' Engr group recently informed a customer that in their testing, \n> upgrading to 8.2.x improved the performance of our J2EE \n> application \"approximately 20%\", so of course, the customer then tasked me \n> with upgrading them. We dumped their db, removed pgsql, installed the 8.2.4 \n> rpms from postgresql.org, did an initdb, and the pg_restored their data. It's \n> been about a week now, and the customer is complaining that in their testing, \n> they are seeing a 30% /decrease/ in general performance. Of course, our Engr \n> group is being less than responsive, and I have a feeling all they're doing \n> is googling for answers, so I'm turning to this group for actual \n> assistance :)\n> I'd like to start by examining the poistgresql.conf file. Under 7.4.x, we had \n> spent the better part of their 2 years as a customer tuning and tweaking \n> setting. I've attached the file that was in place at the time of upgrade. I \n> did some cursory googling of my own, and quickly realized that enough has \n> changed in v8 that I'm not comfortable making the exact same modification to \n> their new config file as some options are new, some have gone away, etc. I've \n> attached the existing v8 conf file as well. \n> I'd really like it if someone could assist me in determining which of the v8 \n> options need adjusted to be 'functionally equivalent' to the v7 file. Right \n> now, my goal is to get the customer back to the previous level of \n> performance, and only then pursue further optimization. I can provide any and \n> all information needed, but didn't know what to include initially, so I've \n> opted to include the minimal :)\n> The DB server in question does nothing else, is running CentOS 4.5, kernel \n> 2.6.9-55.ELsmp. Hyperthreading is disabled in the BIOS and there are 2 Xeon \n> 3.4Ghz cpus. 
There is 8Gb of RAM in the machine, and another 8Gb of swap.\n> \n> Thank you in advance for any and all assistance you can provide.\n> -- \n> Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\n> http://doug.hunley.homeip.net\n> \n\nDouglas,\n\nIf these are the current config files, it is no wonder that the performance\nis worse. Here are the things that need to be changed right from the start.\nThe old 7.x is on the left and the 8.2 value is on the right. Make them\nthe same to start and see how it looks then.\n\nsetting 7.x current 8.2\n------------------------------------------------------\nshared_buffers = 25000 / 32MB (=3906)\nsort_mem/work_mem = 15000/ 1MB (=122)\nvacuum_mem/maint_work_mem = 100000 / 16MB (=1950)\neffective_cache = 196608 / 128MB (=15600) should start between 200k-500k\n\nThese changes alone should get you back to the performance point you are\nexpecting. It would also be worth re-evaluating whether or not you should\nbe disabling enable_mergehashjoin in general, and not just for specific\nproblem queries. I would also tend to start with an effective_cache at\nthe higher end on a dedicated DB server. Good luck with your tuning. If\nthe 8.2 config file you posted is the one that has been in use, these few\nchanges will restore your performance and then some.\n\nKen\n", "msg_date": "Sat, 2 Jun 2007 10:45:22 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Saturday 02 June 2007 11:21:41 Michael Fuhr wrote:\n> After the restore, did you ANALYZE the entire database to update\n> the planner's statistics? Have you enabled autovacuum or are you\n> otherwise vacuuming and analyzing regularly? What kind of queries\n> are slower than desired? If you post an example query and the\n> EXPLAIN ANALYZE output then we might be able to see if the slowness\n> is due to query plans.\n\nI forgot to mention that. Yes, we did:\nvacuumdb -a -f -v -z\n\nWe have not yet turned on autovacuum. That was next on our list, and then \ncustomer started in w/ the performance. We are doing an 'analyze table' \nfollowed by 'vacuum table' on a periodic basis, but I'll have to wait till \nI'm in the office on Monday to see what that schedule is (customer only \nallows us to VPN from work)\n\n>\n> A few differences between the configuration files stand out. The\n> 7.4 file has the following settings:\n>\n> shared_buffers = 25000\n> sort_mem = 15000\n> effective_cache_size = 196608\n>\n> The 8.2 config has:\n>\n> #shared_buffers = 32MB\n> #work_mem = 1MB\n> #effective_cache_size = 128MB\n>\n> To be equivalent to the 7.4 config the 8.2 config would need:\n>\n> shared_buffers = 195MB\n> work_mem = 15000kB\n> effective_cache_size = 1536MB\n>\n> With 8GB of RAM you might try increasing shared_buffers to 400MB - 800MB\n> (less if the entire database isn't that big) and effective_cache_size\n> to 5GB - 6GB. You might have to increase the kernel's shared memory\n> settings before increasing shared_buffers.\n>\n\nWe have the following in sysctl.conf:\nkernel.shmmax=2147483648\nkernal.shmall=2097152\nkernel.sem = 250 32000 100 128\n\nwhich should be sufficient, no?\n\n> Some of the other settings are the same between the configurations\n> but deserve discussion:\n>\n> fsync = off\n>\n> Disabling fsync is dangerous -- are all parties aware of the risk\n> and willing to accept it? 
Has the risk been weighed against the\n> cost of upgrading to a faster I/O subsystem? How much performance\n> benefit are you realizing by disabling fsync? What kind of activity\n> led to the decision to disable fynsc? Are applications doing\n> anything like executing large numbers of insert/update/delete\n> statements outside of a transaction block when they could be done\n> in a single transaction?\n\nYes, they're aware. This is a temporary setting while they order upgraded SAN \ndevices. Currently, the I/O on the boxes is horrific.\n\n>\n> commit_delay = 20000\n> commit_siblings = 3\n>\n> What kind of activity led to the above settings? Are they a guess\n> or were they determined empirically? How much benefit are they\n> providing and how did you measure that?\n\nThose are based on a thread their (non-pgsql) DBA found online. I'm perfectly \nwilling to discount him if so advised.\n\n>\n> enable_mergejoin = off\n> geqo = off\n>\n> I've occasionally had to tweak planner settings but I prefer to do\n> so for specific queries instead of changing them server-wide.\n\nI concur. Unfortunately, our Engr group don't actually write the SQL for the \napp. It's generated, and is done in such a fashion as to work on all our \nsupported dbs (pgsql, oracle, mysql). \n\nThanks a ton for the input thus far\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nAnything worth shooting is worth shooting twice. Ammo is cheap. Life is \nexpensive.\n", "msg_date": "Sun, 3 Jun 2007 13:24:15 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Saturday 02 June 2007 11:25:11 Tom Lane wrote:\n> Another thing that seems strange is that the 8.2 config file does not\n> seem to have been processed by initdb --- or did you explicitly comment\n> out the settings it made?\n\nI don't understand this comment. You are saying 'initdb' will make changes to \nthe file? The file I sent is the working copy from the machine in question.\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n\"Does it worry you that you don't talk any kind of sense?\"\n", "msg_date": "Sun, 3 Jun 2007 13:29:07 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Sun, 3 Jun 2007, Douglas J Hunley wrote:\n\n>> commit_delay = 20000\n>> commit_siblings = 3\n> Those are based on a thread their (non-pgsql) DBA found online. I'm perfectly\n> willing to discount him if so advised.\n\nThose likely came indirectly from the otherwise useful recommendations at \nhttp://www.wlug.org.nz/PostgreSQLNotes , that's the first place I saw that \nparticular combination recommended at. The fact that you mention a thread \nmakes me guess your DBA found \nhttps://kb.vasoftware.com/index.php?x=&mod_id=2&id=20 , which is a \ncompletely bogus set of suggestions. 
Anyone who gives out a blanket \nrecommendation for any PostgreSQL performance parameter without asking \nquestions first about things like your memory and your disk setup doesn't \nreally know what they're doing, and I'd suggest discounting the entirety \nof that advice.\n\nThose commit_ values are completely wrong for many workloads; they're \nintroducing a 20ms delay into writes as soon as there are more then 3 \nclients writing things at once. If someone just took those values from a \nweb page without actually testing them out, you'd be better off turning \nboth values back to the defaults (which disables the feature) and waiting \nuntil you have some time to correctly determine useful settings for your \nsystem.\n\nNote that with fsync=off, I don't think that's actually doing anything \nright now so it's kind of irrelevant to get excited about; should be \naddressed before fsync gets turned back on though.\n\nAlso: some of the recommendations you've been getting for shared_buffers \nare on the low side as far as I'm concerned. You should consider maxing \nthat value out at 262143 (2GB of RAM) on your server with 8GB of RAM \navailable, then putting effective_cache_size at 5GB or so. That may \nrequire just a bit more upward tweaking of your kernel parameters to \nsupport.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 3 Jun 2007 18:30:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n then 7.4.x" }, { "msg_contents": "Douglas J Hunley <[email protected]> writes:\n> On Saturday 02 June 2007 11:25:11 Tom Lane wrote:\n>> Another thing that seems strange is that the 8.2 config file does not\n>> seem to have been processed by initdb --- or did you explicitly comment\n>> out the settings it made?\n\n> I don't understand this comment. You are saying 'initdb' will make changes to\n> the file? The file I sent is the working copy from the machine in question.\n\nYeah --- in a normal installation, initdb will provide un-commented\nentries for these postgresql.conf parameters:\n\nmax_connections\nshared_buffers\nmax_fsm_pages\ndatestyle\nlc_messages\nlc_monetary\nlc_numeric\nlc_time\n\n(The first three are set dependent on SHMMAX probing, the others\ndependent on locale.) Your conf file doesn't seem to have been through\nthat autoconfiguration step, which suggests someone poking at things\nthey should have left alone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Jun 2007 23:17:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x " }, { "msg_contents": "On Sun, 2007-06-03 at 23:17 -0400, Tom Lane wrote:\n> Douglas J Hunley <[email protected]> writes:\n> > On Saturday 02 June 2007 11:25:11 Tom Lane wrote:\n> >> Another thing that seems strange is that the 8.2 config file does not\n> >> seem to have been processed by initdb --- or did you explicitly comment\n> >> out the settings it made?\n> \n> > I don't understand this comment. You are saying 'initdb' will make changes to\n> > the file? 
The file I sent is the working copy from the machine in question.\n> \n> Yeah --- in a normal installation, initdb will provide un-commented\n> entries for these postgresql.conf parameters:\n> \n> max_connections\n> shared_buffers\n> max_fsm_pages\n> datestyle\n> lc_messages\n> lc_monetary\n> lc_numeric\n> lc_time\n> \n> (The first three are set dependent on SHMMAX probing, the others\n> dependent on locale.) Your conf file doesn't seem to have been through\n> that autoconfiguration step, which suggests someone poking at things\n> they should have left alone.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\nA WAG, but perhaps the new conf file was overwritten after installation\nwith the one from the 'old' installation '..because that's the\nconfiguration that we've already tweaked and was working...'\n", "msg_date": "Mon, 04 Jun 2007 10:28:45 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n\tthen 7.4.x" }, { "msg_contents": "On Sunday 03 June 2007 18:30:17 Greg Smith wrote:\n> To be equivalent to the 7.4 config the 8.2 config would need:\n\nI've taken all the wonderful advise offered thus far, and put the attached \ninto use. Our initial testing shows a 66% improvement in page load times for \nour app. I have the customer beating on things and noting anything that is \nstill slow.\n\nOn a side note, is there any real benefit to using autovacuum over a \nperiodically scheduled vacuum? I ask because we have the latter already coded \nup and cron'd and it seems to keep things fairly optimized.\n\nBTW, I'm on the list, so there's no need to reply direct. I can get the \nreplies from the list\n\nThanks again for everyone's assistance thus far. Y'all rock!\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nI feel like I'm diagonally parked in a parallel universe...", "msg_date": "Mon, 4 Jun 2007 13:05:12 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "Douglas J Hunley wrote:\n\n> On a side note, is there any real benefit to using autovacuum over a \n> periodically scheduled vacuum? I ask because we have the latter already coded \n> up and cron'd and it seems to keep things fairly optimized.\n\nNo, not really. Maybe autovacuum could get to specific highly-updated\ntables quickier than the cron job, or slow down when there's no\nactivity; but your current setup is good enough for you there's no\nreason to change.\n\n> BTW, I'm on the list, so there's no need to reply direct. I can get the \n> replies from the list\n\nHuh, sorry, this is just the customary way to use these lists.\nPersonally, I prefer to get several copies of each message and have my\nsoftware (procmail) deliver only one to me discarding the duplicates.\nThat way, if one is lost or takes long to get home, I don't even notice\nit (it used to happen a lot on the lists). 
Look at the \"eliminatecc\"\noption in the Majordomo user web pages.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 4 Jun 2007 14:08:13 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Saturday 02 June 2007 11:21:41 Michael Fuhr wrote:\n> If you post an example query and the\n> EXPLAIN ANALYZE output then we might be able to see if the slowness\n> is due to query plans.\n\nQuery 1:\n(SELECT\n project.path AS rbac_project_path_string,\n role_operation.resource_name AS rbac_resource_name,\n role_operation.resource_value AS rbac_resource_value\nFROM\n project project,\n role role,\n role_default_user role_default_user,\n role_operation role_operation\nWHERE\n role.id=role_default_user.role_id\n AND role_default_user.project_id=project.id\n AND role.id=role_operation.role_id\n AND role.is_deleted=false\n AND role_operation.object_type_id='Scm.Repository'\n AND role_operation.operation_category='use'\n AND role_operation.operation_name='access'\n AND project.path='projects.barnes_and_nobles_college_bookse_3'\n AND project.is_deleted=false\n AND role_default_user.default_user_class_id='1'\n)\nUNION\n(SELECT\n project.path AS rbac_project_path_string,\n role_operation.resource_name AS rbac_resource_name,\n role_operation.resource_value AS rbac_resource_value\nFROM\n project project,\n role role,\n role_default_user role_default_user,\n role_operation role_operation\nWHERE\n role.id=role_default_user.role_id\n AND role_default_user.project_id=project.id\n AND role.id=role_operation.role_id\n AND role.is_deleted=false\n AND role_operation.object_type_id='Scm.Repository'\n AND role_operation.operation_category='use'\n AND role_operation.operation_name='access'\n AND project.path='projects.barnes_and_nobles_college_bookse_3'\n AND project.is_deleted=false\n AND role_default_user.default_user_class_id='2'\n)\nUNION\n(SELECT\n project.path AS rbac_project_path_string,\n role_operation.resource_name AS rbac_resource_name,\n role_operation.resource_value AS rbac_resource_value\nFROM\n project project,\n role role,\n role_default_user role_default_user,\n role_operation role_operation\nWHERE\n role.id=role_default_user.role_id\n AND role_default_user.project_id=project.id\n AND role.id=role_operation.role_id\n AND role.is_deleted=false\n AND role_operation.object_type_id='Scm.Repository'\n AND role_operation.operation_category='use'\n AND role_operation.operation_name='access'\n AND project.path='projects.barnes_and_nobles_college_bookse_3'\n AND project.is_deleted=false\n AND role_default_user.default_user_class_id='3'\n)\nUNION\n(SELECT\n project.path AS rbac_project_path_string,\n role_operation.resource_name AS rbac_resource_name,\n role_operation.resource_value AS rbac_resource_value\nFROM\n sfuser sfuser,\n project project,\n role role,\n projectmembership projectmembership,\n role_default_user role_default_user,\n role_operation role_operation\nWHERE\n role.id=role_default_user.role_id\n AND role_default_user.project_id=project.id\n AND role.id=role_operation.role_id\n AND role.is_deleted=false\n AND role_operation.object_type_id='Scm.Repository'\n AND role_operation.operation_category='use'\n AND role_operation.operation_name='access'\n AND project.path='projects.barnes_and_nobles_college_bookse_3'\n AND project.is_deleted=false\n AND role_default_user.default_user_class_id='4'\n AND 
projectmembership.member_id=sfuser.id\n AND role_default_user.project_id=projectmembership.project_id\n AND sfuser.username='rtrejo'\n)\nUNION\n(SELECT\n project.path AS rbac_project_path_string,\n role_operation.resource_name AS rbac_resource_name,\n role_operation.resource_value AS rbac_resource_value\nFROM\n sfuser sfuser,\n project project,\n role role,\n role_user role_user,\n role_operation role_operation\nWHERE\n role.id=role_user.role_id\n AND role_user.project_id=project.id\n AND role.id=role_operation.role_id\n AND role.is_deleted=false\n AND role_operation.object_type_id='Scm.Repository'\n AND role_operation.operation_category='use'\n AND role_operation.operation_name='access'\n AND role_user.user_id=sfuser.id\n AND project.path='projects.barnes_and_nobles_college_bookse_3'\n AND project.is_deleted=false\n AND sfuser.username='rtrejo'\n);\n\ntake 0m1.693s according to 'time'\nExplain attached as explain1\n\nQuery 2:\nSELECT\n artifact.id AS id,\n artifact.priority AS priority,\n project.path AS projectPathString,\n project.title AS projectTitle,\n folder.project_id AS projectId,\n folder.path AS folderPathString,\n folder.title AS folderTitle,\n item.folder_id AS folderId,\n item.title AS title,\n item.name AS name,\n artifact.description AS description,\n field_value.value AS artifactGroup,\n field_value2.value AS status,\n field_value2.value_class AS statusClass,\n field_value3.value AS category,\n field_value4.value AS customer,\n sfuser.username AS submittedByUsername,\n sfuser.full_name AS submittedByFullname,\n item.date_created AS submittedDate,\n artifact.close_date AS closeDate,\n sfuser2.username AS assignedToUsername,\n sfuser2.full_name AS assignedToFullname,\n item.date_last_modified AS lastModifiedDate,\n artifact.estimated_hours AS estimatedHours,\n artifact.actual_hours AS actualHours,\n item.version AS version\nFROM\n relationship relationship,\n sfuser sfuser,\n sfuser sfuser2,\n field_value field_value3,\n item item,\n project project,\n field_value field_value2,\n field_value field_value,\n artifact artifact,\n folder folder,\n field_value field_value4\nWHERE\n artifact.id=item.id\n AND item.folder_id=folder.id\n AND folder.project_id=project.id\n AND artifact.group_fv=field_value.id\n AND artifact.status_fv=field_value2.id\n AND artifact.category_fv=field_value3.id\n AND artifact.customer_fv=field_value4.id\n AND item.created_by_id=sfuser.id\n AND relationship.is_deleted=false\n AND relationship.relationship_type_name='ArtifactAssignment'\n AND relationship.origin_id=sfuser2.id\n AND artifact.id=relationship.target_id\n AND item.is_deleted=false\n AND ((project.path='projects.union_gas_gdar_ebt' AND ((folder.path IN \n('tracker.cutover_tasks', 'tracker.peer_review_tracker', 'tracker.tars_0', 'tracker.reviews', 'tracker.defects', 'tracker.tars', 'tracker.database_change_requests')) \nOR folder.path LIKE 'tracker.cutover_tasks.%' OR folder.path \nLIKE 'tracker.peer_review_tracker.%' OR folder.path LIKE 'tracker.tars_0.%' \nOR folder.path LIKE 'tracker.reviews.%' OR folder.path LIKE 'tracker.defects.\n%' OR folder.path LIKE 'tracker.tars.%' OR folder.path \nLIKE 'tracker.database_change_requests.%')))\n AND folder.project_id='proj1775'\n AND item.folder_id='tracker11923'\n AND folder.path='tracker.defects'\n AND (sfuser2.username='nobody' AND field_value2.value_class='Open');\n\ntakes 0m9.506s according to time.. 
it's attached as explain2\n\nTIA, again\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nIt's not the pace of life that concerns me, it's the sudden stop at the end.", "msg_date": "Mon, 4 Jun 2007 16:01:08 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "\nThose plans look like they have a lot of casts to text in them. How have you\ndefined your indexes? Are your id columns really text?\n\nAnd you don't have a 7.4 install around to compare the plans do you?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 04 Jun 2007 22:11:23 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "Gregory Stark wrote:\n> Those plans look like they have a lot of casts to text in them. How have you\n> defined your indexes? Are your id columns really text?\n\nAnd did you use the same encoding and locale? Text operations on \nmultibyte encodings are much more expensive.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 04 Jun 2007 22:17:03 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n then 7.4.x" }, { "msg_contents": "On Monday 04 June 2007 17:17:03 Heikki Linnakangas wrote:\n> And did you use the same encoding and locale? Text operations on\n> multibyte encodings are much more expensive.\n\nThe db was created as:\ncreatedb -E UNICODE -O <user> <dbname>\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nLinux is the answer, now what was your question?\n", "msg_date": "Tue, 5 Jun 2007 10:25:14 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Monday 04 June 2007 17:11:23 Gregory Stark wrote:\n> Those plans look like they have a lot of casts to text in them. How have\n> you defined your indexes? 
Are your id columns really text?\n\nproject table:\nIndexes:\n \"project_pk\" PRIMARY KEY, btree (id)\n \"project_path\" UNIQUE, btree (path)\n\nrole table:\nIndexes:\n \"role_pk\" PRIMARY KEY, btree (id)\n\nrole_default_user table:\nIndexes:\n \"role_def_user_pk\" PRIMARY KEY, btree (id)\n \"role_def_u_prj_idx\" UNIQUE, btree (role_id, default_user_class_id, \nproject_id)\n\nrole_operation table:\nIndexes:\n \"role_operation_pk\" PRIMARY KEY, btree (id)\n \"role_oper_obj_oper\" btree (object_type_id, operation_category, \noperation_name)\n \"role_oper_role_id\" btree (role_id)\n\nsfuser table:\nIndexes:\n \"sfuser_pk\" PRIMARY KEY, btree (id)\n \"sfuser_username\" UNIQUE, btree (username)\n\nprojectmembership table:\nIndexes:\n \"pjmb_pk\" PRIMARY KEY, btree (id)\n \"pjmb_projmember\" UNIQUE, btree (project_id, member_id)\n \"pjmb_member\" btree (member_id)\n\nrelationship table:\nIndexes:\n \"relationship_pk\" PRIMARY KEY, btree (id)\n \"relation_origin\" btree (origin_id)\n \"relation_target\" btree (target_id)\n \"relation_type\" btree (relationship_type_name)\n\nfield_value table:\nIndexes:\n \"field_value_pk\" PRIMARY KEY, btree (id)\n \"f_val_fid_val_idx\" UNIQUE, btree (field_id, value)\n \"field_class_idx\" btree (value_class)\n \"field_value_idx\" btree (value)\n\nitem table:\nIndexes:\n \"item_pk\" PRIMARY KEY, btree (id)\n \"item_created_by_id\" btree (created_by_id)\n \"item_folder\" btree (folder_id)\n \"item_name\" btree (name)\n\nand yes, the 'id' column is always: character varying type\n\n> And you don't have a 7.4 install around to compare the plans do you?\n\nI have a 7.3.19 db, if that would be useful\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nWhose cruel idea was it for the word \"lisp\" to have an \"s\" in it?\n", "msg_date": "Tue, 5 Jun 2007 10:34:04 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "Find the attached Postgres.conf file. I am using 8.1 Version in Lab.\n\nI haven't done any changes to this conf file to improve the performance.\n\n\n\nWhat are the attributes needs to be modified in the conf file to improve the\nperformance?\n\n\n\nI am looking forward for your assistance.\n\nRegards,\nHanu\n\n\nOn 6/2/07, Douglas J Hunley <[email protected]> wrote:\n>\n> Hello great gurus of performance:\n> Our 'esteemed' Engr group recently informed a customer that in their\n> testing,\n> upgrading to 8.2.x improved the performance of our J2EE\n> application \"approximately 20%\", so of course, the customer then tasked me\n> with upgrading them. We dumped their db, removed pgsql, installed the\n> 8.2.4\n> rpms from postgresql.org, did an initdb, and the pg_restored their data.\n> It's\n> been about a week now, and the customer is complaining that in their\n> testing,\n> they are seeing a 30% /decrease/ in general performance. Of course, our\n> Engr\n> group is being less than responsive, and I have a feeling all they're\n> doing\n> is googling for answers, so I'm turning to this group for actual\n> assistance :)\n> I'd like to start by examining the poistgresql.conf file. Under 7.4.x, we\n> had\n> spent the better part of their 2 years as a customer tuning and tweaking\n> setting. 
I've attached the file that was in place at the time of upgrade.\n> I\n> did some cursory googling of my own, and quickly realized that enough has\n> changed in v8 that I'm not comfortable making the exact same modification\n> to\n> their new config file as some options are new, some have gone away, etc.\n> I've\n> attached the existing v8 conf file as well.\n> I'd really like it if someone could assist me in determining which of the\n> v8\n> options need adjusted to be 'functionally equivalent' to the v7 file.\n> Right\n> now, my goal is to get the customer back to the previous level of\n> performance, and only then pursue further optimization. I can provide any\n> and\n> all information needed, but didn't know what to include initially, so I've\n> opted to include the minimal :)\n> The DB server in question does nothing else, is running CentOS 4.5, kernel\n> 2.6.9-55.ELsmp. Hyperthreading is disabled in the BIOS and there are 2\n> Xeon\n> 3.4Ghz cpus. There is 8Gb of RAM in the machine, and another 8Gb of swap.\n>\n> Thank you in advance for any and all assistance you can provide.\n> --\n> Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\n> http://doug.hunley.homeip.net\n>\n> Handy Guide to Modern Science:\n> 1. If it's green or it wiggles, it's biology.\n> 2. If it stinks, it's chemistry.\n> 3. If it doesn't work, it's physics.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n>\n\n\n-- \nWith best regards,\nHanumanthappa Kurubar\nMobile: 98 801 800 65", "msg_date": "Tue, 5 Jun 2007 13:01:24 -0400", "msg_from": "\"Hanu Kurubar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Tuesday 05 June 2007 10:34:04 Douglas J Hunley wrote:\n> On Monday 04 June 2007 17:11:23 Gregory Stark wrote:\n> > Those plans look like they have a lot of casts to text in them. How have\n> > you defined your indexes? 
Are your id columns really text?\n>\n> project table:\n> Indexes:\n> \"project_pk\" PRIMARY KEY, btree (id)\n> \"project_path\" UNIQUE, btree (path)\n>\n> role table:\n> Indexes:\n> \"role_pk\" PRIMARY KEY, btree (id)\n>\n> role_default_user table:\n> Indexes:\n> \"role_def_user_pk\" PRIMARY KEY, btree (id)\n> \"role_def_u_prj_idx\" UNIQUE, btree (role_id, default_user_class_id,\n> project_id)\n>\n> role_operation table:\n> Indexes:\n> \"role_operation_pk\" PRIMARY KEY, btree (id)\n> \"role_oper_obj_oper\" btree (object_type_id, operation_category,\n> operation_name)\n> \"role_oper_role_id\" btree (role_id)\n>\n> sfuser table:\n> Indexes:\n> \"sfuser_pk\" PRIMARY KEY, btree (id)\n> \"sfuser_username\" UNIQUE, btree (username)\n>\n> projectmembership table:\n> Indexes:\n> \"pjmb_pk\" PRIMARY KEY, btree (id)\n> \"pjmb_projmember\" UNIQUE, btree (project_id, member_id)\n> \"pjmb_member\" btree (member_id)\n>\n> relationship table:\n> Indexes:\n> \"relationship_pk\" PRIMARY KEY, btree (id)\n> \"relation_origin\" btree (origin_id)\n> \"relation_target\" btree (target_id)\n> \"relation_type\" btree (relationship_type_name)\n>\n> field_value table:\n> Indexes:\n> \"field_value_pk\" PRIMARY KEY, btree (id)\n> \"f_val_fid_val_idx\" UNIQUE, btree (field_id, value)\n> \"field_class_idx\" btree (value_class)\n> \"field_value_idx\" btree (value)\n>\n> item table:\n> Indexes:\n> \"item_pk\" PRIMARY KEY, btree (id)\n> \"item_created_by_id\" btree (created_by_id)\n> \"item_folder\" btree (folder_id)\n> \"item_name\" btree (name)\n>\n> and yes, the 'id' column is always: character varying type\n>\n> > And you don't have a 7.4 install around to compare the plans do you?\n>\n> I have a 7.3.19 db, if that would be useful\n\nAny insight given the above?\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n\"It is our moral duty to corrupt the young\"\n", "msg_date": "Wed, 6 Jun 2007 10:35:19 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" } ]
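A side note on the ::text casts visible in those plans: varchar columns have no
comparison operators of their own, so the planner prints varchar comparisons as
casts to text. That cast is binary-compatible and does not by itself keep an
index from being used. A quick way to confirm, reusing names from the schema
above (a sketch, not a definitive diagnosis):

EXPLAIN ANALYZE
SELECT id FROM sfuser WHERE username = 'rtrejo';

-- An "Index Scan using sfuser_username" here means the varchar index is fine;
-- a Seq Scan would be the point to start looking at statistics, locale, or
-- operator mismatches instead.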
[ { "msg_contents": "When you initdb, a config file is edited from the template by initdb to reflect your machine config.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tDouglas J Hunley [mailto:[email protected]]\nSent:\tSunday, June 03, 2007 02:30 PM Eastern Standard Time\nTo:\tTom Lane\nCc:\[email protected]\nSubject:\tRe: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\nOn Saturday 02 June 2007 11:25:11 Tom Lane wrote:\n> Another thing that seems strange is that the 8.2 config file does not\n> seem to have been processed by initdb --- or did you explicitly comment\n> out the settings it made?\n\nI don't understand this comment. You are saying 'initdb' will make changes to \nthe file? The file I sent is the working copy from the machine in question.\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n\"Does it worry you that you don't talk any kind of sense?\"\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n\nRe: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\n\n\nWhen you initdb, a config file is edited from the template by initdb to reflect your machine config.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Douglas J Hunley [mailto:[email protected]]\nSent:   Sunday, June 03, 2007 02:30 PM Eastern Standard Time\nTo:     Tom Lane\nCc:     [email protected]\nSubject:        Re: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\nOn Saturday 02 June 2007 11:25:11 Tom Lane wrote:\n> Another thing that seems strange is that the 8.2 config file does not\n> seem to have been processed by initdb --- or did you explicitly comment\n> out the settings it made?\n\nI don't understand this comment. You are saying 'initdb' will make changes to\nthe file? The file I sent is the working copy from the machine in question.\n\n--\nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n\"Does it worry you that you don't talk any kind of sense?\"\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n               http://www.postgresql.org/docs/faq", "msg_date": "Sun, 3 Jun 2007 16:39:51 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "On Sunday 03 June 2007 16:39:51 Luke Lonergan wrote:\n> When you initdb, a config file is edited from the template by initdb to\n> reflect your machine config.\n\nI didn't realize that. I'll have to harass the rest of the team to see if \nsomeone overwrote that file or not. In the interim, I did an 'initdb' to \nanother location on the same box and then copied those values into the config \nfile. 
That's cool to do, I assume?\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nCowering in a closet is starting to seem like a reasonable plan.\n", "msg_date": "Mon, 4 Jun 2007 08:40:39 -0400", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" }, { "msg_contents": "Douglas J Hunley wrote:\n> On Sunday 03 June 2007 16:39:51 Luke Lonergan wrote:\n>> When you initdb, a config file is edited from the template by initdb to\n>> reflect your machine config.\n> \n> I didn't realize that. I'll have to harass the rest of the team to see if \n> someone overwrote that file or not. In the interim, I did an 'initdb' to \n> another location on the same box and then copied those values into the config \n> file. That's cool to do, I assume?\n\nYeah, that's ok.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 04 Jun 2007 13:44:36 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance\n then 7.4.x" } ]
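A related check, since the question was whether initdb's edits survived: the
pg_settings view can show where each value came from. A small sketch (the
source column should read 'configuration file' for anything initdb -- or a
person -- wrote into postgresql.conf):

SELECT name, setting, source
  FROM pg_settings
 WHERE name IN ('max_connections', 'shared_buffers', 'max_fsm_pages');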
[ { "msg_contents": "Absolutely!\n\nA summary of relevant comments so far are:\n- enable-mergejoin\n- shared-buffers\n- fsync\n\nAnother to consider if you use indexes is random-page-cost.\n\nWhat would be helpful is if you could identify a slow query and post the explain analyze here.\n\nThe concurrent performance of many users should just be faster with 8.2, so I'd think it's a problem with plans.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tDouglas J Hunley [mailto:[email protected]]\nSent:\tMonday, June 04, 2007 08:40 AM Eastern Standard Time\nTo:\tLuke Lonergan\nCc:\tTom Lane; [email protected]\nSubject:\tRe: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\nOn Sunday 03 June 2007 16:39:51 Luke Lonergan wrote:\n> When you initdb, a config file is edited from the template by initdb to\n> reflect your machine config.\n\nI didn't realize that. I'll have to harass the rest of the team to see if \nsomeone overwrote that file or not. In the interim, I did an 'initdb' to \nanother location on the same box and then copied those values into the config \nfile. That's cool to do, I assume?\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nCowering in a closet is starting to seem like a reasonable plan.\n\n\n\n\nRe: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\n\n\nAbsolutely!\n\nA summary of relevant comments so far are:\n- enable-mergejoin\n- shared-buffers\n- fsync\n\nAnother to consider if you use indexes is random-page-cost.\n\nWhat would be helpful is if you could identify a slow query and post the explain analyze here.\n\nThe concurrent performance of many users should just be faster with 8.2, so I'd think it's a problem with plans.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Douglas J Hunley [mailto:[email protected]]\nSent:   Monday, June 04, 2007 08:40 AM Eastern Standard Time\nTo:     Luke Lonergan\nCc:     Tom Lane; [email protected]\nSubject:        Re: [PERFORM] upgraded to pgsql 8.2.4, getting worse performance then 7.4.x\n\nOn Sunday 03 June 2007 16:39:51 Luke Lonergan wrote:\n> When you initdb, a config file is edited from the template by initdb to\n> reflect your machine config.\n\nI didn't realize that. I'll have to harass the rest of the team to see if\nsomeone overwrote that file or not. In the interim, I did an 'initdb' to\nanother location on the same box and then copied those values into the config\nfile. That's cool to do, I assume?\n\n--\nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nCowering in a closet is starting to seem like a reasonable plan.", "msg_date": "Mon, 4 Jun 2007 08:51:47 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: upgraded to pgsql 8.2.4, getting worse performance then 7.4.x" } ]
[ { "msg_contents": "Hi,\n\nI'm currently playing with dbt2 and am wondering, if the results I'm \ngetting are reasonable. I'm testing a 2x Dual Core Xeon system with 4 GB \nof RAM and 8 SATA HDDs attached via Areca RAID Controller w/ battery \nbacked write cache. Seven of the eight platters are configured as one \nRAID6, one spare drive. That should leave five platters for distributing \nread only accesses.\n\nThe NOTPM numbers I'm getting are suspiciously low, IMO, but maybe I'm \nexpecting too much. What do you think, is this reasonable or do I have \nto twiddle with the configuration somewhat more?\n\nRegards\n\nMarkus\n\n\nHere are my results:\n\n Response Time (s)\n Transaction % Average : 90th % Total \nRollbacks %\n------------ ----- --------------------- ----------- --------------- \n -----\n Delivery 3.83 549.046 : 595.280 1212 \n0 0.00\n New Order 45.79 524.659 : 562.016 14494 \n151 1.05\nOrder Status 3.98 517.497 : 551.552 1261 0 \n 0.00\n Payment 42.50 514.562 : 550.383 13452 \n0 0.00\n Stock Level 3.90 510.610 : 546.957 1236 \n0 0.00\n------------ ----- --------------------- ----------- --------------- \n -----\n\n238.39 new-order transactions per minute (NOTPM)\n59.5 minute duration\n0 total unknown errors\n529 second(s) ramping up\n", "msg_date": "Mon, 04 Jun 2007 17:00:07 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "dbt2 NOTPM numbers" }, { "msg_contents": "Markus Schiltknecht wrote:\n> I'm currently playing with dbt2 and am wondering, if the results I'm \n> getting are reasonable. I'm testing a 2x Dual Core Xeon system with 4 GB \n> of RAM and 8 SATA HDDs attached via Areca RAID Controller w/ battery \n> backed write cache. Seven of the eight platters are configured as one \n> RAID6, one spare drive. That should leave five platters for distributing \n> read only accesses.\n> \n> The NOTPM numbers I'm getting are suspiciously low, IMO, but maybe I'm \n> expecting too much. What do you think, is this reasonable or do I have \n> to twiddle with the configuration somewhat more?\n\nThere's clearly something wrong. The response times are ridiculously \nhigh, they should be < 5 seconds (except for stock level transaction) to \npass a TPC-C test. I wonder if you built any indexes at all?\n\nThe configuration I'm running here has 3 data drives, and I'm getting \nreasonable results with ~100 warehouses, at ~1200 noTPM.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 04 Jun 2007 16:09:39 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nHeikki Linnakangas wrote:\n> There's clearly something wrong. The response times are ridiculously \n> high, they should be < 5 seconds (except for stock level transaction) to \n> pass a TPC-C test. I wonder if you built any indexes at all?\n\nHm.. according to the output/5/db/plan0.out, all queries use index \nscans, so that doesn't seem to be the problem.\n\n> The configuration I'm running here has 3 data drives, and I'm getting \n> reasonable results with ~100 warehouses, at ~1200 noTPM.\n\nThanks, that's exactly the one simple and very raw comparison value I've \nbeen looking for. (Since most of the results pages of (former?) OSDL are \ndown).\n\nI'll run a bonnie++ first. 
As the CPUs seem to be idle most of the time \n(see the vmstat.out below), I'm suspecting the RAID or disks.\n\nRegards\n\nMarkus\n\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 2 2 329512 1289384 8 2680704 0 0 4 1 1 2 0 \n 0 100 0\n 0 48 329512 1733016 8 2083400 0 0 1789 2265 553 1278 1 \n 1 63 35\n 2 81 329512 492052 8 3194532 0 0 6007 10135 1025 3291 6 \n1 21 71\n 0 8 329512 153360 8 3457936 0 0 6321 11919 1093 4581 7 \n2 12 79\n 0 9 329512 150188 8 3433380 0 0 2083 5078 707 2197 2 \n1 35 62\n 0 6 329512 148412 8 3408748 0 0 1001 2888 526 1203 1 \n0 34 64\n 0 27 329512 152212 8 3379736 0 0 2281 5166 733 2320 3 \n1 18 79\n 0 11 329512 152560 8 3355940 0 0 1837 4028 626 1738 2 \n1 35 63\n 0 14 329512 149268 8 3334912 0 0 1674 3836 630 1619 2 \n1 31 67\n 0 6 329512 152916 8 3311552 0 0 1404 3017 568 1372 1 \n0 57 41\n 0 13 329688 149492 8 3313200 0 0 1687 4178 650 1644 2 \n1 29 69\n 0 84 329688 153480 8 3309684 0 0 812 3790 641 2669 1 \n1 22 76\n 0 18 329688 149232 8 3314032 0 0 87 2147 511 2414 0 \n0 16 83\n 3 20 329688 149196 8 3314648 0 0 756 1854 496 1044 1 \n0 52 47\n", "msg_date": "Mon, 04 Jun 2007 17:26:38 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "\n> I'll run a bonnie++ first. As the CPUs seem to be idle most of the time \n> (see the vmstat.out below), I'm suspecting the RAID or disks.\n\n\tYou have a huge amount of iowait !\n\tDid you put the xlog on a separate disk ?\n\tWhat filesystem do you use ?\n\tDid you check that your BBU cache works ?\n\n\tFor that run a dumb script which does INSERTS in a test table in \nautocommit mode ; if you get (7200rpm / 60) = 120 inserts / sec or less, \nthe good news is that your drives don't lie about fsync, the bad news is \nthat your BBU cache isn't working...\n", "msg_date": "Mon, 04 Jun 2007 18:47:20 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nPFC wrote:\n> You have a huge amount of iowait !\n\nYup.\n\n> Did you put the xlog on a separate disk ?\n\nNo, it's all one big RAID6 for the sake of simplicity (plus I doubt \nsomewhat, that 2 disks for WAL + 5 for data + 1 spare would be much \nfaster than 7 disks for WAL and data + 1 spare - considering that RAID6 \nneeds two parity disks, that's 3 vs 5 disks for data...)\n\n> What filesystem do you use ?\n\nXFS\n\n> Did you check that your BBU cache works ?\n\nThanks to you're hint, yes. I've attached the small python script, in \ncase it might help someone else, too.\n\n> For that run a dumb script which does INSERTS in a test table in \n> autocommit mode ; if you get (7200rpm / 60) = 120 inserts / sec or less, \n> the good news is that your drives don't lie about fsync, the bad news is \n> that your BBU cache isn't working...\n\nAccording to my little script, I constantly get somewhat around 6000 \ninserts per second, so I guess either my BBU works, or the drives are \nlying ;-) Simplistic troughput testing with dd gives > 200MB/s, which \nalso seems fine.\n\n\nObviously there's something else I'm doing wrong. 
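For anyone who wants to repeat that check without the attached script, a rough
SQL-only version works too. The table name is made up, and each INSERT must run
as its own transaction (plain autocommit psql), otherwise the test says nothing
about fsync:

CREATE TABLE bbu_test (id serial PRIMARY KEY, noted timestamptz DEFAULT now());
-- feed psql a file containing a few thousand copies of this line:
INSERT INTO bbu_test DEFAULT VALUES;
-- then, after at least a few seconds' worth of rows:
SELECT count(*) AS n_rows,
       round(count(*) / extract(epoch FROM max(noted) - min(noted))) AS inserts_per_sec
  FROM bbu_test;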
I didn't really care \nmuch about postgresql.conf, except setting a larger shared_buffers and a \nreasonable effective_cache_size.\n\n\nOh, something else that's probably worth thinking about (and just came \nto my mind again): the XFS is on a lvm2, on that RAID6.\n\n\nRegards\n\nMarkus\n\n\nSimplistic throughput testing with dd:\n\ndd of=test if=/dev/zero bs=10K count=800000\n800000+0 records in\n800000+0 records out\n8192000000 bytes (8.2 GB) copied, 37.3552 seconds, 219 MB/s\npamonth:/opt/dbt2/bb# dd if=test of=/dev/zero bs=10K count=800000\n800000+0 records in\n800000+0 records out\n8192000000 bytes (8.2 GB) copied, 27.6856 seconds, 296 MB/s", "msg_date": "Mon, 04 Jun 2007 20:56:37 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Markus Schiltknecht wrote:\n> Hi,\n> \n> Heikki Linnakangas wrote:\n>> There's clearly something wrong. The response times are ridiculously \n>> high, they should be < 5 seconds (except for stock level transaction) \n>> to pass a TPC-C test. I wonder if you built any indexes at all?\n> \n> Hm.. according to the output/5/db/plan0.out, all queries use index \n> scans, so that doesn't seem to be the problem.\n\nI still suspect there's something wrong with plans, I doubt you can get \nthat bad performance unless it's doing something really stupid. I'd \nsuggest setting log_min_duration_statement = 5000, and seeing what you \nget. Also check pg_stat_user_table.seq_scan just to be extra sure \nthere's no seq scans.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 04 Jun 2007 21:22:35 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nHeikki Linnakangas wrote:\n> I still suspect there's something wrong with plans, I doubt you can get \n> that bad performance unless it's doing something really stupid.\n\nAgreed, but I'm still looking for that really stupid thing... AFAICT, \nthere are really no seqscans..., see the pg_stat_user_tables below.\n\n> I'd \n> suggest setting log_min_duration_statement = 5000, and seeing what you \n> get. Also check pg_stat_user_table.seq_scan just to be extra sure \n> there's no seq scans.\n\nI've also added some of the log messages for min_duration_statement \nbelow. Both were taken after two or three test runs.\n\nI'm really wondering, if the RAID 6 of the ARECA 1260 hurts so badly. \nThat would be disappointing, IMO. I'll try if I can reconfigure it to do \nRAID 1+0, and then test again. 
(Unfortunately the box has already been \nshipped to the customer, so that's getting tricky to do via ssh..:-( ).\n\n\nRegards\n\nMarkus\n\n\n*** pg_stat_user_tables ***\n\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | \nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | last_vacuum | \n last_autovacuum | last_analyze | last_autoanalyze\n-------+------------+------------+----------+--------------+----------+---------------+-----------+-----------+-----------+-------------+-------------------------------+--------------+-------------------------------\n 16390 | public | district | 0 | 0 | 206335 | \n 206335 | 0 | 100771 | 0 | | \n2007-06-05 15:40:44.39573+02 | | 2007-06-05 15:39:41.636736+02\n 16396 | public | new_order | 0 | 0 | 91860 | \n 41402317 | 51372 | 0 | 45844 | | \n | |\n 16400 | public | order_line | 0 | 0 | 101195 | \n 933197 | 538442 | 436140 | 0 | | \n | |\n 16402 | public | item | 0 | 0 | 538942 | \n 538442 | 0 | 0 | 0 | | \n | |\n 16404 | public | stock | 0 | 0 | 1093528 | \n 1077782 | 0 | 538442 | 0 | | \n | |\n 16394 | public | history | 0 | 0 | | \n | 49399 | 0 | 0 | | \n | |\n 16388 | public | warehouse | 0 | 0 | 150170 | \n 150170 | 0 | 49399 | 0 | | \n2007-06-05 15:39:41.059572+02 | | 2007-06-05 15:38:39.976122+02\n 16398 | public | orders | 0 | 0 | 96490 | \n 96519 | 51372 | 45930 | 0 | | \n | |\n 16392 | public | customer | 0 | 0 | 233263 | \n 599917 | 0 | 95329 | 0 | | \n | |\n\n\n*** database log snippet ***\n\n2007-06-05 15:42:09 CEST LOG: duration: 6020.820 ms statement: SELECT \n* FROM order_status(1747, 291, 3, '')\n2007-06-05 15:42:09 CEST LOG: duration: 688.730 ms statement: SELECT \npayment(47, 2, 1533, 47, 2, '', 4295.460000)\n2007-06-05 15:42:09 CEST LOG: duration: 5923.518 ms statement: SELECT \npayment(319, 8, 0, 319, 8, 'OUGHTATIONEING', 2331.470000)\n2007-06-05 15:42:09 CEST LOG: duration: 6370.433 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6463.583 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6358.047 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6114.972 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6193.684 ms statement: SELECT \npayment(96, 10, 0, 96, 10, 'ESEOUGHTBAR', 997.050000)\n2007-06-05 15:42:09 CEST LOG: duration: 6375.163 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6139.613 ms statement: SELECT \npayment(454, 8, 0, 454, 8, 'ANTIOUGHTEING', 1575.110000)\n2007-06-05 15:42:09 CEST LOG: duration: 6336.462 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6420.227 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6447.025 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 15549.277 ms statement: SELECT \ndelivery(124, 7)\n2007-06-05 15:42:09 CEST LOG: duration: 1432.199 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 6478.086 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 1405.925 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:09 CEST LOG: duration: 8399.567 ms statement: SELECT \ndelivery(374, 4)\n2007-06-05 15:42:10 CEST LOG: duration: 657.939 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 1159.131 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 840.907 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 
616.234 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 1115.098 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 1332.445 ms statement: SELECT \npayment(465, 6, 0, 465, 6, 'ABLEESEESE', 4660.790000)\n2007-06-05 15:42:10 CEST LOG: duration: 855.661 ms statement: SELECT \npayment(267, 6, 0, 267, 6, 'OUGHTEINGOUGHT', 4214.080000)\n2007-06-05 15:42:10 CEST LOG: duration: 580.983 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 883.528 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 7757.581 ms statement: SELECT \ndelivery(126, 6)\n2007-06-05 15:42:10 CEST LOG: duration: 537.642 ms statement: SELECT \npayment(493, 2, 0, 493, 2, 'BARBARANTI', 2881.500000)\n2007-06-05 15:42:10 CEST LOG: duration: 1035.529 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:10 CEST LOG: duration: 1007.521 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:11 CEST LOG: duration: 1088.356 ms statement: FETCH \nALL IN mycursor\n2007-06-05 15:42:11 CEST LOG: duration: 1749.507 ms statement: SELECT \ndelivery(442, 5)\n\n\n", "msg_date": "Tue, 05 Jun 2007 16:16:30 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Markus Schiltknecht wrote:\n> Hi,\n> \n> Heikki Linnakangas wrote:\n>> I still suspect there's something wrong with plans, I doubt you can \n>> get that bad performance unless it's doing something really stupid.\n> \n> Agreed, but I'm still looking for that really stupid thing... AFAICT, \n> there are really no seqscans..., see the pg_stat_user_tables below.\n> \n>> I'd suggest setting log_min_duration_statement = 5000, and seeing what \n>> you get. Also check pg_stat_user_table.seq_scan just to be extra sure \n>> there's no seq scans.\n> \n> I've also added some of the log messages for min_duration_statement \n> below. Both were taken after two or three test runs.\n> \n> I'm really wondering, if the RAID 6 of the ARECA 1260 hurts so badly. \n> That would be disappointing, IMO. I'll try if I can reconfigure it to do \n> RAID 1+0, and then test again. (Unfortunately the box has already been \n> shipped to the customer, so that's getting tricky to do via ssh..:-( ).\n\nMaybe, TPC-C is very write-intensive. I don't know much about RAID \nstuff, but I think you'd really benefit from a separate WAL drive. You \ncould try turning fsync=off to see if that makes a difference.\n\nOh, and how many connections are you using? DBT-2 can be quite sensitive \nto that. 30 seems to work pretty well for me.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 05 Jun 2007 15:21:52 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nHeikki Linnakangas wrote:\n> Maybe, TPC-C is very write-intensive. I don't know much about RAID \n> stuff, but I think you'd really benefit from a separate WAL drive. You \n> could try turning fsync=off to see if that makes a difference.\n\nHm.. good idea, I'll try that.\n\n> Oh, and how many connections are you using? DBT-2 can be quite sensitive \n> to that. 30 seems to work pretty well for me.\n\nI've been using between 2 and 90, but that made pretty much no \ndifference at all. 
I'm not getting anything more that some 300 NOTPM.\n\nRegards\n\nMarkus\n", "msg_date": "Tue, 05 Jun 2007 16:25:33 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On Tue, 5 Jun 2007, Markus Schiltknecht wrote:\n\n> I'm really wondering, if the RAID 6 of the ARECA 1260 hurts so badly\n\nAll of your disk performance tests look reasonable; certainly not slow \nenough to cause the issue you're seeing. The only thing I've seen in this \nthread that makes me slightly suspicious is:\n\n> Oh, something else that's probably worth thinking about (and just came \n> to my mind again): the XFS is on a lvm2, on that RAID6.\n\nThere have been reports here from reliable sources that lvm can introduce \nperformance issues; \nhttp://archives.postgresql.org/pgsql-performance/2006-07/msg00276.php is \none example. Your dd test results suggest you're getting decent thoughput \nthough, so you don't seem to be suffering too much from that.\n\nAnyway, did you post your postgresql.conf yet? I don't remember seeing \nit. From looking at your latest data, my first guess would be there's \nsomething wrong there that's either causing a) the system to be \ncheckpointing constantly, or b) not enough memory allocated to allow \nqueries to execute properly. The fact that all your long statement times \ncome from SELECTs suggests to me that playing with the WAL parameters like \nfsync isn't likely to help here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 5 Jun 2007 10:42:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On 6/4/07, Markus Schiltknecht <[email protected]> wrote:\n> Thanks, that's exactly the one simple and very raw comparison value I've\n> been looking for. (Since most of the results pages of (former?) OSDL are\n> down).\n\nYeah, those results pages are gone for good. :(\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Jun 2007 10:01:59 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On Jun 4, 2007, at 1:56 PM, Markus Schiltknecht wrote:\n> Simplistic throughput testing with dd:\n>\n> dd of=test if=/dev/zero bs=10K count=800000\n> 800000+0 records in\n> 800000+0 records out\n> 8192000000 bytes (8.2 GB) copied, 37.3552 seconds, 219 MB/s\n> pamonth:/opt/dbt2/bb# dd if=test of=/dev/zero bs=10K count=800000\n> 800000+0 records in\n> 800000+0 records out\n> 8192000000 bytes (8.2 GB) copied, 27.6856 seconds, 296 MB/s\n\nI don't think that kind of testing is useful for good raid \ncontrollers on RAID5/6, because the controller will just be streaming \nthe data out; it'll compute the parity blocks on the fly and just \nstream data to the drives as fast as possible.\n\nBut that's not how writes in the database work (except for WAL); \nyou're writing stuff all over the place, none of which is streamed. \nSo in the best case (the entire stripe being updated is in the \ncontroller's cache), at a minimum it's going to have to write data + \nparity ( * 2 for RAID 6, IIRC) for every write. But any real-sized \ndatabase is going to be far larger than your raid cache, which means \nthere's a good chance a block being written will no longer have it's \nstripe in cache. 
In that case, the controller is going to have to \nread a bunch of data back off the drive, which is going to clobber \nperformance.\n\nNow, add that performance bottleneck on top of your WAL writes and \nyou're in real trouble.\n\nBTW, I was thinking in terms of stripe size when I wrote this, but I \ndon't know if good controllers actually need to deal with things at a \nstripe level, or if they can deal with smaller chunks of a stripe. In \neither case, the issue is still the number of extra reads going on.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Sun, 10 Jun 2007 22:08:32 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nJim Nasby wrote:\n> I don't think that kind of testing is useful for good raid controllers \n> on RAID5/6, because the controller will just be streaming the data out; \n> it'll compute the parity blocks on the fly and just stream data to the \n> drives as fast as possible.\n\nThat's why I called it 'simplistic throughput testing'...\n\n> But that's not how writes in the database work (except for WAL); you're \n> writing stuff all over the place, none of which is streamed. So in the \n> best case (the entire stripe being updated is in the controller's \n> cache), at a minimum it's going to have to write data + parity ( * 2 for \n> RAID 6, IIRC) for every write. But any real-sized database is going to \n> be far larger than your raid cache, which means there's a good chance a \n> block being written will no longer have it's stripe in cache. In that \n> case, the controller is going to have to read a bunch of data back off \n> the drive, which is going to clobber performance.\n\nI'm well aware. Our workload (hopefully) consists of a much lower \nwrites/reads ratio than dbt2, so RAID 6 might work anyway.\n\n> Now, add that performance bottleneck on top of your WAL writes and \n> you're in real trouble.\n\nWell, I'm basically surprised of the low NOTPM numbers compared to my \ndesktop system, which also does around 200 NOTPMs, with only two \nplatters in RAID 1 config... How can a server with four Cores and 8 \nPlatters be equaly slow?\n\nAnyway, I've now reconfigured the system with RAID 1+0 and got more than \ntwice the NOTPMs:\n\n Response Time (s)\n Transaction % Average : 90th % Total \nRollbacks %\n------------ ----- --------------------- ----------- --------------- \n -----\n Delivery 3.84 204.733 : 241.998 704 \n0 0.00\n New Order 45.77 203.651 : 242.847 8382 \n75 0.90\nOrder Status 4.32 199.184 : 238.081 792 0 \n 0.00\n Payment 42.02 198.969 : 236.549 7695 \n0 0.00\n Stock Level 4.04 198.668 : 236.113 740 \n0 0.00\n------------ ----- --------------------- ----------- --------------- \n -----\n\n567.72 new-order transactions per minute (NOTPM)\n14.5 minute duration\n0 total unknown errors\n529 second(s) ramping up\n\nI'm still feeling that 550 is pretty low. 
The response times are beyond \ngood and evil.\n\nAs vmstat.out tells us, the CPUs are still pretty much idle or waiting \nmost of the time.\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 0 3 494560 2181964 8 1787680 13 15 317 242 140 2 4 \n 1 72 23\n 0 9 494560 1558892 8 2298348 0 0 2973 2018 584 1114 2 \n 1 76 21\n 1 14 494496 424116 8 3316000 2 0 5613 9293 935 2943 5 \n1 29 65\n 0 15 452840 150148 8 3487160 738 3 5662 8709 925 3444 5 \n2 21 73\n 0 11 439172 151052 8 3386556 263 0 5690 8293 969 4145 5 \n2 23 70\n 0 17 438996 149748 8 3308184 57 6 5036 7174 902 4104 5 \n2 25 69\n 1 25 439940 150344 8 3228304 9 28 4757 7479 922 4269 5 \n2 26 67\n\nFor everybody interested, these settings are different from Pg 8.2 \ndefault postgresql.conf:\n\nlisten_addresses = '*'\nport = 54321\nshared_buffers = 2048MB\nwork_mem = 10MB\nmaintenance_work_mem = 64MB\n#max_stack_depth = 4MB\nmax_fsm_pages = 409600\neachcheckpoint_segments = 6\ncheckpoint_timeout = 1h\neffective_cache_size = 3800MB\nlog_min_duration_statement = 500\n\n\nFor dbt2, I've used 500 warehouses and 90 concurrent connections, \ndefault values for everything else.\n\nDo I simply have to put more RAM (currently 4GB) in that machine? Or \nwhat else can be wrong?\n\nIs anybody else seeing low performance with the Areca SATA Controllers? \n(in my case: \"Areca Technology Corp. ARC-1260 16-Port PCI-Express to \nSATA RAID Controller\", according to lspci)\n\n\nThen again, maybe I'm just expecting too much...\n\n\nRegards\n\nMarkus\n\n", "msg_date": "Mon, 11 Jun 2007 17:30:38 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Markus Schiltknecht wrote:\n> For dbt2, I've used 500 warehouses and 90 concurrent connections, \n> default values for everything else.\n\n500? That's just too much for the hardware. Start from say 70 warehouses \nand up it from there 10 at a time until you hit the wall. I'm using 30 \nconnections with ~100 warehouses on somewhat similar hardware.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 11 Jun 2007 16:34:34 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Markus Schiltknecht wrote:\n>> For dbt2, I've used 500 warehouses and 90 concurrent connections, \n>> default values for everything else.\n> \n> 500? That's just too much for the hardware. Start from say 70 warehouses \n> and up it from there 10 at a time until you hit the wall. I'm using 30 \n> connections with ~100 warehouses on somewhat similar hardware.\n\nAha! That's why... I've seen the '500' in some dbt2 samples and thought \nit'd be a good default value.\n\nBut it makes sense that the benchmark doesn't automatically 'scale down'...\n\nStupid me.\n\nThanks again! Hoping for larger NOTPMs.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 11 Jun 2007 17:44:24 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On 6/11/07, Markus Schiltknecht <[email protected]> wrote:\n> Heikki Linnakangas wrote:\n> > Markus Schiltknecht wrote:\n> >> For dbt2, I've used 500 warehouses and 90 concurrent connections,\n> >> default values for everything else.\n> >\n> > 500? That's just too much for the hardware. 
Start from say 70 warehouses\n> > and up it from there 10 at a time until you hit the wall. I'm using 30\n> > connections with ~100 warehouses on somewhat similar hardware.\n>\n> Aha! That's why... I've seen the '500' in some dbt2 samples and thought\n> it'd be a good default value.\n>\n> But it makes sense that the benchmark doesn't automatically 'scale down'...\n>\n> Stupid me.\n>\n> Thanks again! Hoping for larger NOTPMs.\n\nYeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K\nRPM scsi drives and 6 dual-channel controllers... :)\n\nRegards,\nMark\n", "msg_date": "Wed, 13 Jun 2007 09:33:22 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "Hi,\n\nMark Wong wrote:\n> Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K\n> RPM scsi drives and 6 dual-channel controllers... :)\n\nLucky you!\n\nIn the mean time, I've figured out that the box in question peaked at \nabout 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to compare \nagain to RAID 6.\n\nIs there any place where such results are collected?\n\nRegards\n\nMarkus\n", "msg_date": "Wed, 13 Jun 2007 18:43:35 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On 6/13/07, Markus Schiltknecht <[email protected]> wrote:\n> Hi,\n>\n> Mark Wong wrote:\n> > Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K\n> > RPM scsi drives and 6 dual-channel controllers... :)\n>\n> Lucky you!\n>\n> In the mean time, I've figured out that the box in question peaked at\n> about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to compare\n> again to RAID 6.\n>\n> Is there any place where such results are collected?\n\nUnfortunately not anymore. When I was working at OSDL there was...\nI've been told that the lab has been mostly disassembled now so the\ndata are lost now.\n\nRegards,\nMark\n", "msg_date": "Wed, 13 Jun 2007 10:02:41 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" }, { "msg_contents": "On Jun 13, 2007, at 11:43 AM, Markus Schiltknecht wrote:\n> In the mean time, I've figured out that the box in question peaked \n> at about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to \n> compare again to RAID 6.\n>\n> Is there any place where such results are collected?\n\nThere is the ill-used -benchmarks list, but perhaps it would be \nbetter if we setup a wiki for this...\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Mon, 18 Jun 2007 17:50:01 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 NOTPM numbers" } ]
[ { "msg_contents": "I have several thousand clients. Our clients do surveys, and each survey\nhas two tables for the client data,\n\n responders\n responses\n\nFrequent inserts into both table.\n\nRight now, we are seeing significant time during inserts to these two\ntables.\n\nSome of the indices in tableA and tableB do not index on the client ID\nfirst.\n\nSo, we are considering two possible solutions.\n\n (1) Create separate responders and responses tables for each client.\n\n (2) Make sure all indices on responders and responses start with the\n client id (excepting, possibly, the primary keys for these fields) and\n have all normal operation queries always include an id_client.\n\nRight now, for example, given a responder and a survey question, we do a\nquery in responses by the id_responder and id_survey. This gives us a\nunique record, but I'm wondering if maintaining the index on\n(id_responder,id_survey) is more costly on inserts than maintaining the\nindex (id_client,id_responder,id_survey) given that we also have other\nindices on (id_client,...).\n\nOption (1) makes me very nervous. I don't like the idea of the same sorts\nof data being stored in lots of different tables, in part for long-term\nmaintenance reasons. We don't really need cross-client reporting, however.\n\n=thomas\n\n", "msg_date": "Mon, 04 Jun 2007 13:40:01 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Thousands of tables versus on table?" }, { "msg_contents": "On Mon, 2007-06-04 at 13:40 -0400, Thomas Andrews wrote:\n> I have several thousand clients. Our clients do surveys, and each survey\n> has two tables for the client data,\n> \n> responders\n> responses\n> \n> Frequent inserts into both table.\n> \n> Right now, we are seeing significant time during inserts to these two\n> tables.\n\nCan you provide some concrete numbers here? Perhaps an EXPLAIN ANALYZE\nfor the insert, sizes of tables, stuff like that?\n\n> Some of the indices in tableA and tableB do not index on the client ID\n> first.\n\nWhat reason do you have to think that this matters?\n\n> So, we are considering two possible solutions.\n> \n> (1) Create separate responders and responses tables for each client.\n> \n> (2) Make sure all indices on responders and responses start with the\n> client id (excepting, possibly, the primary keys for these fields) and\n> have all normal operation queries always include an id_client.\n> \n> Right now, for example, given a responder and a survey question, we do a\n> query in responses by the id_responder and id_survey. This gives us a\n> unique record, but I'm wondering if maintaining the index on\n> (id_responder,id_survey) is more costly on inserts than maintaining the\n> index (id_client,id_responder,id_survey) given that we also have other\n> indices on (id_client,...).\n> \n> Option (1) makes me very nervous. I don't like the idea of the same sorts\n> of data being stored in lots of different tables, in part for long-term\n> maintenance reasons. We don't really need cross-client reporting, however.\n\nWhat version of PG is this? What is your vacuuming strategy? Have you\ntried a REINDEX to see if that helps?\n\n-- Mark Lewis\n", "msg_date": "Mon, 04 Jun 2007 11:15:43 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
}, { "msg_contents": "We're running 7.4 but will be upgrading to 8.2.\n\nThe responses table has 20,000,000 records.\n\nSometimes (but not all the time) an insert into the responses table can \ntake 5-6 seconds.\n\nI guess my real question is, does it ever make sense to create thousands \nof tables like this?\n\n=thomas\n\nMark Lewis wrote:\n> On Mon, 2007-06-04 at 13:40 -0400, Thomas Andrews wrote:\n>> I have several thousand clients. Our clients do surveys, and each survey\n>> has two tables for the client data,\n>>\n>> responders\n>> responses\n>>\n>> Frequent inserts into both table.\n>>\n>> Right now, we are seeing significant time during inserts to these two\n>> tables.\n> \n> Can you provide some concrete numbers here? Perhaps an EXPLAIN ANALYZE\n> for the insert, sizes of tables, stuff like that?\n> \n>> Some of the indices in tableA and tableB do not index on the client ID\n>> first.\n> \n> What reason do you have to think that this matters?\n> \n>> So, we are considering two possible solutions.\n>>\n>> (1) Create separate responders and responses tables for each client.\n>>\n>> (2) Make sure all indices on responders and responses start with the\n>> client id (excepting, possibly, the primary keys for these fields) and\n>> have all normal operation queries always include an id_client.\n>>\n>> Right now, for example, given a responder and a survey question, we do a\n>> query in responses by the id_responder and id_survey. This gives us a\n>> unique record, but I'm wondering if maintaining the index on\n>> (id_responder,id_survey) is more costly on inserts than maintaining the\n>> index (id_client,id_responder,id_survey) given that we also have other\n>> indices on (id_client,...).\n>>\n>> Option (1) makes me very nervous. I don't like the idea of the same sorts\n>> of data being stored in lots of different tables, in part for long-term\n>> maintenance reasons. We don't really need cross-client reporting, however.\n> \n> What version of PG is this? What is your vacuuming strategy? Have you\n> tried a REINDEX to see if that helps?\n> \n> -- Mark Lewis\n>", "msg_date": "Mon, 04 Jun 2007 14:45:45 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Oh, and we vacuum every day. Not sure about REINDEX, but I doubt we \nhave done that.\n\n=thomas\n\nMark Lewis wrote:\n> On Mon, 2007-06-04 at 13:40 -0400, Thomas Andrews wrote:\n>> I have several thousand clients. Our clients do surveys, and each survey\n>> has two tables for the client data,\n>>\n>> responders\n>> responses\n>>\n>> Frequent inserts into both table.\n>>\n>> Right now, we are seeing significant time during inserts to these two\n>> tables.\n> \n> Can you provide some concrete numbers here? Perhaps an EXPLAIN ANALYZE\n> for the insert, sizes of tables, stuff like that?\n> \n>> Some of the indices in tableA and tableB do not index on the client ID\n>> first.\n> \n> What reason do you have to think that this matters?\n> \n>> So, we are considering two possible solutions.\n>>\n>> (1) Create separate responders and responses tables for each client.\n>>\n>> (2) Make sure all indices on responders and responses start with the\n>> client id (excepting, possibly, the primary keys for these fields) and\n>> have all normal operation queries always include an id_client.\n>>\n>> Right now, for example, given a responder and a survey question, we do a\n>> query in responses by the id_responder and id_survey. 
This gives us a\n>> unique record, but I'm wondering if maintaining the index on\n>> (id_responder,id_survey) is more costly on inserts than maintaining the\n>> index (id_client,id_responder,id_survey) given that we also have other\n>> indices on (id_client,...).\n>>\n>> Option (1) makes me very nervous. I don't like the idea of the same sorts\n>> of data being stored in lots of different tables, in part for long-term\n>> maintenance reasons. We don't really need cross-client reporting, however.\n> \n> What version of PG is this? What is your vacuuming strategy? Have you\n> tried a REINDEX to see if that helps?\n> \n> -- Mark Lewis\n>", "msg_date": "Mon, 04 Jun 2007 14:46:51 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "\n\"Thomas Andrews\" <[email protected]> writes:\n\n> I guess my real question is, does it ever make sense to create thousands of\n> tables like this?\n\nSometimes. But usually it's not a good idea. \n\nWhat you're proposing is basically partitioning, though you may not actually\nneed to put all the partitions together for your purposes. Partitioning's main\nbenefit is in the management of the data. You can drop and load partitions in\nchunks rather than have to perform large operations on millions of records.\n\nPostgres doesn't really get any faster by breaking the tables up like that. In\nfact it probably gets slower as it has to look up which of the thousands of\ntables you want to work with.\n\nHow often do you update or delete records and how many do you update or\ndelete? Once per day is a very low frequency for vacuuming a busy table, you\nmay be suffering from table bloat. But if you never delete or update records\nthen that's irrelevant.\n\nDoes reindexing or clustering the table make a marked difference?\n\nI would suggest you post your schema and the results of \"vacuum verbose\".\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 04 Jun 2007 20:43:38 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "\n\n\nOn 6/4/07 3:43 PM, \"Gregory Stark\" <[email protected]> wrote:\n\n> \n> \"Thomas Andrews\" <[email protected]> writes:\n> \n>> I guess my real question is, does it ever make sense to create thousands of\n>> tables like this?\n> \n> Sometimes. But usually it's not a good idea.\n> \n> What you're proposing is basically partitioning, though you may not actually\n> need to put all the partitions together for your purposes. Partitioning's main\n> benefit is in the management of the data. You can drop and load partitions in\n> chunks rather than have to perform large operations on millions of records.\n> \n> Postgres doesn't really get any faster by breaking the tables up like that. In\n> fact it probably gets slower as it has to look up which of the thousands of\n> tables you want to work with.\n> \n> How often do you update or delete records and how many do you update or\n> delete? Once per day is a very low frequency for vacuuming a busy table, you\n> may be suffering from table bloat. But if you never delete or update records\n> then that's irrelevant.\n\nIt looks like the most inserts that have occurred in a day is about 2000.\nThe responders table has 1.3 million records, the responses table has 50\nmillion records. 
Most of the inserts are in the responses table.\n\n> \n> Does reindexing or clustering the table make a marked difference?\n> \n\nClustering sounds like it might be a really good solution.  How long does a\ncluster command usually take on a table with 50,000,000 records?  Is it\nsomething that can be run daily/weekly?\n\nI'd rather not post the schema because it's not mine - I'm a consultant.  I\ncan tell you our vacuum every night is taking 2 hours and that disk IO is\nthe real killer - the CPU rarely gets higher than 20% or so.\n\n=thomas\n\n", "msg_date": "Mon, 04 Jun 2007 15:57:41 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On 6/4/07, Thomas Andrews <[email protected]> wrote:\n>\n>\n>\n>\n> On 6/4/07 3:43 PM, \"Gregory Stark\" <[email protected]> wrote:\n>\n> >\n> > \"Thomas Andrews\" <[email protected]> writes:\n> >\n> >> I guess my real question is, does it ever make sense to create\n> thousands of\n> >> tables like this?\n> >\n> > Sometimes. But usually it's not a good idea.\n> >\n> > What you're proposing is basically partitioning, though you may not\n> actually\n> > need to put all the partitions together for your purposes.\n> Partitioning's main\n> > benefit is in the management of the data. You can drop and load\n> partitions in\n> > chunks rather than have to perform large operations on millions of\n> records.\n> >\n> > Postgres doesn't really get any faster by breaking the tables up like\n> that. In\n> > fact it probably gets slower as it has to look up which of the thousands\n> of\n> > tables you want to work with.\n> >\n> > How often do you update or delete records and how many do you update or\n> > delete? Once per day is a very low frequency for vacuuming a busy table,\n> you\n> > may be suffering from table bloat. But if you never delete or update\n> records\n> > then that's irrelevant.\n>\n> It looks like the most inserts that have occurred in a day is about 2000.\n> The responders table has 1.3 million records, the responses table has 50\n> million records. Most of the inserts are in the responses table.\n>\n> >\n> > Does reindexing or clustering the table make a marked difference?\n> >\n>\n> Clustering sounds like it might be a really good solution. How long does\n> a\n> cluster command usually take on a table with 50,000,000 records? Is it\n> something that can be run daily/weekly?\n>\n> I'd rather not post the schema because it's not mine - I'm a\n> consultant. I\n> can tell you our vacuum every night is taking 2 hours and that disk IO is\n> the real killer - the CPU rarely gets higher than 20% or so.\n>\n> =thomas\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\nWhat OS are you running on?\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n", "msg_date": "Mon, 4 Jun 2007 13:08:38 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Linux 2.4.9, if I'm reading this right.\n\n=thomas\n\n\nOn 6/4/07 4:08 PM, \"Y Sidhu\" <[email protected]> wrote:\n\n> On 6/4/07, Thomas Andrews <[email protected]> wrote:\n>> \n>> \n>> \n>> On 6/4/07 3:43 PM, \"Gregory Stark\" <[email protected]> wrote:\n>> \n>>> >\n>>> > \"Thomas Andrews\" < [email protected]\n>>> <mailto:[email protected]> > writes:\n>>> >\n>>>> >> I guess my real question is, does it ever make sense to create thousands\nof\n>>>> >> tables like this?\n>>> >\n>>> > Sometimes. But usually it's not a good idea.\n>>> >\n>>> > What you're proposing is basically partitioning, though you may not\n>>> actually\n>>> > need to put all the partitions together for your purposes. Partitioning's\n>>> main\n>>> > benefit is in the management of the data. You can drop and load partitions\n>>> in \n>>> > chunks rather than have to perform large operations on millions of\n>>> records.\n>>> >\n>>> > Postgres doesn't really get any faster by breaking the tables up like\n>>> that. In\n>>> > fact it probably gets slower as it has to look up which of the thousands\n>>> of \n>>> > tables you want to work with.\n>>> >\n>>> > How often do you update or delete records and how many do you update or\n>>> > delete? Once per day is a very low frequency for vacuuming a busy table,\n>>> you\n>>> > may be suffering from table bloat. But if you never delete or update\n>>> records \n>>> > then that's irrelevant.\n>> \n>> It looks like the most inserts that have occurred in a day is about 2000.\n>> The responders table has 1.3 million records, the responses table has 50\n>> million records. Most of the inserts are in the responses table.\n>> \n>>> >\n>>> > Does reindexing or clustering the table make a marked difference?\n>>> >\n>> \n>> Clustering sounds like it might be a really good solution. How long does a\n>> cluster command usually take on a table with 50,000,000 records? 
Is it\n>> something that can be run daily/weekly?\n>> \n>> I'd rather not post the schema because it's not mine - I'm a consultant.  I\n>> can tell you our vacuum every night is taking 2 hours and that disk IO is\n>> the real killer - the CPU rarely gets higher than 20% or so.\n>> \n>> =thomas\n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n> \n> \n> What OS are you running on?\n> \n", "msg_date": "Mon, 04 Jun 2007 16:14:14 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "\"Thomas Andrews\" <[email protected]> writes:\n\n> Clustering sounds like it might be a really good solution. How long does a\n> cluster command usually take on a table with 50,000,000 records? Is it\n> something that can be run daily/weekly?\n\nouch, ok, with 50M records cluster isn't going to be quick either, especially\nif you have a lot of indexes.\n\nWith those kinds of numbers and with the kind of workload you're describing\nwhere you have different areas that are really complete separate you might\nconsider partitioning the table. That's essentially what you're proposing\nanyways. 
\n\nHonestly table partitioning in Postgres is pretty young and primitive and if\nyou have the flexibility in your application to refer to different tables\nwithout embedding them throughout your application then you might consider\nthat. But there are also advantages to being able to select from all the\ntables together using the partitioned table.\n\n> I'd rather not post the schema because it's not mine - I'm a consultant. I\n> can tell you our vacuum every night is taking 2 hours and that disk IO is\n> the real killer - the CPU rarely gets higher than 20% or so.\n\nDo you ever update or delete these records? If you never update or delete\nrecords then the vacuum is mostly a waste of effort anyways. (You still have\nto vacuum occasionally to prevent xid wraparound but that's much much less\noften).\n\nIf you do delete records in large batches or have lots of updates then\nvacuuming daily with default fsm settings probably isn't enough.\n\nHow many indexes do you have?\n\nAnd if they don't all have client_id in their prefix then I wonder about the\nplans you're getting. It's unfortunate you can't post your schema and query\nplans. It's possible you have some plans that are processing many more records\nthan they need to to do their work because they're using indexes or\ncombinations of indexes that aren't ideal.\nspecific enough \n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 04 Jun 2007 21:18:43 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Gregory Stark wrote:\n> \"Thomas Andrews\" <[email protected]> writes:\n>\n> \n>> I guess my real question is, does it ever make sense to create thousands of\n>> tables like this?\n>> \n>\n> Sometimes. But usually it's not a good idea. \n>\n> What you're proposing is basically partitioning, though you may not actually\n> need to put all the partitions together for your purposes. Partitioning's main\n> benefit is in the management of the data. You can drop and load partitions in\n> chunks rather than have to perform large operations on millions of records.\n>\n> Postgres doesn't really get any faster by breaking the tables up like that. In\n> fact it probably gets slower as it has to look up which of the thousands of\n> tables you want to work with.\n> \n\nThat's not entirely true. PostgreSQL can be markedly faster using \npartitioning as long as you always access it by referencing the \npartitioning key in the where clause. So, if you partition the table by \ndate, and always reference it with a date in the where clause, it will \nusually be noticeably faster. OTOH, if you access it without using a \nwhere clause that lets it pick partitions, then it will be slower than \none big table.\n\nSo, while this poster might originally think to have one table for each \nuser, resulting in thousands of tables, maybe a compromise where you \npartition on userid ranges would work out well, and keep each partition \ntable down to some 50-100 thousand rows, with smaller indexes to match.\n", "msg_date": "Mon, 04 Jun 2007 15:20:55 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
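For readers who have not set this up, a minimal sketch of the 8.1/8.2-era partitioning style being discussed here (inheritance plus CHECK constraints plus constraint exclusion); the table names, ranges, and columns are illustrative only and do not come from the thread.

-- Parent table plus two range partitions on the client id
CREATE TABLE responses (
    id_client    integer NOT NULL,
    id_responder integer NOT NULL,
    id_survey    integer NOT NULL,
    answer       text
);

-- Each child carries a CHECK constraint describing which rows it may hold.
CREATE TABLE responses_c000 (
    CHECK (id_client >= 0 AND id_client < 1000)
) INHERITS (responses);

CREATE TABLE responses_c001 (
    CHECK (id_client >= 1000 AND id_client < 2000)
) INHERITS (responses);

-- Let the planner skip children whose CHECK constraints rule them out.
SET constraint_exclusion = on;

EXPLAIN SELECT count(*)
          FROM responses
         WHERE id_client = 42;
-- Only responses_c000 (plus the empty parent) should appear in the plan.

Without the id_client predicate the planner has to visit every child, which is the "slower than one big table" case mentioned above.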
}, { "msg_contents": "\n\n> can tell you our vacuum every night is taking 2 hours and that disk IO is\n> the real killer - the CPU rarely gets higher than 20% or so.\n\n\tHow many gigabytes of stuff do you have in this database ?\n\t( du -sh on the *right* directory will suffice, don't include the logs \netc, aim for data/base/oid)\n\n", "msg_date": "Mon, 04 Jun 2007 22:21:18 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Mon, 4 Jun 2007, Scott Marlowe wrote:\n\n> Gregory Stark wrote:\n>> \"Thomas Andrews\" <[email protected]> writes:\n>>\n>> \n>> > I guess my real question is, does it ever make sense to create thousands \n>> > of\n>> > tables like this?\n>> > \n>>\n>> Sometimes. But usually it's not a good idea. \n>>\n>> What you're proposing is basically partitioning, though you may not\n>> actually\n>> need to put all the partitions together for your purposes. Partitioning's\n>> main\n>> benefit is in the management of the data. You can drop and load partitions\n>> in\n>> chunks rather than have to perform large operations on millions of\n>> records.\n>>\n>> Postgres doesn't really get any faster by breaking the tables up like\n>> that. In\n>> fact it probably gets slower as it has to look up which of the thousands\n>> of\n>> tables you want to work with.\n>> \n>\n> That's not entirely true. PostgreSQL can be markedly faster using \n> partitioning as long as you always access it by referencing the partitioning \n> key in the where clause. So, if you partition the table by date, and always \n> reference it with a date in the where clause, it will usually be noticeably \n> faster. OTOH, if you access it without using a where clause that lets it \n> pick partitions, then it will be slower than one big table.\n>\n> So, while this poster might originally think to have one table for each user, \n> resulting in thousands of tables, maybe a compromise where you partition on \n> userid ranges would work out well, and keep each partition table down to some \n> 50-100 thousand rows, with smaller indexes to match.\n>\n\nwhat if he doesn't use the postgres internal partitioning, but instead \nmakes his code access the tables named responsesNNNNN where NNNNN is the \nid of the customer?\n\nthis is what it sounded like he was asking initially.\n\nDavid Lang\n", "msg_date": "Mon, 4 Jun 2007 16:33:24 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "[email protected] wrote:\n> On Mon, 4 Jun 2007, Scott Marlowe wrote:\n>\n>> Gregory Stark wrote:\n>>> \"Thomas Andrews\" <[email protected]> writes:\n>>>\n>>>\n>>> > I guess my real question is, does it ever make sense to create \n>>> thousands > of\n>>> > tables like this?\n>>> >\n>>> Sometimes. But usually it's not a good idea.\n>>> What you're proposing is basically partitioning, though you may not\n>>> actually\n>>> need to put all the partitions together for your purposes. \n>>> Partitioning's\n>>> main\n>>> benefit is in the management of the data. You can drop and load \n>>> partitions\n>>> in\n>>> chunks rather than have to perform large operations on millions of\n>>> records.\n>>>\n>>> Postgres doesn't really get any faster by breaking the tables up like\n>>> that. In\n>>> fact it probably gets slower as it has to look up which of the \n>>> thousands\n>>> of\n>>> tables you want to work with.\n>>>\n>>\n>> That's not entirely true. 
PostgreSQL can be markedly faster using \n>> partitioning as long as you always access it by referencing the \n>> partitioning key in the where clause. So, if you partition the table \n>> by date, and always reference it with a date in the where clause, it \n>> will usually be noticeably faster. OTOH, if you access it without \n>> using a where clause that lets it pick partitions, then it will be \n>> slower than one big table.\n>>\n>> So, while this poster might originally think to have one table for \n>> each user, resulting in thousands of tables, maybe a compromise where \n>> you partition on userid ranges would work out well, and keep each \n>> partition table down to some 50-100 thousand rows, with smaller \n>> indexes to match.\n>>\n>\n> what if he doesn't use the postgres internal partitioning, but instead \n> makes his code access the tables named responsesNNNNN where NNNNN is \n> the id of the customer?\n>\n> this is what it sounded like he was asking initially.\n\nSorry, I think I initially read your response as \"Postgres doesn't \nreally get any faster by breaking the tables up\" without the \"like that\" \npart.\n\nI've found that as long as the number of tables is > 10,000 or so, \nhaving a lot of tables doesn't seem to really slow pgsql down a lot. \nI'm sure that the tipping point is dependent on your db machine. I \nwould bet that if he's referring to individual tables directly, and each \none has hundreds instead of millions of rows, the performance would be \nbetter. But the only way to be sure is to test it.\n", "msg_date": "Tue, 05 Jun 2007 11:48:37 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "So, partitioning in PSQL 8 is workable, but breaking up the table up into\nactual separate tables is not?\n\nAnother solution we have proposed is having 'active' and 'completed' tables.\nSo, rather than thousands, we'd have four tables:\n\n responders_active\n responders_completed\n responses_active\n responses_completed\n\nThat way, the number of responses_active records would not be as huge. The\nproblem, as we see it, is that the responders are entering their responses\nand it is taking too long. But if we separate out active and completed\nsurveys, then the inserts will likely cost less. We might even be able to\nreduce the indices on the _active tables because survey administrators would\nnot want to run as many complex reports on the active responses.\n\nThere would be an extra cost, when the survey is completed, of copying the\nrecords from the '_active' table to the '_completed' table and then deleting\nthem, but that operation is something a survey administrator would be\nwilling to accept as taking a while (as well as something we could put off\nto an off hour, although we have lots of international customers so it's not\nclear when our off hours are.)\n\n=thomas\n\n\nOn 6/5/07 12:48 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> [email protected] wrote:\n>> On Mon, 4 Jun 2007, Scott Marlowe wrote:\n>> \n>>> Gregory Stark wrote:\n>>>> \"Thomas Andrews\" <[email protected]> writes:\n>>>> \n>>>> \n>>>>> I guess my real question is, does it ever make sense to create\n>>>> thousands > of\n>>>>> tables like this?\n>>>>> \n>>>> Sometimes. 
But usually it's not a good idea.\n>>>> What you're proposing is basically partitioning, though you may not\n>>>> actually\n>>>> need to put all the partitions together for your purposes.\n>>>> Partitioning's\n>>>> main\n>>>> benefit is in the management of the data. You can drop and load\n>>>> partitions\n>>>> in\n>>>> chunks rather than have to perform large operations on millions of\n>>>> records.\n>>>> \n>>>> Postgres doesn't really get any faster by breaking the tables up like\n>>>> that. In\n>>>> fact it probably gets slower as it has to look up which of the\n>>>> thousands\n>>>> of\n>>>> tables you want to work with.\n>>>> \n>>> \n>>> That's not entirely true. PostgreSQL can be markedly faster using\n>>> partitioning as long as you always access it by referencing the\n>>> partitioning key in the where clause. So, if you partition the table\n>>> by date, and always reference it with a date in the where clause, it\n>>> will usually be noticeably faster. OTOH, if you access it without\n>>> using a where clause that lets it pick partitions, then it will be\n>>> slower than one big table.\n>>> \n>>> So, while this poster might originally think to have one table for\n>>> each user, resulting in thousands of tables, maybe a compromise where\n>>> you partition on userid ranges would work out well, and keep each\n>>> partition table down to some 50-100 thousand rows, with smaller\n>>> indexes to match.\n>>> \n>> \n>> what if he doesn't use the postgres internal partitioning, but instead\n>> makes his code access the tables named responsesNNNNN where NNNNN is\n>> the id of the customer?\n>> \n>> this is what it sounded like he was asking initially.\n> \n> Sorry, I think I initially read your response as \"Postgres doesn't\n> really get any faster by breaking the tables up\" without the \"like that\"\n> part.\n> \n> I've found that as long as the number of tables is > 10,000 or so,\n> having a lot of tables doesn't seem to really slow pgsql down a lot.\n> I'm sure that the tipping point is dependent on your db machine. I\n> would bet that if he's referring to individual tables directly, and each\n> one has hundreds instead of millions of rows, the performance would be\n> better. But the only way to be sure is to test it.\n\n", "msg_date": "Tue, 05 Jun 2007 13:04:31 -0400", "msg_from": "Thomas Andrews <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Thomas Andrews wrote:\n>\n>\n> On 6/5/07 12:48 PM, \"Scott Marlowe\" <[email protected]> wrote:\n>\n> \n>> [email protected] wrote:\n>> \n>>> On Mon, 4 Jun 2007, Scott Marlowe wrote:\n>>>\n>>> \n>>>> Gregory Stark wrote:\n>>>> \n>>>>> \"Thomas Andrews\" <[email protected]> writes:\n>>>>>\n>>>>>\n>>>>> \n>>>>>> I guess my real question is, does it ever make sense to create\n>>>>>> \n>>>>> thousands > of\n>>>>> \n>>>>>> tables like this?\n>>>>>>\n>>>>>> \n>>>>> Sometimes. But usually it's not a good idea.\n>>>>> What you're proposing is basically partitioning, though you may not\n>>>>> actually\n>>>>> need to put all the partitions together for your purposes.\n>>>>> Partitioning's\n>>>>> main\n>>>>> benefit is in the management of the data. You can drop and load\n>>>>> partitions\n>>>>> in\n>>>>> chunks rather than have to perform large operations on millions of\n>>>>> records.\n>>>>>\n>>>>> Postgres doesn't really get any faster by breaking the tables up like\n>>>>> that. 
In\n>>>>> fact it probably gets slower as it has to look up which of the\n>>>>> thousands\n>>>>> of\n>>>>> tables you want to work with.\n>>>>>\n>>>>> \n>>>> That's not entirely true. PostgreSQL can be markedly faster using\n>>>> partitioning as long as you always access it by referencing the\n>>>> partitioning key in the where clause. So, if you partition the table\n>>>> by date, and always reference it with a date in the where clause, it\n>>>> will usually be noticeably faster. OTOH, if you access it without\n>>>> using a where clause that lets it pick partitions, then it will be\n>>>> slower than one big table.\n>>>>\n>>>> So, while this poster might originally think to have one table for\n>>>> each user, resulting in thousands of tables, maybe a compromise where\n>>>> you partition on userid ranges would work out well, and keep each\n>>>> partition table down to some 50-100 thousand rows, with smaller\n>>>> indexes to match.\n>>>>\n>>>> \n>>> what if he doesn't use the postgres internal partitioning, but instead\n>>> makes his code access the tables named responsesNNNNN where NNNNN is\n>>> the id of the customer?\n>>>\n>>> this is what it sounded like he was asking initially.\n>>> \n>> Sorry, I think I initially read your response as \"Postgres doesn't\n>> really get any faster by breaking the tables up\" without the \"like that\"\n>> part.\n>>\n>> I've found that as long as the number of tables is > 10,000 or so,\n>> \nThat should have been as long as the number of tables is < 10,000 or so...\n\n>> having a lot of tables doesn't seem to really slow pgsql down a lot.\n>> I'm sure that the tipping point is dependent on your db machine. I\n>> would bet that if he's referring to individual tables directly, and each\n>> one has hundreds instead of millions of rows, the performance would be\n>> better. But the only way to be sure is to test it.\n>> \n>\n> \nPlease stop top posting. This is my last reply until you stop top posting.\n\n> So, partitioning in PSQL 8 is workable, but breaking up the table up into\n> actual separate tables is not?\n> \nUmmm, that's not what I said. They're similar in execution. However, \npartitioning might let you put 100 customers into a given table, if, \nsay, you partitioned on customer ID or something that would allow you to \ngroup a few together.\n> Another solution we have proposed is having 'active' and 'completed' tables.\n> So, rather than thousands, we'd have four tables:\n>\n> responders_active\n> responders_completed\n> responses_active\n> responses_completed\n> \nThat's not a bad idea. Just keep up on your vacuuming.\n", "msg_date": "Tue, 05 Jun 2007 12:09:58 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "\n\"Scott Marlowe\" <[email protected]> writes:\n\n> Sorry, I think I initially read your response as \"Postgres doesn't really get\n> any faster by breaking the tables up\" without the \"like that\" part.\n\nWell breaking up the tables like that or partitioning, either way should be\nabout equivalent really. Breaking up the tables and doing it in the\napplication should perform even better but it does make the schema less\nflexible and harder to do non-partition based queries and so on.\n\nI guess I should explain what I originally meant: A lot of people come from a\nflat-file world and assume that things get slower when you deal with large\ntables. 
In fact due to the magic of log(n) accessing records from a large\nindex is faster than first looking up the table and index info in a small\nindex and then doing a second lookup in up in an index for a table half the\nsize.\n\nWhere the win in partitioning comes in is in being able to disappear some of\nthe data entirely. By making part of the index key implicit in the choice of\npartition you get away with a key that's half as large. And in some cases you\ncan get away with using a different key entirely which wouldn't otherwise have\nbeen feasible to index. In some cases you can even do sequential scans whereas\nin an unpartitioned table you would have to use an index (or scan the entire\ntable).\n\nBut the real reason people partition data is really for the management ease.\nBeing able to drop, and load entire partitions in O(1) is makes it feasible to\nmanage data on a scale that would simply be impossible without partitioned\ntables.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 05 Jun 2007 19:55:44 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Gregory Stark wrote:\n> \"Scott Marlowe\" <[email protected]> writes:\n>\n> \n>> Sorry, I think I initially read your response as \"Postgres doesn't really get\n>> any faster by breaking the tables up\" without the \"like that\" part.\n>> \n>\n> Well breaking up the tables like that or partitioning, either way should be\n> about equivalent really. Breaking up the tables and doing it in the\n> application should perform even better but it does make the schema less\n> flexible and harder to do non-partition based queries and so on.\n> \nTrue, but we can break it up by something other than the company name on \nthe survey, in this instance, and might find it far easier to manage by, \nsay, date range, company ID range, etc...\nPlus with a few hand rolled bash or perl scripts we can maintain our \ndatabase and keep all the logic of partitioning out of our app. Which \nwould allow developers not wholly conversant in our partitioning scheme \nto participate in development without the fear of them putting data in \nthe wrong place.\n> Where the win in partitioning comes in is in being able to disappear some of\n> the data entirely. By making part of the index key implicit in the choice of\n> partition you get away with a key that's half as large. And in some cases you\n> can get away with using a different key entirely which wouldn't otherwise have\n> been feasible to index. In some cases you can even do sequential scans whereas\n> in an unpartitioned table you would have to use an index (or scan the entire\n> table).\n> \nYeah, I found that out recently while I benchmarking a 12,000,000 row \ngeometric data set. Breaking it into 400 or so partitions resulted in \nno need for indexes and response times of 0.2 or so seconds, where \nbefore that I'd been in the 1.5 to 3 second range.\n", "msg_date": "Tue, 05 Jun 2007 14:34:08 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
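To make the "drop and load entire partitions in O(1)" point above concrete, here is a small sketch assuming a date-partitioned child naming scheme such as responses_2006q1; none of these object names appear in the thread.

-- Retiring one partition is a cheap catalog operation:
DROP TABLE responses_2006q1;

-- The equivalent cleanup against one big table touches every affected
-- row and leaves dead tuples behind for VACUUM:
DELETE FROM responses
 WHERE response_date >= '2006-01-01'
   AND response_date <  '2006-04-01';

-- Loading works the same way in reverse: build and index the new child
-- offline, then attach it (ALTER TABLE ... INHERIT exists as of 8.2):
ALTER TABLE responses_2007q2 INHERIT responses;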
}, { "msg_contents": "On Tue, 5 Jun 2007, Gregory Stark wrote:\n\n> \"Scott Marlowe\" <[email protected]> writes:\n>\n>> Sorry, I think I initially read your response as \"Postgres doesn't really get\n>> any faster by breaking the tables up\" without the \"like that\" part.\n>\n> Well breaking up the tables like that or partitioning, either way should be\n> about equivalent really. Breaking up the tables and doing it in the\n> application should perform even better but it does make the schema less\n> flexible and harder to do non-partition based queries and so on.\n\nbut he said in the initial message that they don't do cross-customer \nreports anyway, so there really isn't any non-partition based querying \ngoing on anyway.\n\n> I guess I should explain what I originally meant: A lot of people come from a\n> flat-file world and assume that things get slower when you deal with large\n> tables. In fact due to the magic of log(n) accessing records from a large\n> index is faster than first looking up the table and index info in a small\n> index and then doing a second lookup in up in an index for a table half the\n> size.\n\nhowever, if your query plan every does a sequential scan of a table then \nyou are nog doing a log(n) lookup are you?\n\n> Where the win in partitioning comes in is in being able to disappear some of\n> the data entirely. By making part of the index key implicit in the choice of\n> partition you get away with a key that's half as large. And in some cases you\n> can get away with using a different key entirely which wouldn't otherwise have\n> been feasible to index. In some cases you can even do sequential scans whereas\n> in an unpartitioned table you would have to use an index (or scan the entire\n> table).\n>\n> But the real reason people partition data is really for the management ease.\n> Being able to drop, and load entire partitions in O(1) is makes it feasible to\n> manage data on a scale that would simply be impossible without partitioned\n> tables.\n\nremember that the origional question wasn't about partitioned tables, it \nwas about the performance problem he was having with one large table (slow \ninsert speed) and asking if postgres would collapse if he changed his \nschema to use a seperate table per customer.\n\nI see many cases where people advocate collapsing databases/tables \ntogeather by adding a column that indicates which customer the line is \nfor.\n\nhowever I really don't understand why it is more efficiant to have a 5B \nline table that you do a report/query against 0.1% of then it is to have \n1000 different tables of 5M lines each and do a report/query against 100% \nof. it would seem that the fact that you don't have to skip over 99.9% of \nthe data to find things that _may_ be relavent would have a noticable cost \nin and of itself.\n\nDavid Lang\n", "msg_date": "Tue, 5 Jun 2007 13:33:23 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
}, { "msg_contents": "[email protected] writes:\n> however I really don't understand why it is more efficiant to have a 5B \n> line table that you do a report/query against 0.1% of then it is to have \n> 1000 different tables of 5M lines each and do a report/query against 100% \n> of.\n\nEssentially what you are doing when you do that is taking the top few\nlevels of the index out of the database and putting it into the\nfilesystem; plus creating duplicative indexing information in the\ndatabase's system catalogs.\n\nThe degree to which this is a win is *highly* debatable, and certainly\ndepends on a whole lot of assumptions about filesystem performance.\nYou also need to assume that constraint-exclusion in the planner is\npretty doggone cheap relative to the table searches, which means it\nalmost certainly will lose badly if you carry the subdivision out to\nthe extent that the individual tables become small. (This last could\nbe improved in some cases if we had a more explicit representation of\npartitioning, but it'll never be as cheap as one more level of index\nsearch.)\n\nI think the main argument for partitioning is when you are interested in\nbeing able to drop whole partitions cheaply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Jun 2007 17:59:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table? " }, { "msg_contents": "On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:\n> I think the main argument for partitioning is when you are interested in\n> being able to drop whole partitions cheaply.\n\nWasn't there also talk about adding the ability to mark individual partitions\nas read-only, thus bypassing MVCC and allowing queries to be satisfied using\nindexes only?\n\nNot that I think I've seen it on the TODO... :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 6 Jun 2007 00:06:09 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Tue, 5 Jun 2007, Tom Lane wrote:\n\n> [email protected] writes:\n>> however I really don't understand why it is more efficiant to have a 5B\n>> line table that you do a report/query against 0.1% of then it is to have\n>> 1000 different tables of 5M lines each and do a report/query against 100%\n>> of.\n>\n> Essentially what you are doing when you do that is taking the top few\n> levels of the index out of the database and putting it into the\n> filesystem; plus creating duplicative indexing information in the\n> database's system catalogs.\n>\n> The degree to which this is a win is *highly* debatable, and certainly\n> depends on a whole lot of assumptions about filesystem performance.\n> You also need to assume that constraint-exclusion in the planner is\n> pretty doggone cheap relative to the table searches, which means it\n> almost certainly will lose badly if you carry the subdivision out to\n> the extent that the individual tables become small. (This last could\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nwhat is considered 'small'? a few thousand records, a few million records?\n\nwhat multiplication factor would there need to be on the partitioning to \nmake it worth while? 
100 tables, 1000 tables, 10000 tables?\n\nthe company that I'm at started out with a seperate database per customer \n(not useing postgres), there are basicly zero cross-customer queries, with \na large volume of updates and lookups.\n\noverall things have now grown to millions of updates/day (some multiple of \nthis in lookups), and ~2000 customers, with tens of millions of rows \nbetween them.\n\nhaving each one as a seperate database has really helped us over the years \nas it's made it easy to scale (run 500 databases on each server instead of \n1000, performance just doubled)\n\nvarious people (not database experts) are pushing to install Oracle \ncluster so that they can move all of these to one table with a customerID \ncolumn.\n\nthe database folks won't comment much on this either way, but they don't \nseem enthusiastic to combine all the data togeather.\n\nI've been on the side of things that said that seperate databases is \nbetter becouse it improves data locality to only have to look at the data \nfor one customer at a time rather then having to pick out that customer's \ndata out from the mass of other, unrelated data.\n\n> be improved in some cases if we had a more explicit representation of\n> partitioning, but it'll never be as cheap as one more level of index\n> search.)\n\nsay you have a billing table of\ncustomerID, date, description, amount, tax, extended, paid\n\nand you need to do things like\nreport on invoices that haven't been paied\nsummarize the amount billed each month\nsummarize the tax for each month\n\nbut you need to do this seperately for each customerID (not as a batch job \nthat reports on all customerID's at once, think a website where the \ncustomer can request such reports at any time with a large variation in \ncriteria)\n\nwould you be able to just have one index on customerID and then another on \ndate? or would the second one need to be on customerID||date?\n\nand would this process of going throught he index and seeking to the data \nit points to really be faster then a sequential scan of just the data \nrelated to that customerID?\n\n> I think the main argument for partitioning is when you are interested in\n> being able to drop whole partitions cheaply.\n\nI fully understand this if you are doing queries across all the \npartitions, but if your query is confined to a single partition, \nespecially in the case where you know ahead of time in the application \nwhich 'partition' you care about it would seem that searching through \nsignificantly less data should be a win.\n\nDavid Lang\n", "msg_date": "Tue, 5 Jun 2007 15:31:55 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table? " }, { "msg_contents": "On Wed, 6 Jun 2007, Steinar H. Gunderson wrote:\n\n> On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:\n>> I think the main argument for partitioning is when you are interested in\n>> being able to drop whole partitions cheaply.\n>\n> Wasn't there also talk about adding the ability to mark individual partitions\n> as read-only, thus bypassing MVCC and allowing queries to be satisfied using\n> indexes only?\n>\n> Not that I think I've seen it on the TODO... :-)\n\nnow that's a very interesting idea, especially when combined with \ntime-based data where the old times will never change.\n\nDavid Lang\n", "msg_date": "Tue, 5 Jun 2007 15:58:01 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
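The indexing question in the billing example above (one index on customerID and another on date, versus an index on customerID plus date) is easiest to answer with a sketch; column types here are guesses.

CREATE TABLE billing (
    customerid  integer NOT NULL,
    bill_date   date    NOT NULL,
    description text,
    amount      numeric(12,2),
    tax         numeric(12,2),
    extended    numeric(12,2),
    paid        boolean NOT NULL DEFAULT false
);

-- One composite index serves the per-customer, per-period reports; an
-- index on (bill_date) alone would not help queries that always filter
-- by customerid first.
CREATE INDEX billing_customer_date ON billing (customerid, bill_date);

-- Invoices that haven't been paid, for one customer
SELECT * FROM billing
 WHERE customerid = 42 AND NOT paid;

-- Amount billed and tax per month, for one customer
SELECT date_trunc('month', bill_date) AS month,
       sum(amount) AS billed,
       sum(tax)    AS tax
  FROM billing
 WHERE customerid = 42
 GROUP BY date_trunc('month', bill_date)
 ORDER BY month;

In other words, it is one two-column index on (customerid, bill_date) rather than a concatenated customerID||date key; whether an index scan on that big table beats a sequential scan of a small per-customer table is exactly the trade-off debated in the surrounding messages.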
}, { "msg_contents": "[email protected] wrote:\n> On Wed, 6 Jun 2007, Steinar H. Gunderson wrote:\n> \n>> On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:\n>>> I think the main argument for partitioning is when you are interested in\n>>> being able to drop whole partitions cheaply.\n>>\n>> Wasn't there also talk about adding the ability to mark individual \n>> partitions\n>> as read-only, thus bypassing MVCC and allowing queries to be satisfied \n>> using\n>> indexes only?\n>>\n>> Not that I think I've seen it on the TODO... :-)\n> \n> now that's a very interesting idea, especially when combined with \n> time-based data where the old times will never change.\n\nThat's been discussed, but it's controversial. IMHO a better way to \nachieve that is to design the dead-space-map so that it can be used to \ncheck which parts of a table are visible to everyone, and skip \nvisibility checks. That doesn't require any user action, and allows updates.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 06 Jun 2007 08:29:35 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "[email protected] wrote:\n> On Tue, 5 Jun 2007, Tom Lane wrote:\n>\n>> [email protected] writes:\n>>> however I really don't understand why it is more efficiant to have a 5B\n>>> line table that you do a report/query against 0.1% of then it is to \n>>> have\n>>> 1000 different tables of 5M lines each and do a report/query against \n>>> 100%\n>>> of.\n>>\n>> Essentially what you are doing when you do that is taking the top few\n>> levels of the index out of the database and putting it into the\n>> filesystem; plus creating duplicative indexing information in the\n>> database's system catalogs.\n>>\n>> The degree to which this is a win is *highly* debatable, and certainly\n>> depends on a whole lot of assumptions about filesystem performance.\n>> You also need to assume that constraint-exclusion in the planner is\n>> pretty doggone cheap relative to the table searches, which means it\n>> almost certainly will lose badly if you carry the subdivision out to\n>> the extent that the individual tables become small. (This last could\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> what is considered 'small'? a few thousand records, a few million \n> records?\n\nI would say small is when the individual tables are in the 10 to 20 \nMegabyte range. How many records that is depends on record width, of \ncourse. Basically, once the tables get small enough that you don't \nreally need indexes much, since you tend to grab 25% or more of each one \nthat you're going to hit in a query.\n\n> what multiplication factor would there need to be on the partitioning \n> to make it worth while? 100 tables, 1000 tables, 10000 tables?\nReally depends on the size of the master table I think. If the master \ntable is about 500 Megs in size, and you partition it down to about 1 \nmeg per child table, you're probably ok. Walking through 500 entries \nfor constraint exclusion seems pretty speedy from the tests I've run on \na 12M row table that was about 250 Megs, split into 200 to 400 or so \nequisized child tables. 
The time to retrieve 85,000 rows that were all \nneighbors went from 2 to 6 seconds, to about 0.2 seconds, and we got rid \nof indexes entirely since they weren't really needed anymore.\n\n> the company that I'm at started out with a seperate database per \n> customer (not useing postgres), there are basicly zero cross-customer \n> queries, with a large volume of updates and lookups.\n>\n> overall things have now grown to millions of updates/day (some \n> multiple of this in lookups), and ~2000 customers, with tens of \n> millions of rows between them.\n>\n> having each one as a seperate database has really helped us over the \n> years as it's made it easy to scale (run 500 databases on each server \n> instead of 1000, performance just doubled)\nI think that for what you're doing, partitioning at the database level \nis probably a pretty good compromise solution. Like you say, it's easy \nto put busy databases on a new server to balance out the load. Hardware \nis cheap.\n\n> various people (not database experts) are pushing to install Oracle \n> cluster so that they can move all of these to one table with a \n> customerID column.\nHave these people identified a particular problem they're trying to \nsolve, or is this a religious issue for them? From your description it \nsounds like a matter of dogma, not problem solving.\n> the database folks won't comment much on this either way, but they \n> don't seem enthusiastic to combine all the data togeather.\nI think they can see the fecal matter heading towards the rotational \ncooling device on this one. I can't imagine this being a win from the \nperspective of saving the company money.\n\n", "msg_date": "Wed, 06 Jun 2007 10:07:19 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Tom Lane wrote:\n> The degree to which this is a win is *highly* debatable, and certainly\n> depends on a whole lot of assumptions about filesystem performance.\n> You also need to assume that constraint-exclusion in the planner is\n> pretty doggone cheap relative to the table searches, which means it\n> almost certainly will lose badly if you carry the subdivision out to\n> the extent that the individual tables become small. (This last could\n> be improved in some cases if we had a more explicit representation of\n> partitioning, but it'll never be as cheap as one more level of index\n> search.)\nI did some testing a while back on some of this, and with 400 or so \npartitions, the select time was still very fast.\n\nWe were testing grabbing 50-80k rows from 12M at a time, all adjacent to \neach other. With the one big table and one big two way index method, we \nwere getting linearly increasing select times as the dataset grew larger \nand larger. The indexes were much larger than available memory and \nshared buffers. The retrieval time for 50-80k rows was on the order of \n2 to 6 seconds, while the retrieval time for the same number of rows \nwith 400 partitions was about 0.2 to 0.5 seconds.\n\nI haven't tested with more partitions than that, but might if I get a \nchance. What was really slow was the inserts since I was using rules at \nthe time. I'd like to try re-writing it to use triggers, since I would \nthen have one trigger on the parent table instead of 400 rules. Or I \ncould imbed the rules into the app that was creating / inserting the \ndata. 
The insert performance dropped off VERY fast as I went over 100 \nrules, and that was what primarily stopped me from testing larger \nnumbers of partitions.\n\nThe select performance stayed very fast with more partitions, so I'm \nguessing that the constraint exclusion is pretty well optimized.\n\nI'll play with it some more when I get a chance. For certain operations \nlike the one we were testing, partitioning seems to pay off big time.\n", "msg_date": "Wed, 06 Jun 2007 10:17:29 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "[email protected] wrote:\n> various people (not database experts) are pushing to install Oracle \n> cluster so that they can move all of these to one table with a \n> customerID column.\n\nThey're blowing smoke if they think Oracle can do this. One of my applications had this exact same problem -- table-per-customer versus big-table-for-everyone. Oracle fell over dead, even with the best indexing possible, tuned by the experts, and using partitions keyed to the customerID.\n\nWe ended up breaking it up into table-per-customer because Oracle fell over dead when we had to do a big update on a customer's entire dataset. All other operations were slowed by the additional index on the customer-ID, especially complex joins. With a table-for-everyone, you're forced to create tricky partitioning or clustering, clever indexes, and even with that, big updates are problematic. And once you do this, then you become heavily tied to one RDBMS and your applications are no longer portable, because clustering, indexing, partitioning and other DB tuning tricks are very specific to each RDBMS.\n\nWhen we moved to Postgres, we never revisited this issue, because both Oracle and Postgres are able to handle thousands of tables well. As I wrote in a previous message on a different topic, often the design of your application is more important than the performance. In our case, the table-per-customer makes the applications simpler, and security is MUCH easier.\n\nOracle is simply not better than Postgres in this regard. As far as I know, there is only one specific situation (discussed frequently here) where Oracle is faster: the count(), min() and max() functions, and I know significant progress has been made since I started using Postgres. I have not found any other query where Oracle is significantly better, and I've found several where Postgres is the clear winner.\n\nIt's telling that Oracle's license contract prohibits you from publishing comparisons and benchmarks. You have to wonder why.\n\nCraig\n", "msg_date": "Wed, 06 Jun 2007 09:23:53 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Craig James wrote:\n>\n> Oracle is simply not better than Postgres in this regard. As far as I \n> know, there is only one specific situation (discussed frequently here) \n> where Oracle is faster: the count(), min() and max() functions, and I \n> know significant progress has been made since I started using \n> Postgres. I have not found any other query where Oracle is \n> significantly better, and I've found several where Postgres is the \n> clear winner. 
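On the count()/min()/max() point quoted above, two common workarounds are worth noting as a sketch (they are general techniques, not something proposed in this thread): min and max on an indexed column can be answered from the index endpoint in 8.1 and later (or with the older LIMIT form), and an approximate row count can be read from the planner statistics instead of scanning the table.

-- max(id_response) via the index endpoint; the pre-8.1 workaround:
SELECT id_response FROM responses ORDER BY id_response DESC LIMIT 1;

-- A cheap, approximate substitute for a full count(*) scan:
SELECT reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE relname = 'responses';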
\nIn my testing between a commercial database that cannot be named and \npostgresql, I found max() / min() to be basically the same, even with \nwhere clauses and joins happening.\n\ncount(*), OTOH, is a still a clear winner for the big commercial \ndatabase. With smaller sets (1 Million or so) both dbs are in the same \nballpark.\n\nWith 30+million rows, count(*) took 2 minutes on pgsql and 4 seconds on \nthe big database.\n\nOTOH, there are some things, like importing data, which are MUCH faster \nin pgsql than in the big database.\n", "msg_date": "Wed, 06 Jun 2007 11:49:36 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "Scott Marlowe wrote:\n> OTOH, there are some things, like importing data, which are MUCH faster \n> in pgsql than in the big database.\n\nAn excellent point, I forgot about this. The COPY command is the best thing since the invention of a shirt pocket. We have a database-per-customer design, and one of the mosterous advantages of Postgres is that we can easily do backups. A pg_dump, then scp to a backup server, and in just a minute or two we have a full backup. For recovery, pg_restore is equally fast and amazing. Last time I checked, Oracle didn't have anything close to this.\n\nCraig\n\n\n", "msg_date": "Wed, 06 Jun 2007 10:32:13 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On 6/6/07, Craig James <[email protected]> wrote:\n> They're blowing smoke if they think Oracle can do this.\n\nOracle could handle this fine.\n\n> Oracle fell over dead, even with the best indexing possible,\n> tuned by the experts, and using partitions keyed to the\n> customerID.\n\nI don't think so, whoever tuned this likely didn't know what they were doing.\n\n> It's telling that Oracle's license contract prohibits you from\n> publishing comparisons and benchmarks. You have to wonder why.\n\nThey did this for the same reason as everyone else. They don't want\nnon-experts tuning the database incorrectly, writing a benchmark paper\nabout it, and making the software look bad.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 6 Jun 2007 14:01:59 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On 6/6/07, Craig James <[email protected]> wrote:\n> Last time I checked, Oracle didn't have anything close to this.\n\nWhen did you check, 15 years ago? Oracle has direct-path\nimport/export and data pump; both of which make generic COPY look like\na turtle. The new PostgreSQL bulk-loader takes similar concepts from\nOracle and is fairly faster than COPY.\n\nDon't get me wrong, I'm pro-PostgreSQL... but spouting personal\nobservations on other databases as facts just boasts an\nPostgreSQL-centric egotistical view of the world. If you don't tune\nOracle, it will suck. 
If you don't understand Oracle architecture\nwhen you tune an application, it will suck; just like PostgreSQL.\nPeople who don't have extensive experience in the other databases just\nhear what you say and regurgitate it as fact; which it is not.\n\nLook at how many people in these lists still go on and on about MySQL\nflaws based on their experience with MySQL 3.23. Times change and it\ndoesn't do anyone any good to be ignorant of other databases. If\nyou're going to speak about another database in a comparison, please\nstay current or specify the database you're comparing against.\n\nThis is nothing against you, but it always starts an avalanche of,\n\"look how perfect we are compared to everyone else.\"\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 6 Jun 2007 14:15:48 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Wed, Jun 06, 2007 at 12:06:09AM +0200, Steinar H. Gunderson wrote:\n\n> Wasn't there also talk about adding the ability to mark individual\n> partitions as read-only, thus bypassing MVCC and allowing queries\n> to be satisfied using indexes only?\n\nI have a (different) problem that read-only data segments (maybe\npartitions, maybe something else) would help, so I know for sure that\nsomeone is working on a problem like this, but I don't think it's the\nsort of thing that's going to come any time soon.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Wed, 6 Jun 2007 15:23:51 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Tue, Jun 05, 2007 at 03:31:55PM -0700, [email protected] wrote:\n> various people (not database experts) are pushing to install Oracle \n> cluster so that they can move all of these to one table with a customerID \n> column.\n\nWell, you will always have to deal with the sort of people who will\nbase their technical prescriptions on the shiny ads they read in\nSuperGlobalNetworkedExecutiveGoFast, or whatever rag they're reading\nthese days. I usually encourage such people actually to perform the\nanalysis of the license, salary, contingency, and migrations costs\n(and do a similar analysis myself, actually, so when they have\noverlooked the 30 things that individually cost $1million a piece, I\ncan point them out). More than one jaw has had to be picked up off\nthe floor when presented with the bill for RAC. Frequently, people\ndiscover that it is a good way to turn your tidy money-making\nenterprise into a giant money hole that produces a sucking sound on\nthe other end of which is Oracle Corporation. \n\nAll of that aside, I have pretty severe doubts that RAC would be a\nwin for you. A big honkin' single database in Postgres ought to be\nable to do this too, if you throw enough hardware money at it. But\nit seems a waste to re-implement something that's already apparently\nworking for you in favour of something more expensive that you don't\nseem to need.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. 
What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Wed, 6 Jun 2007 15:32:11 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On 6/6/07, Andrew Sullivan <[email protected]> wrote:\n> Well, you will always have to deal with the sort of people who will\n> base their technical prescriptions on the shiny ads they read in\n> SuperGlobalNetworkedExecutiveGoFast, or whatever rag they're reading\n> these days.\n\nAlways.\n\n> I usually encourage such people actually to perform the\n> analysis of the license, salary, contingency, and migrations costs\n\nYes, this is the best way.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 6 Jun 2007 15:40:46 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Wed, Jun 06, 2007 at 02:01:59PM -0400, Jonah H. Harris wrote:\n> They did this for the same reason as everyone else. They don't want\n> non-experts tuning the database incorrectly, writing a benchmark paper\n> about it, and making the software look bad.\n\nI agree that Oracle is a fine system, and I have my doubts about the\nlikelihood Oracle will fall over under fairly heavy loads. But I\nthink the above is giving Oracle Corp a little too much credit. \n\nCorporations exist to make money, and the reason they prohibit doing\nanything with their software and then publishing it without their\napproval is because they want to control all the public perception of\ntheir software, whether deserved or not. Every user of any large\nsoftware system (Oracle or otherwise) has their favourite horror\nstory about the grotty corners of that software;\ncommercially-licensed people just aren't allowed to prove it in\npublic. It's not only the clueless Oracle is protecting themselves\nagainst; it's also the smart, accurate, but expensive corner-case\ntesters. I get to complain that PostgreSQL is mostly fast but has\nterrible outlier performance problems. I can think of another system\nthat I've used that certainly had a similar issue, but I couldn't\nshow you the data to prove it. Everyone who used it knew about it,\nthough.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n", "msg_date": "Wed, 6 Jun 2007 15:40:58 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "control of benchmarks (was: Thousands of tables)" }, { "msg_contents": "On 6/6/07, Andrew Sullivan <[email protected]> wrote:\n> But I think the above is giving Oracle Corp a little too\n> much credit.\n\nPerhaps. However, Oracle has a thousand or so knobs which can control\nalmost every aspect of every subsystem. If you know how they interact\nwith each other and how to use them properly, they can make a huge\ndifference in performance. Most people do not know all the knobs or\nunderstand what difference each can make given the theory and\narchitecture of the system, which results in poor general\nconfigurations. 
Arguably, there is a cost associated with having\nsomeone staffed and/or consulted that has the depth of knowledge\nrequired to tune it in such a manner which goes back to a basic\ncost/benefit analysis.\n\nOracle, while seeming like a one-size-fits-all system, has the same\nbasic issue as PostgreSQL and everyone else; to get optimum\nperformance, it has to be tuned specifically for the\napplication/workload at hand.\n\n> Corporations exist to make money, and the reason they prohibit doing\n> anything with their software and then publishing it without their\n> approval is because they want to control all the public perception of\n> their software, whether deserved or not.\n\nOf course. Which is why audited benchmarks like SPEC and TPC are\naround. While they may not represent one's particular workload, they\nare the only way to fairly demonstrate comparable performance.\n\n> Every user of any large software system (Oracle or otherwise)\n> has their favourite horror story about the grotty corners of\n> that software;\n\nOf course, but they also never say why it was caused. With Oracle,\nalmost all bad-performance cases I've seen are related to improper\ntuning and/or hardware; even by experienced DBAs.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 6 Jun 2007 15:57:08 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: control of benchmarks (was: Thousands of tables)" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 6/6/07, Craig James <[email protected]> wrote:\n>> They're blowing smoke if they think Oracle can do this.\n> \n> Oracle could handle this fine.\n>\n>> Oracle fell over dead, even with the best indexing possible,\n>> tuned by the experts, and using partitions keyed to the\n>> customerID.\n> \n> I don't think so, whoever tuned this likely didn't know what they were \n> doing.\n\nWrong on both counts.\n\nYou didn't read my message. I said that *BOTH* Oracle and Postgres performed well with table-per-customer. I wasn't Oracle bashing. In fact, I was doing the opposite: Someone's coworker claimed ORACLE was the miracle cure for all problems, and I was simply pointing out that there are no miracle cures. (I prefer Postgres for many reasons, but Oracle is a fine RDBMS that I have used extensively.)\n\nThe technical question is simple: Table-per-customer or big-table-for-everyone. The answer is, \"it depends.\" It depends on your application, your read-versus-write ratio, the table size, the design of your application software, and a dozen other factors. There is no simple answer, but there are important technical insights which, I'm happy to report, various people contributed to this discussion. Perhaps you have some technical insight too, because it really is an important question.\n\nThe reason I assert (and stand by this) that \"They're blowing smoke\" when they claim Oracle has the magic cure, is because Oracle and Postgres are both relational databases, they write their data to disks, and they both have indexes with O(log(N)) retrieval/update times. Oracle doesn't have a magical workaround to these facts, nor does Postgres.\n\nCraig\n", "msg_date": "Wed, 06 Jun 2007 13:13:48 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" 
}, { "msg_contents": "On 6/6/07, Craig James <[email protected]> wrote:\n> You didn't read my message. I said that *BOTH* Oracle\n> and Postgres performed well with table-per-customer.\n\nYes, I did. My belief is that Oracle can handle all customers in a\nsingle table.\n\n> The technical question is simple: Table-per-customer or\n> big-table-for-everyone. The answer is, \"it depends.\"\n\nI agree, it does depend on the data, workload, etc. No\none-size-fits-all answer there.\n\n> The reason I assert (and stand by this) that \"They're\n> blowing smoke\" when they claim Oracle has the magic\n> cure, is because Oracle and Postgres are both relational\n> databases, they write their data to disks, and they both\n> have indexes with O(log(N)) retrieval/update times. Oracle\n> doesn't have a magical workaround to these facts,\n> nor does Postgres.\n\nAgreed that they are similar on the basics, but they do use\nsignificantly different algorithms and optimizations. Likewise, there\nis more tuning that can be done with Oracle given the amount of time\nand money one has to spend on it. Again, cost/benefit analysis on\nthis type of an issue... but you're right, there is no \"magic cure\".\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 6 Jun 2007 16:20:12 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" }, { "msg_contents": "On Wed, 6 Jun 2007, Scott Marlowe wrote:\n\n>> > pretty doggone cheap relative to the table searches, which means it\n>> > almost certainly will lose badly if you carry the subdivision out to\n>> > the extent that the individual tables become small. (This last could\n>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>> what is considered 'small'? a few thousand records, a few million records?\n>\n> I would say small is when the individual tables are in the 10 to 20 Megabyte \n> range. How many records that is depends on record width, of course. \n> Basically, once the tables get small enough that you don't really need \n> indexes much, since you tend to grab 25% or more of each one that you're \n> going to hit in a query.\n\nthanks, that makes a lot of sense\n\n>> what multiplication factor would there need to be on the partitioning to\n>> make it worth while? 100 tables, 1000 tables, 10000 tables?\n> Really depends on the size of the master table I think. If the master table \n> is about 500 Megs in size, and you partition it down to about 1 meg per child \n> table, you're probably ok. Walking through 500 entries for constraint \n> exclusion seems pretty speedy from the tests I've run on a 12M row table that \n> was about 250 Megs, split into 200 to 400 or so equisized child tables. The \n> time to retrieve 85,000 rows that were all neighbors went from 2 to 6 \n> seconds, to about 0.2 seconds, and we got rid of indexes entirely since they \n> weren't really needed anymore.\n\nremember, I'm talking about a case wher eyou don't have to go through \ncontraint checking. 
you know to start with what customerID you are dealing \nwith so you just check the tables for that customer\n\n>> the company that I'm at started out with a seperate database per customer\n>> (not useing postgres), there are basicly zero cross-customer queries, with\n>> a large volume of updates and lookups.\n>>\n>> overall things have now grown to millions of updates/day (some multiple of\n>> this in lookups), and ~2000 customers, with tens of millions of rows\n>> between them.\n>>\n>> having each one as a seperate database has really helped us over the years\n>> as it's made it easy to scale (run 500 databases on each server instead of\n>> 1000, performance just doubled)\n> I think that for what you're doing, partitioning at the database level is \n> probably a pretty good compromise solution. Like you say, it's easy to put \n> busy databases on a new server to balance out the load. Hardware is cheap.\n>\n>> various people (not database experts) are pushing to install Oracle\n>> cluster so that they can move all of these to one table with a customerID\n>> column.\n> Have these people identified a particular problem they're trying to solve, or \n> is this a religious issue for them? From your description it sounds like a \n> matter of dogma, not problem solving.\n\nin part it is, in part it's becouse the commercial database companies have \ntold management that doing database replication is impossible with so many \ndatabases (we first heard this back when we had 300 or so databases), \nwe've gone the expensive EMC disk-layer replication route, but they think \nthat mergeing everything will simplify things somehow so the database can \ndo it's job better.\n\nI see it as just a limitation on the replication solution offered by the \nbigname vendors.\n\n>> the database folks won't comment much on this either way, but they don't\n>> seem enthusiastic to combine all the data togeather.\n> I think they can see the fecal matter heading towards the rotational cooling \n> device on this one. I can't imagine this being a win from the perspective of \n> saving the company money.\n\nneither do I.\n\nDavid Lang\n", "msg_date": "Wed, 6 Jun 2007 22:22:26 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Thousands of tables versus on table?" } ]
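To make the partitioning approach discussed in this thread concrete, here is a minimal sketch of inheritance-based partitioning that routes inserts through a single trigger on the parent table instead of one rule per child, which is the change Scott says he would like to try. All object and column names below (measurements, measurements_c1, customer_id, and so on) are hypothetical illustrations rather than objects from the thread, and the two-partition layout is only for brevity; the same pattern extends to one child per customer or per date range on reasonably recent PostgreSQL versions.

-- Parent table plus one child per partition key value (hypothetical names).
CREATE TABLE measurements (
    customer_id integer NOT NULL,
    logged_at   timestamp NOT NULL,
    payload     text
);

CREATE TABLE measurements_c1 (CHECK (customer_id = 1)) INHERITS (measurements);
CREATE TABLE measurements_c2 (CHECK (customer_id = 2)) INHERITS (measurements);

-- One routing trigger on the parent replaces a rule per child table.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.customer_id = 1 THEN
        INSERT INTO measurements_c1 VALUES (NEW.*);
    ELSIF NEW.customer_id = 2 THEN
        INSERT INTO measurements_c2 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for customer_id %', NEW.customer_id;
    END IF;
    RETURN NULL;  -- the row is stored in the child, never in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();

-- Let the planner skip children whose CHECK constraint rules them out.
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM measurements WHERE customer_id = 2;

With constraint exclusion enabled, a query that filters on the partition key should touch only the children whose CHECK constraints can match, which is the effect described above for the 200 to 400 partition tests.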
[ { "msg_contents": "Does anyone have any experience running pg on multiple IBM 3950's set \nup as a single machine ?\n\nDave\n", "msg_date": "Mon, 4 Jun 2007 22:00:09 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql running on a virtual cluster" } ]
[ { "msg_contents": "I have some questions about the performance of certain types of SQL \nstatements.\n\nWhat sort of speed increase is there usually with binding parameters \n(and thus preparing statements) v. straight sql with interpolated \nvariables? Will Postgresql realize that the following queries are \neffectively the same (and thus re-use the query plan) or will it \nthink they are different?\n\n\tSELECT * FROM mytable WHERE item = 5;\n\tSELECT * FROM mytable WHERE item = 10;\n\nObviously to me or you they could use the same plan. From what I \nunderstand (correct me if I'm wrong), if you use parameter binding - \nlike \"SELECT * FROM mytable WHERE item = ?\" - Postgresql will know \nthat the queries can re-use the query plan, but I don't know if the \nsystem will recognize this with above situation.\n\nAlso, what's the difference between prepared statements (using \nPREPARE and EXECUTE) and regular functions (CREATE FUNCTION)? How do \nthey impact performance? From what I understand there is no exact \nparallel to stored procedures (as in MS SQL or oracle, that are \ncompletely precompiled) in Postgresql. At the same time, the \ndocumentation (and other sites as well, probably because they don't \nknow what they're talking about when it comes to databases) is vague \nbecause PL/pgSQL is often said to be able to write stored procedures \nbut nowhere does it say that PL/pgSQL programs are precompiled.\n\nThanks\nJason\n", "msg_date": "Mon, 4 Jun 2007 23:18:30 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Question about SQL performance" }, { "msg_contents": "On Mon, Jun 04, 2007 at 11:18:30PM -0400, Jason Lustig wrote:\n> I have some questions about the performance of certain types of SQL \n> statements.\n> \n> What sort of speed increase is there usually with binding parameters \n> (and thus preparing statements) v. straight sql with interpolated \n> variables? Will Postgresql realize that the following queries are \n> effectively the same (and thus re-use the query plan) or will it \n> think they are different?\n> \n> \tSELECT * FROM mytable WHERE item = 5;\n> \tSELECT * FROM mytable WHERE item = 10;\n>\n> Obviously to me or you they could use the same plan. From what I \n> understand (correct me if I'm wrong), if you use parameter binding - \n> like \"SELECT * FROM mytable WHERE item = ?\" - Postgresql will know \n> that the queries can re-use the query plan, but I don't know if the \n> system will recognize this with above situation.\n\nAlthough they could use the same plan, it is possible that using the\nsame plan is non-optimal. For example, if I know that 99% of the table\ncontains item = 5, but only 1% of the table contains item = 10, then\nthe 'best plan' may be a sequential scan for item = 5, but an index scan\nfor item = 10.\n\nIn the case of a prepared query, PostgreSQL will pick a plan that will\nbe good for all values, which may not be best for specific queries. You\nsave parsing time and planning time, but may risk increasing execution\ntime.\n\n> Also, what's the difference between prepared statements (using \n> PREPARE and EXECUTE) and regular functions (CREATE FUNCTION)? How do \n> they impact performance? From what I understand there is no exact \n> parallel to stored procedures (as in MS SQL or oracle, that are \n> completely precompiled) in Postgresql. 
At the same time, the \n> documentation (and other sites as well, probably because they don't \n> know what they're talking about when it comes to databases) is vague \n> because PL/pgSQL is often said to be able to write stored procedures \n> but nowhere does it say that PL/pgSQL programs are precompiled.\n\nI think you can find all of these answers in the documentation, including\nmy comments about prepared queries. Does it matter if the program is\nprecompiled? I believe it is, but why would it matter?\n\nAre you addressing a real performance problem? Or are you trying to avoid\nissues that you are not sure if they exist or not? :-)\n\nPrepared queries are going to improve performance due to being able to\nexecute multiple queries without communicating back to the\nclient. Especially for short queries, network latency can be a\nsignificant factor for execution speed.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 5 Jun 2007 01:23:12 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Question about SQL performance" }, { "msg_contents": "\n> What sort of speed increase is there usually with binding parameters \n> (and thus preparing statements) v. straight sql with interpolated \n> variables? Will Postgresql realize that the following queries are \n> effectively the same (and thus re-use the query plan) or will it think \n> they are different?\n>\n> \tSELECT * FROM mytable WHERE item = 5;\n> \tSELECT * FROM mytable WHERE item = 10;\n\n\tNo, if you send the above as text (not prepared) they are two different \nqueries.\n\tPostgres' query executor is so fast that parsing and planning can take \nlonger than query execution sometimes. This is true of very simple selects \nlike above, or some very complex queries which take a long time to plan \nbut don't actually process a lot of rows.\n\tI had this huge query (1 full page of SQL) with 5 joins, aggregates and \nsubqueries, returning about 30 rows ; it executed in about 5 ms, planning \nand parsing time was significant...\n\n> Obviously to me or you they could use the same plan. From what I \n> understand (correct me if I'm wrong), if you use parameter binding - \n> like \"SELECT * FROM mytable WHERE item = ?\" - Postgresql will know that \n> the queries can re-use the query plan, but I don't know if the system \n> will recognize this with above situation.\n\n\tIt depends if your client library is smart enough to prepare the \nstatements...\n\n> Also, what's the difference between prepared statements (using PREPARE \n> and EXECUTE) and regular functions (CREATE FUNCTION)? How do they impact \n> performance? From what I understand there is no exact parallel to stored \n> procedures (as in MS SQL or oracle, that are completely precompiled) in \n> Postgresql. At the same time, the documentation (and other sites as \n> well, probably because they don't know what they're talking about when \n> it comes to databases) is vague because PL/pgSQL is often said to be \n> able to write stored procedures but nowhere does it say that PL/pgSQL \n> programs are precompiled.\n\n\tPG stores the stored procedures as text. 
On first invocation, in each \nconnection, they are \"compiled\", ie. all statements in the SP are \nprepared, so the first invocation in a connection is slower than next \ninvocations. This is a problem if you do not use persistent connections.\n\n\tA simple select, when prepared, will take about 25 microseconds inside a \nSP and 50-100 microseconds as a query over the network. If not prepared, \nabout 150 µs or 2-3x slower.\n\n\tFYI Postgres beats MyISAM on \"small simple selects\" if you use prepared \nqueries.\n\n\n\tI use the following Python code to auto-prepare my queries :\n\ndb = PGConn( a function that returns a DB connection )\ndb.prep_exec( \"SELECT * FROM stuff WHERE id = %s\", 1 )\t# prepares and \nexecutes\ndb.prep_exec( \"SELECT * FROM stuff WHERE id = %s\", 2 )\t# executes only\n\n\nclass PGConn( object ):\n\t\n\tdef __init__( self, db_connector ):\n\t\tself.db_connector = db_connector\n\t\tself.reconnect()\n\t\n\tdef reconnect( self ):\n\t\tself.prep_cache = {}\n\t\tself.db = self.db_connector()\n\t\tself.db.set_isolation_level( 0 ) # autocommit\n\t\n\tdef cursor( self ):\n#\t\treturn self.db.cursor( cursor_factory=psycopg2.extras.DictCursor )\n\t\treturn self.db.cursor( )\n\t\t\n\tdef execute( self, sql, *args ):\n\t\tcursor = self.cursor()\n\t\ttry:\n\t\t\tcursor.execute( sql, args )\n\t\texcept:\n\t\t\tcursor.execute( \"ROLLBACK\" )\n\t\t\traise\n\t\treturn cursor\n\n\tdef executemany( self, sql, *args ):\n\t\tcursor = self.cursor()\n\t\ttry:\n\t\t\tcursor.executemany( sql, args )\n\t\texcept:\n\t\t\tcursor.execute( \"ROLLBACK\" )\n\t\t\traise\n\t\treturn cursor\n\n\tdef prep_exec( self, sql, *args ):\n\t\tcursor = self.cursor()\n\t\tstmt = self.prep_cache.get( sql )\n\t\tif stmt is None:\n\t\t\tname = \"stmt_%s\" % (len( self.prep_cache ) + 1)\n\t\t\tif args:\n\t\t\t\tprep = sql % tuple( \"$%d\"%(x+1) for x in xrange( len( args )) )\n\t\t\telse:\n\t\t\t\tprep = sql\n\t\t\tprep = \"PREPARE %s AS %s\" % (name, prep)\n\t\t\tcursor.execute( prep )\n\t\t\tif args:\n\t\t\t\tstmt = \"EXECUTE %s( %s )\" % (name, \", \".join( [\"%s\"] * len( args ) ))\n\t\t\telse:\n\t\t\t\tstmt = \"EXECUTE %s\" % (name,)\n\t\t\tself.prep_cache[ sql ] = stmt\n\t\t\t\n\t\ttry:\n\t\t\tcursor.execute( stmt, args )\n\t\texcept Exception, e:\n\t\t\ttraceback.print_exc()\n\t\t\tprint \"Error while executing prepared SQL statement :\", stmt\n\t\t\tprint \"Arguments :\", args\n\t\t\tprint \"Original SQL is :\", sql\n\t\t\tcursor.execute( \"ROLLBACK\" )\n\t\t\traise\n\t\t\n\t\treturn cursor\n\n", "msg_date": "Tue, 05 Jun 2007 07:52:12 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about SQL performance" }, { "msg_contents": "Jason Lustig wrote:\n> I have some questions about the performance of certain types of SQL \n> statements.\n> \n> What sort of speed increase is there usually with binding parameters \n> (and thus preparing statements) v. straight sql with interpolated \n> variables? Will Postgresql realize that the following queries are \n> effectively the same (and thus re-use the query plan) or will it think \n> they are different?\n\nPG will plan \"raw\" sql every time you issue a query.\n\n> SELECT * FROM mytable WHERE item = 5;\n> SELECT * FROM mytable WHERE item = 10;\n> \n> Obviously to me or you they could use the same plan. \n\nExcept that in-between query 1 and 2 I inserted 10 million rows where \nitem=10. 
Still obvious?\n\n > From what I\n> understand (correct me if I'm wrong), if you use parameter binding - \n> like \"SELECT * FROM mytable WHERE item = ?\" - Postgresql will know that \n> the queries can re-use the query plan, but I don't know if the system \n> will recognize this with above situation.\n\nIf you are using PREPARE/EXECUTE (or your client-side library is doing \nit for you).\n\n> Also, what's the difference between prepared statements (using PREPARE \n> and EXECUTE) and regular functions (CREATE FUNCTION)? How do they impact \n> performance? \n\nFunctions can be in any language, but if they both are in SQL and do the \nsame thing, no real difference.\n\n > From what I understand there is no exact parallel to stored\n> procedures (as in MS SQL or oracle, that are completely precompiled) in \n> Postgresql. \n\nYou can write functions in C - that's compiled. Not sure if java \nprocedural code has its byte-code cached between sessions.\n\n > At the same time, the documentation (and other sites as\n> well, probably because they don't know what they're talking about when \n> it comes to databases) is vague because PL/pgSQL is often said to be \n> able to write stored procedures but nowhere does it say that PL/pgSQL \n> programs are precompiled.\n\nI don't see the connection.\n1. You can write procedural code in pl/pgsql\n2. It's not precompiled (it's \"compiled\" on first use)\n\nAre you looking to solve a particular problem?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Jun 2007 09:50:56 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about SQL performance" } ]
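As a concrete illustration of the PREPARE/EXECUTE behaviour described in this thread, here is a small sketch reusing the mytable/item example from the original question. The table is assumed to already exist, and the statement name get_items is hypothetical; the point is only to show where parsing and planning happen and how to inspect the plan a prepared statement actually uses. Exactly which plan EXECUTE runs can vary by server version, so treat the output as something to check rather than a guarantee.

-- Parse and plan once, then reuse the statement with different parameters.
PREPARE get_items (integer) AS
    SELECT * FROM mytable WHERE item = $1;

EXECUTE get_items(5);
EXECUTE get_items(10);

-- Compare the parameterised plan against the plan chosen for a literal;
-- a plan that must work for every possible value can differ from the
-- best plan for one specific, skewed value.
EXPLAIN EXECUTE get_items(10);
EXPLAIN SELECT * FROM mytable WHERE item = 10;

DEALLOCATE get_items;

Client libraries that bind parameters, such as the Python prep_exec wrapper shown earlier in the thread, are doing essentially this on the application's behalf.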
[ { "msg_contents": "Hi,\n\nexplain analyze SELECT am.campaign_id, am.optimize_type,\nam.creative_id, am.optimize_by_days, am.impressions_delta,\nam.clicks_delta, am.channel_code, am.cost,dm.allocation_map_id,\nSUM(CASE dm.sqldate when 20070602 then dm.impressions_delivered else 0\nend) as deliv_yest, SUM(CASE sign(20070526 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk1, SUM(CASE\nsign(20070519 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk2, SUM(CASE sign(20070512 - dm.sqldate ) when -1\nthen dm.impressions_delivered else 0 end) as deliv_wk3, SUM(CASE\nsign(20070505 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk4, SUM(CASE sign(20070428 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk5, SUM(CASE\nsign(20070421 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk6, SUM(CASE sign(20070414 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk7, SUM(CASE\nsign(20070407 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk8, SUM(CASE dm.sqldate when 20070602 then\ndm.clicks_delivered else 0 end) as clicks_yest, SUM(CASE sign(20070526\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk1, SUM(CASE sign(20070519 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk2, SUM(CASE sign(20070512\n-dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk3, SUM(CASE sign(20070505 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk4, SUM(CASE sign(20070428\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk5, SUM(CASE sign(20070421 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk6, SUM(CASE sign(20070414\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk7, SUM(CASE sign(20070407 -dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk8 FROM dl_mp dm INNER JOIN\n (SELECT cr.campaign_id, cr.optimize_type, cr.creative_id,\ncr.optimize_by_days, am1.impressions_delta, am1.clicks_delta,\nam1.channel_code , am1.id , cr.cost FROM al_mp am1 INNER JOIN (SELECT\nc.campaign_id , c.optimize_type, cr1.id AS creative_id,\nc.optimize_by_days, c.cost FROM crt cr1 INNER JOIN (SELECT\nc1.asset_id AS campaign_id, ca.value AS optimize_type,\nc1.optimize_by_days AS optimize_by_days , c1.cost as cost FROM cmp c1\nINNER JOIN (SELECT ca2.campaign_id AS campaign_id, ca3.value AS value\nFROM cmp_attr ca2, cmp_attr ca3 WHERE ca2.campaign_id =\nca3.campaign_id AND ca2.attribute = 'OPTIMIZE_STATUS' AND ca2.value\n= '1' AND ca3. 
attribute = 'OPTIMIZE_TYPE') as ca ON c1.asset_id =\nca.campaign_id WHERE 20070603 BETWEEN (c1.start_date - interval '1\nday') AND (c1.end_date + interval '1 day') AND c1.status = 'A' AND\nc1.revenue_type != 'FOC') AS c ON cr1.campaign_id =c.campaign_id\nAND c.optimize_by_days > 0 WHERE cr1.status != 'HID') AS cr ON\ncr.creative_id = am1.creative_id WHERE am1.status = 'A') AS am ON\nam.id = dm.allocation_map_id AND am.creative_id = dm.creative_id AND\nam.channel_code = dm.channel_code GROUP BY am.campaign_id,\nam.optimize_type, am.creative_id, am.optimize_by_days ,\nam.impressions_delta,am.clicks_delta , am.channel_code, am.cost ,\ndm.allocation_map_id;\n\n\n\n\n.\n\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=92536.47..92536.69 rows=1 width=138) (actual\ntime=3194317.938..3194324.811 rows=1584 loops=1)\n -> Nested Loop (cost=66901.04..92536.40 rows=1 width=138) (actual\ntime=9044.558..3193988.565 rows=13556 loops=1)\n Join Filter: ((\"outer\".channel_code = \"inner\".channel_code)\nAND (\"inner\".creative_id = \"outer\".creative_id))\n -> Nested Loop (cost=40269.84..41486.82 rows=1 width=122)\n(actual time=442.818..119250.727 rows=11483 loops=1)\n -> Nested Loop (cost=105.83..333.55 rows=1 width=94)\n(actual time=17.199..117.536 rows=263 loops=1)\n -> Nested Loop (cost=105.83..329.53 rows=1\nwidth=24) (actual time=17.175..106.429 rows=263 loops=1)\n -> Nested Loop (cost=105.83..171.93\nrows=1 width=16) (actual time=17.125..40.490 rows=38 loops=1)\n -> Bitmap Heap Scan on cmp_attr ca2\n(cost=105.83..168.20 rows=1 width=4) (actual time=1.759..5.767\nrows=1186 loops=1)\n Recheck Cond:\n((attribute)::text = 'OPTIMIZE_STATUS'::text)\n Filter: ((value)::text = '1'::text)\n -> Bitmap Index Scan on\ncampaign_attributes_pk (cost=0.00..105.83 rows=60 width=0) (actual\ntime=1.721..1.721 rows=1279 loops=1)\n Index Cond:\n((attribute)::text = 'OPTIMIZE_STATUS'::text)\n -> Index Scan using cmp_pk1 on cmp\nc1 (cost=0.00..3.72 rows=1 width=12) (actual time=0.025..0.026 rows=0\nloops=1186)\n Index Cond: (c1.asset_id =\n\"outer\".campaign_id)\n Filter: (('20070603'::text >=\n((start_date - '1 day'::interval))::text) AND ('20070603'::text <=\n((end_date + '1 day'::interval))::text) AND (status = 'A'::bpchar) AND\n(revenue_type <> 'FOC'::bpchar) AND (optimize_by_days > 0))\n -> Index Scan using creative_c_id on crt\ncr1 (cost=0.00..156.55 rows=84 width=8) (actual time=0.051..1.699\nrows=7 loops=38)\n Index Cond: (cr1.campaign_id =\n\"outer\".asset_id)\n Filter: (status <> 'HID'::bpchar)\n -> Index Scan using campaign_attributes_pk on\ncmp_attr ca3 (cost=0.00..4.01 rows=1 width=82) (actual\ntime=0.027..0.031 rows=1 loops=263)\n Index Cond: ((\"outer\".campaign_id =\nca3.campaign_id) AND ((ca3.attribute)::text = 'OPTIMIZE_TYPE'::text))\n -> Bitmap Heap Scan on al_mp am1\n(cost=40164.01..41146.99 rows=502 width=28) (actual\ntime=447.274..452.698 rows=44 loops=263)\n Recheck Cond: (\"outer\".id = am1.creative_id)\n Filter: ((status)::text = 'A'::text)\n -> Bitmap Index Scan on alc_map_idx\n(cost=0.00..40164.01 rows=502 width=0) (actual time=447.145..447.145\nrows=144 loops=263)\n Index Cond: (\"outer\".id = am1.creative_id)\n -> Bitmap Heap Scan on dl_mp dm (cost=26631.20..50697.13\nrows=20140 width=32) (actual time=266.680..267.745 rows=1 
loops=11483)\n Recheck Cond: (\"outer\".id = dm.allocation_map_id)\n -> Bitmap Index Scan on dl_mp_amap_dt\n(cost=0.00..26631.20 rows=20140 width=0) (actual time=266.436..266.436\nrows=1 loops=11483)\n Index Cond: (\"outer\".id = dm.allocation_map_id)\n Total runtime: 3194328.561 ms\n(30 rows)\n\n\nBefor doing vaccum full on the database this query use to take less\nthan 4min. But now after doing vacumming reindexing the tables it is\ntaking 73mins.\n\nAfter observing the explain analyse it seems like it is not selecting\nthe required index properly.\n\nSo can anybody suggest any thing??\n\n-- \nRegards\nGauri\n", "msg_date": "Tue, 5 Jun 2007 14:56:39 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Problem" }, { "msg_contents": "Hi,\n\nexplain analyze SELECT am.campaign_id, am.optimize_type,\nam.creative_id, am.optimize_by_days, am.impressions_delta,\nam.clicks_delta, am.channel_code, am.cost,dm.allocation_map_id,\nSUM(CASE dm.sqldate when 20070602 then dm.impressions_delivered else 0\nend) as deliv_yest, SUM(CASE sign(20070526 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk1, SUM(CASE\nsign(20070519 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk2, SUM(CASE sign(20070512 - dm.sqldate ) when -1\nthen dm.impressions_delivered else 0 end) as deliv_wk3, SUM(CASE\nsign(20070505 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk4, SUM(CASE sign(20070428 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk5, SUM(CASE\nsign(20070421 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk6, SUM(CASE sign(20070414 - dm.sqldate) when -1 then\ndm.impressions_delivered else 0 end) as deliv_wk7, SUM(CASE\nsign(20070407 - dm.sqldate) when -1 then dm.impressions_delivered else\n0 end) as deliv_wk8, SUM(CASE dm.sqldate when 20070602 then\ndm.clicks_delivered else 0 end) as clicks_yest, SUM(CASE sign(20070526\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk1, SUM(CASE sign(20070519 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk2, SUM(CASE sign(20070512\n-dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk3, SUM(CASE sign(20070505 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk4, SUM(CASE sign(20070428\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk5, SUM(CASE sign(20070421 - dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk6, SUM(CASE sign(20070414\n- dm.sqldate) when -1 then dm.clicks_delivered else 0 end) as\nclicks_wk7, SUM(CASE sign(20070407 -dm.sqldate) when -1 then\ndm.clicks_delivered else 0 end) as clicks_wk8 FROM dl_mp dm INNER JOIN\n (SELECT cr.campaign_id, cr.optimize_type, cr.creative_id,\ncr.optimize_by_days, am1.impressions_delta, am1.clicks_delta,\nam1.channel_code , am1.id , cr.cost FROM al_mp am1 INNER JOIN (SELECT\nc.campaign_id , c.optimize_type, cr1.id AS creative_id,\nc.optimize_by_days, c.cost FROM crt cr1 INNER JOIN (SELECT\nc1.asset_id AS campaign_id, ca.value AS optimize_type,\nc1.optimize_by_days AS optimize_by_days , c1.cost as cost FROM cmp c1\nINNER JOIN (SELECT ca2.campaign_id AS campaign_id, ca3.value AS value\nFROM cmp_attr ca2, cmp_attr ca3 WHERE ca2.campaign_id =\nca3.campaign_id AND ca2.attribute = 'OPTIMIZE_STATUS' AND ca2.value\n= '1' AND ca3. 
attribute = 'OPTIMIZE_TYPE') as ca ON c1.asset_id =\nca.campaign_id WHERE 20070603 BETWEEN (c1.start_date - interval '1\nday') AND (c1.end_date + interval '1 day') AND c1.status = 'A' AND\nc1.revenue_type != 'FOC') AS c ON cr1.campaign_id =c.campaign_id\nAND c.optimize_by_days > 0 WHERE cr1.status != 'HID') AS cr ON\ncr.creative_id = am1.creative_id WHERE am1.status = 'A') AS am ON\nam.id = dm.allocation_map_id AND am.creative_id = dm.creative_id AND\nam.channel_code = dm.channel_code GROUP BY am.campaign_id,\nam.optimize_type, am.creative_id, am.optimize_by_days ,\nam.impressions_delta,am.clicks_delta , am.channel_code, am.cost ,\ndm.allocation_map_id;\n\n\n\n\n.\n\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=92536.47..92536.69 rows=1 width=138) (actual\ntime=3194317.938..3194324.811 rows=1584 loops=1)\n -> Nested Loop (cost=66901.04..92536.40 rows=1 width=138) (actual\ntime=9044.558..3193988.565 rows=13556 loops=1)\n Join Filter: ((\"outer\".channel_code = \"inner\".channel_code)\nAND (\"inner\".creative_id = \"outer\".creative_id))\n -> Nested Loop (cost=40269.84..41486.82 rows=1 width=122)\n(actual time=442.818..119250.727 rows=11483 loops=1)\n -> Nested Loop (cost=105.83..333.55 rows=1 width=94)\n(actual time=17.199..117.536 rows=263 loops=1)\n -> Nested Loop (cost=105.83..329.53 rows=1\nwidth=24) (actual time=17.175..106.429 rows=263 loops=1)\n -> Nested Loop (cost=105.83..171.93\nrows=1 width=16) (actual time=17.125..40.490 rows=38 loops=1)\n -> Bitmap Heap Scan on cmp_attr ca2\n(cost=105.83..168.20 rows=1 width=4) (actual time=1.759..5.767\nrows=1186 loops=1)\n Recheck Cond:\n((attribute)::text = 'OPTIMIZE_STATUS'::text)\n Filter: ((value)::text = '1'::text)\n -> Bitmap Index Scan on\ncampaign_attributes_pk (cost=0.00..105.83 rows=60 width=0) (actual\ntime=1.721..1.721 rows=1279 loops=1)\n Index Cond:\n((attribute)::text = 'OPTIMIZE_STATUS'::text)\n -> Index Scan using cmp_pk1 on cmp\nc1 (cost=0.00..3.72 rows=1 width=12) (actual time=0.025..0.026 rows=0\nloops=1186)\n Index Cond: (c1.asset_id =\n\"outer\".campaign_id)\n Filter: (('20070603'::text >=\n((start_date - '1 day'::interval))::text) AND ('20070603'::text <=\n((end_date + '1 day'::interval))::text) AND (status = 'A'::bpchar) AND\n(revenue_type <> 'FOC'::bpchar) AND (optimize_by_days > 0))\n -> Index Scan using creative_c_id on crt\ncr1 (cost=0.00..156.55 rows=84 width=8) (actual time=0.051..1.699\nrows=7 loops=38)\n Index Cond: (cr1.campaign_id =\n\"outer\".asset_id)\n Filter: (status <> 'HID'::bpchar)\n -> Index Scan using campaign_attributes_pk on\ncmp_attr ca3 (cost=0.00..4.01 rows=1 width=82) (actual\ntime=0.027..0.031 rows=1 loops=263)\n Index Cond: ((\"outer\".campaign_id =\nca3.campaign_id) AND ((ca3.attribute)::text = 'OPTIMIZE_TYPE'::text))\n -> Bitmap Heap Scan on al_mp am1\n(cost=40164.01..41146.99 rows=502 width=28) (actual\ntime=447.274..452.698 rows=44 loops=263)\n Recheck Cond: (\"outer\".id = am1.creative_id)\n Filter: ((status)::text = 'A'::text)\n -> Bitmap Index Scan on alc_map_idx\n(cost=0.00..40164.01 rows=502 width=0) (actual time=447.145..447.145\nrows=144 loops=263)\n Index Cond: (\"outer\".id = am1.creative_id)\n -> Bitmap Heap Scan on dl_mp dm (cost=26631.20..50697.13\nrows=20140 width=32) (actual time=266.680..267.745 rows=1 
loops=11483)\n Recheck Cond: (\"outer\".id = dm.allocation_map_id)\n -> Bitmap Index Scan on dl_mp_amap_dt\n(cost=0.00..26631.20 rows=20140 width=0) (actual time=266.436..266.436\nrows=1 loops=11483)\n Index Cond: (\"outer\".id = dm.allocation_map_id)\n Total runtime: 3194328.561 ms\n(30 rows)\n\n\nBefor doing vaccum full on the database this query use to take less\nthan 4min. But now after doing vacumming reindexing the tables it is\ntaking 73mins.\n\nAfter observing the explain analyse it seems like it is not selecting\nthe required index properly.\n\nSo can anybody suggest any thing??\n\n--\nRegards\nGauri\n", "msg_date": "Tue, 5 Jun 2007 15:23:35 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Problem" }, { "msg_contents": "On Tue, Jun 05, 2007 at 03:23:35PM +0530, Gauri Kanekar wrote:\n> Befor doing vaccum full on the database this query use to take less\n> than 4min. But now after doing vacumming reindexing the tables it is\n> taking 73mins.\n\nDid you analyze the table recently? Some of the selectivity estimates seem\nquite a bit off -- you could try raising the statistics target.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 5 Jun 2007 15:16:45 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Problem" }, { "msg_contents": "\"Gauri Kanekar\" <[email protected]> writes:\n\n> Befor doing vaccum full on the database this query use to take less\n> than 4min. But now after doing vacumming reindexing the tables it is\n> taking 73mins.\n\nVacuum full is generally not necessary. You do need to ensure regular vacuum\nis run frequently on heavily updated tables though.\n\n> After observing the explain analyse it seems like it is not selecting\n> the required index properly.\n>\n> So can anybody suggest any thing??\n\n-> Bitmap Index Scan on campaign_attributes_pk (cost=0.00..105.83 rows=60 width=0) (actual time=1.721..1.721 rows=1279 loops=1)\n\nWhen's the last time you analyzed your tables? Postgres is guessing it'll find\n60 rows and instead finding over a thousands rows...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 05 Jun 2007 14:32:33 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Problem" } ]
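The two replies above amount to a short maintenance recipe. The sketch below uses the cmp_attr table and the columns that appear in the posted plan; the statistics target of 200 is only an illustrative value, and whether it helps depends on the data distribution, so re-run EXPLAIN ANALYZE after each step.

-- Refresh the planner's statistics first; stale statistics alone can
-- produce estimates like "rows=60" where the real count is over a thousand.
ANALYZE cmp_attr;

-- If the estimates are still far off, raise the per-column statistics
-- target on the skewed columns and analyze again.
ALTER TABLE cmp_attr ALTER COLUMN attribute SET STATISTICS 200;
ALTER TABLE cmp_attr ALTER COLUMN "value"   SET STATISTICS 200;
ANALYZE cmp_attr;

-- For routine maintenance, prefer plain VACUUM ANALYZE (or autovacuum)
-- over VACUUM FULL, which takes an exclusive lock and is rarely needed.
VACUUM ANALYZE cmp_attr;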
[ { "msg_contents": "I have a table:\nwebdigest=# \\d wd_urlusermaps\n锟斤拷 \"public.wd_urlusermaps\"\n锟街讹拷锟斤拷 | 锟斤拷锟斤拷 | 锟斤拷锟轿达拷\n---------+-----------------------------+-------------------------------------------------------------\nid | integer | not null default nextval('wd_urlusermaps_id_seq'::regclass)\nurlid | integer | not null\ntag | character varying(512) |\ntitle | character varying(512) |\nsummary | character varying(1024) |\ncomment | character varying(1024) |\nctime | timestamp without time zone |\nmtime | timestamp without time zone |\nshare | smallint |\nuserid | integer |\nimport | smallint | default 0\n锟斤拷锟斤拷:\n\"wd_urlusermaps_pkey\" PRIMARY KEY, btree (id) CLUSTER\n\"urlusermaps_urlid_userid\" UNIQUE, btree (urlid, userid)\n\"urlusermaps_urlid\" btree (urlid)\n\"urlusermaps_userid\" btree (userid)\n\"wd_urlusermaps_ctime_idx\" btree (ctime)\n\"wd_urlusermaps_share_idx\" btree (\"share\")\n\nand target statistic set to 1000, and two different query plan:\n\nwebdigest=# explain analyze select A.id as\nfav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\nwd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\nlimit 20 ;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..4932.56 rows=20 width=96) (actual\ntime=730.461..2374.435 rows=20 loops=1)\n-> Index Scan Backward using wd_urlusermaps_pkey on wd_urlusermaps a\n(cost=0.00..269810.77 rows=1094 width=96) (actual time=730.456..2374.367\nrows=20 loops=1)\nFilter: ((\"share\" = 1) AND (userid = 219177))\nTotal runtime: 2374.513 ms\n(4 rows)\n\nwebdigest=# explain analyze select A.id as\nfav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\nwd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\nlimit 40 ;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=6805.77..6805.87 rows=40 width=96) (actual time=5.731..5.905\nrows=40 loops=1)\n-> Sort (cost=6805.77..6808.50 rows=1094 width=96) (actual\ntime=5.726..5.785 rows=40 loops=1)\nSort Key: id\n-> Index Scan using urlusermaps_userid on wd_urlusermaps a\n(cost=0.00..6750.55 rows=1094 width=96) (actual time=0.544..5.616\nrows=41 loops=1)\nIndex Cond: (userid = 219177)\nFilter: (\"share\" = 1)\nTotal runtime: 6.013 ms\n(7 rows)\n\nthe userid=219177 got 2000+ record and around 40 shared=1, why above 2 query\nshows so much difference?\n\nany hint would be greatly appreciated.\n\n-laser\n\n", "msg_date": "Wed, 06 Jun 2007 11:45:43 +0800", "msg_from": "weiping <[email protected]>", "msg_from_op": true, "msg_subject": "weird query plan" }, { "msg_contents": "sorry, forgot to mention our version, it's postgresql 8.2.3\n\n-laser\n> I have a table:\n> webdigest=# \\d wd_urlusermaps\n> 锟斤拷 \"public.wd_urlusermaps\"\n> 锟街讹拷锟斤拷 | 锟斤拷锟斤拷 | 锟斤拷锟轿达拷\n> ---------+-----------------------------+-------------------------------------------------------------\n> id | integer | not null default nextval('wd_urlusermaps_id_seq'::regclass)\n> urlid | integer | not null\n> tag | character varying(512) |\n> title | character varying(512) |\n> summary | character varying(1024) |\n> comment | character varying(1024) |\n> ctime | timestamp without time zone |\n> mtime | timestamp without time zone |\n> share | smallint |\n> userid | integer |\n> import | smallint | default 
0\n> 锟斤拷锟斤拷:\n> \"wd_urlusermaps_pkey\" PRIMARY KEY, btree (id) CLUSTER\n> \"urlusermaps_urlid_userid\" UNIQUE, btree (urlid, userid)\n> \"urlusermaps_urlid\" btree (urlid)\n> \"urlusermaps_userid\" btree (userid)\n> \"wd_urlusermaps_ctime_idx\" btree (ctime)\n> \"wd_urlusermaps_share_idx\" btree (\"share\")\n>\n> and target statistic set to 1000, and two different query plan:\n>\n> webdigest=# explain analyze select A.id as\n> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n> limit 20 ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..4932.56 rows=20 width=96) (actual\n> time=730.461..2374.435 rows=20 loops=1)\n> -> Index Scan Backward using wd_urlusermaps_pkey on wd_urlusermaps a\n> (cost=0.00..269810.77 rows=1094 width=96) (actual time=730.456..2374.367\n> rows=20 loops=1)\n> Filter: ((\"share\" = 1) AND (userid = 219177))\n> Total runtime: 2374.513 ms\n> (4 rows)\n>\n> webdigest=# explain analyze select A.id as\n> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n> limit 40 ;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=6805.77..6805.87 rows=40 width=96) (actual time=5.731..5.905\n> rows=40 loops=1)\n> -> Sort (cost=6805.77..6808.50 rows=1094 width=96) (actual\n> time=5.726..5.785 rows=40 loops=1)\n> Sort Key: id\n> -> Index Scan using urlusermaps_userid on wd_urlusermaps a\n> (cost=0.00..6750.55 rows=1094 width=96) (actual time=0.544..5.616\n> rows=41 loops=1)\n> Index Cond: (userid = 219177)\n> Filter: (\"share\" = 1)\n> Total runtime: 6.013 ms\n> (7 rows)\n>\n> the userid=219177 got 2000+ record and around 40 shared=1, why above 2 query\n> shows so much difference?\n>\n> any hint would be greatly appreciated.\n>\n> -laser\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n", "msg_date": "Wed, 06 Jun 2007 11:59:16 +0800", "msg_from": "weiping <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird query plan" }, { "msg_contents": "I changed the query to :\nEXPLAIN ANALYZE select id from wd_urlusermaps where id in (select id\nfrom wd_urlusermaps where share =1 and userid='219177') order by id desc\nlimit 20;\n\nand it's much better now (from real execute time), but the cost report\nhigher\nthen slower one above, may be I should do some tunning on planner\nparameter or\nis it a planner bug?\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=16118.83..16118.88 rows=20 width=4) (actual\ntime=17.539..17.619 rows=20 loops=1)\n-> Sort (cost=16118.83..16121.57 rows=1094 width=4) (actual\ntime=17.534..17.560 rows=20 loops=1)\nSort Key: public.wd_urlusermaps.id\n-> Nested Loop (cost=6753.28..16063.61 rows=1094 width=4) (actual\ntime=16.739..17.439 rows=41 loops=1)\n-> HashAggregate (cost=6753.28..6764.22 rows=1094 width=4) 
(actual\ntime=16.707..16.786 rows=41 loops=1)\n-> Index Scan using urlusermaps_userid on wd_urlusermaps\n(cost=0.00..6750.55 rows=1094 width=4) (actual time=1.478..16.563\nrows=41 loops=1)\nIndex Cond: (userid = 219177)\nFilter: (\"share\" = 1)\n-> Index Scan using wd_urlusermaps_pkey on wd_urlusermaps\n(cost=0.00..8.49 rows=1 width=4) (actual time=0.008..0.010 rows=1 loops=41)\nIndex Cond: (public.wd_urlusermaps.id = public.wd_urlusermaps.id)\nTotal runtime: 17.762 ms\n(11 rows)\n\n> sorry, forgot to mention our version, it's postgresql 8.2.3\n>\n> -laser\n> \n>> I have a table:\n>> webdigest=# \\d wd_urlusermaps\n>> 锟斤拷 \"public.wd_urlusermaps\"\n>> 锟街讹拷锟斤拷 | 锟斤拷锟斤拷 | 锟斤拷锟轿达拷\n>> ---------+-----------------------------+-------------------------------------------------------------\n>> id | integer | not null default nextval('wd_urlusermaps_id_seq'::regclass)\n>> urlid | integer | not null\n>> tag | character varying(512) |\n>> title | character varying(512) |\n>> summary | character varying(1024) |\n>> comment | character varying(1024) |\n>> ctime | timestamp without time zone |\n>> mtime | timestamp without time zone |\n>> share | smallint |\n>> userid | integer |\n>> import | smallint | default 0\n>> 锟斤拷锟斤拷:\n>> \"wd_urlusermaps_pkey\" PRIMARY KEY, btree (id) CLUSTER\n>> \"urlusermaps_urlid_userid\" UNIQUE, btree (urlid, userid)\n>> \"urlusermaps_urlid\" btree (urlid)\n>> \"urlusermaps_userid\" btree (userid)\n>> \"wd_urlusermaps_ctime_idx\" btree (ctime)\n>> \"wd_urlusermaps_share_idx\" btree (\"share\")\n>>\n>> and target statistic set to 1000, and two different query plan:\n>>\n>> webdigest=# explain analyze select A.id as\n>> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n>> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n>> limit 20 ;\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..4932.56 rows=20 width=96) (actual\n>> time=730.461..2374.435 rows=20 loops=1)\n>> -> Index Scan Backward using wd_urlusermaps_pkey on wd_urlusermaps a\n>> (cost=0.00..269810.77 rows=1094 width=96) (actual time=730.456..2374.367\n>> rows=20 loops=1)\n>> Filter: ((\"share\" = 1) AND (userid = 219177))\n>> Total runtime: 2374.513 ms\n>> (4 rows)\n>>\n>> webdigest=# explain analyze select A.id as\n>> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n>> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n>> limit 40 ;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=6805.77..6805.87 rows=40 width=96) (actual time=5.731..5.905\n>> rows=40 loops=1)\n>> -> Sort (cost=6805.77..6808.50 rows=1094 width=96) (actual\n>> time=5.726..5.785 rows=40 loops=1)\n>> Sort Key: id\n>> -> Index Scan using urlusermaps_userid on wd_urlusermaps a\n>> (cost=0.00..6750.55 rows=1094 width=96) (actual time=0.544..5.616\n>> rows=41 loops=1)\n>> Index Cond: (userid = 219177)\n>> Filter: (\"share\" = 1)\n>> Total runtime: 6.013 ms\n>> (7 rows)\n>>\n>> the userid=219177 got 2000+ record and around 40 shared=1, why above 2 query\n>> shows so much difference?\n>>\n>> any hint would be greatly appreciated.\n>>\n>> -laser\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please 
send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>>\n>> \n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n> \n", "msg_date": "Wed, 06 Jun 2007 15:24:24 +0800", "msg_from": "weiping <[email protected]>", "msg_from_op": true, "msg_subject": "different query plan because different limit # (Re: weird query plan)" }, { "msg_contents": "continue digging shows:\nset cpu_tuple_cost to 0.1;\nexplain analyze select * from wd_urlusermaps where share =1 and\nuserid='219177' order by id desc limit 20;\nSET\n时锟斤拷: 0.256 ms\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=7063.98..7064.03 rows=20 width=110) (actual\ntime=6.047..6.130 rows=20 loops=1)\n-> Sort (cost=7063.98..7066.71 rows=1094 width=110) (actual\ntime=6.043..6.070 rows=20 loops=1)\nSort Key: id\n-> Index Scan using urlusermaps_userid on wd_urlusermaps\n(cost=0.00..7008.76 rows=1094 width=110) (actual time=0.710..5.838\nrows=41 loops=1)\nIndex Cond: (userid = 219177)\nFilter: (\"share\" = 1)\nTotal runtime: 6.213 ms\n(7 rows)\n\nnow it's what i need, which means we should increase cpu_tuple_cost for\nlarge\nRAM node (we got 16G RAN and the table only serveral hundred M) to avoid\nsort\nhappened too early. is it true?\n\n-laser\n> I changed the query to :\n> EXPLAIN ANALYZE select id from wd_urlusermaps where id in (select id\n> from wd_urlusermaps where share =1 and userid='219177') order by id desc\n> limit 20;\n>\n> and it's much better now (from real execute time), but the cost report\n> higher\n> then slower one above, may be I should do some tunning on planner\n> parameter or\n> is it a planner bug?\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=16118.83..16118.88 rows=20 width=4) (actual\n> time=17.539..17.619 rows=20 loops=1)\n> -> Sort (cost=16118.83..16121.57 rows=1094 width=4) (actual\n> time=17.534..17.560 rows=20 loops=1)\n> Sort Key: public.wd_urlusermaps.id\n> -> Nested Loop (cost=6753.28..16063.61 rows=1094 width=4) (actual\n> time=16.739..17.439 rows=41 loops=1)\n> -> HashAggregate (cost=6753.28..6764.22 rows=1094 width=4) (actual\n> time=16.707..16.786 rows=41 loops=1)\n> -> Index Scan using urlusermaps_userid on wd_urlusermaps\n> (cost=0.00..6750.55 rows=1094 width=4) (actual time=1.478..16.563\n> rows=41 loops=1)\n> Index Cond: (userid = 219177)\n> Filter: (\"share\" = 1)\n> -> Index Scan using wd_urlusermaps_pkey on wd_urlusermaps\n> (cost=0.00..8.49 rows=1 width=4) (actual time=0.008..0.010 rows=1 loops=41)\n> Index Cond: (public.wd_urlusermaps.id = public.wd_urlusermaps.id)\n> Total runtime: 17.762 ms\n> (11 rows)\n>\n> \n>> sorry, forgot to mention our version, it's postgresql 8.2.3\n>>\n>> -laser\n>> \n>> \n>>> I have a table:\n>>> webdigest=# \\d wd_urlusermaps\n>>> 锟斤拷 \"public.wd_urlusermaps\"\n>>> 锟街讹拷锟斤拷 | 锟斤拷锟斤拷 | 锟斤拷锟轿达拷\n>>> ---------+-----------------------------+-------------------------------------------------------------\n>>> id | integer | not null default nextval('wd_urlusermaps_id_seq'::regclass)\n>>> urlid | integer | not null\n>>> tag | character varying(512) |\n>>> title | character varying(512) |\n>>> summary | character 
varying(1024) |\n>>> comment | character varying(1024) |\n>>> ctime | timestamp without time zone |\n>>> mtime | timestamp without time zone |\n>>> share | smallint |\n>>> userid | integer |\n>>> import | smallint | default 0\n>>> 锟斤拷锟斤拷:\n>>> \"wd_urlusermaps_pkey\" PRIMARY KEY, btree (id) CLUSTER\n>>> \"urlusermaps_urlid_userid\" UNIQUE, btree (urlid, userid)\n>>> \"urlusermaps_urlid\" btree (urlid)\n>>> \"urlusermaps_userid\" btree (userid)\n>>> \"wd_urlusermaps_ctime_idx\" btree (ctime)\n>>> \"wd_urlusermaps_share_idx\" btree (\"share\")\n>>>\n>>> and target statistic set to 1000, and two different query plan:\n>>>\n>>> webdigest=# explain analyze select A.id as\n>>> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n>>> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n>>> limit 20 ;\n>>> QUERY PLAN\n>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=0.00..4932.56 rows=20 width=96) (actual\n>>> time=730.461..2374.435 rows=20 loops=1)\n>>> -> Index Scan Backward using wd_urlusermaps_pkey on wd_urlusermaps a\n>>> (cost=0.00..269810.77 rows=1094 width=96) (actual time=730.456..2374.367\n>>> rows=20 loops=1)\n>>> Filter: ((\"share\" = 1) AND (userid = 219177))\n>>> Total runtime: 2374.513 ms\n>>> (4 rows)\n>>>\n>>> webdigest=# explain analyze select A.id as\n>>> fav_id,A.urlid,A.tag,A.title,A.summary,A.comment,A.ctime,A.share from\n>>> wd_urlusermaps A where share =1 and A.userid='219177' ORDER BY A.id DESC\n>>> limit 40 ;\n>>> QUERY PLAN\n>>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=6805.77..6805.87 rows=40 width=96) (actual time=5.731..5.905\n>>> rows=40 loops=1)\n>>> -> Sort (cost=6805.77..6808.50 rows=1094 width=96) (actual\n>>> time=5.726..5.785 rows=40 loops=1)\n>>> Sort Key: id\n>>> -> Index Scan using urlusermaps_userid on wd_urlusermaps a\n>>> (cost=0.00..6750.55 rows=1094 width=96) (actual time=0.544..5.616\n>>> rows=41 loops=1)\n>>> Index Cond: (userid = 219177)\n>>> Filter: (\"share\" = 1)\n>>> Total runtime: 6.013 ms\n>>> (7 rows)\n>>>\n>>> the userid=219177 got 2000+ record and around 40 shared=1, why above 2 query\n>>> shows so much difference?\n>>>\n>>> any hint would be greatly appreciated.\n>>>\n>>> -laser\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to [email protected] so that your\n>>> message can get through to the mailing list cleanly\n>>>\n>>>\n>>> \n>>> \n>>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>> \n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n> \n", "msg_date": "Wed, 06 Jun 2007 15:40:57 +0800", "msg_from": "weiping <[email protected]>", "msg_from_op": true, "msg_subject": "Re: different query plan because different limit # (Re:\n\tweird query plan)" }, { "msg_contents": "\"weiping\" <[email protected]> writes:\n\n> -> Index Scan using urlusermaps_userid on wd_urlusermaps\n> (cost=0.00..6750.55 rows=1094 width=4) (actual time=1.478..16.563 rows=41 loops=1)\n> Index Cond: 
(userid = 219177)\n> Filter: (\"share\" = 1)\n\nIt's estimating 1094 rows and getting 41 rows. You might considering raising\nthe statistics target for that table.\n\nDoes it get accurate estimates for the number of rows for each of these?\n\nexplain analyze select * from wd_urlusermaps where userid=219177 \nexplain analyze select * from wd_urlusermaps where share=1\n\n(the latter might take a while)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 06 Jun 2007 10:21:59 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: different query plan because different limit # (Re: weird query\n\tplan)" }, { "msg_contents": "weiping <[email protected]> writes:\n> -> Index Scan using urlusermaps_userid on wd_urlusermaps a\n> (cost=0.00..6750.55 rows=1094 width=96) (actual time=0.544..5.616\n> rows=41 loops=1)\n> Index Cond: (userid = 219177)\n> Filter: (\"share\" = 1)\n\n> the userid=219177 got 2000+ record and around 40 shared=1, why above 2 query\n> shows so much difference?\n\nProbably because the rowcount estimate is so far off (1094 vs 41).\n\nPossibly boosting the statistics target would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Jun 2007 10:40:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird query plan " } ]
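Following up on the two replies above, the per-column form of the statistics suggestion looks like the sketch below. The table and column names are taken from the thread; the target of 1000 simply mirrors the instance-wide setting the poster already uses, so treat the numbers as illustrative rather than a recommendation.

ALTER TABLE wd_urlusermaps ALTER COLUMN userid SET STATISTICS 1000;
ALTER TABLE wd_urlusermaps ALTER COLUMN "share" SET STATISTICS 1000;
ANALYZE wd_urlusermaps;

-- re-check how close the estimate now comes to the ~41 rows actually returned
EXPLAIN ANALYZE
SELECT id FROM wd_urlusermaps
WHERE "share" = 1 AND userid = 219177;

If the estimate stays near 1094, the mis-estimate is probably due to correlation between userid and share, which per-column statistics cannot capture; that would be consistent with the poster finding that rewriting the query, or nudging cpu_tuple_cost, is what actually changes the plan.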
[ { "msg_contents": "Hi there,\n\nWe run a small ISP with a FreeBSD/freeradius/postgresql 8.2.4 backend\nand 200+ users. Authentication happens via UAM/hotspot and I see a lot\nof authorisation and accounting packets that are handled via PL/PGSQL\nfunctions directly in the database.\n\nEverything seems to work 100% except that a few times a day I see\n\nJun 6 10:41:31 caligula postgres[57347]: [4-1] radiususer: LOG:\nduration: 19929.291 ms statement: SELECT fn_accounting_start(...)\n\nin my logs. I'm logging slow queries with log_min_duration_statement =\n500 in my postgresql.conf. Sometimes another query runs equally slow or\neven slower (I've seen 139 seconds!!!) a few minutes before or after as\nwell, but then everything is back to normal.\n\nEven though I haven't yet indexed my data I know that the system is\nperformant because my largest table (the accounting one) only has 5000+\nrows, the entire database is only a few MB's and I have plenty of memory\n(2GB), shared_buffers = 100MB and max_fsm_pages = 179200. Also from\nbriefly enabling\n\nlog_parser_stats = on\nlog_planner_stats = on\nlog_executor_stats = on\n\nI saw that most queries are 100% satisfied from cache so the disk\ndoesn't even get hit. Finally, the problem seems unrelated to load\nbecause it happens at 4am just as likely as at peak traffic time.\n\nWhat the heck could cause such erratic behaviour? I suspect some type of\nresource problem but what and how could I dig deeper?\n\nGunther\n\n", "msg_date": "Wed, 06 Jun 2007 21:20:54 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "VERY slow queries at random" }, { "msg_contents": "On Wed, Jun 06, 2007 at 09:20:54PM +0200, Gunther Mayer wrote:\n> \n> What the heck could cause such erratic behaviour? I suspect some type of\n> resource problem but what and how could I dig deeper?\n\nIs something (perhaps implicitly) locking the table? That will cause\nthis.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n", "msg_date": "Wed, 6 Jun 2007 15:56:49 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "Gunther Mayer wrote:\n> Hi there,\n>\n> We run a small ISP with a FreeBSD/freeradius/postgresql 8.2.4 backend\n> and 200+ users. Authentication happens via UAM/hotspot and I see a lot\n> of authorisation and accounting packets that are handled via PL/PGSQL\n> functions directly in the database.\n>\n> Everything seems to work 100% except that a few times a day I see\n>\n> Jun 6 10:41:31 caligula postgres[57347]: [4-1] radiususer: LOG:\n> duration: 19929.291 ms statement: SELECT fn_accounting_start(...)\n>\n> in my logs. I'm logging slow queries with log_min_duration_statement =\n> 500 in my postgresql.conf. Sometimes another query runs equally slow or\n> even slower (I've seen 139 seconds!!!) a few minutes before or after as\n> well, but then everything is back to normal.\n>\n> Even though I haven't yet indexed my data I know that the system is\n> performant because my largest table (the accounting one) only has 5000+\n> rows, the entire database is only a few MB's and I have plenty of memory\n> (2GB), shared_buffers = 100MB and max_fsm_pages = 179200. 
Also from\n> briefly enabling\n>\n> log_parser_stats = on\n> log_planner_stats = on\n> log_executor_stats = on\n>\n> I saw that most queries are 100% satisfied from cache so the disk\n> doesn't even get hit. Finally, the problem seems unrelated to load\n> because it happens at 4am just as likely as at peak traffic time.\n>\n> What the heck could cause such erratic behaviour? I suspect some type of\n> resource problem but what and how could I dig deeper? \n\nMaybe your hard drive is set to spin down after a certain period of \nidle, and since most all your data is coming from memory, then it might \nbe that on the rare occasion when it needs to hit the drive it's not \nspun up anymore.\n\nMaybe some other process is cranking up (cron jobs???) that are chewing \nup all your I/O bandwidth?\n\nHard to say. Anything in the system logs that would give you a hint? \nTry correlating them by the time of the slow pgsql queries.\n\n", "msg_date": "Wed, 06 Jun 2007 16:27:47 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "could be that the checkpoints are done too seldom.\nwhat is your wal checkpoint config?\n\nKristo\nOn 07.06.2007, at 0:27, Scott Marlowe wrote:\n\n> Gunther Mayer wrote:\n>> Hi there,\n>>\n>> We run a small ISP with a FreeBSD/freeradius/postgresql 8.2.4 backend\n>> and 200+ users. Authentication happens via UAM/hotspot and I see a \n>> lot\n>> of authorisation and accounting packets that are handled via PL/PGSQL\n>> functions directly in the database.\n>>\n>> Everything seems to work 100% except that a few times a day I see\n>>\n>> Jun 6 10:41:31 caligula postgres[57347]: [4-1] radiususer: LOG:\n>> duration: 19929.291 ms statement: SELECT fn_accounting_start(...)\n>>\n>> in my logs. I'm logging slow queries with \n>> log_min_duration_statement =\n>> 500 in my postgresql.conf. Sometimes another query runs equally \n>> slow or\n>> even slower (I've seen 139 seconds!!!) a few minutes before or \n>> after as\n>> well, but then everything is back to normal.\n>>\n>> Even though I haven't yet indexed my data I know that the system is\n>> performant because my largest table (the accounting one) only has \n>> 5000+\n>> rows, the entire database is only a few MB's and I have plenty of \n>> memory\n>> (2GB), shared_buffers = 100MB and max_fsm_pages = 179200. Also from\n>> briefly enabling\n>>\n>> log_parser_stats = on\n>> log_planner_stats = on\n>> log_executor_stats = on\n>>\n>> I saw that most queries are 100% satisfied from cache so the disk\n>> doesn't even get hit. Finally, the problem seems unrelated to load\n>> because it happens at 4am just as likely as at peak traffic time.\n>>\n>> What the heck could cause such erratic behaviour? I suspect some \n>> type of\n>> resource problem but what and how could I dig deeper?\n>\n> Maybe your hard drive is set to spin down after a certain period of \n> idle, and since most all your data is coming from memory, then it \n> might be that on the rare occasion when it needs to hit the drive \n> it's not spun up anymore.\n>\n> Maybe some other process is cranking up (cron jobs???) that are \n> chewing up all your I/O bandwidth?\n>\n> Hard to say. Anything in the system logs that would give you a \n> hint? 
Try correlating them by the time of the slow pgsql queries.\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 7 Jun 2007 10:09:25 +0300", "msg_from": "Kristo Kaiv <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Jun 06, 2007 at 09:20:54PM +0200, Gunther Mayer wrote:\n> \n>> What the heck could cause such erratic behaviour? I suspect some type of\n>> resource problem but what and how could I dig deeper?\n>> \n>\n> Is something (perhaps implicitly) locking the table? That will cause\n> this.\n> \nThere are a whole bunch of update queries that fire all the time but \nafaik none of them ever lock the entire table. To the best of my \nknowledge UPDATE ... WHERE ... only locks those rows that it actually \noperates on, in my case this is always a single row. No explicit locking \nis done anywhere, but perhaps you're right and it is a locking issue. \nQuestion is, how do I find out about locks at the time when I only get \ntold about the slow query *after* it has completed and postgres has told \nme so by logging a slow query entry in my logs?\n\nGunther\n", "msg_date": "Thu, 07 Jun 2007 16:22:47 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "Scott Marlowe wrote:\n> Gunther Mayer wrote:\n>> Hi there,\n>>\n>> We run a small ISP with a FreeBSD/freeradius/postgresql 8.2.4 backend\n>> and 200+ users. Authentication happens via UAM/hotspot and I see a lot\n>> of authorisation and accounting packets that are handled via PL/PGSQL\n>> functions directly in the database.\n>>\n>> Everything seems to work 100% except that a few times a day I see\n>>\n>> Jun 6 10:41:31 caligula postgres[57347]: [4-1] radiususer: LOG:\n>> duration: 19929.291 ms statement: SELECT fn_accounting_start(...)\n>>\n>> in my logs. I'm logging slow queries with log_min_duration_statement =\n>> 500 in my postgresql.conf. Sometimes another query runs equally slow or\n>> even slower (I've seen 139 seconds!!!) a few minutes before or after as\n>> well, but then everything is back to normal.\n>>\n>> Even though I haven't yet indexed my data I know that the system is\n>> performant because my largest table (the accounting one) only has 5000+\n>> rows, the entire database is only a few MB's and I have plenty of memory\n>> (2GB), shared_buffers = 100MB and max_fsm_pages = 179200. Also from\n>> briefly enabling\n>>\n>> log_parser_stats = on\n>> log_planner_stats = on\n>> log_executor_stats = on\n>>\n>> I saw that most queries are 100% satisfied from cache so the disk\n>> doesn't even get hit. Finally, the problem seems unrelated to load\n>> because it happens at 4am just as likely as at peak traffic time.\n>>\n>> What the heck could cause such erratic behaviour? I suspect some type of\n>> resource problem but what and how could I dig deeper? \n>\n> Maybe your hard drive is set to spin down after a certain period of \n> idle, and since most all your data is coming from memory, then it \n> might be that on the rare occasion when it needs to hit the drive it's \n> not spun up anymore.\nI doubt that as a serious amount of logging is taking place on the box \nall the time which goes straight to disk. Also, no disk in the world \nwould take more than a minute to spin up...\n> Maybe some other process is cranking up (cron jobs???) 
that are \n> chewing up all your I/O bandwidth?\nHmm, I investigated that too but if that was the case the queries would \nrun slow always at the same time of the day.\n> Hard to say. Anything in the system logs that would give you a hint? \n> Try correlating them by the time of the slow pgsql queries.\nNothing relevant in the system logs at the time of the slow query \nappearing. I have in the mean time tweaked syslog-ng.conf such that as \nsoon as it detects a \"duration: <greater than 500>ms\" log message it \nspawns top and top -m io and redirects the output to file. At least in \nthat way I can check what's keeping the system busy immediately *after* \na slow query has occured. Of course now Murphy's law has it that since \nI've done that (30 hours ago) not a single slow query has fired, but \nhey, I'll look at the results once I have them.\n\nOn another note, autovacuum couldn't cause such issues, could it? I do \nhave autovacuum enabled (autovacuum=on as well as \nstats_start_collector=on, stats_block_level = on and stats_row_level = \non), is there any possibility that autovacuum is not as resource \nfriendly as advertised?\n\nGunther\n", "msg_date": "Thu, 07 Jun 2007 17:32:52 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "Kristo Kaiv wrote:\n> could be that the checkpoints are done too seldom.\n> what is your wal checkpoint config?\n>\nwal checkpoint config is on pg defaults everywhere, all relevant config \noptions are commented out. I'm no expert in wal stuff but I don't see \nhow that could cause the problem?\n\nGunther\n", "msg_date": "Thu, 07 Jun 2007 17:38:05 +0200", "msg_from": "Gunther Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "On Thu, Jun 07, 2007 at 04:22:47PM +0200, Gunther Mayer wrote:\n> There are a whole bunch of update queries that fire all the time but \n> afaik none of them ever lock the entire table. To the best of my \n> knowledge UPDATE ... WHERE ... only locks those rows that it actually \n> operates on, in my case this is always a single row.\n\nWell that shouldn't be biting you, then (you're not in SERIALIZABLE\nmode, right?). The other obvious bit would be checkpoint storms. \nWhat's your bgwriter config like?\n\n> Question is, how do I find out about locks at the time when I only get \n> told about the slow query *after* it has completed and postgres has told \n> me so by logging a slow query entry in my logs?\n\nYou can't :(\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Thu, 7 Jun 2007 11:45:50 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "Gunther Mayer wrote:\n\n> On another note, autovacuum couldn't cause such issues, could it? I do \n> have autovacuum enabled (autovacuum=on as well as \n> stats_start_collector=on, stats_block_level = on and stats_row_level = \n> on), is there any possibility that autovacuum is not as resource \n> friendly as advertised?\n\nHmm. I am not sure where did you read that but I don't think it has\never been stated that autovacuum is resource friendly in the default\nconfiguration (I, for one, have never tried, intended or wanted to state\nthat). 
I suggest tuning the autovacuum_vacuum_cost_delay parameters if\nyou want it to interfere less with your regular operation.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 7 Jun 2007 11:58:12 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "On Thu, 7 Jun 2007, Gunther Mayer wrote:\n\n> wal checkpoint config is on pg defaults everywhere, all relevant config \n> options are commented out. I'm no expert in wal stuff but I don't see how \n> that could cause the problem?\n\nCheckpoints are very resource intensive and can cause other processes \n(including your selects) to hang for a considerable period of time while \nthey are processing. With the default parameters, they can happen very \nfrequently. Normally checkpoint_segments and checkpoint_timeout are \nincreased in order to keep this from happening.\n\nThis would normally be an issue only if you're writing a substantial \namount of data to your tables. If there are a lot of writes going on, you \nmight get some improvement by adjusting those parameters upward; the \ndefaults are pretty low. Make sure you read \nhttp://www.postgresql.org/docs/8.2/static/wal-configuration.html first so \nyou know what you're playing with, there are some recovery implications \ninvoved.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 7 Jun 2007 15:42:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" }, { "msg_contents": "\nOn 07.06.2007, at 22:42, Greg Smith wrote:\n\n> On Thu, 7 Jun 2007, Gunther Mayer wrote:\n>\n>> wal checkpoint config is on pg defaults everywhere, all relevant \n>> config options are commented out. I'm no expert in wal stuff but I \n>> don't see how that could cause the problem?\n>\n> Checkpoints are very resource intensive and can cause other \n> processes (including your selects) to hang for a considerable \n> period of time while they are processing. With the default \n> parameters, they can happen very frequently. Normally \n> checkpoint_segments and checkpoint_timeout are increased in order \n> to keep this from happening.\n>\n> This would normally be an issue only if you're writing a \n> substantial amount of data to your tables. If there are a lot of \n> writes going on, you might get some improvement by adjusting those \n> parameters upward; the defaults are pretty low. Make sure you read \n> http://www.postgresql.org/docs/8.2/static/wal-configuration.html \n> first so you know what you're playing with, there are some recovery \n> implications invoved.\n\nI remember us having problems with 8.0 background writer, you might \nwant to try turning it off. Not sure if it behaves as badly in 8.2.\nincreasing wal buffers might be a good idea also.\n\nKristo\n\n", "msg_date": "Fri, 8 Jun 2007 11:27:18 +0300", "msg_from": "Kristo Kaiv <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VERY slow queries at random" } ]
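One gap the thread leaves open is how to tell locking from I/O stalls when the slowness is only noticed after the query has finished. A rough sketch of a probe to run from a second session while a slow query is still in flight, assuming 8.2's catalog column names (procpid and current_query in pg_stat_activity):

SELECT a.procpid,
       a.usename,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;

If this stays empty during a slow spell, locking is unlikely to be the cause, which points back at the checkpoint settings and autovacuum cost-delay tuning suggested above.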
[ { "msg_contents": "Question,\n\nDoes (pg_stat_get_db_blocks_fetched(oid)-pg_stat_get_db_blocks_hit(oid)*8) =\nnumber of KB read from disk for the listed database since the last server\nstartup?\n\nThanks,\n\nChris\n\nQuestion,Does (pg_stat_get_db_blocks_fetched(oid)-pg_stat_get_db_blocks_hit(oid)*8) = number of KB read from disk for the listed database since the last server startup?Thanks,Chris", "msg_date": "Wed, 6 Jun 2007 16:58:59 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is this true?" }, { "msg_contents": "On Wed, 2007-06-06 at 16:58 -0400, Chris Hoover wrote:\n> Question,\n> \n> Does (pg_stat_get_db_blocks_fetched(oid)-pg_stat_get_db_blocks_hit\n> (oid)*8) = number of KB read from disk for the listed database since\n> the last server startup?\n\nThat will give you the number of blocks requested from the OS. The OS\ndoes it's own caching, and so many of those reads might come from the OS\nbuffer cache, and not the disk itself.\n\nAlso, if you're concerned with the number since the last server restart,\nmake sure you have stats_reset_on_server_start set appropriately.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 06 Jun 2007 14:37:04 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this true?" } ]
[ { "msg_contents": "Our usage pattern has recently left me with some very bloated database clusters. \n I have, in the past, scheduled downtime to run VACUUM FULL and tried CLUSTER \nas well, followed by a REINDEX on all tables. This does work, however the \nexclusive lock has become a real thorn in my side. As our system grows, I am \nhaving trouble scheduling enough downtime for either of these operations or a \nfull dump/reload. I do run VACUUM regularly, it's just that sometimes we need \nto go back and update a huge percentage of rows in a single batch due to \nchanging customer requirements, leaving us with significant table bloat.\n\nSo within the last few days my db cluster has grown from 290GB to 370GB and \nbecause of some other major data updates on my TO-DO list, I expect this to \ndouble and I'll be bumping up against my storage capacity.\n\nThe root of my question is due to my not understanding why the tables can't be \nin read-only mode while one of these is occurring? Since most of our usage is \nOLAP, this really wouldn't matter much as long as the users could still query \ntheir data while it was running. Is there some way I can allow users read-only \naccess to this data while things are cleaned up in the background? INSERTs can \nwait, SELECTs cannot.\n\nSo how do other people handle such a problem when downtime is heavily frowned \nupon? We have 24/7 access ( but again, the users only read data ).\n", "msg_date": "Wed, 06 Jun 2007 16:04:44 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "reclaiming disk space after major updates" }, { "msg_contents": "On Wed, Jun 06, 2007 at 04:04:44PM -0600, Dan Harris wrote:\n> of these operations or a full dump/reload. I do run VACUUM regularly, it's \n> just that sometimes we need to go back and update a huge percentage of rows \n> in a single batch due to changing customer requirements, leaving us with \n> significant table bloat.\n\nDo you need to update those rows in one transaction (i.e. is the\nrequirement that they all get updated such that the change only\nbecomes visible at once)? If not, you can do this in batches and\nvacuum in between. Batch updates are the prime sucky area in\nPostgres.\n\nAnother trick, if the table is otherwise mostly static, is to do the\nupdating in a copy of the table, and then use the transactional DDL\nfeatures of postgres to change the table names.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nEverything that happens in the world happens at some place.\n\t\t--Jane Jacobs \n", "msg_date": "Thu, 7 Jun 2007 15:20:25 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reclaiming disk space after major updates" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Jun 06, 2007 at 04:04:44PM -0600, Dan Harris wrote:\n>> of these operations or a full dump/reload. I do run VACUUM regularly, it's \n>> just that sometimes we need to go back and update a huge percentage of rows \n>> in a single batch due to changing customer requirements, leaving us with \n>> significant table bloat.\n> \n> Do you need to update those rows in one transaction (i.e. is the\n> requirement that they all get updated such that the change only\n> becomes visible at once)? If not, you can do this in batches and\n> vacuum in between. Batch updates are the prime sucky area in\n> Postgres.\n\nThey don't always have to be in a single transaction, that's a good idea to \nbreak it up and vacuum in between, I'll consider that. 
Thanks\n\n> \n> Another trick, if the table is otherwise mostly static, is to do the\n> updating in a copy of the table, and then use the transactional DDL\n> features of postgres to change the table names.\n\nI thought of this, but it seems to break other application logic that feeds a \nsteady streams of inserts into the tables.\n\nThanks again for your thoughts. I guess I'll just have to work around this \nproblem in application logic.\n\n\n", "msg_date": "Thu, 07 Jun 2007 15:26:56 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reclaiming disk space after major updates" }, { "msg_contents": "On Thu, Jun 07, 2007 at 03:26:56PM -0600, Dan Harris wrote:\n> \n> They don't always have to be in a single transaction, that's a good idea to \n> break it up and vacuum in between, I'll consider that. Thanks\n\nIf you can do it this way, it helps _a lot_. I've had to do this\nsort of thing, and breaking into groups of a couple thousand or so\nreally made the difference.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n", "msg_date": "Fri, 8 Jun 2007 10:03:54 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reclaiming disk space after major updates" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Jun 07, 2007 at 03:26:56PM -0600, Dan Harris wrote:\n>> They don't always have to be in a single transaction, that's a good idea to \n>> break it up and vacuum in between, I'll consider that. Thanks\n> \n> If you can do it this way, it helps _a lot_. I've had to do this\n> sort of thing, and breaking into groups of a couple thousand or so\n> really made the difference.\n> \n> A\n> \n\nOne more point in my original post.. For my own education, why does VACUUM FULL \nprevent reads to a table when running (I'm sure there's a good reason)? I can \ncertainly understand blocking writes, but if I could still read from it, I'd \nhave no problems at all!\n\n-Dan\n", "msg_date": "Fri, 08 Jun 2007 08:29:24 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] reclaiming disk space after major updates" }, { "msg_contents": "On Fri, Jun 08, 2007 at 08:29:24AM -0600, Dan Harris wrote:\n> \n> One more point in my original post.. For my own education, why does VACUUM \n> FULL prevent reads to a table when running (I'm sure there's a good \n> reason)? I can certainly understand blocking writes, but if I could still \n> read from it, I'd have no problems at all!\n\nIt has to take an exclusive lock, because it actually moves the bits\naround on disk. Since your SELECT query could be asking for data\nthat is actually in-flight, you lose. This is conceptually similar\nto the way defrag works on old FAT-type filesystems: if you used one,\nyou'll remember that when you were defragging your disk, if you did\nanything else on that disk the defrag would keep restarting. This\nwas because the OS was trying to move bits around, and when you did\nstuff, you screwed up its optimization. 
The database works\ndifferently, by taking an exclusive lock, but the basic conceptual\nproblem is the same.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n", "msg_date": "Fri, 8 Jun 2007 11:10:57 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] reclaiming disk space after major updates" } ]
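To make the batch-and-vacuum idea from this thread concrete, here is a minimal sketch; big_table, some_col and the 50000-row slices are made-up placeholders, and the point is only the shape of the loop. VACUUM cannot run inside a transaction block, so each UPDATE is issued as its own statement and commits before the vacuum runs:

UPDATE big_table SET some_col = 0 WHERE id >= 1     AND id <= 50000;
VACUUM big_table;
UPDATE big_table SET some_col = 0 WHERE id >= 50001 AND id <= 100000;
VACUUM big_table;
-- ...continue in key-range slices until the whole set has been updated

Each intermediate vacuum marks the space left behind by the previous batch as reusable, so later batches recycle it instead of growing the table the way a single huge UPDATE does.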
[ { "msg_contents": "Gang,\n\nI'm running a mid-size production 8.0 environment. I'd really like \nto upgrade to 8.2, so I've been doing some testing to make sure my \napp works well with 8.2, and I ran across this weirdness. I set up \nand configured 8.2 in the standard way, MacOSX Tiger, current \npatches, download src, configure, make, make install, initdb, start \nthe db, create a few users, dump out my 8.0 DB (its about 13 GB raw \ntext), load it into 8.2.4, vacuum analyze.\n\nThis is a simple query the shows some weird behavior. I have two \ntables, task and taskinstance. A taskinstance is tied to a campaign \nthrough the task table (taskinstance points at task which points at \ncampaign). Very simple. To select all the taskinstances associated \nwith a certain campaign, I use this query:\n\nselect id from taskinstance where taskid in (select id from task \nwhere campaignid = 75);\n\nNow, I know this could (and should) be rewritten to not use the WHERE \nx IN () style, but this is actually a sub-query to a larger query- \nThe bigger query was acting slow, and I've narrowed it down to this \nsnippet. Task has a total of ~2000 rows, in which 11 of them belong \nto campaign 75. TaskInstance has around 650,000 rows.\n\nThis query runs great on production under 8.0 (27ms), but under 8.2.4 \n(on my mac) I'm seeing times in excess of 50,000ms. Note that on \n8.2.4, if I run the query again, it gets successively faster \n(50,000ms->6000ms->27ms). Is this normal? If I change the \ncampaignid from 75 to another number, it jumps back to 50,000ms, \nwhich leads me to believe that postgresql is somehow caching the \nresults of the query and not figuring out a better way to run the query.\n\nIndexes:\nTaskinstance has \"taskid_taskinstance_key\" btree (taskid)\nTask has \"Task_campaignId_key\" btree (campaignid)\n\nExplain Outputs:\n\n-- 8.2\n\n\nexplain analyze select id from taskinstance where taskid in (select \nid from task where campaignid = 75);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n---------------------------------------------------------------------\n Nested Loop (cost=37.65..15068.50 rows=2301 width=4) (actual \ntime=99.986..50905.512 rows=881 loops=1)\n -> HashAggregate (cost=16.94..17.01 rows=7 width=4) (actual \ntime=0.213..0.236 rows=9 loops=1)\n -> Index Scan using \"Task_campaignId_key\" on task \n(cost=0.00..16.93 rows=7 width=4) (actual time=0.091..0.197 rows=9 \nloops=1)\n Index Cond: (campaignid = 76)\n -> Bitmap Heap Scan on taskinstance (cost=20.71..2143.26 \nrows=556 width=8) (actual time=421.423..5655.745 rows=98 loops=9)\n Recheck Cond: (taskinstance.taskid = task.id)\n -> Bitmap Index Scan on taskid_taskinstance_key \n(cost=0.00..20.57 rows=556 width=0) (actual time=54.709..54.709 \nrows=196 loops=9)\n Index Cond: (taskinstance.taskid = task.id)\n Total runtime: 50907.264 ms\n(9 rows)\n\n\n\n-- 8.0\n\n explain analyze select id from taskinstance where taskid in (select \nid from task where campaignid = 75);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------------------------\n Nested Loop (cost=13.70..17288.28 rows=2640 width=4) (actual \ntime=0.188..21.496 rows=1599 loops=1)\n -> HashAggregate (cost=13.70..13.70 rows=8 width=4) (actual \ntime=0.153..0.217 rows=11 loops=1)\n -> Index Scan using \"Task_campaignId_key\" on task \n(cost=0.00..13.68 rows=8 width=4) (actual time=0.026..0.082 rows=11 \nloops=1)\n Index Cond: (campaignid = 
75)\n -> Index Scan using taskid_taskinstance_key on taskinstance \n(cost=0.00..2152.28 rows=563 width=8) (actual time=0.012..0.832 \nrows=145 loops=11)\n Index Cond: (taskinstance.taskid = \"outer\".id)\n Total runtime: 27.406 ms\n(7 rows)\n\nThe weird thing is that on 8.2, I don't see any sequential scans \ntaking place, it seems to be properly using the indexes.\n\nIf anyone has any ideas, I'd appreciate your thoughts. This one has \ngot me boggled. If I can provide any more information that would \nhelpful, please let me know.\n\nThanks for any light you could shed on my situation!\n\n/kurt\n\n", "msg_date": "Wed, 6 Jun 2007 19:27:27 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Weird 8.2.4 performance" }, { "msg_contents": "Kurt Overberg wrote:\n\n> Explain Outputs:\n> \n> -- 8.2\n> \n> \n\n> -> Bitmap Heap Scan on taskinstance (cost=20.71..2143.26 rows=556 \n> width=8) (actual time=421.423..5655.745 rows=98 loops=9)\n> Recheck Cond: (taskinstance.taskid = task.id)\n> -> Bitmap Index Scan on taskid_taskinstance_key \n> (cost=0.00..20.57 rows=556 width=0) (actual time=54.709..54.709 rows=196 \n> loops=9)\n\n> -- 8.0\n> \n\n> -> Index Scan using taskid_taskinstance_key on taskinstance \n> (cost=0.00..2152.28 rows=563 width=8) (actual time=0.012..0.832 rows=145 \n> loops=11)\n\n\n8.2 is deciding to use a bitmap index scan on taskid_taskinstance_key, \nwhich seems to be slower (!) than a plain old index scan that 8.0 is \nusing. A dirty work around is to disable bitmap scans via:\n\nSET enable_bitmapscan=off\n\nbut it is probably worthwhile to try to find out *why* the bitmap scan \nis 1) slow and 2) chosen at all given 1).\n\nOne thought that comes to mind - is work_mem smaller on your 8.2 system \nthan the 8.0 one? (or in fact is it very small on both?). Also it might \nbe interesting to see your non-default postgresql.conf settings for both \nsystems.\n\nCheers\n\nMark\n", "msg_date": "Thu, 07 Jun 2007 12:01:51 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "On Jun 6, 2007, at 18:27 , Kurt Overberg wrote:\n\n> select id from taskinstance where taskid in (select id from task \n> where campaignid = 75);\n>\n> Now, I know this could (and should) be rewritten to not use the \n> WHERE x IN () style, but this is actually a sub-query to a larger \n> query.\n\nGranted, it won't explain why this particular query is slower in 8.2, \nbut it shouldn't be to hard to drop in something like\n\nSELECT id\nFROM taskinstance\nNATURAL JOIN (\n SELECT id AS taskid, campaignid\n FROM tasks) t\nWHERE campaignid = 75\n\nAIUI, the planner can sometimes rewrite IN as a join, but I don't \nknow whether or not that's is happening in this case. I'm guessing \nnot as I see nested loops in the plans. (I'm a novice at reading \nplans, so take this with at least a teaspoon of salt. :) )\n\n> if I run the query again, it gets successively faster (50,000ms- \n> >6000ms->27ms). Is this normal? 
If I change the campaignid from \n> 75 to another number, it jumps back to 50,000ms, which leads me to \n> believe that postgresql is somehow caching the results of the query \n> and not figuring out a better way to run the query.\n\nAs the query is repeated, the associated rows are probably already in \nmemory, leading to the speedups you're seeing.\n\n> -- 8.2\n\n> Recheck Cond: (taskinstance.taskid = task.id)\n> -> Bitmap Index Scan on taskid_taskinstance_key \n> (cost=0.00..20.57 rows=556 width=0) (actual time=54.709..54.709 \n> rows=196 loops=9)\n> Index Cond: (taskinstance.taskid = task.id)\n\n\n> -- 8.0\n\n> -> Index Scan using taskid_taskinstance_key on taskinstance \n> (cost=0.00..2152.28 rows=563 width=8) (actual time=0.012..0.832 \n> rows=145 loops=11)\n> Index Cond: (taskinstance.taskid = \"outer\".id)\n\nI see that the row estimates in both of the query plans are off a \nlittle. Perhaps increasing the statistics would help? Also, you can \nsee that 8.2 is using bitmap scans, which aren't available in 8.0. \nPerhaps try setting enable_bitmapscan off and running the query again \nto see if there's a performance difference.\n\n> The weird thing is that on 8.2, I don't see any sequential scans \n> taking place, it seems to be properly using the indexes.\n\nAs an aside, whether the planner decides to use a sequential scan or \nan index has more to do with the particular query: indexes are not a \nguaranteed performance win.\n\nHope this helps a bit.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Wed, 6 Jun 2007 19:14:17 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "Mark Kirkwood wrote:\n> \n> 8.2 is deciding to use a bitmap index scan on taskid_taskinstance_key, \n> which seems to be slower (!) than a plain old index scan that 8.0 is \n> using. A dirty work around is to disable bitmap scans via:\n\nI'm having difficulty figuring out why it's doing this at all. There's \nonly one index involved, and it's over the primary-key to boot!\n\nAn EXPLAIN ANALYSE with enable_bitmapscan off should say why PG thinks \nthe costs are cheaper than they actually are.\n\nPS - well worded question Kurt. All the relevant information neatly laid \nout, explain analyse on both platforms - you should be charging to let \npeople help ;-)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Jun 2007 10:23:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "On Wed, Jun 06, 2007 at 07:27:27PM -0400, Kurt Overberg wrote:\n> This query runs great on production under 8.0 (27ms), but under 8.2.4 \n> (on my mac) I'm seeing times in excess of 50,000ms. Note that on \n> 8.2.4, if I run the query again, it gets successively faster \n> (50,000ms->6000ms->27ms). Is this normal?\n\nYour production server probably has all the data in your cache, and your Mac\nhas not. Furthermore, they seem to be running on different data sets, judging\nfrom your EXPLAIN ANALYZE.\n\nHow big did you say these tables were?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 7 Jun 2007 11:35:27 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "On Thu, Jun 07, 2007 at 11:35:27AM +0200, Steinar H. 
Gunderson wrote:\n> How big did you say these tables were?\n\nSorry, you already said that -- 650k rows for one of them. If that table\ndoesn't fit in the cache on your Mac, you pretty much lose. From the EXPLAIN\noutput, it looks like it fits very nicely in cache on your server. Thus, I\ndon't think the difference is between 8.0 and 8.2, but rather your production\nserver and your test machine.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 7 Jun 2007 11:43:04 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Thu, Jun 07, 2007 at 11:35:27AM +0200, Steinar H. Gunderson wrote:\n> \n> If that table\n> doesn't fit in the cache on your Mac, you pretty much lose. From the EXPLAIN\n> output, it looks like it fits very nicely in cache on your server. Thus, I\n> don't think the difference is between 8.0 and 8.2, but rather your production\n> server and your test machine.\n>\n\nThat's a good point, however its not immediately obvious that the \nproduction server is *not* running MacOSX Tiger (or has any more \nmemory)... Kurt can you post the relevant specs for the the 8.0 and 8.2 \nboxes?\n\nCheers\n\nMark\n\n", "msg_date": "Thu, 07 Jun 2007 23:12:52 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "Thank you everyone for the replies. I'll try to answer everyone's \nquestions in one post.\n\n* Regarding production/mac memory and cache usage. This query HAS \nbeen running on 8.0 on my Mac, I just got that particular query \nexplain from our production system because I had to nuke my local 8.0 \ndatabase before installing 8.2.4 due to disk space limitations. The \nquery that this sample query is part of run in under 5 seconds when I \nwas running 8.0 locally on my mac, and it did a bunch of agregations \nbased on task instance.\n\n* work_mem is set to 1 megabyte (the default) on both 8.0 and 8.2.4.\n\n* setting enable_bitmapscan = false on 8.2.4\n\n0605=# explain analyze select id from taskinstance where taskid in \n(select id from task where campaignid = 76);\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------\nNested Loop (cost=16.94..15484.61 rows=2309 width=4) (actual \ntime=44.751..8498.689 rows=1117 loops=1)\n -> HashAggregate (cost=16.94..17.01 rows=7 width=4) (actual \ntime=0.144..0.194 rows=10 loops=1)\n -> Index Scan using \"Task_campaignId_key\" on task \n(cost=0.00..16.93 rows=7 width=4) (actual time=0.069..0.116 rows=10 \nloops=1)\n Index Cond: (campaignid = 51)\n -> Index Scan using taskid_taskinstance_key on taskinstance \n(cost=0.00..2202.73 rows=554 width=8) (actual time=20.305..849.640 \nrows=112 loops=10)\n Index Cond: (taskinstance.taskid = task.id)\nTotal runtime: 8499.599 ms\n\n...FWIW, this query returns about 900 rows. TaskInstance is a fairly \nlarge table in width (20 columns, about 15 are varchar, 3 timestamps \nand a few ints)\nand height (650,000) rows. I can't really run the same query \nmultiple times due to caching, so I change up \"campaignid\". Is there \na way to flush that cache? Turning off bitmap scans definitely seems \nto help things, but I'm concerned that when/if I flip my production \nmachine, I'm going to run into who-knows-what. 
I don't really have a \nset of SQL acceptance tests to test jumping from rev to rev (I know I \nshould- BAD DEVELOPER, BAD!).\n\n* Configuration\n\n- My production environment is running RedHat 2.6.9.ELsmp on a server \nwith 16GB of memory\n\n- My old 8.0 database on my mac only had this modified from default:\n\nshared_buffers = 100\nwork_mem = 1024\n\n- 8.2.4 database seemed to go through some sort of auto-config when I \ninstalled it, settings I think are different are as follows:\n\nshared_buffers = 128MB # min 128kB or \nmax_connections*16kB\nwork_mem = 100MB # when I ran the original \nquery, this was set to 1MB, increased on Mark Kirkwood's advice, \nseemed to help a bit but not really\n\n8.2.4 Database size- 25 GB (from du -sh on the directory 'base')\n\n* Richard Huxton\n\nThanks for the kind words- I'm glad I was able to 'ask a good \nquestion'. I'm very new to this mailing list, but I'm on many Java/ \nStruts/Perl mailing lists and have seen enough poorly worded/spelled/ \nasked questions to last a lifetime. My situation is: I'm the senior \n(read: first) developer at a small but growing startup. Everything I \nknow about PostgreSQL I've learned over the past 4 years in which our \ntiny little DB grew from one database with 100 users to over a 4 node \nSlony setup 300,000 users. Somehow, I'm not sure why, but I find \nmyself in the awkward position of being the 'go-to guy' for all \ndatabase related stuff at my company. What I don't know could fill \nvolumes, but I've been able to keep the durn database running for \nover 4 years (which is mostly a testament to how awesome PostgreSQL \nis)- so when I hit something that makes no sense, I KNOW that if I \nhave any hope of getting one of ye postgresql gods to help me with an \nobscure, non-sensical problem such as this one, I'd better include as \nmuch context as possible. :-) FWIW- we're looking to hire a \nPostgreSQL hired gun to help me with this and many other things. \nIdeally, that person would be in Boston, MA, USA and be able to come \ninto the office, but we'd consider remote people too. If you're \ninterested, drop me a line.\n\nThanks again for the replies, gang. Have there been many reported \nperformance related problems regarding people upgrading from 8.0->8.2?\n\nIs there a primer somewhere on how to read EXPLAIN output?\n\nThanks again for helping me with this...\n\n/kurt\n\n\n\nOn Jun 7, 2007, at 5:23 AM, Richard Huxton wrote:\n\n> Mark Kirkwood wrote:\n>> 8.2 is deciding to use a bitmap index scan on \n>> taskid_taskinstance_key, which seems to be slower (!) than a plain \n>> old index scan that 8.0 is using. A dirty work around is to \n>> disable bitmap scans via:\n>\n> I'm having difficulty figuring out why it's doing this at all. \n> There's only one index involved, and it's over the primary-key to \n> boot!\n>\n> An EXPLAIN ANALYSE with enable_bitmapscan off should say why PG \n> thinks the costs are cheaper than they actually are.\n>\n> PS - well worded question Kurt. 
All the relevant information neatly \n> laid out, explain analyse on both platforms - you should be \n> charging to let people help ;-)\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Thu, 7 Jun 2007 07:18:22 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "On Thu, Jun 07, 2007 at 07:18:22AM -0400, Kurt Overberg wrote:\n> - My production environment is running RedHat 2.6.9.ELsmp on a server \n> with 16GB of memory\n\nSeriously, this (the RAM amount) _is_ all the difference. (You don't say how\nmuch RAM is in your Mac, but something tells me it's not 16GB.) If you install\n8.2.4 on your server, there's no reason why the query you pasted shouldn't be\nat least as fast as on 8.0.\n\n> Is there a primer somewhere on how to read EXPLAIN output?\n\nYes, the documentation contains one.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 7 Jun 2007 13:28:07 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> ... Turning off bitmap scans definitely seems \n> to help things,\n\nI really seriously doubt that. On queries like this, where each inner\nscan is fetching a couple hundred rows, the small extra overhead of a\nbitmap scan should easily pay for itself. I think you're looking\nentirely at caching effects that allow a re-read of the same data to\ngo faster.\n\nYou might try running the same query plan several times in a row and\nnoting the lowest time, then repeat for the other query plan. 
This will\nget you comparable fully-cached times, which I bet will be very close\nto the same.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Jun 2007 09:32:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance " }, { "msg_contents": "Le jeudi 07 juin 2007, Kurt Overberg a écrit :\n> Is there a primer somewhere on how to read EXPLAIN output?\n\nThose Robert Treat slides are a great reading:\n http://www.postgresql.org/communityfiles/13.sxi\n\nRegards,\n-- \ndim\n", "msg_date": "Thu, 7 Jun 2007 15:54:31 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: {Spam} Re: Weird 8.2.4 performance" }, { "msg_contents": "Kurt Overberg wrote:\n> work_mem = 100MB # when I ran the original query, \n> this was set to 1MB, increased on Mark Kirkwood's advice, seemed to help \n> a bit but not really\n> \n\nFor future reference, be careful with this parameter, as *every* \nconnection will use this much memory for each sort or hash (i.e it's not \nshared and can be allocated several times by each connection!)...yeah, I \nknow I suggested increasing it to see what effect it would have :-).\n\nAnd I'd agree with Steiner and others, looks like caching effects are \nthe cause of the timing difference between production and the mac!\n\nCheers\n\nMark\n\n\n\n", "msg_date": "Fri, 08 Jun 2007 11:04:54 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" }, { "msg_contents": "Mark Kirkwood wrote:\n> Kurt Overberg wrote:\n>> work_mem = 100MB # when I ran the original \n>> query, this was set to 1MB, increased on Mark Kirkwood's advice, \n>> seemed to help a bit but not really\n>>\n> \n> For future reference, be careful with this parameter, as *every* \n> connection will use this much memory for each sort or hash (i.e it's not \n> shared and can be allocated several times by each connection!)...yeah, I \n> know I suggested increasing it to see what effect it would have :-).\n\nThis is however a parameter that can be set on the fly for the specific \nquery.\n\nJoshua D. Drake\n\n\n> \n> And I'd agree with Steiner and others, looks like caching effects are \n> the cause of the timing difference between production and the mac!\n> \n> Cheers\n> \n> Mark\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 07 Jun 2007 16:09:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird 8.2.4 performance" } ]
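For reference, the IN() subquery from this thread can be written as the plain join suggested early on; since task.id is the primary key, each taskinstance matches at most one task and the join returns the same rows (this is a sketch of the snippet only, not of the larger query it was cut from):

SELECT ti.id
FROM taskinstance ti
JOIN task t ON t.id = ti.taskid
WHERE t.campaignid = 75;

-- the knobs discussed above can be flipped per session while experimenting:
SET work_mem = '100MB';
SET enable_bitmapscan = off;   -- for comparison runs only, not a production fix
EXPLAIN ANALYZE SELECT ti.id FROM taskinstance ti JOIN task t ON t.id = ti.taskid WHERE t.campaignid = 75;
RESET work_mem;
RESET enable_bitmapscan;

Running the same EXPLAIN ANALYZE a few times and comparing the best timings, as Tom Lane suggests above, separates plan differences from cache effects.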
[ { "msg_contents": "Hello,\n\n\nPostgres: 8.2\nos: Linux 4CPU, 4 GB RAM, Raid 1, 32 bit system\nwork_mem: 600 Mb\n\n\nI have some tables which may become quite large (currently up to 6 Gb) .\nI initially fill them using copy from (files) .\n\nThe import is fast enough as I only have a primary key on the table:\nabout 18 minutes\n(over 300 Mb/minute)\n\nThen I need 5 additional indexes on it. Creation time: 30 minutes\n\n\nsubsequently I compute some aggregations which need 4 hours and 30\nminutes additional time\n\n\nAnd now the problem:\n\nIf I get additional data for the table, the import become much more\nslower due to the indexes (about 30 times slower !):\n\nThe performance degradation is probably due to the fact that all\nindexs are too large to be kept in memory. \nMoreover I guess that the indexes fill factors are too high (90%)\n\nDuring this second import, I have about 20% iowait time.\n\n\n\nThe usual solution is to drop the indexes before the second import and\nrebuild them afterwards, but I feel unconfident doing this as I don't\nknow how the system will react if some SELECT statements occures when\nthe index are missing. I can hardly avoid this.\n\n\nSo my idea for the second import process:\n\n\n1) make a copy of the table:\n\n create table B as select * from table A;\n alter table B add constraint B_pk primary key (id);\n\n\n2) import the new data in table B\n\n copy B from file;\n\n3) create the required indexes on B\n\n create index Bix_1 on B..\n create index Bix_2 on B..\n create index Bix_2 on B..\n create index Bix_2 on B..\n \n4) replace table A with table B\n\n alter table A renam to A_trash;\n alter table B renam to A;\n drop table A_trash;\n\n (and rename the indexes to get the original state)\n \n \n \n \n \n This seems to work but with side effects:\n \n The only objects that refer to the tables are functions and indexes.\n \nIf a function is called within a same session before and after the table\nrenaming, the second attempt fails (or use the table A_trash if it still\nexists). So I should close the session and start a new one before\nfurther processing. Errors in other live sessions are acceptable, but\nmaybe you know a way to avoid them?)\n\n\n\nAnd now a few questions :-)\n\n- do you see any issue that prevent this workflow to work?\n\n- is there any other side effect to take care of ?\n\n- what is the maximum acceptable value for the parameter work_mem for my\nconfiguration \n (see the complete configuration below)\n \n- has anybody built a similar workflow ? \n\n- could this be a feature request to extend the capabilities of copy\nfrom ?\n\n\n\nThanks for your time and attention,\n\nMarc Mamin\n\n \n\n\n\n\n\ncopy from performance on large tables with indexes\n\n\n\n\nHello,\n\n\nPostgres: 8.2\nos: Linux 4CPU, 4 GB RAM, Raid 1, 32 bit system\nwork_mem: 600 Mb\n\n\nI have some tables which may become quite large (currently up to 6 Gb) .\nI initially fill them using copy from (files) .\n\nThe import is fast enough as I only have a primary key on the table:  about 18 minutes\n(over 300 Mb/minute)\n\nThen I need 5 additional indexes on it. Creation time: 30 minutes\n\n\nsubsequently I compute some aggregations which need 4 hours and 30 minutes additional time\n\n\nAnd now the problem:\n\nIf I get additional data for the table, the import become much more slower due to the indexes (about 30 times slower !):\nThe performance degradation  is probably  due to the fact that all indexs are too large to be kept in memory. 
\nMoreover I guess that the indexes fill factors are too high (90%)\n\nDuring this second import, I have about 20% iowait time.\n\n\n\nThe usual solution is to drop the indexes before the second import and rebuild them afterwards, but I feel unconfident doing this as I don't know how the system will react if some SELECT statements occures when the index are missing. I can hardly avoid this.\n\nSo my idea for the second import process:\n\n\n1) make a copy of the table:\n\n   create table B as select * from table A;\n   alter table B add constraint B_pk primary key (id);\n\n\n2) import the new data in table B\n\n   copy B from file;\n\n3) create the required indexes on B\n\n   create index Bix_1 on B..\n   create index Bix_2 on B..\n   create index Bix_2 on B..\n   create index Bix_2 on B..\n   \n4) replace table A with table B\n\n   alter table A renam to A_trash;\n   alter table B renam to A;\n   drop table A_trash;\n\n (and rename the indexes to get the  original state)\n \n \n \n \n \n This seems to work but with side effects:\n \n The only objects that refer to the tables are functions and indexes.\n \nIf a function is called within a same session before and after the table renaming, the second attempt fails (or use the table A_trash if it still exists). So I should close the session and start a new one before further processing. Errors in other live sessions are acceptable, but maybe you know a way to avoid them?)\n\n\nAnd now a few questions :-)\n\n- do you see any issue that prevent this workflow to work?\n\n- is there any other side effect to take care of ?\n\n- what is the maximum acceptable value for the parameter work_mem for my configuration \n  (see the complete configuration below)\n  \n- has anybody built a similar workflow ?  \n\n- could this be a feature request to extend the capabilities of copy from ?\n\n\n\nThanks for your time and attention,\n\nMarc Mamin", "msg_date": "Thu, 7 Jun 2007 11:17:40 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "copy from performance on large tables with indexes" } ]
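One way to make step 4 above safer for concurrent readers is to run both renames inside a single transaction, so no other session ever sees the table missing, and step 3 usually goes faster with a larger maintenance_work_mem. A sketch using the names from the example above (the column names and the 512MB value are placeholders to adjust):

    SET maintenance_work_mem = '512MB';   -- only affects this session's index builds
    CREATE INDEX Bix_1 ON B (col1);       -- hypothetical columns
    CREATE INDEX Bix_2 ON B (col2);
    -- ... and the remaining indexes ...

    BEGIN;
    ALTER TABLE A RENAME TO A_trash;
    ALTER TABLE B RENAME TO A;
    ALTER INDEX Bix_1 RENAME TO Aix_1;    -- repeat for each index
    COMMIT;

    DROP TABLE A_trash;

Sessions that already executed functions against the old A keep their cached plans and still need to reconnect, as described above, and concurrent queries simply block briefly on the exclusive lock the renames take.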
[ { "msg_contents": "On a FreeBSD system, is page size for shared_buffers calculation 8K? And is\npage size for shmall calculation 4K? The documentation hints at these\nvalues. Anyone know?\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn a FreeBSD system, is page size for shared_buffers calculation 8K?\nAnd is page size for shmall calculation 4K? The documentation hints at\nthese values. Anyone know?-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Thu, 7 Jun 2007 09:47:52 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": true, "msg_subject": "How Are The Variables Related?" } ]
[ { "msg_contents": "About six months ago, our normally fast postgres server started \nhaving performance issues. Queries that should have been instant were \ntaking up to 20 seconds to complete (like selects on the primary key \nof a table). Running the same query 4 times in a row would yield \ndramatically different results... 1.001 seconds, 5 seconds, 22 \nseconds, 0.01 seconds, to complete.\n\nAt the time we upgraded the hardware and the performance problems \nwent away. But I did not feel like we had solved the underlying problem.\n\nNow, six months later, the same thing is happening... and I'm kind of \nglad because now, I'd like to find out what the real issue is. I'm \njust starting to diagnose it so I don't know a lot yet, but what I do \nknow, I'll share with you here in the hopes of starting off on the \nright track.\n\nI've already described the main symptom. Here are some other random \nobservations:\n- The server log shows frequent \"archived transaction log file\" \nentries. Usually once every 10 minutes or so, but sometimes 2 or 3 \nper minute.\n- The server box seems otherwise to be responsive. CPU sits at about \n90% idle.\n- When queries are especially slow, the server shows a big spike in \nread/write activity.\n- This morning I did a VACUUM ANALYZE. It seemed to help for 30 \nminutes or so, but then it was back to being slowish. I'd hate to \nschedule these because it feels more like a band-aid. For a long time \nwe've been doing just fine with autovacuum, so why start scheduling \nvacuums now?\n\nHere's info about our configuration. Any advise/pointers would be \nmuch appreciated. Thanks!\n\nComputer: Mac Pro Dual Core Intel\nOperating System: Mac OS 10.4.7 Client\nMemory: 4GB RAM\nData Drives: 3 drives in a software RAID (internal)\nLog/Backup Drive: 1 (the startup disk, internal)\n\nPostgres Version: 8.1.4\nData Size: 5.1 GB\n# of Tables: 60\nSize of Tables: Most are under 100,000 records. A few are in the \nmillions. Largest is 7058497.\nAverage Number of Simultaneous Client Connections: 250\n\nmax_connections = 500\nshared_buffers = 10000\nwork_mem = 2048\nmax_stack_depth = 6000\neffective_cache_size = 30000\nfsync = on\nwal_sync_method = fsync\narchive_command = 'cp -i %p /Users/postgres/officelink/wal_archive/%f \n</dev/null'\nmax_fsm_pages = 150000\nstats_start_collector = on\nstats_row_level = on\nlog_min_duration_statement = 2000\nlog_line_prefix = '%t %h '\nsuperuser_reserved_connections = 3\nautovacuum = on\nautovacuum_naptime = 60\nautovacuum_vacuum_threshold = 150\nautovacuum_vacuum_scale_factor = 0.00000001\nautovacuum_analyze_scale_factor = 0.00000001\n\nsudo pico /etc/rc\nsysctl -w kern.sysv.shmmax=4294967296\nsysctl -w kern.sysv.shmall=1048576\n\nsudo pico /etc/sysctl.conf\nkern.maxproc=2048\nkern.maxprocperuid=800\nkern.maxfiles=40000\nkern.maxfilesperproc=30000\n\nProcesses: 470 total, 2 running, 4 stuck, 464 sleeping... 
587 \nthreads 13:34:50\nLoad Avg: 0.45, 0.34, 0.33 CPU usage: 5.1% user, 5.1% sys, \n89.7% idle\nSharedLibs: num = 157, resident = 26.9M code, 3.29M data, 5.44M \nLinkEdit\nMemRegions: num = 15307, resident = 555M + 25.5M private, 282M shared\nPhysMem: 938M wired, 934M active, 2.13G inactive, 3.96G used, \n43.1M free\nVM: 116G + 90.1M 1213436(0) pageins, 263418(0) pageouts\n\n PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD \nRSIZE VSIZE\n29804 postgres 0.0% 0:03.24 1 9 27 1.27M 245M \n175M 276M\n29720 postgres 0.0% 0:01.89 1 9 27 1.25M 245M \n125M 276M\n29714 postgres 0.0% 0:03.70 1 10 27 1.30M 245M \n215M 276M\n29711 postgres 0.0% 0:01.38 1 10 27 1.21M 245M \n107M 276M\n29707 postgres 0.0% 0:01.27 1 9 27 1.16M 245M \n78.2M 276M\n29578 postgres 0.0% 0:01.33 1 9 27 1.16M 245M \n67.8M 276M\n29556 postgres 0.0% 0:00.39 1 9 27 1.09M 245M \n91.8M 276M\n29494 postgres 0.0% 0:00.19 1 9 27 1.05M 245M \n26.5M 276M\n29464 postgres 0.0% 0:01.98 1 9 27 1.16M 245M \n88.8M 276M\n29425 postgres 0.0% 0:01.61 1 9 27 1.17M 245M \n112M 276M\n29406 postgres 0.0% 0:01.42 1 9 27 1.15M 245M \n118M 276M\n29405 postgres 0.0% 0:00.13 1 9 26 924K 245M \n17.9M 276M\n29401 postgres 0.0% 0:00.98 1 10 27 1.13M 245M \n84.4M 276M\n29400 postgres 0.0% 0:00.90 1 10 27 1.14M 245M \n78.4M 276M\n29394 postgres 0.0% 0:01.56 1 10 27 1.17M 245M \n111M 276M\n", "msg_date": "Thu, 7 Jun 2007 13:48:43 -0400", "msg_from": "Joe Lester <[email protected]>", "msg_from_op": true, "msg_subject": "Getting Slow" }, { "msg_contents": "On Thu, Jun 07, 2007 at 01:48:43PM -0400, Joe Lester wrote:\n> of a table). Running the same query 4 times in a row would yield \n> dramatically different results... 1.001 seconds, 5 seconds, 22 \n> seconds, 0.01 seconds, to complete.\n\n> - When queries are especially slow, the server shows a big spike in \n> read/write activity.\n\nMy bet is that you're maxing your disk subsystem somehow. The\nproblem with being I/O bound is that it doesn't matter how great you\ndo on average: if you have too much I/O traffic, it looks like you're\nstopped. Softraid can be expensive -- first thing I'd look at is to\nsee whether you are in fact hitting 100% of your I/O capacity and, if\nso, what your options are for getting more room there.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W.
Fowler\n", "msg_date": "Thu, 7 Jun 2007 14:37:48 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow" }, { "msg_contents": "On Thu, Jun 07, 2007 at 01:48:43PM -0400, Joe Lester wrote:\n> - The server log shows frequent \"archived transaction log file\" \n> entries. Usually once every 10 minutes or so, but sometimes 2 or 3 \n> per minute.\n\nSounds like you've got a lot of writes going. You might want more power in\nyour I/O?\n\n> Operating System: Mac OS 10.4.7 Client\n\nIs there a particular reason for this? It's not known to be the best server\nOS around -- it's hard to say that an OS change would do anything for your\nproblem, but it looks like an unusual choice.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 7 Jun 2007 20:50:19 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow" }, { "msg_contents": "Joe Lester wrote:\n\n> max_fsm_pages = 150000\n\nThis may be a bit too low -- it's just a little more than 1 GB, which\nmeans it might fail to keep track of all your tables (or it may not, if\nyou don't have many updates).\n\n> autovacuum_naptime = 60\n> autovacuum_vacuum_threshold = 150\n> autovacuum_vacuum_scale_factor = 0.00000001\n> autovacuum_analyze_scale_factor = 0.00000001\n\nThe scale factors seems awfully low. How about 0.01 instead and see if\nyou avoid vacuuming all your tables with every iteration ... have you\nnoticed how much work autovacuum is really doing? It may be too much.\n\nAlso if autovacuum is eating all your I/O you may want to look into\nthrottling it back a bit by setting autovacuum_vacuum_cost_delay to a\nnon-zero value.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"La tristeza es un muro entre dos jardines\" (Khalil Gibran)\n", "msg_date": "Thu, 7 Jun 2007 14:58:54 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow" }, { "msg_contents": "On Thu, 7 Jun 2007, Joe Lester wrote:\n\n> Memory: 4GB RAM\n>\n> shared_buffers = 10000\n> work_mem = 2048\n> effective_cache_size = 30000\n\nWith these parameters, your server has 80MB dedicated to its internal \ncaching, is making query decisions assuming the operating system only has \n240MB of memory available for its caching, and is only allowing individual \nclients to have a tiny amount of memory to work with before they have to \nswap things to disk. You're not giving it anywhere close to enough memory \nto effectively work with a 5GB database, and your later reports show \nyou're barely using 1/2 the RAM in this system usefully.\n\nMultiply all these parameters by 10X, restart your server, and then you'll \nbe in the right ballpark for a system with 4GB of RAM. There might be \nsome other tuning work left after that, but these values are so far off \nthat until you fix them it's hard to say what else needs to be done. 
See \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm for more \ninformation on this topic.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 7 Jun 2007 15:26:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Also if autovacuum is eating all your I/O you may want to look into\n> throttling it back a bit by setting autovacuum_vacuum_cost_delay to a\n> non-zero value.\n\nBTW, why is it that autovacuum_cost_delay isn't enabled by default?\nI can hardly believe that anyone will want to run it without that.\n*Especially* not with multiple workers configured by default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Jun 2007 21:46:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow " }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Also if autovacuum is eating all your I/O you may want to look into\n> > throttling it back a bit by setting autovacuum_vacuum_cost_delay to a\n> > non-zero value.\n> \n> BTW, why is it that autovacuum_cost_delay isn't enabled by default?\n> I can hardly believe that anyone will want to run it without that.\n> *Especially* not with multiple workers configured by default.\n\nJust because we haven't agreed a value. Default autovacuum parameters\nis something we should definitely discuss for 8.3.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 7 Jun 2007 22:24:28 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting Slow" } ]
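Collecting the advice from this thread in one place, a sketch of postgresql.conf starting points for the 4 GB, 8.1 machine described above; these are roughly the 10x values and the throttling settings suggested by the replies, meant as a baseline to measure against rather than tuned numbers:

    shared_buffers = 100000                  # 8 KB buffers, about 800 MB (was 10000)
    work_mem = 20480                         # KB per sort/hash, 20 MB (was 2048)
    effective_cache_size = 300000            # 8 KB pages, about 2.3 GB (was 30000)
    autovacuum_vacuum_scale_factor = 0.01    # was 0.00000001
    autovacuum_analyze_scale_factor = 0.01   # was 0.00000001
    autovacuum_vacuum_cost_delay = 20        # ms, throttles autovacuum I/O

Changing shared_buffers requires a server restart (and enough SysV shared memory configured); the autovacuum settings only need a reload.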
[ { "msg_contents": "Hey All,\n\nI have a table, let's call it A, whose primary key, a_id, is referenced\nin a second table, let's call it B. For each unique A.a_id there are\ngenerally many rows in B with the same a_id. My problem is that I want\nto delete a row in A when the last row in B that references it is\ndeleted. Right now I just query for rows in A that aren't referenced by\nB, and that worked great when the tables were small, but it takes over\nan hour now that the tables have grown larger (over 200 million rows in\nB and 14 million in A). The delete has to do a sequential scan of both\ntables since I'm looking for what's not in the indexes.\n\nI was going to try creating a trigger after delete on B for each row to\ncheck for more rows in B with the same a_id, and delete the row in A if\nnone found. In general I will be deleting 10's of millions of rows from\nB and 100's of thousands of rows from A on a daily basis. What do you\nthink? Does anyone have any other suggestions on different ways to\napproach this?\n\nThanks,\nEd\n", "msg_date": "Thu, 7 Jun 2007 13:02:55 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best way to delete unreferenced rows?" }, { "msg_contents": "Tyrrill, Ed wrote:\n\n> I have a table, let's call it A, whose primary key, a_id, is referenced\n> in a second table, let's call it B. For each unique A.a_id there are\n> generally many rows in B with the same a_id. My problem is that I want\n> to delete a row in A when the last row in B that references it is\n> deleted. Right now I just query for rows in A that aren't referenced by\n> B, and that worked great when the tables were small, but it takes over\n> an hour now that the tables have grown larger (over 200 million rows in\n> B and 14 million in A). The delete has to do a sequential scan of both\n> tables since I'm looking for what's not in the indexes.\n> \n> I was going to try creating a trigger after delete on B for each row to\n> check for more rows in B with the same a_id, and delete the row in A if\n> none found. In general I will be deleting 10's of millions of rows from\n> B and 100's of thousands of rows from A on a daily basis. What do you\n> think? Does anyone have any other suggestions on different ways to\n> approach this?\n\nEssentially what you're doing is taking the one-hour job and spreading out in little chunks over thousands of queries. If you have 10^7 rows in B and 10^5 rows in A, then on average you have 100 references from B to A. That means that 99% of the time, your trigger will scan B and find that there's nothing to do. This could add a lot of overhead to your ordinary transactions, costing a lot more in the long run than just doing the once-a-day big cleanout.\n\nYou didn't send the specifics of the query you're using, along with an EXPLAIN ANALYZE of it in operation. It also be that your SQL is not optimal, and that somebody could suggest a more efficient query.\n\nIt's also possible that it's not the sequential scans that are the problem, but rather that it just takes a long time to delete 100,000 rows from table A because you have a lot of indexes. Or it could be a combination of performance problems.\n\nYou haven't given us enough information to really analyze your problem. Send more details!\n\nCraig\n", "msg_date": "Thu, 07 Jun 2007 15:50:45 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete unreferenced rows?" 
}, { "msg_contents": "Craig James wrote:\n> Tyrrill, Ed wrote:\n>\n>> I have a table, let's call it A, whose primary key, a_id, is\nreferenced\n>> in a second table, let's call it B. For each unique A.a_id there are\n>> generally many rows in B with the same a_id. My problem is that I\nwant\n>> to delete a row in A when the last row in B that references it is\n>> deleted. Right now I just query for rows in A that aren't referenced\nby\n>> B, and that worked great when the tables were small, but it takes\nover\n>> an hour now that the tables have grown larger (over 200 million rows\nin\n>> B and 14 million in A). The delete has to do a sequential scan of\nboth\n>> tables since I'm looking for what's not in the indexes.\n>> \n>> I was going to try creating a trigger after delete on B for each row\nto\n>> check for more rows in B with the same a_id, and delete the row in A\nif\n>> none found. In general I will be deleting 10's of millions of rows\nfrom\n>> B and 100's of thousands of rows from A on a daily basis. What do\nyou\n>> think? Does anyone have any other suggestions on different ways to\n>> approach this?\n>\n> Essentially what you're doing is taking the one-hour job and spreading\n> out in little chunks over thousands of queries. If you have 10^7 rows\n> in B and 10^5 rows in A, then on average you have 100 references from\nB\n> to A. That means that 99% of the time, your trigger will scan B and\nfind\n> that there's nothing to do. This could add a lot of overhead to your\n> ordinary transactions, costing a lot more in the long run than just\ndoing\n> the once-a-day big cleanout.\n>\n> You didn't send the specifics of the query you're using, along with an\n> EXPLAIN ANALYZE of it in operation. It also be that your SQL is not\n> optimal, and that somebody could suggest a more efficient query.\n>\n> It's also possible that it's not the sequential scans that are the\n> problem, but rather that it just takes a long time to delete 100,000\n> rows from table A because you have a lot of indexes. Or it could be\n> a combination of performance problems.\n>\n> You haven't given us enough information to really analyze your\nproblem.\n> Send more details!\n>\n> Craig\n\nOk. Yes, there are a bunch of indexes on A that may slow down the\ndelete, but if I just run the select part of the delete statement\nthrough explain analyze then that is the majority of the time. 
The\ncomplete sql statement for the delete is:\n\ndelete from backupobjects where record_id in (select\nbackupobjects.record_id from backupobjects left outer join\nbackup_location using(record_id) where backup_location.record_id is null\n)\n\nWhat I've referred to as A is backupobjects, and B is backup_location.\nHere is explain analyze of just the select:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using(record_id) where\nbackup_location.record_id is null;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------\n Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8)\n(actual time=6503583.342..8220629.311 rows=93524 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".record_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..521525.10 rows=13799645 width=8) (actual\ntime=15.955..357813.621 rows=13799645 loops=1)\n -> Sort (cost=38725295.93..39262641.69 rows=214938304 width=8)\n(actual time=6503265.293..7713657.750 rows=214938308 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3311212.04\nrows=214938304 width=8) (actual time=11.175..1881179.825 rows=214938308\nloops=1)\n Total runtime: 8229178.269 ms\n(8 rows)\n\nI ran vacuum analyze after the last time any inserts, deletes, or\nupdates were done, and before I ran the query above. I've attached my\npostgresql.conf. The machine has 4 GB of RAM.\n\nThanks,\nEd", "msg_date": "Fri, 8 Jun 2007 09:45:37 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to delete unreferenced rows?" }, { "msg_contents": "Tyrrill, Ed wrote:\n> QUERY PLAN\n> \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------------\n> Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8)\n> (actual time=6503583.342..8220629.311 rows=93524 loops=1)\n> Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n> Filter: (\"inner\".record_id IS NULL)\n> -> Index Scan using backupobjects_pkey on backupobjects\n> (cost=0.00..521525.10 rows=13799645 width=8) (actual\n> time=15.955..357813.621 rows=13799645 loops=1)\n> -> Sort (cost=38725295.93..39262641.69 rows=214938304 width=8)\n> (actual time=6503265.293..7713657.750 rows=214938308 loops=1)\n> Sort Key: backup_location.record_id\n> -> Seq Scan on backup_location (cost=0.00..3311212.04\n> rows=214938304 width=8) (actual time=11.175..1881179.825 rows=214938308\n> loops=1)\n> Total runtime: 8229178.269 ms\n> (8 rows)\n> \n> I ran vacuum analyze after the last time any inserts, deletes, or\n> updates were done, and before I ran the query above. I've attached my\n> postgresql.conf. The machine has 4 GB of RAM.\n\nI thought maybe someone with more expertise than me might answer this, but since they haven't I'll just make a comment. It looks to me like the sort of 214 million rows is what's killing you. I suppose you could try to increase the sort memory, but that's a lot of memory. It seems to me an index merge of a relation this large would be faster, but that's a topic for the experts.\n\nOn a theoretical level, the problem is that it's sorting the largest table. 
Perhaps you could re-cast the query so that it only has to sort the smaller table, something like\n\n select a.id from a where a.id not in (select distinct b.id from b)\n\nwhere \"b\" is the smaller table. There's still no guarantee that it won't do a sort on \"a\", though. In fact one of the clever things about Postgres is that it can convert a query like the one above into a regular join, unless you do something like \"select ... offset 0\" which blocks the optimizer from doing the rearrangement.\n\nBut I think the first approach is to try to tune for a better plan using your original query.\n\nCraig\n", "msg_date": "Mon, 11 Jun 2007 08:44:30 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete unreferenced rows?" }, { "msg_contents": "Craig James wrote:\n> Tyrrill, Ed wrote:\n> > QUERY PLAN\n> >\n> >\n> ------------------------------------------------------------------------\n> >\n> ------------------------------------------------------------------------\n> > -------------------\n> > Merge Left Join (cost=38725295.93..42505394.70 rows=13799645\n> width=8)\n> > (actual time=6503583.342..8220629.311 rows=93524 loops=1)\n> > Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n> > Filter: (\"inner\".record_id IS NULL)\n> > -> Index Scan using backupobjects_pkey on backupobjects\n> > (cost=0.00..521525.10 rows=13799645 width=8) (actual\n> > time=15.955..357813.621 rows=13799645 loops=1)\n> > -> Sort (cost=38725295.93..39262641.69 rows=214938304 width=8)\n> > (actual time=6503265.293..7713657.750 rows=214938308 loops=1)\n> > Sort Key: backup_location.record_id\n> > -> Seq Scan on backup_location (cost=0.00..3311212.04\n> > rows=214938304 width=8) (actual time=11.175..1881179.825\n> rows=214938308\n> > loops=1)\n> > Total runtime: 8229178.269 ms\n> > (8 rows)\n> >\n> > I ran vacuum analyze after the last time any inserts, deletes, or\n> > updates were done, and before I ran the query above. I've attached\n> my\n> > postgresql.conf. The machine has 4 GB of RAM.\n> \n> I thought maybe someone with more expertise than me might answer this,\n> but since they haven't I'll just make a comment. It looks to me like\n> the sort of 214 million rows is what's killing you. I suppose you\n> could try to increase the sort memory, but that's a lot of memory. It\n> seems to me an index merge of a relation this large would be faster,\n> but that's a topic for the experts.\n> \n> On a theoretical level, the problem is that it's sorting the largest\n> table. Perhaps you could re-cast the query so that it only has to\n> sort the smaller table, something like\n> \n> select a.id from a where a.id not in (select distinct b.id from b)\n> \n> where \"b\" is the smaller table. There's still no guarantee that it\n> won't do a sort on \"a\", though. In fact one of the clever things\n> about Postgres is that it can convert a query like the one above into\n> a regular join, unless you do something like \"select ... offset 0\"\n> which blocks the optimizer from doing the rearrangement.\n> \n> But I think the first approach is to try to tune for a better plan\n> using your original query.\n> \n> Craig\n\nThanks for the input Craig. I actually started out with a query similar\nto what you suggest, but the performance was days to complete back when\nthe larger table, backup_location, was still under 100 million rows.\nThe current query is the best performance to date. 
I have been playing\naround with work_mem, and doubling it to 128MB did result in some\nimprovement, but doubleing it again to 256MB showed no further gain.\nHere is the explain analyze with work_mem increased to 128MB:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=36876242.28..40658535.53 rows=13712990 width=8)\n(actual time=5795768.950..5795768.950 rows=0 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".record_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..520571.89 rows=13712990 width=8) (actual\ntime=2.490..201516.228 rows=13706121 loops=1)\n -> Sort (cost=36876242.28..37414148.76 rows=215162592 width=8)\n(actual time=4904205.255..5440137.309 rows=215162559 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3314666.92\nrows=215162592 width=8) (actual time=4.186..1262641.774 rows=215162559\nloops=1)\n Total runtime: 5796322.535 ms\n\n\n", "msg_date": "Mon, 11 Jun 2007 18:09:46 -0700", "msg_from": "Ed Tyrrill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete unreferenced rows?" }, { "msg_contents": "Craig James wrote:\n> Tyrrill, Ed wrote:\n> > QUERY PLAN\n> >\n> >\n>\n------------------------------------------------------------------------\n> >\n>\n------------------------------------------------------------------------\n> > -------------------\n> > Merge Left Join (cost=38725295.93..42505394.70 rows=13799645\n> width=8)\n> > (actual time=6503583.342..8220629.311 rows=93524 loops=1)\n> > Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n> > Filter: (\"inner\".record_id IS NULL)\n> > -> Index Scan using backupobjects_pkey on backupobjects\n> > (cost=0.00..521525.10 rows=13799645 width=8) (actual\n> > time=15.955..357813.621 rows=13799645 loops=1)\n> > -> Sort (cost=38725295.93..39262641.69 rows=214938304 width=8)\n> > (actual time=6503265.293..7713657.750 rows=214938308 loops=1)\n> > Sort Key: backup_location.record_id\n> > -> Seq Scan on backup_location (cost=0.00..3311212.04\n> > rows=214938304 width=8) (actual time=11.175..1881179.825\n> rows=214938308\n> > loops=1)\n> > Total runtime: 8229178.269 ms\n> > (8 rows)\n> >\n> > I ran vacuum analyze after the last time any inserts, deletes, or\n> > updates were done, and before I ran the query above. I've attached\n> my\n> > postgresql.conf. The machine has 4 GB of RAM.\n> \n> I thought maybe someone with more expertise than me might answer this,\n> but since they haven't I'll just make a comment. It looks to me like\n> the sort of 214 million rows is what's killing you. I suppose you\n> could try to increase the sort memory, but that's a lot of memory. It\n> seems to me an index merge of a relation this large would be faster,\n> but that's a topic for the experts.\n> \n> On a theoretical level, the problem is that it's sorting the largest\n> table. Perhaps you could re-cast the query so that it only has to\n> sort the smaller table, something like\n> \n> select a.id from a where a.id not in (select distinct b.id from b)\n> \n> where \"b\" is the smaller table. There's still no guarantee that it\n> won't do a sort on \"a\", though. 
In fact one of the clever things\n> about Postgres is that it can convert a query like the one above into\n> a regular join, unless you do something like \"select ... offset 0\"\n> which blocks the optimizer from doing the rearrangement.\n> \n> But I think the first approach is to try to tune for a better plan\n> using your original query.\n> \n> Craig\n\nThanks for the input Craig. I actually started out with a query similar\nto what you suggest, but the performance was days to complete back when\nthe larger table, backup_location, was still under 100 million rows.\nThe current query is the best performance to date. I have been playing\naround with work_mem, and doubling it to 128MB did result in some\nimprovement, but doubleing it again to 256MB showed no further gain.\nHere is the explain analyze with work_mem increased to 128MB:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------\n Merge Left Join (cost=36876242.28..40658535.53 rows=13712990 width=8)\n(actual time=5795768.950..5795768.950 rows=0 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".record_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..520571.89 rows=13712990 width=8) (actual\ntime=2.490..201516.228 rows=13706121 loops=1)\n -> Sort (cost=36876242.28..37414148.76 rows=215162592 width=8)\n(actual time=4904205.255..5440137.309 rows=215162559 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3314666.92\nrows=215162592 width=8) (actual time=4.186..1262641.774 rows=215162559\nloops=1)\n Total runtime: 5796322.535 ms\n\n", "msg_date": "Tue, 12 Jun 2007 08:56:24 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to delete unreferenced rows?" } ]
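Since the sort of the 215-million-row backup_location table is what dominates both plans above, one more variant worth timing is the anti-join written as NOT EXISTS. With an index on backup_location(record_id), which is assumed here rather than confirmed in the thread, the planner can probe that index once per backupobjects key instead of sorting the large table. Whether this actually wins depends on the PostgreSQL version and the data, so treat it as something to EXPLAIN ANALYZE first, not a guaranteed improvement:

    DELETE FROM backupobjects
    WHERE NOT EXISTS (
        SELECT 1
        FROM backup_location
        WHERE backup_location.record_id = backupobjects.record_id
    );

Running it in batches, for example by adding a range condition on backupobjects.record_id, also keeps each transaction short and spreads the I/O out.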
[ { "msg_contents": "Hi all,\n\nI had a database which uses to hold some 50 Mill records and disk\nspace used was 103 GB. I deleted around 34 Mill records but still the\ndisk size is same. Can some on please shed some light on this.\n\nThank in advance for all the help.\n\nDhawal Choksi\n\n", "msg_date": "Fri, 08 Jun 2007 01:22:14 -0700", "msg_from": "choksi <[email protected]>", "msg_from_op": true, "msg_subject": "Database size" }, { "msg_contents": "am Fri, dem 08.06.2007, um 1:22:14 -0700 mailte choksi folgendes:\n> Hi all,\n> \n> I had a database which uses to hold some 50 Mill records and disk\n> space used was 103 GB. I deleted around 34 Mill records but still the\n> disk size is same. Can some on please shed some light on this.\n\nDELETE only mark rows as deleted, if you need the space you need a\nVACUUM FULL.\n\nRead more: http://www.postgresql.org/docs/current/static/sql-vacuum.html\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Fri, 15 Jun 2007 06:58:06 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size" }, { "msg_contents": "choksi writes:\n\n> I had a database which uses to hold some 50 Mill records and disk\n> space used was 103 GB. I deleted around 34 Mill records but still the\n> disk size is same. Can some on please shed some light on this.\n\nWhen records are deleted they are only marked in the database.\nWhen you run vacuum in the database that space will be marked so new data \ncan use the space.\n\nTo lower the space used you need to run \"vacuum full\".\nThat however can take a while and I think it will lock the database for \nsome operations. \n", "msg_date": "Fri, 15 Jun 2007 13:15:37 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size" }, { "msg_contents": "Hello group,\n\nMoreover a reindex (REINDEX <name of your database> while in pgsql) followed \nby an ANALYZE will claim more space.\n\nRegards\nJ6M\n----- Original Message ----- \nFrom: \"Francisco Reyes\" <[email protected]>\nTo: \"choksi\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, June 15, 2007 7:15 PM\nSubject: Re: [PERFORM] Database size\n\n\n> choksi writes:\n>\n>> I had a database which uses to hold some 50 Mill records and disk\n>> space used was 103 GB. I deleted around 34 Mill records but still the\n>> disk size is same. Can some on please shed some light on this.\n>\n> When records are deleted they are only marked in the database.\n> When you run vacuum in the database that space will be marked so new data \n> can use the space.\n>\n> To lower the space used you need to run \"vacuum full\".\n> That however can take a while and I think it will lock the database for \n> some operations.\n\n", "msg_date": "Mon, 18 Jun 2007 10:39:15 +0200", "msg_from": "\"J6M\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size" } ]
[ { "msg_contents": "I need some help. I have started taking snapshots of performance of my\ndatabases with concerns to io. I created a view on each cluster defined as:\n SELECT pg_database.datname AS database_name,\npg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,\npg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,\npg_stat_get_db_blocks_fetched(pg_database.oid) -\npg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n FROM pg_database\n WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0\n ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\npg_stat_get_db_blocks_hit(pg_database.oid) DESC;\n\nI am taking 5 minute snapshots of this view.\n\nWhen I look at my data, I am getting row like this:\ndatabase_name: xxx\nblocks_fetched: 2396915583\nblocks_hit: 1733190669\nphysical_reads: 663724914\nsnapshot_timestamp: 2007-06-08 09:20:01.396079\n\ndatabase_name: xxx\nblocks_fetched: 2409671770\nblocks_hit: 1733627788\nphysical_reads: 676043982\nsnapshot_timestamp: 2007-06-08 09:25:01.512911\n\nSubtracting these 2 lines gives me a 5 minute number of\nblocks_fetched: 12756187\nblocks_hit: 437119\nphysical_reads: 12319068\n\nIf I am interpreting these number correctly, for this 5 minute interval I\nended up hitting only 3.43% of the requested data in my shared_buffer, and\nended up requesting 12,319,068 blocks from the os? Since a postgres block\nis 8KB, that's 98,553,544 KB (~94GB)!\n\nAre my assumptions correct in this? I am just having a hard time fathoming\nthis. For this particular db, that is almost 1/2 of the total database (it\nis a 200GB+ db) requested in just 5 minutes!\n\nThanks for any clarification on this.\n\nChris\n12756187\n 12756187\n\nI need some help.  I have started taking snapshots of performance of my databases with concerns to io.  I created a view on each cluster defined as: SELECT pg_database.datname AS database_name, pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched, pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit, pg_stat_get_db_blocks_fetched(pg_database.oid) - pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n   FROM pg_database  WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0  ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) - pg_stat_get_db_blocks_hit(pg_database.oid) DESC;I am taking 5 minute snapshots of this view.\nWhen I look at my data, I am getting row like this:database_name: xxxblocks_fetched: 2396915583blocks_hit: 1733190669physical_reads: 663724914snapshot_timestamp: 2007-06-08 09:20:01.396079\ndatabase_name: xxxblocks_fetched: 2409671770\nblocks_hit: 1733627788\nphysical_reads: 676043982\nsnapshot_timestamp: 2007-06-08 09:25:01.512911Subtracting these 2 lines gives me a 5 minute number ofblocks_fetched: 12756187blocks_hit: 437119physical_reads: 12319068If I am interpreting these number correctly, for this 5 minute interval I ended up hitting only \n3.43% of the requested data in my shared_buffer, and ended up requesting 12,319,068 blocks from the os?  Since a postgres block is 8KB, that's 98,553,544 KB (~94GB)!Are my assumptions correct in this?  I am just having a hard time fathoming this.  
For this particular db, that is almost 1/2 of the total database (it is a 200GB+ db) requested in just 5 minutes!\nThanks for any clarification on this.Chris\n12756187\n \n12756187", "msg_date": "Fri, 8 Jun 2007 11:44:16 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Please help me understand these numbers" }, { "msg_contents": "In response to \"Chris Hoover\" <[email protected]>:\n\n> I need some help. I have started taking snapshots of performance of my\n> databases with concerns to io. I created a view on each cluster defined as:\n> SELECT pg_database.datname AS database_name,\n> pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,\n> pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,\n> pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n> FROM pg_database\n> WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0\n> ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> pg_stat_get_db_blocks_hit(pg_database.oid) DESC;\n> \n> I am taking 5 minute snapshots of this view.\n> \n> When I look at my data, I am getting row like this:\n> database_name: xxx\n> blocks_fetched: 2396915583\n> blocks_hit: 1733190669\n> physical_reads: 663724914\n> snapshot_timestamp: 2007-06-08 09:20:01.396079\n> \n> database_name: xxx\n> blocks_fetched: 2409671770\n> blocks_hit: 1733627788\n> physical_reads: 676043982\n> snapshot_timestamp: 2007-06-08 09:25:01.512911\n> \n> Subtracting these 2 lines gives me a 5 minute number of\n> blocks_fetched: 12756187\n> blocks_hit: 437119\n> physical_reads: 12319068\n> \n> If I am interpreting these number correctly, for this 5 minute interval I\n> ended up hitting only 3.43% of the requested data in my shared_buffer, and\n> ended up requesting 12,319,068 blocks from the os? Since a postgres block\n> is 8KB, that's 98,553,544 KB (~94GB)!\n> \n> Are my assumptions correct in this?\n\nIt certainly seems possible.\n\n> I am just having a hard time fathoming\n> this. For this particular db, that is almost 1/2 of the total database (it\n> is a 200GB+ db) requested in just 5 minutes!\n\nWhat are your share_buffers setting and the total RAM available to the OS?\n\nMy guess would be that you have plenty of RAM in the system (8G+ ?) but that\nyou haven't allocated very much of it to shared_buffers (only a few 100 meg?).\nAs a result, PostgreSQL is constantly asking the OS for disk blocks that it\ndoesn't have cached, but the OS has those disk blocks cached in RAM.\n\nIf my guess is right, you'll probably see improved performance by allocating\nmore shared memory to PostgreSQL, thus avoiding having to move data from\none area in memory to another before it can be used.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 8 Jun 2007 12:09:41 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help me understand these numbers" }, { "msg_contents": "On 6/8/07, Bill Moran <[email protected]> wrote:\n>\n> In response to \"Chris Hoover\" <[email protected]>:\n>\n> > I need some help. I have started taking snapshots of performance of my\n> > databases with concerns to io. 
I created a view on each cluster defined\n> as:\n> > SELECT pg_database.datname AS database_name,\n> > pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,\n> > pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,\n> > pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n> > FROM pg_database\n> > WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0\n> > ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > pg_stat_get_db_blocks_hit(pg_database.oid) DESC;\n> >\n> > I am taking 5 minute snapshots of this view.\n> >\n> > When I look at my data, I am getting row like this:\n> > database_name: xxx\n> > blocks_fetched: 2396915583\n> > blocks_hit: 1733190669\n> > physical_reads: 663724914\n> > snapshot_timestamp: 2007-06-08 09:20:01.396079\n> >\n> > database_name: xxx\n> > blocks_fetched: 2409671770\n> > blocks_hit: 1733627788\n> > physical_reads: 676043982\n> > snapshot_timestamp: 2007-06-08 09:25:01.512911\n> >\n> > Subtracting these 2 lines gives me a 5 minute number of\n> > blocks_fetched: 12756187\n> > blocks_hit: 437119\n> > physical_reads: 12319068\n> >\n> > If I am interpreting these number correctly, for this 5 minute interval\n> I\n> > ended up hitting only 3.43% of the requested data in my shared_buffer,\n> and\n> > ended up requesting 12,319,068 blocks from the os? Since a postgres\n> block\n> > is 8KB, that's 98,553,544 KB (~94GB)!\n> >\n> > Are my assumptions correct in this?\n>\n> It certainly seems possible.\n>\n> > I am just having a hard time fathoming\n> > this. For this particular db, that is almost 1/2 of the total database\n> (it\n> > is a 200GB+ db) requested in just 5 minutes!\n>\n> What are your share_buffers setting and the total RAM available to the OS?\n>\n> My guess would be that you have plenty of RAM in the system (8G+ ?) but\n> that\n> you haven't allocated very much of it to shared_buffers (only a few 100\n> meg?).\n> As a result, PostgreSQL is constantly asking the OS for disk blocks that\n> it\n> doesn't have cached, but the OS has those disk blocks cached in RAM.\n>\n> If my guess is right, you'll probably see improved performance by\n> allocating\n> more shared memory to PostgreSQL, thus avoiding having to move data from\n> one area in memory to another before it can be used.\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n\nWow, that's amazing. You pretty much hit my config on the head. 9GB ram\nwith 256MB shared_buffers.\n\nI have just started playing with my shared_buffers config on another server\nthat tends to be my main problem server. I just ran across these\ninformational functions the other day, and they are opening up some great\nterritory for me that I have been wanting to know about for a while.\n\nI was starting to bump my shared_buffers up slowly. Would it be more\nadvisable to just push them to 25% of my ram and start there or work up\nslowly. I was going slowly since it takes a database restart to change the\nparameter.\n\nAny advise would be welcome.\n\nChris\n\nOn 6/8/07, Bill Moran <[email protected]> wrote:\nIn response to \"Chris Hoover\" <[email protected]>:> I need some help.  I have started taking snapshots of performance of my> databases with concerns to io.  
I created a view on each cluster defined as:\n>  SELECT pg_database.datname AS database_name,> pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,> pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,> pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads>    FROM pg_database>   WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0>   ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> pg_stat_get_db_blocks_hit(pg_database.oid) DESC;>> I am taking 5 minute snapshots of this view.>> When I look at my data, I am getting row like this:> database_name: xxx> blocks_fetched: 2396915583\n> blocks_hit: 1733190669> physical_reads: 663724914> snapshot_timestamp: 2007-06-08 09:20:01.396079>> database_name: xxx> blocks_fetched: 2409671770> blocks_hit: 1733627788\n> physical_reads: 676043982> snapshot_timestamp: 2007-06-08 09:25:01.512911>> Subtracting these 2 lines gives me a 5 minute number of> blocks_fetched: 12756187> blocks_hit: 437119\n> physical_reads: 12319068>> If I am interpreting these number correctly, for this 5 minute interval I> ended up hitting only 3.43% of the requested data in my shared_buffer, and> ended up requesting 12,319,068 blocks from the os?  Since a postgres block\n> is 8KB, that's 98,553,544 KB (~94GB)!>> Are my assumptions correct in this?It certainly seems possible.> I am just having a hard time fathoming> this.  For this particular db, that is almost 1/2 of the total database (it\n> is a 200GB+ db) requested in just 5 minutes!What are your share_buffers setting and the total RAM available to the OS?My guess would be that you have plenty of RAM in the system (8G+ ?) but that\nyou haven't allocated very much of it to shared_buffers (only a few 100 meg?).As a result, PostgreSQL is constantly asking the OS for disk blocks that itdoesn't have cached, but the OS has those disk blocks cached in RAM.\nIf my guess is right, you'll probably see improved performance by allocatingmore shared memory to PostgreSQL, thus avoiding having to move data fromone area in memory to another before it can be used.\n--Bill MoranCollaborative Fusion Inc.http://people.collaborativefusion.com/~wmoran/[email protected]\nPhone: 412-422-3463x4023Wow, that's amazing.  You pretty much hit my config on the head.  9GB ram with 256MB shared_buffers.I have just started playing with my shared_buffers config on another server that tends to be my main problem server.  I just ran across these informational functions the other day, and they are opening up some great territory for me that I have been wanting to know about for a while.\nI was starting to bump my shared_buffers up slowly.  Would it be more advisable to just push them to 25% of my ram and start there or work up slowly.  I was going slowly since it takes a database restart to change the parameter.\nAny advise would be welcome.Chris", "msg_date": "Fri, 8 Jun 2007 13:05:22 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please help me understand these numbers" }, { "msg_contents": "In response to \"Chris Hoover\" <[email protected]>:\n\n> On 6/8/07, Bill Moran <[email protected]> wrote:\n> >\n> > In response to \"Chris Hoover\" <[email protected]>:\n> >\n> > > I need some help. I have started taking snapshots of performance of my\n> > > databases with concerns to io. 
I created a view on each cluster defined\n> > as:\n> > > SELECT pg_database.datname AS database_name,\n> > > pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,\n> > > pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,\n> > > pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > > pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n> > > FROM pg_database\n> > > WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0\n> > > ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > > pg_stat_get_db_blocks_hit(pg_database.oid) DESC;\n> > >\n> > > I am taking 5 minute snapshots of this view.\n> > >\n> > > When I look at my data, I am getting row like this:\n> > > database_name: xxx\n> > > blocks_fetched: 2396915583\n> > > blocks_hit: 1733190669\n> > > physical_reads: 663724914\n> > > snapshot_timestamp: 2007-06-08 09:20:01.396079\n> > >\n> > > database_name: xxx\n> > > blocks_fetched: 2409671770\n> > > blocks_hit: 1733627788\n> > > physical_reads: 676043982\n> > > snapshot_timestamp: 2007-06-08 09:25:01.512911\n> > >\n> > > Subtracting these 2 lines gives me a 5 minute number of\n> > > blocks_fetched: 12756187\n> > > blocks_hit: 437119\n> > > physical_reads: 12319068\n> > >\n> > > If I am interpreting these number correctly, for this 5 minute interval\n> > I\n> > > ended up hitting only 3.43% of the requested data in my shared_buffer,\n> > and\n> > > ended up requesting 12,319,068 blocks from the os? Since a postgres\n> > block\n> > > is 8KB, that's 98,553,544 KB (~94GB)!\n> > >\n> > > Are my assumptions correct in this?\n> >\n> > It certainly seems possible.\n> >\n> > > I am just having a hard time fathoming\n> > > this. For this particular db, that is almost 1/2 of the total database\n> > (it\n> > > is a 200GB+ db) requested in just 5 minutes!\n> >\n> > What are your share_buffers setting and the total RAM available to the OS?\n> >\n> > My guess would be that you have plenty of RAM in the system (8G+ ?) but\n> > that\n> > you haven't allocated very much of it to shared_buffers (only a few 100\n> > meg?).\n> > As a result, PostgreSQL is constantly asking the OS for disk blocks that\n> > it\n> > doesn't have cached, but the OS has those disk blocks cached in RAM.\n> >\n> > If my guess is right, you'll probably see improved performance by\n> > allocating\n> > more shared memory to PostgreSQL, thus avoiding having to move data from\n> > one area in memory to another before it can be used.\n> >\n> > --\n> > Bill Moran\n> > Collaborative Fusion Inc.\n> > http://people.collaborativefusion.com/~wmoran/\n> >\n> > [email protected]\n> > Phone: 412-422-3463x4023\n> >\n> \n> Wow, that's amazing. You pretty much hit my config on the head. 9GB ram\n> with 256MB shared_buffers.\n\nSome days are better than others :)\n\n> I have just started playing with my shared_buffers config on another server\n> that tends to be my main problem server. I just ran across these\n> informational functions the other day, and they are opening up some great\n> territory for me that I have been wanting to know about for a while.\n\nHave a look at the pg_buffercache module, which can be pretty useful for\nfiguring out what data is being accessed.\n\n> I was starting to bump my shared_buffers up slowly. Would it be more\n> advisable to just push them to 25% of my ram and start there or work up\n> slowly. I was going slowly since it takes a database restart to change the\n> parameter.\n\nI looked back through and couldn't find which version of PostgreSQL you\nwere using. 
If it's 8.X, the current wisdom is to start with 25 - 30% of\nyour unused RAM for shared buffers (by \"unused\", it's meant to take into\naccount any other applications running on the same machine and their\nRAM requirements) and then tune down or up as seems to help. So, my\nrecommendation would be to bump shared_buffers up to around 2G and go\nfrom there.\n\nAnother thing that I realized wasn't in your original email is if you're\nhaving any sort of problems? If there are slow queries or other\nperformance issues, do before/after tests to see if you're adjusting\nvalues in the right direction. If you don't have any performance issues\noutstanding, it can be easy to waste a lot of time/energy tweaking\nsettings that don't really help anything.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 8 Jun 2007 13:37:18 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help me understand these numbers" }, { "msg_contents": "On 6/8/07, Bill Moran <[email protected]> wrote:\n>\n> In response to \"Chris Hoover\" <[email protected]>:\n>\n> > On 6/8/07, Bill Moran <[email protected]> wrote:\n> > >\n> > > In response to \"Chris Hoover\" <[email protected]>:\n> > >\n> > > > I need some help. I have started taking snapshots of performance of\n> my\n> > > > databases with concerns to io. I created a view on each cluster\n> defined\n> > > as:\n> > > > SELECT pg_database.datname AS database_name,\n> > > > pg_stat_get_db_blocks_fetched(pg_database.oid) AS blocks_fetched,\n> > > > pg_stat_get_db_blocks_hit(pg_database.oid) AS blocks_hit,\n> > > > pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > > > pg_stat_get_db_blocks_hit(pg_database.oid) AS physical_reads\n> > > > FROM pg_database\n> > > > WHERE pg_stat_get_db_blocks_fetched(pg_database.oid) > 0\n> > > > ORDER BY pg_stat_get_db_blocks_fetched(pg_database.oid) -\n> > > > pg_stat_get_db_blocks_hit(pg_database.oid) DESC;\n> > > >\n> > > > I am taking 5 minute snapshots of this view.\n> > > >\n> > > > When I look at my data, I am getting row like this:\n> > > > database_name: xxx\n> > > > blocks_fetched: 2396915583\n> > > > blocks_hit: 1733190669\n> > > > physical_reads: 663724914\n> > > > snapshot_timestamp: 2007-06-08 09:20:01.396079\n> > > >\n> > > > database_name: xxx\n> > > > blocks_fetched: 2409671770\n> > > > blocks_hit: 1733627788\n> > > > physical_reads: 676043982\n> > > > snapshot_timestamp: 2007-06-08 09:25:01.512911\n> > > >\n> > > > Subtracting these 2 lines gives me a 5 minute number of\n> > > > blocks_fetched: 12756187\n> > > > blocks_hit: 437119\n> > > > physical_reads: 12319068\n> > > >\n> > > > If I am interpreting these number correctly, for this 5 minute\n> interval\n> > > I\n> > > > ended up hitting only 3.43% of the requested data in my\n> shared_buffer,\n> > > and\n> > > > ended up requesting 12,319,068 blocks from the os? Since a postgres\n> > > block\n> > > > is 8KB, that's 98,553,544 KB (~94GB)!\n> > > >\n> > > > Are my assumptions correct in this?\n> > >\n> > > It certainly seems possible.\n> > >\n> > > > I am just having a hard time fathoming\n> > > > this. 
For this particular db, that is almost 1/2 of the total\n> database\n> > > (it\n> > > > is a 200GB+ db) requested in just 5 minutes!\n> > >\n> > > What are your share_buffers setting and the total RAM available to the\n> OS?\n> > >\n> > > My guess would be that you have plenty of RAM in the system (8G+ ?)\n> but\n> > > that\n> > > you haven't allocated very much of it to shared_buffers (only a few\n> 100\n> > > meg?).\n> > > As a result, PostgreSQL is constantly asking the OS for disk blocks\n> that\n> > > it\n> > > doesn't have cached, but the OS has those disk blocks cached in RAM.\n> > >\n> > > If my guess is right, you'll probably see improved performance by\n> > > allocating\n> > > more shared memory to PostgreSQL, thus avoiding having to move data\n> from\n> > > one area in memory to another before it can be used.\n> > >\n> > > --\n> > > Bill Moran\n> > > Collaborative Fusion Inc.\n> > > http://people.collaborativefusion.com/~wmoran/\n> > >\n> > > [email protected]\n> > > Phone: 412-422-3463x4023\n> > >\n> >\n> > Wow, that's amazing. You pretty much hit my config on the head. 9GB\n> ram\n> > with 256MB shared_buffers.\n>\n> Some days are better than others :)\n>\n> > I have just started playing with my shared_buffers config on another\n> server\n> > that tends to be my main problem server. I just ran across these\n> > informational functions the other day, and they are opening up some\n> great\n> > territory for me that I have been wanting to know about for a while.\n>\n> Have a look at the pg_buffercache module, which can be pretty useful for\n> figuring out what data is being accessed.\n>\n> > I was starting to bump my shared_buffers up slowly. Would it be more\n> > advisable to just push them to 25% of my ram and start there or work up\n> > slowly. I was going slowly since it takes a database restart to change\n> the\n> > parameter.\n>\n> I looked back through and couldn't find which version of PostgreSQL you\n> were using. If it's 8.X, the current wisdom is to start with 25 - 30% of\n> your unused RAM for shared buffers (by \"unused\", it's meant to take into\n> account any other applications running on the same machine and their\n> RAM requirements) and then tune down or up as seems to help. So, my\n> recommendation would be to bump shared_buffers up to around 2G and go\n> from there.\n>\n> Another thing that I realized wasn't in your original email is if you're\n> having any sort of problems? If there are slow queries or other\n> performance issues, do before/after tests to see if you're adjusting\n> values in the right direction. If you don't have any performance issues\n> outstanding, it can be easy to waste a lot of time/energy tweaking\n> settings that don't really help anything.\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n\n\nSorry, I am on 8.1.3 (move to 8.1.9 is being started). I do have some\nperformance issues but they are sporadic. I am trying to make sure my\nservers are all running well. I believe that they are ok most of the time,\nbut we are walking on the edge. They can easily be pushed over and have my\ncustomers complaining of slowness. 
So, I am trying to look at tuning back\naway from the edge.\n\nThanks for your help,\n\nChris", "msg_date": "Fri, 8 Jun 2007 13:52:26 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please help me understand these numbers" } ]
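As a concrete illustration of the pg_buffercache suggestion above, the sketch below shows the kind of query the module enables: counting how many shared buffers each relation of the current database currently occupies. It assumes the contrib module has been installed into the database being inspected and uses the column names of the 8.1/8.2-era view (relfilenode, reldatabase); later releases expose the same idea through slightly different helpers.

-- ten relations holding the most of shared_buffers right now
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c    ON b.relfilenode = c.relfilenode
  JOIN pg_database d ON b.reldatabase = d.oid
 WHERE d.datname = current_database()
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;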
[ { "msg_contents": "Is it possible that providing 128G of ram is too much ? Will other \nsystems in the server bottleneck ?\n\nDave\n", "msg_date": "Fri, 8 Jun 2007 12:31:47 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "How much ram is too much" }, { "msg_contents": "Dave Cramer wrote:\n> Is it possible that providing 128G of ram is too much ? Will other \n> systems in the server bottleneck ?\n\nWhat CPU and OS are you considering?\n\n-- \nGuy Rouillier\n", "msg_date": "Fri, 08 Jun 2007 12:46:05 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "What is your expected data size and usage pattern? What are the other \ncomponents in the system?\n\nOn Fri, 8 Jun 2007, Dave Cramer wrote:\n\n> Is it possible that providing 128G of ram is too much ? Will other systems in \n> the server bottleneck ?\n>\n> Dave\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n", "msg_date": "Fri, 8 Jun 2007 09:52:03 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "It's an IBM x3850 using linux redhat 4.0\n\n\nOn 8-Jun-07, at 12:46 PM, Guy Rouillier wrote:\n\n> Dave Cramer wrote:\n>> Is it possible that providing 128G of ram is too much ? Will other \n>> systems in the server bottleneck ?\n>\n> What CPU and OS are you considering?\n>\n> -- \n> Guy Rouillier\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Fri, 8 Jun 2007 13:08:52 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "> On Fri, 8 Jun 2007, Dave Cramer wrote:\n>\n>> Is it possible that providing 128G of ram is too much ? Will other systems\n>> in the server bottleneck ?\n\nthe only way 128G of ram would be too much is if your total database size \n(including indexes) is smaller then this.\n\nnow it may not gain you as much of an advantage going from 64G to 128G as \nit does going from 32G to 64G, but that depends on many variables as \nothers have been asking.\n\nDavid Lang\n", "msg_date": "Fri, 8 Jun 2007 10:30:40 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "[email protected] wrote:\n>> On Fri, 8 Jun 2007, Dave Cramer wrote:\n>>\n>>> Is it possible that providing 128G of ram is too much ? Will other \n>>> systems\n>>> in the server bottleneck ?\n> \n> the only way 128G of ram would be too much is if your total database \n> size (including indexes) is smaller then this.\n> \n> now it may not gain you as much of an advantage going from 64G to 128G \n> as it does going from 32G to 64G, but that depends on many variables as \n> others have been asking.\n\nI don't know about the IBM but I know some of the HPs require slower ram \nto actually get to 128G.\n\nJoshua D. Drake\n\n\n> \n> David Lang\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 08 Jun 2007 10:38:23 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "Dave Cramer �rta:\n> It's an IBM x3850 using linux redhat 4.0\n\nIsn't that a bit old? I have a RedHat 4.2 somewhere\nthat was bundled with Applixware 3. :-)\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Geschwinde & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Fri, 08 Jun 2007 19:38:42 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "[OT] Re: How much ram is too much" }, { "msg_contents": "Zoltan Boszormenyi wrote:\n> Dave Cramer írta:\n>> It's an IBM x3850 using linux redhat 4.0\n> \n> Isn't that a bit old? I have a RedHat 4.2 somewhere\n> that was bundled with Applixware 3. :-)\n\nHe means redhat ES/AS 4 I assume.\n\nJ\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n\n", "msg_date": "Fri, 08 Jun 2007 11:10:17 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "\nOn 8-Jun-07, at 2:10 PM, Joshua D. Drake wrote:\n\n> Zoltan Boszormenyi wrote:\n>> Dave Cramer �rta:\n>>> It's an IBM x3850 using linux redhat 4.0\n>> Isn't that a bit old? I have a RedHat 4.2 somewhere\n>> that was bundled with Applixware 3. :-)\n>\n> He means redhat ES/AS 4 I assume.\n>\nYes AS4\n> J\n>\n>\n>\n>\n> -- \n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n>\n\n", "msg_date": "Fri, 8 Jun 2007 14:52:51 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "Joshua D. Drake �rta:\n> Zoltan Boszormenyi wrote:\n>> Dave Cramer �rta:\n>>> It's an IBM x3850 using linux redhat 4.0\n>>\n>> Isn't that a bit old? I have a RedHat 4.2 somewhere\n>> that was bundled with Applixware 3. :-)\n>\n> He means redhat ES/AS 4 I assume.\n>\n> J\n\nI guessed that, hence the smiley.\nBut it's very unfortunate that version numbers\nare reused - it can cause confusion.\nThere was a RH 4.0 already a long ago,\nwhen the commercial and the community\nversion were the same. I think Microsoft\nwill avoid reusing its versions when year 2095 comes. 
:-)\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Geschwinde & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Fri, 08 Jun 2007 20:54:39 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "On Fri, Jun 08, 2007 at 08:54:39PM +0200, Zoltan Boszormenyi wrote:\n> Joshua D. Drake �rta:\n> >Zoltan Boszormenyi wrote:\n> >>Dave Cramer �rta:\n> >>>It's an IBM x3850 using linux redhat 4.0\n> >>Isn't that a bit old? I have a RedHat 4.2 somewhere\n> >>that was bundled with Applixware 3. :-)\n> >He means redhat ES/AS 4 I assume.\n> I guessed that, hence the smiley.\n> But it's very unfortunate that version numbers\n> are reused - it can cause confusion.\n> There was a RH 4.0 already a long ago,\n> when the commercial and the community\n> version were the same. I think Microsoft\n> will avoid reusing its versions when year 2095 comes. :-)\n\nHe should have written RHEL 4.0. RH 4.0 is long enough ago, though,\nthat I think few would assume it meant the much older release.\n\nYou'll find a similar thing with products like \"CuteFTP 7.0\" or\n\"CuteFTP Pro 3.0\".\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Fri, 8 Jun 2007 15:00:25 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "[email protected] �rta:\n> On Fri, Jun 08, 2007 at 08:54:39PM +0200, Zoltan Boszormenyi wrote:\n> \n>> Joshua D. Drake �rta:\n>> \n>>> Zoltan Boszormenyi wrote:\n>>> \n>>>> Dave Cramer �rta:\n>>>> \n>>>>> It's an IBM x3850 using linux redhat 4.0\n>>>>> \n>>>> Isn't that a bit old? I have a RedHat 4.2 somewhere\n>>>> that was bundled with Applixware 3. :-)\n>>>> \n>>> He means redhat ES/AS 4 I assume.\n>>> \n>> I guessed that, hence the smiley.\n>> But it's very unfortunate that version numbers\n>> are reused - it can cause confusion.\n>> There was a RH 4.0 already a long ago,\n>> when the commercial and the community\n>> version were the same. I think Microsoft\n>> will avoid reusing its versions when year 2095 comes. :-)\n>> \n>\n> He should have written RHEL 4.0. RH 4.0 is long enough ago, though,\n> that I think few would assume it meant the much older release.\n> \n\nYes. But up until RHEL 8.0/9.0 ( or plain 9 without decimals ;-) )\nI can make cheap jokes telling that I can give you a free upgrade. :-)\n\n> You'll find a similar thing with products like \"CuteFTP 7.0\" or\n> \"CuteFTP Pro 3.0\".\n> \n\nI am sure there are others, too. 
But enough of this OT,\nI am really interested in the main thread's topic.\n\nBest regards,\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Geschwinde & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Fri, 08 Jun 2007 21:14:03 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "Dave Cramer wrote:\n> It's an IBM x3850 using linux redhat 4.0\n\nI had to look that up, web site says it is a 4-processor, dual-core (so \n8 cores) Intel Xeon system. It also says \"Up to 64GB DDR II ECC \nmemory\", so are you sure you can even get 128 GB RAM?\n\nIf you could, I'd expect diminishing returns from the Xeon northbridge \nmemory access. If you are willing to spend that kind of money on \nmemory, you'd be better off with Opteron or Sparc.\n\n-- \nGuy Rouillier\n", "msg_date": "Fri, 08 Jun 2007 15:41:43 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nZoltan Boszormenyi wrote:\n> Joshua D. Drake �rta:\n>> Zoltan Boszormenyi wrote:\n>>> Dave Cramer �rta:\n>>>> It's an IBM x3850 using linux redhat 4.0\n>>>\n>>> Isn't that a bit old? I have a RedHat 4.2 somewhere\n>>> that was bundled with Applixware 3. :-)\n>>\n>> He means redhat ES/AS 4 I assume.\n>>\n>> J\n> \n> I guessed that, hence the smiley.\n> But it's very unfortunate that version numbers\n> are reused - it can cause confusion.\n> There was a RH 4.0 already a long ago,\n> when the commercial and the community\n> version were the same. I think Microsoft\n> will avoid reusing its versions when year 2095 comes. :-)\n\nWell, RedHat Linux, and RedHat Linux Enterprise Server/Advanced Servers\nare clearly different products :-P\n\nAnd yes, I even owned Applix :)\n\nAndreas\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGac2FHJdudm4KnO0RAkpcAJwI+RTIJgAc5Db1bnsu7tRNiU9vzACeIGvl\nLP0CSxc5dML0BMerI+u1xYc=\n=qiye\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 08 Jun 2007 23:43:34 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Re: How much ram is too much" }, { "msg_contents": "On Jun 8, 2007, at 11:31 AM, Dave Cramer wrote:\n> Is it possible that providing 128G of ram is too much ? Will other \n> systems in the server bottleneck ?\n\nProviding to what? PostgreSQL? The OS? My bet is that you'll run into \nissues with how shared_buffers are managed if you actually try and \nset them to anything remotely close to 128GB.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Sun, 10 Jun 2007 22:11:13 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "Actually this one is an opteron, so it looks like it's all good.\n\nDave\nOn 8-Jun-07, at 3:41 PM, Guy Rouillier wrote:\n\n> Dave Cramer wrote:\n>> It's an IBM x3850 using linux redhat 4.0\n>\n> I had to look that up, web site says it is a 4-processor, dual-core \n> (so 8 cores) Intel Xeon system. It also says \"Up to 64GB DDR II \n> ECC memory\", so are you sure you can even get 128 GB RAM?\n>\n> If you could, I'd expect diminishing returns from the Xeon \n> northbridge memory access. 
If you are willing to spend that kind \n> of money on memory, you'd be better off with Opteron or Sparc.\n>\n> -- \n> Guy Rouillier\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Mon, 11 Jun 2007 06:55:36 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "\nOn 10-Jun-07, at 11:11 PM, Jim Nasby wrote:\n\n> On Jun 8, 2007, at 11:31 AM, Dave Cramer wrote:\n>> Is it possible that providing 128G of ram is too much ? Will other \n>> systems in the server bottleneck ?\n>\n> Providing to what? PostgreSQL? The OS? My bet is that you'll run \n> into issues with how shared_buffers are managed if you actually try \n> and set them to anything remotely close to 128GB.\n\nWell, we'd give 25% of it to postgres, and the rest to the OS.\n\nWhat is it specifically you are referring to ?\n\nDave\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Mon, 11 Jun 2007 11:09:42 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "On Mon, Jun 11, 2007 at 11:09:42AM -0400, Dave Cramer wrote:\n> >and set them to anything remotely close to 128GB.\n> \n> Well, we'd give 25% of it to postgres, and the rest to the OS.\n\nAre you quite sure that PostgreSQL's management of the buffers is\nefficient with such a large one? In the past, that wasn't the case\nfor relatively small buffers; with the replacement of single-pass\nLRU, that has certainly changed, but I'd be surprised if anyone\ntested a buffer as large as 32G.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n", "msg_date": "Mon, 11 Jun 2007 11:34:14 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "Hi Andrew\nOn 11-Jun-07, at 11:34 AM, Andrew Sullivan wrote:\n\n> On Mon, Jun 11, 2007 at 11:09:42AM -0400, Dave Cramer wrote:\n>>> and set them to anything remotely close to 128GB.\n>>\n>> Well, we'd give 25% of it to postgres, and the rest to the OS.\n>\n> Are you quite sure that PostgreSQL's management of the buffers is\n> efficient with such a large one?\n\nNo, I'm not sure of this.\n> In the past, that wasn't the case\n> for relatively small buffers; with the replacement of single-pass\n> LRU, that has certainly changed, but I'd be surprised if anyone\n> tested a buffer as large as 32G.\n\nSo does anyone have experience above 32G ?\n\nDave\n>\n> A\n>\n> -- \n> Andrew Sullivan | [email protected]\n> The whole tendency of modern prose is away from concreteness.\n> \t\t--George Orwell\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Mon, 11 Jun 2007 13:22:04 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much ram is too much" }, { "msg_contents": "Hi Andrew\nOn 11-Jun-07, at 11:34 AM, Andrew Sullivan wrote:\n\n> On Mon, 
Jun 11, 2007 at 11:09:42AM -0400, Dave Cramer wrote:\n>>> and set them to anything remotely close to 128GB.\n>>\n>> Well, we'd give 25% of it to postgres, and the rest to the OS.\n>\n> Are you quite sure that PostgreSQL's management of the buffers is\n> efficient with such a large one?\n\nNo, I'm not sure of this.\n> In the past, that wasn't the case\n> for relatively small buffers; with the replacement of single-pass\n> LRU, that has certainly changed, but I'd be surprised if anyone\n> tested a buffer as large as 32G.\n\nSo does anyone have experience above 32G ?\n\nDave\n>\n> A\n>\n> -- Andrew Sullivan | [email protected]\n> The whole tendency of modern prose is away from concreteness.\n> \t\t--George Orwell\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Mon, 11 Jun 2007 13:59:08 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How much ram is too much" } ]
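Since the thread above turns on how much of the machine's RAM is actually handed to PostgreSQL versus left to the OS cache, a quick way to read the relevant knobs back from a running server is sketched below. The setting names are standard; note that on 8.1/8.2 the raw values for shared_buffers and effective_cache_size are reported in 8 kB blocks, so multiply by 8 to get kilobytes.

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers',        -- memory managed by PostgreSQL itself
                'effective_cache_size',  -- planner's estimate of OS cache available
                'work_mem',
                'maintenance_work_mem');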
[ { "msg_contents": "Hi,\n\n\nWhile monitioring we noticed that there are no details in the pg_statistics\nfor a particular table. Can you let us know what might be the reason? Also\nwhat steps can be taken care for adding the statistics?\n\nNote: The queries which are running on this table are taken longer time then\nal the other queries.\n\n\nThanks,\nNimesh.\n\nHi,\n \n \nWhile monitioring we noticed that there are no details in the pg_statistics for a particular table. Can you let us know what might be the reason? Also what steps can be taken care for adding the statistics?\n \nNote: The queries which are running on this table are taken longer time then al the other queries.\n \n \nThanks,\nNimesh.", "msg_date": "Mon, 11 Jun 2007 14:20:27 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_statistic doesnt contain details for specific table" }, { "msg_contents": "Nimesh Satam wrote:\n> While monitioring we noticed that there are no details in the pg_statistics\n> for a particular table. Can you let us know what might be the reason? Also\n> what steps can be taken care for adding the statistics?\n\nHave you ANALYZEd the table?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 11 Jun 2007 09:54:12 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statistic doesnt contain details for specific table" }, { "msg_contents": "Heikki,\n\n\nThank you for replying.\n\nWe have already used analyze command on the table.\nWe have also ran the vacuum analyze command.\n\n\nBut they are not helping.\n\nThanks,\nNimesh.\n\n\nOn 6/11/07, Heikki Linnakangas <[email protected]> wrote:\n>\n> Nimesh Satam wrote:\n> > While monitioring we noticed that there are no details in the\n> pg_statistics\n> > for a particular table. Can you let us know what might be the reason?\n> Also\n> > what steps can be taken care for adding the statistics?\n>\n> Have you ANALYZEd the table?\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nHeikki,\n \n \nThank you for replying. \n \nWe have already used analyze command on the table.\nWe have also ran the vacuum analyze command.\n \n \nBut they are not helping.\n \nThanks,\nNimesh.\nOn 6/11/07, Heikki Linnakangas <[email protected]> wrote:\nNimesh Satam wrote:> While monitioring we noticed that there are no details in the pg_statistics\n> for a particular table. Can you let us know what might be the reason? Also> what steps can be taken care for adding the statistics?Have you ANALYZEd the table?--  Heikki Linnakangas  EnterpriseDB   \nhttp://www.enterprisedb.com", "msg_date": "Mon, 11 Jun 2007 14:28:32 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_statistic doesnt contain details for specific table" }, { "msg_contents": "On Mon, Jun 11, 2007 at 02:28:32PM +0530, Nimesh Satam wrote:\n> We have already used analyze command on the table.\n> We have also ran the vacuum analyze command.\n> \n> But they are not helping.\n\nIs there any data in the table? What does ANALYZE VERBOSE or VACUUM\nANALYZE VERBOSE show for this table? 
Is there any chance that\nsomebody set all of the columns' statistics targets to zero?\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 11 Jun 2007 07:29:53 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statistic doesnt contain details for specific table" }, { "msg_contents": "Michael,\n\n\nFollowing is the output of Vacuum analze on the same table:\n\n\n*psql =# VACUUM ANALYZE verbose cam_attr;\nINFO: vacuuming \"public.cam_attr\"\nINFO: index \"cam_attr_pk\" now contains 11829 row versions in 63 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"cam_attr\": found 0 removable, 11829 nonremovable row versions in 103\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 236 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.cam_attr\"\nINFO: \"cam_attr\": scanned 103 of 103 pages, containing 11829 live rows and\n0 dead rows; 6000 rows in sample, 11829 estimated total rows\nVACUUM\n*\n\nAlso how do we check if the statistics are set to Zero for the table?\n\nRegards,\nNimesh.\n\n\nOn 6/11/07, Michael Fuhr <[email protected]> wrote:\n>\n> On Mon, Jun 11, 2007 at 02:28:32PM +0530, Nimesh Satam wrote:\n> > We have already used analyze command on the table.\n> > We have also ran the vacuum analyze command.\n> >\n> > But they are not helping.\n>\n> Is there any data in the table? What does ANALYZE VERBOSE or VACUUM\n> ANALYZE VERBOSE show for this table? Is there any chance that\n> somebody set all of the columns' statistics targets to zero?\n>\n> --\n> Michael Fuhr\n>\n\nMichael,\n \n \nFollowing is the output of Vacuum analze on the same table:\n \n \npsql =# VACUUM ANALYZE verbose cam_attr;INFO:  vacuuming \"public.cam_attr\"INFO:  index \"cam_attr_pk\" now contains 11829 row versions in 63 pagesDETAIL:  0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  \"cam_attr\": found 0 removable, 11829 nonremovable row versions in 103 pagesDETAIL:  0 dead row versions cannot be removed yet.There were 236 unused item pointers.\n0 pages are entirely empty.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  analyzing \"public.cam_attr\"INFO:  \"cam_attr\": scanned 103 of 103 pages, containing 11829 live rows and 0 dead rows; 6000 rows in sample, 11829 estimated total rows\nVACUUMAlso how do we check if the statistics are set to Zero for the table?\n \nRegards,\nNimesh.\n \n \nOn 6/11/07, Michael Fuhr <[email protected]> wrote:\nOn Mon, Jun 11, 2007 at 02:28:32PM +0530, Nimesh Satam wrote:> We have already used analyze command on the table.\n> We have also ran the vacuum analyze command.>> But they are not helping.Is there any data in the table?  What does ANALYZE VERBOSE or VACUUMANALYZE VERBOSE show for this table?  
Is there any chance that\nsomebody set all of the columns' statistics targets to zero?--Michael Fuhr", "msg_date": "Mon, 11 Jun 2007 19:22:24 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_statistic doesnt contain details for specific table" }, { "msg_contents": "On Mon, Jun 11, 2007 at 07:22:24PM +0530, Nimesh Satam wrote:\n> INFO: analyzing \"public.cam_attr\"\n> INFO: \"cam_attr\": scanned 103 of 103 pages, containing 11829 live rows and\n> 0 dead rows; 6000 rows in sample, 11829 estimated total rows\n\nLooks reasonable.\n\n> Also how do we check if the statistics are set to Zero for the table?\n\nSELECT attname, attstattarget\n FROM pg_attribute\n WHERE attrelid = 'public.cam_attr'::regclass\n AND attnum > 0\n AND NOT attisdropped;\n\nIf nobody has changed the statistics targets then they're all\nprobably -1. Negative attstattarget values mean to use the system\ndefault, which you can see with:\n\nSHOW default_statistics_target;\n\nHow exactly are you determining that no statistics are showing up\nfor this table? Are you running a query like the following?\n\nSELECT *\n FROM pg_stats\n WHERE schemaname = 'public' AND tablename = 'cam_attr';\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 11 Jun 2007 08:34:23 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_statistic doesnt contain details for specific table" } ]
[ { "msg_contents": "Hi all,\n\nIt seems that I have an issue with the performance of a PostgreSQL server.\n\nI'm running write-intensive, TPC-C like tests. The workload consist of \n150 to 200 thousand transactions. The performance varies dramatically, \nbetween 5 and more than 9 hours (I don't have the exact figure for the \nlongest experiment). Initially the server is relatively fast. It \nfinishes the first batch of 50k transactions in an hour. This is \nprobably due to the fact that the database is RAM-resident during this \ninterval. As soon as the database grows bigger than the RAM the \nperformance, not surprisingly, degrades, because of the slow disks.\nMy problem is that the performance is rather variable, and to me \nnon-deterministic. A 150k test can finish in approx. 3h30mins but \nconversely it can take more than 5h to complete.\nPreferably I would like to see *steady-state* performance (where my \ninterpretation of the steady-state is that the average \nthroughput/response time does not change over time). Is the steady-state \nachievable despite the MVCC and the inherent non-determinism between \nexperiments? What could be the reasons for the variable performance?\n- misconfiguration of the PG parameters (e.g. autovacuum does not cope \nwith the dead tuples on the MVCC architecture)\n- file fragmentation\n- index bloat\n- ???\nThe initial size of the database (actually the output of the 'du -h' \ncommand) is ~ 400 MB. The size increases dramatically, somewhere between \n600MB and 1.1GB\n\nI have doubted the client application at some point too. However, other \nserver combinations using different DBMS exhibit steady state \nperformance.As a matter of fact when PG is paired with Firebird, through \nstatement-based replication middleware, the performance of the pair is \nsteady too.\n\nThe hardware configuration:\nClient machine\n- 1.5 GHz CPU Pentium 4\n- 1GB Rambus RAM\n- Seagate st340810a IDE disk (40GB), 5400 rpms\n\nServer machine\n- 1.5 GHz CPU Pentium 4\n- 640 MB Rambus RAM\n- Seagate Barracuda 7200.9 rpms\n- Seagate st340810a IDE disk (40GB) - the WAL is stored on an ext2 partition\n\nThe Software configuration:\nThe client application is a multi-threaded Java client running on Win \n2000 Pro sp4\nThe database server version is 8.1.5 running on Fedora Core 6.\nPlease find attached:\n1 - the output of vmstat taken after the first 60k transactions were \nexecuted\n2 - the postgresql.conf file\n\nAny help would be appreciated.\n\nBest regards,\nVladimir\n-- \n\nVladimir Stankovic \tT: +44 20 7040 0273\nResearch Student/Research Assistant \tF: +44 20 7040 8585\nCentre for Software Reliability \tE: [email protected]\nCity University\t\t\t\t\nNorthampton Square, London EC1V 0HB", "msg_date": "Mon, 11 Jun 2007 14:04:49 +0100", "msg_from": "Vladimir Stankovic <[email protected]>", "msg_from_op": true, "msg_subject": "Variable (degrading) perfomance" }, { "msg_contents": "On 6/11/07, Vladimir Stankovic <[email protected]> wrote:\n> Hi all,\n>\n> It seems that I have an issue with the performance of a PostgreSQL server.\n>\n> I'm running write-intensive, TPC-C like tests. The workload consist of\n> 150 to 200 thousand transactions. The performance varies dramatically,\n> between 5 and more than 9 hours (I don't have the exact figure for the\n> longest experiment). Initially the server is relatively fast. It\n> finishes the first batch of 50k transactions in an hour. This is\n> probably due to the fact that the database is RAM-resident during this\n> interval. 
As soon as the database grows bigger than the RAM the\n> performance, not surprisingly, degrades, because of the slow disks.\n> My problem is that the performance is rather variable, and to me\n> non-deterministic. A 150k test can finish in approx. 3h30mins but\n> conversely it can take more than 5h to complete.\n> Preferably I would like to see *steady-state* performance (where my\n> interpretation of the steady-state is that the average\n> throughput/response time does not change over time). Is the steady-state\n> achievable despite the MVCC and the inherent non-determinism between\n> experiments? What could be the reasons for the variable performance?\n> - misconfiguration of the PG parameters (e.g. autovacuum does not cope\n> with the dead tuples on the MVCC architecture)\n> - file fragmentation\n> - index bloat\n> - ???\n\nvmstat is telling you that the server is i/o bound. an iostat will\ntell be helpful to tell you where things are binding up...either the\ndata volume, wal volume, or both. I suspect your sorts are spilling\nto disk which is likely the cause of the variable performance,\ninteracting with autovacuum. Another possibility is vacuum is bogging\nyou down. look for pg_tmp folders inside the database tree to see if\nthis is happening. Also you want to see if your server is swapping.\n\nfirst, I'd suggest bumping maintenance_work_mem to 256mb. I'd also\nsuggest bumping work_mem higher, but you are going to have to\ncalculate how far to go based on how many active queries with sort are\ngoing to fire simultaneously. It can be a fine line because your a\nbit underpowered memory but your database is small as well. bumping\nwork_mem but throwing your server into swap solves nothing.\n\nmerlin\n", "msg_date": "Fri, 15 Jun 2007 15:39:22 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable (degrading) perfomance" } ]
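The memory suggestions above can be tried out per session before editing postgresql.conf, which makes it easier to correlate a change with the iostat/vmstat picture. A minimal sketch follows; on 8.1 the values have to be given as plain numbers of kilobytes, since unit suffixes such as 'MB' only appeared in 8.2.

-- per-session experiment with larger sort and vacuum memory
SET work_mem = 16384;               -- 16 MB, expressed in kB
SET maintenance_work_mem = 262144;  -- 256 MB, expressed in kB
-- re-run the transaction mix in this session and watch whether pgsql_tmp
-- spill files and the iostat write load shrink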
[ { "msg_contents": "Hi All,\n\nI really hope someone can shed some light on my problem. I'm not sure if\nthis is a posgres or potgis issue.\n\nAnyway, we have 2 development laptops and one live server, somehow I\nmanaged to get the same query to perform very well om my laptop, but on\nboth the server and the other laptop it's really performing bad.\n\nAll three environments are running the same versions of everything, the\ntwo laptops are identical and the server is a monster compared to the\nlaptops.\n\nI have narrowed down the problem (I think) and it's the query planner\nusing different plans and I haven't got a clue why. Can anyone please\nshed some light on this?\n\nEXPLAIN ANALYZE\nSELECT l.*\nFROM layer l, theme t, visiblelayer v, layertype lt, style s\nWHERE l.the_geom && geomfromtext('POLYGON((-83.0 -90.0, -83.0 90.0, 97.0\n90.0, 97.0 -90.0, -83.0 -90.0))') \nAND t.name = 'default' \nAND v.themeid = t.id \nAND v.zoomlevel = 1 \nAND v.enabled \nAND l.layertypeid = v.layertypeid \nAND lt.id = l.layertypeid \nAND s.id = v.styleid \nORDER BY lt.zorder ASC\n\n----------------------------------\n\n Sort (cost=181399.77..182144.30 rows=297812 width=370) (actual\ntime=1384.976..1385.072 rows=180 loops=1)\n Sort Key: lt.zorder\n -> Hash Join (cost=31.51..52528.64 rows=297812 width=370) (actual\ntime=398.656..1384.574 rows=180 loops=1)\n Hash Cond: (l.layertypeid = v.layertypeid)\n -> Seq Scan on layer l (cost=0.00..43323.41 rows=550720\nwidth=366) (actual time=0.016..1089.049 rows=540490 loops=1)\n Filter: (the_geom &&\n'010300000001000000050000000000000000C054C000000000008056C00000000000C054C0000000000080564000000000004058400000000000805640000000000040584000000000008056C00000000000C054C000000000008056C0'::geometry)\n -> Hash (cost=31.42..31.42 rows=7 width=12) (actual\ntime=1.041..1.041 rows=3 loops=1)\n -> Hash Join (cost=3.90..31.42 rows=7 width=12) (actual\ntime=0.107..1.036 rows=3 loops=1)\n Hash Cond: (v.styleid = s.id)\n -> Nested Loop (cost=2.74..30.17 rows=7 width=16)\n(actual time=0.080..1.002 rows=3 loops=1)\n Join Filter: (v.themeid = t.id)\n -> Seq Scan on theme t (cost=0.00..1.01\nrows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Filter: (name = 'default'::text)\n -> Hash Join (cost=2.74..29.07 rows=7\nwidth=20) (actual time=0.071..0.988 rows=3 loops=1)\n Hash Cond: (lt.id = v.layertypeid)\n -> Seq Scan on layertype lt \n(cost=0.00..18.71 rows=671 width=8) (actual time=0.007..0.473 rows=671\nloops=1)\n -> Hash (cost=2.65..2.65 rows=7\nwidth=12) (actual time=0.053..0.053 rows=3 loops=1)\n -> Seq Scan on visiblelayer v \n(cost=0.00..2.65 rows=7 width=12) (actual time=0.022..0.047 rows=3 loops=1)\n Filter: ((zoomlevel = 1)\nAND enabled)\n -> Hash (cost=1.07..1.07 rows=7 width=4) (actual\ntime=0.020..0.020 rows=7 loops=1)\n -> Seq Scan on style s (cost=0.00..1.07\nrows=7 width=4) (actual time=0.005..0.012 rows=7 loops=1)\n Total runtime: 1385.313 ms\n\n----------------------------------\n\n Sort (cost=37993.10..37994.11 rows=403 width=266) (actual\ntime=32.053..32.451 rows=180 loops=1)\n Sort Key: lt.zorder\n -> Nested Loop (cost=0.00..37975.66 rows=403 width=266) (actual\ntime=0.130..31.254 rows=180 loops=1)\n -> Nested Loop (cost=0.00..30.28 rows=1 width=12) (actual\ntime=0.105..0.873 rows=3 loops=1)\n -> Nested Loop (cost=0.00..23.14 rows=1 width=4)\n(actual time=0.086..0.794 rows=3 loops=1)\n -> Nested Loop (cost=0.00..11.14 rows=2 width=8)\n(actual time=0.067..0.718 rows=3 loops=1)\n Join Filter: (s.id = v.styleid)\n -> Seq Scan on style s (cost=0.00..2.02\nrows=2 
width=4) (actual time=0.018..0.048 rows=7 loops=1)\n -> Seq Scan on visiblelayer v \n(cost=0.00..4.47 rows=7 width=12) (actual time=0.031..0.079 rows=3 loops=7)\n Filter: ((zoomlevel = 1) AND enabled)\n -> Index Scan using theme_id_pkey on theme t \n(cost=0.00..5.98 rows=1 width=4) (actual time=0.009..0.012 rows=1 loops=3)\n Index Cond: (v.themeid = t.id)\n Filter: (name = 'default'::text)\n -> Index Scan using layertype_id_pkey on layertype lt \n(cost=0.00..7.12 rows=1 width=8) (actual time=0.010..0.014 rows=1 loops=3)\n Index Cond: (lt.id = v.layertypeid)\n -> Index Scan using fki_layer_layertypeid on layer l \n(cost=0.00..36843.10 rows=88183 width=262) (actual time=0.031..9.825\nrows=60 loops=3)\n Index Cond: (l.layertypeid = v.layertypeid)\n Filter: (the_geom &&\n'010300000001000000050000000000000000C054C000000000008056C00000000000C054C0000000000080564000000000004058400000000000805640000000000040584000000000008056C00000000000C054C000000000008056C0'::geometry)\n Total runtime: 33.107 ms\n\n----------------------------------\n\nThanx in advance.\nChristo Du Preez\n\n\n", "msg_date": "Mon, 11 Jun 2007 17:10:02 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "test / live environment, major performance difference" }, { "msg_contents": "On 2007-06-11 Christo Du Preez wrote:\n> I really hope someone can shed some light on my problem. I'm not sure\n> if this is a posgres or potgis issue.\n> \n> Anyway, we have 2 development laptops and one live server, somehow I\n> managed to get the same query to perform very well om my laptop, but\n> on both the server and the other laptop it's really performing bad.\n\nYou write that you have 3 systems, but provided only two EXPLAIN ANALYZE\nresults. I will assume that the latter is from your laptop while the\nformer is from one of the badly performing systems.\n\n> All three environments are running the same versions of everything,\n> the two laptops are identical and the server is a monster compared to\n> the laptops.\n\nPlease provide information what exactly those \"same versions of\neverything\" are. What's the PostgreSQL configuration on each system? Do\nall three systems have the same configuration? Information on the\nhardware wouldn't hurt either.\n\n[...]\n> Sort (cost=181399.77..182144.30 rows=297812 width=370) (actual\n> time=1384.976..1385.072 rows=180 loops=1)\n[...]\n> Sort (cost=37993.10..37994.11 rows=403 width=266) (actual\n> time=32.053..32.451 rows=180 loops=1)\n\nThe row estimate of the former plan is way off (297812 estimated <-> 180\nactual). Did you analyze the table recently? Maybe you need to increase\nthe statistics target.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Mon, 11 Jun 2007 17:58:49 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "> -----Original Message-----\n> From: Christo Du Preez\n> Sent: Monday, June 11, 2007 10:10 AM\n>\n> I have narrowed down the problem (I think) and it's the query \n> planner using different plans and I haven't got a clue why. 
\n> Can anyone please shed some light on this?\n\nDifferent plans can be caused by several different things like different\nserver versions, different planner settings in the config file, different\nschemas, or different statistics. You say the server versions are the same,\nso that's not it. Is the schema the same? One isn't missing indexes that\nthe other has? Do they both have the same data, or at least very close to\nthe same data? Have you run analyze on both of them to update their\nstatistics? Do they have the same planner settings in the config file? I\nwould check that stuff out and see if it helps.\n\nDave\n\n", "msg_date": "Mon, 11 Jun 2007 11:16:52 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "\nOn 2007-06-11 Christo Du Preez wrote:\n> I really hope someone can shed some light on my problem. I'm not sure\n> if this is a posgres or potgis issue.\n>\n> Anyway, we have 2 development laptops and one live server, somehow I\n> managed to get the same query to perform very well om my laptop, but\n> on both the server and the other laptop it's really performing bad.\n\nOne simple possibility that bit me in the past: If you do pg_dump/pg_restore to create a copy of the database, you have to ANALYZE the newly-restored database. I mistakenly assumed that pg_restore would do this, but you have to run ANALYZE explicitely after a restore.\n\nCraig\n\n", "msg_date": "Mon, 11 Jun 2007 11:26:55 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "I wonder if my dump/restore routine isn't causing this issue. Seeing\nthat I do the db development on my laptop (the fast one) and then\nrestores it on the other two machines. I have confirmed if all the\nindexes are present after a restore.\n\nThis is the routine:\n\n/usr/local/pgsql/bin/pg_dump -t layer mapdb | gzip > layer.gz\n\nrsync --progress --rsh=ssh layer.gz\nroot@???.???.???.???:/home/postgres/layer.gz\n\n--\n\n/usr/local/pgsql/bin/pg_dump -t visiblelayer mapdb | gzip > visiblelayer.gz\n\nrsync --progress --rsh=ssh visiblelayer.gz\nroot@???.???.???.???:/home/postgres/visiblelayer.gz\n\n--\n\n/usr/local/pgsql/bin/pg_dump -t style mapdb | gzip > style.gz\n\nrsync --progress --rsh=ssh style.gz\nroot@???.???.???.???:/home/postgres/style.gz\n\n--\n\n/usr/local/pgsql/bin/pg_dump -t layertype mapdb | gzip > layertype.gz\n\nrsync --progress --rsh=ssh layertype.gz\nroot@???.???.???.???:/home/postgres/layertype.gz\n\n--\n\nDROP TABLE visiblelayer;\nDROP TABLE style;\nDROP TABLE layer;\nDROP TABLE layertype;\n\ngunzip -c layertype.gz | /usr/local/pgsql/bin/psql mapdb\ngunzip -c style.gz | /usr/local/pgsql/bin/psql mapdb\ngunzip -c visiblelayer.gz | /usr/local/pgsql/bin/psql mapdb\ngunzip -c layer.gz | /usr/local/pgsql/bin/psql mapdb\n\n/usr/local/pgsql/bin/vacuumdb -d mapdb -z -v\n\nCraig James wrote:\n>\n> On 2007-06-11 Christo Du Preez wrote:\n>> I really hope someone can shed some light on my problem. 
I'm not sure\n>> if this is a posgres or potgis issue.\n>>\n>> Anyway, we have 2 development laptops and one live server, somehow I\n>> managed to get the same query to perform very well om my laptop, but\n>> on both the server and the other laptop it's really performing bad.\n>\n> One simple possibility that bit me in the past: If you do\n> pg_dump/pg_restore to create a copy of the database, you have to\n> ANALYZE the newly-restored database. I mistakenly assumed that\n> pg_restore would do this, but you have to run ANALYZE explicitely\n> after a restore.\n>\n> Craig\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n-- \nChristo Du Preez\n\nSenior Software Engineer\nMecola IT\nMobile:\t +27 [0]83 326 8087\nSkype:\t christodupreez\nWebsite: http://www.locateandtrade.co.za\n\n", "msg_date": "Tue, 12 Jun 2007 08:36:28 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "Where do I set the planner settings or are you reffering to settings in\npostgres.conf that may affect the planner?\n\nThe one badly performing laptop is the same as mine (the fast one) and\nthe server is much more powerful.\n\nLaptops: Intel Centrino Duo T2600 @ 2.16GHz, 1.98 GB RAM\n\nServer: 2 xIntel Pentium D CPU 3.00GHz, 4 GB RAM\n\nAll three systems are running Suse 10.2, with the same PosgreSQL, same\nconfigs, same databases. As far as I know, same everything.\n\nPostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20061115 (prerelease) (SUSE Linux)\nPOSTGIS=\"1.2.1\" GEOS=\"3.0.0rc4-CAPI-1.3.3\" PROJ=\"Rel. 4.5.0, 22 Oct\n2006\" USE_STATS\n\nThanx for all the advice\n\nDave Dutcher wrote:\n>> -----Original Message-----\n>> From: Christo Du Preez\n>> Sent: Monday, June 11, 2007 10:10 AM\n>>\n>> I have narrowed down the problem (I think) and it's the query \n>> planner using different plans and I haven't got a clue why. \n>> Can anyone please shed some light on this?\n>> \n>\n> Different plans can be caused by several different things like different\n> server versions, different planner settings in the config file, different\n> schemas, or different statistics. You say the server versions are the same,\n> so that's not it. Is the schema the same? One isn't missing indexes that\n> the other has? Do they both have the same data, or at least very close to\n> the same data? Have you run analyze on both of them to update their\n> statistics? Do they have the same planner settings in the config file? I\n> would check that stuff out and see if it helps.\n>\n> Dave\n>\n>\n>\n> \n\n-- \nChristo Du Preez\n\nSenior Software Engineer\nMecola IT\nMobile:\t +27 [0]83 326 8087\nSkype:\t christodupreez\nWebsite: http://www.locateandtrade.co.za\n\n\n", "msg_date": "Tue, 12 Jun 2007 09:38:06 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "> From: Christo Du Preez\n> Sent: Tuesday, June 12, 2007 2:38 AM\n> \n> Where do I set the planner settings or are you reffering to \n> settings in postgres.conf that may affect the planner?\n> \n\nYes I'm reffering to settings in postgres.conf. 
I'm wondering if\nenable_indexscan or something got turned off on the server for some reason.\nHere is a description of those settings:\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-query.html\n\nSo when you move data from the laptop to the server, I see that your script\ncorrectly runs an analyze after the load, so have you run analyze on the\nfast laptop lately? Hopefully running analyze wouldn't make the planner\nchoose a worse plan on the laptop, but if we are trying to get things\nconsistant between the laptop and server, that is something I would try.\n\nIf the consistancy problem really is a problem of the planner not using\nindex scans on the server, then if you can, please post the table definition\nfor the table with a million rows and an EXPLAIN ANALYZE of a query which\nselects a few rows from the table.\n\nDave\n\n", "msg_date": "Tue, 12 Jun 2007 09:28:11 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" } ]
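Given that the thread above comes down to two installations planning the same query differently, a practical next step is to capture the planner-related settings and the optimizer's row estimates on each machine and compare the output. The sketch below uses the table names visible in the plans earlier in the thread and is only meant as a starting checklist.

-- run on each machine and diff the results
SELECT name, setting
  FROM pg_settings
 WHERE name LIKE 'enable%'          -- enable_indexscan, enable_seqscan, ...
    OR name IN ('random_page_cost',
                'effective_cache_size',
                'default_statistics_target');

-- does the planner have fresh estimates for the restored tables?
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('layer', 'visiblelayer', 'layertype', 'style', 'theme');

-- refresh statistics on the big table after a restore
ANALYZE VERBOSE layer;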
[ { "msg_contents": "Hi all,\n\nIt seems that I have an issue with the performance of a PostgreSQL server.\n\nI'm running write-intensive, TPC-C like tests. The workload consist of \n150 to 200 thousand transactions. The performance varies dramatically, \nbetween 5 and more than 9 hours (I don't have the exact figure for the \nlongest experiment). Initially the server is relatively fast. It \nfinishes the first batch of 50k transactions in an hour. This is \nprobably due to the fact that the database is RAM-resident during this \ninterval. As soon as the database grows bigger than the RAM the \nperformance, not surprisingly, degrades, because of the slow disks.\nMy problem is that the performance is rather variable, and to me \nnon-deterministic. A 150k test can finish in approx. 3h30mins but \nconversely it can take more than 5h to complete.\nPreferably I would like to see *steady-state* performance (where my \ninterpretation of the steady-state is that the average \nthroughput/response time does not change over time). Is the steady-state \nachievable despite the MVCC and the inherent non-determinism between \nexperiments? What could be the reasons for the variable performance?\n- misconfiguration of the PG parameters (e.g. autovacuum does not cope \nwith the dead tuples on the MVCC architecture)\n- file fragmentation\n- index bloat\n- ???\nThe initial size of the database (actually the output of the 'du -h' \ncommand) is ~ 400 MB. The size increases dramatically, somewhere between \n600MB and 1.1GB\n\nI have doubted the client application at some point too. However, other \nserver combinations using different DBMS exhibit steady state \nperformance.As a matter of fact when PG is paired with Firebird, through \nstatement-based replication middleware, the performance of the pair is \nsteady too.\n\nThe hardware configuration:\nClient machine\n- 1.5 GHz CPU Pentium 4\n- 1GB Rambus RAM\n- Seagate st340810a IDE disk (40GB), 5400 rpms\n\nServer machine\n- 1.5 GHz CPU Pentium 4\n- 640 MB Rambus RAM\n- Seagate Barracuda 7200.9 rpms\n- Seagate st340810a IDE disk (40GB) - the WAL is stored on an ext2 \npartition\n\nThe Software configuration:\nThe client application is a multi-threaded Java client running on Win \n2000 Pro sp4\nThe database server version is 8.1.5 running on Fedora Core 6.\nPlease find attached:\n1 - the output of vmstat taken after the first 60k transactions were \nexecuted\n2 - the postgresql.conf file\n\nAny help would be appreciated.\n\nBest regards,\nVladimir\n\nP.S. Apologies for possible multiple posts\n-- \n\nVladimir Stankovic T: +44 20 7040 0273\nResearch Student/Research Assistant F: +44 20 7040 8585\nCentre for Software Reliability E: [email protected]\nCity University \nNorthampton Square, London EC1V 0HB", "msg_date": "Mon, 11 Jun 2007 18:21:19 +0100", "msg_from": "Vladimir Stankovic <[email protected]>", "msg_from_op": true, "msg_subject": "Variable (degrading) performance" }, { "msg_contents": "Vladimir Stankovic wrote:\n> I'm running write-intensive, TPC-C like tests. The workload consist of \n> 150 to 200 thousand transactions. The performance varies dramatically, \n> between 5 and more than 9 hours (I don't have the exact figure for the \n> longest experiment). Initially the server is relatively fast. It \n> finishes the first batch of 50k transactions in an hour. This is \n> probably due to the fact that the database is RAM-resident during this \n> interval. 
As soon as the database grows bigger than the RAM the \n> performance, not surprisingly, degrades, because of the slow disks.\n> My problem is that the performance is rather variable, and to me \n> non-deterministic. A 150k test can finish in approx. 3h30mins but \n> conversely it can take more than 5h to complete.\n> Preferably I would like to see *steady-state* performance (where my \n> interpretation of the steady-state is that the average \n> throughput/response time does not change over time). Is the steady-state \n> achievable despite the MVCC and the inherent non-determinism between \n> experiments? What could be the reasons for the variable performance?\n\nSteadiness is a relative; you'll never achieve perfectly steady \nperformance where every transaction takes exactly X milliseconds. That \nsaid, PostgreSQL is not as steady as many other DBMS's by nature, \nbecause of the need to vacuum. Another significant source of \nunsteadiness is checkpoints, though it's not as bad with fsync=off, like \nyou're running.\n\nI'd suggest using the vacuum_cost_delay to throttle vacuums so that they \ndon't disturb other transactions as much. You might also want to set up \nmanual vacuums for the bigger tables, instead of relying on autovacuum, \nbecause until the recent changes in CVS head, autovacuum can only vacuum \none table at a time, and while it's vacuuming a big table, the smaller \nheavily-updated tables are neglected.\n\n> The database server version is 8.1.5 running on Fedora Core 6.\n\nHow about upgrading to 8.2? You might also want to experiment with CVS \nHEAD to get the autovacuum improvements, as well as a bunch of other \nperformance improvements.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 11 Jun 2007 18:51:43 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable (degrading) performance" }, { "msg_contents": "Heikki,\n\nThanks for the response.\n\nHeikki Linnakangas wrote:\n> Vladimir Stankovic wrote:\n>> I'm running write-intensive, TPC-C like tests. The workload consist \n>> of 150 to 200 thousand transactions. The performance varies \n>> dramatically, between 5 and more than 9 hours (I don't have the exact \n>> figure for the longest experiment). Initially the server is \n>> relatively fast. It finishes the first batch of 50k transactions in \n>> an hour. This is probably due to the fact that the database is \n>> RAM-resident during this interval. As soon as the database grows \n>> bigger than the RAM the performance, not surprisingly, degrades, \n>> because of the slow disks.\n>> My problem is that the performance is rather variable, and to me \n>> non-deterministic. A 150k test can finish in approx. 3h30mins but \n>> conversely it can take more than 5h to complete.\n>> Preferably I would like to see *steady-state* performance (where my \n>> interpretation of the steady-state is that the average \n>> throughput/response time does not change over time). Is the \n>> steady-state achievable despite the MVCC and the inherent \n>> non-determinism between experiments? What could be the reasons for \n>> the variable performance?\n>\n> Steadiness is a relative; you'll never achieve perfectly steady \n> performance where every transaction takes exactly X milliseconds. That \n> said, PostgreSQL is not as steady as many other DBMS's by nature, \n> because of the need to vacuum. 
Another significant source of \n> unsteadiness is checkpoints, though it's not as bad with fsync=off, \n> like you're running.\nWhat I am hoping to see is NOT the same value for all the executions of \nthe same type of transaction (after some transient period). Instead, I'd \nlike to see that if I take appropriately-sized set of transactions I \nwill see at least steady-growth in transaction average times, if not \nexactly the same average. Each chunk would possibly include sudden \nperformance drop due to the necessary vacuum and checkpoints. The \nperformance might be influenced by the change in the data set too.\nI am unhappy about the fact that durations of experiments can differ \neven 30% (having in mind that they are not exactly the same due to the \nnon-determinism on the client side) . I would like to eliminate this \nvariability. Are my expectations reasonable? What could be the cause(s) \nof this variability?\n>\n> I'd suggest using the vacuum_cost_delay to throttle vacuums so that \n> they don't disturb other transactions as much. You might also want to \n> set up manual vacuums for the bigger tables, instead of relying on \n> autovacuum, because until the recent changes in CVS head, autovacuum \n> can only vacuum one table at a time, and while it's vacuuming a big \n> table, the smaller heavily-updated tables are neglected.\n>\n>> The database server version is 8.1.5 running on Fedora Core 6.\n>\n> How about upgrading to 8.2? You might also want to experiment with CVS \n> HEAD to get the autovacuum improvements, as well as a bunch of other \n> performance improvements.\n>\nI will try these, but as I said my primary goal is to have \nsteady/'predictable' performance, not necessarily to obtain the fastest \nPG results.\n\nBest regards,\nVladimir\n\n-- \nVladimir Stankovic \tT: +44 20 7040 0273\nResearch Student/Research Assistant \tF: +44 20 7040 8585\nCentre for Software Reliability \tE: [email protected]\nCity University\t\t\t\t\nNorthampton Square, London EC1V 0HB \n\n", "msg_date": "Tue, 12 Jun 2007 16:24:30 +0100", "msg_from": "Vladimir Stankovic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Variable (degrading) performance" }, { "msg_contents": "Vladimir Stankovic wrote:\n> What I am hoping to see is NOT the same value for all the executions of \n> the same type of transaction (after some transient period). Instead, I'd \n> like to see that if I take appropriately-sized set of transactions I \n> will see at least steady-growth in transaction average times, if not \n> exactly the same average. Each chunk would possibly include sudden \n> performance drop due to the necessary vacuum and checkpoints. The \n> performance might be influenced by the change in the data set too.\n> I am unhappy about the fact that durations of experiments can differ \n> even 30% (having in mind that they are not exactly the same due to the \n> non-determinism on the client side) . I would like to eliminate this \n> variability. Are my expectations reasonable? What could be the cause(s) \n> of this variability?\n\nYou should see that if you define your \"chunk\" to be long enough. Long \nenough is probably hours, not minutes or seconds. As I said earlier, \ncheckpoints and vacuum are a major source of variability.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 12 Jun 2007 19:20:51 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable (degrading) performance" } ]
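As a rough sketch of the throttling and per-table manual vacuums suggested in the thread above (the values are only examples, not tuned recommendations, and "orders" is a placeholder table name):

  SET vacuum_cost_delay = 20;     -- sleep 20 ms each time the vacuum cost limit is reached
  SET vacuum_cost_limit = 200;
  VACUUM VERBOSE ANALYZE orders;  -- scheduled vacuum of a big, heavily-updated table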
[ { "msg_contents": "Configuration\nOS: FreeBSD 6.1 Stable\nPostgresql: 8.1.4\n\nRAID card 1 with 8 drives. 7200 RPM SATA RAID10\nRAID card 2 with 4 drives. 10K RPM SATA RAID10\n\nBesides having pg_xlog in the 10K RPM drives what else can I do to best use \nthose drives other than putting some data in them?\n\nIostat shows the drives getting used very little, even during constant \nupdates and vacuum.\n\nSome of the postgresl.conf settings that may be relevant.\nwal_buffers = 64\ncheckpoint_segments = 64\n\nIf nothing else I will start to put index files in the 10K RPM RAID. \n\nAs for the version of postgreql.. we are likely getting a second \nmachine, break off some of the data, change programs to read data from both \nand at some point when there is little data in the 8.1.4, upgrade the 8.1.4 \nmachine. The new machine will have 8.2.4\n\nWe have a lot of historical data that never changes which is the main \ndriving factor behind looking to split the database into current and \nhistorical. \n", "msg_date": "Mon, 11 Jun 2007 21:14:43 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Best use of second controller with faster disks?" }, { "msg_contents": "\nOn Jun 11, 2007, at 9:14 PM, Francisco Reyes wrote:\n\n> RAID card 1 with 8 drives. 7200 RPM SATA RAID10\n> RAID card 2 with 4 drives. 10K RPM SATA RAID10\n>\n\nwhat raid card have you got? i'm playing with an external enclosure \nwhich has an areca sata raid in it and connects to the host via fibre \nchannel. it is wicked fast, and supports a RAID6 which seems to be \nas fast as the RAID10 in my initial testing on this unit.\n\nWhat drives are you booting from? If you're booting from the 4-drive \nRAID10, perhaps split that into a pair of RAID1's and boot from one \nand use the other as the pg log disk.\n\nhowever, I must say that with my 16 disk array, peeling the log off \nthe main volume actually slowed it down a bit. I think that the raid \ncard is just so fast at doing the RAID6 computations and having the \nstriping is a big gain over the dedicated RAID1 for the log.\n\nRight now I'm testing an 8-disk RAID6 configuration on the same \ndevice; it seems slower than the 16-disk RAID6, but I haven't yet \ntried 8-disk RAID10 with dedicated log yet.\n\n> Besides having pg_xlog in the 10K RPM drives what else can I do to \n> best use those drives other than putting some data in them?\n>\n> Iostat shows the drives getting used very little, even during \n> constant updates and vacuum.\n>\n> Some of the postgresl.conf settings that may be relevant.\n> wal_buffers = 64\n> checkpoint_segments = 64\n\ni'd bump checkpoint_segements up to 256 given the amount of disk \nyou've got dedicated to it. be sure to increase checkpoint timeout too.\n\nAnd if you can move to 6.2 FreeBSD you should pick up some speed on \nthe network layer and possibly the disk I/O.\n\n", "msg_date": "Tue, 12 Jun 2007 14:24:59 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best use of second controller with faster disks?" }, { "msg_contents": "Vivek Khera writes:\n\n> what raid card have you got?\n\n2 3ware cards.\nI believe both are 9550SX \n \n> i'm playing with an external enclosure \n> which has an areca sata raid in it and connects to the host via fibre \n> channel. \n\nWhat is the OS? 
FreeBSD?\nOne of the reasons I stick with 3ware is that it is well supported in \nFreeBSD and has a pretty decent management program\n\n> it is wicked fast, and supports a RAID6 which seems to be \n> as fast as the RAID10 in my initial testing on this unit.\n\nMy next \"large\" machine I am also leaning towards RAID6. The space different \nis just too big to ignore.\n3ware recommends RAID6 for 5+ drives.\n \n> What drives are you booting from?\n\nBooting from the 8 drive raid.\n\n> If you're booting from the 4-drive \n> RAID10, perhaps split that into a pair of RAID1's and boot from one \n> and use the other as the pg log disk.\n\nMaybe for the next machine.\n\n> however, I must say that with my 16 disk array, peeling the log off \n> the main volume actually slowed it down a bit. I think that the raid \n> card is just so fast at doing the RAID6 computations and having the \n> striping is a big gain over the dedicated RAID1 for the log.\n\nCould be.\nSeems like RAID6 is supposed to be a good balance between performance and \navailable space.\n\n> Right now I'm testing an 8-disk RAID6 configuration on the same \n> device; it seems slower than the 16-disk RAID6, but I haven't yet \n> tried 8-disk RAID10 with dedicated log yet.\n\nIs all this within the same controller?\n \n> i'd bump checkpoint_segements up to 256 given the amount of disk \n> you've got dedicated to it. be sure to increase checkpoint timeout too.\n\nThanks. Will try that.\n", "msg_date": "Tue, 12 Jun 2007 20:33:03 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best use of second controller with faster disks?" }, { "msg_contents": "\nOn Jun 12, 2007, at 8:33 PM, Francisco Reyes wrote:\n\n> Vivek Khera writes:\n>\n>> what raid card have you got?\n>\n> 2 3ware cards.\n> I believe both are 9550SX\n>> i'm playing with an external enclosure which has an areca sata \n>> raid in it and connects to the host via fibre channel.\n>\n> What is the OS? FreeBSD?\n\nFreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful \njob ensuring that everything integrated well to the point of talking \nwith various FreeBSD developers, LSI engineers, etc., and sent me a \nfully tested system end-to-end with a Sun X4100 M2, LSI 4Gb Fibre \ncard, and their RAID array, with FreeBSD installed already.\n\nI can't recommend them enough -- if you need a high-end RAID system \nfor FreeBSD (or other OS, I suppose) do check them out.\n\n>> Right now I'm testing an 8-disk RAID6 configuration on the same \n>> device; it seems slower than the 16-disk RAID6, but I haven't yet \n>> tried 8-disk RAID10 with dedicated log yet.\n>\n> Is all this within the same controller?\n\nYes, the system is in testing right now, so I'm playing with all \nsort of different disk configurations and it seems that the 16-disk \nRAID6 is the winner so far. The next best was the 14-disk RAID6 + 2 \ndisk RAID1 for log.\n\nI have separate disks built-in to the system for boot.\n\n", "msg_date": "Wed, 13 Jun 2007 10:02:40 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best use of second controller with faster disks?" }, { "msg_contents": "Vivek Khera writes:\n\n> FreeBSD, indeed. 
The vendor, Partners Data Systems, did a wonderful \n\nThis one?\nhttp://www.partnersdata.com\n\n> job ensuring that everything integrated well to the point of talking \n> with various FreeBSD developers, LSI engineers, etc., and sent me a \n> fully tested system end-to-end with a Sun X4100 M2, LSI 4Gb Fibre \n> card, and their RAID array, with FreeBSD installed already.\n\nIs there a management program in FreeBSD for the Areca card?\nSo I understand the setup you are describing..\nMachine has Areca controller\nConnects to external enclosure\nEnclosure has LSI controller \n \n> I have separate disks built-in to the system for boot.\n\nHow did you get FreeBSD to newfs such a large setup?\nnewfs -s /dev/raw-disk?\n\nWhat are the speed/size of the disks?\n7K rpm?\n\n", "msg_date": "Wed, 13 Jun 2007 22:36:19 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best use of second controller with faster disks?" }, { "msg_contents": "\nOn Jun 13, 2007, at 10:36 PM, Francisco Reyes wrote:\n\n>> FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful\n>\n> This one?\n> http://www.partnersdata.com\n>\n\nthat's the one.\n\n>> job ensuring that everything integrated well to the point of \n>> talking with various FreeBSD developers, LSI engineers, etc., and \n>> sent me a fully tested system end-to-end with a Sun X4100 M2, LSI \n>> 4Gb Fibre card, and their RAID array, with FreeBSD installed \n>> already.\n>\n> Is there a management program in FreeBSD for the Areca card?\n> So I understand the setup you are describing..\n> Machine has Areca controller\n> Connects to external enclosure\n> Enclosure has LSI controller\n\nIn the past I've had systems with RAID cards: LSI and Adaptec. The \nLSI 320-2X is the fastest one I've ever had. The adaptec ones suck \nbecause there is no management software for them on the newer cards \nfor freebsd, especially under amd64.\n\nThe system I'm working on now is thus:\n\nSun X4100 M2 with an LSI 4Gb fibre channel card connected to an \nexternal self-contained RAID enclosure, the Triton RAID from Partners \nData. The Triton unit has in it an Areca SATA RAID controller and 16 \ndisks.\n\n>> I have separate disks built-in to the system for boot.\n>\n> How did you get FreeBSD to newfs such a large setup?\n> newfs -s /dev/raw-disk?\n\nIt is only 2Tb raw, 1.7Tb formatted :-) I just used sysinstall to \nrun fdisk, label, and newfs for me. Since it is just postgres data, \nno file will ever be larger than 1Gb I didn't need to make any \nadjustments to the newfs parameters.\n\n>\n> What are the speed/size of the disks?\n> 7K rpm?\n\nI splurged for the 10kRPM drives, even though they are smaller 150Gb \neach.\n\n", "msg_date": "Thu, 14 Jun 2007 10:33:13 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best use of second controller with faster disks?" }, { "msg_contents": "Vivek Khera writes:\n\n> no file will ever be larger than 1Gb I didn't need to make any \n> adjustments to the newfs parameters.\n\nYou should consider using \"newfs -i 65536\" for partitions to be used for \npostgresql. You will get more usable space and will still have lots of free \ninodes.\n\nFor my next postgresql server I am likely going to do \"newfs -i 262144\"\n\nOn my current primary DB I have 2049 inodes in use and 3,539,389 free.\nThat was with newfs -i 65536. 
\n", "msg_date": "Tue, 19 Jun 2007 15:36:02 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best use of second controller with faster disks?" } ]
[ { "msg_contents": "Good day,\n\nI have noticed that my server never uses indexing. No matter what I do.\n\nAs an example I took a table with about 650 rows, having a parentid\nfield with an index on parentid.\n\nEXPLAIN ANALYZE\nSELECT *\n FROM layertype\nwhere parentid = 300;\n\nOn my laptop the explain analyze looks like this:\n\n\"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\nrows=1 width=109)\"\n\" Index Cond: (parentid = 300)\"\n\nand on the problem server:\n\n\"Seq Scan on layertype (cost=0.00..20.39 rows=655 width=110)\"\n\" Filter: (parentid = 300)\"\n\n.........\n\nI have dropped the index, recreated it, vacuumed the table, just about\neverything I could think of, And there is just no way I can get the\nquery planner to use the index.\n\nPostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20061115 (prerelease) (SUSE Linux)\nPOSTGIS=\"1.2.1\" GEOS=\"3.0.0rc4-CAPI-1.3.3\" PROJ=\"Rel. 4.5.0, 22 Oct\n2006\" USE_STATS\n\n", "msg_date": "Tue, 12 Jun 2007 15:32:40 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "try it with a table with 650K rows...\n\nOn Tue, 2007-06-12 at 15:32 +0200, Christo Du Preez wrote:\n> Good day,\n> \n> I have noticed that my server never uses indexing. No matter what I do.\n> \n> As an example I took a table with about 650 rows, having a parentid\n> field with an index on parentid.\n> \n> EXPLAIN ANALYZE\n> SELECT *\n> FROM layertype\n> where parentid = 300;\n> \n> On my laptop the explain analyze looks like this:\n> \n> \"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\n> rows=1 width=109)\"\n> \" Index Cond: (parentid = 300)\"\n> \n> and on the problem server:\n> \n> \"Seq Scan on layertype (cost=0.00..20.39 rows=655 width=110)\"\n> \" Filter: (parentid = 300)\"\n> \n> .........\n> \n> I have dropped the index, recreated it, vacuumed the table, just about\n> everything I could think of, And there is just no way I can get the\n> query planner to use the index.\n> \n> PostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n> 20061115 (prerelease) (SUSE Linux)\n> POSTGIS=\"1.2.1\" GEOS=\"3.0.0rc4-CAPI-1.3.3\" PROJ=\"Rel. 4.5.0, 22 Oct\n> 2006\" USE_STATS\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Tue, 12 Jun 2007 09:45:20 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "\nOn Jun 12, 2007, at 8:32 , Christo Du Preez wrote:\n\n> I have noticed that my server never uses indexing. No matter what I \n> do.\n>\n> As an example I took a table with about 650 rows, having a parentid\n> field with an index on parentid.\n>\n> EXPLAIN ANALYZE\n> SELECT *\n> FROM layertype\n> where parentid = 300;\n\nThe planner weighs the cost of the different access methods and \nchoses the one that it believes is lowest in cost. An index scan is \nnot always faster than a sequential scan. With so few rows, it's \nprobably faster for the server to read the whole table rather than \nreading the index and looking up the corresponding row. 
If you want \nto test this, you can set enable_seqscan to false and try running \nyour query again.\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config- \nquery.html#RUNTIME-CONFIG-QUERY-ENABLE\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 12 Jun 2007 08:47:05 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "\n\"Christo Du Preez\" <[email protected]> writes:\n\n> On my laptop the explain analyze looks like this:\n>\n> \"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\n> rows=1 width=109)\"\n> \" Index Cond: (parentid = 300)\"\n\nThat's not \"explain analyze\", that's just plain \"explain\".\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 12 Jun 2007 14:53:26 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "The actual table I noticed the problem has a million rows and it still\ndoesn't use indexing\n\nReid Thompson wrote:\n> try it with a table with 650K rows...\n>\n> On Tue, 2007-06-12 at 15:32 +0200, Christo Du Preez wrote:\n> \n>> Good day,\n>>\n>> I have noticed that my server never uses indexing. No matter what I do.\n>>\n>> As an example I took a table with about 650 rows, having a parentid\n>> field with an index on parentid.\n>>\n>> EXPLAIN ANALYZE\n>> SELECT *\n>> FROM layertype\n>> where parentid = 300;\n>>\n>> On my laptop the explain analyze looks like this:\n>>\n>> \"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\n>> rows=1 width=109)\"\n>> \" Index Cond: (parentid = 300)\"\n>>\n>> and on the problem server:\n>>\n>> \"Seq Scan on layertype (cost=0.00..20.39 rows=655 width=110)\"\n>> \" Filter: (parentid = 300)\"\n>>\n>> .........\n>>\n>> I have dropped the index, recreated it, vacuumed the table, just about\n>> everything I could think of, And there is just no way I can get the\n>> query planner to use the index.\n>>\n>> PostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n>> 20061115 (prerelease) (SUSE Linux)\n>> POSTGIS=\"1.2.1\" GEOS=\"3.0.0rc4-CAPI-1.3.3\" PROJ=\"Rel. 4.5.0, 22 Oct\n>> 2006\" USE_STATS\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>> \n>\n>\n> \n\n-- \nChristo Du Preez\n\nSenior Software Engineer\nMecola IT\nMobile:\t +27 [0]83 326 8087\nSkype:\t christodupreez\nWebsite: http://www.locateandtrade.co.za\n\n", "msg_date": "Tue, 12 Jun 2007 16:11:33 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "Christo Du Preez wrote:\n> The actual table I noticed the problem has a million rows and it still\n> doesn't use indexing\n\nSo ANALYZE it.\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org/\n\"Amanece. 
(Ignacio Reyes)\n El Cerro San Crist�bal me mira, c�nicamente, con ojos de virgen\"\n", "msg_date": "Tue, 12 Jun 2007 10:24:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "On Tue, Jun 12, 2007 at 03:32:40PM +0200, Christo Du Preez wrote:\n> As an example I took a table with about 650 rows, having a parentid\n> field with an index on parentid.\n\nTry a bigger table. Using an index for only 650 rows is almost always\nsuboptimal, so it's no wonder the planner doesn't use the index.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 12 Jun 2007 16:30:33 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "On Tue, Jun 12, 2007 at 04:11:33PM +0200, Christo Du Preez wrote:\n> The actual table I noticed the problem has a million rows and it still\n> doesn't use indexing\n\nThen please post an EXPLAIN ANALYZE of the query that is slow, along with the\ntable definition and indexes.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 12 Jun 2007 16:59:10 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "Christo Du Preez <[email protected]> writes:\n> On my laptop the explain analyze looks like this:\n\n> \"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\n> rows=1 width=109)\"\n> \" Index Cond: (parentid = 300)\"\n\nOK ...\n\n> and on the problem server:\n\n> \"Seq Scan on layertype (cost=0.00..20.39 rows=655 width=110)\"\n> \" Filter: (parentid = 300)\"\n\nThe server thinks that every row of the table matches the WHERE clause.\nThat being the case, it's making the right choice to use a seqscan.\nThe question is why is the rows estimate so far off? Have you ANALYZEd\nthe table lately?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2007 12:09:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference " }, { "msg_contents": "Yes, I have just about tried every combination of vacuum on the\ndatabase. Just to make 100% sure.\n\nTom Lane wrote:\n> Christo Du Preez <[email protected]> writes:\n> \n>> On my laptop the explain analyze looks like this:\n>> \n>\n> \n>> \"Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27\n>> rows=1 width=109)\"\n>> \" Index Cond: (parentid = 300)\"\n>> \n>\n> OK ...\n>\n> \n>> and on the problem server:\n>> \n>\n> \n>> \"Seq Scan on layertype (cost=0.00..20.39 rows=655 width=110)\"\n>> \" Filter: (parentid = 300)\"\n>> \n>\n> The server thinks that every row of the table matches the WHERE clause.\n> That being the case, it's making the right choice to use a seqscan.\n> The question is why is the rows estimate so far off? 
Have you ANALYZEd\n> the table lately?\n>\n> \t\t\tregards, tom lane\n>\n>\n> \n\n-- \nChristo Du Preez\n\nSenior Software Engineer\nMecola IT\nMobile:\t +27 [0]83 326 8087\nSkype:\t christodupreez\nWebsite: http://www.locateandtrade.co.za\n\n", "msg_date": "Tue, 12 Jun 2007 18:59:16 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "Christo Du Preez <[email protected]> writes:\n> Yes, I have just about tried every combination of vacuum on the\n> database. Just to make 100% sure.\n\nWell, there's something mighty wacko about that rowcount estimate;\neven if you didn't have stats, the estimate for a simple equality\nconstraint oughtn't be 100% match.\n\nWhat do you get from SELECT * FROM pg_stats WHERE tablename = 'layertype'\non both systems?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2007 13:37:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference " }, { "msg_contents": "Fast:\n\n\"public\";\"layertype\";\"id\";0;4;-1;\"\";\"\";\"{1,442,508,575,641,708,774,840,907,973,1040}\";0.999995\n\"public\";\"layertype\";\"label\";0;14;-0.971429;\"{arch,bank,bench,canyon,gap,hill,hills,levee,mountain,mountains}\";\"{0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752}\";\"{\"abandoned\nairfield\",boatyard,corridor,forest(s),\"intermittent lake\",\"metro\nstation\",\"park headquarters\",reefs,\"section of bank\",swamp,zoo}\";0.107307\n\"public\";\"layertype\";\"parentid\";0.98797;4;2;\"{4,1}\";\"{0.00902256,0.00300752}\";\"\";-0.142857\n\"public\";\"layertype\";\"zorder\";0;4;9;\"{0}\";\"{0.98797}\";\"{1,2,3,4,5,6,7,8}\";0.928955\n\"public\";\"layertype\";\"description\";0.100752;74;-0.888722;\"{\"a branch of\na canyon or valley\",\"a low, isolated, rounded hill\",\"a near-level\nshallow, natural depression or basin, usually containing an intermittent\nlake, pond, or pool\",\"a relatively shallow, wide depression, the bottom\nof which usually has a continuous gradient\",\"a shore zone of coarse\nunconsolidated sediment that extends from the low-water line to the\nhighest reach of storm waves\",\"a surface-navigation hazard composed of\nconsolidated material\",\"a surface-navigation hazard composed of\nunconsolidated\nmaterial\"}\";\"{0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752}\";\"{\"a\nbarrier constructed across a stream to impound water\",\"a comparatively\ndepressed area on an icecap\",\"a facility for pumping oil through a\npipeline\",\"a large house, mansion, or chateau, on a large estate\",\"an\narea drained by a stream\",\"an elongate (tongue-like) extension of a flat\nsea floor into an adjacent higher feature\",\"a place where caravans stop\nfor rest\",\"a series of associated ridges or seamounts\",\"a sugar mill no\nlonger used as a sugar mill\",\"bowl-like hollows partially surrounded by\ncliffs or steep slopes at the head of a glaciated\nvalley\",\"well-delineated subdivisions of a large and complex 
positive\nfeature\"}\";-0.0178932\n\"public\";\"layertype\";\"code\";0.0135338;9;-1;\"\";\"\";\"{A.ADM1,H.HBRX,H.STMM,L.RGNL,S.BUSTN,S.HTL,S.PKLT,S.TRIG,T.MTS,U.GAPU,V.VINS}\";0.995628\n\nSlow:\n\n\"public\";\"layertype\";\"id\";0;4;-1;\"\";\"\";\"{1,437,504,571,638,705,772,839,906,973,1040}\";-0.839432\n\"public\";\"layertype\";\"label\";0;15;-0.965723;\"{arch,bank,bench,canyon,country,gap,hill,hills,levee,mountain}\";\"{0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063}\";\"{\"abandoned\nairfield\",boatyard,\"cotton plantation\",fork,\"intermittent oxbow\nlake\",\"military installation\",\"park headquarters\",reef,\"second-order\nadministrative division\",swamp,zoo}\";-0.0551452\n\"public\";\"layertype\";\"parentid\";0.00745157;4;7;\"{300}\";\"{0.976155}\";\"{1,1,4,5,8,12}\";0.92262\n\"public\";\"layertype\";\"zorder\";0;4;8;\"{0}\";\"{0.971684}\";\"{1,2,3,3,5,7,7}\";0.983028\n\"public\";\"layertype\";\"description\";0.110283;74;-0.879285;\"{\"a branch of\na canyon or valley\",\"a low, isolated, rounded hill\",\"a near-level\nshallow, natural depression or basin, usually containing an intermittent\nlake, pond, or pool\",\"a relatively shallow, wide depression, the bottom\nof which usually has a continuous gradient\",\"a shore zone of coarse\nunconsolidated sediment that extends from the low-water line to the\nhighest reach of storm waves\",\"a surface-navigation hazard composed of\nconsolidated material\",\"a surface-navigation hazard composed of\nunconsolidated\nmaterial\"}\";\"{0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063,0.00298063}\";\"{\"a\nbarrier constructed across a stream to impound water\",\"a comparatively\ndepressed area on an icecap\",\"a facility for pumping water from a major\nwell or through a pipeline\",\"a large inland body of standing water\",\"an\narea drained by a stream\",\"an embankment bordering a canyon, valley, or\nseachannel\",\"a place where diatomaceous earth is extracted\",\"a series of\nassociated ridges or seamounts\",\"a sugar mill no longer used as a sugar\nmill\",\"bowl-like hollows partially surrounded by cliffs or steep slopes\nat the head of a glaciated valley\",\"well-delineated subdivisions of a\nlarge and complex positive feature\"}\";0.0103485\n\"public\";\"layertype\";\"code\";0.023845;9;-1;\"\";\"\";\"{A.ADM1,H.INLT,H.STMM,L.RNGA,S.BUSTN,S.HUT,S.PKLT,S.TRIG,T.MTS,U.GAPU,V.VINS}\";-0.852108\n\nThis table contains identical data.\n\nThanx for your help Tom\n\n\n\n\nTom Lane wrote:\n> Christo Du Preez <[email protected]> writes:\n> \n>> Yes, I have just about tried every combination of vacuum on the\n>> database. 
Just to make 100% sure.\n>> \n>\n> Well, there's something mighty wacko about that rowcount estimate;\n> even if you didn't have stats, the estimate for a simple equality\n> constraint oughtn't be 100% match.\n>\n> What do you get from SELECT * FROM pg_stats WHERE tablename = 'layertype'\n> on both systems?\n>\n> \t\t\tregards, tom lane\n>\n>\n> \n\n-- \nChristo Du Preez\n\nSenior Software Engineer\nMecola IT\nMobile:\t +27 [0]83 326 8087\nSkype:\t christodupreez\nWebsite: http://www.locateandtrade.co.za\n\n", "msg_date": "Tue, 12 Jun 2007 19:53:00 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: test / live environment, major performance difference" }, { "msg_contents": "Christo Du Preez <[email protected]> writes:\n> Fast:\n> \"public\";\"layertype\";\"parentid\";0.98797;4;2;\"{4,1}\";\"{0.00902256,0.00300752}\";\"\";-0.142857\n\n> Slow:\n> \"public\";\"layertype\";\"parentid\";0.00745157;4;7;\"{300}\";\"{0.976155}\";\"{1,1,4,5,8,12}\";0.92262\n\nWell, those statistics are almost completely different, and what the\nslow one says is that parentid = 300 accounts for 97% of the table.\nSo that's why you get different plans. If that is not reflective of\nreality, then you have not ANALYZEd the table lately.\n\nMaybe it's a pilot-error problem, like not doing the ANALYZE as a user\nwith sufficient privileges? IIRC you have to be table owner, database\nowner, or superuser to ANALYZE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2007 14:42:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: test / live environment, major performance difference " } ]
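To recap the fix Tom points to above in runnable form (run as the table owner or a superuser; the table, column and constant are just the ones from the thread):

  ANALYZE VERBOSE layertype;
  SELECT attname, null_frac, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
   WHERE tablename = 'layertype' AND attname = 'parentid';
  EXPLAIN ANALYZE SELECT * FROM layertype WHERE parentid = 300;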
[ { "msg_contents": "Hi all,\n\n I have a server with 4GB of memory and I'm tweaking the PostgreSQL \nconfiguration. This server will be dedicated to run PostgreSQL so I'd \nlike to dedicate as much as possible RAM to it.\n\n I have dedicated 1GB to shared_buffers (shared_buffers=131072) but \nI'm not sure if this will be the maximum memory used by PostgreSQL or \nadditional to this it will take more memory. Because if shared_buffers \nis the maximum I could raise that value even more.\n\nCheers!\n-- \nArnau\n", "msg_date": "Tue, 12 Jun 2007 16:27:39 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "How much memory PostgreSQL is going to use?" }, { "msg_contents": "In response to Arnau <[email protected]>:\n\n> Hi all,\n> \n> I have a server with 4GB of memory and I'm tweaking the PostgreSQL \n> configuration. This server will be dedicated to run PostgreSQL so I'd \n> like to dedicate as much as possible RAM to it.\n> \n> I have dedicated 1GB to shared_buffers (shared_buffers=131072) but \n> I'm not sure if this will be the maximum memory used by PostgreSQL or \n> additional to this it will take more memory. Because if shared_buffers \n> is the maximum I could raise that value even more.\n\nIndividual backend processes will allocate more memory above shared_buffers\nfor processing individual queries. See work_mem.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Jun 2007 10:37:16 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much memory PostgreSQL is going to use?" } ]
[ { "msg_contents": "Hi there,\n\nUsing explicitly VACUUM command give me the opportunity to fine tune my \nVACUUM scheduling parameters, after I analyze the log generated by VACUUM \nVERBOSE.\n\nOn the other hand I'd like to use the auto-vacuum mechanism because of its \nfacilities. Unfortunately, after I made some initial estimations for \nautovacuum_naptime, and I set the specific data into pg_autovacuum table, I \nhave not a feedback from the auto-vacuum mechanism to check that it works \nwell or not. It would be nice to have some kind of log similar with the one \ngenerated by VACUUM VERBOSE. Is the auto-vacuum mechanism able to provide \nsuch a useful log ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Tue, 12 Jun 2007 18:54:12 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM vs auto-vacuum daemon" }, { "msg_contents": "Sabin Coanda wrote:\n> Hi there,\n> \n> Using explicitly VACUUM command give me the opportunity to fine tune my \n> VACUUM scheduling parameters, after I analyze the log generated by VACUUM \n> VERBOSE.\n> \n> On the other hand I'd like to use the auto-vacuum mechanism because of its \n> facilities. Unfortunately, after I made some initial estimations for \n> autovacuum_naptime, and I set the specific data into pg_autovacuum table, I \n> have not a feedback from the auto-vacuum mechanism to check that it works \n> well or not. It would be nice to have some kind of log similar with the one \n> generated by VACUUM VERBOSE. Is the auto-vacuum mechanism able to provide \n> such a useful log ?\n\nNo, sorry, autovacuum is not currently very good regarding reporting its\nactivities. It's a lot better in 8.3 but even there it doesn't report\nthe full VACUUM VERBOSE log. It looks like this:\n\nLOG: automatic vacuum of table \"alvherre.public.foo\": index scans: 0\n pages: 45 removed, 0 remain\n tuples: 10000 removed, 0 remain\n system usage: CPU 0.00s/0.00u sec elapsed 0.01 sec\nLOG: automatic analyze of table \"alvherre.public.foo\" system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 12 Jun 2007 12:08:02 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs auto-vacuum daemon" }, { "msg_contents": "In response to \"Sabin Coanda\" <[email protected]>:\n\n> Hi there,\n> \n> Using explicitly VACUUM command give me the opportunity to fine tune my \n> VACUUM scheduling parameters, after I analyze the log generated by VACUUM \n> VERBOSE.\n> \n> On the other hand I'd like to use the auto-vacuum mechanism because of its \n> facilities. Unfortunately, after I made some initial estimations for \n> autovacuum_naptime, and I set the specific data into pg_autovacuum table, I \n> have not a feedback from the auto-vacuum mechanism to check that it works \n> well or not. It would be nice to have some kind of log similar with the one \n> generated by VACUUM VERBOSE. Is the auto-vacuum mechanism able to provide \n> such a useful log ?\n\nDitto what Alvaro said.\n\nHowever, you can get some measure of tracking my running VACUUM VERBOSE\non a regular basis to see how well autovacuum is keeping up. 
There's\nno problem with running manual vacuum and autovacuum together, and you'll\nbe able to gather _some_ information about how well autovacuum is\nkeeping up.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Jun 2007 12:25:55 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs auto-vacuum daemon" }, { "msg_contents": "Hi Bill,\n\n...\n>\n> However, you can get some measure of tracking my running VACUUM VERBOSE\n> on a regular basis to see how well autovacuum is keeping up. There's\n> no problem with running manual vacuum and autovacuum together, and you'll\n> be able to gather _some_ information about how well autovacuum is\n> keeping up.\n\nWell, I think it is useful just if I am able to synchronize the autovacuum \nto run always after I run vacuum verbose. But I don't know how to do that. \nDo you ?\n\nSabin \n\n\n", "msg_date": "Tue, 12 Jun 2007 19:42:10 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM vs auto-vacuum daemon" }, { "msg_contents": "In response to \"Sabin Coanda\" <[email protected]>:\n\n> Hi Bill,\n> \n> ...\n> >\n> > However, you can get some measure of tracking my running VACUUM VERBOSE\n> > on a regular basis to see how well autovacuum is keeping up. There's\n> > no problem with running manual vacuum and autovacuum together, and you'll\n> > be able to gather _some_ information about how well autovacuum is\n> > keeping up.\n> \n> Well, I think it is useful just if I am able to synchronize the autovacuum \n> to run always after I run vacuum verbose. But I don't know how to do that. \n> Do you ?\n\nNo, I don't. Why would you want to do that?\n\nPersonally, I'd be more interested in whether autovacuum, running whenever\nit wants without me knowing, is keeping the table bloat under control.\n\nIf this were a concern for me (which it was during initial testing of\nour DB) I would run vacuum verbose once a day to watch sizes and what\nnot. After a while, I'd switch to once a week, then probably settle on\nonce a month to ensure nothing ever gets out of hand. Put it in a cron\njob and have the output mailed.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 13 Jun 2007 10:46:10 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs auto-vacuum daemon" }, { "msg_contents": "Bill Moran wrote:\n> In response to \"Sabin Coanda\" <[email protected]>:\n> \n>> Hi Bill,\n>>\n>> ...\n>>> However, you can get some measure of tracking my running VACUUM VERBOSE\n>>> on a regular basis to see how well autovacuum is keeping up. There's\n>>> no problem with running manual vacuum and autovacuum together, and you'll\n>>> be able to gather _some_ information about how well autovacuum is\n>>> keeping up.\n>> Well, I think it is useful just if I am able to synchronize the autovacuum \n>> to run always after I run vacuum verbose. But I don't know how to do that. \n>> Do you ?\n> \n> No, I don't. 
Why would you want to do that?\n> \n> Personally, I'd be more interested in whether autovacuum, running whenever\n> it wants without me knowing, is keeping the table bloat under control.\n\nanalyze verbose.\n\n> \n> If this were a concern for me (which it was during initial testing of\n> our DB) I would run vacuum verbose once a day to watch sizes and what\n> not. After a while, I'd switch to once a week, then probably settle on\n> once a month to ensure nothing ever gets out of hand. Put it in a cron\n> job and have the output mailed.\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 13 Jun 2007 08:08:56 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM vs auto-vacuum daemon" } ]
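A minimal sketch of the periodic check Bill describes (the table name is a placeholder, and the pg_stat columns shown assume an 8.2-or-later server):

  VACUUM VERBOSE my_big_table;   -- e.g. from a cron job, with the output mailed
  SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
   ORDER BY relname;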
[ { "msg_contents": "Hi All,\n\nIs there some kind of performance testing utility available for\npostgresql Something I can run after installing postgresql to help me\nidentify if my installation is optimal.\n\nI've been battling for days now trying to sort out performance issues\nand something like that may just identify issues I'm not even aware of\nor considering at this stage.\n\nRegards,\nChristo Du Preez\n\n", "msg_date": "Wed, 13 Jun 2007 12:25:20 +0200", "msg_from": "Christo Du Preez <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Testing Utility" }, { "msg_contents": "Christo Du Preez wrote:\n> Is there some kind of performance testing utility available for\n> postgresql Something I can run after installing postgresql to help me\n> identify if my installation is optimal.\n\nNot really. There's contrib/pgbench, but I wouldn't recommend using it \nfor that purpose since the optimal configuration depends on your \napplication, and pgbench is likely nothing like your application.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 13 Jun 2007 11:25:47 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Testing Utility" }, { "msg_contents": "[Christo Du Preez - Wed at 12:25:20PM +0200]\n> Is there some kind of performance testing utility available for\n> postgresql Something I can run after installing postgresql to help me\n> identify if my installation is optimal.\n> \n> I've been battling for days now trying to sort out performance issues\n> and something like that may just identify issues I'm not even aware of\n> or considering at this stage.\n\nIf you are really having performance problems, my general experience is\nthat you should look into the queries and usage patterns rather than the\nconfiguration. The server configuration can only give marginal\nbenefits, compared to query and usage tuning.\n\nIt often a good idea to turn on the stats collector, even if it slows\ndown postgres a bit.\n\nOne of the things the stats collector gives is the pg_stat_activity\nview, where you can find everything the server is working on exactly\nnow; checking up this view while you are actually experiencing\nproblems can give a lot of information.\n\nAnother thing I've noticed, is that the activity from our applications\noften can come in bursts; the server can be like 70% idle most of the\ntime, but when the server is struck by 4-5 IO-heavy queries at the\nsame time in addition to the steady flow of simple queries, it can\neasily get trashed. I've made up an algorithm to stop this from\nhappening, before running a transaction which is either heavy or not\nconsidered very important, the activity view will be scanned, and if\nthe server is already working with many queries, the application will\nsleep a bit and try again - and eventually return an error message\n(\"please try again later\") if it's doing interactive stuff.\n\nAnother thing that we've experienced - be aware of pending\ntransactions! It's important to commit or roll back every transaction\nwithin reasonable time - if (i.e. due to a programming mistake or a\nDBA starting a transaction in psql) a transaction is pending for\nseveral hours or even ays, it is really very bad for the performance.\n\nAnother experience we have is that autovacuum can be quite naughty\nwhen one has some really huge tables. 
This can be tweaked by\ndisabling autovacuum at those tables, and running a nightly vacuum\ninstead.\n\nApologies for not replying to your question, though ;-)\n\n", "msg_date": "Wed, 13 Jun 2007 12:44:05 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Testing Utility" }, { "msg_contents": "\nOn Jun 13, 2007, at 6:25 AM, Christo Du Preez wrote:\n\n> Is there some kind of performance testing utility available for\n> postgresql Something I can run after installing postgresql to help me\n> identify if my installation is optimal.\n\nYour own app is the only one that will give you meaningful \nresults... You need to either run your app against it or simulate \nyour applications' access patterns.\n\nAny other load will lead to different optimizations.\n\n", "msg_date": "Wed, 13 Jun 2007 15:13:19 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Testing Utility" } ]
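For the kind of live monitoring Tobias describes, a query along these lines shows what the server is working on right now (the stats collector and stats_command_string must be enabled; the column names are those of the 8.1/8.2 era):

  SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
   ORDER BY query_start;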
[ { "msg_contents": "Hi,\nI've a table with 300 000 records and I'm trying to do a search:\n\nSELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND\n(l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND\n(prenm ILIKE 'Street')\n\nIt performs in 2 seconds in a dual Xeon 2.4mhz with 2Gb of RAM.\nI'm using Postgresql 8.1 on ubuntu.\nI've indexes on l_pc, r_pc, l_struc,r_struc and prenm (all btrees)\nWhat I'm doing wrong to have such a slow query?\n\nThanks,\nNuno Mariz\n", "msg_date": "Wed, 13 Jun 2007 12:41:27 +0100", "msg_from": "\"Tyler Durden\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize slow query" }, { "msg_contents": "Le mercredi 13 juin 2007, Tyler Durden a écrit :\n> Hi,\n> I've a table with 300 000 records and I'm trying to do a search:\n>\n> SELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND\n> (l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND\n> (prenm ILIKE 'Street')\n>\n> It performs in 2 seconds in a dual Xeon 2.4mhz with 2Gb of RAM.\n> I'm using Postgresql 8.1 on ubuntu.\n> I've indexes on l_pc, r_pc, l_struc,r_struc and prenm (all btrees)\n> What I'm doing wrong to have such a slow query?\nCould you add 'explain analyze' to your post ? \nAnd how much content have the text fields ? (isn't tsearch2 an option for \nyou ?)\n>\n> Thanks,\n> Nuno Mariz\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Wed, 13 Jun 2007 13:47:23 +0200", "msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize slow query" }, { "msg_contents": "On 6/13/07, Tyler Durden <[email protected]> wrote:\n> Hi,\n> I've a table with 300 000 records and I'm trying to do a search:\n>\n> SELECT * FROM addresses WHERE address ILIKE '%Jean Paul%' AND\n> (l_pc='4250' or r_pc='4250') AND (l_struc='O' or r_struc='O') AND\n> (prenm ILIKE 'Street')\n>\n> It performs in 2 seconds in a dual Xeon 2.4mhz with 2Gb of RAM.\n> I'm using Postgresql 8.1 on ubuntu.\n> I've indexes on l_pc, r_pc, l_struc,r_struc and prenm (all btrees)\n> What I'm doing wrong to have such a slow query?\n>\n> Thanks,\n> Nuno Mariz\n>\n\nMy bad!\nSorry, I've missed an index on l_struc and r_struc.\n\nThanks, anyway.\n", "msg_date": "Wed, 13 Jun 2007 12:49:07 +0100", "msg_from": "\"Tyler Durden\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize slow query" } ]
[ { "msg_contents": "Hi,\n\nI'm doing WAL shipping to do a warm standby system (8.2.4).\n\nThe problem is that the pg_xlog dir on the master just gets bigger and\nbigger (never seems to truncate) and the corresponding archive directory on\nthe slave also gets bigger and bigger. Is there a way to moderate this?\n\nThanks\n\nMike\n\nHi,I'm doing WAL shipping to do a warm standby system (8.2.4).The problem is that the pg_xlog dir on the master just gets bigger and bigger (never seems to truncate) and the corresponding archive directory on the slave also gets bigger and bigger. Is there a way to moderate this? \nThanksMike", "msg_date": "Wed, 13 Jun 2007 09:35:27 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "WAL shipping and ever expanding pg_xlog" }, { "msg_contents": "OK....it looks like the slave machine has run out of space and that caused\nthe xlog files to pile up on the master.\n\nStill...how do I prevent such all of the shipped WAL segments from remaining\non the slave machine? Do I need to retain every single one? Can they be\nsafely removed after the slave machine has restored the particular segment?\n\nThanks\n\nMike\n\n\nOn 6/13/07, Michael Dengler <[email protected]> wrote:\n>\n> Hi,\n>\n> I'm doing WAL shipping to do a warm standby system (8.2.4).\n>\n> The problem is that the pg_xlog dir on the master just gets bigger and\n> bigger (never seems to truncate) and the corresponding archive directory on\n> the slave also gets bigger and bigger. Is there a way to moderate this?\n>\n> Thanks\n>\n> Mike\n>\n>\n\nOK....it looks like the slave machine has run out of space and that caused the xlog files to pile up on the master.Still...how do I prevent such all of the shipped WAL segments from remaining on the slave machine? Do I need to retain every single one? Can they be safely removed after the slave machine has restored the particular segment?\nThanksMikeOn 6/13/07, Michael Dengler <[email protected]> wrote:\nHi,I'm doing WAL shipping to do a warm standby system (8.2.4).The problem is that the pg_xlog dir on the master just gets bigger and bigger (never seems to truncate) and the corresponding archive directory on the slave also gets bigger and bigger. Is there a way to moderate this? \nThanksMike", "msg_date": "Wed, 13 Jun 2007 09:44:14 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL shipping and ever expanding pg_xlog" }, { "msg_contents": "On 6/13/07, Michael Dengler <[email protected]> wrote:\n> OK....it looks like the slave machine has run out of space and that caused\n> the xlog files to pile up on the master.\n>\n> Still...how do I prevent such all of the shipped WAL segments from remaining\n> on the slave machine? Do I need to retain every single one? Can they be\n> safely removed after the slave machine has restored the particular segment?\n\nAre you using the pg_standy utility? 
It has options to control this...\n\nmerlin\n", "msg_date": "Wed, 13 Jun 2007 11:29:19 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL shipping and ever expanding pg_xlog" }, { "msg_contents": "Doug Knight just informed me about the pg_standby module.\n\nWorks like a charm!\n\nThanks\n\nMike\n\n\nOn 6/13/07, Merlin Moncure <[email protected]> wrote:\n>\n> On 6/13/07, Michael Dengler <[email protected]> wrote:\n> > OK....it looks like the slave machine has run out of space and that\n> caused\n> > the xlog files to pile up on the master.\n> >\n> > Still...how do I prevent such all of the shipped WAL segments from\n> remaining\n> > on the slave machine? Do I need to retain every single one? Can they be\n> > safely removed after the slave machine has restored the particular\n> segment?\n>\n> Are you using the pg_standy utility? It has options to control this...\n>\n> merlin\n>\n\nDoug Knight just informed me about the pg_standby module.Works like a charm!ThanksMikeOn 6/13/07, Merlin Moncure <\[email protected]> wrote:On 6/13/07, Michael Dengler <\[email protected]> wrote:> OK....it looks like the slave machine has run out of space and that caused> the xlog files to pile up on the master.>> Still...how do I prevent such all of the shipped WAL segments from remaining\n> on the slave machine? Do I need to retain every single one? Can they be> safely removed after the slave machine has restored the particular segment?Are you using the pg_standy utility?  It has options to control this...\nmerlin", "msg_date": "Wed, 13 Jun 2007 12:12:39 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL shipping and ever expanding pg_xlog" } ]
[ { "msg_contents": "\n \nHello everybody,\n\nWe're using PostgreSQL 8.1.0 on AIX 5.3 through NFS (a Netapp Filer hosts the database files), and we're encoutering somes issues with vaccums. PostgreSQL binaries are built with xlc 6 (C for AIX Compiler 6.0.0.6) on AIX 5.2 (yes, I know, building on 5.2 and running on 5.3 is not the best way to avoid bugs...).\n\n\nWe have strong performance constraints with this database, so we planned vacuums with a crontab :\n- Frequent vacuum analyze on some heavily-updated tables (few rows, but a lot of insert/update/delete). The frequency varies between 5 and 15 minutes.\n- A nightly (not FULL) vacuum on the entire database.\nWe don't use autovacuum or FULL vacuum, because the high havailability needed for the database. We prefer to keep it under control.\n\n\nSince some weeks, the amount of data hosted by the database grows, and, some nights, the database vacuum seems to \"freeze\" during his execution. In verbose mode, the logs show that the vacuum clean up a table (not always the same table), and... no more news. The system shows a vacuum process, which seems to be sleeping (no CPU used, no memory consumption...). In addition, the logs of our application show that database transactions seems to be slower. \n\nFor some internal reasons, the only way for us to workaround this problem is to shutdown of the application and the database. After a full restart, things are ok. \n\n\nSome questions :\n\n1) During the nightly database vacuum, some vacuums run concurrently (vacuums on heavily-updated tables). Can this concurrency cause some deadlocks ? We're planning to patch our shell scripts to avoid this concurrency.\n\n\n2) I believed that the poor performances during the vacuum freeze were due to the obsolete data statistics. But after a full restart of the dabatase, performances are good. Does PostgreSQL rebuild his statistics during startup ? \n\n\n3) Can we explain the freeze with a bad database configuration ? For instance, postgreSQL running out of connections, or whatever, causing the vacuum process to wait for free ressources ?\n\n\n4) This morning, just before the database vacuum freeze, the logs show this error :\n<2007-06-13 03:20:35 DFT%>ERROR: could not open relation 16391/16394/107937: A system call received an interrupt.\n<2007-06-13 03:20:35 DFT%>CONTEXT: writing block 2 of relation 16391/16394/107937\n<2007-06-13 03:20:40 DFT%>LOG: could not fsync segment 0 of relation 16392/16394/107925: A system call received an interrupt.\n<2007-06-13 03:20:40 DFT%>ERROR: storage sync failed on magnetic disk: A system call received an interrupt.\n\nThis is the first time we're encountering this error. Can it be a cause of the vacuum freeze ?\n \n\nRegards,\n\n\n-- \n Loic Restoux\n Capgemini Telecom & Media / ITDR\n tel : 02 99 27 82 30\n e-mail : [email protected] \n\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. 
If you receive this message in error, please notify the sender immediately and delete all copies of this message.\n\n", "msg_date": "Wed, 13 Jun 2007 18:33:53 +0200", "msg_from": "=?iso-8859-1?Q?RESTOUX=2C_Lo=EFc?= <[email protected]>", "msg_from_op": true, "msg_subject": "[PG 8.1.0 / AIX 5.3] Vacuum processes freezing " }, { "msg_contents": "=?iso-8859-1?Q?RESTOUX=2C_Lo=EFc?= <[email protected]> writes:\n> Since some weeks, the amount of data hosted by the database grows, and, som=\n> e nights, the database vacuum seems to \"freeze\" during his execution. In v=\n> erbose mode, the logs show that the vacuum clean up a table (not always the=\n> same table), and... no more news. The system shows a vacuum process, which=\n> seems to be sleeping (no CPU used, no memory consumption...).\n\nHave you looked into pg_locks to see if it's blocked on someone else's lock?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jun 2007 14:06:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PG 8.1.0 / AIX 5.3] Vacuum processes freezing " }, { "msg_contents": "On Wed, 2007-06-13 at 18:33 +0200, RESTOUX, Loïc wrote:\n\n> 2) I believed that the poor performances during the vacuum freeze were due to the obsolete data statistics. But after a full restart of the dabatase, performances are good. Does PostgreSQL rebuild his statistics during startup ? \n\nYou probably don't need to run VACUUM FREEZE.\n\nVACUUM FREEZE will thrash the disks much more than normal VACUUM. We're\nimproving that somewhat in 8.3, but the basic issue is that VACUUM\nFREEZE cleans out more dead rows and so will dirty more data blocks.\n\nAre you concurrently running DDL, Truncate or CLUSTER? That will\ninterfere with the operation of VACUUM.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Jun 2007 16:23:23 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PG 8.1.0 / AIX 5.3] Vacuum processes freezing" }, { "msg_contents": "\n\nHi Tom, thanks for your reply,\n\n\n> Have you looked into pg_locks to see if it's blocked on \n> someone else's lock?\n\nYes, we looked into pg_locks and the vacuumdb process wasn't blocked. The table showed \nfour locks for vacuum, all with grant=true. \n\nIn fact, we found that a similar bug has been fixed in 8.1.1 :\n> # Fix bgwriter problems after recovering from errors (Tom)\n> The background writer was found to leak buffer pins after write errors. \n> While not fatal in itself, this might lead to mysterious blockages of later VACUUM commands.\n( http://www.postgresql.org/docs/8.1/static/release-8-1-1.html )\n\nCan anyone confirm that the symptoms of this bug correspond to our problem ? \nWe saw some logs like :\n<2007-06-11 12:44:04 DFT%>LOG: could not fsync segment 0 of relation 16391/16394/107912: \nA system call received an interrupt.\n<2007-06-11 12:44:04 DFT%>ERROR: storage sync failed on magnetic disk: A system call \nreceived an interrupt.\n\nOr :\n<2007-06-16 12:25:45 DFT%>ERROR: could not open relation 16393/16394/107926: A system \ncall received an interrupt.\n<2007-06-16 12:25:45 DFT%>CONTEXT: writing block 3 of relation 16393/16394/107926\n\nBut we can't see a relation between the fsync errors and the vacuum blockages. After a fsync error, \nsometimes the vacuum works fine, sometimes it hangs. Is there any way to reproduce manually this \nbug, in order to confirm that our problem is caused by this bug, and that it has been fixed \nin the 8.1.9 for sure ? 
How can I find the patch for this bug in the source code ?\n\n\nRegards,\n\n-- \n Loic Restoux\n Capgemini Telecom & Media / ITDR\n tel : 02 99 27 82 30\n e-mail : [email protected] \n \n\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.\n\n", "msg_date": "Wed, 20 Jun 2007 17:27:52 +0200", "msg_from": "=?iso-8859-1?Q?RESTOUX=2C_Lo=EFc?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PG 8.1.0 / AIX 5.3] Vacuum processes freezing " } ]
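Tom's pg_locks suggestion can be turned into a small diagnostic query to keep handy for the next time a VACUUM appears to hang. The sketch below is written against the 8.1 catalogs used in this thread (pg_stat_activity still exposes procpid and current_query there, and current_query is only populated when stats_command_string is on); it lists every backend waiting on a lock, the relation involved, and what that backend was running:

-- backends stuck waiting on a lock (8.1-style column names)
SELECT l.pid,
       l.relation::regclass AS locked_relation,
       l.mode,
       a.usename,
       a.query_start,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;

If the frozen vacuumdb backend never shows up in this list, a lock conflict is unlikely, and the interrupted open/fsync errors quoted above together with the bgwriter fix in 8.1.1 become the more plausible explanation.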
[ { "msg_contents": "I am trying to update a field in one table with a\nfield from another table like:\n\nupdate co set\nfirest_id=fco.firest_id,fire_dist=fco.fire_dist from\nfco where co.xno=fco.xno\n\nTable co has 384964 records\nTable fco has 383654 records\n\nThe xno fields in both tables are indexed but they\ndon't seem to be used. I would expect the update to\nbe faster than 6.3 minutes or is that expectation\nwrong? Here is the results of Explain Analyze:\n\n\"Hash Join (cost=15590.22..172167.03 rows=383654\nwidth=215) (actual time=1473.297..43032.178\nrows=383654 loops=1)\"\n\" Hash Cond: (co.xno = fco.xno)\"\n\" -> Seq Scan on co (cost=0.00..123712.64\nrows=384964 width=195) (actual time=440.196..37366.682\nrows=384964 loops=1)\"\n\" -> Hash (cost=7422.54..7422.54 rows=383654\nwidth=34) (actual time=995.651..995.651 rows=383654\nloops=1)\"\n\" -> Seq Scan on fco (cost=0.00..7422.54\nrows=383654 width=34) (actual time=4.641..509.947\nrows=383654 loops=1)\"\n\"Total runtime: 378258.707 ms\"\n\nThanks, \n\nFred\n\n\n \n____________________________________________________________________________________\nLooking for a deal? Find great prices on flights and hotels with Yahoo! FareChase.\nhttp://farechase.yahoo.com/\n", "msg_date": "Wed, 13 Jun 2007 10:13:50 -0700 (PDT)", "msg_from": "Mark Makarowsky <[email protected]>", "msg_from_op": true, "msg_subject": "Update table performance problem" }, { "msg_contents": "Mark Makarowsky <[email protected]> writes:\n> \"Hash Join (cost=15590.22..172167.03 rows=383654\n> width=215) (actual time=1473.297..43032.178\n> rows=383654 loops=1)\"\n> \" Hash Cond: (co.xno = fco.xno)\"\n> \" -> Seq Scan on co (cost=0.00..123712.64\n> rows=384964 width=195) (actual time=440.196..37366.682\n> rows=384964 loops=1)\"\n> \" -> Hash (cost=7422.54..7422.54 rows=383654\n> width=34) (actual time=995.651..995.651 rows=383654\n> loops=1)\"\n> \" -> Seq Scan on fco (cost=0.00..7422.54\n> rows=383654 width=34) (actual time=4.641..509.947\n> rows=383654 loops=1)\"\n> \"Total runtime: 378258.707 ms\"\n\nAccording to the top line, the actual scanning and joining took 43 sec;\nso the rest of the time went somewhere else. Possibilities include\nthe actual data insertion (wouldn't take 5 minutes), index updates\n(what indexes are on this table?), constraint checks, triggers, ...\n\nYou failed to mention which PG version this is. 8.1 and up would show\ntime spent in triggers separately, so we could eliminate that\npossibility if it's 8.1 or 8.2. My suspicion without any data is\na lot of indexes on the table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jun 2007 14:15:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update table performance problem " }, { "msg_contents": "The version is:\n\n\"PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC\ngcc.exe (GCC) 3.4.2 (mingw-special)\"\n\nHere is the table definition for co and fco. There\naren't any rules constraints, triggers, etc. on the\ntables. Only an index on each table for the xno\nfield. 
Any other thoughts?\n\nCREATE TABLE co\n(\n xno character(10),\n longitude double precision,\n latitude double precision,\n firest_id character(8),\n fire_dist double precision,\n polst_id character(8),\n pol_dist double precision,\n fnew_id character(10),\n fnew_dist double precision,\n pnew_id character(10),\n pnew_dist double precision,\n seihazm020 bigint,\n acc_val integer,\n valley integer,\n flood_id bigint,\n chance character varying\n) \nWITHOUT OIDS;\nALTER TABLE co OWNER TO postgres;\n-- Index: co_xno\n\n-- DROP INDEX co_xno;\n\nCREATE UNIQUE INDEX co_xno\n ON co\n USING btree\n (xno);\n\nCREATE TABLE fco\n(\n firest_id character(8),\n fire_dist double precision,\n xno character(10)\n) \nWITHOUT OIDS;\nALTER TABLE fco OWNER TO postgres;\n\n-- Index: fco_xno\n\n-- DROP INDEX fco_xno;\n\nCREATE UNIQUE INDEX fco_xno\n ON fco\n USING btree\n (xno);\n\n--- Tom Lane <[email protected]> wrote:\n\n> Mark Makarowsky <[email protected]>\n> writes:\n> > \"Hash Join (cost=15590.22..172167.03 rows=383654\n> > width=215) (actual time=1473.297..43032.178\n> > rows=383654 loops=1)\"\n> > \" Hash Cond: (co.xno = fco.xno)\"\n> > \" -> Seq Scan on co (cost=0.00..123712.64\n> > rows=384964 width=195) (actual\n> time=440.196..37366.682\n> > rows=384964 loops=1)\"\n> > \" -> Hash (cost=7422.54..7422.54 rows=383654\n> > width=34) (actual time=995.651..995.651\n> rows=383654\n> > loops=1)\"\n> > \" -> Seq Scan on fco (cost=0.00..7422.54\n> > rows=383654 width=34) (actual time=4.641..509.947\n> > rows=383654 loops=1)\"\n> > \"Total runtime: 378258.707 ms\"\n> \n> According to the top line, the actual scanning and\n> joining took 43 sec;\n> so the rest of the time went somewhere else. \n> Possibilities include\n> the actual data insertion (wouldn't take 5 minutes),\n> index updates\n> (what indexes are on this table?), constraint\n> checks, triggers, ...\n> \n> You failed to mention which PG version this is. 8.1\n> and up would show\n> time spent in triggers separately, so we could\n> eliminate that\n> possibility if it's 8.1 or 8.2. My suspicion\n> without any data is\n> a lot of indexes on the table.\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n \n____________________________________________________________________________________\nGot a little couch potato? \nCheck out fun summer activities for kids.\nhttp://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz \n", "msg_date": "Wed, 13 Jun 2007 12:47:36 -0700 (PDT)", "msg_from": "Mark Makarowsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update table performance problem " } ]
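Since the UPDATE rewrites essentially every row of a fairly wide table, one workaround sometimes suggested on this list is to build the merged result into a fresh table and swap it in, instead of updating in place; that writes the heap sequentially and avoids leaving a few hundred thousand dead tuples plus the corresponding index churn behind. A rough sketch against the definitions above -- untested, and it assumes nothing else is writing to co while it runs:

BEGIN;
CREATE TABLE co_new AS
SELECT co.xno,
       co.longitude,
       co.latitude,
       COALESCE(fco.firest_id, co.firest_id) AS firest_id,  -- rows without a match keep their old values
       COALESCE(fco.fire_dist, co.fire_dist) AS fire_dist,
       co.polst_id, co.pol_dist,
       co.fnew_id, co.fnew_dist,
       co.pnew_id, co.pnew_dist,
       co.seihazm020, co.acc_val, co.valley, co.flood_id, co.chance
FROM co LEFT JOIN fco ON fco.xno = co.xno;
DROP TABLE co;
ALTER TABLE co_new RENAME TO co;
CREATE UNIQUE INDEX co_xno ON co USING btree (xno);
COMMIT;

Whether this beats the in-place UPDATE depends on how much of the six-plus minutes really goes to heap and index maintenance rather than the join itself, which is what the EXPLAIN ANALYZE output above suggests.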
[ { "msg_contents": "Hi there,\n\nI'd like to understand completely the report generated by VACUUM VERBOSE.\nPlease tell me where is it documented ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Thu, 14 Jun 2007 17:31:26 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Parsing VACUUM VERBOSE" }, { "msg_contents": "Sabin,\n\nOn 6/14/07, Sabin Coanda <[email protected]> wrote:\n> I'd like to understand completely the report generated by VACUUM VERBOSE.\n> Please tell me where is it documented ?\n\nYou can take a look to what I did for pgFouine:\nhttp://pgfouine.projects.postgresql.org/vacuum.html\n\n--\nGuillaume\n", "msg_date": "Thu, 14 Jun 2007 16:52:17 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "Le jeudi 14 juin 2007, Sabin Coanda a écrit :\n> I'd like to understand completely the report generated by VACUUM VERBOSE.\n> Please tell me where is it documented ?\n\nTry the pgfouine reporting tool :\n http://pgfouine.projects.postgresql.org/\n http://pgfouine.projects.postgresql.org/reports/sample_vacuum.html\n\nIt's easier to understand the vacuum verbose output from the generated report.\n-- \ndim", "msg_date": "Thu, 14 Jun 2007 16:53:31 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "Hi Guillaume,\n\nVery interesting !\n\nMerci beaucoup,\nSabin \n\n\n", "msg_date": "Thu, 14 Jun 2007 18:27:52 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/14/07, Dimitri Fontaine <[email protected]> wrote:\n>\n> Le jeudi 14 juin 2007, Sabin Coanda a écrit:\n> > I'd like to understand completely the report generated by VACUUM\n> VERBOSE.\n> > Please tell me where is it documented ?\n>\n> Try the pgfouine reporting tool :\n> http://pgfouine.projects.postgresql.org/\n> http://pgfouine.projects.postgresql.org/reports/sample_vacuum.html\n>\n> It's easier to understand the vacuum verbose output from the generated\n> report.\n> --\n> dim\n>\n>\nCan anyone share what value they have set log_min_duration_statement to?\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/14/07, Dimitri Fontaine <[email protected]> wrote:\nLe jeudi 14 juin 2007, Sabin Coanda a écrit:> I'd like to understand completely the report generated by VACUUM VERBOSE.> Please tell me where is it documented ?Try the pgfouine reporting tool :\n  http://pgfouine.projects.postgresql.org/  http://pgfouine.projects.postgresql.org/reports/sample_vacuum.html\nIt's easier to understand the vacuum verbose output from the generated report.--dimCan anyone share what value they have set log_min_duration_statement to?\n-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Thu, 14 Jun 2007 08:30:21 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/14/07, Y Sidhu <[email protected]> wrote:\n> Can anyone share what value they have set log_min_duration_statement to?\n\nIt's OT but we use different values for different databases and needs.\n\nOn a very loaded database with a lot of complex queries (lots of join\non big tables, proximity queries, full text queries), we use 100 ms.\nIt logs ~ 300 000 queries. 
It allows us to detect big regressions or\nnew queries which are very slow.\n\nOn another database where I want to track transaction leaks, I'm\nforced to put it to 0ms.\n\nBasically, the answer is: set it to the lowest value you can afford\nwithout impacting too much your performances (and if you use syslog,\nuse async I/O or send your log to the network).\n\n--\nGuillaume\n", "msg_date": "Thu, 14 Jun 2007 23:01:54 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/14/07, Guillaume Smet <[email protected]> wrote:\n>\n> On 6/14/07, Y Sidhu <[email protected]> wrote:\n> > Can anyone share what value they have set log_min_duration_statement to?\n>\n> It's OT but we use different values for different databases and needs.\n>\n> On a very loaded database with a lot of complex queries (lots of join\n> on big tables, proximity queries, full text queries), we use 100 ms.\n> It logs ~ 300 000 queries. It allows us to detect big regressions or\n> new queries which are very slow.\n>\n> On another database where I want to track transaction leaks, I'm\n> forced to put it to 0ms.\n>\n> Basically, the answer is: set it to the lowest value you can afford\n> without impacting too much your performances (and if you use syslog,\n> use async I/O or send your log to the network).\n>\n> --\n> Guillaume\n>\n\nI am trying to answer the question of how to tell if the cleanup of an index\nmay be locked by a long transaction. And in the bigger context, why vacuums\nare taking long? What triggers them? I came across the following query which\nshows one table 'connect_tbl' with high \"heap hits\" and \"low heap buffer %\"\nNow, 'heap' seems to be a memory construct. Any light shedding is\nappreciated.\n\nmydb=# SELECT\nmydb-# 'HEAP:'||relname AS table_name,\nmydb-# (heap_blks_read+heap_blks_hit) AS heap_hits,\n ROUND(((heap_blks_hit)::NUMERIC/(heap_blks_read+heap_blks_hit)*100),\n2)\nmydb-# ROUND(((heap_blks_hit)::NUMERIC/(heap_blks_read+heap_blks_hit)*100),\n2)\nmydb-# AS heap_buffer_percentage\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(heap_blks_read+heap_blks_hit)>0\nmydb-# UNION\nmydb-# SELECT\nmydb-# 'TOAST:'||relname,\nmydb-# (toast_blks_read+toast_blks_hit),\nmydb-#\nROUND(((toast_blks_hit)::NUMERIC/(toast_blks_read+toast_blks_hit)*100), 2)\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(toast_blks_read+toast_blks_hit)>0\nmydb-# UNION\nmydb-# SELECT\nmydb-# 'INDEX:'||relname,\nmydb-# (idx_blks_read+idx_blks_hit),\nmydb-# ROUND(((idx_blks_hit)::NUMERIC/(idx_blks_read+idx_blks_hit)*100), 2)\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(idx_blks_read+idx_blks_hit)>0;\n table_name | heap_hits | heap_buffer_percentage\n------------------------------------+--------------+----------------------------------\n HEAP:connect_tbl | 890878 | 43.18\n HEAP:tblbound_tbl | 43123 | 13.80\n HEAP:tblcruel_tbl | 225819 | 6.98\n INDEX:connect_tbl | 287224 | 79.82\n INDEX:tblbound_tbl | 81640 | 90.28\n INDEX:tblcruel_tbl | 253014 | 50.73\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/14/07, Guillaume Smet <[email protected]> wrote:\nOn 6/14/07, Y Sidhu <[email protected]> wrote:> Can anyone share what value they have set log_min_duration_statement to?It's OT but we use different values for different databases and needs.\nOn a very loaded database with a lot of complex queries (lots of joinon big tables, proximity queries, full text queries), we use 100 ms.It logs ~ 300 000 queries. 
It allows us to detect big regressions or\nnew queries which are very slow.On another database where I want to track transaction leaks, I'mforced to put it to 0ms.Basically, the answer is: set it to the lowest value you can affordwithout impacting too much your performances (and if you use syslog,\nuse async I/O or send your log to the network).--GuillaumeI\nam trying to answer the question of how to tell if the cleanup of an\nindex may be locked by a long transaction. And in the bigger context,\nwhy vacuums are taking long? What triggers them? I came across the\nfollowing query which shows one table 'connect_tbl'  with high\n\"heap hits\" and \"low heap buffer %\" Now, 'heap' seems to be a memory\nconstruct. Any light shedding is appreciated.\n\nmydb=# SELECT\nmydb-# 'HEAP:'||relname AS table_name,\nmydb-# (heap_blks_read+heap_blks_hit) AS heap_hits,\n        ROUND(((heap_blks_hit)::NUMERIC/(heap_blks_read+heap_blks_hit)*100), 2)\nmydb-# ROUND(((heap_blks_hit)::NUMERIC/(heap_blks_read+heap_blks_hit)*100), 2)\nmydb-# AS heap_buffer_percentage\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(heap_blks_read+heap_blks_hit)>0\nmydb-# UNION\nmydb-# SELECT\nmydb-# 'TOAST:'||relname,\nmydb-# (toast_blks_read+toast_blks_hit),\nmydb-# ROUND(((toast_blks_hit)::NUMERIC/(toast_blks_read+toast_blks_hit)*100), 2)\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(toast_blks_read+toast_blks_hit)>0\nmydb-# UNION\nmydb-# SELECT\nmydb-# 'INDEX:'||relname,\nmydb-# (idx_blks_read+idx_blks_hit),\nmydb-# ROUND(((idx_blks_hit)::NUMERIC/(idx_blks_read+idx_blks_hit)*100), 2)\nmydb-# FROM pg_statio_user_tables\nmydb-# WHERE(idx_blks_read+idx_blks_hit)>0;\n       \ntable_name           \n| heap_hits | heap_buffer_percentage\n------------------------------------+--------------+----------------------------------\n HEAP:connect_tbl        \n|    890878\n|                 \n43.18\n HEAP:tblbound_tbl        \n|     43123\n|                 \n13.80\n HEAP:tblcruel_tbl         \n|    225819\n|                  \n6.98\n INDEX:connect_tbl        \n|    287224\n|                 \n79.82\n INDEX:tblbound_tbl        \n|     81640\n|                 \n90.28\n INDEX:tblcruel_tbl         \n|    253014\n|                 \n50.73-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Thu, 14 Jun 2007 14:34:13 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "\n\"\"Guillaume Smet\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> Sabin,\n>\n> On 6/14/07, Sabin Coanda <[email protected]> wrote:\n>> I'd like to understand completely the report generated by VACUUM VERBOSE.\n>> Please tell me where is it documented ?\n>\n> You can take a look to what I did for pgFouine:\n> http://pgfouine.projects.postgresql.org/vacuum.html\n>\n\nHi Guillaume,\n\nI tried pgFouine.php app on a sample log file but it reports me some errors. 
\nCould you give me some startup support, please ?\nI attach the log here to find what's wrong.\n\nRegards,\nSabin \n\n\nbegin 666 postgresql-2007-06-18_160048.log\nM,C P-RTP-BTQ.\" Q-CHP,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @\nM9&%T86)A<V4@<WES=&5M('=A<R!S:'5T(&1O=VX@870@,C P-RTP-BTQ.\" Q\nM-CHP,#HT-R!%15-4\"C(P,#<M,#8M,3@@,38Z,# Z-#D@14535\"!;,3DP-SE=\nM.B!;+3%=($Q/1SH@(&-H96-K<&]I;G0@<F5C;W)D(&ES(&%T(#,W+T$S-SDX\nM-C8T\"C(P,#<M,#8M,3@@,38Z,# Z-#D@14535\"!;,3DP-SE=.B!;+3%=($Q/\nM1SH@(')E9&\\@<F5C;W)D(&ES(&%T(#,W+T$S-SDX-C8T.R!U;F1O(')E8V]R\nM9\"!I<R!A=\" P+S [('-H=71D;W=N(%12544*,C P-RTP-BTQ.\" Q-CHP,#HT\nM.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!T<F%N<V%C=&EO;B!)\nM1#H@,\"\\W.#@V,C$[(&YE>'0@3TE$.B V-C,W.#<*,C P-RTP-BTQ.\" Q-CHP\nM,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!-=6QT:5AA8W1)\nM9#H@,S8[(&YE>'0@375L=&E886-T3V9F<V5T.B W,0HR,# W+3 V+3$X(#$V\nM.C P.C0Y($5%4U0@6S$Y,#<Y73H@6RTQ72!,3T<Z(\"!D871A8F%S92!S>7-T\nM96T@:7,@<F5A9'D*,C P-RTP-BTQ.\" Q-CHP,3HR-R!%15-4(%LQ.3 Y-UTZ\nM(%LM,5T@3$]'.B @9'5R871I;VXZ(#@Q+C4P.\"!M<R @<W1A=&5M96YT.B!3\nM150@1&%T95-T>6QE/4E33SM314Q%0U0@;VED+\"!P9U]E;F-O9&EN9U]T;U]C\nM:&%R*&5N8V]D:6YG*2!!4R!E;F-O9&EN9RP@9&%T;&%S='-Y<V]I9 H)(\"!&\nM4D]-('!G7V1A=&%B87-E(%=(15)%(&]I9\" ](#0V.30R,PHR,# W+3 V+3$X\nM(#$V.C Q.C(W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@\nM,\"XP.3$@;7,@('-T871E;65N=#H@<V5T(&-L:65N=%]E;F-O9&EN9R!T;R G\nM54Y)0T]$12<*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM\nM,5T@3$]'.B @9'5R871I;VXZ(#DX+C8S-R!M<R @<W1A=&5M96YT.B!314Q%\nM0U0@*B!&4D]-(\")T8D-O;&QE8W1I;VYS(@HR,# W+3 V+3$X(#$V.C Q.C,W\nM($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;[email protected](@;7,@\nM('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E*&]I9\"PM,2D@87,@='EP\nM;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" Q\nM-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(#0N\nM,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y\nM<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EP\nM90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(S\"C(P,#<M,#8M,3@@,38Z\nM,#$Z,S<@14535\"!;,3DP.3==.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C0S\nM,B!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%T7W1Y<&4H;VED+#$P-\"D@\nM87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,3 T,PHR,# W\nM+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A\nM=&EO;CH@,\"XU,C,@;7,@('-T871E;65N=#H@4T5,14-4($-!4T4@5TA%3B!T\nM>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%S971Y<&4@14Y$($%3\nM(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O:60],3 T,PHR,# W\nM+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A\nM=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E\nM*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@\nM,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'\nM.B @9'5R871I;VXZ(# N-3$T(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%\nM(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!E\nM($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(S\nM\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,3DP.3==.B!;+3%=($Q/1SH@\nM(&1U<F%T:6]N.B P+C,W.2!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%T\nM7W1Y<&4H;VED+\"TQ*2!A<R!T>7!N86UE($923TT@<&=?='EP92!72$5212!O\nM:60@/2 R,PHR,# W+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ\nM72!,3T<Z(\"!D=7)A=&EO;CH@,\"XU,3<@;7,@('-T871E;65N=#H@4T5,14-4\nM($-!4T4@5TA%3B!T>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%S\nM971Y<&4@14Y$($%3(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O\nM:60],C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@\nM3$]'.B @9'5R871I;VXZ(# 
N,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!F\nM;W)M871?='EP92AO:60L+3$I(&%S('1Y<&YA;64@1E)/32!P9U]T>7!E(%=(\nM15)%(&]I9\" ](#(S\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,3DP.3==\nM.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C4Q,\"!M<R @<W1A=&5M96YT.B!3\nM14Q%0U0@0T%312!72$5.('1Y<&)A<V5T>7!E/3 @5$A%3B!O:60@96QS92!T\nM>7!B87-E='EP92!%3D0@05,@8F%S971Y<&4*\"2 @1E)/32!P9U]T>7!E(%=(\nM15)%(&]I9#TR,PHR,# W+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@\nM6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,\nM14-4(&9O<FUA=%]T>7!E*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1Y\nM<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ\nM.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(# N-3$Y(&US(\"!S=&%T96UE\nM;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!E\nM;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y\n0<&4@5TA%4D4@;VED/3(S\"@``\n`\nend\n\n", "msg_date": "Mon, 18 Jun 2007 16:09:05 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/18/07, Sabin Coanda <[email protected]> wrote:\n>\n>\n> \"\"Guillaume Smet\"\" <[email protected]> wrote in message\n> news:[email protected]...\n> > Sabin,\n> >\n> > On 6/14/07, Sabin Coanda <[email protected]> wrote:\n> >> I'd like to understand completely the report generated by VACUUM\n> VERBOSE.\n> >> Please tell me where is it documented ?\n> >\n> > You can take a look to what I did for pgFouine:\n> > http://pgfouine.projects.postgresql.org/vacuum.html\n> >\n>\n> Hi Guillaume,\n>\n> I tried pgFouine.php app on a sample log file but it reports me some\n> errors.\n> Could you give me some startup support, please ?\n> I attach the log here to find what's wrong.\n>\n> Regards,\n> Sabin\n>\n>\n> begin 666 postgresql-2007-06-18_160048.log\n> M,C P-RTP-BTQ.\" Q-CHP,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @\n> M9&%T86)A<V4@<WES=&5M('=A<R!S:'5T(&1O=VX@870@,C P-RTP-BTQ.\" Q\n> M-CHP,#HT-R!%15-4\"C(P,#<M,#8M,3@@,38Z,# Z-#D@14535\"!;,3DP-SE=\n> M.B!;+3%=($Q/1SH@(&-H96-K<&]I;G0@<F5C;W)D(&ES(&%T(#,W+T$S-SDX\n> M-C8T\"C(P,#<M,#8M,3@@,38Z,# Z-#D@14535\"!;,3DP-SE=.B!;+3%=($Q/\n> M1SH@(')E9&\\@<F5C;W)D(&ES(&%T(#,W+T$S-SDX-C8T.R!U;F1O(')E8V]R\n> M9\"!I<R!A=\" P+S [('-H=71D;W=N(%12544*,C P-RTP-BTQ.\" Q-CHP,#HT\n> M.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!T<F%N<V%C=&EO;B!)\n> M1#H@,\"\\W.#@V,C$[(&YE>'0@3TE$.B V-C,W.#<*,C P-RTP-BTQ.\" Q-CHP\n> M,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!-=6QT:5AA8W1)\n> M9#H@,S8[(&YE>'0@375L=&E886-T3V9F<V5T.B W,0HR,# W+3 V+3$X(#$V\n> M.C P.C0Y($5%4U0@6S$Y,#<Y73H@6RTQ72!,3T<Z(\"!D871A8F%S92!S>7-T\n> M96T@:7,@<F5A9'D*,C P-RTP-BTQ.\" Q-CHP,3HR-R!%15-4(%LQ.3 Y-UTZ\n> M(%LM,5T@3$]'.B @9'5R871I;VXZ(#@Q+C4P.\"!M<R @<W1A=&5M96YT.B!3\n> M150@1&%T95-T>6QE/4E33SM314Q%0U0@;VED+\"!P9U]E;F-O9&EN9U]T;U]C\n> M:&%R*&5N8V]D:6YG*2!!4R!E;F-O9&EN9RP@9&%T;&%S='-Y<V]I9 H)(\"!&\n> M4D]-('!G7V1A=&%B87-E(%=(15)%(&]I9\" ](#0V.30R,PHR,# W+3 V+3$X\n> M(#$V.C Q.C(W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@\n> M,\"XP.3$@;7,@('-T871E;65N=#H@<V5T(&-L:65N=%]E;F-O9&EN9R!T;R G\n> M54Y)0T]$12<*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM\n> M,5T@3$]'.B @9'5R871I;VXZ(#DX+C8S-R!M<R @<W1A=&5M96YT.B!314Q%\n> M0U0@*B!&4D]-(\")T8D-O;&QE8W1I;VYS(@HR,# W+3 V+3$X(#$V.C Q.C,W\n> M($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;[email protected](@;7,@\n> M('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E*&]I9\"PM,2D@87,@='EP\n> M;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" Q\n> M-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(#0N\n> 
M,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y\n> M<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EP\n> M90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(S\"C(P,#<M,#8M,3@@,38Z\n> M,#$Z,S<@14535\"!;,3DP.3==.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C0S\n> M,B!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%T7W1Y<&4H;VED+#$P-\"D@\n> M87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,3 T,PHR,# W\n> M+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A\n> M=&EO;CH@,\"XU,C,@;7,@('-T871E;65N=#H@4T5,14-4($-!4T4@5TA%3B!T\n> M>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%S971Y<&4@14Y$($%3\n> M(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O:60],3 T,PHR,# W\n> M+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A\n> M=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E\n> M*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@\n> M,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'\n> M.B @9'5R871I;VXZ(# N-3$T(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%\n> M(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!E\n> M($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(S\n> M\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,3DP.3==.B!;+3%=($Q/1SH@\n> M(&1U<F%T:6]N.B P+C,W.2!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%T\n> M7W1Y<&4H;VED+\"TQ*2!A<R!T>7!N86UE($923TT@<&=?='EP92!72$5212!O\n> M:60@/2 R,PHR,# W+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ\n> M72!,3T<Z(\"!D=7)A=&EO;CH@,\"XU,3<@;7,@('-T871E;65N=#H@4T5,14-4\n> M($-!4T4@5TA%3B!T>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%S\n> M971Y<&4@14Y$($%3(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O\n> M:60],C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@\n> M3$]'.B @9'5R871I;VXZ(# N,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!F\n> M;W)M871?='EP92AO:60L+3$I(&%S('1Y<&YA;64@1E)/32!P9U]T>7!E(%=(\n> M15)%(&]I9\" ](#(S\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,3DP.3==\n> M.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C4Q,\"!M<R @<W1A=&5M96YT.B!3\n> M14Q%0U0@0T%312!72$5.('1Y<&)A<V5T>7!E/3 @5$A%3B!O:60@96QS92!T\n> M>7!B87-E='EP92!%3D0@05,@8F%S971Y<&4*\"2 @1E)/32!P9U]T>7!E(%=(\n> M15)%(&]I9#TR,PHR,# W+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@\n> M6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,\n> M14-4(&9O<FUA=%]T>7!E*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1Y\n> M<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ\n> M.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(# N-3$Y(&US(\"!S=&%T96UE\n> M;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!E\n> M;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y\n> 0<&4@5TA%4D4@;VED/3(S\"@``\n> `\n> end\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nGuillaume and Sabin,\n\nI am following this discussion with great interest. I have PG running on\nFreeBSD and am forced to run pgFouine on a separate Linux box. I am hoping I\ncan create a log file. and then copy that over and have pgFouine analyze it\non the Linux box.\na. I created a log file out of vacuum verbose, is that right? It is not\ncomplete because I don't know how to dump it into a file in some sort of\nautmoated fashion. So, I have to take what is on the screen and copy it off.\nb. 
I can also set a variable \"log_min_duration_statement\" in pgsql.conf\n\nI guess I am like Sabin,, and need some hand holding to get started.\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/18/07, Sabin Coanda <[email protected]> wrote:\n\"\"Guillaume Smet\"\" <[email protected]> wrote in messagenews:[email protected]...> Sabin,\n>> On 6/14/07, Sabin Coanda <[email protected]> wrote:>> I'd like to understand completely the report generated by VACUUM VERBOSE.\n>> Please tell me where is it documented ?>> You can take a look to what I did for pgFouine:> http://pgfouine.projects.postgresql.org/vacuum.html\n>Hi Guillaume,I tried pgFouine.php app on a sample log file but it reports me some errors.Could you give me some startup support, please ?I attach the log here to find what's wrong.\nRegards,Sabinbegin 666 postgresql-2007-06-18_160048.logM,C P-RTP-BTQ.\" Q-CHP,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @M9&%T86)A<V4@<WES=&5M('=A<R!S:'5T(&1O=VX@870\n@,C P-RTP-BTQ.\" QM-CHP,#HT-R!%15-4\"C(P,#<M,#8M,3@@,38Z,# Z-#D@14535\"!;,3DP-SE=M.B!;+3%=($Q/1SH@(&-H96-K<&]I;G0@<F5C;W)D(&ES(&%T(#,W+T$S-SDXM-C8T\"C(P,#<M,#8M,3@@,38Z,# \nZ-#D@14535\"!;,3DP-SE=.B!;+3%=($Q/M1SH@(')E9&\\@<F5C;W)D(&ES(&%T(#,W+T$S-SDX-C8T.R!U;F1O(')E8V]RM9\"!I<R!A=\" P+S [('-H=71D;W=N(%12544*,C P-RTP-BTQ.\" Q-CHP,#HTM.2!%15-4\n(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!T<F%N<V%C=&EO;B!)M1#H@,\"\\W.#@V,C$[(&YE>'0@3TE$.B V-C,W.#<*,C P-RTP-BTQ.\" Q-CHPM,#HT.2!%15-4(%LQ.3 W.5TZ(%LM,5T@3$]'.B @;F5X=\"!-=6QT:5AA8W1)\nM9#H@,S8[(&YE>'0@375L=&E886-T3V9F<V5T.B W,0HR,# W+3 V+3$X(#$VM.C P.C0Y($5%4U0@6S$Y,#<Y73H@6RTQ72!,3T<Z(\"!D871A8F%S92!S>7-TM96T@:7,@<F5A9'D*,C P-RTP-BTQ.\" Q-CHP,3HR-R!%15-4(%LQ.3 Y-UTZ\nM(%LM,5T@3$]'.B @9'5R871I;VXZ(#@Q+C4P.\"!M<R @<W1A=&5M96YT.B!3M150@1&%T95-T>6QE/4E33SM314Q%0U0@;VED+\"!P9U]E;F-O9&EN9U]T;U]CM:&%R*&5N8V]D:6YG*2!!4R!E;F-O9&EN9RP@9\n&%T;&%S='-Y<V]I9 H)(\"!&M4D]-('!G7V1A=&%B87-E(%=(15)%(&]I9\" ](#0V.30R,PHR,# W+3 V+3$XM(#$V.C Q.C(W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@M,\"XP.3$\n@;7,@('-T871E;65N=#H@<V5T(&-L:65N=%]E;F-O9&EN9R!T;R GM54Y)0T]$12<*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LMM,5T@3$]'.B @9'5R871I;VXZ(#DX+C8S-R!M<R @<W1A=&5M96YT.B!314Q%\nM0U0@*B!&4D]-(\")T8D-O;&QE8W1I;VYS(@HR,# W+3 V+3$X(#$V.C Q.C,WM($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)A=&EO;[email protected](@;7,@M('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E*&]I9\"PM,\n2D@87,@='EPM;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" QM-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(#0NM,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y\nM<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EPM90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(S\"C(P,#<M,#8M,3@@,38ZM,#$Z,S<@14535\"!;,\n3DP.3==.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C0SM,B!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%T7W1Y<&4H;VED+#$P-\"D@M87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@,3 T,PHR,# W\nM+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)AM=&EO;CH@,\"XU,C,@;7,@('-T871E;65N=#H@4T5,14-4($-!4T4@5TA%3B!TM>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%S971Y<&4@14Y$($%3\nM(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O:60],3 T,PHR,# WM+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@6RTQ72!,3T<Z(\"!D=7)AM=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,14-4(&9O<FUA=%]T>7!E\nM*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1Y<&4@5TA%4D4@;VED(#T@M,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@3$]'M.B @9'5R871I;VXZ(# 
N-3$T(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!#05-%\nM(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!E;'-E('1Y<&)A<V5T>7!EM($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y<&4@5TA%4D4@;VED/3(SM\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,\n3DP.3==.B!;+3%=($Q/1SH@M(&1U<F%T:6]N.B P+C,W.2!M<R @<W1A=&5M96YT.B!314Q%0U0@9F]R;6%TM7W1Y<&4H;VED+\"TQ*2!A<R!T>7!N86UE($923TT@<&=?='EP92!72$5212!OM:60@/2 R,PHR,# W+3 V+3$X(#$V.C \nQ.C,W($5%4U0@6S$Y,#DW73H@6RTQM72!,3T<Z(\"!D=7)A=&EO;CH@,\"XU,3<@;7,@('-T871E;65N=#H@4T5,14-4M($-!4T4@5TA%3B!T>7!B87-E='EP93TP(%1(14X@;VED(&5L<V4@='EP8F%SM971Y<&4@14Y$($%3(&)A<V5T>7!E\"@D@($923TT@<&=?='EP92!72$5212!O\nM:60],C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQ.3 Y-UTZ(%LM,5T@M3$]'.B @9'5R871I;VXZ(# N,S<Q(&US(\"!S=&%T96UE;G0Z(%-%3$5#5\"!FM;W)M871?='EP92AO:60L+3$I(&%S('1Y<&YA;64@1E)/32!P9U]T>7!E(%=(\nM15)%(&]I9\" ](#(S\"C(P,#<M,#8M,3@@,38Z,#$Z,S<@14535\"!;,3DP.3==M.B!;+3%=($Q/1SH@(&1U<F%T:6]N.B P+C4Q,\"!M<R @<W1A=&5M96YT.B!3M14Q%0U0@0T%312!72$5.('1Y<&)A<V5T>7!E/3 @\n5$A%3B!O:60@96QS92!TM>7!B87-E='EP92!%3D0@05,@8F%S971Y<&4*\"2 @1E)/32!P9U]T>7!E(%=(M15)%(&]I9#TR,PHR,# W+3 V+3$X(#$V.C Q.C,W($5%4U0@6S$Y,#DW73H@M6RTQ72!,3T<Z(\"!D=7)A=&EO;CH@,\"XS-S$@;7,@('-T871E;65N=#H@4T5,\nM14-4(&9O<FUA=%]T>7!E*&]I9\"PM,2D@87,@='EP;F%M92!&4D]-('!G7W1YM<&4@5TA%4D4@;VED(#T@,C,*,C P-RTP-BTQ.\" Q-CHP,3HS-R!%15-4(%LQM.3 Y-UTZ(%LM,5T@3$]'.B @9'5R871I;VXZ(# N-3$Y(&US(\"!S=&%T96UE\nM;G0Z(%-%3$5#5\"!#05-%(%=(14X@='EP8F%S971Y<&4],\"!42$5.(&]I9\"!EM;'-E('1Y<&)A<V5T>7!E($5.1\"!!4R!B87-E='EP90H)(\"!&4D]-('!G7W1Y0<&4@5TA%4D4@;VED/3(S\"@``\n`end---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmasterGuillaume and Sabin,\n\nI am following this discussion with great interest. I have PG running\non FreeBSD and am forced to run pgFouine on a separate Linux box. I am\nhoping I can create a log file. and then copy that over and have\npgFouine analyze it on the Linux box. \na.  I created a log file out of vacuum verbose, is that right? It\nis not complete because I don't know how to dump it into a file in some\nsort of autmoated fashion. So, I have to take what is on the screen and\ncopy it off.\nb.  I can also set a variable \"log_min_duration_statement\" in pgsql.conf\n\nI guess I am like Sabin,, and need some hand holding to get started.\n-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Mon, 18 Jun 2007 11:06:46 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/18/07, Sabin Coanda <[email protected]> wrote:\n> Hi Guillaume,\n>\n> I tried pgFouine.php app on a sample log file but it reports me some errors.\n> Could you give me some startup support, please ?\n> I attach the log here to find what's wrong.\n\nSorry for the delay. I answered to your private email this evening.\n\n--\nGuillaume\n", "msg_date": "Tue, 19 Jun 2007 00:08:04 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" }, { "msg_contents": "On 6/18/07, Y Sidhu <[email protected]> wrote:\n> I am following this discussion with great interest. I have PG running on\n> FreeBSD and am forced to run pgFouine on a separate Linux box. I am hoping I\n> can create a log file. and then copy that over and have pgFouine analyze it\n> on the Linux box.\n> a. I created a log file out of vacuum verbose, is that right? It is not\n> complete because I don't know how to dump it into a file in some sort of\n> autmoated fashion. 
So, I have to take what is on the screen and copy it off.\n\nIf you want to analyze a VACUUM log, just run vacuumdb with the option\nyou need (for example -a -z -v -f for a vacuum full analyze verbose).\n\n# vacuumdb -a -z -v -f > your_log_file.log\n\nThen analyze this log file as explained on the pgFouine website.\n\n> b. I can also set a variable \"log_min_duration_statement\" in pgsql.conf\n>\n> I guess I am like Sabin,, and need some hand holding to get started.\n\nThis is completely different and it's useful for query log analysis.\nSo you don't care if you just want to analyze your vacuum behaviour.\n\n--\nGuillaume\n", "msg_date": "Tue, 19 Jun 2007 00:11:15 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parsing VACUUM VERBOSE" } ]
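A note on automating the capture, since that question came up above: the per-table detail from VACUUM VERBOSE is delivered to the client as INFO notices, which psql and vacuumdb typically print on stderr, so it is safest to redirect both output streams when writing the log that pgFouine will read. A shell sketch -- database selection and paths are placeholders:

# capture verbose vacuum output for later analysis; illustrative paths only
vacuumdb -a -z -v > /var/log/pgsql/vacuum_$(date +%Y%m%d).log 2>&1

The resulting file is the input pgFouine's VACUUM report mode expects, as described on the page linked earlier in the thread.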
[ { "msg_contents": "Looking for replication solutions, I find:\n\nSlony-I\n Seems good, single master only, master is a single point of failure,\n no good failover system for electing a new master or having a failed\n master rejoin the cluster. Slave databases are mostly for safety or\n for parallelizing queries for performance. Suffers from O(N^2) \n communications (N = cluster size).\n\nSlony-II\n Seems brilliant, a solid theoretical foundation, at the forefront of\n computer science. But can't find project status -- when will it be\n available? Is it a pipe dream, or a nearly-ready reality?\n\nPGReplication\n Appears to be a page that someone forgot to erase from the old GBorg site.\n\nPGCluster\n Seems pretty good, but web site is not current, there are releases in use\n that are not on the web site, and also seems to always be a couple steps\n behind the current release of Postgres. Two single-points failure spots,\n load balancer and the data replicator.\n\nIs this a good summary of the status of replication? Have I missed any important solutions or mischaracterized anything?\n\nThanks!\nCraig\n\n(Sorry about the premature send of this message earlier, please ignore.)\n\n\n", "msg_date": "Thu, 14 Jun 2007 16:12:58 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": true, "msg_subject": "Replication" }, { "msg_contents": "What about \"Daffodil Replicator\" - GPL - \nhttp://sourceforge.net/projects/daffodilreplica/\n\n\n-- \nThanks,\n\nEugene Ogurtsov\nInternal Development Chief Architect\nSWsoft, Inc.\n\n\n\nCraig A. James wrote:\n> Looking for replication solutions, I find:\n>\n> Slony-I\n> Seems good, single master only, master is a single point of failure,\n> no good failover system for electing a new master or having a failed\n> master rejoin the cluster. Slave databases are mostly for safety or\n> for parallelizing queries for performance. Suffers from O(N^2) \n> communications (N = cluster size).\n>\n> Slony-II\n> Seems brilliant, a solid theoretical foundation, at the forefront of\n> computer science. But can't find project status -- when will it be\n> available? Is it a pipe dream, or a nearly-ready reality?\n>\n> PGReplication\n> Appears to be a page that someone forgot to erase from the old GBorg \n> site.\n>\n> PGCluster\n> Seems pretty good, but web site is not current, there are releases in \n> use\n> that are not on the web site, and also seems to always be a couple steps\n> behind the current release of Postgres. Two single-points failure \n> spots,\n> load balancer and the data replicator.\n>\n> Is this a good summary of the status of replication? Have I missed \n> any important solutions or mischaracterized anything?\n>\n> Thanks!\n> Craig\n>\n> (Sorry about the premature send of this message earlier, please ignore.)\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Fri, 15 Jun 2007 09:17:46 +0700", "msg_from": "Eugene Ogurtsov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On 6/14/07, Craig A. James <[email protected]> wrote:\n> Looking for replication solutions, I find:\n>\n> Slony-I\n> Seems good, single master only, master is a single point of failure,\n> no good failover system for electing a new master or having a failed\n> master rejoin the cluster. Slave databases are mostly for safety or\n> for parallelizing queries for performance. 
Suffers from O(N^2)\n> communications (N = cluster size).\n\nwith reasonable sysadmin you can implement failover system yourself.\nregarding communications, you can cascade the replication to reduce\nload on the master. If you were implementing a large replication\ncluster, this would probably be a good idea. Slony is powerful,\ntrigger based, and highly configurable.\n\n> Slony-II\n> Seems brilliant, a solid theoretical foundation, at the forefront of\n> computer science. But can't find project status -- when will it be\n> available? Is it a pipe dream, or a nearly-ready reality?\n\naiui, this has not gone beyond early planning phases.\n\n> PGReplication\n> Appears to be a page that someone forgot to erase from the old GBorg site.\n>\n> PGCluster\n> Seems pretty good, but web site is not current, there are releases in use\n> that are not on the web site, and also seems to always be a couple steps\n> behind the current release of Postgres. Two single-points failure spots,\n> load balancer and the data replicator.\n>\n> Is this a good summary of the status of replication? Have I missed any important solutions or mischaracterized anything?\n\npgpool 1/2 is a reasonable solution. it's statement level\nreplication, which has some downsides, but is good for certain things.\npgpool 2 has a neat distributed table mechanism which is interesting.\nYou might want to be looking here if you have extremely high ratios of\nread to write but need to service a huge transaction volume.\n\nPITR is a HA solution which 'replicates' a database cluster to an\narchive or a warm (can be brought up quickly, but not available for\nquerying) standby server. Overhead is very low and it's easy to set\nup. This is maybe the simplest and best solution if all you care\nabout is continuous backup. There are plans (a GSoC project,\nactually) to make the warm standby live for (read only)\nqueries...if/when complete, this would provide a replication mechanism\nsimilar. but significantly better to, mysql binary log replication,\nand would provide an excellent compliment to slony.\n\nthere is also the mammoth replicator...I don't know anything about it,\nmaybe someone could comment?\n\nmerlin\n", "msg_date": "Fri, 15 Jun 2007 10:27:38 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" } ]
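To make the point about scripting failover yourself concrete: with Slony the promotion step itself is a short slonik script; the monitoring and the decision to run it are what the administrator has to supply. The sketch below assumes a two-node cluster and a single replication set -- the cluster name, node ids and conninfo strings are invented for illustration:

# slonik sketch: promote node 2 when node 1 (the origin) has failed
cluster name = mycluster;
node 1 admin conninfo = 'dbname=app host=db1 user=slony';
node 2 admin conninfo = 'dbname=app host=db2 user=slony';

failover (id = 1, backup node = 2);

# for a planned switchover with both nodes still up, the gentler form is:
#   lock set (id = 1, origin = 1);
#   move set (id = 1, old origin = 1, new origin = 2);

After a real failover the old origin cannot simply rejoin when it comes back; it has to be dropped and re-subscribed from scratch, which is essentially the gap discussed in the thread that follows.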
[ { "msg_contents": "Looking for replication solutions, I find:\n\nSlony-I\n Seems good, single master only, master is a single point of failure,\n no good failover system for electing a new master or having a failed\n master rejoin the cluster. Slave databases are mostly for safety or\n for parallelizing queries for performance. Suffers from O(N^2) \n communications (N = cluster size).\n\nSlony-II\n Seems brilliant, a solid theoretical foundation, at the forefront of\n computer science. But can't find project status -- when will it be\n available? Is it a pipe dream, or a nearly-ready reality?\n\nPGReplication\n Appears to be a page that someone forgot to erase from the old GBorg site.\n\nPGCluster\n Seems pretty good, but web site is not current, there are releases in use\n that are not on the web site, and also seems to always be a couple steps\n behind the current release of Postgres. Two single-points failure spots,\n load balancer and the data replicator.\n\nIs this a good summary of the status of replication? Have I missed any important solutions or mischaracterized anything?\n\nThanks!\nCraig\n\n", "msg_date": "Thu, 14 Jun 2007 16:14:02 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Replication" }, { "msg_contents": "Craig James wrote:\n> Looking for replication solutions, I find:\n> \n> Slony-I\n> Seems good, single master only, master is a single point of failure,\n> no good failover system for electing a new master or having a failed\n> master rejoin the cluster. Slave databases are mostly for safety or\n> for parallelizing queries for performance. Suffers from O(N^2) \n> communications (N = cluster size).\n\nYep\n\n> \n> Slony-II\n> Seems brilliant, a solid theoretical foundation, at the forefront of\n> computer science. But can't find project status -- when will it be\n> available? Is it a pipe dream, or a nearly-ready reality?\n> \n\nDead\n\n\n> PGReplication\n> Appears to be a page that someone forgot to erase from the old GBorg site.\n> \n\nDead\n\n\n> PGCluster\n> Seems pretty good, but web site is not current, there are releases in use\n> that are not on the web site, and also seems to always be a couple steps\n> behind the current release of Postgres. Two single-points failure spots,\n> load balancer and the data replicator.\n> \n\nSlow as all get out for writes but cool idea\n\n> Is this a good summary of the status of replication? Have I missed any \n> important solutions or mischaracterized anything?\n> \n\nlog shipping, closed source solutions\n\n\n> Thanks!\n> Craig\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 14 Jun 2007 16:22:25 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Which replication problem are you trying to solve?\n\nOn Thu, 14 Jun 2007, Craig James wrote:\n\n> Looking for replication solutions, I find:\n>\n> Slony-I\n> Seems good, single master only, master is a single point of failure,\n> no good failover system for electing a new master or having a failed\n> master rejoin the cluster. Slave databases are mostly for safety or\n> for parallelizing queries for performance. Suffers from O(N^2) \n> communications (N = cluster size).\n>\n> Slony-II\n> Seems brilliant, a solid theoretical foundation, at the forefront of\n> computer science. But can't find project status -- when will it be\n> available? 
Is it a pipe dream, or a nearly-ready reality?\n>\n> PGReplication\n> Appears to be a page that someone forgot to erase from the old GBorg site.\n>\n> PGCluster\n> Seems pretty good, but web site is not current, there are releases in use\n> that are not on the web site, and also seems to always be a couple steps\n> behind the current release of Postgres. Two single-points failure spots,\n> load balancer and the data replicator.\n>\n> Is this a good summary of the status of replication? Have I missed any \n> important solutions or mischaracterized anything?\n>\n> Thanks!\n> Craig\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 14 Jun 2007 16:26:10 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On 6/15/07, Craig James <[email protected]> wrote:\n[snip]\n> Is this a good summary of the status of replication? Have I missed any important solutions or mischaracterized anything?\n\n* Mammoth Replicator, commercial.\n\n* Continuent uni/cluster, commercial\n(http://www.continuent.com/index.php?option=com_content&task=view&id=212&Itemid=169).\n\n* pgpool-II. Supports load-balancing and replication by implementing a\nproxy that duplicates all updates to all slaves. It can partition data\nby doing this, and it can semi-intelligently route queries to the\nappropriate servers.\n\n* Cybertec. This is a commercial packaging of PGCluster-II from an\nAustrian company.\n\n* Greenplum Database (formerly Bizgres MPP), commercial. Not so much a\nreplication solution as a way to parallelize queries, and targeted at\nthe data warehousing crowd. Similar to ExtenDB, but tightly integrated\nwith PostgreSQL.\n\n* DRDB (http://www.drbd.org/), a device driver that replicates disk\nblocks to other nodes. This works for failover only, not for scaling\nreads. Easy migration of devices if combined with an NFS export.\n\n* Skytools (https://developer.skype.com/SkypeGarage/DbProjects/SkyTools),\na collection of replication tools from the Skype people. Purports to\nbe simpler to use than Slony.\n\nLastly, and perhaps most promisingly, there's the Google Summer of\nCode effort by Florian Pflug\n(http://code.google.com/soc/postgres/appinfo.html?csaid=6545828A8197EBC6)\nto implement true log-based replication, where PostgreSQL's\ntransaction logs are used to keep live slave servers up to date with a\nmaster. In theory, such a system would be extremely simple to set up\nand use, especially since it should, as far as I can see, also\ntransparently replicate the schema for you.\n\nAlexander.\n", "msg_date": "Fri, 15 Jun 2007 01:47:13 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": ">>> On Thu, Jun 14, 2007 at 6:14 PM, in message <[email protected]>,\nCraig James <[email protected]> wrote: \n> Looking for replication solutions, I find:\n> \n> Slony-I\n> Slony-II\n> PGReplication\n> PGCluster\n \nYou wouldn't guess it from the name, but pgpool actually supports replication:\n \nhttp://pgpool.projects.postgresql.org/\n \n\n\n", "msg_date": "Thu, 14 Jun 2007 18:57:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Thanks to all who replied and filled in the blanks. The problem with the web is you never know if you've missed something.\n\nJoshua D. 
Drake wrote:\n>> Looking for replication solutions, I find...\n>> Slony-II\n> Dead\n\nWow, I'm surprised. Is it dead for lack of need, lack of resources, too complex, or all of the above? It sounded like such a promising theoretical foundation.\n\nBen wrote:\n> Which replication problem are you trying to solve?\n\nMost of our data is replicated offline using custom tools tailored to our loading pattern, but we have a small amount of \"global\" information, such as user signups, system configuration, advertisements, and such, that go into a single small (~5-10 MB) \"global database\" used by all servers.\n\nWe need \"nearly-real-time replication,\" and instant failover. That is, it's far more important for the system to keep working than it is to lose a little data. Transactional integrity is not important. Actual hardware failures are rare, and if a user just happens to sign up, or do \"save preferences\", at the instant the global-database server goes down, it's not a tragedy. But it's not OK for the entire web site to go down when the one global-database server fails.\n\nSlony-I can keep several slave databases up to date, which is nice. And I think I can combine it with a PGPool instance on each server, with the master as primary and few Slony-copies as secondary. That way, if the master goes down, the PGPool servers all switch to their secondary Slony slaves, and read-only access can continue. If the master crashes, users will be able to do most activities, but new users can't sign up, and existing users can't change their preferences, until either the master server comes back, or one of the slaves is promoted to master.\n\nThe problem is, there don't seem to be any \"vote a new master\" type of tools for Slony-I, and also, if the original master comes back online, it has no way to know that a new master has been elected. So I'd have to write a bunch of SOAP services or something to do all of this.\n\nI would consider PGCluster, but it seems to be a patch to Postgres itself. I'm reluctant to introduce such a major piece of technology into our entire system, when only one tiny part of it needs the replication service.\n\nThanks,\nCraig\n", "msg_date": "Thu, 14 Jun 2007 17:38:01 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> Most of our data is replicated offline using custom tools tailored to\n> our loading pattern, but we have a small amount of \"global\" information,\n> such as user signups, system configuration, advertisements, and such,\n> that go into a single small (~5-10 MB) \"global database\" used by all\n> servers.\n\nSlony provides near instantaneous failovers (in the single digit seconds\n range). You can script an automatic failover if the master server\nbecomes unreachable. That leaves you the problem of restarting your app\n(or making it reconnect) to the new master.\n\n5-10MB data implies such a fast initial replication, that making the\nserver rejoin the cluster by setting it up from scratch is not an issue.\n\n\n> The problem is, there don't seem to be any \"vote a new master\" type of\n> tools for Slony-I, and also, if the original master comes back online,\n> it has no way to know that a new master has been elected. 
So I'd have\n> to write a bunch of SOAP services or something to do all of this.\n\nYou don't need SOAP services, and you do not need to elect a new master.\nif dbX goes down, dbY takes over, you should be able to decide on a\nstatic takeover pattern easily enough.\n\nThe point here is, that the servers need to react to a problem, but you\nprobably want to get the admin on duty to look at the situation as\nquickly as possible anyway. With 5-10MB of data in the database, a\ncomplete rejoin from scratch to the cluster is measured in minutes.\n\nFurthermore, you need to checkout pgpool, I seem to remember that it has\nsome bad habits in routing queries. (E.g. it wants to apply write\nqueries to all nodes, but slony makes the other nodes readonly.\nFurthermore, anything inside a BEGIN is sent to the master node, which\nis bad with some ORMs, that by default wrap any access into a transaction)\n\nAndreas\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGceUXHJdudm4KnO0RAgh/AJ4kXFpzoQAEnn1B7K6pzoCxk0wFxQCggGF1\nmA1KWvcKtfJ6ZcPiajJK1i4=\n=eoNN\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 15 Jun 2007 03:02:15 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Andreas Kostyrka wrote:\n> Slony provides near instantaneous failovers (in the single digit seconds\n> range). You can script an automatic failover if the master server\n> becomes unreachable.\n\nBut Slony slaves are read-only, correct? So the system isn't fully functional once the master goes down.\n\n> That leaves you the problem of restarting your app\n> (or making it reconnect) to the new master.\n\nDon't you have to run a Slony app to convert one of the slaves into the master?\n\n> 5-10MB data implies such a fast initial replication, that making the\n> server rejoin the cluster by setting it up from scratch is not an issue.\n\nThe problem is to PREVENT it from rejoining the cluster. If you have some semi-automatic process that detects the dead server and converts a slave to the master, and in the mean time the dead server manages to reboot itself (or its network gets fixed, or whatever the problem was), then you have two masters sending out updates, and you're screwed.\n\n>> The problem is, there don't seem to be any \"vote a new master\" type of\n>> tools for Slony-I, and also, if the original master comes back online,\n>> it has no way to know that a new master has been elected. So I'd have\n>> to write a bunch of SOAP services or something to do all of this.\n> \n> You don't need SOAP services, and you do not need to elect a new master.\n> if dbX goes down, dbY takes over, you should be able to decide on a\n> static takeover pattern easily enough.\n\nI can't see how that is true. Any self-healing distributed system needs something like the following:\n\n - A distributed system of nodes that check each other's health\n - A way to detect that a node is down and to transmit that\n information across the nodes\n - An election mechanism that nominates a new master if the\n master fails\n - A way for a node coming online to determine if it is a master\n or a slave\n\nAny solution less than this can cause corruption because you can have two nodes that both think they're master, or end up with no master and no process for electing a master. As far as I can tell, Slony doesn't do any of this. Is there a simpler solution? 
I've never heard of one.\n\n> The point here is, that the servers need to react to a problem, but you\n> probably want to get the admin on duty to look at the situation as\n> quickly as possible anyway.\n\nNo, our requirement is no administrator interaction. We need instant, automatic recovery from failure so that the system stays online.\n\n> Furthermore, you need to checkout pgpool, I seem to remember that it has\n> some bad habits in routing queries. (E.g. it wants to apply write\n> queries to all nodes, but slony makes the other nodes readonly.\n> Furthermore, anything inside a BEGIN is sent to the master node, which\n> is bad with some ORMs, that by default wrap any access into a transaction)\n\nI should have been more clear about this. I was planning to use PGPool in the PGPool-1 mode (not the new PGPool-2 features that allow replication). So it would only be acting as a failover mechanism. Slony would be used as the replication mechanism.\n\nI don't think I can use PGPool as the replicator, because then it becomes a new single point of failure that could bring the whole system down. If you're using it for INSERT/UPDATE, then there can only be one PGPool server.\n\nI was thinking I'd put a PGPool server on every machine in failover mode only. It would have the Slony master as the primary connection, and a Slony slave as the failover connection. The applications would route all INSERT/UPDATE statements directly to the Slony master, and all SELECT statements to the PGPool on localhost. When the master failed, all of the PGPool servers would automatically switch to one of the Slony slaves.\n\nThis way, the system would keep running on the Slony slaves (so it would be read-only), until a sysadmin could get the master Slony back online. And when the master came online, the PGPool servers would automatically reconnect and write-access would be restored.\n\nDoes this make sense?\n\nCraig\n", "msg_date": "Thu, 14 Jun 2007 18:44:52 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "Craig James wrote:\n> Andreas Kostyrka wrote:\n>> Slony provides near instantaneous failovers (in the single digit seconds\n>> range). You can script an automatic failover if the master server\n>> becomes unreachable.\n> \n> But Slony slaves are read-only, correct? So the system isn't fully \n> functional once the master goes down.\n\nThat is what promotion is for.\n\nJoshua D. Drake\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 14 Jun 2007 19:37:07 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On 6/15/07, Craig James <[email protected]> wrote:\n> I don't think I can use PGPool as the replicator, because then it becomes a new single point of failure that could bring the whole system down. If you're using it for INSERT/UPDATE, then there can only be one PGPool server.\n\nAre you sure? 
I have been considering this possibility, too, but I\ndidn't find anything in the documentation. The main mechanism of the\nproxy is taking received updates and playing them one multiple servers\nwith 2PC, and the proxies should not need to keep any state about\nthis, so why couldn't you install multiple proxies?\n\nAlexander.\n", "msg_date": "Fri, 15 Jun 2007 10:28:33 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Hello,\n\nOn Thu, 2007-06-14 at 16:14 -0700, Craig James wrote:\n> Cluster\n> Seems pretty good, but web site is not current, \n\nhttp://www.pgcluster.org is a bit up2date, also\nhttp://pgfoundry.org/projects/pgcluster is up2date (at least downloads\npage :) )\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, ODBCng - http://www.commandprompt.com/", "msg_date": "Fri, 15 Jun 2007 18:16:41 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On Thu, 14 Jun 2007 17:38:01 -0700\nCraig James <[email protected]> wrote:\n\n> I would consider PGCluster, but it seems to be a patch to Postgres\n> itself. I'm reluctant to introduce such a major piece of technology\n\nYes it is. For most of the time it is not very much behind actual\nversions of postgresql. The project's biggest drawbacks, as I see:\n\n- horrible documentation\n- changing configuration without any warning/help to the \"user\"\n(as far as there are only \"rc\"-s, I can't really blame the\ndevelopers for that... :) )\n\n- there are only \"rc\" -s, no \"stable\" version available for current\npostgresql releases.\n\nI think this project needs someone speaking english very well, and\nhaving the time and will to coordinate and document all the code that\nis written. Otherwise the idea and the solution seems to be very good.\nIf someone - with big luck and lot of try-fail efforts - sets up a\nworking system, then it will be stable and working for long time.\n\n> into our entire system, when only one tiny part of it needs the\n> replication service.\n> \n> Thanks,\n> Craig\n\nRgds,\nAkos\n\n-- \nÜdvözlettel,\nGábriel Ákos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu =-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n", "msg_date": "Fri, 15 Jun 2007 19:42:48 +0200", "msg_from": "=?UTF-8?B?R8OhYnJpZWwgw4Frb3M=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Hi,\n\nJoshua D. Drake wrote:\n>> Slony-II\n>> Seems brilliant, a solid theoretical foundation, at the forefront of\n>> computer science. But can't find project status -- when will it be\n>> available? Is it a pipe dream, or a nearly-ready reality?\n>>\n> \n> Dead\n\nNot quite... there's still Postgres-R, see www.postgres-r.org And I'm \ncontinuously working on it, despite not having updated the website for \nalmost a year now...\n\nI planned on releasing the next development snapshot together with 8.3, \nas that seems to be delayed, that seems realistic ;-)\n\nRegards\n\nMarkus\n\n", "msg_date": "Mon, 18 Jun 2007 19:40:57 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Markus Schiltknecht wrote:\n> Not quite... 
there's still Postgres-R, see www.postgres-r.org And I'm \n> continuously working on it, despite not having updated the website for \n> almost a year now...\n> \n> I planned on releasing the next development snapshot together with 8.3, \n> as that seems to be delayed, that seems realistic ;-)\n\nIs Postgres-R the same thing as Slony-II? There's a lot of info and news around about Slony-II, but your web page doesn't seem to mention it.\n\nWhile researching replication solutions, I had a heck of a time sorting out the dead or outdated web pages (like the stuff on gborg) from the active projects.\n\nEither way, it's great to know you're working on it.\n\nCraig\n", "msg_date": "Mon, 18 Jun 2007 11:32:43 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication" }, { "msg_contents": "Hi,\n\nCraig James wrote:\n> Is Postgres-R the same thing as Slony-II? There's a lot of info and \n> news around about Slony-II, but your web page doesn't seem to mention it.\n\nHm... true. Good point. Maybe I should add a FAQ:\n\nPostgres-R has been the name of the research project by Bettina Kemme et \nal. Slony-II was the name Neil and Gavin gave their attempt to continue \nthat project.\n\nI've based my work on the old (6.4.2) Postgres-R source code - and I'm \nstill calling it Postgres-R, probably Postgres-R (8) to distinguish it \nfrom the original one. But I'm thinking about changing the name \ncompletely... however, I'm a developer, not a marketing guru.\n\n> While researching replication solutions, I had a heck of a time sorting \n> out the dead or outdated web pages (like the stuff on gborg) from the \n> active projects.\n\nYeah, that's one of the main problems with replication for PostgreSQL. I \nhope Postgres-R (or whatever name I'll come up with in the future) can \nchange that.\n\n> Either way, it's great to know you're working on it.\n\nMaybe you want to join its mailing list [1]? I'll try to get some \ndiscussion going there in the near future.\n\nRegards\n\nMarkus\n\n[1]: Postgres-R on gborg:\nhttp://pgfoundry.org/projects/postgres-r/\n", "msg_date": "Mon, 18 Jun 2007 20:54:46 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On Thu, 2007-06-14 at 16:14 -0700, Craig James wrote:\n> Looking for replication solutions, I find:\n> \n> Slony-I\n> Seems good, single master only, master is a single point of failure,\n> no good failover system for electing a new master or having a failed\n> master rejoin the cluster. Slave databases are mostly for safety or\n> for parallelizing queries for performance. Suffers from O(N^2) \n> communications (N = cluster size).\n> \n\nThere's MOVE SET which transfers the origin (master) from one node to\nanother without losing any committed transactions.\n\nThere's also FAILOVER, which can set a new origin even if the old origin\nis completely gone, however you will lose the transactions that haven't\nbeen replicated yet.\n\nTo have a new node join the cluster, you SUBSCRIBE SET, and you can MOVE\nSET to it later if you want that to be the master.\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Tue, 19 Jun 2007 14:22:49 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "On Mon, Jun 18, 2007 at 08:54:46PM +0200, Markus Schiltknecht wrote:\n> Postgres-R has been the name of the research project by Bettina Kemme et \n> al. 
Slony-II was the name Neil and Gavin gave their attempt to continue \n> that project.\n\nThis isn't quite true. Slony-II was originally conceived by Jan as\nan attempt to implement some of the Postgres-R ideas. For our uses,\nhowever, Postgres-R had built into it a rather knotty design problem:\nunder high-contention workloads, it will automatically increase the\nnumber of ROLLBACKs users experience. Jan had some ideas on how to\nsolve this by moving around the GC events and doing slightly\ndifferent things with them.\n\nTo that end, Afilias sponsored a small workshop in Toronto during one\nof the coldest weeks the city has ever seen. This should have been a\nclue, perhaps. ;-) Anyway, the upshot of this was that two or three\ndifferent approaches were attempted in prototypes. AFAIK, Neil and\nGavin got the farthest, but just about everyone who was involved in\nthe original workshop all independently concluded that the approach\nwe were attempting to get to work was doomed -- it might go, but\nthe overhead was great enough that it wouldn't be any benefit. \n\nPart of the problem, as near as I could tell, was that we had no\ngroup communication protocol that would really work. Spread needed a\n_lot_ of work (where \"lot of work\" may mean \"rewrite\"), and I just\ndidn't have the humans to put on that problem. Another part of the\nproblem was that, for high-contention workloads like the ones we\nhappened to be working on, an optimistic approach like Postgres-R is\nprobably always going to be a loser.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Wed, 20 Jun 2007 11:06:46 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" }, { "msg_contents": "Hi,\n\nAndrew Sullivan wrote:\n> This isn't quite true. Slony-II was originally conceived by Jan as\n> an attempt to implement some of the Postgres-R ideas.\n\nOh, right, thanks for that correction.\n\n> Part of the problem, as near as I could tell, was that we had no\n> group communication protocol that would really work. Spread needed a\n> _lot_ of work (where \"lot of work\" may mean \"rewrite\"), and I just\n> didn't have the humans to put on that problem. Another part of the\n> problem was that, for high-contention workloads like the ones we\n> happened to be working on, an optimistic approach like Postgres-R is\n> probably always going to be a loser.\n\nHm.. for high-contention on single rows, sure, yes - you would mostly \nget rollbacks for conflicting transactions. But the optimism there is \njustified, as I think most real world transactions don't conflict (or \nelse you can work around such high single row contention).\n\nYou are right in that the serialization of the GCS can be bottleneck. \nHowever, there's lots of research going on in that area and I'm \nconvinced that Postgres-R has it's value.\n\nRegards\n\nMarkus\n\n", "msg_date": "Thu, 21 Jun 2007 17:14:39 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication" } ]
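The thread above stops short of showing what the "detect a dead master and promote a standby" piece might look like. A minimal sketch of that monitor loop is given below in Python; it assumes the psycopg2 driver, and the DSN, the failure threshold, and the promote_standby.sh script are illustrative placeholders rather than anything from the original thread. A real deployment would drive a slonik script (FAILOVER or MOVE SET, as mentioned in the thread) and would also have to fence the old master so it cannot rejoin as a second origin, which is exactly the hazard Craig raises.

# Hypothetical failover-monitor sketch; node names, DSN and the promote
# command are assumptions for illustration, not a real setup.
import subprocess
import time

import psycopg2  # assumes the psycopg2 driver is installed

MASTER_DSN = "host=db-master dbname=global user=monitor connect_timeout=5"
FAILURES_BEFORE_FAILOVER = 3      # avoid failing over on a single network blip
CHECK_INTERVAL_SECONDS = 5

def master_is_alive(dsn):
    """Return True if the master accepts a connection and answers a query."""
    try:
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute("SELECT 1")
            cur.fetchone()
            return True
        finally:
            conn.close()
    except psycopg2.Error:
        return False

def promote_standby():
    """Promote the designated standby.  The script name is a placeholder;
    with Slony-I this would run a slonik FAILOVER or MOVE SET script and
    must also fence the old master (firewall rule, shared 'retired' flag)
    so it cannot come back as a second master."""
    subprocess.check_call(["/usr/local/bin/promote_standby.sh"])  # placeholder

def main():
    failures = 0
    while True:
        if master_is_alive(MASTER_DSN):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                promote_standby()
                break  # new topology takes over; an admin reviews afterwards
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()

The fixed promote target here corresponds to the static takeover pattern Andreas suggests (dbX fails, dbY takes over), rather than a full election protocol.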
[ { "msg_contents": "Ok, slony supports two kinds of operation here: failover (which moves the master node to a new one without the old master node being present, it also drops the old node from replication) and move set (which moves the master node with cooperation)\n\nThe usecases for these two are slightly different. one is for all kinds of scheduled maintenance, while the other is what you do when you've got a hardware failure.\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\tRe: [PERFORM] Replication\nVon:\tCraig James <[email protected]>\nDatum:\t\t15.06.2007 01:48\n\nAndreas Kostyrka wrote:\n> Slony provides near instantaneous failovers (in the single digit seconds\n> range). You can script an automatic failover if the master server\n> becomes unreachable.\n\nBut Slony slaves are read-only, correct? So the system isn't fully functional once the master goes down.\n\n> That leaves you the problem of restarting your app\n> (or making it reconnect) to the new master.\n\nDon't you have to run a Slony app to convert one of the slaves into the master?\n\n> 5-10MB data implies such a fast initial replication, that making the\n> server rejoin the cluster by setting it up from scratch is not an issue.\n\nThe problem is to PREVENT it from rejoining the cluster. If you have some semi-automatic process that detects the dead server and converts a slave to the master, and in the mean time the dead server manages to reboot itself (or its network gets fixed, or whatever the problem was), then you have two masters sending out updates, and you're screwed.\n\n>> The problem is, there don't seem to be any \"vote a new master\" type of\n>> tools for Slony-I, and also, if the original master comes back online,\n>> it has no way to know that a new master has been elected. So I'd have\n>> to write a bunch of SOAP services or something to do all of this.\n> \n> You don't need SOAP services, and you do not need to elect a new master.\n> if dbX goes down, dbY takes over, you should be able to decide on a\n> static takeover pattern easily enough.\n\nI can't see how that is true. Any self-healing distributed system needs something like the following:\n\n - A distributed system of nodes that check each other's health\n - A way to detect that a node is down and to transmit that\n information across the nodes\n - An election mechanism that nominates a new master if the\n master fails\n - A way for a node coming online to determine if it is a master\n or a slave\n\nAny solution less than this can cause corruption because you can have two nodes that both think they're master, or end up with no master and no process for electing a master. As far as I can tell, Slony doesn't do any of this. Is there a simpler solution? I've never heard of one.\n\n> The point here is, that the servers need to react to a problem, but you\n> probably want to get the admin on duty to look at the situation as\n> quickly as possible anyway.\n\nNo, our requirement is no administrator interaction. We need instant, automatic recovery from failure so that the system stays online.\n\n> Furthermore, you need to checkout pgpool, I seem to remember that it has\n> some bad habits in routing queries. (E.g. it wants to apply write\n> queries to all nodes, but slony makes the other nodes readonly.\n> Furthermore, anything inside a BEGIN is sent to the master node, which\n> is bad with some ORMs, that by default wrap any access into a transaction)\n\nI should have been more clear about this. 
I was planning to use PGPool in the PGPool-1 mode (not the new PGPool-2 features that allow replication). So it would only be acting as a failover mechanism. Slony would be used as the replication mechanism.\n\nI don't think I can use PGPool as the replicator, because then it becomes a new single point of failure that could bring the whole system down. If you're using it for INSERT/UPDATE, then there can only be one PGPool server.\n\nI was thinking I'd put a PGPool server on every machine in failover mode only. It would have the Slony master as the primary connection, and a Slony slave as the failover connection. The applications would route all INSERT/UPDATE statements directly to the Slony master, and all SELECT statements to the PGPool on localhost. When the master failed, all of the PGPool servers would automatically switch to one of the Slony slaves.\n\nThis way, the system would keep running on the Slony slaves (so it would be read-only), until a sysadmin could get the master Slony back online. And when the master came online, the PGPool servers would automatically reconnect and write-access would be restored.\n\nDoes this make sense?\n\nCraig\n\n", "msg_date": "Fri, 15 Jun 2007 07:21:58 +0200", "msg_from": "\"Andreas Kostyrka\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication" } ]
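One way to picture the application side of the architecture described above (writes to the Slony origin, SELECTs through a local pgpool that fails over to a subscriber, read-only degradation when the origin is down) is the small Python sketch below. The DSNs, the pgpool port, and the fallback order are assumptions for illustration only; in the proposed design pgpool itself, not this code, performs the actual failover.

# Sketch of the read/write split from the thread; all DSNs are assumptions.
import psycopg2

WRITE_DSN = "host=db-master dbname=global user=app connect_timeout=3"
READ_DSN = "host=127.0.0.1 port=9999 dbname=global user=app connect_timeout=3"  # local pgpool

def run_write(sql, params=None):
    """INSERT/UPDATE go straight to the origin; if it is down the caller
    sees the error and the site degrades to read-only, as in the thread."""
    conn = psycopg2.connect(WRITE_DSN)
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        conn.commit()
    finally:
        conn.close()

def run_read(sql, params=None):
    """SELECTs go through the local pgpool, which handles failover to a
    Slony subscriber; as a fallback this also retries the origin directly."""
    for dsn in (READ_DSN, WRITE_DSN):
        try:
            conn = psycopg2.connect(dsn)
            try:
                cur = conn.cursor()
                cur.execute(sql, params)
                return cur.fetchall()
            finally:
                conn.close()
        except psycopg2.OperationalError:
            continue
    raise RuntimeError("no database node reachable")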
[ { "msg_contents": "Hello i would like to know if not determining a max size value for a character\nvarying's fields decrease the perfomance (perhaps size of stockage ? or\nsomething else ?)\n\nIf not it is a good way to not specify a max size value ?\nIf it has an importance is it possible to have a general environnment variable\nsaying to postgres to automatically truncate fields which postgres have to\ninsert or update with a length superior at the max length.\n\nSorry for my bad english...\n\nLot of thanks\n", "msg_date": "Sat, 16 Jun 2007 11:53:10 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "determining maxsize for character varying" }, { "msg_contents": "[email protected] <[email protected]> schrieb:\n\n> Hello i would like to know if not determining a max size value for a character\n> varying's fields decrease the perfomance (perhaps size of stockage ? or\n> something else ?)\n\nNo problem because of the TOAST-technology:\nhttp://www.postgresql.org/docs/current/static/storage-toast.html\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 16 Jun 2007 12:55:25 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining maxsize for character varying" }, { "msg_contents": "Thanks\n\nif i understand well that means that if i choose character varying(3) or\ncharacter varying(8) or character varying(32) or character varying with no max\nlength the fields will take the same place in the disk (8kb) except for fields\ntoo long to take place in the 8kb whose are stored in another place ?\n\nIs that correct ?\n\nSo for small strings it's better to choose character(n) when it's possible ?\n\n\nBest regards,\n\nLoic\n\nSelon Andreas Kretschmer <[email protected]>:\n\n> [email protected] <[email protected]> schrieb:\n>\n> > Hello i would like to know if not determining a max size value for a\n> character\n> > varying's fields decrease the perfomance (perhaps size of stockage ? or\n> > something else ?)\n>\n> No problem because of the TOAST-technology:\n> http://www.postgresql.org/docs/current/static/storage-toast.html\n>\n>\n> Andreas\n> --\n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\n> Kaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n\n", "msg_date": "Sat, 16 Jun 2007 13:35:09 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: determining maxsize for character varying" }, { "msg_contents": "On lau, 2007-06-16 at 13:35 +0200, [email protected] wrote:\n> Thanks\n> \n> if i understand well that means that if i choose character varying(3) or\n> character varying(8) or character varying(32) or character varying with no max\n> length the fields will take the same place in the disk (8kb) except for fields\n> too long to take place in the 8kb whose are stored in another place ?\n> \n> Is that correct ?\n\nnot at all\n\na varchar will occupy the bytelength of your actual string,\n+ a small fixed overhead+padding, except when the total rowsize causes\nTOASTing\n\nin single-byte encodings, the string 'okparanoid' will occupy\nthe same amount of diskspace in a varchar, varchar(10) or a\nvarchar(1000) column, namely around 16 bytes.\n\nhope this helps\n\ngnari\n\n\n", "msg_date": "Sat, 16 Jun 2007 12:30:20 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining maxsize for character varying" }, { "msg_contents": "[email protected] writes:\n> Hello i would like to know if not determining a max size value for a\n> character varying's fields decrease the perfomance (perhaps size of\n> stockage ?\n\nNo, more the other way around: specifying varchar(N) when you had to\npick N out of the air decreases performance, because of all the\nessentially useless checks of the string length that Postgres has to\nmake. If you cannot defend a specific limit N as being required by your\napplication, then just make it unconstrained varchar (or better text).\n\nDo *not* use char(N) for data with highly variable width; that one\ndefinitely will cost you performance and disk space.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Jun 2007 11:07:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining maxsize for character varying " } ]
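A quick way to check the storage claims made in this thread on a live server is sketched below (Python with psycopg2; the connection string is a placeholder). pg_column_size() reports what each value actually occupies, so text, unconstrained varchar and varchar(n) should come out identical for the same string, while char(n) pays for its blank padding.

# Compare on-disk size of the same string in text, varchar, varchar(n), char(n).
# Connection parameters are placeholders (assumption).
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
cur.execute("""
    CREATE TEMP TABLE width_demo (
        t  text,
        v  varchar,
        vn varchar(1000),
        cn char(1000)
    )
""")
cur.execute(
    "INSERT INTO width_demo VALUES (%s, %s, %s, %s)",
    ("okparanoid",) * 4,
)
cur.execute("""
    SELECT pg_column_size(t), pg_column_size(v),
           pg_column_size(vn), pg_column_size(cn)
    FROM width_demo
""")
print(cur.fetchone())  # expect the first three to match; char(1000) is far larger
conn.close()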
[ { "msg_contents": "I am a Java Software architect, DBA, and project manager for the\nUniversity of Illinois, Department of Web Services. We use PostgreSQL\nto serve about 2 million pages of dynamic content a month; everything\nfrom calendars, surveys, forms, discussion boards, RSS feeds, etc. I am\nreally impressed with this tool.\n\n \n\nThe only major problem area I have found where PostgreSQL is really\nlacking is in \"what should my initial configuration settings be?\" I\nrealize that there are many elements that can impact a DBA's specific\ndatabase settings but it would be nice to have a \"configuration tool\"\nthat would get someone up and running better in the beginning. \n\n \n\nThis is my idea:\n\n \n\nA JavaScript HTML page that would have some basic questions at the top:\n\n1) How much memory do you have?\n\n2) How many connections will be made to the database?\n\n3) What operating system do you use?\n\n4) Etc...\n\n \n\nNext the person would press a button, \"generate\", found below the\nquestions. The JavaScript HTML page would then generate content for two\nIframes at the bottom on the page. One Iframe would contain the\ncontents of the postgresql.conf file. The postgresql.conf settings\nwould be tailored more to the individuals needs than the standard\ndefault file. The second Iframe would contain the default settings one\nshould consider using with their operating system.\n\n \n\nMy web team would be very happy to develop this for the PostgreSQL\nproject. It would have saved us a lot of time by having a\nconfiguration tool in the beginning. I am willing to make this a very\nhigh priority for my team.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI am a Java Software architect, DBA, and project manager for\nthe University of\n Illinois, Department of\nWeb Services.  We use PostgreSQL to serve about 2 million pages of dynamic\ncontent a month; everything from calendars, surveys, forms, discussion boards,\nRSS feeds, etc.  I am really impressed with this tool.\n \nThe only major problem area I have found where PostgreSQL is\nreally lacking is in “what should my initial configuration settings be?” \nI realize that there are many elements that can impact a DBA’s specific\ndatabase settings but it would be nice to have a “configuration tool”\nthat would get someone up and running better in the beginning.  \n \nThis is my idea:\n \nA JavaScript HTML page that would have some basic questions\nat the top:\n1) How much memory do you have?\n2) How many connections will be made to the database?\n3) What operating system do you use?\n4) Etc…\n \nNext the person would press a button, “generate”,\nfound below the questions.  The JavaScript HTML page would then generate\ncontent for two Iframes at the bottom on the page.  One Iframe would\ncontain the contents of the postgresql.conf file.  The postgresql.conf\nsettings would be tailored more to the individuals needs than the standard\ndefault file.  The second Iframe would contain the default settings one\nshould consider using with their operating system.\n \nMy web team would be very happy to develop this for the\nPostgreSQL project.   It would have saved us a lot of time by having\na configuration tool in the beginning.  
I am willing to make this a very\nhigh priority for my team.\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Mon, 18 Jun 2007 10:04:15 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Volunteer to build a configuration tool" }, { "msg_contents": "On 18/06/07, Campbell, Lance <[email protected]> wrote:\n>\n> Next the person would press a button, \"generate\", found below the questions.\n> The JavaScript HTML page would then generate content for two Iframes at the\n> bottom on the page. One Iframe would contain the contents of the\n> postgresql.conf file. The postgresql.conf settings would be tailored more\n> to the individuals needs than the standard default file. The second Iframe\n> would contain the default settings one should consider using with their\n> operating system.\n>\n\n I think it could be a great help to newbies. IMVHO a bash script in\ndialog could be better than a javascript file. There are many\nadministrators with no graphics navigator or with no javascript.\n\n>\n\n-- \nhttp://www.advogato.org/person/mgonzalez/\n", "msg_date": "Mon, 18 Jun 2007 11:16:03 -0400", "msg_from": "\"Mario Gonzalez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Mario,\nThe JavaScript configuration tool I proposed would not be in the install\nof PostgreSQL. It would be an HTML page. It would be part of the HTML\ndocumentation or it could be a separate HTML page that would be linked\nfrom the HTML documentation.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Mario Gonzalez [mailto:[email protected]] \nSent: Monday, June 18, 2007 10:16 AM\nTo: Campbell, Lance\nCc: [email protected]; [email protected]\nSubject: Re: [DOCS] Volunteer to build a configuration tool\n\nOn 18/06/07, Campbell, Lance <[email protected]> wrote:\n>\n> Next the person would press a button, \"generate\", found below the\nquestions.\n> The JavaScript HTML page would then generate content for two Iframes\nat the\n> bottom on the page. One Iframe would contain the contents of the\n> postgresql.conf file. The postgresql.conf settings would be tailored\nmore\n> to the individuals needs than the standard default file. The second\nIframe\n> would contain the default settings one should consider using with\ntheir\n> operating system.\n>\n\n I think it could be a great help to newbies. IMVHO a bash script in\ndialog could be better than a javascript file. There are many\nadministrators with no graphics navigator or with no javascript.\n\n>\n\n-- \nhttp://www.advogato.org/person/mgonzalez/\n", "msg_date": "Mon, 18 Jun 2007 12:30:28 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On 6/18/07, Campbell, Lance <[email protected]> wrote:\n>\n> Mario,\n> The JavaScript configuration tool I proposed would not be in the install\n> of PostgreSQL. It would be an HTML page. 
It would be part of the HTML\n> documentation or it could be a separate HTML page that would be linked\n> from the HTML documentation.\n>\n> Thanks,\n>\n> Lance Campbell\n> Project Manager/Software Architect\n> Web Services at Public Affairs\n> University of Illinois\n> 217.333.0382\n> http://webservices.uiuc.edu\n>\n> -----Original Message-----\n> From: Mario Gonzalez [mailto:[email protected]]\n> Sent: Monday, June 18, 2007 10:16 AM\n> To: Campbell, Lance\n> Cc: [email protected]; [email protected]\n> Subject: Re: [DOCS] Volunteer to build a configuration tool\n>\n> On 18/06/07, Campbell, Lance <[email protected]> wrote:\n> >\n> > Next the person would press a button, \"generate\", found below the\n> questions.\n> > The JavaScript HTML page would then generate content for two Iframes\n> at the\n> > bottom on the page. One Iframe would contain the contents of the\n> > postgresql.conf file. The postgresql.conf settings would be tailored\n> more\n> > to the individuals needs than the standard default file. The second\n> Iframe\n> > would contain the default settings one should consider using with\n> their\n> > operating system.\n> >\n>\n> I think it could be a great help to newbies. IMVHO a bash script in\n> dialog could be better than a javascript file. There are many\n> administrators with no graphics navigator or with no javascript.\n>\n> >\n>\n> --\n> http://www.advogato.org/person/mgonzalez/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nEXCELLENT idea Lance.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/18/07, Campbell, Lance <[email protected]> wrote:\nMario,The JavaScript configuration tool I proposed would not be in the installof PostgreSQL.  It would be an HTML page.  It would be part of the HTMLdocumentation or it could be a separate HTML page that would be linked\nfrom the HTML documentation.Thanks,Lance CampbellProject Manager/Software ArchitectWeb Services at Public AffairsUniversity of Illinois217.333.0382\nhttp://webservices.uiuc.edu-----Original Message-----From: Mario Gonzalez [mailto:[email protected]]Sent: Monday, June 18, 2007 10:16 AMTo: Campbell, Lance\nCc: [email protected]; [email protected]: Re: [DOCS] Volunteer to build a configuration tool\nOn 18/06/07, Campbell, Lance <[email protected]> wrote:>> Next the person would press a button, \"generate\", found below thequestions.>  The JavaScript HTML page would then generate content for two Iframes\nat the> bottom on the page.  One Iframe would contain the contents of the> postgresql.conf file.  The postgresql.conf settings would be tailoredmore> to the individuals needs than the standard default file.  The second\nIframe> would contain the default settings one should consider using withtheir> operating system.>  I think it could be a great help to newbies. IMVHO a bash script indialog could be better than a javascript file. 
There are many\nadministrators with no graphics navigator or with no javascript.>--http://www.advogato.org/person/mgonzalez/---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmasterEXCELLENT idea Lance.-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Mon, 18 Jun 2007 11:11:06 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] Volunteer to build a configuration tool" }, { "msg_contents": "On 18/06/07, Campbell, Lance <[email protected]> wrote:\n> Mario,\n> The JavaScript configuration tool I proposed would not be in the install\n> of PostgreSQL. It would be an HTML page. It would be part of the HTML\n> documentation or it could be a separate HTML page that would be linked\n> from the HTML documentation.\n>\n\n Ok, then I'm not the correct person to make that decision, however\njust a tip: the postgresql documentation was wrote in DocBook SGML\n\n>\n\n-- \nhttp://www.advogato.org/person/mgonzalez/\n", "msg_date": "Mon, 18 Jun 2007 14:55:11 -0400", "msg_from": "\"Mario Gonzalez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "> This is my idea:\n>\n> A JavaScript HTML page that would have some basic questions at the top:\n>\n> 1) How much memory do you have?\n>\n> 2) How many connections will be made to the database?\n>\n> 3) What operating system do you use?\n>\n> 4) Etc�\n>\n> Next the person would press a button, �generate�, found below the \n> questions. The JavaScript HTML page would then generate content for \n> two Iframes at the bottom on the page. One Iframe would contain the \n> contents of the postgresql.conf file. The postgresql.conf settings \n> would be tailored more to the individuals needs than the standard \n> default file. The second Iframe would contain the default settings one \n> should consider using with their operating system.\n>\n> My web team would be very happy to develop this for the PostgreSQL \n> project. It would have saved us a lot of time by having a \n> configuration tool in the beginning. I am willing to make this a very \n> high priority for my team.\n>\n\nHi Lance,\n\nI agree that having a page that can assist in generating a base \nconfiguration file is an excellent way to start off with a good \nconfiguration that can assist a system administrator in getting half way \nto a good configuration. We've recently gone through a process of \nconfiguring a machine and it is a time consuming task of testing and \nbenchmarking various configuration details.\n\nMy thoughts:\nUsing the browser is a great idea as a universal platform. I can \nforeseen a problem in that some users won't have GUI access to the \nmachine that they are setting up. I don't have much direct experience in \nthis field, but I suspect that a great number of installations happen \n'headless'? This can easily be circumvented by hosting the configuration \nbuilder on a public internet site, possibly postgresql.org?\n\nAlso, Javascript isn't the easiest language to use to get all the \ndecisions that need to be made for various configuration options. Would \nit not be a better idea to host a configuration builder centrally, \npossible on postgresql.org and have the documentation reference it, \nincluding the docs that come packaged with postgresql (README, INSTALL \ndocumentation?). 
This would mean that you wouldn't be able to package \nthe configuration builder, but you would be able to implement more \napplication logic and more complex decision making in a hosted \napplication. Of course, I have no idea of the skills that your team \nalready have :)\n\n\n\nTo add ideas: perhaps a more advanced tool would be able to add comment \nindicating a suggested range for the particular setting. For example, \nwith 2Gb of RAM, it chooses a workmem of, say, 768Mb, with a comment \nindicating a suggested range of 512Mb - 1024Mb.\n\n\nThanks for taking the time to put this together and for offering the \nservices of your team.\n\nKind regards,\nJames", "msg_date": "Mon, 18 Jun 2007 21:22:33 +0200", "msg_from": "James Neethling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On 6/19/07, Campbell, Lance <[email protected]> wrote:\n> Mario,\nLance,\n\n> The JavaScript configuration tool I proposed would not be in the install\n> of PostgreSQL. It would be an HTML page. It would be part of the HTML\n> documentation or it could be a separate HTML page that would be linked\n> from the HTML documentation.\nSo you're not after a tool that configures postgres at all,\njust one that can give you sensible guesstimates for some\nparameters based on your intended use?\n\n\n> Thanks,\n>\n> Lance Campbell\nCheers,\nAndrej\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Tue, 19 Jun 2007 09:24:46 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Mon, 18 Jun 2007, Campbell, Lance wrote:\n\n> The postgresql.conf settings would be tailored more to the individuals \n> needs than the standard default file. The second Iframe would contain \n> the default settings one should consider using with their operating \n> system.\n\nI'd toyed with making a Javascript based tool for this but concluded it \nwasn't ever going to be robust enough for my purposes. It wouldn't hurt \nto have it around through, as almost anything is an improvement over the \ncurrent state of affairs for new users.\n\nAs far as prior art goes here, there was an ambitious tool driven by Josh \nBerkus called Configurator that tried to address this need but never got \noff the ground, you might want to swipe ideas from it. See \nhttp://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/configurator/configurator/ for \nsome documents/code and \nhttp://pgfoundry.org/docman/index.php?group_id=1000106 for a handy \nOpen-Office spreadsheet.\n\nIf you want this to take off as a project, make sure you can release the \ncode under a free software license compatible with the PostgreSQL project, \nso others can contribute to it and it can be assimilated by the core \nproject if it proves helpful. I know I wouldn't spend a minute working on \nthis if that's not the case.\n\nI'd suggest you try and get the basic look fleshed out with some \nreasonable values for the parameters, then release the source and let \nother people nail down the parts you're missing. 
Don't get stressed about \nmaking sure you have a good value to set for everything before releasing a \nbeta, it's a lot easier for others to come in and help fix a couple of \nparameters once the basic framework is in place.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n\n", "msg_date": "Mon, 18 Jun 2007 19:00:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "one thing to point out to people about this idea is that nothing says that \nthis page needs to be served via a webserver. If all the calculations are \ndone in javascript this could be a local file that you open with a \nbrowser.\n\ndo any of the text-mode browsers implement javascript? if so then you have \nan answer even for the deeply buried isolated headless servers.\n\nDavid Lang\n\n On Mon, 18 Jun 2007, \nCampbell, Lance wrote:\n\n> \n> I am a Java Software architect, DBA, and project manager for the\n> University of Illinois, Department of Web Services. We use PostgreSQL\n> to serve about 2 million pages of dynamic content a month; everything\n> from calendars, surveys, forms, discussion boards, RSS feeds, etc. I am\n> really impressed with this tool.\n>\n>\n>\n> The only major problem area I have found where PostgreSQL is really\n> lacking is in \"what should my initial configuration settings be?\" I\n> realize that there are many elements that can impact a DBA's specific\n> database settings but it would be nice to have a \"configuration tool\"\n> that would get someone up and running better in the beginning.\n>\n>\n>\n> This is my idea:\n>\n>\n>\n> A JavaScript HTML page that would have some basic questions at the top:\n>\n> 1) How much memory do you have?\n>\n> 2) How many connections will be made to the database?\n>\n> 3) What operating system do you use?\n>\n> 4) Etc...\n", "msg_date": "Mon, 18 Jun 2007 16:09:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "\nOn Jun 18, 2007, at 4:09 PM, [email protected] wrote:\n\n> one thing to point out to people about this idea is that nothing \n> says that this page needs to be served via a webserver. If all the \n> calculations are done in javascript this could be a local file that \n> you open with a browser.\n>\n> do any of the text-mode browsers implement javascript? if so then \n> you have an answer even for the deeply buried isolated headless \n> servers.\n\nIt doesn't really matter.\n\nThe implementation is likely to be trivial, and could be independently\nknocked out by anyone in their favorite language in a few hours.\n\nThe tricky bits are going to be defining the problem and creating the\nalogrithm to do the maths from input to output.\n\nIf that's so, the language or platform the proof-of-concept code is\nwritten for isn't that important, as it's likely to be portable to \nanything\nelse without too much effort.\n\nBut the tricky bits seem quite tricky (and the first part, defining the\nproblem, is something where someone developing it on their\nown, without some discussion with other users and devs\ncould easily end up way off in the weeds).\n\nCheers,\n Steve\n\n>\n> David Lang\n>\n> On Mon, 18 Jun 2007, Campbell, Lance wrote:\n>\n>> I am a Java Software architect, DBA, and project manager for the\n>> University of Illinois, Department of Web Services. 
We use \n>> PostgreSQL\n>> to serve about 2 million pages of dynamic content a month; everything\n>> from calendars, surveys, forms, discussion boards, RSS feeds, \n>> etc. I am\n>> really impressed with this tool.\n>>\n>>\n>>\n>> The only major problem area I have found where PostgreSQL is really\n>> lacking is in \"what should my initial configuration settings be?\" I\n>> realize that there are many elements that can impact a DBA's specific\n>> database settings but it would be nice to have a \"configuration tool\"\n>> that would get someone up and running better in the beginning.\n>>\n>>\n>>\n>> This is my idea:\n>>\n>>\n>>\n>> A JavaScript HTML page that would have some basic questions at the \n>> top:\n>>\n>> 1) How much memory do you have?\n>>\n>> 2) How many connections will be made to the database?\n>>\n>> 3) What operating system do you use?\n>>\n>> 4) Etc...\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Mon, 18 Jun 2007 16:35:11 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Mon, 18 Jun 2007, [email protected] wrote:\n\n> do any of the text-mode browsers implement javascript?\n\nhttp://links.twibright.com/\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 18 Jun 2007 20:05:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Campbell, Lance wrote:\n\n> Next the person would press a button, “generate”, found below the\n> questions. The JavaScript HTML page would then generate content for two\n> Iframes at the bottom on the page. One Iframe would contain the\n> contents of the postgresql.conf file. The postgresql.conf settings\n> would be tailored more to the individuals needs than the standard\n> default file. The second Iframe would contain the default settings one\n> should consider using with their operating system.\n> \nMan, it's not that easy. :-) Mainly because you will need some database\nactivity. For example, work_mem, checkpoint_segments, and\ncheckpoint_timeout depends on the database's dynamic.\nDatabase are not that static so another idea is to build a piece of\nsoftware that monitors the database and do the modifications based on\nsome observations (log, stats, etc). Don't forget that some of these\noptions need a restart. So maybe your tool just advise the DBA that\nhe/she could change option X to Y.\nSuch a tool was proposed later [1] but it's not up to date. :(\n\n[1] http://pgfoundry.org/projects/pgautotune/\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Mon, 18 Jun 2007 23:28:57 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Mon, Jun 18, 2007 at 04:35:11PM -0700, Steve Atkins wrote:\n> \n> On Jun 18, 2007, at 4:09 PM, [email protected] wrote:\n> \n> The tricky bits are going to be defining the problem and creating the\n> alogrithm to do the maths from input to output.\n\n\nWhy not methodically discuss the the alogrithms on pgsql-performance,\nthus improving the chance of being on target up front. Plus, us newbies\nget to see what you are thinking thus expanding our universe. 
I know I'd \nread every word.\n\nThanks for doing this, btw.\n", "msg_date": "Tue, 19 Jun 2007 09:00:19 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "It would be cool if someone started a generic configuration+benchmark\nutility that could be used with virtually any software. Something like\nthis:\n\n1. Create a configuration file parser for your specific application, be\nit PostgreSQL, MySQL, Apache, whatever.\n\n2. Create a min/max or X,Y,Z configuration option file that determines\nwhich options to try. ie:\n\nshared_buffers = 1000-20000[1000] //1000 is the increment by\nwal_buffers = 8,16,32\n...\n\n3. Create start/stop scripts for the specific application\n\n4. Create a benchmark script for the application that returns relevant\nmetrics. In PGSQL's case, it would be tied in to PG bench probably. In\nApache's case AB. This utility would of course need to know how to read\nthe metrics to determine what is \"best\".\n\n5. Run the utility. Ideally it would use some sort of genetic algorithm\nto benchmark the application initially to get base numbers, then\none-by-one apply the different configuration options and re-run the\nbenchmark. It would output the metrics for each run and once it is done,\npick the best run and let you know what those settings are.\n\nI don't think something like this would be very difficult at all to\nwrite, and it would be modular enough to work for virtually any\napplication. For a database it would take a while to run depending on\nthe benchmark script, but even that you could have a \"fast\" and \"slow\"\nbenchmark script that could be easily run when you first install\nPostgreSQL. This way too your not worrying about how much memory the\nsystem has, or how many disks they have, etc... The system will figure\nout the best possible settings for a specific benchmark. \n\nNot to mention people could easily take a SQL log of their own\napplication running, and use that as the benchmark to get \"real world\"\nnumbers. \n\nAny other sort of configuration \"suggestion\" utility will always have\nthe question of what do you recommend? How much data do you try to get\nand what can be determined from that data to get the best settings? Is\nit really going to be that much better then the default, at least enough\nbetter to warrant the work and effort put into it?\n\nOn Mon, 2007-06-18 at 10:04 -0500, Campbell, Lance wrote:\n> I am a Java Software architect, DBA, and project manager for the\n> University of Illinois, Department of Web Services. We use PostgreSQL\n> to serve about 2 million pages of dynamic content a month; everything\n> from calendars, surveys, forms, discussion boards, RSS feeds, etc. I\n> am really impressed with this tool.\n> \n> \n> \n> The only major problem area I have found where PostgreSQL is really\n> lacking is in “what should my initial configuration settings be?” I\n> realize that there are many elements that can impact a DBA’s specific\n> database settings but it would be nice to have a “configuration tool”\n> that would get someone up and running better in the beginning. 
\n> \n> \n> \n> This is my idea:\n> \n> \n> \n> A JavaScript HTML page that would have some basic questions at the\n> top:\n> \n> 1) How much memory do you have?\n> \n> 2) How many connections will be made to the database?\n> \n> 3) What operating system do you use?\n> \n> 4) Etc…\n> \n> \n> \n> Next the person would press a button, “generate”, found below the\n> questions. The JavaScript HTML page would then generate content for\n> two Iframes at the bottom on the page. One Iframe would contain the\n> contents of the postgresql.conf file. The postgresql.conf settings\n> would be tailored more to the individuals needs than the standard\n> default file. The second Iframe would contain the default settings\n> one should consider using with their operating system.\n> \n> \n> \n> My web team would be very happy to develop this for the PostgreSQL\n> project. It would have saved us a lot of time by having a\n> configuration tool in the beginning. I am willing to make this a very\n> high priority for my team.\n> \n> \n> \n> Thanks,\n> \n> \n> \n> Lance Campbell\n> \n> Project Manager/Software Architect\n> \n> Web Services at Public Affairs\n> \n> University of Illinois\n> \n> 217.333.0382\n> \n> http://webservices.uiuc.edu\n> \n> \n> \n> \n-- \nMike Benoit <[email protected]>", "msg_date": "Wed, 20 Jun 2007 00:18:39 +0000", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Wed, 20 Jun 2007, Mike Benoit wrote:\n\n> It would be cool if someone started a generic configuration+benchmark\n> utility that could be used with virtually any software.\n\nIt would be cool. It would also be impossible.\n\n> Create a benchmark script for the application that returns relevant \n> metrics. In PGSQL's case, it would be tied in to PG bench probably. In \n> Apache's case AB. This utility would of course need to know how to read \n> the metrics to determine what is \"best\".\n\nThe usual situation in these benchmarks is that you get parameters that \nadjust along a curve where there's a trade-off between, say, total \nthroughput and worse-case latency. Specifying \"best\" here would require a \nwhole specification language if you want to model how real tuning efforts \nwork. The AB case is a little simpler, but for PostgreSQL you'd want \nsomething like \"With this database and memory sizing, I want the best \nthroughput possible where maximum latency is usually <5 seconds with 1-30 \nclients running this transaction, while still maintaining at least 400 TPS \nwith up to 100 clients, and the crash recovery time can't take more than \n10 minutes\". There are all sorts of local min/max situations and \nnon-robust configurations an automated tool will put you into if you don't \nforce an exhaustive search by being very specific like this.\n\n> I don't think something like this would be very difficult at all to\n> write\n\nHere I just smile and say that proves you've never tried to write one :) \nIt's a really hard problem that gets harder the more you poke at it. \nThere's certainly lots of value to writing a utility that automatically \ntests out multiple parameter values in a batch and compares the results. \nIf you're not doing that now, you should consider scripting something up \nthat does. 
Going beyond that to having it pick the optimal parameters \nmore automatically would take AI much stronger than just a genetic \nalgorithm approach.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 20 Jun 2007 01:03:19 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Wed, 20 Jun 2007, Mike Benoit wrote:\n>> I don't think something like this would be very difficult at all to\n>> write\n\n> Here I just smile and say that proves you've never tried to write one :) \n\nI'm with Greg on this. It's not that easy to optimize in a\nmulti-parameter space even if all conditions are favorable, and they\nnever are.\n\nI think what would be much more useful in the long run is some\nserious study of the parameters themselves. For instance,\nrandom_page_cost is a self-admitted oversimplification of reality.\nWe know that good settings for it depend critically on how large\nyour DB is relative to your RAM; which means there are at least two\nparameters there, but no one's done any serious thinking about how\nto disentangle 'em.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 01:43:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool " }, { "msg_contents": "On Wed, 20 Jun 2007, Tom Lane wrote:\n\n> I think what would be much more useful in the long run is some serious \n> study of the parameters themselves. For instance, random_page_cost is a \n> self-admitted oversimplification of reality.\n\nIf I could figure out who would sponsor such a study that's what I'd be \ndoing right now. I have studies on many of the commit-related parameters \nI'll have ready in another few days, those are straightforward to map out. \nBut you know what I have never found? A good benchmark that demonstrates \nhow well complicated queries perform to run studies on things like \nrandom_page_cost against. Many of the tuning knobs on the query optimizer \nseem very opaque to me so far, and I'm not sure how to put together a \nproper test to illuminate their operation and map out their useful range.\n\nHere's an example of one of the simplest questions in this area to \ndemonstate things I wonder about. Let's say I have a properly indexed \ndatabase of some moderate size such that you're in big trouble if you do a \nsequential scan. How can I tell if effective_cache_size is in the right \nballpark so it will do what I want to effectively navigate that? People \nback into a setting for that parameter right now based on memory in their \nsystem, but I never see anybody going \"since your main table is X GB \nlarge, and its index is Y GB, you really need enough memory to set \neffective_cache_size to Z GB if you want queries/joins on that table to \nperform well\".\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 20 Jun 2007 02:45:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Wed, 20 Jun 2007, Tom Lane wrote:\n>> I think what would be much more useful in the long run is some serious \n>> study of the parameters themselves. 
For instance, random_page_cost is a \n>> self-admitted oversimplification of reality.\n\n> If I could figure out who would sponsor such a study that's what I'd be \n> doing right now.\n\nHmm ... Sun? EDB? Greenplum? [I'm afraid Red Hat is not likely to\nstep up to the plate right now, they have other priorities]\n\n> Many of the tuning knobs on the query optimizer \n> seem very opaque to me so far,\n\nAt least some of them are demonstrably broken. The issue here is to\ndevelop a mental model that is both simple enough to work with, and\nrich enough to predict real-world behavior.\n\n> Here's an example of one of the simplest questions in this area to \n> demonstate things I wonder about. Let's say I have a properly indexed \n> database of some moderate size such that you're in big trouble if you do a \n> sequential scan. How can I tell if effective_cache_size is in the right \n> ballpark so it will do what I want to effectively navigate that?\n\nAs the guy who put in effective_cache_size, I'd say it's on the broken\nside of the fence. Think about how to replace it with a more useful\nparameter, not how to determine a good value for it. \"Useful\" means\nboth \"easy to determine a value for\" and \"strong for estimating query\ncosts\", which are contradictory to some extent, but that's the problem\nto be solved --- and effective_cache_size doesn't really win on either\nmetric.\n\nTo me, the worst catch-22 we face in this area is that we'd like the\noptimizer's choices of plan to be stable and understandable, but the\nreal-world costs of queries depend enormously on short-term conditions\nsuch as how much of the table has been sucked into RAM recently by\nother queries. I have no good answer to that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 03:06:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool " }, { "msg_contents": "\n> To me, the worst catch-22 we face in this area is that we'd like the\n> optimizer's choices of plan to be stable and understandable, but the\n> real-world costs of queries depend enormously on short-term conditions\n> such as how much of the table has been sucked into RAM recently by\n> other queries. I have no good answer to that one.\n\n\tYeah, there is currently no way to tell the optimizer things like :\n\n\t- this table/portion of a table is not frequently accessed, so it won't \nbe in the cache, so please use low-seek plans (like bitmap index scan)\n\t- this table/portion of a table is used all the time so high-seek-count \nplans can be used like index scan or nested loops since everything is in \nRAM\n\n\tExcept planner hints (argh) I see no way to give this information to the \nmachine... since it's mostly in the mind of the DBA. Maybe a per-table \n\"cache temperature\" param (hot, warm, cold), but what about the log table, \nthe end of which is cached, but not the old records ? It's messy.\n\n\tStill PG does a pretty excellent job most of the time.\n", "msg_date": "Wed, 20 Jun 2007 09:49:02 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Wed, 20 Jun 2007, PFC wrote:\n\n> Except planner hints (argh) I see no way to give this information to the \n> machine... 
since it's mostly in the mind of the DBA.\n\nAnd the mind of the DBA has a funny way of being completely wrong some \ndays about what's really happening under the hood.\n\n> Maybe a per-table \"cache temperature\" param (hot, warm, cold), but what \n> about the log table, the end of which is cached, but not the old records \n> ? It's messy.\n\nOne of the things that was surprising to me when I started looking at the \norganization of the PostgreSQL buffer cache is how little gross \ninformation about its contents is available. I kept expecting to find a \nsummary section where you could answer questions like \"how much of the \ncache currently has information about index/table X?\" used as an input to \nthe optimizer. I understand that the design model expects much of this is \nunknowable due to the interaction with the OS cache, and in earlier \nversions you couldn't make shared_buffers big enough for its contents to \nbe all that interesting, so until recently this wasn't worth collecting.\n\nBut in the current era, where it's feasible to have multi-GB caches \nefficiently managed by PG and one can expect processor time is relatively \ncheap, it seems to me one way to give a major boost to the optimizer is to \nadd some overhead to buffer cache management so it collects such \ninformation. When I was trying to do a complete overhaul on the \nbackground writer, the #1 problem was that I had to assemble my own \nstatistics on what was inside the buffer cache as it was scanned, because \na direct inspection of every buffer is the only way to know things like \nwhat percentage of the cache is currently dirty.\n\nI can't figure out if I'm relieved or really worried to discover that Tom \nisn't completely sure what to do with effective_cache_size either.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 20 Jun 2007 11:21:01 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "It is amazing how many times you can read something before it actually\nsinks in.\n\nThere seems to be two possible approaches to optimizing PostgreSQL 8.2:\n\nFile caching approach:\nThis approach is based on the fact that the OS will cache the necessary\nPostgreSQL files. The key here is to set the size of\neffective_cache_size value as high as you think the OS has memory to\ncache the files. This approach would need the value of shared_buffers\nto be relatively low. Otherwise you are in a cense storing the data\ntwice. One would also have to make sure that work_mem is not too high.\nSince the files would be cached by the OS, work_mem could be relatively\nlow. This is an ideal approach if you have a dedicated server since\nthere would be no other software using memory or accessing files that\nthe OS would try to cache.\n\nMemory driven approach:\nIn this approach you want to create a large value for shared_buffers.\nYou are relying on shared_buffers to hold the most commonly accessed\ndisk blocks. The value for effective_cache_size would be relatively\nsmall since you are not relying on the OS to cache files. This seems\nlike it would be the ideal situation if you have other applications\nrunning on the box. By setting shared_buffers to a high value you are\nguaranteeing memory available to PostgreSQL (this assumes the other\napplications did not suck up to much memory to make your OS use virtual\nmemory). This also seems more like how Oracle approaches things. 
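\n\n(In postgresql.conf terms the difference is mostly one knob set two different ways -- the figures below are purely illustrative guesses for a hypothetical 4GB dedicated box, not recommendations:\n\n   shared_buffers = 256MB      # \"file caching\" style: lean on the OS cache\n   shared_buffers = 2GB        # \"memory driven\" style: lean on PG's own cache\n\nand note that effective_cache_size never allocates anything in either style; it is only an estimate the planner uses when costing index scans.)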
\n\nDo I understand the possible optimization paths correctly? The only\nquestion I have about this approach is: if I use the \"memory driven\napproach\" since effective_cache_size would be small I would assume I\nwould need to fiddle with random_page_cost since there would be know way\nfor PostgreSQL to know I have a well configured system.\n\nIf everything I said is correct then I agree \"Why have\neffective_cache_size?\" Why not just go down the approach that Oracle\nhas taken and require people to rely more on shared_buffers and the\ngeneral memory driven approach? Why rely on the disk caching of the OS?\nMemory is only getting cheaper.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Greg Smith\nSent: Wednesday, June 20, 2007 10:21 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Volunteer to build a configuration tool\n\nOn Wed, 20 Jun 2007, PFC wrote:\n\n> Except planner hints (argh) I see no way to give this information to\nthe \n> machine... since it's mostly in the mind of the DBA.\n\nAnd the mind of the DBA has a funny way of being completely wrong some \ndays about what's really happening under the hood.\n\n> Maybe a per-table \"cache temperature\" param (hot, warm, cold), but\nwhat \n> about the log table, the end of which is cached, but not the old\nrecords \n> ? It's messy.\n\nOne of the things that was surprising to me when I started looking at\nthe \norganization of the PostgreSQL buffer cache is how little gross \ninformation about its contents is available. I kept expecting to find a\n\nsummary section where you could answer questions like \"how much of the \ncache currently has information about index/table X?\" used as an input\nto \nthe optimizer. I understand that the design model expects much of this\nis \nunknowable due to the interaction with the OS cache, and in earlier \nversions you couldn't make shared_buffers big enough for its contents to\n\nbe all that interesting, so until recently this wasn't worth collecting.\n\nBut in the current era, where it's feasible to have multi-GB caches \nefficiently managed by PG and one can expect processor time is\nrelatively \ncheap, it seems to me one way to give a major boost to the optimizer is\nto \nadd some overhead to buffer cache management so it collects such \ninformation. 
When I was trying to do a complete overhaul on the \nbackground writer, the #1 problem was that I had to assemble my own \nstatistics on what was inside the buffer cache as it was scanned,\nbecause \na direct inspection of every buffer is the only way to know things like \nwhat percentage of the cache is currently dirty.\n\nI can't figure out if I'm relieved or really worried to discover that\nTom \nisn't completely sure what to do with effective_cache_size either.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n", "msg_date": "Wed, 20 Jun 2007 11:40:32 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Campbell, Lance wrote:\n> It is amazing how many times you can read something before it actually\n> sinks in.\n> \n> There seems to be two possible approaches to optimizing PostgreSQL 8.2:\n\nRight.\n\n> File caching approach:\n> This approach is based on the fact that the OS will cache the necessary\n> PostgreSQL files. The key here is to set the size of\n> effective_cache_size value as high as you think the OS has memory to\n> cache the files. This approach would need the value of shared_buffers\n> to be relatively low. Otherwise you are in a cense storing the data\n> twice. One would also have to make sure that work_mem is not too high.\n> Since the files would be cached by the OS, work_mem could be relatively\n> low. This is an ideal approach if you have a dedicated server since\n> there would be no other software using memory or accessing files that\n> the OS would try to cache.\n\nThere's no particular danger in setting work_mem too high in this \napproach. In fact, it's more important avoid a too large worm_mem \nsetting with the other approach, because if you set it too high you can \nforce the system to swap, while with the \"file caching approach\" the OS \nwill just evict some of the cached pages to make room for sorts etc.\n\n> Memory driven approach:\n> In this approach you want to create a large value for shared_buffers.\n> You are relying on shared_buffers to hold the most commonly accessed\n> disk blocks. The value for effective_cache_size would be relatively\n> small since you are not relying on the OS to cache files.\n\neffective_cache_size should be set to the estimated amount of memory \navailable for caching, *including* shared_buffers. So it should be set \nto a similar value in both approaches.\n\n> This seems\n> like it would be the ideal situation if you have other applications\n> running on the box.\n\nActually it's the opposite. If there's other applications competing for \nthe memory, it's better to let the OS manage the cache because it can \nmake decisions on which pages to keep in cache and which to evict across \nall applications.\n\n> By setting shared_buffers to a high value you are\n> guaranteeing memory available to PostgreSQL (this assumes the other\n> applications did not suck up to much memory to make your OS use virtual\n> memory). \n\nYou're guaranteeing memory available to PostgreSQL, at the cost of said \nmemory being unavailable from other applications. Or as you point out, \nin the worst case you end up swapping.\n\n> Do I understand the possible optimization paths correctly? 
The only\n> question I have about this approach is: if I use the \"memory driven\n> approach\" since effective_cache_size would be small I would assume I\n> would need to fiddle with random_page_cost since there would be know way\n> for PostgreSQL to know I have a well configured system.\n\nI don't see how effective_cache_size or the other settings affect \nrandom_page_cost. random_page_cost should mostly depend on your I/O \nhardware, though I think it's common practice to lower it when your \ndatabase is small enough to fit mostly or completely in cache on the \ngrounds that random access in memory is almost as fast as sequential access.\n\n> If everything I said is correct then I agree \"Why have\n> effective_cache_size?\" Why not just go down the approach that Oracle\n> has taken and require people to rely more on shared_buffers and the\n> general memory driven approach? Why rely on the disk caching of the OS?\n> Memory is only getting cheaper.\n\nThat has been discussed before many times, search the archives on direct \nI/O for previous flamewars on that subject. In a nutshell, we rely on \nthe OS to not only do caching for us, but I/O scheduling and readahead \nas well. That saves us a lot of code, and the OS is in a better position \nto do that as well, because it knows the I/O hardware and disk layout so \nthat it can issue the I/O requests in the most efficient way.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 20 Jun 2007 17:59:38 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Wed, 2007-06-20 at 11:21 -0400, Greg Smith wrote:\n...\n> One of the things that was surprising to me when I started looking at the \n> organization of the PostgreSQL buffer cache is how little gross \n> information about its contents is available. I kept expecting to find a \n> summary section where you could answer questions like \"how much of the \n> cache currently has information about index/table X?\" used as an input to \n> the optimizer. I understand that the design model expects much of this is \n> unknowable due to the interaction with the OS cache, and in earlier \n> versions you couldn't make shared_buffers big enough for its contents to \n> be all that interesting, so until recently this wasn't worth collecting.\n> \n> But in the current era, where it's feasible to have multi-GB caches \n> efficiently managed by PG and one can expect processor time is relatively \n> cheap, it seems to me one way to give a major boost to the optimizer is to \n> add some overhead to buffer cache management so it collects such \n> information. When I was trying to do a complete overhaul on the \n> background writer, the #1 problem was that I had to assemble my own \n> statistics on what was inside the buffer cache as it was scanned, because \n> a direct inspection of every buffer is the only way to know things like \n> what percentage of the cache is currently dirty.\n...\n\nOne problem with feeding the current state of the buffer cache to the\nplanner is that the planner may be trying to prepare a plan which will\nexecute 10,000 times. 
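(Think of a statement that a connection pool prepares once and then reuses, e.g.\n\n   PREPARE fetch_order(int) AS\n      SELECT * FROM orders WHERE order_id = $1;\n   EXECUTE fetch_order(42);   -- the stored plan is reused for every later call\n\nwhere \"orders\" and \"order_id\" are just made-up names for illustration.) 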
For many interesting queries, the state of the\ncache will be very different after the first execution, as indexes and\nactive portions of tables are brought in.\n\nFor that matter, an early stage of query execution could significantly\nchange the contents of the buffer cache as seen by a later stage of the\nexecution, even inside a single query.\n\nI'm not saying that inspecting the buffer cache more is a bad idea, but\ngathering useful information with the current planner is a bit tricky.\n\nFor purposes of idle speculation, one could envision some non-trivial\nchanges to PG which would make really slick use this data:\n\n(1) Allow PG to defer deciding whether to perform an index scan or\nsequential scan until the moment it is needed, and then ask the buffer\ncache what % of the pages from the relevant indexes/tables are currently\ncached.\n\n(2) Automatically re-plan prepared queries with some kind of frequency\n(exponential in # of executions? fixed-time?), to allow the plans to\nadjust to changes in the buffer-cache.\n\nBesides being hard to build, the problem with these approaches (or any\nother approach which takes current temporary state into account) is that\nas much as some of us might want to make use of every piece of data\navailable to make the planner into a super-brain, there are lots of\nother folks who just want plan stability. The more dynamic the system\nis, the less predictable it can be, and especially in mission-critical\nstuff, predictability matters more than . Tom said it really well in a\nrecent post, \n\n\"To me, the worst catch-22 we face in this area is that we'd like the\noptimizer's choices of plan to be stable and understandable, but the\nreal-world costs of queries depend enormously on short-term conditions\nsuch as how much of the table has been sucked into RAM recently by\nother queries. I have no good answer to that one.\"\n", "msg_date": "Wed, 20 Jun 2007 10:27:17 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Wed, 20 Jun 2007, Mark Lewis wrote:\n\n> as much as some of us might want to make use of every piece of data \n> available to make the planner into a super-brain, there are lots of \n> other folks who just want plan stability.\n\nIt's not like it has to be on for everybody. I look forward to the day \nwhen I could see this:\n\n$ cat postgresql.conf | grep brain\n# - Super-brain Query Optimizer -\nsbqo = on # Enables the super-brain\nsbqo_reconsider_interval = 5s # How often to update plans\nsbqo_brain_size = friggin_huge # Possible values are wee, not_so_wee, and\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 20 Jun 2007 16:23:44 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Wed, 20 Jun 2007, Campbell, Lance wrote:\n\n> If everything I said is correct then I agree \"Why have \n> effective_cache_size?\" Why not just go down the approach that Oracle \n> has taken and require people to rely more on shared_buffers and the \n> general memory driven approach? Why rely on the disk caching of the OS?\n\nFirst off, it may help explain the dynamics here if you know that until \nfairly recent releases, the PostgreSQL shared_buffers cache had some \nperformance issues that made it impractical to make it too large. 
It \nhasn't been that long that relying more heavily on the Postgres cache was \ntechnically feasible. I think the user community at large is still \nassimilating all the implications of that shift, and as such some of the \nterritory with making the Postgres memory really large is still being \nmapped out.\n\nThere are also still some issues left in that area. For example, the \nbigger your shared_buffers cache is, the worse the potential is for having \na checkpoint take a really long time and disrupt operations. There are OS \ntunables that can help work around that issue; similar ones for the \nPostgreSQL buffer cache won't be available until the 8.3 release.\n\nIn addition to all that, there are still several reasons to keep relying \non the OS cache:\n\n1) The OS cache memory is shared with other applications, so relying on it \nlowers the average memory footprint of PostgreSQL. The database doesn't \nhave to be a pig that constantly eats all the memory up, while still \nutilizing it when necessary.\n\n2) The OS knows a lot more about the disk layout and similar low-level \ndetails and can do optimizations a platform-independant program like \nPostgres can't assume are available.\n\n3) There are more people working on optimizing the caching algorithms in \nmodern operating systems than are coding on this project. Using that \nsophisticated cache leverages their work.\n\n\"The Oracle Way\" presumes that you've got such a massive development staff \nthat you can solve these problems better yourself than the community at \nlarge, and then support that solution on every platform. This is why they \nended up with solutions like raw partitions, where they just put their own \nfilesystem on the disk and figure out how to make that work well \neverywhere. If you look at trends in this area, at this point the \nunderlying operating systems have gotten good enough that tricks like that \nare becoming marginal. Pushing more work toward the OS is a completely \nviable design choice that strengthens every year.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 21 Jun 2007 03:14:48 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Wed, 20 Jun 2007, Campbell, Lance wrote:\n>> If everything I said is correct then I agree \"Why have \n>> effective_cache_size?\" Why not just go down the approach that Oracle \n>> has taken and require people to rely more on shared_buffers and the \n>> general memory driven approach? Why rely on the disk caching of the OS?\n\n> [ reasons why snipped ]\n\nThere's another reason for not setting shared_buffers huge, beyond the\ngood ones Greg listed: the kernel may or may not consider a large\nshared-memory segment as potentially swappable. If the kernel starts\nswapping out low-usage areas of the shared-buffer arena, you lose badly:\naccessing a supposedly \"in cache\" page takes just as long as fetching it\nfrom the disk file would've, and if a dirty page gets swapped out,\nyou'll have to swap it back in before you can write it; making a total\nof *three* I/Os expended to get it down to where it should have been,\nnot one. 
So unless you can lock the shared memory segment in RAM, it's\nbest to keep it small enough that all the buffers are heavily used.\nMarginal-use pages will be handled much more effectively in the O/S\ncache.\n\nI'd also like to re-emphasize the point about \"don't be a pig if you\ndon't have to\". It would be very bad if Postgres automatically operated\non the assumption that it should try to consume all available resources.\nPersonally, I run half a dozen postmasters (of varying vintages) on one\nnot-especially-impressive development machine. I can do this exactly\nbecause the default configuration doesn't try to eat the whole machine.\n\nTo get back to the comparison to Oracle: Oracle can assume that it's\nrunning on a dedicated machine, because their license fees are more than\nthe price of the machine anyway. We shouldn't make that assumption,\nat least not in the out-of-the-box configuration.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2007 10:29:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool " }, { "msg_contents": "Tom Lane wrote:\n> There's another reason for not setting shared_buffers huge, beyond the\n> good ones Greg listed: the kernel may or may not consider a large\n> shared-memory segment as potentially swappable. \n\nAnother is that on Windows, shared memory access is more expensive and \nvarious people have noted that the smallest value for shared_buffers you \ncan get away with can yield better performance as it leaves more free \nfor the kernel to use, more efficiently.\n\nRegards, Dave.\n\n", "msg_date": "Thu, 21 Jun 2007 16:09:07 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Thu, Jun 21, 2007 at 03:14:48AM -0400, Greg Smith wrote:\n\n> \"The Oracle Way\" presumes that you've got such a massive development staff \n> that you can solve these problems better yourself than the community at \n> large, and then support that solution on every platform. \n\nNot that Greg is suggesting otherwise, but to be fair to Oracle (and\nother large database vendors), the raw partitions approach was also a\ncompletely sensible design decision back when they made it. In the\nlate 70s and early 80s, the capabilities of various filesystems were\nwildly uneven (read the _UNIX Hater's Handbook_ on filesystems, for\ninstance, if you want an especially jaundiced view). Moreover, since\nit wasn't clear that UNIX and UNIX-like things were going to become\nthe dominant standard -- VMS was an obvious contender for a long\ntime, and for good reason -- it made sense to have a low-level\nstructure that you could rely on.\n\nOnce they had all that code and had made all those assumptions while\nrelying on it, it made no sense to replace it all. It's now mostly\nmature and robust, and it is probably a better decision to focus on\nincremental improvements to it than to rip it all out and replace it\nwith something likely to be buggy and surprising. 
The PostgreSQL\ndevelopers' practice of sighing gently every time someone comes along\ninsisting that threads are keen or that shared memory sucks relies on\nthe same, perfectly sensible premise: why throw away a working\nlow-level part of your design to get an undemonstrated benefit and\nprobably a whole lot of new bugs?\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Thu, 21 Jun 2007 11:59:14 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Greg,\nI have a PostgreSQL database that runs on a dedicated server. The\nserver has 24Gig of memory. What would be the max size I would ever\nwant to set the shared_buffers to if I where to relying on the OS for\ndisk caching approach? It seems that no matter how big your dedicated\nserver is there would be a top limit to the size of shared_buffers.\n \nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Greg Smith\nSent: Thursday, June 21, 2007 2:15 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Volunteer to build a configuration tool\n\nOn Wed, 20 Jun 2007, Campbell, Lance wrote:\n\n> If everything I said is correct then I agree \"Why have \n> effective_cache_size?\" Why not just go down the approach that Oracle \n> has taken and require people to rely more on shared_buffers and the \n> general memory driven approach? Why rely on the disk caching of the\nOS?\n\nFirst off, it may help explain the dynamics here if you know that until \nfairly recent releases, the PostgreSQL shared_buffers cache had some \nperformance issues that made it impractical to make it too large. It \nhasn't been that long that relying more heavily on the Postgres cache\nwas \ntechnically feasible. I think the user community at large is still \nassimilating all the implications of that shift, and as such some of the\n\nterritory with making the Postgres memory really large is still being \nmapped out.\n\nThere are also still some issues left in that area. For example, the \nbigger your shared_buffers cache is, the worse the potential is for\nhaving \na checkpoint take a really long time and disrupt operations. There are\nOS \ntunables that can help work around that issue; similar ones for the \nPostgreSQL buffer cache won't be available until the 8.3 release.\n\nIn addition to all that, there are still several reasons to keep relying\n\non the OS cache:\n\n1) The OS cache memory is shared with other applications, so relying on\nit \nlowers the average memory footprint of PostgreSQL. The database doesn't\n\nhave to be a pig that constantly eats all the memory up, while still \nutilizing it when necessary.\n\n2) The OS knows a lot more about the disk layout and similar low-level \ndetails and can do optimizations a platform-independant program like \nPostgres can't assume are available.\n\n3) There are more people working on optimizing the caching algorithms in\n\nmodern operating systems than are coding on this project. 
Using that \nsophisticated cache leverages their work.\n\n\"The Oracle Way\" presumes that you've got such a massive development\nstaff \nthat you can solve these problems better yourself than the community at \nlarge, and then support that solution on every platform. This is why\nthey \nended up with solutions like raw partitions, where they just put their\nown \nfilesystem on the disk and figure out how to make that work well \neverywhere. If you look at trends in this area, at this point the \nunderlying operating systems have gotten good enough that tricks like\nthat \nare becoming marginal. Pushing more work toward the OS is a completely \nviable design choice that strengthens every year.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n", "msg_date": "Thu, 21 Jun 2007 11:32:22 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Thu, 21 Jun 2007, Campbell, Lance wrote:\n\n> I have a PostgreSQL database that runs on a dedicated server. The\n> server has 24Gig of memory. What would be the max size I would ever\n> want to set the shared_buffers to if I where to relying on the OS for\n> disk caching approach? It seems that no matter how big your dedicated\n> server is there would be a top limit to the size of shared_buffers.\n\nIt's impossible to say exactly what would work optimally in this sort of \nsituation. The normal range is 25-50% of total memory, but there's no \nhard reason for that balance; for all we know your apps might work best \nwith 20GB in shared_buffers and only a relatively small 4GB left over for \nthe rest of the OS to use. Push it way up and and see what you get.\n\nThis is part of why the idea of an \"advanced\" mode for this tool is \nsuspect. Advanced tuning usually requires benchmarking with as close to \nreal application data as you can get in order to make good forward \nprogress.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 23 Jun 2007 15:28:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "On Jun 23, 2007, at 2:28 PM, Greg Smith wrote:\n> On Thu, 21 Jun 2007, Campbell, Lance wrote:\n>> I have a PostgreSQL database that runs on a dedicated server. The\n>> server has 24Gig of memory. What would be the max size I would ever\n>> want to set the shared_buffers to if I where to relying on the OS for\n>> disk caching approach? It seems that no matter how big your \n>> dedicated\n>> server is there would be a top limit to the size of shared_buffers.\n>\n> It's impossible to say exactly what would work optimally in this \n> sort of situation. The normal range is 25-50% of total memory, but \n> there's no hard reason for that balance; for all we know your apps \n> might work best with 20GB in shared_buffers and only a relatively \n> small 4GB left over for the rest of the OS to use. Push it way up \n> and and see what you get.\n>\n> This is part of why the idea of an \"advanced\" mode for this tool is \n> suspect. Advanced tuning usually requires benchmarking with as \n> close to real application data as you can get in order to make good \n> forward progress.\n\nAgreed. 
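(In practice even a crude benchmark goes a long way here -- for example, a single made-up query standing in for whatever the application really runs:\n\n   EXPLAIN ANALYZE\n   SELECT count(*) FROM orders WHERE order_date > now() - interval '7 days';\n\ntimed before and after each settings change, or wrapped in a pgbench -f script against a copy of the production data; the table and column names here are purely hypothetical.) 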
EnterpriseDB comes with a feature called \"DynaTune\" that \nlooks at things like server memory and sets a best-guess at a bunch \nof parameters. Truth is, it works fine for 90% of cases, because \nthere's just a lot of installations where tuning postgresql.conf \nisn't that critical.\n\nThe real issue is that the \"stock\" postgresql.conf is just horrible. \nIt was originally tuned for something like a 486, but even the recent \nchanges have only brought it up to the \"pentium era\" (case in point: \n24MB of shared buffers equates to a machine with 128MB of memory, \ngive or take). Given that, I think an 80% solution would be to just \npost small/medium/large postgresql.conf files somewhere.\n\nI also agree 100% with Tom that the cost estimators need serious \nwork. One simple example: nothing in the planner looks at what \npercent of a relation is actually in shared_buffers. If it did that, \nit would probably be reasonable to extrapolate that percentage into \nhow much is sitting in kernel cache, which would likely be miles \nahead of what's currently done.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Mon, 25 Jun 2007 19:19:01 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "\"Jim Nasby\" <[email protected]> writes:\n\n> The real issue is that the \"stock\" postgresql.conf is just horrible. It was\n> originally tuned for something like a 486, but even the recent changes have\n> only brought it up to the \"pentium era\" (case in point: 24MB of shared buffers\n> equates to a machine with 128MB of memory, give or take). \n\nI think it's more that the stock configure has to assume it's not a dedicated\nbox. Picture someone installing Postgres on their debian box because it's\nrequired for Gnucash. Even having 24M suddenly disappear from the box is quite\na bit.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 26 Jun 2007 09:01:59 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" }, { "msg_contents": "Lance,\n\n> I have a PostgreSQL database that runs on a dedicated server. The\n> server has 24Gig of memory. What would be the max size I would ever\n> want to set the shared_buffers to if I where to relying on the OS for\n> disk caching approach? It seems that no matter how big your dedicated\n> server is there would be a top limit to the size of shared_buffers.\n\nThere's not, actually. Under some circumstances (mainly Solaris 10 + UFS \non AMD) it can actually be beneficial to have s_b be 80% of RAM and bypass \nthe FS cache entirely. This isn't usually the case, but it's not to be \nruled out.\n\nIf you're relying on the FS cache and not using direct I/O, though, you \nwant to keep at least 50% of memory free for use by the cache. At below \n50%, you lose a significant part of the benefit of the cache without \nlosing the cost of it. Of course, that assumes that your database is \nbigger than ram; there isn't much need to have either s_b or the f.s.c. be \nmore than twice the size of your whole database.\n\nIn general, a setting s_b to 25% of RAM on a dedicated machine, and 10% \n(with a max of 512MB) on a shared machine, is a nice safe default which \nwill do OK for most applications. 
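\n\n(Applied to the 24G dedicated box mentioned upthread, that rule of thumb works out to something like the following -- an illustrative starting point to validate against your own workload, not a tuned result:\n\n   shared_buffers = 6GB            # ~25% of RAM, dedicated server\n   effective_cache_size = 16GB     # rough guess at total cache available, incl. shared_buffers\n\nwhile the shared-machine rule above would cap shared_buffers at 512MB.)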
\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 27 Jun 2007 16:19:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Volunteer to build a configuration tool" } ]
[ { "msg_contents": "Hello from Paris\nI am DBA for Oracle and beginner on Postgres. For an company in France, I\nmust make a comparative study, between Postgres and Oracle. Can you send any\nuseful document which can help me.\nScalability ? Performance? Benchmark ? Availability ? Architecture ?\nLimitation : users, volumes ? Resouces needed ? Support ?\nRegards\n\ncordialement\ndavid tokmatchi\n+33 6 80 89 54 74\n\n\nHello from Paris\nI am DBA for Oracle and beginner on Postgres. For an company in France, I must make a comparative study, between Postgres and Oracle. Can you send any useful document which can help me.\n\nScalability ? Performance? Benchmark ? Availability ? Architecture ? Limitation : users, volumes ? Resouces needed ? Support ?\n\nRegards cordialementdavid tokmatchi\n+33 6 80 89 54 74", "msg_date": "Mon, 18 Jun 2007 17:55:00 +0200", "msg_from": "\"David Tokmatchi\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, David Tokmatchi <[email protected]> wrote:\n> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n> Limitation : users, volumes ? Resouces needed ? Support ?\n\nAside from the Wikipedia database comparison, I'm not aware of any\ndirect PostgreSQL-to-Oracle comparison.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 12:10:30 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "This document:\n \nhttp://www-css.fnal.gov/dsg/external/freeware/mysql-vs-pgsql.html\n \ncould answer some of your questions.\n \nIgor\n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of David Tokmatchi\nSent: Monday, June 18, 2007 11:55 AM\nTo: [email protected]; [email protected];\[email protected]; [email protected];\[email protected]\nSubject: [ADMIN] Postgres VS Oracle\n\n\nHello from Paris\nI am DBA for Oracle and beginner on Postgres. For an company in France,\nI must make a comparative study, between Postgres and Oracle. Can you\nsend any useful document which can help me. \nScalability ? Performance? Benchmark ? Availability ? Architecture ?\nLimitation : users, volumes ? Resouces needed ? Support ? \nRegards \n\ncordialement\ndavid tokmatchi \n+33 6 80 89 54 74 \n\n\n\n\n\nThis document:\n \nhttp://www-css.fnal.gov/dsg/external/freeware/mysql-vs-pgsql.html\n \ncould answer some of your questions.\n \nIgor\n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of David \nTokmatchiSent: Monday, June 18, 2007 11:55 AMTo: \[email protected]; [email protected]; \[email protected]; [email protected]; \[email protected]: [ADMIN] Postgres VS \nOracle\n\n\nHello \nfrom Paris\nI \nam DBA for Oracle and beginner on \nPostgres. For an company in France, I must make a comparative study, \nbetween Postgres and Oracle. Can you send any useful document which \ncan help me. \nScalability \n? Performance? Benchmark ? Availability ? Architecture ? Limitation : users, \nvolumes ? Resouces needed ? Support ? 
\nRegards \ncordialementdavid tokmatchi +33 6 80 89 54 \n74", "msg_date": "Mon, 18 Jun 2007 12:25:58 -0400", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nIt's even harder, as Oracle disallows publishing benchmark figures in\ntheir license. As a cynic, I might ask, what Oracle is fearing?\n\nAndreas\n\nJonah H. Harris wrote:\n> On 6/18/07, David Tokmatchi <[email protected]> wrote:\n>> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n>> Limitation : users, volumes ? Resouces needed ? Support ?\n> \n> Aside from the Wikipedia database comparison, I'm not aware of any\n> direct PostgreSQL-to-Oracle comparison.\n> \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGdrfHHJdudm4KnO0RAqKQAJ96t7WkLG/VbqkWTW60g6QC5eU4HgCfShNd\no3+YPVnPJ2nwXcpi4ow28nw=\n=1CwN\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 18 Jun 2007 18:50:15 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n> As a cynic, I might ask, what Oracle is fearing?\n\nAs a realist, I might ask, how many times do we have to answer this\ntype of anti-commercial-database flamewar-starting question?\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 13:02:39 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "David,\n\nFirst of all, it's considered very rude to cross-post to 5 different mailing \nlists. pgsql-advocacy is the right list for this question; please don't post \nto more than one list at a time in the future.\n\n> I am DBA for Oracle and beginner on Postgres. For an company in France, I\n> must make a comparative study, between Postgres and Oracle. Can you send\n> any useful document which can help me.\n> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n> Limitation : users, volumes ? Resouces needed ? Support ?\n> Regards\n\nYou may not be aware, but we have a large French PostgreSQL community:\nwww.postgresqlfr.org\n\nI know that Jean-Paul and Dimitri have experience in porting applications, so \nyou should probably contact them to get local help & information on comparing \nthe two DBMSes.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 18 Jun 2007 10:08:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>> As a cynic, I might ask, what Oracle is fearing?\n> \n> As a realist, I might ask, how many times do we have to answer this\n> type of anti-commercial-database flamewar-starting question?\n\nDepends? How many times are you going to antagonize the people that ask?\n\n1. It has *nothing* to do with anti-commercial. It is anti-proprietary \nwhich is perfectly legitimate.\n\n2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a \ndatabase like PostgreSQL. 
We can compete in 90-95% of cases where people \nwould traditionally purchase a proprietary system for many, many \nthousands (if not hundreds of thousands) of dollars.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 18 Jun 2007 10:17:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "\n> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a \n> database like PostgreSQL. We can compete in 90-95% of cases where people \n> would traditionally purchase a proprietary system for many, many \n> thousands (if not hundreds of thousands) of dollars.\n\n\tOracle also fears benchmarks made by people who don't know how to tune \nOracle properly...\n", "msg_date": "Mon, 18 Jun 2007 19:27:00 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "PFC wrote:\n> \n>> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a \n>> database like PostgreSQL. We can compete in 90-95% of cases where \n>> people would traditionally purchase a proprietary system for many, \n>> many thousands (if not hundreds of thousands) of dollars.\n> \n> Oracle also fears benchmarks made by people who don't know how to \n> tune Oracle properly...\n\nYes that is one argument that is made (and a valid one) but it is \nassuredly not the only one that can be made, that would be legitimate.\n\nJoshua D. Drake\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 18 Jun 2007 10:32:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, Joshua D. Drake <[email protected]> wrote:\n> Depends? How many times are you going to antagonize the people that ask?\n\nAs many times as necessary. Funny how the anti-proprietary-database\narguments can continue forever and no one brings up the traditional\nRTFM-like response of, \"hey, this was already discussed in thread XXX,\nread that before posting again.\"\n\n> 1. It has *nothing* to do with anti-commercial. It is anti-proprietary\n> which is perfectly legitimate.\n\nAs long as closed-mindedness is legitimate, sure.\n\n> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a\n> database like PostgreSQL. 
We can compete in 90-95% of cases where people\n> would traditionally purchase a proprietary system for many, many\n> thousands (if not hundreds of thousands) of dollars.\n\nThey may well have a lot to fear, but that doesn't mean they do;\nanything statement in that area is pure assumption.\n\nI'm in no way saying we can't compete, I'm just saying that the\ncontinued closed-mindedness and inside-the-box thinking only serves to\nperpetuate malcontent toward the proprietary vendors by turning\npersonal experiences into sacred-mailing-list gospel.\n\nAll of us have noticed the anti-MySQL bashing based on problems with\nMySQL 3.23... Berkus and others (including yourself, if I am correct),\nhave corrected people on not making invalid comparisons against\nancient versions. I'm only doing the same where Oracle, IBM, and\nMicrosoft are concerned.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 13:38:44 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nJonah H. Harris wrote:\n> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>> As a cynic, I might ask, what Oracle is fearing?\n> \n> As a realist, I might ask, how many times do we have to answer this\n> type of anti-commercial-database flamewar-starting question?\n> \n\nWell, my experience when working with certain DBs is much like I had\nsome years ago, when I was forced to work with different SCO Unix legacy\nboxes. \"Why do I have to put up with this silliness?\", and with\ndatabases there is no way to get a sensible tool set by \"shopping\naround\" and installing GNU packages en masse :(\n\nFurthermore not being allowed to talk about performance is a real hard\nmisfeature, like DRM. Consider:\n\n1.) Performance is certainly an important aspect of my work as a DBA.\n2.) Gaining experience as a DBA is not trivial, it's clearly a\ndiscipline that cannot be learned from a book, you need experience. As a\ndeveloper I can gain experience on my own. As a DBA, I need some nice\nhardware and databases that are big enough to be nontrivial.\n3.) The above points make it vital to be able to discuss my experiences.\n4.) Oracle's license NDA makes exchanging experience harder.\n\nSo as an endeffect, the limited number of playing grounds (#2 above)\nkeeps hourly rates for DBAs high. Oracle's NDA limits secondary\nknowledge effects, so in effect it keeps the price for Oracle knowhow\npotentially even higher.\n\nOr put bluntly, the NDA mindset benefits completly and only Oracle, and\nis a clear drawback for customers. It makes Oracle-supplied consultants\n\"gods\", no matter how much hot air they produce. They've got the benefit\nof having internal peer knowledge, and as consumer there is not much\nthat I can do counter it. 
I'm not even allowed to document externally\nthe pitfalls and experiences I've made, so the next poor sob will walk\non the same landmine.\n\nAndreas\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGdsT5HJdudm4KnO0RAoASAJ9b229Uhsuxn9qGfU5I0QUfTC/dqQCfZK/b\n65XQFcc0aRBVptxW5uzLejY=\n=UIF6\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 18 Jun 2007 19:46:33 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nPFC wrote:\n> \n>> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a\n>> database like PostgreSQL. We can compete in 90-95% of cases where\n>> people would traditionally purchase a proprietary system for many,\n>> many thousands (if not hundreds of thousands) of dollars.\n> \n> Oracle also fears benchmarks made by people who don't know how to\n> tune Oracle properly...\n\nWell, bad results are as interesting as good results. And this problems\napplies to all other databases.\n\nAndreas\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGdsXdHJdudm4KnO0RArTkAKCZs6ht4z0lb2zHtr5MfXj8CsTZdQCgmwE5\nJAD6Hkul1iIML42GO1vAM0c=\n=FMRt\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 18 Jun 2007 19:50:22 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Joshua D. Drake wrote:\n> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a \n> database like PostgreSQL. We can compete in 90-95% of cases where people \n> would traditionally purchase a proprietary system for many, many \n> thousands (if not hundreds of thousands) of dollars.\n\nWell, I'm sure that is part of it, perhaps the major part. But part of \nalso is likely to be avoiding every shlub with a computer doing some \noff-the-wall comparison showing X to be 1000 times \"better\" than Oracle, \nSQL Server or DB2; then the corresponding vendor has to spend endless \ntime and money refuting all these half-baked comparisons.\n\n-- \nGuy Rouillier\n", "msg_date": "Mon, 18 Jun 2007 13:50:46 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 6/18/07, Joshua D. Drake <[email protected]> wrote:\n>> Depends? How many times are you going to antagonize the people that ask?\n> \n> As many times as necessary. Funny how the anti-proprietary-database\n> arguments can continue forever and no one brings up the traditional\n> RTFM-like response of, \"hey, this was already discussed in thread XXX,\n> read that before posting again.\"\n\nYeah funny how you didn't do that ;) (of course neither did I).\n\n> \n>> 1. It has *nothing* to do with anti-commercial. It is anti-proprietary\n>> which is perfectly legitimate.\n> \n> As long as closed-mindedness is legitimate, sure.\n\nIt isn't closed minded to consider anti-proprietary a bad thing. It is \nan opinion and a valid one. One that many have made part of their lives \nin a very pro-commercial and profitable manner.\n\n> \n>> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a\n>> database like PostgreSQL. 
We can compete in 90-95% of cases where people\n>> would traditionally purchase a proprietary system for many, many\n>> thousands (if not hundreds of thousands) of dollars.\n> \n> They may well have a lot to fear, but that doesn't mean they do;\n> anything statement in that area is pure assumption.\n\n95% of life is assumption. Some of it based on experience, some of it \nbased on pure conjecture, some based on all kinds of other things.\n\n> \n> I'm in no way saying we can't compete, I'm just saying that the\n> continued closed-mindedness and inside-the-box thinking only serves to\n> perpetuate malcontent toward the proprietary vendors by turning\n> personal experiences into sacred-mailing-list gospel.\n\nIt is amazing how completely misguided you are in this response. I \nhaven't said anything closed minded. I only responded to your rather \nantagonistic response to a reasonably innocuous question of: \"As a \ncynic, I might ask, what Oracle is fearing? \"\n\nIt is a good question to ask, and a good question to discuss.\n\n> \n> All of us have noticed the anti-MySQL bashing based on problems with\n> MySQL 3.23... Berkus and others (including yourself, if I am correct),\n> have corrected people on not making invalid comparisons against\n> ancient versions. I'm only doing the same where Oracle, IBM, and\n> Microsoft are concerned.\n\nI haven't seen any bashing going on yet. Shall we start with the closed \nmindedness and unfairness of per cpu license and support models?\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 18 Jun 2007 10:51:11 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, Joshua D. Drake <[email protected]> wrote:\n> Yeah funny how you didn't do that ;) (of course neither did I).\n\nI agree, an oops on my part :)\n\n> It is amazing how completely misguided you are in this response. I\n> haven't said anything closed minded. I only responded to your rather\n> antagonistic response to a reasonably innocuous question of: \"As a\n> cynic, I might ask, what Oracle is fearing? \"\n\nI wasn't responding to you, just to the seemingly closed-mindedness of\nthe original question/statement. We're all aware of the reasons, for\nand against, proprietary system licenses prohibiting benchmarking.\n\n> It is a good question to ask, and a good question to discuss.\n\nCertainly, but can one expect to get a realistic answer to an, \"is\nOracle fearing something\" question on he PostgreSQL list? Or was it\njust a backhanded attempt at pushing the topic again? My vote is for\nthe latter; it served no purpose other than to push the\ncompetitiveness topic again.\n\n> I haven't seen any bashing going on yet. Shall we start with the closed\n> mindedness and unfairness of per cpu license and support models?\n\nNot preferably, you make me type too much :)\n\n-- \nJonah H. 
Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 13:57:06 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nJonah H. Harris wrote:\n> \n> All of us have noticed the anti-MySQL bashing based on problems with\n> MySQL 3.23... Berkus and others (including yourself, if I am correct),\n> have corrected people on not making invalid comparisons against\n> ancient versions. I'm only doing the same where Oracle, IBM, and\n> Microsoft are concerned.\n> \n\nMy, my, I fear my asbestos are trying to feel warm inside ;)\n\nWell, there is not much MySQL bashing going around. And MySQL 5 has\nenough \"features\" and current MySQL AB support for it is so \"good\", that\nthere is no need to bash MySQL based on V3 problems. MySQL5 is still a\njoke, and one can quite safely predict the answers to tickets, with well\nover 50% guess rate.\n\n(Hint: I don't consider the answer: \"Redo your schema\" to be a\nsatisfactory answer. And philosophically, the query optimizer in MySQL\nis near perfect. OTOH, considering the fact that many operations in\nMySQL still have just one way to execute, it's easy to choose the\nfastest plan, isn't it *g*)\n\nAndreas\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGdsgCHJdudm4KnO0RAg2oAKCdabTyQCcK8eC0+ErVJLlX59nNjgCfQjaO\nhhfSxBoESyCU/mTQo3gbQRM=\n=RqB7\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 18 Jun 2007 19:59:30 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "(cut down the reply-tos)\n\nJoshua D. Drake wrote:\n\n> PFC wrote:\n>\n>>\n>>> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a \n>>> database like PostgreSQL. We can compete in 90-95% of cases where \n>>> people would traditionally purchase a proprietary system for many, \n>>> many thousands (if not hundreds of thousands) of dollars.\n>>\n>>\n>> Oracle also fears benchmarks made by people who don't know how to \n>> tune Oracle properly...\n>\n>\n> Yes that is one argument that is made (and a valid one) but it is \n> assuredly not the only one that can be made, that would be legitimate.\n>\n\nGiven how many bogus MySQL vr.s Postgresql benchmarks I've seen, where \nPostgres is running untuned \"out of the box\", it's a sufficient reasons, \nIMHO.\n\nBrian\n\n", "msg_date": "Mon, 18 Jun 2007 14:01:55 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "All,\n\nOn Mon, Jun 18, 2007 at 07:50:22PM +0200, Andreas Kostyrka wrote:\n\n[something]\n\nIt would appear that this was the flame-fest that was predicted. \nParticularly as this has been copied to five lists. 
If you all want\nto have an argument about what Oracle should or should not do, could\nyou at least limit it to one list?\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nEverything that happens in the world happens at some place.\n\t\t--Jane Jacobs \n", "msg_date": "Mon, 18 Jun 2007 14:09:28 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, Andrew Sullivan <[email protected]> wrote:\n> It would appear that this was the flame-fest that was predicted.\n> Particularly as this has been copied to five lists. If you all want\n> to have an argument about what Oracle should or should not do, could\n> you at least limit it to one list?\n\nYeah, Josh B. asked it to be toned down to the original list which\nshould've been involved. Which I think should be pgsql-admin or\npgsql-advocacy... your thoughts?\n\nI think the Oracle discussion is over, David T. just needs URL references IMHO.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 14:16:56 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nJonah H. Harris wrote:\n> Certainly, but can one expect to get a realistic answer to an, \"is\n> Oracle fearing something\" question on he PostgreSQL list? Or was it\n> just a backhanded attempt at pushing the topic again? My vote is for\n> the latter; it served no purpose other than to push the\n> competitiveness topic again.\n\nWell, I'm a cynic at heart, really. So there was no bad intend behind it.\n\nAnd it was a nice comment, because I would base it on my personal\nexperiences with certain vendors, it wouldn't be near as nice.\n\nThe original question was about comparisons between PG and Oracle.\n\nNow, I could answer this question from my personal experiences with the\nproduct and support. That would be way more stronger worded than my\nsmall cynic question.\n\nAnother thing, Joshua posted a guesstimate that PG can compete in 90-95%\ncases with Oracle. Because Oracle insists on secrecy, I'm somehow\ninclined to believe the side that talks openly. And while I don't like\nto question Joshua's comment, I think he overlooked one set of problems,\n namely the cases where Oracle is not able to compete with PG. It's hard\nto quantify how many of these cases there are performance-wise, well,\nbecause Oracle insists on that silly NDA, but there are clearly cases\nwhere PG is superior.\n\nAndreas\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGds8WHJdudm4KnO0RAvb0AJ4gBec4yikrAOvDi5C3kc5NLGYteACghewU\nPkfrnXgCRfZlEdeMA2DZGTE=\n=BpUw\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 18 Jun 2007 20:29:42 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On Mon, Jun 18, 2007 at 02:16:56PM -0400, Jonah H. Harris wrote:\n> pgsql-advocacy... your thoughts?\n\nI've picked -advocacy.\n\n> \n> I think the Oracle discussion is over, David T. 
just needs URL references \n> IMHO.\n\nI don't think we can speak about Oracle; if we were licenced, we'd be\nviolating it, and since we're not, we can't possibly know about it,\nright ;-) But there are some materials about why to use Postgres on\nthe website:\n\nhttp://www.postgresql.org/about/advantages\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Mon, 18 Jun 2007 14:38:32 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On Mon, Jun 18, 2007 at 02:38:32PM -0400, Andrew Sullivan wrote:\n> I've picked -advocacy.\n\nActually, I _had_ picked advocacy, but had an itchy trigger finger. \nApologies, all.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n", "msg_date": "Mon, 18 Jun 2007 14:40:06 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "> > Depends? How many times are you going to antagonize the people that ask?\n> As many times as necessary. Funny how the anti-proprietary-database\n> arguments can continue forever and no one brings up the traditional\n> RTFM-like response of, \"hey, this was already discussed in thread XXX,\n> read that before posting again.\"\n\nHey! I was about to! :)\n\nAs an Informix/DB2 admin I can tell you that those forums/lists get\npounded with the same kind of crap. My take: It is a bad policy, so\nhound the vendor, and leave the rest of us alone. Convincing or not\nconvincing me isn't going to move the cause.\n\nAnd now the rule of not cross-posting has been broken... commence the\ndownward spiral!\n\n> > 1. It has *nothing* to do with anti-commercial. It is anti-proprietary\n> > which is perfectly legitimate.\n> As long as closed-mindedness is legitimate, sure.\n> > 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a\n> > database like PostgreSQL. We can compete in 90-95% of cases where people\n> > would traditionally purchase a proprietary system for many, many\n> > thousands (if not hundreds of thousands) of dollars.\n> They may well have a lot to fear, but that doesn't mean they do;\n> anything statement in that area is pure assumption.\n\nYep, and the 90-95% number is straight out-of-the-air. And I believe\nthat exactly 17 angels can dance on the head of a pin.\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Mon, 18 Jun 2007 14:54:10 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On Jun 18, 10:55 am, [email protected] (\"David Tokmatchi\")\nwrote:\n> Hello from Paris\n> I am DBA for Oracle and beginner on Postgres. For an company in France, I\n> must make a comparative study, between Postgres and Oracle. Can you send any\n> useful document which can help me.\n> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n> Limitation : users, volumes ? Resouces needed ? 
Support ?\n> Regards\n>\n> cordialement\n> david tokmatchi\n> +33 6 80 89 54 74\n\nThis is good to know:\n\n\"Comparison of different SQL implementations\"\nhttp://troels.arvin.dk/db/rdbms/\n\n", "msg_date": "Mon, 18 Jun 2007 21:17:59 -0000", "msg_from": "=?iso-8859-1?q?Rodrigo_De_Le=F3n?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "PFC wrote:\n>\n>> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of a\n>> database like PostgreSQL. We can compete in 90-95% of cases where\n>> people would traditionally purchase a proprietary system for many,\n>> many thousands (if not hundreds of thousands) of dollars.\n>\n> Oracle also fears benchmarks made by people who don't know how to\n> tune Oracle properly...\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\nThen Oracle fears its users, which explains the \"no-benchmark\" policy,\nwhich I cannot see any PR honk being able to spin that in any way that\ndoesn't make Oracle look like it's hiding its head in the sand.\n\n\n-- \nThe NCP Revue -- http://www.ncprevue.com/blog\n\n", "msg_date": "Mon, 18 Jun 2007 21:02:07 -0600", "msg_from": "John Meyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "On 6/18/07, John Meyer <[email protected]> wrote:\n> Then Oracle fears its users, which explains the \"no-benchmark\" policy,\n> which I cannot see any PR honk being able to spin that in any way that\n> doesn't make Oracle look like it's hiding its head in the sand.\n\nThe humor I see in constant closed-minded presumptions is slowly fading.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 18 Jun 2007 23:41:09 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "I am new to Postgresql Database. My setup is backend is postgresql\ndatabase, frontend is Java(JDBC). I installed the postgres in windows\nplatform. Now I want to setup server and client configuration. Kindly\nguide me how to set the configuration parameters, in server and client\nmachines. Waiting for your fav reply.\n\nThanks & Regards\nJayakumar M\n\n\n\nDISCLAIMER:\nThis email (including any attachments) is intended for the sole use of the intended recipient/s and may contain material that is CONFIDENTIAL AND PRIVATE COMPANY INFORMATION. Any review or reliance by others or copying or distribution or forwarding of any or all of the contents in this message is STRICTLY PROHIBITED. If you are not the intended recipient, please contact the sender by email and delete all copies; your cooperation in this regard is appreciated.\n", "msg_date": "Tue, 19 Jun 2007 09:27:08 +0530", "msg_from": "\"Jayakumar_Mukundaraju\" <[email protected]>", "msg_from_op": false, "msg_subject": "Server and Client configuration." }, { "msg_contents": "Jonah H. 
Harris wrote:\n> On 6/18/07, John Meyer <[email protected]> wrote:\n>> Then Oracle fears its users, which explains the \"no-benchmark\" policy,\n>> which I cannot see any PR honk being able to spin that in any way that\n>> doesn't make Oracle look like it's hiding its head in the sand.\n>\n> The humor I see in constant closed-minded presumptions is slowly fading.\n>\n\nThis isn't even a straight out slam at Oracle. My degree comes in mass\ncommunications, and I cannot understand how anybody could shake the\nperception that Oracle is afraid of open, independent investigations of\nits programs.\nLet's take this at the most beneficial angle that we can, and that is\nthat Oracle has seen too many people run their programs straight into\nthe ground with some rather lousy benchmarking. If that's the case,\nthen the solution is not to bar each and every bench mark out there, it\nis to publish the methodologies to properly tune an Oracle installation.\nIt is not to attempt to strangle conversation.\n\n\n-- \nThe NCP Revue -- http://www.ncprevue.com/blog\n\n", "msg_date": "Mon, 18 Jun 2007 22:05:34 -0600", "msg_from": "John Meyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "Jayakumar_Mukundaraju wrote:\n> \n> I am new to Postgresql Database. My setup is backend is postgresql\n> database, frontend is Java(JDBC). I installed the postgres in windows\n> platform. Now I want to setup server and client configuration. Kindly\n> guide me how to set the configuration parameters, in server and client\n> machines. Waiting for your fav reply.\n\nThese should contain all you need:\nhttp://www.postgresql.org/docs/current/static/index.html\nhttp://jdbc.postgresql.org/documentation/82/index.html\nhttp://jdbc.postgresql.org/development/privateapi/index.html\n\nYours,\nLaurenz Albe\n", "msg_date": "Tue, 19 Jun 2007 10:09:56 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Server and Client configuration." }, { "msg_contents": "On Mon, 2007-06-18 at 13:02 -0400, Jonah H. Harris wrote:\n> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n> > As a cynic, I might ask, what Oracle is fearing?\n> \n> As a realist, I might ask, how many times do we have to answer this\n> type of anti-commercial-database flamewar-starting question?\n> \n\nAs a nudist, I think I have to answer, \"About every 9 weeks, it would\nseem\".\n\nAndy\n", "msg_date": "Tue, 19 Jun 2007 10:23:27 +0200", "msg_from": "Andrew Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "I don't want to add gas to the flamewar, but I gotta ask. What is in \nthe the 90 to 95% referred to in this email.\n\nCarol\nOn Jun 18, 2007, at 1:17 PM, Joshua D. Drake wrote:\n\n> Jonah H. Harris wrote:\n>> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>>> As a cynic, I might ask, what Oracle is fearing?\n>> As a realist, I might ask, how many times do we have to answer this\n>> type of anti-commercial-database flamewar-starting question?\n>\n> Depends? How many times are you going to antagonize the people that \n> ask?\n>\n> 1. It has *nothing* to do with anti-commercial. It is anti- \n> proprietary which is perfectly legitimate.\n>\n> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of \n> a database like PostgreSQL. 
We can compete in 90-95% of cases where \n> people would traditionally purchase a proprietary system for many, \n> many thousands (if not hundreds of thousands) of dollars.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n> -- \n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 19 Jun 2007 08:39:41 -0400", "msg_from": "Carol Walter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Στις Τρίτη 19 Ιούνιος 2007 15:39, ο/η Carol Walter έγραψε:\n> I don't want to add gas to the flamewar, but I gotta ask. What is in\n> the the 90 to 95% referred to in this email.\n\nshort answer: all cases, possibly except when running a Bank or something \nsimilar.\n\n>\n> Carol\n>\n> On Jun 18, 2007, at 1:17 PM, Joshua D. Drake wrote:\n> > Jonah H. Harris wrote:\n> >> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n> >>> As a cynic, I might ask, what Oracle is fearing?\n> >>\n> >> As a realist, I might ask, how many times do we have to answer this\n> >> type of anti-commercial-database flamewar-starting question?\n> >\n> > Depends? How many times are you going to antagonize the people that\n> > ask?\n> >\n> > 1. It has *nothing* to do with anti-commercial. It is anti-\n> > proprietary which is perfectly legitimate.\n> >\n> > 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of\n> > a database like PostgreSQL. We can compete in 90-95% of cases where\n> > people would traditionally purchase a proprietary system for many,\n> > many thousands (if not hundreds of thousands) of dollars.\n> >\n> > Sincerely,\n> >\n> > Joshua D. Drake\n> >\n> > --\n> >\n> > === The PostgreSQL Company: Command Prompt, Inc. ===\n> > Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> > Providing the most comprehensive PostgreSQL solutions since 1997\n> > http://www.commandprompt.com/\n> >\n> > Donate to the PostgreSQL Project: http://www.postgresql.org/about/\n> > donate\n> > PostgreSQL Replication: http://www.commandprompt.com/products/\n> >\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> > your\n> > message can get through to the mailing list cleanly\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n-- \nAchilleas Mantzios\n", "msg_date": "Tue, 19 Jun 2007 15:48:11 +0300", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Andrew Kelly wrote:\n> On Mon, 2007-06-18 at 13:02 -0400, Jonah H. 
Harris wrote:\n>> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>>> As a cynic, I might ask, what Oracle is fearing?\n>> As a realist, I might ask, how many times do we have to answer this\n>> type of anti-commercial-database flamewar-starting question?\n>>\n> \n> As a nudist, I think I have to answer, \"About every 9 weeks, it would\n> seem\".\n\nJeese! You could have forwarned us to shut our eyes!\n\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Tue, 19 Jun 2007 08:50:38 -0400", "msg_from": "Geoffrey Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Andrew Kelly wrote:\n> On Mon, 2007-06-18 at 13:02 -0400, Jonah H. Harris wrote:\n>> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>>> As a cynic, I might ask, what Oracle is fearing?\n>> As a realist, I might ask, how many times do we have to answer this\n>> type of anti-commercial-database flamewar-starting question?\n>>\n> \n> As a nudist, I think I have to answer, \"About every 9 weeks, it would\n> seem\".\n\nJeese! You could have warned us to shield our eyes!\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Tue, 19 Jun 2007 08:52:04 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Can we please trim this down to just advocacy?\n\nOn Jun 18, 2007, at 1:17 PM, Joshua D. Drake wrote:\n\n> Jonah H. Harris wrote:\n>> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>>> As a cynic, I might ask, what Oracle is fearing?\n>> As a realist, I might ask, how many times do we have to answer this\n>> type of anti-commercial-database flamewar-starting question?\n>\n> Depends? How many times are you going to antagonize the people that \n> ask?\n>\n> 1. It has *nothing* to do with anti-commercial. It is anti- \n> proprietary which is perfectly legitimate.\n>\n> 2. Oracle, Microsoft, and IBM have a \"lot\" to fear in the sense of \n> a database like PostgreSQL. We can compete in 90-95% of cases where \n> people would traditionally purchase a proprietary system for many, \n> many thousands (if not hundreds of thousands) of dollars.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n> -- \n>\n> === The PostgreSQL Company: Command Prompt, Inc. 
===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Tue, 19 Jun 2007 09:15:22 -0400", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On 6/19/07, Jim Nasby <[email protected]> wrote:\n> Can we please trim this down to just advocacy?\n\nCould you please verify that we hadn't before replying to almost\n24-hour old mail?\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Tue, 19 Jun 2007 09:18:04 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "Josh Berkus wrote:\n> David,\n> \n> First of all, it's considered very rude to cross-post to 5 different mailing \n> lists. pgsql-advocacy is the right list for this question; please don't post \n> to more than one list at a time in the future.\n> \n>> I am DBA for Oracle and beginner on Postgres. For an company in France, I\n>> must make a comparative study, between Postgres and Oracle. Can you send\n>> any useful document which can help me.\n>> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n>> Limitation : users, volumes ? Resouces needed ? Support ?\n>> Regards\n> \n> You may not be aware, but we have a large French PostgreSQL community:\n> www.postgresqlfr.org\n> \n> I know that Jean-Paul and Dimitri have experience in porting applications, so \n> you should probably contact them to get local help & information on comparing \n> the two DBMSes.\n\nI'm not French but I've written a few web apps that used both PostgreSQL and \nOracle, among others, as back ends. That is, the same app was deployed to \nboth RDBMSes. We had very small data sets, so I cannot speak authoritatively \nabout high-end performance or scalability. My main concerns were SQL \ncompatibility and completeness, ease of development and ease of database \nadministration.\n\nOracle and PostgreSQL came out about even on SQL compatibility and \ncompleteness. I do not know where either has an advantage. Moving DDL \nbetween the two was a matter of knowing that PostgreSQL calls CLOB \"TEXT\" and \nBLOB \"BYTEA\" - annoying but not fatal. Working in Java there is no difference \nbetween the SQL or JDBC calls once the database is up.\n\nI particularly look for features like subSELECTs anywhere SQL allows them, \ncomplete JOIN syntax, and literal row expressions\n(\"( 'Smith', 30, 0, 'Mr.')\"). Both systems are excellent in this regard.\n\nEase of development has to do with tools like psql. PostgreSQL is easier for \nme to use. Oracle has these huge and somewhat opaque tools, from my point of \nview. 
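As a minimal sketch of the kind of DDL and query that ports this way between the two systems — every table and column name below is invented for illustration, and the only substantive points are the CLOB-to-TEXT / BLOB-to-BYTEA renaming plus a subSELECT in FROM and a literal row expression of the sort mentioned above:

    -- Invented schema, for illustration only.  In the Oracle DDL the
    -- body column would be declared CLOB and the payload column BLOB;
    -- in PostgreSQL they become TEXT and BYTEA.
    CREATE TABLE doc_store (
        doc_id   INTEGER PRIMARY KEY,
        title    VARCHAR(200) NOT NULL,
        body     TEXT,
        payload  BYTEA
    );

    -- A subSELECT in the FROM clause plus a literal row expression,
    -- the sort of SQL features referred to above.
    SELECT d.title, t.body_len
      FROM doc_store d
           JOIN (SELECT doc_id, length(body) AS body_len
                   FROM doc_store
                  WHERE body IS NOT NULL) t
             ON t.doc_id = d.doc_id
     WHERE (d.title, d.doc_id) <> ('Smith', 30);

Only the type names in the DDL change between the two databases; the statement text itself is what carries over unmodified.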
Oracle's tools seem to me geared primarily for folks who manage \nenterprise databases and probably aren't intended as much for the lowly \nprogrammer during app development.\n\nFor maintenance I find Postgres much easier. Oracle's tools and procedures, \ninstallation style and the like have much more of a \"big iron\" feel to them, \nwhich might lead one to wonder if PostgreSQL is lackadaisical about enterprise \ndb maintenance. It is not. AFAICS either product gives the DBA everything \nneeded to keep that terabyte data store humming. The learning bump for \nPostgreSQL looks much smaller to me, though.\n\nAt the low end PostgreSQL is clearly superior. I am much more able to \neffectively manage small- to moderate-load databases without being a fully \nexpert DBA using PostgreSQL. I am not experienced at managing large-scale \ndatabases but on the smaller scale I've done a bit, and PostgeSQL is much \nlighter-weight on the practitioner's mind.\n\nYMMV.\n\n-- \nLew\n", "msg_date": "Tue, 19 Jun 2007 09:28:58 -0400", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "John Meyer wrote:\n>>> make Oracle look like it's hiding its head in the sand.\n\nOne of my grammatical bugbears is the misuse of \"its\" and \"it's\". I got quite \nthe thrill from seeing their correct use both within the same phrase. Thank you.\n\nJonah H. Harris wrote:\n>> The humor I see in constant closed-minded presumptions is slowly fading.\n\nInstead of engaing in /ad hominem/ attack (\"you're close-minded, therefore \nyour assertions are false\" - /non sequitur/ - even close-minded assertions can \nbe true irrespective of the level of presumption and completely irrespective \nof your attempt to spin them as jokes) why not address the claim on its merits?\n\nJohn Meyer wrote:\n> This isn't even a straight out slam at Oracle. My degree comes in mass\n> communications, and I cannot understand how anybody could shake the\n> perception that Oracle is afraid of open, independent investigations of\n> its programs.\n\nNot that anecdotal evidence constitutes proof, but I certainly perceive their \nclosed-mouthed and restrictive policy in that light. Why hide the facts \nunless you have something to hide?\n\n> Let's take this at the most beneficial angle that we can, and that is\n> that Oracle has seen too many people run their programs straight into\n> the ground with some rather lousy benchmarking. If that's the case,\n> then the solution is not to bar each and every bench mark out there, it\n> is to publish the methodologies to properly tune an Oracle installation.\n> It is not to attempt to strangle conversation.\n\nOpenness promotes progress and growth - it's true in accounting, legal \nsystems, software development and marketing, not to say everyday living.\n\nOracle would only benefit from an open conversation.\n\n-- \nLew\n", "msg_date": "Tue, 19 Jun 2007 09:36:02 -0400", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "Josh Berkus wrote:\n> David,\n> \n> First of all, it's considered very rude to cross-post to 5 different mailing \n> lists. pgsql-advocacy is the right list for this question; please don't post \n> to more than one list at a time in the future.\n> \n>> I am DBA for Oracle and beginner on Postgres. For an company in France, I\n>> must make a comparative study, between Postgres and Oracle. 
Can you send\n>> any useful document which can help me.\n>> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n>> Limitation : users, volumes ? Resouces needed ? Support ?\n>> Regards\n> \n> You may not be aware, but we have a large French PostgreSQL community:\n> www.postgresqlfr.org\n> \n> I know that Jean-Paul and Dimitri have experience in porting applications, so \n> you should probably contact them to get local help & information on comparing \n> the two DBMSes.\n\nI'm not French but I've written a few web apps that used both PostgreSQL and \nOracle, among others, as back ends. That is, the same app was deployed to \nboth RDBMSes. We had very small data sets, so I cannot speak authoritatively \nabout high-end performance or scalability. My main concerns were SQL \ncompatibility and completeness, ease of development and ease of database \nadministration.\n\nOracle and PostgreSQL came out about even on SQL compatibility and \ncompleteness. I do not know where either has an advantage. Moving DDL \nbetween the two was a matter of knowing that PostgreSQL calls CLOB \"TEXT\" and \nBLOB \"BYTEA\" - annoying but not fatal. Working in Java there is no difference \nbetween the SQL or JDBC calls once the database is up.\n\nI particularly look for features like subSELECTs anywhere SQL allows them, \ncomplete JOIN syntax, and literal row expressions\n(\"( 'Smith', 30, 0, 'Mr.')\"). Both systems are excellent in this regard.\n\nEase of development has to do with tools like psql. PostgreSQL is easier for \nme to use. Oracle has these huge and somewhat opaque tools, from my point of \nview. Oracle's tools seem to me geared primarily for folks who manage \nenterprise databases and probably aren't intended as much for the lowly \nprogrammer during app development.\n\nFor maintenance I find Postgres much easier. Oracle's tools and procedures, \ninstallation style and the like have much more of a \"big iron\" feel to them, \nwhich might lead one to wonder if PostgreSQL is lackadaisical about enterprise \ndb maintenance. It is not. AFAICS either product gives the DBA everything \nneeded to keep that terabyte data store humming. The learning bump for \nPostgreSQL looks much smaller to me, though.\n\nAt the low end PostgreSQL is clearly superior. I am much more able to \neffectively manage small- to moderate-load databases without being a fully \nexpert DBA using PostgreSQL. I am not experienced at managing large-scale \ndatabases but on the smaller scale I've done a bit, and PostgreSQL is much \nlighter-weight on the practitioner's mind.\n\nYMMV.\n\n-- \nLew\n", "msg_date": "Tue, 19 Jun 2007 09:38:30 -0400", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "[email protected] (Carol Walter) writes:\n> I don't want to add gas to the flamewar, but I gotta ask. What is in\n> the the 90 to 95% referred to in this email.\n\nI'd say, look at the Oracle feature set for things that it has that\nPostgreSQL doesn't.\n\nFour that come to mind:\n\n- ORAC = multimaster replication\n- Integration with hardware vendors' High Availability systems\n- Full fledged table partitioning\n- Windowing functions (SQL:2003 stuff, used in OLAP)\n\nThese are features Truly Needed for a relatively small percentage of\nsystems. 
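For concreteness, the SQL:2003 windowing functions mentioned in the list above look roughly like this — the sales table is invented for illustration, and at the time of this thread such a query ran on Oracle's analytic functions but not on PostgreSQL 8.1:

    -- Invented schema; the point is only the OVER (...) window syntax.
    SELECT region,
           sale_date,
           amount,
           sum(amount) OVER (PARTITION BY region
                             ORDER BY sale_date)   AS running_total,
           rank()      OVER (PARTITION BY region
                             ORDER BY amount DESC) AS amount_rank
      FROM sales;

Each row keeps its identity while the aggregate is computed over a partitioned window, which is why these queries show up in OLAP reporting rather than in the departmental applications described next.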
They're typically NOT needed for:\n\n - departmental applications that operate during office hours\n - light weight web apps that aren't challenging the limits of\n the most expensive hardware\n - any application where reliability requirements do not warrant\n spending $1M to make it more reliable\n - applications that make relatively unsophisticated use of data\n (e.g. - it's not worth the analysis to figure out a partitioning\n design, and nobody's running queries so sophisticated that they\n need windowing analytics)\n\nI expect both of those lists are incomplete, but those are big enough\nlists to, I think, justify the claim, at least in loose terms.\n\nThe most important point is that third one, I think: \n \"any application where reliability requirements do not warrant\n spending $1M to make it more reliable\"\n\nAdopting ORAC and/or other HA technologies makes it necessary to spend\na Big Pile Of Money, on hardware and the humans to administer it.\n\nAny system whose importance is not sufficient to warrant *actually\nspending* an extra $1M on improving its reliability is *certain* NOT\nto benefit from either ORAC or HA, because you can't get any relevant\nbenefits without spending pretty big money. Maybe the number is lower\nthan $1M, but I think that's the right order of magnitude.\n-- \noutput = reverse(\"ofni.secnanifxunil\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/nonrdbms.html\n\"One disk to rule them all, One disk to find them. One disk to bring\nthem all and in the darkness grind them. In the Land of Redmond where\nthe shadows lie.\" -- The Silicon Valley Tarot Henrique Holschuh\n", "msg_date": "Tue, 19 Jun 2007 09:39:02 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "[email protected] (Achilleas Mantzios) writes:\n>> I don't want to add gas to the flamewar, but I gotta ask. What is in\n>> the the 90 to 95% referred to in this email.\n>\n> short answer: all cases, possibly except when running a Bank or something \n> similar.\n\nNo, it's not to do with what enterprise you're running; the question\nis what functionality is missing.\n\nAt the simplest level, I'd say that there are Oracle (+DB2) feature\nsets that *are compelling*, particularly in the High Availability\narea.\n\nHowever, those feature sets are ones that require spending a Big Pile\nOf Money (BPOM) to enable them. \n\nFor instance, ORAC (multimaster replication) requires buying a bunch\nof servers and spending a BPOM configuring and administering them.\n\nIf you haven't got the BPOM, or your application isn't so \"mission\ncritical\" as to justify budgeting a BPOM, then, simply put, you won't\nbe using ORAC functionality, and that discards one of the major\njustifications for buying Oracle.\n\n*NO* small business has that BPOM to spend on this, so *NO* database\noperated by a small business can possibly justify \"buying Oracle\nbecause of ORAC.\"\n\nThere will be a lot of \"departmental\" sorts of applications that:\n\n- Aren't that mission critical\n\n- Don't have data models so sophisticated as to require the \"features\n at the edges\" of the big name commercial DBMSes (e.g. 
- partitioning,\n OLAP/Windowing features) that PostgreSQL currently lacks\n \nand those two categorizations, it seems to me, likely define a\nfrontier that allow a whole lot of databases to fall into the \"don't\nneed the Expensive Guys\" region.\n-- \n\"cbbrowne\",\"@\",\"cbbrowne.com\"\nhttp://www3.sympatico.ca/cbbrowne/oses.html\nRules of the Evil Overlord #219. \"I will be selective in the hiring of\nassassins. Anyone who attempts to strike down the hero the first\ninstant his back is turned will not even be considered for the job.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 19 Jun 2007 09:49:45 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On Tuesday 19 June 2007 00:05, John Meyer wrote:\n> Jonah H. Harris wrote:\n> > On 6/18/07, John Meyer <[email protected]> wrote:\n> >> Then Oracle fears its users, which explains the \"no-benchmark\" policy,\n> >> which I cannot see any PR honk being able to spin that in any way that\n> >> doesn't make Oracle look like it's hiding its head in the sand.\n> >\n> > The humor I see in constant closed-minded presumptions is slowly fading.\n>\n> This isn't even a straight out slam at Oracle. My degree comes in mass\n> communications, and I cannot understand how anybody could shake the\n> perception that Oracle is afraid of open, independent investigations of\n> its programs.\n> Let's take this at the most beneficial angle that we can, and that is\n> that Oracle has seen too many people run their programs straight into\n> the ground with some rather lousy benchmarking. If that's the case,\n> then the solution is not to bar each and every bench mark out there, it\n> is to publish the methodologies to properly tune an Oracle installation.\n> It is not to attempt to strangle conversation.\n\nYou do realize that not everyone who publishes a benchmark will actually \n*want* to do a fair comparison, right? \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Tue, 19 Jun 2007 10:41:29 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle" }, { "msg_contents": "\n> The most important point is that third one, I think:\n> \"any application where reliability requirements do not warrant\n> spending $1M to make it more reliable\"\n>\n> Adopting ORAC and/or other HA technologies makes it necessary to spend\n> a Big Pile Of Money, on hardware and the humans to administer it.\n\nIf I were CIO that did not follow the Postgres groups regularly, I would \ntake that to mean that Oracle is automatically more reliable than PG \nbecause you can spend a BPOM to make it so.\n\nLet's ask a different question. If you take BPOM / 2, and instead of \nbuying Oracle, hire consultants to work on a PG solution, could the PG \nsolution achieve the same reliability as Oracle? Would it take the same \namount of time? Or heck, spend the full BPOM on hardening PG against \nfailure - could PG achieve that reliability?\n\nOr, by spending BPOM for Oracle strictly to get that reliability, are you \nonly buying \"enterpriseyness\" (i.e. 
someone to blame and the ability to \none-up a buddy at the golf course)?\n\nCheers,\n-J\n\n", "msg_date": "Tue, 19 Jun 2007 10:54:06 -0400 (EDT)", "msg_from": "Joshua_Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] [PERFORM] Postgres VS Oracle" }, { "msg_contents": "[email protected] (Joshua_Kramer) writes:\n>> The most important point is that third one, I think:\n>> \"any application where reliability requirements do not warrant\n>> spending $1M to make it more reliable\"\n>>\n>> Adopting ORAC and/or other HA technologies makes it necessary to\n>> spend a Big Pile Of Money, on hardware and the humans to administer\n>> it.\n>\n> If I were CIO that did not follow the Postgres groups regularly, I\n> would take that to mean that Oracle is automatically more reliable\n> than PG because you can spend a BPOM to make it so.\n\nThat would be incorrect.\n\nIn cases where you *do not* spend the BPOM, there is not any\nparticular evidence available to indicate that Oracle is, in any\ninteresting way, more reliable than PostgreSQL.\n\nHow many CIOs check into the PostgreSQL advocacy group, just to pick\nout one article?\n\n> Let's ask a different question. If you take BPOM / 2, and instead of\n> buying Oracle, hire consultants to work on a PG solution, could the PG\n> solution achieve the same reliability as Oracle? Would it take the\n> same amount of time? Or heck, spend the full BPOM on hardening PG\n> against failure - could PG achieve that reliability?\n>\n> Or, by spending BPOM for Oracle strictly to get that reliability, are\n> you only buying \"enterpriseyness\" (i.e. someone to blame and the\n> ability to one-up a buddy at the golf course)?\n\nThe major difference, as far as I can see, is that if you spend BPOM\non Oracle, then you can take advantage of some High Availability\nfeatures for Oracle that haven't been implemented for PostgreSQL.\n\nOn the one hand...\n- If you spend LESS THAN the BPOM, then you don't get anything.\n\nOn the other hand...\n- If you spend SPOM (Some Pile Of Money ;-)) on hardening a PostgreSQL\n instance, you may be able to get some improved reliability, but not in\n the form of specific features (e.g. - ORAC) that 'smell like a\n product.'\n\nOn the gripping hand...\n- It is not entirely clear to what degree you can be certain to be\n getting anything better than \"enterpriseyness.\" \n\n For instance, if your disk array blows up (or has a microcode bug\n that makes it scribble randomly on disk), then that is liable to\n destroy your database, irrespective of what other technologies are\n in use as a result of spending the BPOM.\n\n In other words, some risks are certain to be retained, and fancy\n DBMS features can't necessarily mitigate them.\n-- \noutput = reverse(\"ofni.secnanifxunil\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/wp.html\n\"We are all somehow dreadfully cracked about the head, and sadly need\nmending.\" --/Moby-Dick/, Ch 17 \n", "msg_date": "Tue, 19 Jun 2007 11:22:17 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "\n> That would be incorrect.\n\nFactually, you are correct that it's incorrect. 
I'm talking about the \nperception.\n\n> How many CIOs check into the PostgreSQL advocacy group, just to pick\n> out one article?\n\nFew that I know of, which makes my point stronger and brings us to this:\n\n> instance, you may be able to get some improved reliability, but not in\n> the form of specific features (e.g. - ORAC) that 'smell like a\n> product.'\n\nSo, on one hand you can pay BPOM to Oracle for all the enterpriseyness and \nfresh NOS (New Oracle Smell) money can buy. Or...\n\n> In other words, some risks are certain to be retained, and fancy\n> DBMS features can't necessarily mitigate them.\n\n...you can pay SSPOM (Some Smaller Pile Of Money) to a PG vendor to harden \nPG. You won't get the enterprisey NOS, but the end result will be the \nsame. The question then becomes, what are the second-level costs? (i.e., \nwill high-reliability project X complete just as fast by hardening PG as \nit would by using Oracle's built-in features? What are the costs to train \nOracle DBA's on PG - or what are the costs of their downtime while they \nlearn PG?)\n\nCheers,\n-J\n\n", "msg_date": "Tue, 19 Jun 2007 12:03:46 -0400 (EDT)", "msg_from": "Josh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "Chris Browne wrote:\n> [email protected] (Joshua_Kramer) writes:\n>>> The most important point is that third one, I think:\n>>> \"any application where reliability requirements do not warrant\n>>> spending $1M to make it more reliable\"\n>>>\n>>> Adopting ORAC and/or other HA technologies makes it necessary to\n>>> spend a Big Pile Of Money, on hardware and the humans to administer\n>>> it.\n>> If I were CIO that did not follow the Postgres groups regularly, I\n>> would take that to mean that Oracle is automatically more reliable\n>> than PG because you can spend a BPOM to make it so.\n> \n> That would be incorrect.\n> \n> In cases where you *do not* spend the BPOM, there is not any\n> particular evidence available to indicate that Oracle is, in any\n> interesting way, more reliable than PostgreSQL.\n\nNo but there is perception which is quite a bit more powerful.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 19 Jun 2007 09:50:29 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres VS Oracle" }, { "msg_contents": "On Mon, 2007-06-18 at 17:55 +0200, David Tokmatchi wrote:\n\n> I am DBA for Oracle and beginner on Postgres. For an company in\n> France, I must make a comparative study, between Postgres and Oracle.\n> Can you send any useful document which can help me.\n> Scalability ? Performance? Benchmark ? Availability ? Architecture ?\n> Limitation : users, volumes ? Resouces needed ? Support ?\n\nI would suggest you make your comparison based upon your specific needs,\nnot a purely abstract comparison. 
If your not sure what your\nrequirements are, research those first.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jun 2007 21:05:34 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres VS Oracle" }, { "msg_contents": "On Tue, Jun 19, 2007 at 11:22:17AM -0400, Chris Browne wrote:\n> In cases where you *do not* spend the BPOM, there is not any\n> particular evidence available to indicate that Oracle is, in any\n> interesting way, more reliable than PostgreSQL.\n\nI hate to say this, but as true as the above is, it has very close to\nzero relevance to the way most senior managers make decisions.\n\nIt appears to suppose that the way decisions are usually made is\nsomething like this: 1. Establish the problem; 2. Identify what is\nneeded to solve the problem; 3. Evaluate what available technologies\nmeet the requirements established in step 2.\n\nThe _actual_ way corporate decisions are made is mostly gut feel. The\nsimple truth is that most senior managers, even CIOs and CTOs, are\nusually long past the period where technical detail is meaningful to\nthem. They do not -- and probably should not -- know many of the\ndetails of the problems they are nevertheless responsible for\nsolving. Instead, they have to weigh costs and benefits, on the\nbasis of poor evidence and without enough time to get the proper\nevidence. Geeks who hang out here would probably be appalled at the\nslapdash sort of evidence that undergirds large numbers of big\ntechnical decisions. But CIOs and CTOs aren't evaluating technology;\nthey're mitigating risk. \n\nOnce you understand that risk mitigation is practically the only job\nthey have, then buying Oracle in most cases is a no-brainer. It has\nthe best reputation, and has all these features (some of which you\nmight not buy, but _could_ if your Oracle rep were to tell you it\nwould solve some problem you may or may not have) to protect you. \nSo, the only other calculation that should enter the picture is how\nmuch money you have to spend, and how risky it would be to tie that\nup in Oracle licenses. In some cases, that turns out to be too risky,\nand Postgres becomes a viable choice. It's only exceptionally\nvisionary senior managers who operate in other ways. \n\nThere are two important consequences of this. One is that competing\nwith MySQL is worth it, because MySQL is often regarded as the thing\none uses to \"go cheap\" when one can't afford Oracle. Those people\nwill move from MySQL to Oracle as soon as practical, because their\nDBAs often are appalled at the way MySQL works; they might get\naddicted to the excellent features of PostgreSQL, though. The second\nis that marketing to management by using arguments, listing lots of\ntechnical detail and features, and the like, will never work. \nThey'll ignore such cluttered and crowded brochures, because they\ndon't deal in technical detail. 
We have to make PostgreSQL a\nlow-risk choice for them.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Wed, 20 Jun 2007 10:48:10 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "On managerial choosing (was: Postgres VS Oracle)" }, { "msg_contents": "On Wednesday 20 June 2007, Andrew Sullivan wrote:\n> On Tue, Jun 19, 2007 at 11:22:17AM -0400, Chris Browne wrote:\n> > In cases where you *do not* spend the BPOM, there is not any\n> > particular evidence available to indicate that Oracle is, in any\n> > interesting way, more reliable than PostgreSQL.\n>\n> I hate to say this, but as true as the above is, it has very close to\n> zero relevance to the way most senior managers make decisions.\n\n[...]\n\nI also hate to say this, but I fully agree with whatever you wrote in this \nmail.\n\n[...]\n\n> much money you have to spend, and how risky it would be to tie that\n> up in Oracle licenses. In some cases, that turns out to be too risky,\n> and Postgres becomes a viable choice. It's only exceptionally\n> visionary senior managers who operate in other ways.\n\nYes, maybe 10 out of 100. Likely less.\n\n[...]\n> addicted to the excellent features of PostgreSQL, though. The second\n> is that marketing to management by using arguments, listing lots of\n> technical detail and features, and the like, will never work.\n> They'll ignore such cluttered and crowded brochures, because they\n> don't deal in technical detail. We have to make PostgreSQL a\n> low-risk choice for them.\n\nI wonder if this is something which really is a job of the core team or \ncommunity (I think of the 'traditional' PG users who are more technically \nfocussed and maybe not enthusiastic about too much CIO/CTO flavored \ncommunication). Actually this kind of communication looks to me like to be \nperfectly done by commercial PG vendors?\n\nAnastasios", "msg_date": "Wed, 20 Jun 2007 19:57:58 +0200", "msg_from": "Anastasios Hatzis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On managerial choosing (was: Postgres VS Oracle)" }, { "msg_contents": "On Wed, Jun 20, 2007 at 07:57:58PM +0200, Anastasios Hatzis wrote:\n> focussed and maybe not enthusiastic about too much CIO/CTO flavored \n> communication). Actually this kind of communication looks to me like to be \n> perfectly done by commercial PG vendors?\n\nWell, sure, but I sort of assume that the -advocacy list has\nsubscribed to it only people who are interested in promoting\nPostgreSQL. So I figure that this group probably needs to think\nabout the audiences for its output; and C*O people make up one of\nthem.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Wed, 20 Jun 2007 15:45:33 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On managerial choosing (was: Postgres VS Oracle)" }, { "msg_contents": "Andrew Kelly wrote:\n> On Mon, 2007-06-18 at 13:02 -0400, Jonah H. 
Harris wrote:\n> \n>> On 6/18/07, Andreas Kostyrka <[email protected]> wrote:\n>> \n>>> As a cynic, I might ask, what Oracle is fearing?\n>>> \n>> As a realist, I might ask, how many times do we have to answer this\n>> type of anti-commercial-database flamewar-starting question?\n>>\n>> \n>\n> As a nudist, I think I have to answer, \"About every 9 weeks, it would\n> seem\".\n\nAs a surrealist, I'd have to say purple.\n", "msg_date": "Wed, 20 Jun 2007 17:13:15 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PERFORM] Postgres VS Oracle" } ]
[ { "msg_contents": "Hi,\n\nI have an application which really exercises the performance of \npostgresql in a major way, and I am running into a performance \nbottleneck with Postgresql 8.1 that I do not yet understand.\n\nHere are the details:\n\n- There is a primary table, with some secondary tables\n- The principle transaction consists of a \"SELECT...FOR UPDATE\", \nfollowed by either an INSERT or an UPDATE on the primary table\n- INSERTs, DELETEs, and UPDATEs may occur on the secondary table \ndepending on what happens with the primary table, for any given \ntransaction. The secondary table has about 10x the number of rows as \nthe primary.\n- All operations are carefully chosen so that highly discriminatory \nindexes are used to locate the record(s) in question. The execution \nplans show INDEX SCAN operations being done in all cases.\n- At any given time, there are up to 100 of these operations going on at \nonce against the same database.\n\nWhat I am seeing:\n\n- In postgresql 7.4, the table activity seems to be gated by locks, and \nruns rather slowly except when the sizes of the tables are small.\n- In postgresql 8.1, locks do not seem to be an issue, and the activity \nruns about 10x faster than for postgresql 7.4.\n- For EITHER database version, the scaling behavior is not the log(n) \nbehavior I'd expect (where n is the number of rows in the table), but \nmuch more like linear performance. That is, as the tables grow, \nperformance drops off precipitously. For a primary table size up to \n100,000 rows or so, I get somewhere around 700 transactions per minute, \non average. Between 100,000 and 1,000,000 rows I got some 150 \ntransactions per minute. At about 1,500,000 rows I get about 40 \ntransactions per minute.\n- Access to a row in the secondary table (which right now has 13,000,000 \nrows in it) via an index that has extremely good discriminatory ability \non a busy machine takes about 90 seconds elapsed time at the moment - \nwhich I feel is pretty high.\n\nI tried increasing the shared_buffers parameter to see if it had any \nimpact on overall throughput. It was moderately helpful going from the \nsmall default value up to 8192, but less helpful when I increased it \nbeyond that. Currently I have it set to 131072.\n\nQuestion: Does anyone have any idea what bottleneck I am hitting? An \nindex's performance should in theory scale as the log of the number of \nrows - what am I missing here?\n\nThanks very much!\nKarl\n", "msg_date": "Mon, 18 Jun 2007 15:30:31 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright wrote:\n> Hi,\n> \n> I have an application which really exercises the performance of \n> postgresql in a major way, and I am running into a performance \n> bottleneck with Postgresql 8.1 that I do not yet understand.\n> \n> Here are the details:\n> \n> - There is a primary table, with some secondary tables\n> - The principle transaction consists of a \"SELECT...FOR UPDATE\", \n> followed by either an INSERT or an UPDATE on the primary table\n> - INSERTs, DELETEs, and UPDATEs may occur on the secondary table \n> depending on what happens with the primary table, for any given \n> transaction. The secondary table has about 10x the number of rows as \n> the primary.\n> - All operations are carefully chosen so that highly discriminatory \n> indexes are used to locate the record(s) in question. 
The execution \n> plans show INDEX SCAN operations being done in all cases.\n> - At any given time, there are up to 100 of these operations going on at \n> once against the same database.\n> \n> What I am seeing:\n> \n> - In postgresql 7.4, the table activity seems to be gated by locks, and \n> runs rather slowly except when the sizes of the tables are small.\n> - In postgresql 8.1, locks do not seem to be an issue, and the activity \n> runs about 10x faster than for postgresql 7.4.\n> - For EITHER database version, the scaling behavior is not the log(n) \n> behavior I'd expect (where n is the number of rows in the table), but \n> much more like linear performance. That is, as the tables grow, \n> performance drops off precipitously. For a primary table size up to \n> 100,000 rows or so, I get somewhere around 700 transactions per minute, \n> on average. Between 100,000 and 1,000,000 rows I got some 150 \n> transactions per minute. At about 1,500,000 rows I get about 40 \n> transactions per minute.\n> - Access to a row in the secondary table (which right now has 13,000,000 \n> rows in it) via an index that has extremely good discriminatory ability \n> on a busy machine takes about 90 seconds elapsed time at the moment - \n> which I feel is pretty high.\n> \n> I tried increasing the shared_buffers parameter to see if it had any \n> impact on overall throughput. It was moderately helpful going from the \n> small default value up to 8192, but less helpful when I increased it \n> beyond that. Currently I have it set to 131072.\n> \n> Question: Does anyone have any idea what bottleneck I am hitting? An \n> index's performance should in theory scale as the log of the number of \n> rows - what am I missing here?\n> \n> Thanks very much!\n> Karl\n> \n\nI suppose I should also have noted that the postgresql processes that \nare dealing with the transactions seem to be CPU bound. 
Here's a \"top\" \nfrom the running system:\n\ntop - 15:58:50 up 4 days, 4:45, 1 user, load average: 17.14, 21.05, 22.46\nTasks: 194 total, 15 running, 177 sleeping, 0 stopped, 2 zombie\nCpu(s): 98.4% us, 1.5% sy, 0.0% ni, 0.0% id, 0.1% wa, 0.0% hi, 0.0% si\nMem: 16634256k total, 16280244k used, 354012k free, 144560k buffers\nSwap: 8008360k total, 56k used, 8008304k free, 15071968k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n15966 postgres 18 0 1052m 1.0g 1.0g R 66.5 6.3 0:18.64 postmaster\n14683 postgres 17 0 1053m 1.0g 1.0g R 54.9 6.3 0:17.90 postmaster\n17050 postgres 15 0 1052m 93m 90m S 50.3 0.6 0:06.42 postmaster\n16816 postgres 18 0 1052m 166m 162m R 46.3 1.0 0:04.80 postmaster\n16697 postgres 18 0 1052m 992m 988m R 42.3 6.1 0:15.49 postmaster\n17272 postgres 16 0 1053m 277m 273m S 30.8 1.7 0:09.91 postmaster\n16659 postgres 16 0 1052m 217m 213m R 29.8 1.3 0:06.60 postmaster\n15509 postgres 18 0 1052m 1.0g 1.0g R 23.2 6.4 0:26.72 postmaster\n16329 postgres 18 0 1052m 195m 191m R 16.9 1.2 0:05.54 postmaster\n14019 postgres 20 0 1052m 986m 983m R 16.5 6.1 0:16.50 postmaster\n17002 postgres 18 0 1052m 38m 35m R 12.6 0.2 0:02.98 postmaster\n16960 postgres 15 0 1053m 453m 449m S 3.3 2.8 0:10.39 postmaster\n16421 postgres 15 0 1053m 1.0g 1.0g S 2.3 6.2 0:23.59 postmaster\n13588 postgres 15 0 1052m 1.0g 1.0g D 0.3 6.4 0:47.89 postmaster\n24708 root 15 0 2268 1136 836 R 0.3 0.0 0:05.92 top\n 1 root 15 0 1584 520 452 S 0.0 0.0 0:02.08 init\n\nKarl\n\n\n\n", "msg_date": "Mon, 18 Jun 2007 15:59:30 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright <[email protected]> writes:\n> - At any given time, there are up to 100 of these operations going on at \n> once against the same database.\n\nIt sounds like your hardware is far past \"maxed out\". Which is odd\nsince tables with a million or so rows are pretty small for modern\nhardware. What's the CPU and disk hardware here, exactly? What do you\nsee when watching vmstat or iostat (as appropriate for OS, which you\ndidn't mention either)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jun 2007 22:56:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access " }, { "msg_contents": "\n> Question: Does anyone have any idea what bottleneck I am hitting? An \n> index's performance should in theory scale as the log of the number of \n> rows - what am I missing here?\n\n\tThese can help people on the list to help you :\n\n\t- Your hardware config (CPU, RAM, disk) ?\n\t- EXPLAIN ANALYZE from slow queries ?\n\t- VACUUM and ANALYZE : yes ? how often ?\n\t- VACUUM VERBOSE output\n\n\tfor huge bits of text with long line length, mail sucks, upload to a web \nhost or something.\n", "msg_date": "Tue, 19 Jun 2007 08:42:51 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Tom Lane wrote:\n> Karl Wright <[email protected]> writes:\n>> - At any given time, there are up to 100 of these operations going on at \n>> once against the same database.\n> \n> It sounds like your hardware is far past \"maxed out\". Which is odd\n> since tables with a million or so rows are pretty small for modern\n> hardware. What's the CPU and disk hardware here, exactly? 
What do you\n> see when watching vmstat or iostat (as appropriate for OS, which you\n> didn't mention either)?\n> \n> \t\t\tregards, tom lane\n> \n\nYes, I was surprised as well, which is why I decided to post.\n\nThe hardware is a Dell 2950, two processor, dual-core each processor, 16 \nGB memory, with a RAID disk controller. The operating system is Debian \nLinux (sarge plus mods, currently using the Postgresql 8.1 backport).\n\nAlso, as I said before, I have done extensive query analysis and found \nthat the plans for the queries that are taking a long time are in fact \nvery reasonable. Here's an example from the application log of a query \nthat took way more time than its plan would seem to indicate it should:\n\n >>>>>>\n[2007-06-18 09:39:49,783]ERROR Found a query that took more than a \nminute: [UPDATE intrinsiclink SET isnew=? WHERE ((jobid=? AND \nchildidhash=? AND childid=?)) AND (isnew=? OR isnew=?)]\n[2007-06-18 09:39:49,783]ERROR Parameter 0: 'B'\n[2007-06-18 09:39:49,783]ERROR Parameter 1: '1181766706097'\n[2007-06-18 09:39:49,783]ERROR Parameter 2: \n'7E130F3B688687757187F1638D8776ECEF3009E0'\n[2007-06-18 09:39:49,783]ERROR Parameter 3: \n'http://norwich.openguides.org/?action=index;index_type=category;index_value=Cafe;format=atom'\n[2007-06-18 09:39:49,783]ERROR Parameter 4: 'E'\n[2007-06-18 09:39:49,783]ERROR Parameter 5: 'N'\n[2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \nintrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n[2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n((childidhash)::text = ($3)::text))\n[2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \nAND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n[2007-06-18 09:39:49,797]ERROR\n<<<<<<\n(The intrinsiclink table above is the \"child table\" I was referring to \nearlier, with 13,000,000 rows at the moment.)\n\nOvernight I shut things down and ran a VACUUM operation to see if that \nmight help. I'll post again when I find out if indeed that changed any \nperformance numbers. If not, I'll be able to post vmstat output at that \ntime.\n\nKarl\n\n\n\n", "msg_date": "Tue, 19 Jun 2007 07:04:17 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "An overnight VACUUM helped things quite a bit. I am now getting \nthroughput of around 75 transactions per minute, where before I was \ngetting 30. Also, the CPU is no longer pegged, and the machines load \naverage has dropped to an acceptable 6-10 from somewhere above 20.\n\nWhile this is still pretty far off the best performance I saw (when the \ntables were smaller), it's reasonably consistent with O(log(n)) \nperformance at least.\n\nThis particular run lasted four days before a VACUUM became essential. \nThe symptom that indicates that VACUUM is needed seems to be that the \nCPU usage of any given postgresql query skyrockets. Is this essentially \ncorrect?\n\nKarl\n\nKarl Wright wrote:\n> Tom Lane wrote:\n>> Karl Wright <[email protected]> writes:\n>>> - At any given time, there are up to 100 of these operations going on \n>>> at once against the same database.\n>>\n>> It sounds like your hardware is far past \"maxed out\". Which is odd\n>> since tables with a million or so rows are pretty small for modern\n>> hardware. What's the CPU and disk hardware here, exactly? 
What do you\n>> see when watching vmstat or iostat (as appropriate for OS, which you\n>> didn't mention either)?\n>>\n>> regards, tom lane\n>>\n> \n> Yes, I was surprised as well, which is why I decided to post.\n> \n> The hardware is a Dell 2950, two processor, dual-core each processor, 16 \n> GB memory, with a RAID disk controller. The operating system is Debian \n> Linux (sarge plus mods, currently using the Postgresql 8.1 backport).\n> \n> Also, as I said before, I have done extensive query analysis and found \n> that the plans for the queries that are taking a long time are in fact \n> very reasonable. Here's an example from the application log of a query \n> that took way more time than its plan would seem to indicate it should:\n> \n> >>>>>>\n> [2007-06-18 09:39:49,783]ERROR Found a query that took more than a \n> minute: [UPDATE intrinsiclink SET isnew=? WHERE ((jobid=? AND \n> childidhash=? AND childid=?)) AND (isnew=? OR isnew=?)]\n> [2007-06-18 09:39:49,783]ERROR Parameter 0: 'B'\n> [2007-06-18 09:39:49,783]ERROR Parameter 1: '1181766706097'\n> [2007-06-18 09:39:49,783]ERROR Parameter 2: \n> '7E130F3B688687757187F1638D8776ECEF3009E0'\n> [2007-06-18 09:39:49,783]ERROR Parameter 3: \n> 'http://norwich.openguides.org/?action=index;index_type=category;index_value=Cafe;format=atom' \n> \n> [2007-06-18 09:39:49,783]ERROR Parameter 4: 'E'\n> [2007-06-18 09:39:49,783]ERROR Parameter 5: 'N'\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \n> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n> ((childidhash)::text = ($3)::text))\n> [2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \n> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n> [2007-06-18 09:39:49,797]ERROR\n> <<<<<<\n> (The intrinsiclink table above is the \"child table\" I was referring to \n> earlier, with 13,000,000 rows at the moment.)\n> \n> Overnight I shut things down and ran a VACUUM operation to see if that \n> might help. I'll post again when I find out if indeed that changed any \n> performance numbers. If not, I'll be able to post vmstat output at that \n> time.\n> \n> Karl\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Tue, 19 Jun 2007 08:56:56 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Karl Wright wrote:\n\n> This particular run lasted four days before a VACUUM became essential. \n> The symptom that indicates that VACUUM is needed seems to be that the \n> CPU usage of any given postgresql query skyrockets. Is this essentially \n> correct?\n\nAre you saying you weren't used to run VACUUM all the time? If so,\nthat's where the problem lies.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"C�mo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qu� formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. 
Bradbury)\n", "msg_date": "Tue, 19 Jun 2007 09:26:43 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Alvaro Herrera wrote:\n> Karl Wright wrote:\n> \n>> This particular run lasted four days before a VACUUM became essential. \n>> The symptom that indicates that VACUUM is needed seems to be that the \n>> CPU usage of any given postgresql query skyrockets. Is this essentially \n>> correct?\n> \n> Are you saying you weren't used to run VACUUM all the time? If so,\n> that's where the problem lies.\n> \n\nPostgresql 7.4 VACUUM runs for so long that starting it with a cron job \neven every 24 hours caused multiple instances of VACUUM to eventually be \nrunning in my case. So I tried to find a VACUUM schedule that permitted \neach individual vacuum to finish before the next one started. A vacuum \nseemed to require 4-5 days with this particular database - or at least \nit did for 7.4. So I had the VACUUM schedule set to run every six days.\n\nI will be experimenting with 8.1 to see how long it takes to complete a \nvacuum under load conditions tonight.\n\nKarl\n\n", "msg_date": "Tue, 19 Jun 2007 09:37:10 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "\n\"Karl Wright\" <[email protected]> writes:\n\n> This particular run lasted four days before a VACUUM became essential. The\n> symptom that indicates that VACUUM is needed seems to be that the CPU usage of\n> any given postgresql query skyrockets. Is this essentially correct?\n\nPostgres is designed on the assumption that VACUUM is run regularly. By\n\"regularly\" we're talking of an interval usually on the order of hours, or\neven less. On some workloads some tables need to be vacuumed every 5 minutes,\nfor example.\n\nVACUUM doesn't require shutting down the system, it doesn't lock any tables or\notherwise prevent other jobs from making progress. It does add extra i/o but\nthere are knobs to throttle its i/o needs. The intention is that VACUUM run in\nthe background more or less continually using spare i/o bandwidth.\n\nThe symptom of not having run vacuum regularly is that tables and indexes\nbloat to larger sizes than necessary. If you run \"VACUUM VERBOSE\" it'll tell\nyou how much bloat your tables and indexes are suffering from (though the\noutput is a bit hard to interpret).\n\nTable and index bloat slow things down but not generally by increasing cpu\nusage. Usually they slow things down by causing queries to require more i/o.\n\nIt's only UPDATES and DELETES that create garbage tuples that need to be\nvacuumed though. If some of your tables are mostly insert-only they might need\nto be vacuumed as frequently or at all.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 19 Jun 2007 14:46:18 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright wrote:\n> Alvaro Herrera wrote:\n> >Karl Wright wrote:\n> >\n> >>This particular run lasted four days before a VACUUM became essential. \n> >>The symptom that indicates that VACUUM is needed seems to be that the \n> >>CPU usage of any given postgresql query skyrockets. 
Is this essentially \n> >>correct?\n> >\n> >Are you saying you weren't used to run VACUUM all the time? If so,\n> >that's where the problem lies.\n> \n> Postgresql 7.4 VACUUM runs for so long that starting it with a cron job \n> even every 24 hours caused multiple instances of VACUUM to eventually be \n> running in my case. So I tried to find a VACUUM schedule that permitted \n> each individual vacuum to finish before the next one started. A vacuum \n> seemed to require 4-5 days with this particular database - or at least \n> it did for 7.4. So I had the VACUUM schedule set to run every six days.\n\nHow large is the database? I must admit I have never seen a database\nthat took 4 days to vacuum. This could mean that your database is\nhumongous, or that the vacuum strategy is wrong for some reason.\n\nYou know that you can run vacuum on particular tables, right? It would\nbe probably a good idea to run vacuum on the most updated tables, and\nleave alone those that are not or little updated (hopefully the biggest;\nthis would mean that an almost-complete vacuum run would take much less\nthan a whole day).\n\nOr maybe vacuum was stuck waiting on a lock somewhere.\n\n> I will be experimenting with 8.1 to see how long it takes to complete a \n> vacuum under load conditions tonight.\n\nYou can also turn autovacuum on in 8.1, which might help quite a bit\nwith finding a good vacuum schedule (you would need a bit of tuning it\nthough, of course).\n\nIn any case, if you are struggling for performance you are strongly\nadviced to upgrade to 8.2.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"No single strategy is always right (Unless the boss says so)\"\n (Larry Wall)\n", "msg_date": "Tue, 19 Jun 2007 09:46:37 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "On Tue, 2007-06-19 at 09:37 -0400, Karl Wright wrote:\n> Alvaro Herrera wrote:\n> > Karl Wright wrote:\n> > \n> >> This particular run lasted four days before a VACUUM became essential. \n> >> The symptom that indicates that VACUUM is needed seems to be that the \n> >> CPU usage of any given postgresql query skyrockets. Is this essentially \n> >> correct?\n> > \n> > Are you saying you weren't used to run VACUUM all the time? If so,\n> > that's where the problem lies.\n> > \n> \n> Postgresql 7.4 VACUUM runs for so long that starting it with a cron job \n> even every 24 hours caused multiple instances of VACUUM to eventually be \n> running in my case. So I tried to find a VACUUM schedule that permitted \n> each individual vacuum to finish before the next one started. A vacuum \n> seemed to require 4-5 days with this particular database - or at least \n> it did for 7.4. So I had the VACUUM schedule set to run every six days.\n> \n> I will be experimenting with 8.1 to see how long it takes to complete a \n> vacuum under load conditions tonight.\n\nThe longer you wait between vacuuming, the longer each vacuum is going\nto take. \n\nThere is of course a point of diminishing returns for vacuum where this\nno longer holds true; if you vacuum too frequently the overhead of\nrunning the vacuum will dominate the running time. But 6 days for a\nbusy database is probably way, way, way past that threshold.\n\nGenerally, the busier the database the more frequently you need to\nvacuum, not less. 
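Both suggestions above, vacuuming the hot tables individually and letting 8.1's integrated autovacuum pick the frequency per table, can be sketched concretely. The table name is the large one mentioned earlier in the thread, and the settings are illustrative starting points rather than values anyone here has recommended:

-- Vacuum the heavily-updated tables one at a time and often, instead of a
-- database-wide pass every few days (intrinsiclink is the 13M-row table from
-- earlier in the thread; repeat for the other hot tables):
VACUUM ANALYZE intrinsiclink;

-- 8.1's integrated autovacuum is configured in postgresql.conf; these values are
-- only plausible defaults and would need tuning against this workload:
--   stats_start_collector = on            # both required for autovacuum in 8.1
--   stats_row_level = on
--   autovacuum = on
--   autovacuum_naptime = 60               # seconds between autovacuum rounds
--   autovacuum_vacuum_scale_factor = 0.1  # vacuum once ~10% of a table is dead
--   autovacuum_vacuum_cost_delay = 20     # throttle autovacuum I/O (milliseconds)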
If your update/delete transaction rate is high enough\nthen you may need to vacuum multiple times per hour, at least on some\ntables. Playing with autovacuum might help you out here, because it can\nlook at how badly a vacuum is needed and adjust the vacuuming rate on\nthe fly on a per-table basis. Be sure to look up some reasonable\nautovacuum settings first; the 8.1 defaults aren't.\n\n-- Mark\n", "msg_date": "Tue, 19 Jun 2007 06:50:30 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of\n\tconcurrent access" }, { "msg_contents": "Gregory Stark wrote:\n> \"Karl Wright\" <[email protected]> writes:\n> \n>> This particular run lasted four days before a VACUUM became essential. The\n>> symptom that indicates that VACUUM is needed seems to be that the CPU usage of\n>> any given postgresql query skyrockets. Is this essentially correct?\n> \n> Postgres is designed on the assumption that VACUUM is run regularly. By\n> \"regularly\" we're talking of an interval usually on the order of hours, or\n> even less. On some workloads some tables need to be vacuumed every 5 minutes,\n> for example.\n\nFine - but what if the previous vacuum is still in progress, and does \nnot finish in 5 minutes?\n\n> \n> VACUUM doesn't require shutting down the system, it doesn't lock any tables or\n> otherwise prevent other jobs from making progress. It does add extra i/o but\n> there are knobs to throttle its i/o needs. The intention is that VACUUM run in\n> the background more or less continually using spare i/o bandwidth.\n> \n\nThis spare bandwidth is apparently hard to come by in my particular \napplication. That's the only way I can reconcile your information with \nit taking 4 days to complete.\n\n> The symptom of not having run vacuum regularly is that tables and indexes\n> bloat to larger sizes than necessary. If you run \"VACUUM VERBOSE\" it'll tell\n> you how much bloat your tables and indexes are suffering from (though the\n> output is a bit hard to interpret).\n> \n> Table and index bloat slow things down but not generally by increasing cpu\n> usage. Usually they slow things down by causing queries to require more i/o.\n> \n\nYes, that's what I understood, which is why I was puzzled by the effects \nI was seeing.\n\n> It's only UPDATES and DELETES that create garbage tuples that need to be\n> vacuumed though. If some of your tables are mostly insert-only they might need\n> to be vacuumed as frequently or at all.\n> \n\nWell, the smaller tables don't change much, but the bigger tables have a \n lively mix of inserts and updates, so I would expect these would need \nvacuuming often.\n\nI'll post again when I can find a vacuum schedule that seems to work.\n\nKarl\n", "msg_date": "Tue, 19 Jun 2007 10:02:12 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Alvaro Herrera wrote:\n> Karl Wright wrote:\n>> Alvaro Herrera wrote:\n>>> Karl Wright wrote:\n>>>\n>>>> This particular run lasted four days before a VACUUM became essential. \n>>>> The symptom that indicates that VACUUM is needed seems to be that the \n>>>> CPU usage of any given postgresql query skyrockets. Is this essentially \n>>>> correct?\n>>> Are you saying you weren't used to run VACUUM all the time? 
If so,\n>>> that's where the problem lies.\n>> Postgresql 7.4 VACUUM runs for so long that starting it with a cron job \n>> even every 24 hours caused multiple instances of VACUUM to eventually be \n>> running in my case. So I tried to find a VACUUM schedule that permitted \n>> each individual vacuum to finish before the next one started. A vacuum \n>> seemed to require 4-5 days with this particular database - or at least \n>> it did for 7.4. So I had the VACUUM schedule set to run every six days.\n> \n> How large is the database? I must admit I have never seen a database\n> that took 4 days to vacuum. This could mean that your database is\n> humongous, or that the vacuum strategy is wrong for some reason.\n> \n\nThe database is humongus, and the machine is under intense load. On the \ninstance where this long vacuum occurred, there were several large \ntables - one with 7,000,000 rows, one with 14,000,000, one with \n140,000,000, and one with 250,000,000.\n\n> You know that you can run vacuum on particular tables, right? It would\n> be probably a good idea to run vacuum on the most updated tables, and\n> leave alone those that are not or little updated (hopefully the biggest;\n> this would mean that an almost-complete vacuum run would take much less\n> than a whole day).\n\nYeah, sorry, that doesn't apply here.\n\n> \n> Or maybe vacuum was stuck waiting on a lock somewhere.\n> \n>> I will be experimenting with 8.1 to see how long it takes to complete a \n>> vacuum under load conditions tonight.\n> \n> You can also turn autovacuum on in 8.1, which might help quite a bit\n> with finding a good vacuum schedule (you would need a bit of tuning it\n> though, of course).\n> \n> In any case, if you are struggling for performance you are strongly\n> adviced to upgrade to 8.2.\n> \n\nOk - that's something I should be able to do once we can go to debian's \netch release. There's a backport of 8.2 available there. (The one for \nsarge is still considered 'experimental').\n\nKarl\n\n", "msg_date": "Tue, 19 Jun 2007 10:06:25 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "In response to Karl Wright <[email protected]>:\n\n> Alvaro Herrera wrote:\n> > Karl Wright wrote:\n> >> Alvaro Herrera wrote:\n> >>> Karl Wright wrote:\n> >>>\n> >>>> This particular run lasted four days before a VACUUM became essential. \n> >>>> The symptom that indicates that VACUUM is needed seems to be that the \n> >>>> CPU usage of any given postgresql query skyrockets. Is this essentially \n> >>>> correct?\n> >>> Are you saying you weren't used to run VACUUM all the time? If so,\n> >>> that's where the problem lies.\n> >> Postgresql 7.4 VACUUM runs for so long that starting it with a cron job \n> >> even every 24 hours caused multiple instances of VACUUM to eventually be \n> >> running in my case. So I tried to find a VACUUM schedule that permitted \n> >> each individual vacuum to finish before the next one started. A vacuum \n> >> seemed to require 4-5 days with this particular database - or at least \n> >> it did for 7.4. So I had the VACUUM schedule set to run every six days.\n> > \n> > How large is the database? I must admit I have never seen a database\n> > that took 4 days to vacuum. This could mean that your database is\n> > humongous, or that the vacuum strategy is wrong for some reason.\n> \n> The database is humongus, and the machine is under intense load. 
On the \n> instance where this long vacuum occurred, there were several large \n> tables - one with 7,000,000 rows, one with 14,000,000, one with \n> 140,000,000, and one with 250,000,000.\n\nDon't rule out the possibility that the only way to fix this _might_ be to\nthrow more hardware at it. Proper configuration can buy you a lot, but if\nyour usage is exceeding the available bandwidth of the IO subsystem, the\nonly way you're going to get better performance is to put in a faster IO\nsubsystem.\n\n> > You know that you can run vacuum on particular tables, right? It would\n> > be probably a good idea to run vacuum on the most updated tables, and\n> > leave alone those that are not or little updated (hopefully the biggest;\n> > this would mean that an almost-complete vacuum run would take much less\n> > than a whole day).\n> \n> Yeah, sorry, that doesn't apply here.\n\nWhy not? I see no reason why an appropriate autovaccum schedule would not\napply to your scenario. I'm not saying it does, only that your response\ndoes not indicate that it doesn't, and thus I'm concerned that you're\nwriting autovacuum off without proper research.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 19 Jun 2007 10:15:04 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of\n concurrent access" }, { "msg_contents": "Bill Moran wrote:\n> In response to Karl Wright <[email protected]>:\n> \n>> Alvaro Herrera wrote:\n>>> Karl Wright wrote:\n>>>> Alvaro Herrera wrote:\n>>>>> Karl Wright wrote:\n>>>>>\n>>>>>> This particular run lasted four days before a VACUUM became essential. \n>>>>>> The symptom that indicates that VACUUM is needed seems to be that the \n>>>>>> CPU usage of any given postgresql query skyrockets. Is this essentially \n>>>>>> correct?\n>>>>> Are you saying you weren't used to run VACUUM all the time? If so,\n>>>>> that's where the problem lies.\n>>>> Postgresql 7.4 VACUUM runs for so long that starting it with a cron job \n>>>> even every 24 hours caused multiple instances of VACUUM to eventually be \n>>>> running in my case. So I tried to find a VACUUM schedule that permitted \n>>>> each individual vacuum to finish before the next one started. A vacuum \n>>>> seemed to require 4-5 days with this particular database - or at least \n>>>> it did for 7.4. So I had the VACUUM schedule set to run every six days.\n>>> How large is the database? I must admit I have never seen a database\n>>> that took 4 days to vacuum. This could mean that your database is\n>>> humongous, or that the vacuum strategy is wrong for some reason.\n>> The database is humongus, and the machine is under intense load. On the \n>> instance where this long vacuum occurred, there were several large \n>> tables - one with 7,000,000 rows, one with 14,000,000, one with \n>> 140,000,000, and one with 250,000,000.\n> \n> Don't rule out the possibility that the only way to fix this _might_ be to\n> throw more hardware at it. Proper configuration can buy you a lot, but if\n> your usage is exceeding the available bandwidth of the IO subsystem, the\n> only way you're going to get better performance is to put in a faster IO\n> subsystem.\n> \n>>> You know that you can run vacuum on particular tables, right? 
It would\n>>> be probably a good idea to run vacuum on the most updated tables, and\n>>> leave alone those that are not or little updated (hopefully the biggest;\n>>> this would mean that an almost-complete vacuum run would take much less\n>>> than a whole day).\n>> Yeah, sorry, that doesn't apply here.\n> \n> Why not? I see no reason why an appropriate autovaccum schedule would not\n> apply to your scenario. I'm not saying it does, only that your response\n> does not indicate that it doesn't, and thus I'm concerned that you're\n> writing autovacuum off without proper research.\n> \n\nI'm not writing off autovacuum - just the concept that the large tables \naren't the ones that are changing. Unfortunately, they *are* the most \ndynamically updated.\n\nKarl\n\n", "msg_date": "Tue, 19 Jun 2007 10:17:07 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "\"Karl Wright\" <[email protected]> writes:\n\n> Fine - but what if the previous vacuum is still in progress, and does not\n> finish in 5 minutes?\n\nYes, well, there are problems with this design but the situation is already\nmuch improved in 8.2 and there are more improvements on the horizon.\n\nBut it's likely that much of your pain is artificial here and once your\ndatabase is cleaned up a bit more it will be easier to manage. \n\n> Well, the smaller tables don't change much, but the bigger tables have a lively\n> mix of inserts and updates, so I would expect these would need vacuuming often.\n\nHm, I wonder if you're running into a performance bug that was fixed sometime\nback around then. It involved having large numbers of tuples indexed with the\nsame key value. Every search for a single record required linearly searching\nthrough the entire list of values.\n\nIf you have thousands of updates against the same tuple between vacuums you'll\nhave the same kind of situation and queries against that key will indeed\nrequire lots of cpu.\n\nTo help any more you'll have to answer the basic questions like how many rows\nare in the tables that take so long to vacuum, and how large are they on disk.\nOn 7.4 I think the best way to get the table size actually is by doing\n\"select relfilenode from pg_class where relname = 'tablename'\" and then looking\nin the postgres directory for the files in base/*/<relfilenode>*\n\nThe best information would be to do vacuum verbose and report the data it\nprints out.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 19 Jun 2007 15:19:34 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "A useful utility that I've found is PgFouine. It has an option to \nanalyze VACUUM VERBOSE logs. It has been instrumental in helping me \nfigure out whats been going on with my VACUUM that is taking 4+ \nhours, specifically tracking the tables that are taking the longest. \nI highly recommend checking it out. It would also perhaps be a good \nidea rather than simply starting a vacuum every 6 days, set it so \nthat it starts again as soon as it finishes (using a lock file or \nsomething that is polled for every few hours or minutes). 
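Since the installation under discussion is now on 8.1, the on-disk size check described just above for 7.4 (relfilenode plus a look at the data directory) can also be done from SQL; a minimal sketch, assuming 8.1's built-in size functions:

-- The ten largest tables, with and without their indexes. A big gap between the
-- two figures usually points at index bloat that vacuuming (or reindexing) would
-- reclaim.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS heap_size,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_with_indexes
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;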
This way, \na vacuum will kick off right when the other one finishes, hopefully \nslowly decreasing in time over time.\n\nHope this helps...\n\n/kurt\n\n\nOn Jun 19, 2007, at 10:06 AM, Karl Wright wrote:\n\n> Alvaro Herrera wrote:\n>> Karl Wright wrote:\n>>> Alvaro Herrera wrote:\n>>>> Karl Wright wrote:\n>>>>\n>>>>> This particular run lasted four days before a VACUUM became \n>>>>> essential. The symptom that indicates that VACUUM is needed \n>>>>> seems to be that the CPU usage of any given postgresql query \n>>>>> skyrockets. Is this essentially correct?\n>>>> Are you saying you weren't used to run VACUUM all the time? If so,\n>>>> that's where the problem lies.\n>>> Postgresql 7.4 VACUUM runs for so long that starting it with a \n>>> cron job even every 24 hours caused multiple instances of VACUUM \n>>> to eventually be running in my case. So I tried to find a VACUUM \n>>> schedule that permitted each individual vacuum to finish before \n>>> the next one started. A vacuum seemed to require 4-5 days with \n>>> this particular database - or at least it did for 7.4. So I had \n>>> the VACUUM schedule set to run every six days.\n>> How large is the database? I must admit I have never seen a database\n>> that took 4 days to vacuum. This could mean that your database is\n>> humongous, or that the vacuum strategy is wrong for some reason.\n>\n> The database is humongus, and the machine is under intense load. \n> On the instance where this long vacuum occurred, there were several \n> large tables - one with 7,000,000 rows, one with 14,000,000, one \n> with 140,000,000, and one with 250,000,000.\n>\n>> You know that you can run vacuum on particular tables, right? It \n>> would\n>> be probably a good idea to run vacuum on the most updated tables, and\n>> leave alone those that are not or little updated (hopefully the \n>> biggest;\n>> this would mean that an almost-complete vacuum run would take much \n>> less\n>> than a whole day).\n>\n> Yeah, sorry, that doesn't apply here.\n>\n>> Or maybe vacuum was stuck waiting on a lock somewhere.\n>>> I will be experimenting with 8.1 to see how long it takes to \n>>> complete a vacuum under load conditions tonight.\n>> You can also turn autovacuum on in 8.1, which might help quite a bit\n>> with finding a good vacuum schedule (you would need a bit of \n>> tuning it\n>> though, of course).\n>> In any case, if you are struggling for performance you are strongly\n>> adviced to upgrade to 8.2.\n>\n> Ok - that's something I should be able to do once we can go to \n> debian's etch release. There's a backport of 8.2 available there. \n> (The one for sarge is still considered 'experimental').\n>\n> Karl\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 19 Jun 2007 10:29:08 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright <[email protected]> writes:\n> Also, as I said before, I have done extensive query analysis and found \n> that the plans for the queries that are taking a long time are in fact \n> very reasonable. 
Here's an example from the application log of a query \n> that took way more time than its plan would seem to indicate it should:\n\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \n> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n> ((childidhash)::text = ($3)::text))\n> [2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \n> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n\nI see the discussion thread has moved on to consider lack-of-vacuuming\nas the main problem, but I didn't want to let this pass without\ncomment. The above plan is not necessarily good at all --- it depends\non how many rows are selected by the index condition alone (ie, jobid\nand childidhash) versus how many are selected by the index and filter\nconditions. If the index retrieves many rows, most of which are\neliminated by the filter condition, it's still gonna take a long time.\n\nIn this case it looks like the planner is afraid that that's exactly\nwhat will happen --- a cost of 14177 suggests that several thousand row\nfetches are expected to happen, and yet it's only predicting 5 rows out\nafter the filter. It's using this plan anyway because it has no better\nalternative, but you should think about whether a different index\ndefinition would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 10:36:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access " }, { "msg_contents": "Tom Lane wrote:\n> Karl Wright <[email protected]> writes:\n>> Also, as I said before, I have done extensive query analysis and found \n>> that the plans for the queries that are taking a long time are in fact \n>> very reasonable. Here's an example from the application log of a query \n>> that took way more time than its plan would seem to indicate it should:\n> \n>> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \n>> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n>> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n>> ((childidhash)::text = ($3)::text))\n>> [2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \n>> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n> \n> I see the discussion thread has moved on to consider lack-of-vacuuming\n> as the main problem, but I didn't want to let this pass without\n> comment. The above plan is not necessarily good at all --- it depends\n> on how many rows are selected by the index condition alone (ie, jobid\n> and childidhash) versus how many are selected by the index and filter\n> conditions. If the index retrieves many rows, most of which are\n> eliminated by the filter condition, it's still gonna take a long time.\n> \n> In this case it looks like the planner is afraid that that's exactly\n> what will happen --- a cost of 14177 suggests that several thousand row\n> fetches are expected to happen, and yet it's only predicting 5 rows out\n> after the filter. It's using this plan anyway because it has no better\n> alternative, but you should think about whether a different index\n> definition would help.\n> \n> \t\t\tregards, tom lane\n> \n\nWell, that's odd, because the hash in question that it is using is the \nSHA-1 hash of a URL. There's essentially one row per URL in this table. 
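One concrete reading of the "different index definition" suggestion above is to add childid to the index key, so that the equality test on it becomes part of the index condition rather than a filter applied to every fetched row; a sketch only, with a made-up index name, and with the caveat that childid holds full URLs so the index would be considerably larger:

-- Hypothetical replacement index: all three equality columns in the key, so only
-- genuinely matching rows are fetched from the heap; isnew stays a cheap filter.
CREATE INDEX intrinsiclink_job_hash_child
    ON intrinsiclink (jobid, childidhash, childid);
ANALYZE intrinsiclink;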
\n Even with a large table I would not expect more than a couple of \ncollisions at most.\n\nHow does it arrive at that estimate of 14,000?\n\nKarl\n\n", "msg_date": "Tue, 19 Jun 2007 10:48:09 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Karl Wright <[email protected]> writes:\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \n> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n> ((childidhash)::text = ($3)::text))\n> [2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \n> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n\n>> In this case it looks like the planner is afraid that that's exactly\n>> what will happen --- a cost of 14177 suggests that several thousand row\n>> fetches are expected to happen, and yet it's only predicting 5 rows out\n>> after the filter.\n\n> Well, that's odd, because the hash in question that it is using is the \n> SHA-1 hash of a URL. There's essentially one row per URL in this table. \n\nWhat about isnew?\n\nAlso, how many rows do *you* expect out of the query? The planner is\nnot going to be aware of the hashed relationship between childidhash\nand childid --- it'll think those are independent conditions which they\nevidently aren't. So it may be that the query really does retrieve\nthousands of rows, and the rows=5 estimate is bogus because it's\ndouble-counting the selectivity of the childid condition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 10:56:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access " }, { "msg_contents": "Tom Lane wrote:\n> Karl Wright <[email protected]> writes:\n>> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on \n>> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)\n>> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND \n>> ((childidhash)::text = ($3)::text))\n>> [2007-06-18 09:39:49,797]ERROR Plan: Filter: ((childid = ($4)::text) \n>> AND ((isnew = ($5)::bpchar) OR (isnew = ($6)::bpchar)))\n> \n>>> In this case it looks like the planner is afraid that that's exactly\n>>> what will happen --- a cost of 14177 suggests that several thousand row\n>>> fetches are expected to happen, and yet it's only predicting 5 rows out\n>>> after the filter.\n> \n>> Well, that's odd, because the hash in question that it is using is the \n>> SHA-1 hash of a URL. There's essentially one row per URL in this table. \n> \n> What about isnew?\n\nIsnew is simply a flag which I want to set for all rows that belong to \nthis particular child, but only if it's one of two particular values.\n\n> \n> Also, how many rows do *you* expect out of the query? The planner is\n> not going to be aware of the hashed relationship between childidhash\n> and childid --- it'll think those are independent conditions which they\n> evidently aren't. So it may be that the query really does retrieve\n> thousands of rows, and the rows=5 estimate is bogus because it's\n> double-counting the selectivity of the childid condition.\n> \n\nThis can vary, but I expect there to be at on average a few dozen rows \nreturned from the overall query. 
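As an aside on the question above about where an estimate in the thousands comes from: the planner derives it from the per-column statistics gathered by ANALYZE, which can be inspected directly; a sketch, assuming the standard pg_stats view:

-- n_distinct for childidhash is what drives the row estimate; with the era's tiny
-- default statistics target (10), a 13M-row table can easily make the column look
-- far less selective than it really is.
SELECT attname, n_distinct, null_frac, avg_width
FROM pg_stats
WHERE tablename = 'intrinsiclink'
  AND attname IN ('jobid', 'childidhash', 'childid');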
The only way the index-condition part \nof the query can be returning thousands of rows would be if: (a) there \nis really a lot of data of this kind, or (b) the hash function is \nbasically not doing its job and there are thousands of collisions occurring.\n\nIn fact, that's not the case. In psql I just did the following analysis:\n\n >>>>>>\nmetacarta=> explain select count(*) from intrinsiclink where \njobid=1181766706097 and \nchildidhash='7E130F3B688687757187F1638D8776ECEF3009E0';\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=14992.23..14992.24 rows=1 width=0)\n -> Index Scan using i1181764142395 on intrinsiclink \n(cost=0.00..14971.81 rows=8167 width=0)\n Index Cond: ((jobid = 1181766706097::bigint) AND \n((childidhash)::text = '7E130F3B688687757187F1638D8776ECEF3009E0'::text))\n(3 rows)\n\nmetacarta=> select count(*) from intrinsiclink where jobid=1181766706097 \nand childidhash='7E130F3B688687757187F1638D8776ECEF3009E0';\n count\n-------\n 0\n(1 row)\n<<<<<<\n\nGranted this is well after-the-fact, but you can see that the cost \nestimate is wildly wrong in this case.\n\nI did an ANALYZE on that table and repeated the explain, and got this:\n\n >>>>>>\nmetacarta=> analyze intrinsiclink;\nANALYZE\nmetacarta=> explain select count(*) from intrinsiclink where \njobid=1181766706097 and \nchildidhash='7E130F3B688687757187F1638D8776ECEF3009E0';\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15276.36..15276.37 rows=1 width=0)\n -> Index Scan using i1181764142395 on intrinsiclink \n(cost=0.00..15255.53 rows=8333 width=0)\n Index Cond: ((jobid = 1181766706097::bigint) AND \n((childidhash)::text = '7E130F3B688687757187F1638D8776ECEF3009E0'::text))\n(3 rows)\n<<<<<<\n\n... even more wildly wrong.\n\nKarl\n\n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Tue, 19 Jun 2007 11:28:23 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Gregory Stark writes:\n\n> VACUUM doesn't require shutting down the system, it doesn't lock any tables or\n> otherwise prevent other jobs from making progress. It does add extra i/o but\n\nIn addition to what Gregory pointed out, you may want to also consider using \nAutovacuum. That may also help.\n", "msg_date": "Tue, 19 Jun 2007 11:40:44 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Alvaro Herrera writes:\n\n> How large is the database? I must admit I have never seen a database\n> that took 4 days to vacuum. This could mean that your database is\n> humongous, or that the vacuum strategy is wrong for some reason.\n\nSpecially with 16GB of RAM.\n\nI have a setup with several databases (the largest of which is 1TB database) \nand I do a nightly vacuum analyze for ALL databases. It takes about 22 \nhours. And this is with constant updates to the large 1TB database. 
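The wildly wrong estimates demonstrated above are what the statistics-target suggestions a little further down the thread are aimed at; a hedged sketch of acting on them, with 100 as an illustrative target (the 8.1 default is 10):

-- Raise the per-column statistics target for the misestimated columns, regather
-- statistics, then compare the planner's estimate against reality.
ALTER TABLE intrinsiclink ALTER COLUMN childidhash SET STATISTICS 100;
ALTER TABLE intrinsiclink ALTER COLUMN jobid SET STATISTICS 100;
ANALYZE intrinsiclink;

EXPLAIN ANALYZE
SELECT count(*) FROM intrinsiclink
WHERE jobid = 1181766706097
  AND childidhash = '7E130F3B688687757187F1638D8776ECEF3009E0';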
This is \nwith Postgresql 8.1.3 \n", "msg_date": "Tue, 19 Jun 2007 11:45:49 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "\n\"Karl Wright\" <[email protected]> writes:\n\n>> In this case it looks like the planner is afraid that that's exactly\n>> what will happen --- a cost of 14177 suggests that several thousand row\n>> fetches are expected to happen, and yet it's only predicting 5 rows out\n>> after the filter. It's using this plan anyway because it has no better\n>> alternative, but you should think about whether a different index\n>> definition would help.\n\nAnother index won't help if the reason the cost is so high isn't because the\nindex isn't very selective but because there are lots of dead tuples.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 19 Jun 2007 16:46:20 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright <[email protected]> writes:\n> I did an ANALYZE on that table and repeated the explain, and got this:\n> ...\n> ... even more wildly wrong.\n\nHmm. You might need to increase the statistics target for your larger\ntables. It's probably not a big deal for queries like this one, but I'm\nworried that you may be getting bad plans for complicated joins.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 11:48:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access " }, { "msg_contents": "Karl Wright writes:\n\n> I'm not writing off autovacuum - just the concept that the large tables \n> aren't the ones that are changing. Unfortunately, they *are* the most \n> dynamically updated.\n\nWould be possible for you to partition the tables?\nBy date or some other fashion to try to have some tables not get affected by \nthe updates/inserts?\n\nI am in the process of breaking a DB.. to have tables by dates. Our \nhistorical data never changes.\n\nAlso, what is the physical size of all this data?\n\n", "msg_date": "Tue, 19 Jun 2007 11:49:58 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright wrote:\n\n> I did an ANALYZE on that table and repeated the explain, and got this:\n> \n> >>>>>>\n> metacarta=> analyze intrinsiclink;\n> ANALYZE\n> metacarta=> explain select count(*) from intrinsiclink where \n> jobid=1181766706097 and \n> childidhash='7E130F3B688687757187F1638D8776ECEF3009E0';\n> QUERY PLAN \n> \n> ------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=15276.36..15276.37 rows=1 width=0)\n> -> Index Scan using i1181764142395 on intrinsiclink \n> (cost=0.00..15255.53 rows=8333 width=0)\n> Index Cond: ((jobid = 1181766706097::bigint) AND \n> ((childidhash)::text = '7E130F3B688687757187F1638D8776ECEF3009E0'::text))\n> (3 rows)\n> <<<<<<\n> \n> ... even more wildly wrong.\n\nInteresting. What is the statistics target for this table? Try\nincreasing it, with ALTER TABLE ... 
SET STATISTICS, rerun analyze, and\ntry again.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 19 Jun 2007 11:50:23 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "\"Gregory Stark\" <[email protected]> writes:\n\n> \"Karl Wright\" <[email protected]> writes:\n>\n>>> In this case it looks like the planner is afraid that that's exactly\n>>> what will happen --- a cost of 14177 suggests that several thousand row\n>>> fetches are expected to happen, and yet it's only predicting 5 rows out\n>>> after the filter. It's using this plan anyway because it has no better\n>>> alternative, but you should think about whether a different index\n>>> definition would help.\n>\n> Another index won't help if the reason the cost is so high isn't because the\n> index isn't very selective but because there are lots of dead tuples.\n\nSorry, I didn't mean to say that was definitely the case, only that having\nbloated tables with lots of dead index pointers could have similar symptoms\nbecause the query still has to follow all those index pointers.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 19 Jun 2007 17:12:05 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Francisco Reyes wrote:\n\n> I have a setup with several databases (the largest of which is 1TB \n> database) and I do a nightly vacuum analyze for ALL databases. It takes \n> about 22 hours. And this is with constant updates to the large 1TB \n> database. This is with Postgresql 8.1.3\n\n22h nightly? Wow, you have long nights ;-).\n\nOn a serious note, the index vacuum improvements in 8.2 might help you \nto cut that down. You seem to be happy with your setup, but I thought \nI'd mention it..\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 19 Jun 2007 18:57:41 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of \n\tconcurrent access" }, { "msg_contents": "Heikki Linnakangas writes:\n\n> On a serious note, the index vacuum improvements in 8.2 might help you \n> to cut that down. You seem to be happy with your setup, but I thought \n> I'd mention it..\n\nI am really, really trying.. to go to 8.2.\nI have a thread on \"general\" going on for about a week.\nI am unable to restore a database on 8.2.4.. on a particular machine.\nDon't know if the issue is the machine configuration or whether I have found \na Postgresql bug.\n\nThe plan is to copy the data over and work on migrating to the second \nmachine.\n\nAlso we are splitting the database so historical information (which never \nchanges for us) will be in one DB and all the active/current data will be on \nanother. This way our backups/vacuums will be faster.\n", "msg_date": "Tue, 19 Jun 2007 14:45:36 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Francisco Reyes wrote:\n> Alvaro Herrera writes:\n> \n>> How large is the database? I must admit I have never seen a database\n>> that took 4 days to vacuum. 
This could mean that your database is\n>> humongous, or that the vacuum strategy is wrong for some reason.\n> \n> Specially with 16GB of RAM.\n> \n> I have a setup with several databases (the largest of which is 1TB \n> database) and I do a nightly vacuum analyze for ALL databases. It takes \n> about 22 hours. And this is with constant updates to the large 1TB \n> database. This is with Postgresql 8.1.3\n\nOkay - I started a VACUUM with the 8.1 database yesterday morning, \nhaving the database remain under load. As of 12:30 today (~27 hours), \nthe original VACUUM was still running. At that point:\n\n(a) I had to shut it down anyway because I needed to do another \nexperiment having to do with database export/import performance, and\n(b) the performance of individual queries had already degraded \nsignificantly in the same manner as what I'd seen before.\n\nSo, I guess this means that there's no way I can keep the database \nadequately vacuumed with my anticipated load and hardware. One thing or \nthe other will have to change.\n\nIs the VACUUM in 8.2 significantly faster than the one in 8.1? Or, is \nthe database less sensitive performance-wise to delayed VACUUM commands?\n\nKarl\n", "msg_date": "Wed, 20 Jun 2007 13:28:08 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of \n\tconcurrent access" }, { "msg_contents": "Karl Wright wrote:\n> So, I guess this means that there's no way I can keep the database \n> adequately vacuumed with my anticipated load and hardware. One thing or \n> the other will have to change.\n\nHave you checked your maintenance_work_mem setting? If it's not large \nenough, vacuum will need to scan through all indexes multiple times \ninstead of just once. With 16 GB of RAM you should set it to something \nlike 2GB I think, or even more.\n\n> Is the VACUUM in 8.2 significantly faster than the one in 8.1?\n\nYes, in particular if you have a lot of indexes. Scanning the indexes \nwas done in index page order, which in worst case means random I/O, and \nwe used to do an extra scan of all index pages to collect empty ones. \nNow it's all done as a single sequential pass.\n\n> Or, is \n> the database less sensitive performance-wise to delayed VACUUM commands?\n\nNo.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 20 Jun 2007 18:40:37 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of \n\tconcurrent access" }, { "msg_contents": "Karl Wright wrote:\n\n> (b) the performance of individual queries had already degraded \n> significantly in the same manner as what I'd seen before.\n\nYou didn't answer whether you had smaller, more frequently updated\ntables that need more vacuuming. This comment makes me think you do. 
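On the maintenance_work_mem point raised just above: for a manual VACUUM the setting can be raised for the session alone, which is an easy way to try Heikki's suggestion before touching postgresql.conf; a sketch (on 8.1 the value is given in kilobytes, while 8.2 also accepts suffixes such as '1GB'):

SHOW maintenance_work_mem;          -- the 8.1 default is only 16384 (16MB)
SET maintenance_work_mem = 1048576; -- 1GB for this session; the advice above goes higher
VACUUM VERBOSE intrinsiclink;       -- fewer index scan passes if the dead-tuple list fits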
I\nthink what you should be looking at is whether you can forget vacuuming\nthe whole database in one go, and make it more granular.\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n", "msg_date": "Wed, 20 Jun 2007 13:53:00 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright writes:\n\n> Okay - I started a VACUUM with the 8.1 database yesterday morning, \n> having the database remain under load. As of 12:30 today (~27 hours), \n> the original VACUUM was still running. At that point:\n\nI don't recall if you said it already, but what is your \nmaintenance_work_mem?\n \n> (a) I had to shut it down anyway because I needed to do another \n> experiment having to do with database export/import performance, and\n\nDo you know which tables change the most often?\nHave you tried to do vacuum of those one at a time and see how long they \ntake?\n\n> (b) the performance of individual queries had already degraded \n> significantly in the same manner as what I'd seen before.\n\nIf you have a lot of inserts perhaps you can do analyze more often also. \n \n> So, I guess this means that there's no way I can keep the database \n> adequately vacuumed with my anticipated load and hardware.\n\nIt is a possibility, but you could consider other strategies.. totally \ndependant on the programs accessing the data..\n\nFor example:\ndo you have any historical data that never changes?\nCould that be moved to a different database in that same machine or another \nmachine? That would decrease your vacuum times.\nAlso partitioning the data so data that never changes is in separate \ntables may also help (but I am not sure of this).\n\nGiven the sizes you sent to the list, it may be simply that it is more than \nthe hardware can handle.\n", "msg_date": "Wed, 20 Jun 2007 13:53:07 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Alvaro Herrera wrote:\n> Karl Wright wrote:\n> \n>> (b) the performance of individual queries had already degraded \n>> significantly in the same manner as what I'd seen before.\n> \n> You didn't answer whether you had smaller, more frequently updated\n> tables that need more vacuuming. This comment makes me think you do. I\n> think what you should be looking at is whether you can forget vacuuming\n> the whole database in one go, and make it more granular.\n> \n\nI am afraid that I did answer this. My largest tables are the ones \ncontinually being updated. The smaller ones are updated only infrequently.\n\nKarl\n\n", "msg_date": "Wed, 20 Jun 2007 13:55:20 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Karl Wright wrote:\n> Alvaro Herrera wrote:\n> >Karl Wright wrote:\n> >\n> >>(b) the performance of individual queries had already degraded \n> >>significantly in the same manner as what I'd seen before.\n> >\n> >You didn't answer whether you had smaller, more frequently updated\n> >tables that need more vacuuming. This comment makes me think you do. 
I\n> >think what you should be looking at is whether you can forget vacuuming\n> >the whole database in one go, and make it more granular.\n> \n> I am afraid that I did answer this. My largest tables are the ones \n> continually being updated. The smaller ones are updated only infrequently.\n\nCan you afford to vacuum them in parallel?\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n", "msg_date": "Wed, 20 Jun 2007 13:57:58 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Francisco Reyes wrote:\n> Karl Wright writes:\n> \n>> Okay - I started a VACUUM with the 8.1 database yesterday morning, \n>> having the database remain under load. As of 12:30 today (~27 hours), \n>> the original VACUUM was still running. At that point:\n> \n> I don't recall if you said it already, but what is your \n> maintenance_work_mem?\n> \n\nI'm trying that now.\n\n>> (a) I had to shut it down anyway because I needed to do another \n>> experiment having to do with database export/import performance, and\n> \n> Do you know which tables change the most often?\n> Have you tried to do vacuum of those one at a time and see how long they \n> take?\n\nI can certainly do that, but at the rate it's currently operating that \nmay take many more days.\n\n> \n>> (b) the performance of individual queries had already degraded \n>> significantly in the same manner as what I'd seen before.\n> \n> If you have a lot of inserts perhaps you can do analyze more often also.\n\nI'm not getting bad query plans; I'm getting good plans but slow \nexecution. This is consistent with what someone else said, which was \nthat if you didn't run VACUUM enough, then dead tuples would cause \nperformance degradation of the kind I am seeing.\n\n(FWIW, ANALYZE operations are kicked off after every 30,000 inserts, \nupdates, or deletes, by the application itself).\n\n>> So, I guess this means that there's no way I can keep the database \n>> adequately vacuumed with my anticipated load and hardware.\n> \n> It is a possibility, but you could consider other strategies.. totally \n> dependant on the programs accessing the data..\n> \n> For example:\n> do you have any historical data that never changes?\n\nSome, but it's insignificant compared to the big tables that change all \nover the place.\n\n> Could that be moved to a different database in that same machine or \n> another machine? That would decrease your vacuum times.\n\nThat's not an option, since we ship appliances and this would require \nthat each appliance somehow come in pairs.\n\n> Also partitioning the data so data that never changes is in separate \n> tables may also help (but I am not sure of this).\n> \n\nRight, see earlier discussion.\n\n> Given the sizes you sent to the list, it may be simply that it is more \n> than the hardware can handle.\n> \n\nI'm going to recommend going to 8.2 so that we get as much improvement \nas possible before panicking entirely. 
:-)\n\nKarl\n", "msg_date": "Wed, 20 Jun 2007 14:01:34 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of \n\tconcurrent access" }, { "msg_contents": "Alvaro Herrera wrote:\n> Karl Wright wrote:\n>> Alvaro Herrera wrote:\n>>> Karl Wright wrote:\n>>>\n>>>> (b) the performance of individual queries had already degraded \n>>>> significantly in the same manner as what I'd seen before.\n>>> You didn't answer whether you had smaller, more frequently updated\n>>> tables that need more vacuuming. This comment makes me think you do. I\n>>> think what you should be looking at is whether you can forget vacuuming\n>>> the whole database in one go, and make it more granular.\n>> I am afraid that I did answer this. My largest tables are the ones \n>> continually being updated. The smaller ones are updated only infrequently.\n> \n> Can you afford to vacuum them in parallel?\n> \n\nHmm, interesting question. If VACUUM is disk limited then it wouldn't \nhelp, probably, unless I moved various tables to different disks \nsomehow. Let me think about whether that might be possible.\n\nKarl\n\n", "msg_date": "Wed, 20 Jun 2007 14:03:28 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Karl Wright wrote:\n> Alvaro Herrera wrote:\n> >Karl Wright wrote:\n\n> >>I am afraid that I did answer this. My largest tables are the ones \n> >>continually being updated. The smaller ones are updated only \n> >>infrequently.\n> >\n> >Can you afford to vacuum them in parallel?\n> \n> Hmm, interesting question. If VACUUM is disk limited then it wouldn't \n> help, probably, unless I moved various tables to different disks \n> somehow. Let me think about whether that might be possible.\n\nWell, is it disk limited? Do you have the vacuum_delay stuff enabled?\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"I would rather have GNU than GNOT.\" (ccchips, lwn.net/Articles/37595/)\n", "msg_date": "Wed, 20 Jun 2007 14:06:28 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Is there a sensible way to partition the large table into smaller \ntables?\n\nMike Stone\n", "msg_date": "Wed, 20 Jun 2007 14:08:55 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables,\n lots of concurrent access" }, { "msg_contents": "Michael Stone wrote:\n> Is there a sensible way to partition the large table into smaller tables?\n\nIt entirely depends on your data set.\n\nJoshua D. Drake\n\n\n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 20 Jun 2007 11:14:45 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "On Wed, Jun 20, 2007 at 02:01:34PM -0400, Karl Wright wrote:\n> (FWIW, ANALYZE operations are kicked off after every 30,000 inserts, \n> updates, or deletes, by the application itself).\n\nI don't think you should do it that way. I suspect that automatic\nVACUUM ANALYSE way more often on each table -- like maybe in a loop\n-- would be better for your case. \n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Wed, 20 Jun 2007 14:15:11 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "On Wed, Jun 20, 2007 at 11:14:45AM -0700, Joshua D. Drake wrote:\n>Michael Stone wrote:\n>>Is there a sensible way to partition the large table into smaller tables?\n>\n>It entirely depends on your data set.\n\nYes, that's why it was a question rather than a suggestion. :)\n\nMike Stone\n", "msg_date": "Wed, 20 Jun 2007 14:20:53 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables,\n lots of concurrent access" }, { "msg_contents": "On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n\n> I am afraid that I did answer this. My largest tables\n> are the ones continually being updated. The smaller\n> ones are updated only infrequently. \n\nYou know, it actually sounds like you're getting whacked by the same \nproblem that got us a while back. It sounds like you weren't vacuuming \nfrequently enough initially, and then tried vacuuming later, only after \nyou noticed performance degrade.\n\nUnfortunately what that means, is for several weeks or months, Postgres \nhas not been reusing rows on your (admittedly) active and large tables; \nit just appends at the end, and lets old rows slowly bloat that table \nlarger and larger. Indexes too, will suffer from dead pages. As \nfrightening/sickening as this sounds, you may need to dump/restore the \nreally huge table, or vacuum-full to put it on a crash diet, and then \nmaintain a strict daily or bi-daily vacuum schedule to keep it under \ncontrol.\n\nThe reason I think this: even with several 200M row tables, vacuums \nshouldn't take over 24 hours. Ever. Do a vacuum verbose and see just \nhow many pages it's trying to reclaim. I'm willing to wager it's \nseveral orders of magnitude higher than the max_fsm_pages setting \nyou've stuck in your config file.\n\nYou'll also want to see which rows in your 250M+ table are actually \nactive, and shunt the stable rows to another (warehouse) table maybe \navailable only via view or table partition. I mean, your most active \ntable is also the largest? Seems a bit backward, to me.\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 
847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n", "msg_date": "Wed, 20 Jun 2007 14:25:28 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Shaun Thomas wrote:\n> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n> \n> \n>>I am afraid that I did answer this. My largest tables\n>>are the ones continually being updated. The smaller\n>>ones are updated only infrequently. \n> \n> \n> You know, it actually sounds like you're getting whacked by the same \n> problem that got us a while back. It sounds like you weren't vacuuming \n> frequently enough initially, and then tried vacuuming later, only after \n> you noticed performance degrade.\n> \n> Unfortunately what that means, is for several weeks or months, Postgres \n> has not been reusing rows on your (admittedly) active and large tables; \n> it just appends at the end, and lets old rows slowly bloat that table \n> larger and larger. Indexes too, will suffer from dead pages. As \n> frightening/sickening as this sounds, you may need to dump/restore the \n> really huge table, or vacuum-full to put it on a crash diet, and then \n> maintain a strict daily or bi-daily vacuum schedule to keep it under \n> control.\n> \n\nA nice try, but I had just completed a VACUUM on this database three \nhours prior to starting the VACUUM that I gave up on after 27 hours. So \nI don't see how much more frequently I could do it. (The one I did \nearlier finished in six hours - but to accomplish that I had to shut \ndown EVERYTHING else that machine was doing.)\n\nKarl\n\n\n> The reason I think this: even with several 200M row tables, vacuums \n> shouldn't take over 24 hours. Ever. Do a vacuum verbose and see just \n> how many pages it's trying to reclaim. I'm willing to wager it's \n> several orders of magnitude higher than the max_fsm_pages setting \n> you've stuck in your config file.\n> \n> You'll also want to see which rows in your 250M+ table are actually \n> active, and shunt the stable rows to another (warehouse) table maybe \n> available only via view or table partition. I mean, your most active \n> table is also the largest? Seems a bit backward, to me.\n> \n\n", "msg_date": "Wed, 20 Jun 2007 17:29:41 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "On Wed, Jun 20, 2007 at 05:29:41PM -0400, Karl Wright wrote:\n> A nice try, but I had just completed a VACUUM on this database three \n> hours prior to starting the VACUUM that I gave up on after 27 hours. \n\nYou keep putting it that way, but your problem is essentially that\nyou have several tables that _all_ need to be vacuumed. VACUUM need\nnot actually be a database-wide operation.\n\n> earlier finished in six hours - but to accomplish that I had to shut \n> down EVERYTHING else that machine was doing.)\n\nThis suggests to me that you simply don't have enough machine for the\njob. 
You probably need more I/O, and actually more CPU wouldn't\nhurt, because then you could run three VACUUMs on three separate\ntables (on three separate disks, of course) and not have to switch\nthem off and on the CPU.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n", "msg_date": "Wed, 20 Jun 2007 17:40:24 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Karl Wright wrote:\n> Shaun Thomas wrote:\n>> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n>>\n>>\n>>> I am afraid that I did answer this. My largest tables\n>>> are the ones continually being updated. The smaller\n>>> ones are updated only infrequently. \n>>\n>>\n>> You know, it actually sounds like you're getting whacked by the same \n>> problem that got us a while back. It sounds like you weren't \n>> vacuuming frequently enough initially, and then tried vacuuming \n>> later, only after you noticed performance degrade.\n>>\n>> Unfortunately what that means, is for several weeks or months, \n>> Postgres has not been reusing rows on your (admittedly) active and \n>> large tables; it just appends at the end, and lets old rows slowly \n>> bloat that table larger and larger. Indexes too, will suffer from \n>> dead pages. As frightening/sickening as this sounds, you may need to \n>> dump/restore the really huge table, or vacuum-full to put it on a \n>> crash diet, and then maintain a strict daily or bi-daily vacuum \n>> schedule to keep it under control.\n>>\n>\n> A nice try, but I had just completed a VACUUM on this database three \n> hours prior to starting the VACUUM that I gave up on after 27 hours. \n> So I don't see how much more frequently I could do it. (The one I did \n> earlier finished in six hours - but to accomplish that I had to shut \n> down EVERYTHING else that machine was doing.)\n\nSo, have you ever run vacuum full or reindex on this database?\n\nYou are aware of the difference between how vacuum and vacuum full work, \nright?\n\nvacuum := mark deleted tuples as available, leave in table\nvacuum full := compact tables to remove deleted tuples.\n\nWhile you should generally avoid vacuum full, if you've let your \ndatabase get so bloated that the majority of space in your tables is now \nempty / deleted tuples, you likely need to vacuuum full / reindex it.\n\nFor instance, on my tiny little 31 Gigabyte reporting database, the main \ntable takes up about 17 Gigs. 
This query gives you some idea how many \nbytes each row is taking on average:\n\nselect relname, relpages::float*8192 as size, reltuples, \n(relpages::double precision*8192)/reltuples::double precision as \nbytes_per_row from pg_class where relname = 'businessrequestsummary';\n relname | size | reltuples | bytes_per_row\n------------------------+-------------+-------------+-----------------\n businessrequestsummary | 17560944640 | 5.49438e+07 | 319.61656229454\n\nNote that these numbers are updated by running analyze...\n\nWhat does it say about your DB?\n", "msg_date": "Wed, 20 Jun 2007 17:45:56 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Scott Marlowe wrote:\n> Karl Wright wrote:\n> \n>> Shaun Thomas wrote:\n>>\n>>> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n>>>\n>>>\n>>>> I am afraid that I did answer this. My largest tables\n>>>> are the ones continually being updated. The smaller\n>>>> ones are updated only infrequently. \n>>>\n>>>\n>>>\n>>> You know, it actually sounds like you're getting whacked by the same \n>>> problem that got us a while back. It sounds like you weren't \n>>> vacuuming frequently enough initially, and then tried vacuuming \n>>> later, only after you noticed performance degrade.\n>>>\n>>> Unfortunately what that means, is for several weeks or months, \n>>> Postgres has not been reusing rows on your (admittedly) active and \n>>> large tables; it just appends at the end, and lets old rows slowly \n>>> bloat that table larger and larger. Indexes too, will suffer from \n>>> dead pages. As frightening/sickening as this sounds, you may need to \n>>> dump/restore the really huge table, or vacuum-full to put it on a \n>>> crash diet, and then maintain a strict daily or bi-daily vacuum \n>>> schedule to keep it under control.\n>>>\n>>\n>> A nice try, but I had just completed a VACUUM on this database three \n>> hours prior to starting the VACUUM that I gave up on after 27 hours. \n>> So I don't see how much more frequently I could do it. (The one I did \n>> earlier finished in six hours - but to accomplish that I had to shut \n>> down EVERYTHING else that machine was doing.)\n> \n> \n> So, have you ever run vacuum full or reindex on this database?\n> \n\nNo. However, this database has only existed since last Thursday afternoon.\n\n> You are aware of the difference between how vacuum and vacuum full work, \n> right?\n> \n> vacuum := mark deleted tuples as available, leave in table\n> vacuum full := compact tables to remove deleted tuples.\n> \n> While you should generally avoid vacuum full, if you've let your \n> database get so bloated that the majority of space in your tables is now \n> empty / deleted tuples, you likely need to vacuuum full / reindex it.\n> \n\nIf the database is continually growing, should VACUUM FULL be necessary?\n\n> For instance, on my tiny little 31 Gigabyte reporting database, the main \n> table takes up about 17 Gigs. 
This query gives you some idea how many \n> bytes each row is taking on average:\n> \n> select relname, relpages::float*8192 as size, reltuples, \n> (relpages::double precision*8192)/reltuples::double precision as \n> bytes_per_row from pg_class where relname = 'businessrequestsummary';\n> relname | size | reltuples | bytes_per_row\n> ------------------------+-------------+-------------+-----------------\n> businessrequestsummary | 17560944640 | 5.49438e+07 | 319.61656229454\n> \n> Note that these numbers are updated by running analyze...\n> \n> What does it say about your DB?\n> \n\nI wish I could tell you. Like I said, I had to abandon this project to \ntest out an upgrade procedure involving pg_dump and pg_restore. (The \nupgrade also seems to take a very long time - over 6 hours so far.) \nWhen it is back online I can provide further information.\n\nKarl\n", "msg_date": "Wed, 20 Jun 2007 19:22:47 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Karl Wright wrote:\n> Scott Marlowe wrote:\n>> Karl Wright wrote:\n>>\n>>> Shaun Thomas wrote:\n>>>\n>>>> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n>>>>\n>>>>\n>>>>> I am afraid that I did answer this. My largest tables\n>>>>> are the ones continually being updated. The smaller\n>>>>> ones are updated only infrequently. \n>>>>\n>>>>\n>>>>\n>>>> You know, it actually sounds like you're getting whacked by the \n>>>> same problem that got us a while back. It sounds like you weren't \n>>>> vacuuming frequently enough initially, and then tried vacuuming \n>>>> later, only after you noticed performance degrade.\n>>>>\n>>>> Unfortunately what that means, is for several weeks or months, \n>>>> Postgres has not been reusing rows on your (admittedly) active and \n>>>> large tables; it just appends at the end, and lets old rows slowly \n>>>> bloat that table larger and larger. Indexes too, will suffer from \n>>>> dead pages. As frightening/sickening as this sounds, you may need \n>>>> to dump/restore the really huge table, or vacuum-full to put it on \n>>>> a crash diet, and then maintain a strict daily or bi-daily vacuum \n>>>> schedule to keep it under control.\n>>>>\n>>>\n>>> A nice try, but I had just completed a VACUUM on this database three \n>>> hours prior to starting the VACUUM that I gave up on after 27 \n>>> hours. So I don't see how much more frequently I could do it. (The \n>>> one I did earlier finished in six hours - but to accomplish that I \n>>> had to shut down EVERYTHING else that machine was doing.)\n>>\n>>\n>> So, have you ever run vacuum full or reindex on this database?\n>>\n>\n> No. 
However, this database has only existed since last Thursday \n> afternoon.\nWell, a couple of dozen update statements with no where clause on large \ntables could bloat it right up.\n\nIt's not about age so much as update / delete patterns.\n>\n>> You are aware of the difference between how vacuum and vacuum full \n>> work, right?\n>>\n>> vacuum := mark deleted tuples as available, leave in table\n>> vacuum full := compact tables to remove deleted tuples.\n>>\n>> While you should generally avoid vacuum full, if you've let your \n>> database get so bloated that the majority of space in your tables is \n>> now empty / deleted tuples, you likely need to vacuuum full / reindex \n>> it.\n>>\n> If the database is continually growing, should VACUUM FULL be necessary?\nIf it's only growing, with no deletes or updates, then no. Generally, \non a properly vacuumed database, vacuum full should never be needed.\n>> For instance, on my tiny little 31 Gigabyte reporting database, the \n>> main table takes up about 17 Gigs. This query gives you some idea \n>> how many bytes each row is taking on average:\n>>\n>> select relname, relpages::float*8192 as size, reltuples, \n>> (relpages::double precision*8192)/reltuples::double precision as \n>> bytes_per_row from pg_class where relname = 'businessrequestsummary';\n>> relname | size | reltuples | bytes_per_row\n>> ------------------------+-------------+-------------+-----------------\n>> businessrequestsummary | 17560944640 | 5.49438e+07 | 319.61656229454\n>>\n>> Note that these numbers are updated by running analyze...\n>>\n>> What does it say about your DB?\n>>\n>\n> I wish I could tell you. Like I said, I had to abandon this project \n> to test out an upgrade procedure involving pg_dump and pg_restore. \n> (The upgrade also seems to take a very long time - over 6 hours so \n> far.) When it is back online I can provide further information.\n\nWell, let us know. I would definitely recommend getting more / faster \ndisks. Right now I've got a simple 4 disk RAID10 on the way to replace \nthe single SATA drive I'm running on right now. I can't wait.\n", "msg_date": "Thu, 21 Jun 2007 10:44:37 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Scott Marlowe wrote:\n> Karl Wright wrote:\n>> Scott Marlowe wrote:\n>>> Karl Wright wrote:\n>>>\n>>>> Shaun Thomas wrote:\n>>>>\n>>>>> On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:\n>>>>>\n>>>>>\n>>>>>> I am afraid that I did answer this. My largest tables\n>>>>>> are the ones continually being updated. The smaller\n>>>>>> ones are updated only infrequently. \n>>>>>\n>>>>>\n>>>>>\n>>>>> You know, it actually sounds like you're getting whacked by the \n>>>>> same problem that got us a while back. It sounds like you weren't \n>>>>> vacuuming frequently enough initially, and then tried vacuuming \n>>>>> later, only after you noticed performance degrade.\n>>>>>\n>>>>> Unfortunately what that means, is for several weeks or months, \n>>>>> Postgres has not been reusing rows on your (admittedly) active and \n>>>>> large tables; it just appends at the end, and lets old rows slowly \n>>>>> bloat that table larger and larger. Indexes too, will suffer from \n>>>>> dead pages. 
As frightening/sickening as this sounds, you may need \n>>>>> to dump/restore the really huge table, or vacuum-full to put it on \n>>>>> a crash diet, and then maintain a strict daily or bi-daily vacuum \n>>>>> schedule to keep it under control.\n>>>>>\n>>>>\n>>>> A nice try, but I had just completed a VACUUM on this database three \n>>>> hours prior to starting the VACUUM that I gave up on after 27 \n>>>> hours. So I don't see how much more frequently I could do it. (The \n>>>> one I did earlier finished in six hours - but to accomplish that I \n>>>> had to shut down EVERYTHING else that machine was doing.)\n>>>\n>>>\n>>> So, have you ever run vacuum full or reindex on this database?\n>>>\n>>\n>> No. However, this database has only existed since last Thursday \n>> afternoon.\n> Well, a couple of dozen update statements with no where clause on large \n> tables could bloat it right up.\n> \n> It's not about age so much as update / delete patterns.\n>>\n>>> You are aware of the difference between how vacuum and vacuum full \n>>> work, right?\n>>>\n>>> vacuum := mark deleted tuples as available, leave in table\n>>> vacuum full := compact tables to remove deleted tuples.\n>>>\n>>> While you should generally avoid vacuum full, if you've let your \n>>> database get so bloated that the majority of space in your tables is \n>>> now empty / deleted tuples, you likely need to vacuuum full / reindex \n>>> it.\n>>>\n>> If the database is continually growing, should VACUUM FULL be necessary?\n> If it's only growing, with no deletes or updates, then no. Generally, \n> on a properly vacuumed database, vacuum full should never be needed.\n>>> For instance, on my tiny little 31 Gigabyte reporting database, the \n>>> main table takes up about 17 Gigs. This query gives you some idea \n>>> how many bytes each row is taking on average:\n>>>\n>>> select relname, relpages::float*8192 as size, reltuples, \n>>> (relpages::double precision*8192)/reltuples::double precision as \n>>> bytes_per_row from pg_class where relname = 'businessrequestsummary';\n>>> relname | size | reltuples | bytes_per_row\n>>> ------------------------+-------------+-------------+-----------------\n>>> businessrequestsummary | 17560944640 | 5.49438e+07 | 319.61656229454\n>>>\n>>> Note that these numbers are updated by running analyze...\n>>>\n>>> What does it say about your DB?\n>>>\n>>\n>> I wish I could tell you. Like I said, I had to abandon this project \n>> to test out an upgrade procedure involving pg_dump and pg_restore. \n>> (The upgrade also seems to take a very long time - over 6 hours so \n>> far.) When it is back online I can provide further information.\n> \n> Well, let us know. I would definitely recommend getting more / faster \n> disks. Right now I've got a simple 4 disk RAID10 on the way to replace \n> the single SATA drive I'm running on right now. I can't wait.\n> \n\nI checked the disk picture - this is a RAID disk array with 6 drives, \nwith a bit more than 1Tbyte total storage. 15,000 RPM. It would be \nhard to get more/faster disk than that.\n\nKarl\n", "msg_date": "Thu, 21 Jun 2007 12:29:49 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n> I checked the disk picture - this is a RAID disk array with 6 drives, \n> with a bit more than 1Tbyte total storage. 15,000 RPM. 
It would be \n> hard to get more/faster disk than that.\n\nWhat kind of RAID? It's _easy_ to get faster disk that 6 drives in\nRAID5, even if they're 15,000 RPM. The rotation speed is the least\nof your problems in many RAID implementations.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n", "msg_date": "Thu, 21 Jun 2007 13:11:48 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent access" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n> \n>> I checked the disk picture - this is a RAID disk array with 6 drives, \n>> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be \n>> hard to get more/faster disk than that.\n>> \n>\n> What kind of RAID? It's _easy_ to get faster disk that 6 drives in\n> RAID5, even if they're 15,000 RPM. The rotation speed is the least\n> of your problems in many RAID implementations.\n> \nAlso, the controller means a lot. I'd rather have a 4 disk RAID-10 with \nan Areca card with BBU Cache than a 16 disk RAID 5 on an adaptec (with \nor without cache... :) )\n", "msg_date": "Thu, 21 Jun 2007 18:10:32 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n> \n>> I checked the disk picture - this is a RAID disk array with 6 drives, \n>> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be \n>> hard to get more/faster disk than that.\n>> \n>\n> What kind of RAID? It's _easy_ to get faster disk that 6 drives in\n> RAID5, even if they're 15,000 RPM. The rotation speed is the least\n> of your problems in many RAID implementations.\n\nOh, and the driver rev means a lot too. Some older driver revisions for \nsome RAID cards are very slow.\n", "msg_date": "Thu, 21 Jun 2007 18:11:04 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "Scott Marlowe wrote:\n> Andrew Sullivan wrote:\n>> On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n>> \n>>> I checked the disk picture - this is a RAID disk array with 6 drives, \n>>> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be \n>>> hard to get more/faster disk than that.\n>>> \n>>\n>> What kind of RAID? It's _easy_ to get faster disk that 6 drives in\n>> RAID5, even if they're 15,000 RPM. The rotation speed is the least\n>> of your problems in many RAID implementations.\n>> \n> Also, the controller means a lot. I'd rather have a 4 disk RAID-10 with \n> an Areca card with BBU Cache than a 16 disk RAID 5 on an adaptec (with \n> or without cache... :) )\n\nOh come on... Adaptec makes a great skeet.\n\nJoshua D. Drake\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 21 Jun 2007 16:24:56 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables, lots of concurrent\n access" }, { "msg_contents": "On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n>I checked the disk picture - this is a RAID disk array with 6 drives, \n>with a bit more than 1Tbyte total storage. 15,000 RPM. It would be \n>hard to get more/faster disk than that.\n\nWell, it's not hard to more disk than that, but you'd probably have to \nlook at an external storage array (or more than one). A larger number of \nlarger/slower drives, splitting indices away from data, etc., will \nalmost certainly outperform 6 disks, 15k RPM or not.\n\nMike Stone\n", "msg_date": "Mon, 25 Jun 2007 07:37:48 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables,\n lots of concurrent access" }, { "msg_contents": "Michael Stone wrote:\n> On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:\n>> I checked the disk picture - this is a RAID disk array with 6 drives, \n>> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be \n>> hard to get more/faster disk than that.\n> \n> Well, it's not hard to more disk than that, but you'd probably have to \n> look at an external storage array (or more than one). A larger number of \n> larger/slower drives, splitting indices away from data, etc., will almost\n> certainly outperform 6 disks, 15k RPM or not.\n> \nI also have 6 hard drives (Four of these are 10,000RPM Ultra/320 SCSI hard\ndrives, and the other two will be soon), 4 of which are dedicated\nexclusively to the DBMS, and the other two are for everything else. I am\ncurrently running IBM DB2, using the 4 SCSI drives in raw mode and letting\nDB2 do the IO (except for the bottom level device drivers). I have all the\nIndices on one drive, and most of the data on the other three, except for\nsome very small, seldom used tables (one has two rows, one has about 10\nrows) that are managed by the OS on the other drives. I have tested this and\nthe bottleneck is the logfiles. For this reason, I am about to upgrade the\n\"everything else\" drives to SCSI drives (the logfiles are on one of these).\nThey are currently 7200 rpm EIDE drives, but the SCSI ones are sitting on\ntop of the computer now, ready to be installed.\n\nWhen I upgrade from RHEL3 to RHEL5 (disks for that are also sitting on top\nof the computer), I will be switching from DB2 to postgreSQL, and that will\nbe an opportunity to lay out the disks differently. I think the partitions\nwill end up being about the same, but for the four main data drives, I am\nthinking about doing something like this, where D is data and X is Index,\nand Ti is table.\n\nDrive 3\tDrive 4\tDrive 5\tDrive 6\nDT1\tXT1\n\tDT2\tXT2\n\t\tDT3\tXt3\nXT4\t\t\tDT4\netc.\n\nNow once that is set up and populated, it might make sense to move things\naround somewhat to further reduce seek contention. 
But that would require\nactually populating the database and measuring it.\nThis setup would probably be pretty good if using just T1 and T3, for\nexample, but less good if using just T1 and T2. So ideal, such as it is,\nwould depend on the accesses being made by the usual program to the database.\n\nThese drives are about 17 GBytes each, which is enough for the database in\nquestion. (The other two are about 80 GBytes each, which is enough to run\nLinux and my other stuff on.)\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:45:01 up 4 days, 16:20, 3 users, load average: 4.23, 4.24, 4.21\n", "msg_date": "Mon, 25 Jun 2007 09:06:55 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance query about large tables,\n lots of concurrent access" } ]
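A slightly more general form of the pg_class check used earlier in this thread, as a sketch only: relpages (refreshed by each VACUUM or ANALYZE) is enough to see which tables and indexes are actually taking the space before deciding what needs CLUSTER, REINDEX, or more frequent per-table vacuuming. The 8192-byte page size matches the default build; the ordering and LIMIT are illustrative assumptions, not something taken from the messages.

-- List the largest tables and indexes by on-disk pages; relpages is only
-- as fresh as the most recent VACUUM/ANALYZE on each relation.
SELECT relname,
       relkind,
       relpages,
       relpages::bigint * 8192 AS approx_bytes,
       reltuples
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 20;

Running the same query before and after a maintenance window gives a quick measure of how much space the CLUSTER / VACUUM FULL pass actually reclaimed.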
[ { "msg_contents": "Hello,\n\nWe have installed postgres 8.2.0\n\ndefault time zone which postgres server using is\n\ntemplate1=# SHOW timezone;\n TimeZone\n-----------\n ETC/GMT-5\n(1 row)\n\n\nBut we want to set this timezone parameter to IST.\nOur system timezone is also in IST. We are using solaris.\n\nPlease provide me some help regarding this.\n\n\n\nThanks,\n\nSoni\n\nHello,\n \nWe have installed postgres 8.2.0\n \ndefault time zone which postgres server using is \n \ntemplate1=# SHOW timezone; TimeZone----------- ETC/GMT-5(1 row) \n \nBut we want to set this timezone parameter to IST.\nOur system timezone is also in IST. We are using solaris.\n\nPlease provide me some help regarding this.\n \nThanks,\nSoni", "msg_date": "Tue, 19 Jun 2007 13:12:58 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding Timezone" }, { "msg_contents": "soni de wrote:\n> But we want to set this timezone parameter to IST.\n> Our system timezone is also in IST. We are using solaris.\n\nThis is the performance-list, and this is not a performance-related \nquestion. Please use the pgsql-general or pgsql-novice list for this \nkind of questions.\n\nPostgreSQL should pick up the correct timezone from system \nconfiguration. I don't know why that's not happening in your case, but \nyou can use the \"timezone\" parameter in postgresql.conf to set it \nmanually. See manual: \nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-client.html#GUC-TIMEZONE\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 19 Jun 2007 08:51:27 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Timezone" }, { "msg_contents": "am Tue, dem 19.06.2007, um 13:12:58 +0530 mailte soni de folgendes:\n> Hello,\n> \n> We have installed postgres 8.2.0\n> \n> default time zone which postgres server using is\n> \n> template1=# SHOW timezone;\n> TimeZone\n> -----------\n> ETC/GMT-5\n> (1 row)\n> \n> \n> But we want to set this timezone parameter to IST.\n> Our system timezone is also in IST. We are using solaris.\n\nALTER DATABASE foo SET TIMEZONE TO 'bla';\n\nYou can alter the template-database.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Tue, 19 Jun 2007 09:55:05 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Timezone" }, { "msg_contents": "\"soni de\" <[email protected]> writes:\n> But we want to set this timezone parameter to IST.\n\nWhich \"IST\" are you interested in? Irish, Israel, or Indian Standard Time?\nPostgres prefers to use the zic timezone names, which are less\nambiguous. 
Try this to see likely options:\n\nregression=# select * from pg_timezone_names where abbrev = 'IST';\n name | abbrev | utc_offset | is_dst\n---------------+--------+------------+--------\n Asia/Calcutta | IST | 05:30:00 | f\n Asia/Colombo | IST | 05:30:00 | f\n Europe/Dublin | IST | 01:00:00 | t\n Eire | IST | 01:00:00 | t\n(4 rows)\n\nIf you're after Indian Standard Time, set timezone to 'Asia/Calcutta'.\nYou'll probably also want to set timezone_abbreviations to 'India' so\nthat \"IST\" is interpreted the way you want in timestamp datatype input.\nSee\nhttp://www.postgresql.org/docs/8.2/static/datetime-config-files.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 12:20:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Timezone " } ]
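Putting the thread's suggestions in one place, as a sketch only, for the Indian Standard Time case; 'mydb' is a placeholder database name:

-- Per-session:
SET timezone TO 'Asia/Calcutta';
SHOW timezone;

-- Per-database default (applies to new sessions):
ALTER DATABASE mydb SET timezone TO 'Asia/Calcutta';

-- Server-wide defaults go in postgresql.conf instead:
--   timezone = 'Asia/Calcutta'
--   timezone_abbreviations = 'India'   -- so 'IST' in timestamp input is read as UTC+05:30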
[ { "msg_contents": "Hi list members,I have a question regarding hardware issues for a SDI (Spatial data infrastructure). It will consist of PostgreSQL with PostGIS and a UMN Mapserver/pmapper set up.At our institute we are currently establishing a small GIS working group. The data storage for vector data should be the central PostGIS system. Raster data will be held in file system.Mostly the users are accessing the data base in read only mode. From the client side there is not much write access this only will be done by the admin of the system to load new datasets. A prototype is currently running on an old desktop pc with ubuntu dapper - not very powerfull, of course!We have about 10000 € to spend for a new server including the storage. Do you have any recommendations for us?I have read a lot of introductions to tune up PostgreSQL systems. Since I don't have the possibility to tune up the soft parameters like cache, mem sizes etc., I wondered about the hardware. Most things were about the I/O of harddisks, RAM and file system. Is the filesystem that relevant? Because wo want to stay at Ubuntu because of the software support, espacially for the GIS-Systems. I think we need at least about 300-500Gb for storage and the server you get for this price are about two dualcore 2.0 - 2.8 GHz Opterons.Do you have any suggestions for the hardware of a spatial data base in that pricing category?Thanks in advance and greetings from Luxembourg,Christian", "msg_date": "Tue, 19 Jun 2007 12:28:30 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Hardware suggestions" }, { "msg_contents": "> At our institute we are currently establishing a small GIS working group.\n> The data storage for vector data should be the central PostGIS system.\n> Raster data will be held in file system.\n> Mostly the users are accessing the data base in read only mode. From the\n> client side there is not much write access this only will be done by the\n> admin of the system to load new datasets. A prototype is currently running\n> on an old desktop pc with ubuntu dapper - not very powerfull, of course!\n> We have about 10000 € to spend for a new server including the storage. Do\n> you have any recommendations for us?\n\nWhen it comes to server-hardware I'd go for intel's dual-core\n(woodcrest) or quad-core. They seem to perform better atm. compared to\nopterons.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Tue, 19 Jun 2007 13:10:45 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "[email protected] writes:\n\n> sizes etc., I wondered about the hardware. Most things were about the I/O \n> of harddisks, RAM and file system. Is the filesystem that relevant? \n> Because wo want to stay at Ubuntu because of the software support, \n> espacially for the GIS-Systems. 
I think we need at least about 300-500Gb \n> for storage and the server you get for this price are about two dualcore \n> 2.0 - 2.8 GHz Opterons.\n\nI would suggest 8GB of RAM, 4 500GB (Seagate) drives in RAID10, a dual \ncore CPU (AMD or Dual Core) and 3ware or Areca controller.\n\nIf you don't need a 1U case and you can use a tower case you should be able \nto get those specs within your budget.\n\n", "msg_date": "Tue, 19 Jun 2007 11:37:07 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "[email protected] wrote:\n> Hi list members,\n>\n> I have a question regarding hardware issues for a SDI (Spatial data \n> infrastructure). It will consist of PostgreSQL with PostGIS and a UMN \n> Mapserver/pmapper set up.\n> At our institute we are currently establishing a small GIS working \n> group. The data storage for vector data should be the central PostGIS \n> system. Raster data will be held in file system.\n> Mostly the users are accessing the data base in read only mode. From \n> the client side there is not much write access this only will be done \n> by the admin of the system to load new datasets. A prototype is \n> currently running on an old desktop pc with ubuntu dapper - not very \n> powerfull, of course!\n> We have about 10000 � to spend for a new server including the storage. \n> Do you have any recommendations for us?\n> I have read a lot of introductions to tune up PostgreSQL systems. \n> Since I don't have the possibility to tune up the soft parameters like \n> cache, mem sizes etc., I wondered about the hardware. Most things were \n> about the I/O of harddisks, RAM and file system. Is the filesystem \n> that relevant? Because wo want to stay at Ubuntu because of the \n> software support, espacially for the GIS-Systems. I think we need at \n> least about 300-500Gb for storage and the server you get for this \n> price are about two dualcore 2.0 - 2.8 GHz Opterons.\n> Do you have any suggestions for the hardware of a spatial data base in \n> that pricing category?\n\nPay as much attention to your disk subsystem as to your CPU / memory \nsetup. Look at RAID-5 or RAID-10 depending on which is faster for your \nsetup. While RAID-10 is faster for a system seeing plenty of updates, \nand a bit more resiliant to drive failure, RAID-5 can give you a lot of \nstorage and very good read performance, so it works well for reporting / \nwarehousing setups.\n\nIt might well be that a large RAID-10 with software RAID is a good \nchoice for what you're doing, since it gets good read performance and is \npretty cheap to implement. If you're going to be doing updates a lot, \nthen look at a battery backed caching controller.\n\nMemory is a big deal. As much as you can reasonably afford to throw at \nthe system.\n\nThe file system can make a small to moderate impact on performance. 
Some \nloads are favored by JFS, others by XFS, and still others by ext2 for \nthe data portion (only the pg_xlog needs to be on ext3 meta journaling only)\n\n\n", "msg_date": "Wed, 20 Jun 2007 17:20:18 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "Scott Marlowe writes:\n\n> and a bit more resiliant to drive failure, RAID-5 can give you a lot of \n> storage and very good read performance, so it works well for reporting / \n\nNew controllers now also have Raid 6, which from the few reports I have seen \nseems to have a good compromise of performance and space.\n\n", "msg_date": "Thu, 21 Jun 2007 08:43:07 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "Francisco Reyes wrote:\n> Scott Marlowe writes:\n>\n>> and a bit more resiliant to drive failure, RAID-5 can give you a lot \n>> of storage and very good read performance, so it works well for \n>> reporting / \n>\n> New controllers now also have Raid 6, which from the few reports I \n> have seen seems to have a good compromise of performance and space.\n>\n\nVery true. And if they've gone to the trouble of implementing RAID-6, \nthey're usually at least halfway decent controllers.\n", "msg_date": "Thu, 21 Jun 2007 17:29:15 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "On Thu, 21 Jun 2007, Scott Marlowe wrote:\n\n> And if they've gone to the trouble of implementing RAID-6, they're \n> usually at least halfway decent controllers.\n\nUnfortunately the existance of the RAID-6 capable Adaptec 2820SA proves \nthis isn't always the case.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 Jun 2007 02:20:18 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" }, { "msg_contents": "Greg Smith writes:\n\n> Unfortunately the existance of the RAID-6 capable Adaptec 2820SA proves \n> this isn't always the case.\n\nFor sata 3ware and Areca seem to perform well with raid 6 (from the few \nposts I have read on the subject).\n\nDon't know of SCSI controllers though.\n\n", "msg_date": "Fri, 22 Jun 2007 08:21:22 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions" } ]
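The original question mentions the usual soft parameters (cache and memory sizes) only in passing; for whenever those can be tuned on the new machine, here is an illustrative postgresql.conf fragment for a dedicated, read-mostly box with roughly the 8 GB of RAM suggested above. The specific values are assumptions meant as starting points to benchmark, not recommendations from the thread:

# postgresql.conf sketch for a dedicated ~8 GB read-mostly server (8.2-style units)
shared_buffers = 1GB             # PostgreSQL's own buffer cache
effective_cache_size = 6GB       # what the planner assumes the OS page cache holds
work_mem = 32MB                  # per sort/hash step; raise with care if many concurrent queries
maintenance_work_mem = 256MB     # helps VACUUM and index builds during the admin's bulk loads
checkpoint_segments = 16         # smooths write I/O during periodic data loads

Whatever the final values, they only pay off once the disk layout (RAID level, controller cache, filesystem) discussed above is settled, since a read-mostly GIS workload tends to be limited by random read I/O and cache hit rate.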
[ { "msg_contents": "Gang,\n\nHoping you all can help me with a rather bizarre issue that I've run \nacross. I don't really need a solution, I think I have one, but I'd \nreally like to run it by everyone in case I'm headed in the wrong \ndirection.\n\nI'm running a small Slony (v1.1.5)/postgresql 8.0.4 cluster (on \nRedHat) that contains one master database, and two slaves. The db1 \n(the master) has been up for about 1.5 years, db2 (slave 1) for about \n9 months, and db3 (second slave) for about two months. I do a VACUUM \nANALYZE every morning on all three databases. However, the vacuum on \ndb1 takes approxiamately 4.5 hours, and on the slaves it takes about \n1/2 hour. As far as I can tell, my FSM settings are correct. This \nis concerning because the vacuum on db1 is starting to run into \nproduction hours. The master receives all inserts, updates and \ndeletes (as well as a fair number of selects). The slaves are select- \nonly.\n\nIn my investigation of this anomaly, I noticed that the data/ dir on \ndb1 (the master) is around 60 Gigs. The data directory on the slaves \nis around 25Gb. After about 3 months of head scratching, someone on \nthe irc channel suggested that it may be due to index bloat. \nAlthough, doing some research, it would seem that those problems were \nresolved in 7.4(ish), and it wouldn't account for one database being \n2.5x bigger. Another unknown is Slony overhead (both in size and \nvacuum times).\n\nThe ONLY thing I can think of is that I DROPped a large number of \ntables from db1 a few months ago (they weren't getting replicated). \nThis is on the order of 1700+ fairly largeish (50,000+ row) tables. \nI do not remember doing a vacuum full after dropping them, so perhaps \nthat's my problem. I'm planning on doing some maintenance this \nweekend, during which I will take the whole system down, then on db1, \nrun a VACUUM FULL ANALYZE on the whole database, then a REINDEX on my \nvery large tables. I may drop and recreate the indexes on my big \ntables, as I hear that may be faster than a REINDEX. I will probably \nrun a VACUUM FULL ANALYZE on the slaves as well.\n\nThoughts? Suggestions? Anyone think this will actually help my \nproblem of size and vacuum times? Do I need to take Slony down while \nI do this? Will the VACUUM FULL table locking interfere with Slony?\n\nThanks for any light you all can shed on these issues...\n\n/kurt\n\n", "msg_date": "Tue, 19 Jun 2007 09:57:23 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Maintenance question / DB size anomaly..." }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> In my investigation of this anomaly, I noticed that the data/ dir on \n> db1 (the master) is around 60 Gigs. The data directory on the slaves \n> is around 25Gb. After about 3 months of head scratching, someone on \n> the irc channel suggested that it may be due to index bloat. \n\nThis is not something you need to guess about. Compare the table and\nindex sizes, one by one, between the master and slaves. Do a VACUUM\nVERBOSE on the one(s) that are radically bigger on the master, and look\nat what it has to say.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 10:12:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg wrote:\n> \n> In my investigation of this anomaly, I noticed that the data/ dir on db1 \n> (the master) is around 60 Gigs. 
The data directory on the slaves is \n> around 25Gb. After about 3 months of head scratching, someone on the \n> irc channel suggested that it may be due to index bloat. Although, \n> doing some research, it would seem that those problems were resolved in \n> 7.4(ish), and it wouldn't account for one database being 2.5x bigger. \n> Another unknown is Slony overhead (both in size and vacuum times).\n\nCheck the oid2name/dbsize utilities in the contrib RPM for 8.0.x\nhttp://www.postgresql.org/docs/8.0/static/diskusage.html\nShouldn't be too hard to find out where the disk space is going.\n\nOh and 8.0.13 is the latest release of 8.0 series, so you'll want to use \nyour maintenance window to upgrade too. Lots of good bugfixes there.\n\n> The ONLY thing I can think of is that I DROPped a large number of tables \n> from db1 a few months ago (they weren't getting replicated). This is on \n> the order of 1700+ fairly largeish (50,000+ row) tables. I do not \n> remember doing a vacuum full after dropping them, so perhaps that's my \n> problem. I'm planning on doing some maintenance this weekend, during \n> which I will take the whole system down, then on db1, run a VACUUM FULL \n> ANALYZE on the whole database, then a REINDEX on my very large tables. \n> I may drop and recreate the indexes on my big tables, as I hear that may \n> be faster than a REINDEX. I will probably run a VACUUM FULL ANALYZE on \n> the slaves as well.\n\nYou'll probably find CLUSTER to be quicker than VACUUM FULL, although \nyou need enough disk-space free for temporary copies of the \ntable/indexes concerned.\n\nDropping and recreating indexes should prove much faster than VACUUMING \nwith them. Shouldn't matter for CLUSTER afaict.\n\n> Thoughts? Suggestions? Anyone think this will actually help my problem \n> of size and vacuum times? Do I need to take Slony down while I do \n> this? Will the VACUUM FULL table locking interfere with Slony?\n\nWell, I'd take the opportunity to uninstall/reinstall slony just to \ncheck my scripts/procedures are working.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jun 2007 15:13:30 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "[email protected] (Kurt Overberg) writes:\n> In my investigation of this anomaly, I noticed that the data/ dir on\n> db1 (the master) is around 60 Gigs. The data directory on the slaves\n> is around 25Gb. After about 3 months of head scratching, someone on\n> the irc channel suggested that it may be due to index bloat.\n> Although, doing some research, it would seem that those problems were\n> resolved in 7.4(ish), and it wouldn't account for one database being\n> 2.5x bigger. Another unknown is Slony overhead (both in size and\n> vacuum times).\n\nThere are three tables in Slony-I that would be of interest; on the\nmaster, do a VACUUM VERBOSE on:\n\n - [clustername].sl_log_1\n - [clustername].sl_log_2\n - [clustername].sl_seqlog\n\nIf one or another is really bloated, that could be the cause of *some*\nproblems. Though that shouldn't account for 35GB of space :-).\n\n> The ONLY thing I can think of is that I DROPped a large number of\n> tables from db1 a few months ago (they weren't getting replicated).\n> This is on the order of 1700+ fairly largeish (50,000+ row) tables.\n> I do not remember doing a vacuum full after dropping them, so perhaps\n> that's my problem. 
I'm planning on doing some maintenance this\n> weekend, during which I will take the whole system down, then on db1,\n> run a VACUUM FULL ANALYZE on the whole database, then a REINDEX on my\n> very large tables. I may drop and recreate the indexes on my big\n> tables, as I hear that may be faster than a REINDEX. I will probably\n> run a VACUUM FULL ANALYZE on the slaves as well.\n\nWhen tables are dropped, so are the data files. So even if they were\nbloated, they should have simply disappeared. So I don't think that's\nthe problem.\n\n> Thoughts? Suggestions? Anyone think this will actually help my\n> problem of size and vacuum times? Do I need to take Slony down while\n> I do this? Will the VACUUM FULL table locking interfere with Slony?\n\nI'd be inclined to head to the filesystem level, and try to see what\ntables are bloated *there*.\n\nYou should be able to search for bloated tables via the command:\n\n$ find $PGDATA/base -name \"[0-9]+\\.[0-9]+\"\n\nThat would be likely to give you a listing of filenames that look\nsomething like:\n\n12341.1\n12341.2\n12341.3\n12341.4\n12341.5\n12341.6\n231441.1\n231441.2\n231441.3\n\nwhich indicates all table (or index) data files that had to be\nextended past 1GB.\n\nIn the above, the relation with OID 12341 would be >6GB in size,\nbecause it has been extended to have 6 additional files (in addition\nto the \"bare\" filename, 12341).\n\nYou can then go into a psql session, and run the query:\n select * from pg_class where oid = 12341;\nand thereby figure out what table is involved.\n\nI'll bet that if you do this on the \"origin\" node, you'll find that\nthere is some small number of tables that have *way* more 1GB\npartitions than there are on the subscriber nodes.\n\nThose are the tables that will need attention.\n\nYou could probably accomplish the reorganization more quickly via the\n\"CLUSTER\" statement; that will reorganize the table according based on\nthe ordering of one specified index, and then regenerate all the other\nindices. It's not MVCC-safe, so if you have reports running\nconcurrently, this could confuse them, but if you take the apps down,\nas you surely should, it won't be a problem.\n\nYou don't forcibly have to take Slony-I down during this, but the\nlocks taken out on tables by CLUSTER/VACUUM FULL will block slons from\ndoing any work until those transactions complete.\n\nI wouldn't think you need to do VACUUM FULL or CLUSTER against the\nsubscribers if they haven't actually bloated (and based on what you\nhave said, there is no indication that they have).\n-- \noutput = reverse(\"ofni.secnanifxunil\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/linuxdistributions.html\nThe quickest way to a man's heart is through his chest, with an axe. \n", "msg_date": "Tue, 19 Jun 2007 11:11:19 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> [email protected] (Kurt Overberg) writes:\n>> In my investigation of this anomaly, I noticed that the data/ dir on\n>> db1 (the master) is around 60 Gigs. The data directory on the slaves\n>> is around 25Gb. 
After about 3 months of head scratching, someone on\n>> the irc channel suggested that it may be due to index bloat.\n\n> I'd be inclined to head to the filesystem level, and try to see what\n> tables are bloated *there*.\n\nAt least as a first cut, it should be sufficient to look at\npg_class.relpages, which'd be far easier to correlate with table names\n;-). The relpages entry should be accurate as of the most recent VACUUM\non each table, which ought to be close enough unless I missed something\nabout the problem situation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 12:04:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Richard,\n\nThanks for the feedback! I found oid2name and have been mucking \nabout with it, but haven't really found anything that stands out \nyet. Most of the tables/indexes I'm comparing across machines seem \nto take up a similar amount of disk space. I think I'm going to have \nto get fancy and write some shell scripts. Regarding the slony \nconfiguration scripts, you're assuming that I have such scripts. Our \nslony install was originally installed by a contractor, and modified \nsince then so \"getting my act together with respect to slony\" is \nkinda beyond the scope of what I'm trying to accomplish with this \nmaintenance. I really just want to figure out whats going on with \ndb1, and want to do so in a way that won't ruin slony since right now \nit runs pretty well, and I doubt I'd be able to fix it if it \nseriously broke.\n\nUpon a cursory pass with oid2name, it seems that my sl_log_1_idx1 \nindex is out of hand:\n\n\n-bash-3.00$ oid2name -d mydb -f 955960160\n From database \"mydb\":\n Filenode Table Name\n--------------------------\n 955960160 sl_log_1_idx1\n\n-bash-3.00$ ls -al 955960160*\n-rw------- 1 postgres postgres 1073741824 Jun 19 11:08 955960160\n-rw------- 1 postgres postgres 1073741824 Jun 13 2006 955960160.1\n-rw------- 1 postgres postgres 909844480 Jun 19 10:47 955960160.10\n-rw------- 1 postgres postgres 1073741824 Jul 31 2006 955960160.2\n-rw------- 1 postgres postgres 1073741824 Sep 12 2006 955960160.3\n-rw------- 1 postgres postgres 1073741824 Oct 19 2006 955960160.4\n-rw------- 1 postgres postgres 1073741824 Nov 27 2006 955960160.5\n-rw------- 1 postgres postgres 1073741824 Feb 3 12:57 955960160.6\n-rw------- 1 postgres postgres 1073741824 Mar 2 11:57 955960160.7\n-rw------- 1 postgres postgres 1073741824 Mar 29 09:46 955960160.8\n-rw------- 1 postgres postgres 1073741824 Mar 29 09:46 955960160.9\n\n\nI know that slony runs its vacuuming in the background, but it \ndoesn't seem to be cleaning this stuff up. Interestingly, from my \nVACUUM pgfouine output,\nthat index doesn't take that long at all to vacuum analyze (compared \nto my other, much larger tables). Am I making the OID->filename \ntranslation properly?\n\nRunning this:\nSELECT relname, relpages FROM pg_class ORDER BY relpages DESC;\n...gives me...\n\nsl_log_1_idx1 | \n1421785\nxrefmembergroup | \n1023460\nanswerselectinstance | \n565343\n\n...does this jibe with what I'm seeing above? I guess I'll run a \nfull vacuum on the slony tables too? I figured something would else \nwould jump out bigger than this. FWIW, the same table on db2 and db3 \nis very small, like zero. I guess this is looking like it is \noverhead from slony? 
Should I take this problem over to the slony \ngroup?\n\nThanks again, gang-\n\n/kurt\n\n\n\nOn Jun 19, 2007, at 10:13 AM, Richard Huxton wrote:\n\n> Kurt Overberg wrote:\n>> In my investigation of this anomaly, I noticed that the data/ dir \n>> on db1 (the master) is around 60 Gigs. The data directory on the \n>> slaves is around 25Gb. After about 3 months of head scratching, \n>> someone on the irc channel suggested that it may be due to index \n>> bloat. Although, doing some research, it would seem that those \n>> problems were resolved in 7.4(ish), and it wouldn't account for \n>> one database being 2.5x bigger. Another unknown is Slony overhead \n>> (both in size and vacuum times).\n>\n> Check the oid2name/dbsize utilities in the contrib RPM for 8.0.x\n> http://www.postgresql.org/docs/8.0/static/diskusage.html\n> Shouldn't be too hard to find out where the disk space is going.\n>\n> Oh and 8.0.13 is the latest release of 8.0 series, so you'll want \n> to use your maintenance window to upgrade too. Lots of good \n> bugfixes there.\n>\n>> The ONLY thing I can think of is that I DROPped a large number of \n>> tables from db1 a few months ago (they weren't getting \n>> replicated). This is on the order of 1700+ fairly largeish (50,000 \n>> + row) tables. I do not remember doing a vacuum full after \n>> dropping them, so perhaps that's my problem. I'm planning on \n>> doing some maintenance this weekend, during which I will take the \n>> whole system down, then on db1, run a VACUUM FULL ANALYZE on the \n>> whole database, then a REINDEX on my very large tables. I may \n>> drop and recreate the indexes on my big tables, as I hear that may \n>> be faster than a REINDEX. I will probably run a VACUUM FULL \n>> ANALYZE on the slaves as well.\n>\n> You'll probably find CLUSTER to be quicker than VACUUM FULL, \n> although you need enough disk-space free for temporary copies of \n> the table/indexes concerned.\n>\n> Dropping and recreating indexes should prove much faster than \n> VACUUMING with them. Shouldn't matter for CLUSTER afaict.\n>\n>> Thoughts? Suggestions? Anyone think this will actually help my \n>> problem of size and vacuum times? Do I need to take Slony down \n>> while I do this? Will the VACUUM FULL table locking interfere \n>> with Slony?\n>\n> Well, I'd take the opportunity to uninstall/reinstall slony just to \n> check my scripts/procedures are working.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n", "msg_date": "Tue, 19 Jun 2007 12:37:56 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "Kurt Overberg wrote:\n> Richard,\n> \n> Thanks for the feedback! I found oid2name and have been mucking about \n> with it, but haven't really found anything that stands out yet. Most of \n> the tables/indexes I'm comparing across machines seem to take up a \n> similar amount of disk space. I think I'm going to have to get fancy \n> and write some shell scripts. Regarding the slony configuration \n> scripts, you're assuming that I have such scripts. Our slony install \n> was originally installed by a contractor, and modified since then so \n> \"getting my act together with respect to slony\" is kinda beyond the \n> scope of what I'm trying to accomplish with this maintenance. 
I really \n> just want to figure out whats going on with db1, and want to do so in a \n> way that won't ruin slony since right now it runs pretty well, and I \n> doubt I'd be able to fix it if it seriously broke.\n> \n> Upon a cursory pass with oid2name, it seems that my sl_log_1_idx1 index \n> is out of hand:\n\nIf the sl_log_1 table is large too, it'll be worth reading throught the \nFAQ to see if any of its notes apply.\n\nhttp://cbbrowne.com/info/faq.html\n\n> -bash-3.00$ oid2name -d mydb -f 955960160\n> From database \"mydb\":\n> Filenode Table Name\n> --------------------------\n> 955960160 sl_log_1_idx1\n> \n> -bash-3.00$ ls -al 955960160*\n> -rw------- 1 postgres postgres 1073741824 Jun 19 11:08 955960160\n> -rw------- 1 postgres postgres 1073741824 Jun 13 2006 955960160.1\n> -rw------- 1 postgres postgres 909844480 Jun 19 10:47 955960160.10\n> -rw------- 1 postgres postgres 1073741824 Jul 31 2006 955960160.2\n> -rw------- 1 postgres postgres 1073741824 Sep 12 2006 955960160.3\n> -rw------- 1 postgres postgres 1073741824 Oct 19 2006 955960160.4\n> -rw------- 1 postgres postgres 1073741824 Nov 27 2006 955960160.5\n> -rw------- 1 postgres postgres 1073741824 Feb 3 12:57 955960160.6\n> -rw------- 1 postgres postgres 1073741824 Mar 2 11:57 955960160.7\n> -rw------- 1 postgres postgres 1073741824 Mar 29 09:46 955960160.8\n> -rw------- 1 postgres postgres 1073741824 Mar 29 09:46 955960160.9\n> \n> \n> I know that slony runs its vacuuming in the background, but it doesn't \n> seem to be cleaning this stuff up. Interestingly, from my VACUUM \n> pgfouine output,\n> that index doesn't take that long at all to vacuum analyze (compared to \n> my other, much larger tables). Am I making the OID->filename \n> translation properly?\n\nLooks OK to me\n\n> Running this:\n> SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;\n> ...gives me...\n> \n> sl_log_1_idx1 | 1421785\n> xrefmembergroup | 1023460\n> answerselectinstance | 565343\n> \n> ...does this jibe with what I'm seeing above? I guess I'll run a full \n> vacuum on the slony tables too? I figured something would else would \n> jump out bigger than this. FWIW, the same table on db2 and db3 is very \n> small, like zero. I guess this is looking like it is overhead from \n> slony? Should I take this problem over to the slony group?\n\nWell, pages are 8KB each (by default), so that'd be about 10.8GB, which \nseems to match your filesizes above.\n\nRead through the FAQ I linked to - for some reason Slony's not clearing \nout transactions it's replicated to your slaves (they *are* in sync, \naren't they?). Could be a transaction preventing vacuuming, or perhaps a \n partially dropped node?\n\nCheck the size of the sl_log_1 table and see if that tallies.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jun 2007 18:04:50 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "Chris,\n\nI took your advice, and I had found that sl_log_1 seems to be causing \nsome of the problem. 
Here's the result of a VACUUM VERBOSE\n\nmydb # vacuum verbose _my_cluster.sl_log_1 ;\nINFO: vacuuming \"_my_cluster.sl_log_1\"\nINFO: index \"sl_log_1_idx1\" now contains 309404 row versions in \n1421785 pages\nDETAIL: 455001 index row versions were removed.\n1419592 index pages have been deleted, 1416435 are currently reusable.\nCPU 16.83s/5.07u sec elapsed 339.19 sec.\n^@^@^@INFO: index \"sl_log_1_idx2\" now contains 312864 row versions \nin 507196 pages\nDETAIL: 455001 index row versions were removed.\n506295 index pages have been deleted, 504998 are currently reusable.\nCPU 6.44s/2.27u sec elapsed 138.70 sec.\nINFO: \"sl_log_1\": removed 455001 row versions in 7567 pages\nDETAIL: CPU 0.56s/0.40u sec elapsed 6.63 sec.\nINFO: \"sl_log_1\": found 455001 removable, 309318 nonremovable row \nversions in 13764 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 51972 unused item pointers.\n0 pages are entirely empty.\nCPU 24.13s/7.85u sec elapsed 486.49 sec.\nINFO: vacuuming \"pg_toast.pg_toast_955960155\"\nINFO: index \"pg_toast_955960155_index\" now contains 9 row versions \nin 2 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_955960155\": found 0 removable, 9 nonremovable row \nversions in 3 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 3 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\n...I then checked the disk and those pages are still there. If I do a:\n\nselect count(*) from _my_cluster.sl_log_1;\ncount\n-------\n 6366\n(1 row)\n\nWould a VACUUM FULL take care of this? It seems to me that its not \nclearing up the indexes properly. You are correct in that\nI do see things getting much bigger on the master than on the \nsubscriber nodes. Could this cause my slony replication to bog down?\n\nAlso- I have a question about this comment:\n\n>\n> You don't forcibly have to take Slony-I down during this, but the\n> locks taken out on tables by CLUSTER/VACUUM FULL will block slons from\n> doing any work until those transactions complete.\n\nThats because no writing will be done to the tables, thus, no slony \ntriggers will get triggered, correct? I'd rather not\nshut down slony if I dont have to, but will if it \"is safer/better/ \nmore badass\".\n\nFor those playing along at home,\n\n> $ find $PGDATA/base -name \"[0-9]+\\.[0-9]+\"\n>\n\n...I had to use:\n\nfind $PGDATA/base -name \"[0-9]*\\.[0-9]*\"\n\n...but the pluses should have worked too. Still a much better way \nthan how I was doing it. Thanks again for helping me with this, its \ngreatly appreciated!\n\n/kurt\n", "msg_date": "Tue, 19 Jun 2007 13:13:23 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> mydb # vacuum verbose _my_cluster.sl_log_1 ;\n> INFO: \"sl_log_1\": found 455001 removable, 309318 nonremovable row \n> versions in 13764 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\nHmm. So you don't have a long-running-transactions problem (else that\nDETAIL number would have been large). What you do have is a failure\nto vacuum sl_log_1 on a regular basis (because there are so many\ndead/removable rows). 
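(A quick cross-check that makes this sort of problem visible, independent of \nVACUUM VERBOSE, is to compare the catalog's row estimate with what a fresh \nsession can actually see -- names as in your output:\n\nSELECT relname, relpages, reltuples::bigint AS estimated_rows\nFROM pg_class WHERE relname = 'sl_log_1';\n\nSELECT count(*) FROM _my_cluster.sl_log_1;\n\nIf the estimate stays far above the visible count even right after a VACUUM, \nrows are being kept around that ordinary queries can't see.)\n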
I suspect also some sort of Slony problem,\nbecause AFAIK a properly operating Slony system shouldn't have that\nmany live rows in sl_log_1 either --- don't they all represent\nas-yet-unpropagated events? I'm no Slony expert though. You probably\nshould ask about that on the Slony lists.\n\n> ...I then checked the disk and those pages are still there.\n\nYes, regular VACUUM doesn't try very hard to shorten the disk file.\n\n> Would a VACUUM FULL take care of this?\n\nIt would, but it will take an unpleasantly long time with so many live\nrows to reshuffle. I'd advise first working to see if you can get the\ntable down to a few live rows. Then a VACUUM FULL will be a snap.\nAlso, you might want to do REINDEX after VACUUM FULL to compress the\nindexes --- VACUUM FULL isn't good at that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 17:33:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "That's the thing thats kinda blowing my mind here, when I look at \nthat table:\n\ndb1=# select count(*) from _my_cluster.sl_log_1 ;\ncount\n-------\n 6788\n(1 row)\n\nAs far as my DB is concerned, there's only ~7000 rows (on average) \nwhen I look\nin there (it does fluctuate, I've seen it go as high as around 12k, \nbut then its\ngone back down, so I know events are moving around in there).\n\nSo from what I can tell- from the disk point of view, there's ~11Gb \nof data; from the\nvacuum point of view there's 309318 rows. From the psql point of \nview, there's only\naround 7,000. Am I missing something? Unless there's something \ngoing on under the\nhood that I don't know about (more than likely), it seems like my \nsl_log_1 table is munged or\nsomehow otherwise very screwed up. I fear that a re-shuffling or \ndropping/recreating\nthe index will mess it up further. Maybe when I take my production \nsystems down for\nmaintenance, can I wait until sl_log_1 clears out, so then I can just \ndrop that\ntable altogether (and re-create it of course)?\n\nThanks!\n\n/kurt\n\n\n\n\nOn Jun 19, 2007, at 5:33 PM, Tom Lane wrote:\n\n> Kurt Overberg <[email protected]> writes:\n>> mydb # vacuum verbose _my_cluster.sl_log_1 ;\n>> INFO: \"sl_log_1\": found 455001 removable, 309318 nonremovable row\n>> versions in 13764 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>\n> Hmm. So you don't have a long-running-transactions problem (else that\n> DETAIL number would have been large). What you do have is a failure\n> to vacuum sl_log_1 on a regular basis (because there are so many\n> dead/removable rows). I suspect also some sort of Slony problem,\n> because AFAIK a properly operating Slony system shouldn't have that\n> many live rows in sl_log_1 either --- don't they all represent\n> as-yet-unpropagated events? I'm no Slony expert though. You probably\n> should ask about that on the Slony lists.\n>\n>> ...I then checked the disk and those pages are still there.\n>\n> Yes, regular VACUUM doesn't try very hard to shorten the disk file.\n>\n>> Would a VACUUM FULL take care of this?\n>\n> It would, but it will take an unpleasantly long time with so many live\n> rows to reshuffle. I'd advise first working to see if you can get the\n> table down to a few live rows. 
Then a VACUUM FULL will be a snap.\n> Also, you might want to do REINDEX after VACUUM FULL to compress the\n> indexes --- VACUUM FULL isn't good at that.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Tue, 19 Jun 2007 18:24:21 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> That's the thing thats kinda blowing my mind here, when I look at \n> that table:\n\n> db1=# select count(*) from _my_cluster.sl_log_1 ;\n> count\n> -------\n> 6788\n> (1 row)\n\nWell, that's real interesting. AFAICS there are only two possibilities:\n\n1. VACUUM sees the other 300k tuples as INSERT_IN_PROGRESS; a look at\nthe code shows that these are counted the same as plain live tuples,\nbut they'd not be visible to other transactions. I wonder if you could\nhave any really old open transactions that might have inserted all those\ntuples?\n\n2. The other 300k tuples are committed good, but they are not seen as\nvalid by a normal MVCC-aware transaction, probably because of\ntransaction wraparound. This would require the sl_log_1 table to have\nescaped vacuuming for more than 2 billion transactions, which seems a\nbit improbable but maybe not impossible. (You did say you were running\nPG 8.0.x, right? That's the last version without any strong defenses\nagainst transaction wraparound...)\n\nThe way to get some facts, instead of speculating, would be to get hold\nof the appropriate version of pg_filedump from\nhttp://sources.redhat.com/rhdb/ and dump out sl_log_1 with it\n(probably the -i option would be sufficient), then take a close look\nat the tuples that aren't visible to other transactions. (You could\ndo \"select ctid from sl_log_1\" to determine which ones are visible.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 19:26:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg <[email protected]> wrote:\n>\n> That's the thing thats kinda blowing my mind here, when I look at \n> that table:\n> \n> db1=# select count(*) from _my_cluster.sl_log_1 ;\n> count\n> -------\n> 6788\n> (1 row)\n> \n> As far as my DB is concerned, there's only ~7000 rows (on average) \n> when I look\n> in there (it does fluctuate, I've seen it go as high as around 12k, \n> but then its\n> gone back down, so I know events are moving around in there).\n\nThis is consistent with my experience with Slony and sl_log_[12]\n\nI'm pretty sure that the slon processes vacuum sl_log_* on a fairly\nregular basis. I'm absolutely positive that slon occasionally switches\nfrom using sl_log_1, to sl_log_2, then truncates sl_log_1 (then, after\nsome time, does the same in reverse)\n\nSo, in order for you to get massive bloat of the sl_log_* tables, you\nmust be doing a LOT of transactions in the time before it switches\nlogs and truncates the unused version. Either that, or something is\ngoing wrong.\n\n> So from what I can tell- from the disk point of view, there's ~11Gb \n> of data; from the\n> vacuum point of view there's 309318 rows. From the psql point of \n> view, there's only\n> around 7,000. Am I missing something?\n\nSomething seems wrong here. 
Correct me if I'm missing something, but\nyou're saying the table takes up 11G on disk, but vacuum says there are\n~14000 pages. That would mean your page size is ~800K. Doesn't seem\nright.\n\n> Unless there's something \n> going on under the\n> hood that I don't know about (more than likely), it seems like my \n> sl_log_1 table is munged or\n> somehow otherwise very screwed up. I fear that a re-shuffling or \n> dropping/recreating\n> the index will mess it up further. Maybe when I take my production \n> systems down for\n> maintenance, can I wait until sl_log_1 clears out, so then I can just \n> drop that\n> table altogether (and re-create it of course)?\n\nPossibly drop this node from the Slony cluster and re-add it. Unless\nit's the origin node, in which case you'll have to switchover, then\nredo the origin then switch back ...\n\n> \n> Thanks!\n> \n> /kurt\n> \n> \n> \n> \n> On Jun 19, 2007, at 5:33 PM, Tom Lane wrote:\n> \n> > Kurt Overberg <[email protected]> writes:\n> >> mydb # vacuum verbose _my_cluster.sl_log_1 ;\n> >> INFO: \"sl_log_1\": found 455001 removable, 309318 nonremovable row\n> >> versions in 13764 pages\n> >> DETAIL: 0 dead row versions cannot be removed yet.\n> >\n> > Hmm. So you don't have a long-running-transactions problem (else that\n> > DETAIL number would have been large). What you do have is a failure\n> > to vacuum sl_log_1 on a regular basis (because there are so many\n> > dead/removable rows). I suspect also some sort of Slony problem,\n> > because AFAIK a properly operating Slony system shouldn't have that\n> > many live rows in sl_log_1 either --- don't they all represent\n> > as-yet-unpropagated events? I'm no Slony expert though. You probably\n> > should ask about that on the Slony lists.\n> >\n> >> ...I then checked the disk and those pages are still there.\n> >\n> > Yes, regular VACUUM doesn't try very hard to shorten the disk file.\n> >\n> >> Would a VACUUM FULL take care of this?\n> >\n> > It would, but it will take an unpleasantly long time with so many live\n> > rows to reshuffle. I'd advise first working to see if you can get the\n> > table down to a few live rows. Then a VACUUM FULL will be a snap.\n> > Also, you might want to do REINDEX after VACUUM FULL to compress the\n> > indexes --- VACUUM FULL isn't good at that.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information\nand is intended only for the individual named. If the reader of\nthis message is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Tue, 19 Jun 2007 19:55:17 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "\nOn Jun 19, 2007, at 7:26 PM, Tom Lane wrote:\n\n> Kurt Overberg <[email protected]> writes:\n>> That's the thing thats kinda blowing my mind here, when I look at\n>> that table:\n>\n>> db1=# select count(*) from _my_cluster.sl_log_1 ;\n>> count\n>> -------\n>> 6788\n>> (1 row)\n>\n> Well, that's real interesting. AFAICS there are only two \n> possibilities:\n>\n> 1. VACUUM sees the other 300k tuples as INSERT_IN_PROGRESS; a look at\n> the code shows that these are counted the same as plain live tuples,\n> but they'd not be visible to other transactions. I wonder if you \n> could\n> have any really old open transactions that might have inserted all \n> those\n> tuples?\n>\n\nUnlikely- the database has been stopped and restarted, which I think \ncloses\nout transactions? Or could that cause the problems?\n\n\n> 2. The other 300k tuples are committed good, but they are not seen as\n> valid by a normal MVCC-aware transaction, probably because of\n> transaction wraparound. This would require the sl_log_1 table to have\n> escaped vacuuming for more than 2 billion transactions, which seems a\n> bit improbable but maybe not impossible. (You did say you were \n> running\n> PG 8.0.x, right? That's the last version without any strong defenses\n> against transaction wraparound...)\n\nYep, this 8.0.4. It has been running for over a year, fairly heavy \nupdates, so\nI would guess its possible.\n\n> The way to get some facts, instead of speculating, would be to get \n> hold\n> of the appropriate version of pg_filedump from\n> http://sources.redhat.com/rhdb/ and dump out sl_log_1 with it\n> (probably the -i option would be sufficient), then take a close look\n> at the tuples that aren't visible to other transactions. (You could\n> do \"select ctid from sl_log_1\" to determine which ones are visible.)\n>\n\nOkay, I've grabbed pg_filedump and got it running on the appropriate \nserver.\nI really have No Idea how to read its output though. Where does the \nctid from sl_log_1\nappear in the following listing?\n\n\nBlock 0 ********************************************************\n<Header> -----\nBlock Offset: 0x00000000 Offsets: Lower 20 (0x0014)\nBlock: Size 8192 Version 2 Upper 8176 (0x1ff0)\nLSN: logid 949 recoff 0xae63b06c Special 8176 (0x1ff0)\nItems: 0 Free Space: 8156\nLength (including item array): 24\n\nBTree Meta Data: Magic (0x00053162) Version (2)\n Root: Block (1174413) Level (3)\n FastRoot: Block (4622) Level (1)\n\n<Data> ------\nEmpty block - no items listed\n\n<Special Section> -----\nBTree Index Section:\n Flags: 0x0008 (META)\n Blocks: Previous (0) Next (0) Level (0)\n\n\n.../this was taken from the first page file (955960160.0 I guess you \ncould\ncall it). Does this look interesting to you, Tom?\n\nFWIW- this IS on my master DB. 
I've been slowly preparing an upgrade \nto 8.2, I guess\nI'd better get that inta gear, hmmm? :-(\n\n/kurt\n\n\n\n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 19 Jun 2007 21:53:09 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> Okay, I've grabbed pg_filedump and got it running on the appropriate \n> server.\n> I really have No Idea how to read its output though. Where does the \n> ctid from sl_log_1\n> appear in the following listing?\n\nctid is (block number, item number)\n\n> Block 0 ********************************************************\n> BTree Meta Data: Magic (0x00053162) Version (2)\n> Root: Block (1174413) Level (3)\n> FastRoot: Block (4622) Level (1)\n\nThis seems to be an index, not the sl_log_1 table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 22:51:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "OOooooookaaaaaaaaay. Since the discussion has wandered a bit I just \nwanted to restate things in an effort to clear the problem in my head.\n\nOkay, so the sl_log_1 TABLE looks okay. Its the indexes that seem to \nbe messed up, specifically sl_log_1_idx1 seems to think that there's\n > 300,000 rows in the table its associated with. I just want to fix \nthe index, really. So my question remains:\n\nIts it okay to dump and recreate that index (or reindex it) while the \nservers are down and the database is not being accessed?\n\nTom, Bill, Chris and Richard, thank you so much for your thoughts on \nthis matter so far. It helps to not feel \"so alone\" when dealing\nwith difficult issues (for me anyway) on a system I don't know so \nmuch about.\n\nThanks guys,\n\n/kurt\n\nOn Jun 19, 2007, at 10:51 PM, Tom Lane wrote:\n\n> Kurt Overberg <[email protected]> writes:\n>> Okay, I've grabbed pg_filedump and got it running on the appropriate\n>> server.\n>> I really have No Idea how to read its output though. Where does the\n>> ctid from sl_log_1\n>> appear in the following listing?\n>\n> ctid is (block number, item number)\n>\n>> Block 0 ********************************************************\n>> BTree Meta Data: Magic (0x00053162) Version (2)\n>> Root: Block (1174413) Level (3)\n>> FastRoot: Block (4622) Level (1)\n>\n> This seems to be an index, not the sl_log_1 table.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Wed, 20 Jun 2007 09:10:08 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "In response to Kurt Overberg <[email protected]>:\n\n> OOooooookaaaaaaaaay. Since the discussion has wandered a bit I just \n> wanted to restate things in an effort to clear the problem in my head.\n> \n> Okay, so the sl_log_1 TABLE looks okay. Its the indexes that seem to \n> be messed up, specifically sl_log_1_idx1 seems to think that there's\n> > 300,000 rows in the table its associated with. I just want to fix \n> the index, really. 
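Concretely, what I have in mind is something along these lines -- grabbing \nthe current definition first in case dropping and recreating turns out to be \nthe better route -- though I haven't run either yet:\n\nSELECT indexdef FROM pg_indexes WHERE indexname = 'sl_log_1_idx1';\n\nREINDEX INDEX _my_cluster.sl_log_1_idx1;\n\n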
So my question remains:\n\nApologies, I must have misunderstood some previous message.\n\n> Its it okay to dump and recreate that index (or reindex it) while the \n> servers are down and the database is not being accessed?\n\nThere are people here who know _way_ more than me -- but I can't see any\nreason why you couldn't just REINDEX it while everything is running. There\nmay be some performance slowdown during the reindex, but everything should\ncontinue to chug along. A drop/recreate of the index should be OK as well.\n\n> Tom, Bill, Chris and Richard, thank you so much for your thoughts on \n> this matter so far. It helps to not feel \"so alone\" when dealing\n> with difficult issues (for me anyway) on a system I don't know so \n> much about.\n\n:D Isn't Open Source great!\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 20 Jun 2007 09:25:42 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nKurt Overberg wrote:\n> OOooooookaaaaaaaaay. Since the discussion has wandered a bit I just\n> wanted to restate things in an effort to clear the problem in my head.\n> \n> Okay, so the sl_log_1 TABLE looks okay. Its the indexes that seem to be\n> messed up, specifically sl_log_1_idx1 seems to think that there's\n>> 300,000 rows in the table its associated with. I just want to fix the\n> index, really. So my question remains:\n> \n> Its it okay to dump and recreate that index (or reindex it) while the\n> servers are down and the database is not being accessed?\n\nWell, I would probably stop the slon daemons => dropping needed indexes\nwhich slony needs can lead to quite a slowdown, and worse, the slowdown\nhappens because the database server is doing things the wrong way. But\nthat's mostly what you need to do.\n\nOTOH, depending upon the size of your database, you might consider\nstarting out from a scratch database.\n\nAndreas\n\n> \n> Tom, Bill, Chris and Richard, thank you so much for your thoughts on\n> this matter so far. 
It helps to not feel \"so alone\" when dealing\n> with difficult issues (for me anyway) on a system I don't know so much\n> about.\n\n#[email protected], #[email protected] are quite helpful, and\nsometimes faster than mail.\n\nAndreas\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGeTFtHJdudm4KnO0RAqDaAKDB1/eGqdwtLQdpTJzrChcp4J5M5wCglphW\nljxag882h33fDWXX1ILiUU8=\n=jzBw\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 20 Jun 2007 15:53:49 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly..." }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> Okay, so the sl_log_1 TABLE looks okay. Its the indexes that seem to \n> be messed up, specifically sl_log_1_idx1 seems to think that there's\n>>> 300,000 rows in the table its associated with. I just want to fix \n> the index, really.\n\nI'm not sure how you arrive at that conclusion. The VACUUM VERBOSE\noutput you provided here:\nhttp://archives.postgresql.org/pgsql-performance/2007-06/msg00370.php\nshows clearly that there are lots of rows in the table as well as\nthe indexes. A REINDEX would certainly cut the size of the indexes\nbut it isn't going to do anything about the extraneous rows.\n\nWhen last heard from, you were working on getting pg_filedump output for\nsome of the bogus rows --- what was the result?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 11:22:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Dang it, Tom, don't you ever get tired of being right? I guess I had \nbeen focusing\non the index numbers since they came up first, and its the index \nfiles that are > 10Gb.\n\nOkay, so I did some digging with pg_filedump, and found the following:\n\n.\n.\n.\n.\nBlock 406 ********************************************************\n<Header> -----\nBlock Offset: 0x0032c000 Offsets: Lower 208 (0x00d0)\nBlock: Size 8192 Version 2 Upper 332 (0x014c)\nLSN: logid 950 recoff 0x9ebcc6e4 Special 8192 (0x2000)\nItems: 47 Free Space: 124\nLength (including item array): 212\n\n<Data> ------\nItem 1 -- Length: 472 Offset: 7720 (0x1e28) Flags: USED\n XMIN: 1489323584 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n Block Id: 406 linp Index: 1 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\nItem 2 -- Length: 185 Offset: 7532 (0x1d6c) Flags: USED\n XMIN: 1489323584 CMIN: 4 XMAX: 0 CMAX|XVAC: 0\n Block Id: 406 linp Index: 2 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\nItem 3 -- Length: 129 Offset: 7400 (0x1ce8) Flags: USED\n XMIN: 1489323590 CMIN: 2 XMAX: 0 CMAX|XVAC: 0\n Block Id: 406 linp Index: 3 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\nItem 4 -- Length: 77 Offset: 7320 (0x1c98) Flags: USED\n XMIN: 1489323592 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n Block Id: 406 linp Index: 4 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\n\n...I then looked in the DB:\n\nmydb=# select * from _my_cluster.sl_log_1 where ctid = '(406,1)';\nlog_origin | log_xid | log_tableid | log_actionseq | log_cmdtype | \nlog_cmddata\n------------+---------+-------------+---------------+------------- \n+-------------\n(0 rows)\n\nmydb=# select * from _my_cluster.sl_log_1 where ctid = 
'(406,2)';\nlog_origin | log_xid | log_tableid | log_actionseq | log_cmdtype | \nlog_cmddata\n------------+---------+-------------+---------------+------------- \n+-------------\n(0 rows)\n\nmydb=# select * from _my_cluster.sl_log_1 where ctid = '(406,3)';\nlog_origin | log_xid | log_tableid | log_actionseq | log_cmdtype | \nlog_cmddata\n------------+---------+-------------+---------------+------------- \n+-------------\n(0 rows)\n\n\n...is this what you were looking for, Tom? The only thing that \nstands out to me is\nthe XMAX_INVALID mask. Thoughts?\n\nThanks,\n\n/kurt\n\n\n\n\n\nOn Jun 20, 2007, at 11:22 AM, Tom Lane wrote:\n\n> Kurt Overberg <[email protected]> writes:\n>> Okay, so the sl_log_1 TABLE looks okay. Its the indexes that seem to\n>> be messed up, specifically sl_log_1_idx1 seems to think that there's\n>>>> 300,000 rows in the table its associated with. I just want to fix\n>> the index, really.\n>\n> I'm not sure how you arrive at that conclusion. The VACUUM VERBOSE\n> output you provided here:\n> http://archives.postgresql.org/pgsql-performance/2007-06/msg00370.php\n> shows clearly that there are lots of rows in the table as well as\n> the indexes. A REINDEX would certainly cut the size of the indexes\n> but it isn't going to do anything about the extraneous rows.\n>\n> When last heard from, you were working on getting pg_filedump \n> output for\n> some of the bogus rows --- what was the result?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Wed, 20 Jun 2007 14:09:43 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> Okay, so I did some digging with pg_filedump, and found the following:\n\n> Block 406 ********************************************************\n> Item 1 -- Length: 472 Offset: 7720 (0x1e28) Flags: USED\n> XMIN: 1489323584 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n> Block Id: 406 linp Index: 1 Attributes: 6 Size: 32\n> infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\nThis is pretty much what you'd expect for a never-updated tuple...\n\n> mydb=# select * from _my_cluster.sl_log_1 where ctid = '(406,1)';\n> log_origin | log_xid | log_tableid | log_actionseq | log_cmdtype | \n> log_cmddata\n> ------------+---------+-------------+---------------+------------- \n> +-------------\n> (0 rows)\n\nso I have to conclude that you've got a wraparound problem. What is the\ncurrent XID counter? (pg_controldata will give you that, along with a\nlot of other junk.) It might also be interesting to take a look at\n\"ls -l $PGDATA/pg_clog\"; the mod times on the files in there would give\nus an idea how fast XIDs are being consumed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 14:37:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Drat! I'm wrong again. I thought for sure there wouldn't be a \nwraparound problem.\nSo does this affect the entire database server, or just this table? \nIs best way to\nproceed to immediately ditch this db and promote one of my slaves to \na master? I'm just\nconcerned about the data integrity. 
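If it really is wraparound, my reading of the 8.0 docs is that the standard \nprescription is a database-wide VACUUM (plain, not FULL) run as a superuser \nin each database, and then re-checking how old things look -- roughly:\n\n-- connected to each database in turn, as a superuser\nVACUUM;\n\nSELECT datname, age(datfrozenxid) FROM pg_database;\n\nI'm not sure I want to rely on that alone, though. 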
Note that I don't use OID for \nanything really, so I'm\nhoping I'll be safe.\n\nThanks again, Tom.\n\n/kurt\n\n\npg_controldata output:\n\n-bash-3.00$ pg_controldata\npg_control version number: 74\nCatalog version number: 200411041\nDatabase system identifier: 4903924957417782767\nDatabase cluster state: in production\npg_control last modified: Wed 20 Jun 2007 03:19:52 PM CDT\nCurrent log file ID: 952\nNext log file segment: 154\nLatest checkpoint location: 3B8/920F0D78\nPrior checkpoint location: 3B8/8328E4A4\nLatest checkpoint's REDO location: 3B8/9200BBF0\nLatest checkpoint's UNDO location: 0/0\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's NextXID: 1490547335\nLatest checkpoint's NextOID: 3714961319\nTime of latest checkpoint: Wed 20 Jun 2007 03:17:50 PM CDT\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum number of function arguments: 32\nDate/time type storage: floating-point numbers\nMaximum length of locale name: 128\nLC_COLLATE: en_US.UTF-8\nLC_CTYPE: en_US.UTF-8\n-bash-3.00$ echo $PGDATA\n\n\nHere's the list from pg_clog for June:\n\n-rw------- 1 postgres postgres 262144 Jun 1 03:36 054D\n-rw------- 1 postgres postgres 262144 Jun 1 08:16 054E\n-rw------- 1 postgres postgres 262144 Jun 1 10:24 054F\n-rw------- 1 postgres postgres 262144 Jun 1 17:03 0550\n-rw------- 1 postgres postgres 262144 Jun 2 03:32 0551\n-rw------- 1 postgres postgres 262144 Jun 2 10:04 0552\n-rw------- 1 postgres postgres 262144 Jun 2 19:24 0553\n-rw------- 1 postgres postgres 262144 Jun 3 03:38 0554\n-rw------- 1 postgres postgres 262144 Jun 3 13:19 0555\n-rw------- 1 postgres postgres 262144 Jun 4 00:02 0556\n-rw------- 1 postgres postgres 262144 Jun 4 07:12 0557\n-rw------- 1 postgres postgres 262144 Jun 4 12:37 0558\n-rw------- 1 postgres postgres 262144 Jun 4 19:46 0559\n-rw------- 1 postgres postgres 262144 Jun 5 03:36 055A\n-rw------- 1 postgres postgres 262144 Jun 5 10:54 055B\n-rw------- 1 postgres postgres 262144 Jun 5 18:11 055C\n-rw------- 1 postgres postgres 262144 Jun 6 03:38 055D\n-rw------- 1 postgres postgres 262144 Jun 6 10:15 055E\n-rw------- 1 postgres postgres 262144 Jun 6 15:10 055F\n-rw------- 1 postgres postgres 262144 Jun 6 23:21 0560\n-rw------- 1 postgres postgres 262144 Jun 7 07:15 0561\n-rw------- 1 postgres postgres 262144 Jun 7 13:43 0562\n-rw------- 1 postgres postgres 262144 Jun 7 22:53 0563\n-rw------- 1 postgres postgres 262144 Jun 8 07:12 0564\n-rw------- 1 postgres postgres 262144 Jun 8 14:42 0565\n-rw------- 1 postgres postgres 262144 Jun 9 01:30 0566\n-rw------- 1 postgres postgres 262144 Jun 9 09:19 0567\n-rw------- 1 postgres postgres 262144 Jun 9 20:19 0568\n-rw------- 1 postgres postgres 262144 Jun 10 03:39 0569\n-rw------- 1 postgres postgres 262144 Jun 10 15:38 056A\n-rw------- 1 postgres postgres 262144 Jun 11 03:34 056B\n-rw------- 1 postgres postgres 262144 Jun 11 09:14 056C\n-rw------- 1 postgres postgres 262144 Jun 11 13:59 056D\n-rw------- 1 postgres postgres 262144 Jun 11 19:41 056E\n-rw------- 1 postgres postgres 262144 Jun 12 03:37 056F\n-rw------- 1 postgres postgres 262144 Jun 12 09:59 0570\n-rw------- 1 postgres postgres 262144 Jun 12 17:23 0571\n-rw------- 1 postgres postgres 262144 Jun 13 03:32 0572\n-rw------- 1 postgres postgres 262144 Jun 13 09:16 0573\n-rw------- 1 postgres postgres 262144 Jun 13 16:25 0574\n-rw------- 1 postgres postgres 262144 Jun 14 01:28 0575\n-rw------- 1 postgres postgres 262144 Jun 14 08:40 0576\n-rw------- 1 
postgres postgres 262144 Jun 14 15:07 0577\n-rw------- 1 postgres postgres 262144 Jun 14 22:00 0578\n-rw------- 1 postgres postgres 262144 Jun 15 03:36 0579\n-rw------- 1 postgres postgres 262144 Jun 15 12:21 057A\n-rw------- 1 postgres postgres 262144 Jun 15 18:10 057B\n-rw------- 1 postgres postgres 262144 Jun 16 03:32 057C\n-rw------- 1 postgres postgres 262144 Jun 16 09:17 057D\n-rw------- 1 postgres postgres 262144 Jun 16 19:32 057E\n-rw------- 1 postgres postgres 262144 Jun 17 03:39 057F\n-rw------- 1 postgres postgres 262144 Jun 17 13:26 0580\n-rw------- 1 postgres postgres 262144 Jun 17 23:11 0581\n-rw------- 1 postgres postgres 262144 Jun 18 04:40 0582\n-rw------- 1 postgres postgres 262144 Jun 18 12:23 0583\n-rw------- 1 postgres postgres 262144 Jun 18 17:22 0584\n-rw------- 1 postgres postgres 262144 Jun 18 19:40 0585\n-rw------- 1 postgres postgres 262144 Jun 19 03:38 0586\n-rw------- 1 postgres postgres 262144 Jun 19 09:30 0587\n-rw------- 1 postgres postgres 262144 Jun 19 10:23 0588\n-rw------- 1 postgres postgres 262144 Jun 19 16:10 0589\n-rw------- 1 postgres postgres 262144 Jun 19 21:45 058A\n-rw------- 1 postgres postgres 262144 Jun 20 03:38 058B\n-rw------- 1 postgres postgres 262144 Jun 20 12:17 058C\n-rw------- 1 postgres postgres 131072 Jun 20 15:13 058D\n\n\nOn Jun 20, 2007, at 2:37 PM, Tom Lane wrote:\n>\n> so I have to conclude that you've got a wraparound problem. What \n> is the\n> current XID counter? (pg_controldata will give you that, along with a\n> lot of other junk.) It might also be interesting to take a look at\n> \"ls -l $PGDATA/pg_clog\"; the mod times on the files in there would \n> give\n> us an idea how fast XIDs are being consumed.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Wed, 20 Jun 2007 16:22:10 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " }, { "msg_contents": "Kurt Overberg <[email protected]> writes:\n> Drat! I'm wrong again. I thought for sure there wouldn't be a \n> wraparound problem.\n\nWell, I'm not sure what it is now. You showed some invisible tuples\nwith XMINs of\n XMIN: 1489323584 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n XMIN: 1489323590 CMIN: 2 XMAX: 0 CMAX|XVAC: 0\n XMIN: 1489323592 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\nbut the nextXID is\n 1490547335\nwhich is not that far ahead of those --- about 1.2 million transactions,\nor less than a day's time according to the clog timestamps, which\nsuggest that you're burning several million XIDs a day. Perhaps you've\nwrapped past them since your earlier check --- if you try the same\n\"select where ctid = \" queries now, do they show rows?\n\nThe other thing that's strange here is that an 8.0 installation should\nbe pretty aggressive about recycling pg_clog segments, and yet you've\ngot a bunch there. How far back do the files in pg_clog go --- what's\nthe numeric range of the filenames, and the date range of their mod\ntimes? Have you checked the postmaster log to see if you're getting any\ncomplaints about checkpoint failures or anything like that? It would\nalso be useful to look at the output of\nselect datname, age(datfrozenxid) from pg_database;\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 17:08:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maintenance question / DB size anomaly... 
" }, { "msg_contents": "Okay,\n\n\n select * from _my_cluster.sl_log_1 where ctid = '(1,1)';\n select * from _my_cluster.sl_log_1 where ctid = '(1,2)';\n select * from _my_cluster.sl_log_1 where ctid = '(1,3)';\n select * from _my_cluster.sl_log_1 where ctid = '(1,4)';\n\nall returns zero rows. When I do a dump of that file, I get:\n\nBlock 1 ********************************************************\n<Header> -----\nBlock Offset: 0x00002000 Offsets: Lower 408 (0x0198)\nBlock: Size 8192 Version 2 Upper 7680 (0x1e00)\nLSN: logid 955 recoff 0x0daed68c Special 8192 (0x2000)\nItems: 97 Free Space: 7272\nLength (including item array): 412\n\n<Data> ------\nItem 1 -- Length: 121 Offset: 8068 (0x1f84) Flags: USED\n XMIN: 1491480520 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n Block Id: 1 linp Index: 1 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\n...the fact that they weren't in the table, but in the file (I did \nthe filedump first,\nthen the query), then redid the filedump, the results are the same, \nthe rows are still\nin the file. I have no idea how frequently these files are getting \nwritten to, I assume\nfrequently. I also looked at the last block listed in the file, \n6445, and also looked for\nitems 1-4, and also did not find them in the table using a similar \nselect as above. That seems\nkinda strange, since there's right this second 11,000 items in that \ntable, but I'll roll with it for awhile.\n\nIntrigued, I wanted to see what a filedump looked like of a row that \nWAS in the table:\n\nctid | log_origin | log_xid | log_tableid | log_actionseq | \nlog_cmdtype |\n (7,1) | 10 | 1491481037 | 8 | 473490934 | \nI | (memberid,answerid,taskinstanceid) values \n('144854','148707','0')\n\n\n\nBlock 7 ********************************************************\n<Header> -----\nBlock Offset: 0x0000e000 Offsets: Lower 424 (0x01a8)\nBlock: Size 8192 Version 2 Upper 508 (0x01fc)\nLSN: logid 955 recoff 0x0dc4bcc0 Special 8192 (0x2000)\nItems: 101 Free Space: 84\nLength (including item array): 428\n\n<Data> ------\nItem 1 -- Length: 129 Offset: 8060 (0x1f7c) Flags: USED\n XMIN: 1491481037 CMIN: 7 XMAX: 0 CMAX|XVAC: 0\n Block Id: 7 linp Index: 1 Attributes: 6 Size: 32\n infomask: 0x0912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID)\n\n\n...the NextID was (taken about 5 mins after the previous filedump):\n Latest checkpoint's NextXID: 1491498183\n\n\nI don't see any real differences in the file entry for a row that is \nin the table, and one that I\ndon't see in the table. I hope I'm getting this right, its totally \nfascinating seeing how\nall this works.\n\nAbout your other questions:\n\n1. I have pg_clog segments all the way back to the start of the \ndatabase, all the way back\nto March 14th, 2006 (most likely when the database was first brought \nup on this machine).\nThe numeric names start at 0000 and go to 058E. I checked the recent \n(within last 8 days)\nand saw no errors containing the word 'checkpoint'. In fact, very \nfew errors at all.\nThe dang thing seems to be running pretty well, just a little slow.\n\nmydb=# select datname, age(datfrozenxid) from pg_database;\n datname | age\n-----------+------------\ntemplate1 | 1491520270\ntemplate0 | 1491520270\npostgres | 1491520270\nmydb | 1076194904\n\n\nOooooooo..... thats not good, is it? Thanks for taking an interest, \nTom. 
I'm most likely going to\npromote one of my subscribers to be master, then nuke this database, \nbut I have no problems keeping it\naround if you think I may have found some obscure bug that could help \nsomeone debug. Again, this\nDB gets vacuumed every day, and in the beginning, I think I remember \ndoing a vacuum full every\nday.\n\nThanks,\n\n/kurt\n\n\nOn Jun 20, 2007, at 5:08 PM, Tom Lane wrote:\n\n> Kurt Overberg <[email protected]> writes:\n>> Drat! I'm wrong again. I thought for sure there wouldn't be a\n>> wraparound problem.\n>\n> Well, I'm not sure what it is now. You showed some invisible tuples\n> with XMINs of\n> XMIN: 1489323584 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n> XMIN: 1489323590 CMIN: 2 XMAX: 0 CMAX|XVAC: 0\n> XMIN: 1489323592 CMIN: 1 XMAX: 0 CMAX|XVAC: 0\n> but the nextXID is\n> 1490547335\n> which is not that far ahead of those --- about 1.2 million \n> transactions,\n> or less than a day's time according to the clog timestamps, which\n> suggest that you're burning several million XIDs a day. Perhaps \n> you've\n> wrapped past them since your earlier check --- if you try the same\n> \"select where ctid = \" queries now, do they show rows?\n>\n> The other thing that's strange here is that an 8.0 installation should\n> be pretty aggressive about recycling pg_clog segments, and yet you've\n> got a bunch there. How far back do the files in pg_clog go --- what's\n> the numeric range of the filenames, and the date range of their mod\n> times? Have you checked the postmaster log to see if you're \n> getting any\n> complaints about checkpoint failures or anything like that? It would\n> also be useful to look at the output of\n> select datname, age(datfrozenxid) from pg_database;\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 20 Jun 2007 20:43:57 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maintenance question / DB size anomaly... " } ]
[ { "msg_contents": "Please read the whole email before replying:\n\n \n\nI love the feedback I have received but I feel that somehow I did not\ncommunicate the intent of this mini project very well. So let me\noutline a few basics and who the audience was intended for.\n\n \n\n \n\nMini project title:\n\nInitial Configuration Tool for PostgreSQL for Dummies\n\n \n\n1) This is intended for newbie's. Not for experienced users or advanced\nDBAs.\n\n \n\n2) This tool is NOT intended to monitor your PostgreSQL efficiency.\n\n \n\n3) I suggested JavaScript because most people that get started with\nPostgreSQL will go to the web in order to find out about issues relating\nto configuration. I wanted a very simple way for people to access the\ntool that would not be tied to any particular environment or OS. If\nthere is someone that is using a text browser to view the web then they\nare probably geeky enough not to want to bother with using this tool.\n\n \n\n4) The intent is just to give people that have no clue a better starting\npoint than some very generic defaults.\n\n \n\nPlease think simple. I stress the word simple. The real challenge here\nis getting the formulas correct. Someone mentioned to not focus on the\nvalues but just get something out there for everyone to help tweak. I\nagree!\n\n \n\nWhat questions do you think should be asked in order to figure out what\nvalues should go into the formulas for the configuration suggestions? \n\n \n\nMy thoughts:\n\n \n\nWhat version of PostgreSQL are you using?\n\nHow much memory will be available to PostgreSQL?\n\nHow many connections will be made to PostgreSQL?\n\n \n\n \n\nThanks,\n\n \n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPlease read the whole email before replying:\n \nI love the feedback I have received but I feel that somehow I did not\ncommunicate the intent of this mini project very well.  So let me outline a few\nbasics and who the audience was intended for.\n \n \nMini project title:\nInitial Configuration Tool for PostgreSQL for Dummies\n \n1) This is intended for newbie's.  Not for experienced users or\nadvanced DBAs.\n \n2) This tool is NOT intended to monitor your PostgreSQL efficiency.\n \n3) I suggested JavaScript because most people that get started with\nPostgreSQL will go to the web in order to find out about issues relating to configuration. \nI wanted a very simple way for people to access the tool that would not be tied\nto any particular environment or OS.  If there is someone that is using a text\nbrowser to view the web then they are probably geeky enough not to want to\nbother with using this tool.\n \n4) The intent is just to give people that have no clue a better\nstarting point than some very generic defaults.\n \nPlease think simple.  I stress the word simple.  The real challenge here\nis getting the formulas correct.  Someone mentioned to not focus on the values\nbut just get something out there for everyone to help tweak.  I agree!\n \nWhat questions do you think should be asked in order to figure out what\nvalues should go into the formulas for the configuration suggestions?  
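\n \nAs a concrete anchor, here is how a newbie could inspect the handful of \nsettings I picture the tool writing out first (this particular list is just \nmy assumption, nothing settled):\n \nSELECT name, setting\nFROM pg_settings\nWHERE name IN ('max_connections', 'shared_buffers', 'work_mem',\n               'maintenance_work_mem', 'effective_cache_size');\n \nThat also gives a simple before/after view once the suggested values are \napplied.\n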
\n \nMy thoughts:\n \nWhat version of PostgreSQL are you using?\nHow much memory will be available to PostgreSQL?\nHow many connections will be made to PostgreSQL?\n \n \nThanks,\n \n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Tue, 19 Jun 2007 11:23:50 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance wrote:\n> Please think simple. I stress the word simple. The real challenge here\n> is getting the formulas correct. Someone mentioned to not focus on the\n> values but just get something out there for everyone to help tweak. I\n> agree!\n> \n> What questions do you think should be asked in order to figure out what\n> values should go into the formulas for the configuration suggestions? \n> \n> My thoughts:\n> \n> What version of PostgreSQL are you using?\n\nOK, obviously not needed if embedded in the manuals.\n\n > How many connections will be made to PostgreSQL?\nOK (but changed order)\n\n> How much memory will be available to PostgreSQL?\nWould structure it like:\n- What is total memory of your machine?\n- How much do you want to reserve for other apps (e.g. apache/java)?\n\nAlso:\n- How many disks will PG be using?\n- How much data do you think you'll store?\n- Will your usage be: mostly reads|balance of read+write|mostly writes\n- Are your searches: all very simple|few complex|lots of complex queries\n\nThen, with the output provide a commentary stating reasons why for the \nchosen values. e.g.\n random_page_cost = 1.0\n Because you have [effective_cache_size = 1GB] and [total db size = \n0.5GB] the cost of fetching a page is the same no matter what order you \nfetch them in.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jun 2007 17:54:53 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance writes:\n\n> 3) I suggested JavaScript because most people that get started with \n> PostgreSQL will go to the web in order to find out about issues relating \n\nWhy not c?\nIt could then go into contrib.\nAnyways.. language is likely the least important issue..\nAs someone mentioned.. once the formulas are worked out it can be done in a \nfew languages.. as people desire..\n\n> How much memory will be available to PostgreSQL?\n> How many connections will be made to PostgreSQL?\n\nWill this be a dedicated Postgresql server?\nWill there be mostly reads or will there also be significant amount of \nwrites?\n\nAre you on a RAID system or do you have several disks over which you would \nlike to run postgresql on?\n", "msg_date": "Tue, 19 Jun 2007 12:58:26 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Francisco and Richard,\nWhy ask about disk or raid? 
How would that impact any settings in\npostgresql.conf?\n\nI did forget the obvious question:\n\nWhat OS are you using?\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: Francisco Reyes [mailto:[email protected]] \nSent: Tuesday, June 19, 2007 11:58 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] PostgreSQL Configuration Tool for Dummies\n\nCampbell, Lance writes:\n\n> 3) I suggested JavaScript because most people that get started with \n> PostgreSQL will go to the web in order to find out about issues\nrelating \n\nWhy not c?\nIt could then go into contrib.\nAnyways.. language is likely the least important issue..\nAs someone mentioned.. once the formulas are worked out it can be done\nin a \nfew languages.. as people desire..\n\n> How much memory will be available to PostgreSQL?\n> How many connections will be made to PostgreSQL?\n\nWill this be a dedicated Postgresql server?\nWill there be mostly reads or will there also be significant amount of \nwrites?\n\nAre you on a RAID system or do you have several disks over which you\nwould \nlike to run postgresql on?\n", "msg_date": "Tue, 19 Jun 2007 12:03:10 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance wrote:\n> Francisco and Richard,\n> Why ask about disk or raid? How would that impact any settings in\n> postgresql.conf?\n\nWell, random_page_cost will depend on how fast your disk system can \nlocate a non-sequential page. If you have a 16-disk RAID-10 array that's \nnoticably less time than a single 5400rpm IDE in a laptop.\n\n> I did forget the obvious question:\n> \n> What OS are you using?\n\nTricky to keep simple, isn't it :-)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jun 2007 18:09:56 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "\n\n> What version of PostgreSQL are you using?\n\n\tI think newbies should be pushed a bit to use the latest versions, maybe \nwith some advice on how to setup the apt sources (in debian/ubuntu) to get \nthem.\n\n> How much memory will be available to PostgreSQL?\n>\n> How many connections will be made to PostgreSQL?\n\n\tI also think Postgres newbies using PHP should be encouraged to use \nsomething like ligttpd/fastcgi instead of Apache. The fastcgi model \npermits use of very few database connections and working PHP processes \nsince lighttpd handles all the slow transfers to the client \nasynchronously. You can do the same with two Apache instances, one serving \nstatic pages and acting as a proxy for the second Apache serving dynamic \npages.\n\tWith this setup, even low-end server setups (For our personal sites, a \nfriend and I share a dedicated server with 256MB of RAM, which we rent for \n20€ a month). This thing will never run 200 Apache processes, but we have \nno problem with lighttpd/php-fcgi and postgres.\n", "msg_date": "Tue, 19 Jun 2007 19:12:33 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance writes:\n\n> Francisco and Richard,\n> Why ask about disk or raid? 
How would that impact any settings in\n> postgresql.conf?\n\nIf the user has 2 disks and says that he will do a lot of updates he could \nput pg_xlog in the second disk.\n\n", "msg_date": "Tue, 19 Jun 2007 13:25:56 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 19 Jun 2007 12:58:26 -0400\nFrancisco Reyes <[email protected]> wrote:\n> Campbell, Lance writes:\n> > 3) I suggested JavaScript because most people that get started with \n> > PostgreSQL will go to the web in order to find out about issues relating \n> \n> Why not c?\n\nWhy not whatever and install it on www.PostgreSQL.org? Is there any\nreason that this tool would need to be run on every installation. Run\nit on the site and it can always be up to date and can be written in\nwhatever language is easiest to maintain on the mother system.\n\nI would also like to make a pitch for a JavaScript-free tool. Just\ncollect all the pertinent information, work it out and display the\nresults in a second page. Some people just don't like JavaScript and\nturn it off even if we can run it in our browser.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 19 Jun 2007 13:32:14 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On 6/19/07, Francisco Reyes <[email protected]> wrote:\n>\n> Campbell, Lance writes:\n>\n> > Francisco and Richard,\n> > Why ask about disk or raid? How would that impact any settings in\n> > postgresql.conf?\n>\n> If the user has 2 disks and says that he will do a lot of updates he could\n> put pg_xlog in the second disk.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nLet's not ask about disk or raid at this level of sanity tuning. It is\nimportant for a newbie to take the right first step. When it comes to disks,\nwe start talking I/O, SATA, SCSI and the varying degrees of SATA and SCSI,\nand controller cards. Then we throw in RAID and the different levels\ntherein. Add to that, we can talk about drivers controlling these drives and\nwhich OS is faster, more stable, etc. As you can see, a newbie would get\ndrowned. So, please keep it simple. I know many people on this list are\nGurus. We know you are the best in this field, but we are not and are just\ntrying to improve what we have.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/19/07, Francisco Reyes <[email protected]> wrote:\nCampbell, Lance writes:> Francisco and Richard,> Why ask about disk or raid?  How would that impact any settings in> postgresql.conf?If the user has 2 disks and says that he will do a lot of updates he could\nput pg_xlog in the second disk.---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmasterLet's\nnot ask about disk or raid at this level of sanity tuning. It is\nimportant for a newbie to take the right first step. When it comes to\ndisks, we start talking I/O, SATA, SCSI and the varying degrees of SATA\nand SCSI, and controller cards. Then we throw in RAID and the different\nlevels therein. Add to that, we can talk about drivers controlling\nthese drives and which OS is faster, more stable, etc. 
As you can see,\na newbie would get drowned. So, please keep it simple. I know many\npeople on this list are Gurus. We know you are the best in this field,\nbut we are not and are just trying to improve what we have.-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Tue, 19 Jun 2007 10:49:01 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Yudhvir,\n\nI completely agree. I was just putting together a similar email.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Y Sidhu\nSent: Tuesday, June 19, 2007 12:49 PM\nTo: [email protected]\nSubject: Re: [PERFORM] PostgreSQL Configuration Tool for Dummies\n\n \n\n \n\nOn 6/19/07, Francisco Reyes <[email protected]> wrote:\n\nCampbell, Lance writes:\n\n> Francisco and Richard,\n> Why ask about disk or raid? How would that impact any settings in\n> postgresql.conf?\n\nIf the user has 2 disks and says that he will do a lot of updates he\ncould \nput pg_xlog in the second disk.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\nLet's not ask about disk or raid at this level of sanity tuning. It is\nimportant for a newbie to take the right first step. When it comes to\ndisks, we start talking I/O, SATA, SCSI and the varying degrees of SATA\nand SCSI, and controller cards. Then we throw in RAID and the different\nlevels therein. Add to that, we can talk about drivers controlling these\ndrives and which OS is faster, more stable, etc. As you can see, a\nnewbie would get drowned. So, please keep it simple. I know many people\non this list are Gurus. We know you are the best in this field, but we\nare not and are just trying to improve what we have.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nYudhvir,\nI completely agree.  I was just putting\ntogether a similar email.\n \nThanks,\n \n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Y Sidhu\nSent: Tuesday, June 19, 2007 12:49\nPM\nTo:\[email protected]\nSubject: Re: [PERFORM] PostgreSQL\nConfiguration Tool for Dummies\n\n \n \n\nOn 6/19/07, Francisco\nReyes <[email protected]>\nwrote:\nCampbell, Lance\nwrites:\n\n> Francisco and Richard,\n> Why ask about disk or raid?  How would that impact any settings\nin\n> postgresql.conf?\n\nIf the user has 2 disks and says that he will do a lot of updates he could \nput pg_xlog in the second disk.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\nLet's not ask about disk or raid at this level of sanity tuning. It is\nimportant for a newbie to take the right first step. When it comes to disks, we\nstart talking I/O, SATA, SCSI and the varying degrees of SATA and SCSI, and\ncontroller cards. Then we throw in RAID and the different levels therein. Add\nto that, we can talk about drivers controlling these drives and which OS is\nfaster, more stable, etc. As you can see, a newbie would get drowned. So,\nplease keep it simple. 
I know many people on this list are Gurus. We know you\nare the best in this field, but we are not and are just trying to improve what\nwe have.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell", "msg_date": "Tue, 19 Jun 2007 12:53:41 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "D'Arcy,\nI wanted to put it on the www.postgresql.org site. That is what I said\nin my original email. I don't believe anyone from the actual project\nhas contacted me.\n\nI am setting up a JavaScript version first. If someone wants to do a\ndifferent one feel free. I will have all of the calculations in the\nJavaScript so it should be easy to do it in any language.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of D'Arcy J.M.\nCain\nSent: Tuesday, June 19, 2007 12:32 PM\nTo: Francisco Reyes\nCc: [email protected]\nSubject: Re: [PERFORM] PostgreSQL Configuration Tool for Dummies\n\nOn Tue, 19 Jun 2007 12:58:26 -0400\nFrancisco Reyes <[email protected]> wrote:\n> Campbell, Lance writes:\n> > 3) I suggested JavaScript because most people that get started with \n> > PostgreSQL will go to the web in order to find out about issues\nrelating \n> \n> Why not c?\n\nWhy not whatever and install it on www.PostgreSQL.org? Is there any\nreason that this tool would need to be run on every installation. Run\nit on the site and it can always be up to date and can be written in\nwhatever language is easiest to maintain on the mother system.\n\nI would also like to make a pitch for a JavaScript-free tool. Just\ncollect all the pertinent information, work it out and display the\nresults in a second page. Some people just don't like JavaScript and\nturn it off even if we can run it in our browser.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 19 Jun 2007 13:09:25 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 19 Jun 2007, Y Sidhu wrote:\n\n> On 6/19/07, Francisco Reyes <[email protected]> wrote:\n>>\n>> Campbell, Lance writes:\n>> \n>> > Francisco and Richard,\n>> > Why ask about disk or raid? How would that impact any settings in\n>> > postgresql.conf?\n>>\n>> If the user has 2 disks and says that he will do a lot of updates he could\n>> put pg_xlog in the second disk.\n>> \n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>> \n>\n> Let's not ask about disk or raid at this level of sanity tuning. It is\n> important for a newbie to take the right first step. When it comes to disks,\n> we start talking I/O, SATA, SCSI and the varying degrees of SATA and SCSI,\n> and controller cards. Then we throw in RAID and the different levels\n> therein. Add to that, we can talk about drivers controlling these drives and\n> which OS is faster, more stable, etc. As you can see, a newbie would get\n> drowned. So, please keep it simple. 
I know many people on this list are\n> Gurus. We know you are the best in this field, but we are not and are just\n> trying to improve what we have.\n\nI strongly agree.\n\nbesides, the number and types of drives, raid configurations, etc is so \nvariable that I strongly believe that the right answer is going to be \nsomething along the lines of 'run this tool and then enter the number(s) \nthat the tool reports' and then let the tool measure the end result of all \nthe variables rather then trying to calculate the results.\n\nDavid Lang\n", "msg_date": "Tue, 19 Jun 2007 12:02:39 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, Jun 19, 2007 at 10:49:01AM -0700, Y Sidhu wrote:\n> On 6/19/07, Francisco Reyes <[email protected]> wrote:\n> >\n> >Campbell, Lance writes:\n> >\n> >> Francisco and Richard,\n> >> Why ask about disk or raid? How would that impact any settings in\n> >> postgresql.conf?\n> >\n> >If the user has 2 disks and says that he will do a lot of updates he could\n> >put pg_xlog in the second disk.\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: Don't 'kill -9' the postmaster\n> >\n> \n> Let's not ask about disk or raid at this level of sanity tuning. It is\n> important for a newbie to take the right first step. When it comes to disks,\n> we start talking I/O, SATA, SCSI and the varying degrees of SATA and SCSI,\n> and controller cards. Then we throw in RAID and the different levels\n> therein. Add to that, we can talk about drivers controlling these drives and\n> which OS is faster, more stable, etc. As you can see, a newbie would get\n> drowned. So, please keep it simple. I know many people on this list are\n> Gurus. We know you are the best in this field, but we are not and are just\n> trying to improve what we have.\n\n\n\nIgnoring the i/o subsystem in db configuration, there's an idea.\n\nYou could request some bonnie++ output (easy to aquire) as a baseline, \ndo your magic analysis based on this, and skip it if it is not provided\nwith a warning. Course the magic may be harder to come by. \n", "msg_date": "Tue, 19 Jun 2007 16:09:14 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "PFC <[email protected]> writes:\n>> What version of PostgreSQL are you using?\n\n> \tI think newbies should be pushed a bit to use the latest versions,\n\nHow about pushed *hard* ? I'm constantly amazed at the number of people\nwho show up in the lists saying they installed 7.3.2 or whatever random\nversion they found in a dusty archive somewhere. \"Please upgrade\" is at\nleast one order of magnitude more valuable configuration advice than\nanything else we could tell them.\n\nIf the configurator is a live tool on the website, then it could be\naware of the latest release numbers and prod people with an appropriate\namount of urgency depending on how old they say their version is. 
This\nmay be the one good reason not to provide it as a standalone program.\n\n(No, we shouldn't make it try to \"phone home\" for latest release numbers\n--- in the first place, that won't work if the machine is really\nisolated from the net, and in the second place people will be suspicious\nof the motives.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jun 2007 17:40:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies " }, { "msg_contents": "Tom Lane wrote:\n> PFC <[email protected]> writes:\n>>> What version of PostgreSQL are you using?\n> \n>> \tI think newbies should be pushed a bit to use the latest versions,\n> \n> How about pushed *hard* ? I'm constantly amazed at the number of people\n> who show up in the lists saying they installed 7.3.2 or whatever random\n> version they found in a dusty archive somewhere. \"Please upgrade\" is at\n> least one order of magnitude more valuable configuration advice than\n> anything else we could tell them.\n\n(picking up an old thread while at a boring wait at the airport.. anyway)\n\nI keep trying to think of more nad better ways to do this :-) Perhaps we\nshould put some text on the bug reporting form (and in the documentation\nabout bug reporting) that's basically \"don't bother reporting a bug\nunless you're on the latest in a branch, and at least make sure you're\non one of the maojr releases listed on www.postgresql.org\"?\n\nSeems reasonable?\n\n//Magnus\n\n\n", "msg_date": "Thu, 05 Jul 2007 07:15:04 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Magnus,\n\n\"don't bother reporting a bug\n> unless you're on the latest in a branch, and at least make sure you're\n> on one of the maojr releases listed on www.postgresql.org\"?\n>\n> Seems reasonable?\n>\n\nabsolutely. Should be standard practice.\n\nHarald\n\n--\nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July\nto Wednesday 11th July. See you there!\n\nMagnus,\"don't bother reporting a bugunless you're on the latest in a branch, and at least make sure you're\non one of the maojr releases listed on www.postgresql.org\"?Seems reasonable?absolutely. Should be standard practice. Harald\n--GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!", "msg_date": "Fri, 6 Jul 2007 06:25:53 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" } ]
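As a practical footnote to the upgrade discussion above: before acting on any tuning advice it is worth confirming which server release is actually running, since the advice differs so much between branches. A minimal check from psql (SELECT version() works on every release; the SHOW form is only available on reasonably recent servers):

    SELECT version();        -- full version string, including platform details
    SHOW server_version;     -- just the release number, e.g. 8.2.4

If the answer turns out to be one of the dusty 7.3-era installs mentioned above, upgrading is the first tuning step.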
[ { "msg_contents": "Below is a link to the HTML JavaScript configuration page I am creating:\n\n \n\nhttp://www.webservices.uiuc.edu/postgresql/\n\n \n\nI had many suggestions. Based on the feedback I received, I put\ntogether the initial list of questions. This list of questions can be\nchange.\n\n \n\nMemory\n\nThere are many different ways to ask about memory. Rather than ask a\nseries of questions I went with a single question, #2. If it is better\nto ask about the memory in a series of questions then please give me the\nquestions you would ask and why you would ask each of them. From my\nunderstanding the primary memory issue as it relates to PostgreSQL is\n\"how much memory is available to PostgreSQL\". Remember that this needs\nto be as simple as possible.\n\n \n\nMy next step is to list the PostgreSQL parameters found in the\npostgresql.conf file and how I will generate their values based on the\nquestions I have so far. I will primarily focus on PostgreSQL 8.2.x.\nOnce I have a consensus from everyone then I will put functionality\nbehind the \"Generate Suggested Settings\" button.\n\n \n\nThanks for all of the feedback, \n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBelow is a link to the HTML JavaScript configuration page I\nam creating:\n \nhttp://www.webservices.uiuc.edu/postgresql/\n \nI had many suggestions.  Based on the feedback I received,\nI put together the initial list of questions.  This list of questions can\nbe change.\n \nMemory\nThere are many different ways to ask about memory.  Rather\nthan ask a series of questions I went with a single question, #2.  If it\nis better to ask about the memory in a series of questions then please give me the\nquestions you would ask and why you would ask each of them.  From my\nunderstanding the primary memory issue as it relates to PostgreSQL is “how\nmuch memory is available to PostgreSQL”.  Remember that this needs\nto be as simple as possible.\n \nMy next step is to list the PostgreSQL parameters found in\nthe postgresql.conf file and how I will generate their values based on the\nquestions I have so far.  I will primarily focus on PostgreSQL 8.2.x. \nOnce I have a consensus from everyone then I will put functionality behind the “Generate\nSuggested Settings” button.\n \nThanks for all of the feedback, \n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Tue, 19 Jun 2007 13:15:59 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On 6/19/07, Campbell, Lance <[email protected]> wrote:\n>\n> Below is a link to the HTML JavaScript configuration page I am creating:\n>\n>\n>\n> http://www.webservices.uiuc.edu/postgresql/\n>\n>\n>\n> I had many suggestions. Based on the feedback I received, I put together\n> the initial list of questions. This list of questions can be change.\n>\n>\n>\n> Memory\n>\n> There are many different ways to ask about memory. Rather than ask a\n> series of questions I went with a single question, #2. If it is better to\n> ask about the memory in a series of questions then please give me the\n> questions you would ask and why you would ask each of them. 
From my\n> understanding the primary memory issue as it relates to PostgreSQL is \"how\n> much memory is available to PostgreSQL\". Remember that this needs to be as\n> simple as possible.\n>\n>\n>\n> My next step is to list the PostgreSQL parameters found in the\n> postgresql.conf file and how I will generate their values based on the\n> questions I have so far. I will primarily focus on PostgreSQL 8.2.x.\n> Once I have a consensus from everyone then I will put functionality behind\n> the \"Generate Suggested Settings\" button.\n>\n>\n>\n> Thanks for all of the feedback,\n>\n>\n>\n>\n>\n> Lance Campbell\n>\n> Project Manager/Software Architect\n>\n> Web Services at Public Affairs\n>\n> University of Illinois\n>\n> 217.333.0382\n>\n> http://webservices.uiuc.edu\n>\n>\n>\nLance,\n\nSimply awesome!\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 6/19/07, Campbell, Lance <[email protected]> wrote:\n\n\nBelow is a link to the HTML JavaScript configuration page I\nam creating:\n \nhttp://www.webservices.uiuc.edu/postgresql/\n\n \nI had many suggestions.  Based on the feedback I received,\nI put together the initial list of questions.  This list of questions can\nbe change.\n \nMemory\nThere are many different ways to ask about memory.  Rather\nthan ask a series of questions I went with a single question, #2.  If it\nis better to ask about the memory in a series of questions then please give me the\nquestions you would ask and why you would ask each of them.  From my\nunderstanding the primary memory issue as it relates to PostgreSQL is \"how\nmuch memory is available to PostgreSQL\".  Remember that this needs\nto be as simple as possible.\n \nMy next step is to list the PostgreSQL parameters found in\nthe postgresql.conf file and how I will generate their values based on the\nquestions I have so far.  I will primarily focus on PostgreSQL 8.2.x. \nOnce I have a consensus from everyone then I will put functionality behind the \"Generate\nSuggested Settings\" button.\n \nThanks for all of the feedback, \n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n\n \n\n\nLance,\n\nSimply awesome!-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Tue, 19 Jun 2007 11:37:53 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance writes:\n\nFor the \"6) Are your searches:\"\nHow about having \"many simple\"\n\n", "msg_date": "Tue, 19 Jun 2007 15:00:32 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 19 Jun 2007, Campbell, Lance wrote:\n\n> Memory\n>\n> There are many different ways to ask about memory. Rather than ask a\n> series of questions I went with a single question, #2. If it is better\n> to ask about the memory in a series of questions then please give me the\n> questions you would ask and why you would ask each of them. From my\n> understanding the primary memory issue as it relates to PostgreSQL is\n> \"how much memory is available to PostgreSQL\". Remember that this needs\n> to be as simple as possible.\n\nthere are three catagories of memory useage\n\n1. needed by other software\n2. available for postgres\n3. 
needed by the OS\n\nit's not clear if what you are asking is #2 or a combination of #2 and #3\n\nIMHO you should ask for #2 and #3, possibly along the lines of \"how much \nmemory is in the machine that isn't already used by other applications\"\n\nDavid Lang\n", "msg_date": "Tue, 19 Jun 2007 12:11:27 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "\n> there are three catagories of memory useage\n>\n> 1. needed by other software\n> 2. available for postgres\n> 3. needed by the OS\n\nThere's actually only two required memory questions:\n\nM1) How much RAM do you have on this machine?\nM2) Is this:\n\t() Dedicated PostgreSQL Server?\n\t() Server shared with a few other applications?\n\t() Desktop?\n\nI don't think the \"mostly reads / mostly writes\" question covers anything, \nnor is it likely to produce accurate answers. Instead, we need to ask the \nusers to characterize what type of application they are running:\n\nT1) Please characterize the general type of workload you will be running on \nthis database. Choose one of the following four:\n() WEB: any scripting-language application which mainly needs to support \n90% or more data reads, and many rapid-fire small queries over a large \nnumber of connections. Examples: forums, content management systems, \ndirectories. \n() OLTP: this application involves a large number of INSERTs, UPDATEs and \nDELETEs because most users are modifying data instead of just reading it. \nExamples: accounting, ERP, logging tools, messaging engines.\n() Data Warehousing: also called \"decision support\" and \"BI\", these \ndatabase support a fairly small number of large, complicated reporting \nqueries, very large tables, and large batch data loads.\n() Mixed/Other: if your application doesn't fit any of the above, our \nscript will try to pick \"safe, middle-of-the-road\" values.\n\nHmmm, drop question (6) too.\n\n(2) should read: \"What is the maximum number of database connections which \nyou'll need to support? If you don't know, we'll pick a default.\"\n\nOther questions we need:\n\nHow many/how fast processors do you have? Pick the option which seems \nclosest to what you have:\n() A single laptop processor\n() Single or dual older processors (1ghz)\n() Dual or quad current consumer processors (2ghz+)\n() Large, recent multi-core server system\n\n\"What OS Are You Using\", of course, needs to have Linux, Solaris, BSD, OSX \nand Windows. At some point, this tool will also need to generate for the \nuser any shmem settings that they need to make on the OS.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 19 Jun 2007 12:54:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 19 Jun 2007, Josh Berkus wrote:\n\n> \"What OS Are You Using\", of course, needs to have Linux, Solaris, BSD, OSX\n> and Windows. At some point, this tool will also need to generate for the\n> user any shmem settings that they need to make on the OS.\n\nI also noticed that on FreeBSD (6.2) at least the stock config simply \nwon't run without building a new kernel that bumps up all the SHM stuff or \ndropping down resource usage in the postgres config...\n\nOverall, I like the idea. 
I've been slowly working on weaning myself off \nof mysql and I think removing any roadblocks that new users might stumble \nupon seems like an excellent way to get more exposure.\n\nCharles\n\n> -- \n> --Josh\n>\n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n", "msg_date": "Tue, 19 Jun 2007 19:42:03 -0400 (EDT)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 19 Jun 2007, Josh Berkus wrote:\n\n> I don't think the \"mostly reads / mostly writes\" question covers anything,\n> nor is it likely to produce accurate answers. Instead, we need to ask the\n> users to characterize what type of application they are running:\n> T1) Please characterize the general type of workload you will be running on\n> this database. Choose one of the following four...\n\nWe've hashed through this area before, but for Lance's benefit I'll \nreiterate my dissenting position on this subject. If you're building a \n\"tool for dummies\", my opinion is that you shouldn't ask any of this \ninformation. I think there's an enormous benefit to providing something \nthat takes basic sizing information and gives conservative guidelines \nbased on that--as you say, \"safe, middle-of-the-road values\"--that are \nstill way, way more useful than the default values. The risk in trying to \nmake a complicated tool that satisfies all the users Josh is aiming his \nmore sophisticated effort at is that you'll lose the newbies.\n\nScan the archives of this mailing list for a bit. If you look at what \npeople discover they've being nailed by, it's rarely because they need to \noptimize something like random_page_cost. It's usually because they have \na brutally wrong value for one of the memory or vacuum parameters that are \nvery easy to provide reasonable suggestions for without needing a lot of \ninformation about the server.\n\nI wouldn't even bother asking how many CPUs somebody has for what Lance is \nbuilding. The kind of optimizations you'd do based on that are just too \ncomplicated to expect a tool to get them right and still be accessible to \na novice.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 20 Jun 2007 02:24:03 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 19 Jun 2007, Josh Berkus wrote:\n>\n>> I don't think the \"mostly reads / mostly writes\" question covers \n>> anything,\n>> nor is it likely to produce accurate answers. Instead, we need to \n>> ask the\n>> users to characterize what type of application they are running:\n>> T1) Please characterize the general type of workload you will be \n>> running on\n>> this database. Choose one of the following four...\n>\n> We've hashed through this area before, but for Lance's benefit I'll \n> reiterate my dissenting position on this subject. If you're building \n> a \"tool for dummies\", my opinion is that you shouldn't ask any of this \n> information. 
I think there's an enormous benefit to providing \n> something that takes basic sizing information and gives conservative \n> guidelines based on that--as you say, \"safe, middle-of-the-road \n> values\"--that are still way, way more useful than the default values. \n> The risk in trying to make a complicated tool that satisfies all the \n> users Josh is aiming his more sophisticated effort at is that you'll \n> lose the newbies. \nGenerally I agree, however, how about a first switch, for beginner / \nintermediate / advanced.\n\nThe choice you make determines how much detail we ask you about your \nsetup. Beginners get two or three simple questions, intermediate a \nhandful, and advanced gets grilled on everything. Then, just write the \nbeginner and maybe intermediate to begin with and ghost out the advanced \nuntil it's ready.\n", "msg_date": "Thu, 21 Jun 2007 10:12:33 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Thu, 21 Jun 2007, Scott Marlowe wrote:\n\n> Generally I agree, however, how about a first switch, for beginner / \n> intermediate / advanced.\n\nYou're describing a perfectly reasonable approach for a second generation \ntool in this area. I think it would be very helpful for the user \ncommunity to get a first generation one that works fairly well before \ngetting distracted at all by things like this. The people capable of \nfilling out the intermediate/advanced settings can probably just do a bit \nof reading and figure out most of what they should be doing themselves.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 22 Jun 2007 02:32:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Fri, 2007-06-22 at 02:32 -0400, Greg Smith wrote:\n> On Thu, 21 Jun 2007, Scott Marlowe wrote:\n> \n> > Generally I agree, however, how about a first switch, for beginner / \n> > intermediate / advanced.\n> \n> You're describing a perfectly reasonable approach for a second generation \n> tool in this area. I think it would be very helpful for the user \n> community to get a first generation one that works fairly well before \n> getting distracted at all by things like this. The people capable of \n> filling out the intermediate/advanced settings can probably just do a bit \n> of reading and figure out most of what they should be doing themselves.\n\nJust as an aside; how come the installation/setup \"Tutorial\" section -\nhttp://www.postgresql.org/docs/8.2/interactive/tutorial-start.html -\ndoesn't mention setting some rough reasonable defaults in\npostgresql.conf or even a reference to the parameter documentation\nsection. It seems like such a reference should exist between -\nhttp://www.postgresql.org/docs/8.2/interactive/tutorial-arch.html - and\n- http://www.postgresql.org/docs/8.2/interactive/tutorial-accessdb.html\n\nAt least something along those lines should be said at\nhttp://www.postgresql.org/docs/8.2/interactive/install-post.html\n\nPersonally, as DBA for more than a decade, I've got 0 sympathy for\npeople who setup a database but can't be bothered to read the\ndocumentation. 
But in the case of PostgreSQL the documentation could do\na better job of driving users to even the existence [and importance of]\npostgresql.conf and routine maintenance techniques.\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config.html\nhttp://www.postgresql.org/docs/8.2/interactive/maintenance.html\n\nSeems to me that even a remake of something like -\nhttp://www.iiug.org/~waiug/old/forum2000/SQLTunning/sld001.htm - focused\non PostgreSQL would be novel and very interesting.\n\nJust my two cents. \n\nPostgreSQL is awesome, BTW.\n\n\n\n", "msg_date": "Fri, 22 Jun 2007 08:05:46 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "\"\"Campbell, Lance\"\" <[email protected]> wrote in message news:[email protected]...\n Below is a link to the HTML JavaScript configuration page I am creating:\n\n \n\n http://www.webservices.uiuc.edu/postgresql/\n\n \n\n I had many suggestions. Based on the feedback I received, I put together the initial list of questions. This list of questions can be change.\n\n \n\nInstead of (or in addition to) configure dozens of settings, what do you say about a feedback adjustable control based on the existing system statistics and parsing logs (e.g http://pgfouine.projects.postgresql.org/index.html ) ?\n\nSuch an application improved with notifications would be useful for experimented users, too. A database is not static and it may evolve to different requirements. The initial configuration may be deprecated after one year.\n\nRegards,\nSabin\n\n\n\n\n\n\n \n\n\n\"\"Campbell, Lance\"\" <[email protected]> wrote in message news:[email protected]...\n\nBelow is a link to the HTML \n JavaScript configuration page I am creating:\n \nhttp://www.webservices.uiuc.edu/postgresql/\n \nI had many suggestions.  \n Based on the feedback I received, I put together the initial list of \n questions.  This list of questions can be \n change.\n \nInstead of  (or in addition to) configure \ndozens of settings, what do you say about a feedback adjustable control based on \nthe existing system statistics and parsing logs (e.g http://pgfouine.projects.postgresql.org/index.html ) \n?\n \nSuch an application improved \nwith notifications would be useful for experimented users, too. A \ndatabase is not static and it may evolve to different requirements. The \ninitial configuration may be deprecated after one \nyear.\n \nRegards,\nSabin", "msg_date": "Fri, 22 Jun 2007 16:24:47 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies - feedback adjustable\n\tcontrol" }, { "msg_contents": "On Fri, 22 Jun 2007, Sabin Coanda wrote:\n\n> Instead of (or in addition to) configure dozens of settings, what do you \n> say about a feedback adjustable control based on the existing system \n> statistics and parsing logs (e.g \n> http://pgfouine.projects.postgresql.org/index.html ) ?\n\nsomething like this would be useful for advanced tuneing, but the biggest \nproblem is that it's so difficult to fingoure out a starting point. bad \nchoices at the starting point can cause several orders of magnatude \ndifference in the database performsnce. 
In addition we know that the \ncurrent defaults are bad for just about everyone (we just can't decide \nwhat better defaults would be)\n\nthis horrible starting point gives people a bad first impression that a \nsimple tool like what's being discussed can go a long way towards solving.\n\nDavid Lang\n", "msg_date": "Fri, 22 Jun 2007 11:47:20 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies - feedback\n\tadjustable control" }, { "msg_contents": "On Fri, 22 Jun 2007, Adam Tauno Williams wrote:\n\n> Just as an aside; how come the installation/setup \"Tutorial\" section -\n> http://www.postgresql.org/docs/8.2/interactive/tutorial-start.html -\n> doesn't mention setting some rough reasonable defaults in\n> postgresql.conf or even a reference to the parameter documentation\n> section.\n\nI think that anyone who has been working with the software long to know \nwhat should go into such a section has kind of forgotten about this part \nof the documentation by the time they get there. It is an oversight and \nyours is an excellent suggestion.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 23 Jun 2007 15:32:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Fri, 22 Jun 2007, Sabin Coanda wrote:\n\n> Instead of (or in addition to) configure dozens of settings, what do you \n> say about a feedback adjustable control based on the existing system \n> statistics and parsing logs\n\nTake a look at the archive of this list for the end of April/Early May. \nThere's a thread there named \"Feature Request --- was: PostgreSQL \nPerformance Tuning\" that addressed this subject in length I think you'll \nfind interesting reading.\n\nI personally feel there's much more long-term potential for a tool that \ninspects the database, but the needs of something looking for getting good \nstarting configuration file (before there necessarily is even a populated \ndatabase) is different enough that it may justify building two different \ntools.\n\nI would suggest you or anything else building the starter configuration \ntool not stray from the path of getting the most important variables set \nto reasonable values. Trying to satisfy every possible user is the path \nthat leads to a design so complicated that it's unlikely you'll ever get a \nfinished build done at all.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 23 Jun 2007 15:44:04 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies - feedback\n\tadjustable control" }, { "msg_contents": "\n<[email protected]> wrote in message \nnews:[email protected]...\n> On Fri, 22 Jun 2007, Sabin Coanda wrote:\n>\n>> Instead of (or in addition to) configure dozens of settings, what do you \n>> say about a feedback adjustable control based on the existing system \n>> statistics and parsing logs (e.g \n>> http://pgfouine.projects.postgresql.org/index.html ) ?\n>\n> something like this would be useful for advanced tuneing, but the biggest \n> problem is that it's so difficult to fingoure out a starting point. bad \n> choices at the starting point can cause several orders of magnatude \n> difference in the database performsnce. 
In addition we know that the \n> current defaults are bad for just about everyone (we just can't decide \n> what better defaults would be)\n>\n\nYou are right. But an automatic tool beeing able to take decisions by \ndifferent inputs, would be able to set a startup configuration too, based on \nthe hw/sw environment, and interactive user requirements.\n\n> this horrible starting point gives people a bad first impression that a \n> simple tool like what's being discussed can go a long way towards solving.\n>\n\nWell, I think to an automatic tool, not an utopian application good for \neverything. For instance the existing automatic daemon have some abilities, \nbat not all of the VACUUM command. I'm realistic that good things may be \ndone in steps, not once.\n\nI would be super happy if an available automatic configuration tool would be \nable to set for the beginning just the shared_buffers or max_fsm_pages \nbased on the available memory. Adjustments can be done later.\n\nRegards,\nSabin \n\n\n", "msg_date": "Mon, 25 Jun 2007 14:47:43 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies - feedback adjustable\n\tcontrol" }, { "msg_contents": "Greg,\n\n> We've hashed through this area before, but for Lance's benefit I'll\n> reiterate my dissenting position on this subject. If you're building a\n> \"tool for dummies\", my opinion is that you shouldn't ask any of this\n> information. I think there's an enormous benefit to providing something\n> that takes basic sizing information and gives conservative guidelines\n> based on that--as you say, \"safe, middle-of-the-road values\"--that are\n> still way, way more useful than the default values. The risk in trying to\n> make a complicated tool that satisfies all the users Josh is aiming his\n> more sophisticated effort at is that you'll lose the newbies.\n\nThe problem is that there are no \"safe, middle-of-the-road\" values for some \nthings, particularly max_connections and work_mem. Particularly, there are \nvery different conf profiles between reporting applications and OLTP/Web. \nWe're talking about order-of-magnitude differences here, not just a few \npoints. e.g.:\n\nWeb app, typical machine:\nmax_connections = 200\nwork_mem = 256kb\ndefault_statistics_target=100\nautovacuum=on\n\nReporting app, same machine:\nmax_connections = 20\nwork_mem = 32mb\ndefault_statistics_target=500\nautovacuum=off\n\nPossibly we could make the language of the \"application type\" selection less \ntechnical, but I don't see it as dispensible even for a basic tool.\n\n> I wouldn't even bother asking how many CPUs somebody has for what Lance is\n> building. 
The kind of optimizations you'd do based on that are just too\n> complicated to expect a tool to get them right and still be accessible to\n> a novice.\n\nCPUs affects the various cpu_cost parameters, but I can but the idea that this \nshould only be part of the \"advanced\" tool.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 26 Jun 2007 08:26:28 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 26 Jun 2007, Josh Berkus wrote:\n\n> The problem is that there are no \"safe, middle-of-the-road\" values for some\n> things, particularly max_connections and work_mem.\n\nYour max_connections concern is one fact that haunts the idea of just \ngiving out some sample configs for people. Lance's tool asks outright the \nexpectation for max_connections which I think is the right thing to do.\n\n> Web app, typical machine:\n> work_mem = 256kb\n> default_statistics_target=100\n> autovacuum=on\n\n> Reporting app, same machine:\n> work_mem = 32mb\n> default_statistics_target=500\n> autovacuum=off\n\nI think people are stuck with actually learning a bit about work_mem \nwhether they like it or not, because it's important to make it larger but \nwe know going too high will be a problem with lots of connections doing \nsorts.\n\nAs for turning autovacuum on/off and the stats target, I'd expect useful \ndefaults for those would come out of how the current sample is asking \nabout read vs. write workloads and expected database size. Those simple \nto understand questions might capture enough of the difference between \nyour two types here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 26 Jun 2007 14:14:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Greg,\n\n> Your max_connections concern is one fact that haunts the idea of just\n> giving out some sample configs for people. Lance's tool asks outright the\n> expectation for max_connections which I think is the right thing to do.\n...\n> I think people are stuck with actually learning a bit about work_mem\n> whether they like it or not, because it's important to make it larger but\n> we know going too high will be a problem with lots of connections doing\n> sorts.\n\nI find it extremely inconsistent that you want to select \"middle-of-the-road\" \ndefaults for some values and ask users detailed questions for other values. \nWhich are we trying to do, here?\n\nGiven an \"application type\" selection, which is a question which can be \nwritten in easy-to-understand terms, these values can be set at reasonable \ndefaults. In fact, for most performance tuning clients I had, we never \nactually looped back and tested the defaults by monitoring pg_temp, memstat \nand the log; performance was acceptable with the approximate values.\n\n> As for turning autovacuum on/off and the stats target, I'd expect useful\n> defaults for those would come out of how the current sample is asking\n> about read vs. write workloads and expected database size. Those simple\n> to understand questions might capture enough of the difference between\n> your two types here.\n\nBoth of the questions you cite above are unlikely to result in accurate \nanswers from users, and the read vs. write answer is actually quite useless \nexcept for the extreme cases (e.g. 
read-only or mostly-write). The deciding \nanswer in turning autovacuum off is whether or not the user does large bulk \nloads / ETL operations, which autovac would interfere with.\n\nThe fact that we can't expect an accurate answer on database size (except from \nthe minority of users who already have a full production DB) will be a \nchronic stumbling block for any conf tool we build. Quite a number of \nsettings want to know this figure: max_fsm_pages, maintenance_work_mem, \nmax_freeze_age, etc. Question is, would order-of-magnitude answers be likely \nto have better results? i.e.:\n\nHow large is your database expected to grow?\n[] Less than 100MB / thousands of rows\n[] 100mb to 1gb / tens to hundreds of thousands of rows\n[] 1 gb to 10 gb / millions of rows\n[] 10 gb to 100 gb / tens to hundreds of millions of rows\n[] 100 gb to 1 TB / billions of rows\n[] more than 1 TB / many billions of rows\n\n... users might have better guesses within those rough ranges, and it would \ngive us enough data to set rough values.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 26 Jun 2007 12:26:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "On Tue, 26 Jun 2007, Josh Berkus wrote:\n\n> I find it extremely inconsistent that you want to select \"middle-of-the-road\"\n> defaults for some values and ask users detailed questions for other values.\n> Which are we trying to do, here?\n\nI'd like to see people have a really simple set of questions to get them \npast the completely undersized initial configuration phase, then ship them \ntoward resources to help educate about the parts that could be problematic \nfor them based on what they do or don't know. I don't see an \ninconsistancy that I'd expect people to have a reasonable guess for \nmax_connections, while also telling them that setting sort_mem is \nimportant, a middle value has been assigned, but a really correct setting \nisn't something they can expect the simple config tool to figure out for \nthem; here's a pointer to the appropriate documentation to learn more.\n\n> The fact that we can't expect an accurate answer on database size \n> (except from the minority of users who already have a full production \n> DB) will be a chronic stumbling block for any conf tool we build.\n\nI'm still of the opinion that recommendations for settings like \nmax_fsm_pages and maintenance_work_mem should come out of a different type \nof tool that connects to the database.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 26 Jun 2007 18:05:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Greg,\n\n> I'd like to see people have a really simple set of questions to get them\n> past the completely undersized initial configuration phase, then ship them\n> toward resources to help educate about the parts that could be problematic\n> for them based on what they do or don't know. 
I don't see an\n> inconsistancy that I'd expect people to have a reasonable guess for\n> max_connections, while also telling them that setting sort_mem is\n> important, a middle value has been assigned, but a really correct setting\n> isn't something they can expect the simple config tool to figure out for\n> them; here's a pointer to the appropriate documentation to learn more.\n\nI disagree that this is acceptable, especially when we could set a better \nvalue using an easy-to-understand question. It's also been my experience (in \n3 years of professional performance tuning) that most users *don't* have an \naccurate guess for max_connections.\n\nI'm really not clear on why you think \"what flavor of application do you \nhave?\" is a difficult question. It's certainly one that my clients were able \nto answer easily. Overall, it seems like you're shooting for a conf tool \nwhich only really works for web apps, which isn't my personal goal or I think \na good use of our time.\n\n> I'm still of the opinion that recommendations for settings like\n> max_fsm_pages and maintenance_work_mem should come out of a different type\n> of tool that connects to the database.\n\nWell, there's several steps to this:\n\n1) Run conf tool when installing PG;\n2) Run conf tool++ after application is first up and running;\n3) Run conf tool++ after application has been in production\n\nThe (1) tool should at least provide a configuration which isn't going to lead \nto long term issues. For example, dramatically underallocating fsm_pages can \nresult in having to run VACUUM FULL and the associated downtime, so it's \nsomething we want to avoid at the outset.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 27 Jun 2007 11:02:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" } ]
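A small aside on the database-size question raised above: when a database already exists, its size does not have to be guessed at, it can simply be measured. A sketch that assumes an 8.1-or-later server ('orders' is just a placeholder table name):

    SELECT pg_size_pretty(pg_database_size(current_database()));  -- whole database
    SELECT pg_size_pretty(pg_relation_size('orders'));            -- one table, without its indexes

The settings being debated in the thread can likewise be read back from a running server, which makes it easier to compare whatever a configuration tool generates against what is actually in effect:

    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('max_connections', 'work_mem',
                    'default_statistics_target', 'autovacuum', 'max_fsm_pages');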
[ { "msg_contents": "Now I am at the difficult part, what parameters to calculate and how to\ncalculate them. Everything below has to do with PostgreSQL version 8.2:\n\n \n\nThe parameters I would think we should calculate are:\n\nmax_connections\n\nshared_buffers\n\nwork_mem\n\nmaintenance_work_mem\n\neffective_cache_size\n\nrandom_page_cost\n\n \n\nAny other variables? I am open to suggestions.\n\n \n\n \n\nCalculations based on values supplied in the questions at the top of the\npage:\n\n \n\nmax_connection= question #3 or a minimum of 8\n\n \n\neffective_cache_size={question #2}MB\n\n \n\nmaintenance_work_mem= ({question #2} * .1) MB\n\n \n\nAny thoughts on the other variables based on the questions found at the\ntop of the below web page?\n\n \n\nhttp://www.webservices.uiuc.edu/postgresql/ \n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNow I am at the difficult part, what parameters to calculate\nand how to calculate them.  Everything below has to do with PostgreSQL\nversion 8.2:\n \nThe parameters I would think we should calculate are:\nmax_connections\nshared_buffers\nwork_mem\nmaintenance_work_mem\neffective_cache_size\nrandom_page_cost\n \nAny other variables?  I am open to suggestions.\n \n \nCalculations based on values supplied in the questions at\nthe top of the page:\n \nmax_connection= question #3 or a minimum of 8\n \neffective_cache_size={question #2}MB\n \nmaintenance_work_mem= ({question #2} * .1) MB\n \nAny thoughts on the other variables based on the questions\nfound at the top of the below web page?\n \nhttp://www.webservices.uiuc.edu/postgresql/\n\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Tue, 19 Jun 2007 15:35:18 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Lance,\n\n> The parameters I would think we should calculate are:\n>\n> max_connections\n>\n> shared_buffers\n>\n> work_mem\n>\n> maintenance_work_mem\n>\n> effective_cache_size\n>\n> random_page_cost\n\nActually, I'm going to argue against messing with random_page_cost. It's a \ncannon being used when a slingshot is called for. Instead (and this was \nthe reason for the \"What kind of CPU?\" question) you want to reduce the \ncpu_* costs. I generally find that if cpu_* are reduced as appropriate to \nmodern faster cpus, and effective_cache_size is set appropriately, a \nrandom_page_cost of 3.5 seems to work for appropriate choice of index \nscans.\n\nIf you check out my spreadsheet version of this:\nhttp://pgfoundry.org/docman/view.php/1000106/84/calcfactors.sxc\n... you'll see that the approach I found most effective was to create \nprofiles for each of the types of db applications, and then adjust the \nnumbers based on those. \n\nOther things to adjust:\nwal_buffers\ncheckpoint_segments\ncommit_delay\nvacuum_delay\nautovacuum\n\nAnyway, do you have a pgfoundry ID? 
I should add you to the project.\n\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 19 Jun 2007 15:46:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance writes:\n\n> max_connections\n\nShouldn't that come straight from the user?\n\n", "msg_date": "Tue, 19 Jun 2007 20:10:58 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "At 4:35p -0400 on 19 Jun 2007, Lance Campbell wrote:\n> The parameters I would think we should calculate are:\n> max_connections\n> shared_buffers\n> work_mem\n> maintenance_work_mem\n> effective_cache_size\n> random_page_cost\n\n From an educational/newb standpoint, I notice that the page \ncurrently spews out a configuration file completely in line with \nwhat's currently there, comments and all. May I suggest highlighting \nwhat has been altered, perhaps above or below the textbox? It would \nmake it immediately obvious, and easier to add an explanation of the \nthought process involved. Something like\n\nWhat's changed from the default:\n\n<li>\n\t<p><strong>max_connections = 5</strong></p>\n\t<p>This follows directly from you put above. It is the maximum \nnumber of concurrent connections Postgres will allow.</p>\n</li>\n<li>\n\t<p><strong>shared_buffers = 10000</strong></p>\n\t<p>This setting will take some time to get exactly right for your \nneeds. Postgres uses this for ...</p>\n</li>\n\nNot something that necessarily needs to be spelled out in the .conf \nfile, but would, IMVHO, help minimally educate.\n\nKevin\n", "msg_date": "Wed, 20 Jun 2007 10:26:58 -0400", "msg_from": "Kevin Hunter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "Campbell, Lance wrote:\n> Now I am at the difficult part, what parameters to calculate and how to\n> calculate them. Everything below has to do with PostgreSQL version 8.2:\n> \n> \n> \n> The parameters I would think we should calculate are:\n> \n> max_connections\n> \n> shared_buffers\n> \n> work_mem\n> \n> maintenance_work_mem\n> \n> effective_cache_size\n> \n> random_page_cost\n> \n> \n> \n> Any other variables? I am open to suggestions.\n\n\nwe also should scale max_fsm_pages according to the database size and\nworkload answers - I also note that the configuration file it generates\nseems to look like on for PostgreSQL 7.x or something - I think we\nshould just include the specific parameters to change.\n\n\nStefan\n", "msg_date": "Thu, 21 Jun 2007 08:10:49 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" }, { "msg_contents": "\n> \"\"Campbell, Lance\"\" <[email protected]> wrote in message \n> news:[email protected]...\n> Now I am at the difficult part, what parameters to calculate and how to\n> calculate them. Everything below has to do with PostgreSQL version 8.2:\n>\n>\n> The parameters I would think we should calculate are:\n> max_connections\n> shared_buffers\n> work_mem\n> maintenance_work_mem\n> effective_cache_size\n> random_page_cost\n>\n> Any other variables? I am open to suggestions.\n\nI know this is mainly about tuning for performance but I do think you ought \nto give the option to change at least 'listen_address'. 
Something like:\n\nAccept connections on: - Local connections (Unix sockets/localhost)\n - All TCP/IP interfaces\n - Specific IP addresses: \n___________ (comma-seperated list)\n\nand maybe a pointer to the pg_hba.conf docs for further info.\n\nRegards,\n\nBen \n\n\n", "msg_date": "Thu, 21 Jun 2007 15:54:53 +0100", "msg_from": "\"Ben Trewern\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Configuration Tool for Dummies" } ]
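To make the parameter list above concrete, here is a purely illustrative sketch of the kind of 8.2 postgresql.conf fragment such a tool might emit for a dedicated server with 4 GB of RAM and around 100 connections. The numbers are generic rules of thumb from that era, not formulas anyone in the thread has agreed on, so treat this only as an example of the output format:

    max_connections = 100
    shared_buffers = 800MB           # roughly 20-25% of RAM on a dedicated box
    work_mem = 8MB                   # per sort/hash operation, so keep it modest with many connections
    maintenance_work_mem = 256MB     # used by VACUUM and CREATE INDEX
    effective_cache_size = 2048MB    # approximate size of the OS disk cache
    checkpoint_segments = 16
    max_fsm_pages = 1000000          # VACUUM VERBOSE reports how many slots are really needed
    listen_addresses = 'localhost'   # note the spelling: listen_addresses, not listen_address

The MB suffixes only work from 8.2 onward; older releases need the values expressed in 8 kB buffers or kilobytes, and a large shared_buffers may also require raising the kernel's shared memory limits (SHMMAX and friends), as noted earlier in the thread.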
[ { "msg_contents": "\n\tI have this \"poll results\" table with just 3 integer fields, which is \nnever updated, only inserted/deleted...\n\tDid the Devs consider an option to have VACUUM reduce the row header \nsizes for tuples that are long commited and are currently visible to all \ntransactions ? (even if this makes the tuples non-updateable, as long as \nthey can be deleted, it would be OK for this type of tables).\n\t\n", "msg_date": "Wed, 20 Jun 2007 09:54:32 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Short row header" }, { "msg_contents": "PFC wrote:\n> \n> I have this \"poll results\" table with just 3 integer fields, which \n> is never updated, only inserted/deleted...\n> Did the Devs consider an option to have VACUUM reduce the row header \n> sizes for tuples that are long commited and are currently visible to all \n> transactions ?\n\nThat has been suggested before, but IIRC it wasn't considered to be \nworth it. It would only save 4 bytes (the xmin field) per tuple, the \nfree space would be scattered around all pages making it less useful, \nand having to deal with two different header formats would make \naccessing the header fields more complex.\n\n> (even if this makes the tuples non-updateable, as long as \n> they can be deleted, it would be OK for this type of tables).\n\nThat would save another 6 bytes per tuple (ctid field), but we generally \nstay away from things that impose limitations like that.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 20 Jun 2007 09:36:05 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Short row header" }, { "msg_contents": "\n\"PFC\" <[email protected]> writes:\n\n> \tI have this \"poll results\" table with just 3 integer fields, which is\n> never updated, only inserted/deleted...\n> \tDid the Devs consider an option to have VACUUM reduce the row header\n> sizes for tuples that are long commited and are currently visible to all\n> transactions ? (even if this makes the tuples non-updateable, as long as they\n> can be deleted, it would be OK for this type of tables).\n\nIt wouldn't actually speed up anything unless the space it frees up was then\nused by something. That would mean loading one of your polls into the small\nbits of space freed up in every page. For most tables like this you want to do\nlarge bulk loads and want your loads stored quickly in contiguous space so it\ncan be accessed quickly, not spread throughout the table.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 20 Jun 2007 10:05:47 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Short row header" } ]
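For anyone curious how much the row header actually costs on a table like the one described: the overhead per row can be estimated from the planner statistics. A small sketch, assuming the default 8 kB block size, a recent ANALYZE, and a made-up table name:

    SELECT relname, relpages, reltuples,
           relpages::float8 * 8192 / nullif(reltuples, 0) AS approx_bytes_per_row
      FROM pg_class
     WHERE relname = 'poll_results';

With three integer columns the user data is only 12 bytes per row, so most of what this reports is tuple header, line pointer and page-level overhead, which is why shaving 4 or 10 bytes off the header keeps coming up, and also why it buys less than one might hope once free-space reuse is taken into account.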
[ { "msg_contents": "Hi\n\nI'd like to know how to get information about which PG entities are in\nkernel cache, if possible.\n\n-- \nRegards,\nSergey Konoplev\n", "msg_date": "Wed, 20 Jun 2007 16:44:02 +0400", "msg_from": "\"Sergey Konoplev\" <[email protected]>", "msg_from_op": true, "msg_subject": "cached entities" }, { "msg_contents": "In response to \"Sergey Konoplev\" <[email protected]>:\n\n> Hi\n> \n> I'd like to know how to get information about which PG entities are in\n> kernel cache, if possible.\n\nThat's going to be specific to the OS you're running.\n\nUnless you're talking about PG's shared_buffers -- if that's the case, have\na look at the pg_buffercache contrib module.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 20 Jun 2007 08:55:46 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cached entities" } ]
[ { "msg_contents": "Hello group\n\nI have a problem with a simple index scan using the primary key of a table\ntaking too long.\n\nRelevant facts:\npg version 7.3.4 (yeah very old, we are upgrading asap)\n\npostgresql.conf:\nshared_buffers = 25000\nrandom_page_cost = 2\neffective_cache_size = 200000\nsort_mem = 20000\n\nTable:\ndb=# \\d tbl_20070601\n Table \"public.tbl_20070601\"\n Column | Type | Modifiers\n------------------+-----------------------+-----------\n validtime | bigint | not null\n latitude | double precision | not null\n longitude | double precision | not null\n.....\nparname | character varying(20) | not null\n....\n(table has a lot of columns but these are the most important ones)\n\nIndexes: tbl_20060601_pkey primary key btree (validtime, latitude,\nlongitude, ..., parname, ...)\n\nValidtime is a timestamp for the row (not my design).\n\nthe query:\ndb=# explain analyze select * from tbl_20070601 where validtime between\n20070602000000 and 20070602235500 and latitude=60.2744 and\nlongitude=26.4417and parname in ('parameter');\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n(cost=0.00..365.13rows=13 width=137) (actual time=\n120.83..10752.64 rows=539 loops=1)\n Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <=\n20070602235500::bigint) AND (latitude = 60.2744::double precision) AND\n(longitude = 26.4417::double precision))\n Filter: (parname = 'temperature'::character varying)\n Total runtime: 10753.85 msec\n(4 rows)\n\ndb=# select count(*) from tbl_20070601;\n count\n---------\n 3715565\n(1 row)\n\nthe query is only returning 539 rows but it takes more than 10 seconds to\nexecute. The table has only inserts and never deletes or updates and it has\nbeen analyzed recently.\n\nIs there anything to tweak with the query and/or postgresql, or should the\nhardware be inspected? Server is 2-CPU 4GB RAM blade-server with a fibre\nconnection to a disk subsystem. Any more information I can give about the\nsystem?\n\n\nRegards\n\nMP\n\nHello groupI have a problem with a simple index scan using the primary key of a table taking too long.Relevant facts: pg version 7.3.4 (yeah very old, we are upgrading asap)postgresql.conf:\nshared_buffers = 25000 \nrandom_page_cost = 2\neffective_cache_size = 200000\nsort_mem = 20000Table:db=# \\d tbl_20070601             Table \"public.tbl_20070601\"      Column      |         Type          | Modifiers------------------+-----------------------+-----------\n validtime        | bigint                | not null latitude         | double precision      | not null longitude        | double precision      | not null..... 
parname          | character varying(20) | not null\n....(table has a lot of columns but these are the most important ones)Indexes: tbl_20060601_pkey primary key btree (validtime, latitude, longitude, ..., parname, ...)Validtime is a timestamp for the row (not my design).\nthe query:db=# explain analyze select * from tbl_20070601 where validtime between 20070602000000 and 20070602235500 and latitude=60.2744 and longitude=26.4417 and parname in ('parameter');                                                                                       QUERY PLAN                                   \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using tbl_20070601_pkey on tbl_20070601 t1  (cost=\n0.00..365.13 rows=13 width=137) (actual time=120.83..10752.64 rows=539 loops=1)   Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <= 20070602235500::bigint) AND (latitude = 60.2744::double precision) AND (longitude = \n26.4417::double precision))   Filter: (parname = 'temperature'::character varying) Total runtime: 10753.85 msec(4 rows)db=# select count(*) from tbl_20070601;   count--------- 3715565\n(1 row)the query is only returning 539 rows but it takes more than 10 seconds to execute. The table has only inserts and never deletes or updates and it has been analyzed recently.Is there anything to tweak with the query and/or postgresql, or should the hardware be inspected? Server is 2-CPU 4GB RAM blade-server with a fibre connection to a disk subsystem. Any more information I can give about the system?\nRegardsMP", "msg_date": "Wed, 20 Jun 2007 17:02:25 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow indexscan" }, { "msg_contents": "\"Mikko Partio\" <[email protected]> writes:\n\n> Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n> (cost=0.00..365.13rows=13 width=137) (actual time=\n> 120.83..10752.64 rows=539 loops=1)\n> Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <=\n> 20070602235500::bigint) AND (latitude = 60.2744::double precision) AND\n> (longitude = 26.4417::double precision))\n> Filter: (parname = 'temperature'::character varying)\n\nYou do realize that's going to scan the entire index range from\n20070602000000 to 20070602235500?\n\nIf this is a typical query you'd be better off putting the lat/long\ncolumns first in the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 11:09:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan " }, { "msg_contents": "On 6/20/07, Tom Lane <[email protected]> wrote:\n>\n> \"Mikko Partio\" <[email protected]> writes:\n>\n> > Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n> > (cost=0.00..365.13rows=13 width=137) (actual time=\n> > 120.83..10752.64 rows=539 loops=1)\n> > Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <=\n> > 20070602235500::bigint) AND (latitude = 60.2744::double precision) AND\n> > (longitude = 26.4417::double precision))\n> > Filter: (parname = 'temperature'::character varying)\n>\n> You do realize that's going to scan the entire index range from\n> 20070602000000 to 20070602235500?\n>\n> If this is a typical query you'd be better off putting the lat/long\n> columns first in the index.\n>\n> regards, tom lane\n\n\n\nThanks for the reply.\n\nAdding a new index does not speed up the query (although the 
planner decides\nto use the index):\n\ndb=# create index tbl_20070601_latlonvalidpar_index on tbl_20070601\n(latitude,longitude,validtime,parname);\nCREATE INDEX\n\ndb=# explain analyze select * from tbl_20070601 where validtime between\n20070602000000 and 20070602235500 and latitude=60.2744 and\nlongitude=26.4417and parname in ('temperature');\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tbl_20070601_latlonvalidpar_index on tbl_20070601 t1\n(cost=0.00..29.18 rows=13 width=137) (actual time=3471.94..31542.90 rows=539\nloops=1)\n Index Cond: ((latitude = 60.2744::double precision) AND (longitude =\n26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND\n(validtime <= 20070602235500::bigint) AND (parname =\n'temperature'::character varying))\n Total runtime: 31544.48 msec\n(3 rows)\n\n\nThis is a very typical query and therefore it should be made as fast as\npossible. There are several tables like this rowcount ranging from 3 million\nto 13 million. I have some possibilities to modify the queries as well as\nthe tables, but the actual table structure is hard coded.\n\nAny other suggestions?\n\nRegards\n\nMP\n\nOn 6/20/07, Tom Lane <[email protected]> wrote:\n\"Mikko Partio\" <[email protected]> writes:>  Index Scan using tbl_20070601_pkey on tbl_20070601 t1> (cost=0.00..365.13rows=13 width=137) (actual time=\n> 120.83..10752.64 rows=539 loops=1)>    Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <=> 20070602235500::bigint) AND (latitude = 60.2744::double precision) AND> (longitude = \n26.4417::double precision))>    Filter: (parname = 'temperature'::character varying)You do realize that's going to scan the entire index range from20070602000000 to 20070602235500?If this is a typical query you'd be better off putting the lat/long\ncolumns first in the index.                        regards, tom laneThanks for the reply.Adding a new index does not speed up the query (although the planner decides to use the index):\ndb=# create index tbl_20070601_latlonvalidpar_index on tbl_20070601 (latitude,longitude,validtime,parname);CREATE INDEXdb=# explain analyze select * from tbl_20070601 where validtime between 20070602000000 and 20070602235500 and latitude=\n60.2744 and longitude=26.4417 and parname in ('temperature');                                                                                                               QUERY PLAN           -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tbl_20070601_latlonvalidpar_index on tbl_20070601 t1  (cost=0.00..29.18 rows=13 width=137) (actual time=3471.94..31542.90 rows=539 loops=1)   Index Cond: ((latitude = 60.2744::double precision) AND (longitude = \n26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND (validtime <= 20070602235500::bigint) AND (parname = 'temperature'::character varying)) Total runtime: 31544.48 msec(3 rows)\nThis is a very typical query and therefore it should be made as fast as possible. There are several tables like this rowcount ranging from 3 million to 13 million. 
I have some possibilities to modify the queries as well as the tables, but the actual table structure is hard coded. \nAny other suggestions?RegardsMP", "msg_date": "Wed, 20 Jun 2007 18:55:56 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow indexscan" }, { "msg_contents": "\"Mikko Partio\" <[email protected]> writes:\n> Adding a new index does not speed up the query (although the planner decides\n> to use the index):\n\nHm. Lots of dead rows maybe? What's your vacuuming policy?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 12:01:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan " }, { "msg_contents": "Mikko Partio wrote:\n> \n> \n> On 6/20/07, *Tom Lane* <[email protected] <mailto:[email protected]>> wrote:\n> \n> \"Mikko Partio\" <[email protected] <mailto:[email protected]>> writes:\n> \n> > Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n> > (cost=0.00..365.13rows=13 width=137) (actual time=\n> > 120.83..10752.64 rows=539 loops=1)\n> > Index Cond: ((validtime >= 20070602000000::bigint) AND\n> (validtime <=\n> > 20070602235500::bigint) AND (latitude = 60.2744::double\n> precision) AND\n> > (longitude = 26.4417::double precision))\n> > Filter: (parname = 'temperature'::character varying)\n> \n> You do realize that's going to scan the entire index range from\n> 20070602000000 to 20070602235500?\n> \n> If this is a typical query you'd be better off putting the lat/long\n> columns first in the index.\n> \n> regards, tom lane\n> \n> \n> \n> Thanks for the reply.\n> \n> Adding a new index does not speed up the query (although the planner \n> decides to use the index):\n> \n> db=# create index tbl_20070601_latlonvalidpar_index on tbl_20070601 \n> (latitude,longitude,validtime,parname);\n> CREATE INDEX\n> \n> db=# explain analyze select * from tbl_20070601 where validtime between \n> 20070602000000 and 20070602235500 and latitude= 60.2744 and \n> longitude=26.4417 and parname in ('temperature');\n> \n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Index Scan using tbl_20070601_latlonvalidpar_index on tbl_20070601 t1 \n> (cost=0.00..29.18 rows=13 width=137) (actual time=3471.94..31542.90 \n> rows=539 loops=1)\n> Index Cond: ((latitude = 60.2744::double precision) AND (longitude = \n> 26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND \n> (validtime <= 20070602235500::bigint) AND (parname = \n> 'temperature'::character varying))\n> Total runtime: 31544.48 msec\n> (3 rows)\n> \n> \n> This is a very typical query and therefore it should be made as fast as \n> possible. There are several tables like this rowcount ranging from 3 \n> million to 13 million. I have some possibilities to modify the queries \n> as well as the tables, but the actual table structure is hard coded.\n> \n> Any other suggestions?\n\nTry increasing your default_statistics_target and rerunning explain \nanalyze. Secondly try increasing your work_mem.\n\nJoshua D. Drake\n\n\n> \n> Regards\n> \n> MP\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n             http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 20 Jun 2007 09:01:55 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan" }, { "msg_contents": "On 6/20/07, Tom Lane <[email protected]> wrote:\n>\n> \"Mikko Partio\" <[email protected]> writes:\n> > Adding a new index does not speed up the query (although the planner\n> decides\n> > to use the index):\n>\n> Hm.  Lots of dead rows maybe?  What's your vacuuming policy?\n>\n> regards, tom lane\n\n\n\n\nThe table only gets inserts and selects, never updates or deletes so I guess\nvacuuming isn't necessary. Anyways:\n\n\ndb=# SET default_statistics_target TO 1000;\nSET\ndb=# vacuum analyze verbose tbl_20070601;\nINFO:  --Relation public.tbl_20070601--\nINFO:  Index tbl_20070601_pkey: Pages 95012; Tuples 3715565: Deleted 0.\n        CPU 8.63s/1.82u sec elapsed 367.57 sec.\nINFO:  Index tbl_20070601_latlonvalidpar_index: Pages 27385; Tuples 3715565:\nDeleted 0.\n        CPU 1.55s/1.22u sec elapsed 23.27 sec.\nINFO:  Removed 2865 tuples in 2803 pages.\n        CPU 0.30s/0.20u sec elapsed 37.91 sec.\nINFO:  Pages 83950: Changed 0, Empty 0; Tup 3715565: Vac 2865, Keep 0,\nUnUsed 0.\n        Total CPU 12.32s/3.69u sec elapsed 449.98 sec.\nINFO:  Analyzing public.tbl_20070601\nVACUUM\ndb=# set sort_mem to 50000;\nSET\ndb=# explain analyze * from tbl_20070601 where validtime between\n20070602000000 and 20070602235500 and latitude=60.2744 and\nlongitude=26.4417and parname in ('temperature');\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tbl_20070601_latlonvalidpar on tbl_20070601 t1  (cost=\n0.00..28.46 rows=13 width=137) (actual time=37.81..1415.06 rows=539 loops=1)\n   Index Cond: ((latitude = 60.2744::double precision) AND (longitude =\n26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND\n(validtime <= 20070602235500::bigint) AND (parname =\n'temperature'::character varying))\n Total runtime: 1416.53 msec\n(3 rows)\n\n\nI guess the sort_mem helped, or then part of the rows are in the cache\nalready. Should increasing sort_mem help here since there are no sorts etc?\n\nRegards\n\nMP\n
", "msg_date": "Wed, 20 Jun 2007 19:43:33 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow indexscan" }, { "msg_contents": "\nOn Jun 20, 2007, at 9:02 , Mikko Partio wrote:\n\n> Relevant facts:\n> pg version 7.3.4 (yeah very old, we are upgrading asap)\n\nThere have been many performance improvements -- not to mention security \nand data-eating bug fixes -- since then. Upgrading should be one of your \nhighest priorities. And it may even fix the issue at hand!\n\n> Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n> (cost=0.00..365.13rows=13 width=137) (actual time=\n> 120.83..10752.64 rows=539 loops=1)\n\nSomething appears a bit off with your index, or at least the \nstatistics Postgres is using to estimate it. It's estimating that the \nquery will return 13 rows, but you're actually returning 539. Maybe \nthere's some corruption in the index which is leading to both the \nperformance issue you're seeing and the statistics issues. Have you \ntried REINDEX?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Wed, 20 Jun 2007 11:53:44 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan" }, { "msg_contents": "Mikko,\n\nI don't follow this thread, just see familiar spherical coordinates we\nwork with a lot. If you need fast radial query you can use our\nq3c package available from q3c.sf.net. 
See some details \nhttp://www.sai.msu.su/~megera/wiki/SkyPixelization\n\nOleg\n\nOn Wed, 20 Jun 2007, Tom Lane wrote:\n\n> \"Mikko Partio\" <[email protected]> writes:\n>\n>> Index Scan using tbl_20070601_pkey on tbl_20070601 t1\n>> (cost=0.00..365.13rows=13 width=137) (actual time=\n>> 120.83..10752.64 rows=539 loops=1)\n>> Index Cond: ((validtime >= 20070602000000::bigint) AND (validtime <=\n>> 20070602235500::bigint) AND (latitude = 60.2744::double precision) AND\n>> (longitude = 26.4417::double precision))\n>> Filter: (parname = 'temperature'::character varying)\n>\n> You do realize that's going to scan the entire index range from\n> 20070602000000 to 20070602235500?\n>\n> If this is a typical query you'd be better off putting the lat/long\n> columns first in the index.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 20 Jun 2007 21:54:03 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan " }, { "msg_contents": "\"Mikko Partio\" <[email protected]> writes:\n> I guess the sort_mem helped, or then part of the rows are in the cache\n> already. Should increasing sort_mem help here since there are no sorts etc?\n\nNo, it wouldn't --- this change has to be due to the data being already\nloaded into cache.\n\nThere's no obvious reason for the previous query to be so slow, unless\nyou've got horrendously slow or overloaded disk hardware. What sort of\nmachine is this anyway, and was it doing any other work at the time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Jun 2007 15:29:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan " }, { "msg_contents": "On 6/20/07, Tom Lane <[email protected]> wrote:\n>\n>\n> There's no obvious reason for the previous query to be so slow, unless\n> you've got horrendously slow or overloaded disk hardware. 
What sort of\n> machine is this anyway, and was it doing any other work at the time?\n\n\n\nGranted it is doing other work besides database-stuff, mainly CPU-intensive\ncalculations.\n\nThe creation of the (latitude,longitude,validtime,parname) index and moving\nthe database files from a RAID-5 to RAID-10 has decreased the query time to\n~4 seconds:\n\ndb=# explain analyze select * from tbl_20070601 where validtime between\n20070602000000 and 20070602235500 and latitude=60.2744 and\nlongitude=26.4417and parname in ('temperature');\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using tbl_20070601_latlonvalidparname_index on tbl_20070601\n(cost=0.00..28.46 rows=13 width=137) (actual time=94.52..3743.53 rows=539\nloops=1)\n   Index Cond: ((latitude = 60.2744::double precision) AND (longitude =\n26.4417::double precision) AND (validtime >= 20070602000000::bigint) AND\n(validtime <= 20070602235500::bigint) AND (parname =\n'temperature'::character varying))\n Total runtime: 3744.56 msec\n(3 rows)\n\nThis is already a great improvement compared to the previous 8 seconds. Our\napp developers claim though that previously the same queries have run in\nless than 1 second. The database had a mysterious crash a few months ago\n(some tables lost their content) and the performance has been bad ever\nsince. I don't know the details of this crash since I just inherited the\nsystem recently and unfortunately no logfiles are left. Could the crash\nsomehow corrupt catalog files so that the querying gets slower? I know this\nis a long shot but I don't know what else to think of.\n\nAnyways thanks a lot for your help.\n\nRegards\n\nMP\n
", "msg_date": "Mon, 25 Jun 2007 10:50:48 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow indexscan" }, { "msg_contents": "\"Mikko Partio\" <[email protected]> writes:\n> This is already a great improvement compared to the previous 8 seconds. Our\n> app developers claim though that previously the same queries have run in\n> less than 1 second. The database had a mysterious crash a few months ago\n> (some tables lost their content) and the performance has been bad ever\n> since. I don't know the details of this crash since I just inherited the\n> system recently and unfortunately no logfiles are left. Could the crash\n> somehow corrupt catalog files so that the querying gets slower? I know this\n> is a long shot but I don't know what else to think of.\n\nI'd wonder more about what was done to recover from the crash. For\ninstance, if they had to reload the tables, then it seems possible that\nthis table was previously nicely clustered on the relevant index and\nis now quite disordered. You might check to see if any pg_index entries\nhave pg_index.indisclustered set, and if so try a CLUSTER command to\nre-order the table(s).\n\nAnother thing to try, which is at least slightly more useful than waving\na dead chicken at the DB, is to REINDEX everything in sight, including\nthe system catalogs. This'd fix any remaining index corruption, and\nprobably expose heap corruption if there is any.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 10:24:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow indexscan " } ]
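Pulling the advice in this thread together: for a query that tests latitude, longitude and parname for equality and validtime with BETWEEN, a btree wants the equality columns ahead of the range column, and the post-crash suggestions reduce to two maintenance commands. A sketch only; the index name is invented, and whether parname earns a place in the key at all depends on how selective it is:

    -- equality-tested columns first, the range-tested column (validtime) last
    CREATE INDEX tbl_20070601_lat_lon_par_valid_idx
        ON tbl_20070601 (latitude, longitude, parname, validtime);

    -- physically re-order the heap along the index the range scans use
    CLUSTER tbl_20070601_lat_lon_par_valid_idx ON tbl_20070601;

    -- rebuild indexes to rule out corruption or bloat left over from the crash
    REINDEX TABLE tbl_20070601;

CLUSTER rewrites the whole table under an exclusive lock, so on tables of this size it is something to schedule in a maintenance window rather than run casually.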
[ { "msg_contents": "Hello all,\n\none of my customers installed Postgres on a public server to access the data\nfrom several places. The problem is that it takes _ages_ to transfer data from\nthe database to the client app. At first I suspected a problem with the ODBC\ndriver and my application, but using pgAdminIII 1.6.3.6112 (on Windows XP)\ngives the same result.\n\nIn table \"tblItem\" there are exactly 50 records stored. The table has 58\ncolumns: 5 character varying and the rest integer.\n\nAs far as I can tell the Postgres installation is o.k.\n\nSELECT VERSION()\n\"PostgreSQL 8.2.4 on i386-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6\n[FreeBSD] 20060305\"\n\nEXPLAIN ANALYZE SELECT * FROM \"tblItem\"\n\"Seq Scan on \"tblItem\" (cost=0.00..2.50 rows=50 width=423) (actual\ntime=0.011..0.048 rows=50 loops=1)\"\n\"Total runtime: 0.150 ms\"\n\nThe database computer is connected via a 2MBit SDL connection. I myself have a\n768/128 KBit ADSL connection and pinging the server takes 150ms on average.\n\nIn the pgAdminIII Query Tool the following command takes 15-16 seconds:\nSELECT * FROM \"tblItem\"\n\nDuring the first 2 seconds the D/L speed is 10-15KB/s. The remaining time the\nU/L and D/L speed is constant at 1KB/s.\n\nMy customer reported that the same query takes 2-3 seconds for him (with 6MBit\nADSL and 50ms ping).\n\nSo my questions are:\n* Could there be anything wrong with the server configuration?\n* Is the ping difference between the customers and my machine responsible for\nthe difference in the query execution time?\n* Is this normal behaviour or could this be improved somehow?\n\nThanks in advance for any help.\n\nRainer\n\nPS: I tried selecting only selected columns from the table and the speed is\nproportional to the no. of rows which must be returned. For example selecting\nall 5 character columns takes 2 seconds. Selecting 26 integer columns takes\n7-8 seconds and selecting all integer columns takes 14 seconds.\n", "msg_date": "Thu, 21 Jun 2007 16:22:57 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Rainer,\n\nThe database computer is connected via a 2MBit SDL connection. I myself have\n> a\n> 768/128 KBit ADSL connection and pinging the server takes 150ms on\n> average.\n>\n\nI do not have a solution, but I can confirm the problem :)\n\nOne PostgreSQL-Installation: Server 8.1 and 8.2 on Windows in the central;\nvarious others connected via VPN. Queries are subsecond when run locally\n(including data transfer), and up to 10 seconds and more via VPN, even in\n\"off-hours\"\n\nThe data-transfer is done via PG-Admin or via psycopg2 Python-Database\nadapter; nothing with ODBC or similiar in between.\n\nI did not find a solution so far; and for bulk data transfers I now\nprogrammed a workaround.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July\nto Wednesday 11th July. See you there!\n\nHello Rainer,The database computer is connected via a 2MBit SDL connection. I myself have a\n768/128 KBit ADSL connection and pinging the server takes 150ms on average.I do not have a solution, but I can confirm the problem :)One PostgreSQL-Installation: Server 8.1 and 8.2 on Windows in the central; various others connected via VPN. 
Queries are subsecond when run locally (including data transfer), and up to 10 seconds and more via VPN, even in \"off-hours\"\nThe data-transfer is done via PG-Admin or via psycopg2 Python-Database adapter; nothing with ODBC or similiar in between.I did not find a solution so far; and for bulk data transfers I now programmed a workaround.\nHarald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!", "msg_date": "Thu, 21 Jun 2007 17:01:39 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer Bauer <[email protected]> writes:\n> one of my customers installed Postgres on a public server to access the data\n> from several places. The problem is that it takes _ages_ to transfer data from\n> the database to the client app. At first I suspected a problem with the ODBC\n> driver and my application, but using pgAdminIII 1.6.3.6112 (on Windows XP)\n> gives the same result.\n\nI seem to recall that we've seen similar reports before, always\ninvolving Windows :-(. Check whether you have any nonstandard\ncomponents hooking into the network stack on that machine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2007 11:32:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL " }, { "msg_contents": "Hello Harald,\n\n>I do not have a solution, but I can confirm the problem :)\n\nAt least that rules out any misconfiguration issues :-(\n\n>I did not find a solution so far; and for bulk data transfers I now\n>programmed a workaround.\n\nBut that is surely based on some component installed on the server, isn't it?\n\nTo be honest I didn't expect top performance, but the speed I got suggested\nsome error on my part.\n\nRainer\n", "msg_date": "Thu, 21 Jun 2007 19:51:13 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Tom,\n\n>I seem to recall that we've seen similar reports before, always\n>involving Windows :-(. Check whether you have any nonstandard\n>components hooking into the network stack on that machine.\n\nI just repeated the test by booting into \"Safe Mode with Network Support\", but\nthe results are the same. 
So I don't think that's the cause.\n\nApart from that, what response times could I expect?\n\nRainer\n", "msg_date": "Thu, 21 Jun 2007 19:51:23 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "I wrote:\n\n>Hello Harald,\n>\n>>I do not have a solution, but I can confirm the problem :)\n>\n>At least that rules out any misconfiguration issues :-(\n\nI did a quick test with my application and enabled the ODBC logging.\n\nFetching the 50 rows takes 12 seconds (without logging 8 seconds) and\nexamining the log I found what I suspected: the performance is directly\nrelated to the ping time to the server since fetching one tuple requires a\nround trip to the server.\n\nRainer\n\nPS: I wonder why pgAdminIII requires twice the time to retrieve the data.\n", "msg_date": "Thu, 21 Jun 2007 23:26:43 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hi Rainer,\n\nbut did you try to execute your query directly from 'psql' ?...\n\nWhy I'm asking: seems to me your case is probably just network latency\ndependent, and what I noticed during last benchmarks with PostgreSQL\nthe SELECT query become very traffic hungry if you are using CURSOR.\nProgram 'psql' is implemented to not use CURSOR by default, so it'll\nbe easy to check if you're meeting this issue or not just by executing\nyour query remotely from 'psql'...\n\nRgds,\n-Dimitri\n\n\n\nOn 6/21/07, Rainer Bauer <[email protected]> wrote:\n> Hello Tom,\n>\n> >I seem to recall that we've seen similar reports before, always\n> >involving Windows :-(. Check whether you have any nonstandard\n> >components hooking into the network stack on that machine.\n>\n> I just repeated the test by booting into \"Safe Mode with Network Support\",\n> but\n> the results are the same. So I don't think that's the cause.\n>\n> Apart from that, what response times could I expect?\n>\n> Rainer\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Thu, 21 Jun 2007 23:32:01 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Dimitri,\n\n>but did you try to execute your query directly from 'psql' ?...\n\nmunnin=>\\timing\nmunnin=>select * from \"tblItem\";\n<data snipped>\n(50 rows)\nTime: 391,000 ms\n\n>Why I'm asking: seems to me your case is probably just network latency\n>dependent, and what I noticed during last benchmarks with PostgreSQL\n>the SELECT query become very traffic hungry if you are using CURSOR.\n>Program 'psql' is implemented to not use CURSOR by default, so it'll\n>be easy to check if you're meeting this issue or not just by executing\n>your query remotely from 'psql'...\n\nYes, see also my other post.\n\nUnfortunatelly this means that using my program to connect via DSL to the\nPostgres database is not possible.\n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 00:09:17 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Let's stay optimist - at least now you know the main source of your problem! 
:))\n\nLet's see now with CURSOR...\n\nFirstly try this:\nmunnin=>\\timing\nmunnin=>\\set FETCH_COUNT 1;\nmunnin=>select * from \"tblItem\";\n\nwhat's the time you see here? (I think your application is working in\nthis manner)\n\nNow, change the FETCH_COUNT to 10, then 50, then 100 - your query\nexecution time should be better (at least I hope so :))\n\nAnd if it's better - you simply need to modify your FETCH clause with\nadapted \"FORWARD #\" value (the best example is psql source code\nitself, you may find ExecQueryUsingCursor function implementation\n(file common.c))...\n\nRgds,\n-Dimitri\n\nOn 6/22/07, Rainer Bauer <[email protected]> wrote:\n> Hello Dimitri,\n>\n> >but did you try to execute your query directly from 'psql' ?...\n>\n> munnin=>\\timing\n> munnin=>select * from \"tblItem\";\n> <data snipped>\n> (50 rows)\n> Time: 391,000 ms\n>\n> >Why I'm asking: seems to me your case is probably just network latency\n> >dependent, and what I noticed during last benchmarks with PostgreSQL\n> >the SELECT query become very traffic hungry if you are using CURSOR.\n> >Program 'psql' is implemented to not use CURSOR by default, so it'll\n> >be easy to check if you're meeting this issue or not just by executing\n> >your query remotely from 'psql'...\n>\n> Yes, see also my other post.\n>\n> Unfortunatelly this means that using my program to connect via DSL to the\n> Postgres database is not possible.\n>\n> Rainer\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Fri, 22 Jun 2007 00:38:40 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer Bauer wrote:\n> Hello Dimitri,\n>\n> \n>> but did you try to execute your query directly from 'psql' ?...\n>> \n>\n> munnin=>\\timing\n> munnin=>select * from \"tblItem\";\n> <data snipped>\n> (50 rows)\n> Time: 391,000 ms\n>\n> \n>> Why I'm asking: seems to me your case is probably just network latency\n>> dependent, and what I noticed during last benchmarks with PostgreSQL\n>> the SELECT query become very traffic hungry if you are using CURSOR.\n>> Program 'psql' is implemented to not use CURSOR by default, so it'll\n>> be easy to check if you're meeting this issue or not just by executing\n>> your query remotely from 'psql'...\n>> \n>\n> Yes, see also my other post.\n>\n> Unfortunatelly this means that using my program to connect via DSL to the\n> Postgres database is not possible.\n\nNote that I'm connected via wireless lan here at work (our wireless lan \ndoesn't connecto to our internal lan directly due to PCI issues) then to \nour internal network via VPN.\n\nWe are using Cisco with Cisco's vpn client software. I am running \nFedora core 4 on my laptop and I can fetch 10,000 rather chubby rows (a \nhundred or more bytes) in about 7 seconds.\n\nSo, postgresql over vpn works fine here. Note, no windows machines were \ninvolved in the making of this email. 
One is doing the job of tossing \nit on the internet when I hit send though.\n", "msg_date": "Thu, 21 Jun 2007 18:23:06 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer Bauer <[email protected]> writes:\n> Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and\n> examining the log I found what I suspected: the performance is directly\n> related to the ping time to the server since fetching one tuple requires a\n> round trip to the server.\n\nHm, but surely you can get it to fetch more than one row at once?\n\nThis previous post says that someone else solved an ODBC\nperformance problem with UseDeclareFetch=1:\nhttp://archives.postgresql.org/pgsql-odbc/2006-08/msg00014.php\n\nIt's not immediately clear why pgAdmin would have the same issue,\nthough, because AFAIK it doesn't rely on ODBC.\n\nI just finished looking through our archives for info about\nWindows-specific network performance problems. There are quite a few\nthreads, but the ones that were solved seem not to bear on your problem\n(unless the one above does). I found one pretty interesting thread\nsuggesting that the problem was buffer-size dependent:\nhttp://archives.postgresql.org/pgsql-performance/2006-12/msg00269.php\nbut that tailed off with no clear resolution. I think we're going to\nhave to get someone to watch the problem with a packet sniffer before\nwe can get much further.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2007 21:51:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL " }, { "msg_contents": "Tom,\n\nseems to me the problem here is rather simple: current issue depends\ncompletely on the low level 'implementation' of SELECT query in the\napplication. In case it's implemented with using of \"DECLARE ...\nCURSOR ...\" and then \"FETCH NEXT\" by default (most common case) it\nbrings application into \"ping-pong condition\" with database server:\neach next FETCH is possible only if the previous one is finished and\nserver received feedback from client with explicit fetch next order.\nIn this condition query response time became completely network\nlatency dependent:\n - each packet send/receive has a significant cost\n - you cannot reduce this cost as you cannot group more data within\na single packet and you waste your traffic\n - that's why TCP_NODELAY become so important here\n - with 150ms network latency the cost is ~300ms per FETCH (15sec(!)\nfor 50 lines)\n\nYou may think if you're working in LAN and your network latency is\n0.1ms you're not concerned by this issue - but in reality yes, you're\nimpacted! 
Each network card/driver has it's own max packet/sec\ntraffic capability (independent to volume) and once you hit it - your\nresponse time may only degrade with more concurrent sessions (even if\nyour CPU usage is still low)...\n\nThe solution here is simple:\n - don't use CURSOR in simple cases when you just reading/printing a\nSELECT results\n - in case it's too late to adapt your code or you absolutely need\nCURSOR for some reasons: replace default \"FETCH\" or \"FETCH NEXT\" by\n\"FETCH 100\" (100 rows generally will be enough) normally it'll work\njust straight forward (otherwise check you're verifying PQntuples()\nvalue correctly and looping to read all tuples)\n\nTo keep default network workload more optimal, I think we need to\nbring \"FETCH N\" more popular for developers and enable it (even\nhidden) by default in any ODBC/JDBC and other generic modules...\n\nRgds,\n-Dimitri\n\nOn 6/22/07, Tom Lane <[email protected]> wrote:\n> Rainer Bauer <[email protected]> writes:\n> > Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and\n> > examining the log I found what I suspected: the performance is directly\n> > related to the ping time to the server since fetching one tuple requires a\n> > round trip to the server.\n>\n> Hm, but surely you can get it to fetch more than one row at once?\n>\n> This previous post says that someone else solved an ODBC\n> performance problem with UseDeclareFetch=1:\n> http://archives.postgresql.org/pgsql-odbc/2006-08/msg00014.php\n>\n> It's not immediately clear why pgAdmin would have the same issue,\n> though, because AFAIK it doesn't rely on ODBC.\n>\n> I just finished looking through our archives for info about\n> Windows-specific network performance problems. There are quite a few\n> threads, but the ones that were solved seem not to bear on your problem\n> (unless the one above does). I found one pretty interesting thread\n> suggesting that the problem was buffer-size dependent:\n> http://archives.postgresql.org/pgsql-performance/2006-12/msg00269.php\n> but that tailed off with no clear resolution. I think we're going to\n> have to get someone to watch the problem with a packet sniffer before\n> we can get much further.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Fri, 22 Jun 2007 10:06:47 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Tom,\n\n>This previous post says that someone else solved an ODBC\n>performance problem with UseDeclareFetch=1:\n\nI thought about that too, but enabling UseDeclareFetch will slow down the\nquery: it takes 30 seconds instead of 8.\n\n>It's not immediately clear why pgAdmin would have the same issue,\n>though, because AFAIK it doesn't rely on ODBC.\n\nNo it doesn't. That's the reason I used it to verify the behaviour.\n\nBut I remember Dave Page mentioning using a virtual list control to display\nthe results and that means a round trip for every tuple.\n\n>I just finished looking through our archives for info about\n>Windows-specific network performance problems.\n\nI don't think it's a Windows-specific problem, because psql is doing the job\nblindingly fast. The problem lies in the way my application is coded. 
See the\nresponse to Dimitri for details.\n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 11:15:12 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Dimitri,\n\n>Let's stay optimist - at least now you know the main source of your problem! :))\n>\n>Let's see now with CURSOR...\n>\n>Firstly try this:\n>munnin=>\\timing\n>munnin=>\\set FETCH_COUNT 1;\n>munnin=>select * from \"tblItem\";\n>\n>what's the time you see here? (I think your application is working in\n>this manner)\n\nThat's it! It takes exactly 8 seconds like my program.\n\nI retrieve the data through a bound column:\nSELECT * FROM tblItem WHERE intItemIDCnt = ?\n\nAfter converting this to\nSELECT * FROM tblItem WHERE intItemIDCnt IN (...)\nthe query is as fast as psql: 409ms\n\nSo the problem is identified and the solution is to recode my application.\n\nRainer\n\nPS: When enabling UseDeclareFetch as suggested by Tom then the runtime is\nstill three times slower: 1192ms. But I guess that problem is for the ODBC\nlist.\n", "msg_date": "Fri, 22 Jun 2007 11:15:14 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer, but did you try initial query with FETCH_COUNT equal to 100?...\n\nRgds,\n-Dimitri\n\nOn 6/22/07, Rainer Bauer <[email protected]> wrote:\n> Hello Dimitri,\n>\n> >Let's stay optimist - at least now you know the main source of your\n> problem! :))\n> >\n> >Let's see now with CURSOR...\n> >\n> >Firstly try this:\n> >munnin=>\\timing\n> >munnin=>\\set FETCH_COUNT 1;\n> >munnin=>select * from \"tblItem\";\n> >\n> >what's the time you see here? (I think your application is working in\n> >this manner)\n>\n> That's it! It takes exactly 8 seconds like my program.\n>\n> I retrieve the data through a bound column:\n> SELECT * FROM tblItem WHERE intItemIDCnt = ?\n>\n> After converting this to\n> SELECT * FROM tblItem WHERE intItemIDCnt IN (...)\n> the query is as fast as psql: 409ms\n>\n> So the problem is identified and the solution is to recode my application.\n>\n> Rainer\n>\n> PS: When enabling UseDeclareFetch as suggested by Tom then the runtime is\n> still three times slower: 1192ms. 
But I guess that problem is for the ODBC\n> list.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n", "msg_date": "Fri, 22 Jun 2007 11:37:40 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Dimitri,\n\n>Rainer, but did you try initial query with FETCH_COUNT equal to 100?...\n\nYes I tried it with different values and it's like you suspected:\n\nFETCH_COUNT 1 Time: 8642,000 ms\nFETCH_COUNT 5 Time: 2360,000 ms\nFETCH_COUNT 10 Time: 1563,000 ms\nFETCH_COUNT 25 Time: 1329,000 ms\nFETCH_COUNT 50 Time: 1140,000 ms\nFETCH_COUNT 100 Time: 969,000 ms\n\n\\unset FETCH_COUNT Time: 390,000 ms\n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 12:02:15 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer Bauer wrote:\n>> It's not immediately clear why pgAdmin would have the same issue,\n>> though, because AFAIK it doesn't rely on ODBC.\n> \n> No it doesn't. That's the reason I used it to verify the behaviour.\n> \n> But I remember Dave Page mentioning using a virtual list control to display\n> the results and that means a round trip for every tuple.\n\npgAdmin's Query Tool (which I assume you're using), uses an async query \nvia libpq to populate a virtual table behind the grid. The query \nhandling can be seen in pgQueryThread::execute() at \nhttp://svn.pgadmin.org/cgi-bin/viewcvs.cgi/trunk/pgadmin3/pgadmin/db/pgQueryThread.cpp?rev=6082&view=markup\n\nWhen the query completes, a dataset object (basically a wrapper around a \nPGresult) is attached to the grid control. As the grid renders each \ncell, it requests the value to display which results in a call to \nPQgetValue. This is how the old display time was eliminated - cells are \nonly rendered when they become visible for the first time, meaning that \nthe query executes in pgAdmin in the time it takes for the async query \nto complete plus (visible rows * visible columns)PQgetValue calls.\n\n> I don't think it's a Windows-specific problem, because psql is doing the job\n> blindingly fast. The problem lies in the way my application is coded. See the\n> response to Dimitri for details.\n\nI don't see why pgAdmin should be slow though - it should be only \nmarginally slower than psql I would think (assuming there are no thinkos \nin our code that none of use ever noticed).\n\nRegards, Dave.\n", "msg_date": "Fri, 22 Jun 2007 11:12:48 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Dave Page wrote:\n> I don't see why pgAdmin should be slow though - it should be only \n> marginally slower than psql I would think (assuming there are no thinkos \n> in our code that none of use ever noticed).\n\nNevermind...\n\n/D\n", "msg_date": "Fri, 22 Jun 2007 11:16:46 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer,\n\n>I did not find a solution so far; and for bulk data transfers I now\n> >programmed a workaround.\n>\n> But that is surely based on some component installed on the server, isn't\n> it?\n>\n\nCorrect. I use a pyro-remote server. 
On request this remote server copies\nthe relevant rows into a temporary table, uses a copy_to Call to push them\ninto a StringIO-Objekt (that's Pythons version of \"In Memory File\"),\nserializes that StringIO-Objekt, does a bz2-compression and transfers the\nwhole block via VPN.\n\nI read on in this thread, and I scheduled to check on psycopg2 and what it\nis doing with cursors.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July\nto Wednesday 11th July. See you there!\n\nRainer,>I did not find a solution so far; and for bulk data transfers I now\n>programmed a workaround.But that is surely based on some component installed on the server, isn't it?Correct. I use a pyro-remote server. On request this remote server copies the relevant rows into a temporary table, uses a copy_to Call to push them into a StringIO-Objekt (that's Pythons version of \"In Memory File\"), serializes that StringIO-Objekt, does a bz2-compression and transfers the whole block via VPN.\nI read on in this thread, and I scheduled to check on psycopg2 and what it is doing with cursors.Harald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 49\n70435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!", "msg_date": "Fri, 22 Jun 2007 12:24:16 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "\n>> I did not find a solution so far; and for bulk data transfers I now\n>> >programmed a workaround.\n>>\n>> But that is surely based on some component installed on the server, \n>> isn't\n>> it?\n>>\n>\n> Correct. I use a pyro-remote server. On request this remote server copies\n> the relevant rows into a temporary table, uses a copy_to Call to push \n> them\n> into a StringIO-Objekt (that's Pythons version of \"In Memory File\"),\n> serializes that StringIO-Objekt, does a bz2-compression and transfers the\n> whole block via VPN.\n>\n> I read on in this thread, and I scheduled to check on psycopg2 and what \n> it is doing with cursors.\n\n\tWhat about a SSH tunnel using data compression ?\n\tIf you fetch all rows from a query in one go, would it be fast ?\n\tAlso, PG can now COPY from a query, so you don't really need the temp \ntable...\n", "msg_date": "Fri, 22 Jun 2007 12:45:13 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Rainer,\n\ninitially I was surprised you did not match non-CURSOR time with FETCH\n100, but then thinking little bit the explanation is very simple -\nlet's analyze what's going in both cases:\n\nWithout CURSOR:\n 1.) app calls PQexec() with \"Query\" and waiting for the result\n 2.) PG sends the result to app, data arriving grouped into max\npossible big packets, network latency is hidden by huge amount per\nsingle send\n\nWith CURSOR and FETCH 100:\n 1.) app calls PQexec() with \"BEGIN\" and waiting\n 2.) PG sends ok\n 3.) app calls PQexec() with \"DECLARE cursor for Query\" and waiting\n 4.) PG sends ok\n 5.) app calls PQexec() with \"FETCH 100\" and waiting\n 6.) 
PG sends the result of 100 rows to app, data arriving grouped\ninto max possible big packets, network latency is hidden by huge data\namount per single send\n 7.) no more data (as you have only 50 rows in output) and app calls\nPQexec() with \"CLOSE cursor\" and waiting\n 8.) PG sends ok\n 9.) app calls PQexec() with \"COMMIT\" and waiting\n 10.) PG sends ok\n\nas you see the difference is huge, and each step add your network\nlatency delay. So, with \"FETCH 100\" we save only cost of steps 5 and 6\n(default \"FETCH 1\" will loop here for all 50 rows adding 50x times\nlatency delay again). But we cannot solve cost of other steps as they\nneed to be executed one by one to keep execution logic and clean error\nhandling...\n\nHope it's more clear now and at least there is a choice :))\nAs well, if your query result will be 500 (for ex.) I think the\ndifference will be less important between non-CURSOR and \"FETCH 500\"\nexecution...\n\nRgds,\n-Dimitri\n\n\nOn 6/22/07, Rainer Bauer <[email protected]> wrote:\n> Hello Dimitri,\n>\n> >Rainer, but did you try initial query with FETCH_COUNT equal to 100?...\n>\n> Yes I tried it with different values and it's like you suspected:\n>\n> FETCH_COUNT 1 Time: 8642,000 ms\n> FETCH_COUNT 5 Time: 2360,000 ms\n> FETCH_COUNT 10 Time: 1563,000 ms\n> FETCH_COUNT 25 Time: 1329,000 ms\n> FETCH_COUNT 50 Time: 1140,000 ms\n> FETCH_COUNT 100 Time: 969,000 ms\n>\n> \\unset FETCH_COUNT Time: 390,000 ms\n>\n> Rainer\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Fri, 22 Jun 2007 14:24:07 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "PFC,\n\n> Correct. I use a pyro-remote server. On request this remote server copies\n> > the relevant rows into a temporary table, uses a copy_to Call to push\n> > them\n> > into a StringIO-Objekt (that's Pythons version of \"In Memory File\"),\n> > serializes that StringIO-Objekt, does a bz2-compression and transfers\n> the\n> > whole block via VPN.\n>\n\n> What about a SSH tunnel using data compression ?\nSetup on multiple Windows Workstations in multiple Installations is not\npossible.\n\n> If you fetch all rows from a query in one go, would it be fast ?\nI tried the same copy_to via VPN. It took 10-50x the time it took locally.\n\n>Also, PG can now COPY from a query, so you don't really need the temp\ntable...\nI know, but was stuck to 8.1 on some servers.\n\nBest wishes,\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July\nto Wednesday 11th July. See you there!\n\nPFC,> Correct. I use a pyro-remote server. On request this remote server copies\n> the relevant rows into a temporary table, uses a copy_to Call to push> them> into a StringIO-Objekt (that's Pythons version of \"In Memory File\"),> serializes that StringIO-Objekt, does a bz2-compression and transfers the\n> whole block via VPN.>       What about a SSH tunnel using data compression ?Setup on multiple Windows Workstations in multiple Installations is not possible.> If you fetch all rows from a query in one go, would it be fast ?\nI tried the same copy_to via VPN. 
It took 10-50x the time it took locally.>Also, PG can now COPY from a query, so you don't really need the temp table...I know, but was stuck to 8.1 on some servers. \nBest wishes,Harald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -\nEuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!", "msg_date": "Fri, 22 Jun 2007 14:45:12 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Dimitri,\n\n>Hope it's more clear now and at least there is a choice :))\n>As well, if your query result will be 500 (for ex.) I think the\n>difference will be less important between non-CURSOR and \"FETCH 500\"\n>execution...\n\nThe problem is that I am using ODBC and not libpq directly.\n\nI will have to rewrite most of the queries and use temporary tables in some\nplaces, but at least I know now what the problem was.\n\nThanks for your help. \n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 16:01:49 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer Bauer wrote:\n> Hello Dimitri,\n> \n>> Hope it's more clear now and at least there is a choice :))\n>> As well, if your query result will be 500 (for ex.) I think the\n>> difference will be less important between non-CURSOR and \"FETCH 500\"\n>> execution...\n> \n> The problem is that I am using ODBC and not libpq directly.\n\nThat opens up some questions. What ODBC driver are you using (with exact \nversion please).\n\nJoshua D. Drake\n\n> \n> I will have to rewrite most of the queries and use temporary tables in some\n> places, but at least I know now what the problem was.\n> \n> Thanks for your help. \n> \n> Rainer\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 22 Jun 2007 07:16:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Joshua,\n\n>That opens up some questions. What ODBC driver are you using (with exact \n>version please).\n\npsqlODBC 8.2.4.2 (build locally).\n\nI have restored the 8.2.4.0 from the official msi installer, but the results\nare the same.\n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 16:38:06 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Rainer, seeking psqlODBC code source it seems to work in similar way\nand have an option \"SQL_ROWSET_SIZE\" to execute FETCH query in the\nsame way as \"FETCH_COUNT\" in psql. 
Try to set it to 100 and let's see\nif it'll be better...\n\nRgds,\n-Dimitri\n\nOn 6/22/07, Rainer Bauer <[email protected]> wrote:\n> Hello Joshua,\n>\n> >That opens up some questions. What ODBC driver are you using (with exact\n> >version please).\n>\n> psqlODBC 8.2.4.2 (build locally).\n>\n> I have restored the 8.2.4.0 from the official msi installer, but the results\n> are the same.\n>\n> Rainer\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Fri, 22 Jun 2007 18:04:18 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data transfer very slow when connected via DSL" }, { "msg_contents": "Hello Dimitri,\n\n>Rainer, seeking psqlODBC code source it seems to work in similar way\n>and have an option \"SQL_ROWSET_SIZE\" to execute FETCH query in the\n>same way as \"FETCH_COUNT\" in psql. Try to set it to 100 and let's see\n>if it'll be better...\n\nBut that is only for bulk fetching with SQLExtendedFetch() and does not work\nfor my case with a single bound column where each tuple is retrieved\nindividually by calling SQLFetch().\nSee <http://msdn2.microsoft.com/en-us/library/ms713591.aspx>\n\nRainer\n", "msg_date": "Fri, 22 Jun 2007 23:51:15 +0200", "msg_from": "Rainer Bauer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data transfer very slow when connected via DSL" } ]
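To make the round-trip accounting in this thread easy to reproduce, here is a minimal psql sketch of the statement sequence that FETCH_COUNT generates behind the scenes; the table and WHERE clause are placeholders, not taken from Rainer's schema. With \timing enabled, each statement shows roughly one network round trip of latency, which is why larger fetch sizes amortise the DSL delay:

\timing
BEGIN;
DECLARE big_result CURSOR FOR
    SELECT * FROM some_table WHERE some_column = 42;   -- placeholder query
FETCH 100 FROM big_result;   -- one round trip per FETCH; bigger counts mean fewer trips
FETCH 100 FROM big_result;
CLOSE big_result;
COMMIT;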
[ { "msg_contents": "Hi there,\n\nReading different references, I understand there is no need to vacuum a \ntable where just insert actions perform. So I'm surprising to see a table \nwith just historical data, which is vacuumed at the nightly cron with a \nsimple VACUUM VERBOSE on about 1/3 of indexes amount.\n\nTake a look on the fragment log concerning this table:\nINFO: vacuuming \"public.tbTEST\"\nINFO: scanned index \"tbTEST_pkey\" to remove 1357614 row versions\nDETAIL: CPU 0.31s/1.38u sec elapsed 4.56 sec.\nINFO: \"tbTEST\": removed 1357614 row versions in 16923 pages\nDETAIL: CPU 0.70s/0.13u sec elapsed 2.49 sec.\nINFO: index \"tbTEST_pkey\" now contains 2601759 row versions in 12384 pages\nDETAIL: 1357614 index row versions were removed.\n5415 index pages have been deleted, 2452 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"tbTEST\": found 1357614 removable, 2601759 nonremovable row versions \nin 49153 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 29900 unused item pointers.\n16923 pages contain useful free space.\n0 pages are entirely empty.\nCPU 2.12s/1.87u sec elapsed 11.41 sec.\nINFO: \"tbTEST\": truncated 49153 to 32231 pages\nDETAIL: CPU 0.23s/0.06u sec elapsed 0.31 sec.\n\nI found the following statistics in pg_stat_user_tables:\nn_tup_ins = 11444229\nn_tup_upd = 0\nn_tup_del = 0\n\nThe structure of the table is the following:\nCREATE TABLE \"tbTEST\"\n(\n \"PK_ID\" integer NOT NULL DEFAULT nextval('\"tbTEST_PK_ID_seq\"'::regclass),\n \"FK_SourceTypeID\" integer,\n \"SourceID\" integer DEFAULT -1,\n \"Message\" character varying(500) NOT NULL DEFAULT ''::character varying,\n \"DateAndTime\" timestamp without time zone NOT NULL,\n CONSTRAINT \"tbTEST_pkey\" PRIMARY KEY (\"PK_ID\"),\n CONSTRAINT \"tbTEST_FK_SourceTypeID_fkey\" FOREIGN KEY (\"FK_SourceTypeID\")\n REFERENCES \"tbLISTS\" (\"PK_ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nPostgres version is 8.2.3.\n\nWhat's happen ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Thu, 21 Jun 2007 19:53:54 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum a lot of data when insert only" }, { "msg_contents": "On Thu, Jun 21, 2007 at 07:53:54PM +0300, Sabin Coanda wrote:\n> Reading different references, I understand there is no need to vacuum a \n> table where just insert actions perform. \n\nThat's false. First, you must vacuum at least once every 2 billion\ntransactions. Second, if a table is INSERTed to, but then the\nINSERTing transaction rolls back, it leaves a dead tuple in its wake. \nMy guess, from your posted example, is that you have the latter case\nhappening, because you have removable rows (that's assuming you\naren't mistaken that there's never a delete or update to the table).\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n", "msg_date": "Thu, 21 Jun 2007 13:17:33 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum a lot of data when insert only" } ]
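A quick way to see the second point above — that aborted inserts leave dead tuples behind even in an insert-only table — is a throwaway sketch like the following (the table name is made up for the demonstration; VACUUM VERBOSE should report on the order of 100000 removable row versions):

CREATE TABLE rollback_demo (id integer);    -- throwaway table, name is illustrative
BEGIN;
INSERT INTO rollback_demo SELECT g FROM generate_series(1, 100000) AS g;
ROLLBACK;
VACUUM VERBOSE rollback_demo;   -- the aborted insert shows up as removable row versions
DROP TABLE rollback_demo;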
[ { "msg_contents": "We recently upgraded a very large database (~550 GB) from 8.1.4 to 8.2.4 via\na pg_dump and pg_restore. (Note that the restore took several days.) We\nhad accepted the default settings:\n\nvacuum_freeze_min_age = 100 million\nautovacuum_freeze_max_age = 200 million\n\nDue to our very high transaction rate, it appears that a database-wide\nvacuum kicked off approximately 2 weeks after the restore. (Aside: after\nreading the docs and considering our system characteristics, I know now that\nour autovacuum_freeze_max_age should be more like 2 billion. However on\nthis machine I haven't changed the config settings yet.) Also, I believe,\nthat due to the bulk of our data having the same \"age\" after the restore,\nthe db-wide vacuum had *a lot* of rows to mark with the FrozenXID.\n\nThe good thing is that the db-wide vacuum, which ran for a long time, was\nreasonably non-intrusive to other database activity (somewhat, but\nreasonable for the short term). The other good thing was that concurrent\nautovacuum processes were still vacuuming/analyzing tables as necessary.\n\nThe bad thing, which I don't totally understand from reading the docs, is\nthat another db-wide vacuum kicked off exactly 24 hours after the first\ndb-wide vacuum kicked off, before the first one had finished. (Note that\nthese vacuums seem to go through the tables alphabetically.) I managed to\nexplain this to myself in that there were still rows in tables not yet\ntouched by the first db-wide vacuum that could have XIDs older than\nautovacuum_freeze_max_age. Fine, so two db-wide vacuums were now taking\nplace, one behind the other.\n\nThe first db-wide vacuum finished approximately 36 hours after it started.\nAt this point I was convinced that the second db-wide vacuum would run to\ncompletion with little or no work to do and all would be good. The thing I\ncan't explain is why a third db-wide vacuum kicked off exactly 24 hours\n(again) after the second db-wide vacuum kicked off (and the second vacuum\nstill running).\n\nWouldn't the first db-wide vacuum have marked any rows that needed it with\nthe FrozenXID? Why would a third db-wide vacuum kick off so soon after the\nfirst db-wide vacuum had completed? Surely there haven't been 100 million\nmore transactions in the last two days?\n\nCan someone explain what is going on here? I can't quite figure it out\nbased on the docs.\n\nThanks,\nSteve\n\nWe recently upgraded a very large database (~550 GB) from 8.1.4 to 8.2.4 via a pg_dump and pg_restore.  (Note that the restore took several days.)  We had accepted the default settings:\n \nvacuum_freeze_min_age = 100 million\nautovacuum_freeze_max_age = 200 million\n \nDue to our very high transaction rate, it appears that a database-wide vacuum kicked off approximately 2 weeks after the restore.  (Aside: after reading the docs and considering our system characteristics, I know now that our autovacuum_freeze_max_age should be more like 2 billion.  However on this machine I haven't changed the config settings yet.)  Also, I believe, that due to the bulk of our data having the same \"age\" after the restore, the db-wide vacuum had *a lot* of rows to mark with the FrozenXID.\n\n \nThe good thing is that the db-wide vacuum, which ran for a long time, was reasonably non-intrusive to other database activity (somewhat, but reasonable for the short term).  
The other good thing was that concurrent autovacuum processes were still vacuuming/analyzing tables as necessary.\n\n \nThe bad thing, which I don't totally understand from reading the docs, is that another db-wide vacuum kicked off exactly 24 hours after the first db-wide vacuum kicked off, before the first one had finished.  (Note that these vacuums seem to go through the tables alphabetically.)  I managed to explain this to myself in that there were still rows in tables not yet touched by the first db-wide vacuum that could have XIDs older than autovacuum_freeze_max_age.  Fine, so two db-wide vacuums were now taking place, one behind the other.\n\n \nThe first db-wide vacuum finished approximately 36 hours after it started.  At this point I was convinced that the second db-wide vacuum would run to completion with little or no work to do and all would be good.  The thing I can't explain is why a third db-wide vacuum kicked off exactly 24 hours (again) after the second db-wide vacuum kicked off (and the second vacuum still running).\n\n \nWouldn't the first db-wide vacuum have marked any rows that needed it with the FrozenXID?  Why would a third db-wide vacuum kick off so soon after the first db-wide vacuum had completed?  Surely there haven't been 100 million more transactions in the last two days?\n\n \nCan someone explain what is going on here?  I can't quite figure it out based on the docs.\n \nThanks,\nSteve", "msg_date": "Thu, 21 Jun 2007 13:09:57 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Database-wide VACUUM ANALYZE" }, { "msg_contents": "Steven Flatt wrote:\n> The bad thing, which I don't totally understand from reading the docs, is\n> that another db-wide vacuum kicked off exactly 24 hours after the first\n> db-wide vacuum kicked off, before the first one had finished. (Note that\n> these vacuums seem to go through the tables alphabetically.) I managed to\n> explain this to myself in that there were still rows in tables not yet\n> touched by the first db-wide vacuum that could have XIDs older than\n> autovacuum_freeze_max_age. Fine, so two db-wide vacuums were now taking\n> place, one behind the other.\n\nAre you sure there's no cron job starting the vacuums? 24h sounds too \ngood to be a coincidence, and there's no magic constant of 24h in the \nautovacuum code. Besides, autovacuum can only be running one VACUUM at a \ntime, so there must be something else launching them.\n\nWhat's your vacuuming strategy in general, before and after upgrade?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 21 Jun 2007 19:33:47 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "Steven Flatt writes:\n\n> Can someone explain what is going on here?� I can't quite figure it out \n> based on the docs. \n\nAre you on FreeBSD by any chance?\n\nI think the FreeBSD port by default installs a script that does a daily \nvacuum. 
If using another OS, perhaps you want to see if you used some sort \nof package system and if that package added a nightly vacuum.\n", "msg_date": "Thu, 21 Jun 2007 14:58:33 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "On 6/21/07, Francisco Reyes <[email protected]> wrote:\n>\n> Are you on FreeBSD by any chance?\n>\n> I think the FreeBSD port by default installs a script that does a daily\n> vacuum.\n\n\nYes, FreeBSD. Do you know what script that is? And it does a db-wide\nVACUUM ANALYZE every day?! That is certainly not necessary, and in fact,\ncostly for us.\n\nHmmm... I wonder why this would just start now, three days ago. Everything\nseemed to be normal for the last two weeks.\n\nSteve\n\n\nOn 6/21/07, Francisco Reyes <[email protected]> wrote:\nAre you on FreeBSD by any chance?I think the FreeBSD port by default installs a script that does a daily\nvacuum.\nYes, FreeBSD.  Do you know what script that is?  And it does a db-wide VACUUM ANALYZE every day?!  That is certainly not necessary, and in fact, costly for us.\n \nHmmm... I wonder why this would just start now, three days ago.  Everything seemed to be normal for the last two weeks.\n \nSteve", "msg_date": "Thu, 21 Jun 2007 15:36:00 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "On Thu, 21 Jun 2007, Steven Flatt wrote:\n\n> On 6/21/07, Francisco Reyes <[email protected]> wrote:\n>> \n>> Are you on FreeBSD by any chance?\n>> \n>> I think the FreeBSD port by default installs a script that does a daily\n>> vacuum.\n>\n>\n> Yes, FreeBSD. Do you know what script that is? And it does a db-wide\n> VACUUM ANALYZE every day?! That is certainly not necessary, and in fact,\n> costly for us.\n>\n> Hmmm... I wonder why this would just start now, three days ago. Everything\n> seemed to be normal for the last two weeks.\n>\nThe current FreeBSD port places the script in:\n\n/usr/local/etc/periodic/daily/502.pgsql\n\nAnd it can be controlled from /etc/periodic.conf\n\nSee the top of that script.\n\nLER\n\n> Steve\n>\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 512-248-2683 E-Mail: [email protected]\nUS Mail: 430 Valona Loop, Round Rock, TX 78681-3893\n", "msg_date": "Thu, 21 Jun 2007 14:59:53 -0500 (CDT)", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "In response to \"Steven Flatt\" <[email protected]>:\n\n> On 6/21/07, Francisco Reyes <[email protected]> wrote:\n> >\n> > Are you on FreeBSD by any chance?\n> >\n> > I think the FreeBSD port by default installs a script that does a daily\n> > vacuum.\n> \n> \n> Yes, FreeBSD. Do you know what script that is?\n\n/usr/local/etc/periodic/daily/502.pgsql\n\n> And it does a db-wide\n> VACUUM ANALYZE every day?! That is certainly not necessary, and in fact,\n> costly for us.\n\nYou can control it with knobs in /etc/periodic.conf (just like other\nperiodic job):\ndaily_pgsql_vacuum_enable=\"YES\"\ndaily_pgsql_backup_enable=\"NO\"\n\nare the defaults.\n\n> Hmmm... I wonder why this would just start now, three days ago. Everything\n> seemed to be normal for the last two weeks.\n\nSomeone alter /etc/periodic.conf? 
Perhaps it's been running all along but\nyou never noticed it before now?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 21 Jun 2007 16:07:32 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "Thanks everyone. It appears that we had hacked the 502.pgsql script for our\n8.1 build to disable the daily vacuum. I was not aware of this when\nbuilding and upgrading to 8.2.\n\nSo it looks like for the past two weeks, that 36 hour db-wide vacuum has\nbeen running every 24 hours. Good for it for being reasonably non-intrusive\nand going unnoticed until now. :)\n\nAlthough apparently not related anymore, I still think it was a good move to\nchange autovacuum_freeze_max_age from 200 million to 2 billion.\n\nSteve\n\nThanks everyone.  It appears that we had hacked the 502.pgsql script for our 8.1 build to disable the daily vacuum.  I was not aware of this when building and upgrading to 8.2.\n \nSo it looks like for the past two weeks, that 36 hour db-wide vacuum has been running every 24 hours.  Good for it for being reasonably non-intrusive and going unnoticed until now. :)\n \nAlthough apparently not related anymore, I still think it was a good move to change autovacuum_freeze_max_age from 200 million to 2 billion.\n \nSteve", "msg_date": "Thu, 21 Jun 2007 16:37:49 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "Steven Flatt escribi�:\n> Thanks everyone. It appears that we had hacked the 502.pgsql script for our\n> 8.1 build to disable the daily vacuum. I was not aware of this when\n> building and upgrading to 8.2.\n> \n> So it looks like for the past two weeks, that 36 hour db-wide vacuum has\n> been running every 24 hours. Good for it for being reasonably non-intrusive\n> and going unnoticed until now. :)\n\nLooks like you have plenty of spare I/O ;-)\n\n\n> Although apparently not related anymore, I still think it was a good move to\n> change autovacuum_freeze_max_age from 200 million to 2 billion.\n\nAbsolutely not related. Also note that\n\n1. autovacuum is not able (in 8.2 or older) to have more than one task\n running\n\n2. autovacuum in 8.2 doesn't ever launch database-wide vacuums. As of\n 8.2 it only vacuums tables that are in actual danger of Xid\n wraparound (according to pg_class.relfrozenxid); tables that had been\n vacuumed the day before would not need another vacuum for Xid\n purposes (though if you had modified the table to the point that it\n needed another vacuum, that would be another matter). Unless you\n consumed 200 million (or 2 billion) transactions during the day, that\n is.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"The only difference is that Saddam would kill you on private, where the\nAmericans will kill you in public\" (Mohammad Saleh, 39, a building contractor)\n", "msg_date": "Fri, 22 Jun 2007 09:55:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "On Jun 21, 2007, at 3:37 PM, Steven Flatt wrote:\n> Thanks everyone. It appears that we had hacked the 502.pgsql \n> script for our 8.1 build to disable the daily vacuum. 
I was not \n> aware of this when building and upgrading to 8.2.\n\nMuch better to change stuff in a config file than to hack installed \nscripts, for this very reason. :)\n\n> So it looks like for the past two weeks, that 36 hour db-wide \n> vacuum has been running every 24 hours. Good for it for being \n> reasonably non-intrusive and going unnoticed until now. :)\n>\n> Although apparently not related anymore, I still think it was a \n> good move to change autovacuum_freeze_max_age from 200 million to 2 \n> billion.\n\nIf you set that to 2B, that means you're 2^31-\"2 billion\"-1000000 \ntransactions away from a shutdown when autovac finally gets around to \ntrying to run a wraparound vacuum on a table. If you have any number \nof large tables, that could be a big problem, as autovac could get \ntied up on a large table for a long enough period that the table \nneeding to be frozen doesn't get frozen in time.\n\nI suspect 1B is a much better setting. I probably wouldn't go past 1.5B.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Mon, 25 Jun 2007 19:00:20 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "On 6/25/07, Jim Nasby <[email protected]> wrote:\n>\n> If you set that to 2B, that means you're 2^31-\"2 billion\"-1000000\n> transactions away from a shutdown when autovac finally gets around to\n> trying to run a wraparound vacuum on a table. If you have any number\n> of large tables, that could be a big problem, as autovac could get\n> tied up on a large table for a long enough period that the table\n> needing to be frozen doesn't get frozen in time.\n>\n> I suspect 1B is a much better setting. I probably wouldn't go past 1.5B.\n\n\n From my understanding of the docs, for tables that are not otherwise\nvacuumed, autovac will be invoked on it once every autovacuum_freeze_max_age\nminus vacuum_freeze_min_age transactions. In our case that's 2 billion -\n100 million = 1.9 billion transactions. So when an autovac finally kicks\noff on an otherwise non-vacuumed table, we are (2^31 - 1.9 billion) - 1\nmillion =~ 250 million transactions away from shutdown. (I guess that's\nclose to what you were saying.)\n\nMost of our large (partitioned) tables are insert-only (truncated\neventually) so will not be touched by autovacuum until wraparound prevention\nkicks in. However the tables are partitioned by timestamp so tables will\ncross the 1.9 billion marker at different times (some not at all, as the\ndata will have been truncated).\n\nDo you still think the 250 million transactions away from shutdown is\ncutting it too close? Recall that the unintentional db-wide vacuum analyze\nthat was going on last week on our system took less than two days to\ncomplete.\n\nSteve\n\nOn 6/25/07, Jim Nasby <[email protected]> wrote:\nIf you set that to 2B, that means you're 2^31-\"2 billion\"-1000000transactions away from a shutdown when autovac finally gets around to\ntrying to run a wraparound vacuum on a table. If you have any numberof large tables, that could be a big problem, as autovac could gettied up on a large table for a long enough period that the tableneeding to be frozen doesn't get frozen in time.\nI suspect 1B is a much better setting. 
I probably wouldn't go past 1.5B.\n \nFrom my understanding of the docs, for tables that are not otherwise vacuumed, autovac will be invoked on it once every autovacuum_freeze_max_age minus vacuum_freeze_min_age transactions.  In our case that's 2 billion - 100 million = \n1.9 billion transactions.  So when an autovac finally kicks off on an otherwise non-vacuumed table, we are (2^31 - 1.9 billion) - 1 million =~ 250 million transactions away from shutdown.  (I guess that's close to what you were saying.)\n\n \nMost of our large (partitioned) tables are insert-only (truncated eventually) so will not be touched by autovacuum until wraparound prevention kicks in.  However the tables are partitioned by timestamp so tables will cross the \n1.9 billion marker at different times (some not at all, as the data will have been truncated).\n \nDo you still think the 250 million transactions away from shutdown is cutting it too close?  Recall that the unintentional db-wide vacuum analyze that was going on last week on our system took less than two days to complete.\n\n \nSteve", "msg_date": "Tue, 26 Jun 2007 13:25:44 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database-wide VACUUM ANALYZE" }, { "msg_contents": "Steven Flatt escribi�:\n\n> Most of our large (partitioned) tables are insert-only (truncated\n> eventually) so will not be touched by autovacuum until wraparound prevention\n> kicks in. However the tables are partitioned by timestamp so tables will\n> cross the 1.9 billion marker at different times (some not at all, as the\n> data will have been truncated).\n\nNote that as of 8.3, tables that are truncated do not need vacuuming for\nXid wraparound purposes, because the counter is updated on TRUNCATE (as\nit is on CLUSTER and certain forms of ALTER TABLE).\n\n> Do you still think the 250 million transactions away from shutdown is\n> cutting it too close? Recall that the unintentional db-wide vacuum analyze\n> that was going on last week on our system took less than two days to\n> complete.\n\nIs this 8.1 or 8.2? In the latter you don't ever need db-wide vacuums\nat all, because Xid wraparound is tracked per table, so only tables\nactually needing vacuum are processed. To answer your question, the\nfollowup question is how many transactions normally take place in two\ndays. If they are way less than 250 million then you don't need to\nworry. Otherwise, the database may shut itself down to protect from Xid\nwraparound.\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"La soledad es compa��a\"\n", "msg_date": "Tue, 26 Jun 2007 14:49:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database-wide VACUUM ANALYZE" } ]
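For readers who want to check how close their own databases and tables are to the forced anti-wraparound vacuum discussed in this thread, a sketch along these lines works on 8.2, where relfrozenxid is tracked per table as Alvaro describes; compare the reported ages against autovacuum_freeze_max_age:

-- system catalogs as of 8.2
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY 2 DESC;

SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY 2 DESC
 LIMIT 20;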
[ { "msg_contents": "I can't seem to find a definitive answer to this.\n\nIt looks like Postgres does not enforce a limit on the length of an SQL\nstring. Great. However is there some point at which a query string becomes\nridiculously too long and affects performance? Here's my particular case:\nconsider an INSERT statement where you're using the new multi-row VALUES\nclause or SELECT ... UNION ALL to group together tuples. Is it always\nbetter to group as many together as possible?\n\nFor example, on a toy table with two columns, I noticed about a 20% increase\nwhen bulking together 1000 tuples in one INSERT statement as opposed to\ndoing 1000 individual INSERTS. Would this be the same for 10000? 100000?\nDoes it depend on the width of the tuples or the data types?\n\nAre there any values A and B such that grouping together A tuples and B\ntuples separately and running two statements, will be faster than grouping\nA+B tuples in one statement?\n\nSteve\n\nI can't seem to find a definitive answer to this.\n \nIt looks like Postgres does not enforce a limit on the length of an SQL string.  Great.  However is there some point at which a query string becomes ridiculously too long and affects performance?  Here's my particular case: consider an INSERT statement where you're using the new multi-row VALUES clause or SELECT ... UNION ALL to group together tuples.  Is it always better to group as many together as possible?\n\n \nFor example, on a toy table with two columns, I noticed about a 20% increase when bulking together 1000 tuples in one INSERT statement as opposed to doing 1000 individual INSERTS.  Would this be the same for 10000? 100000?  Does it depend on the width of the tuples or the data types?\n\n \nAre there any values A and B such that grouping together A tuples and B tuples separately and running two statements, will be faster than grouping A+B tuples in one statement?\n \nSteve", "msg_date": "Thu, 21 Jun 2007 14:33:01 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very long SQL strings" }, { "msg_contents": "Steven Flatt wrote:\n> It looks like Postgres does not enforce a limit on the length of an SQL\n> string. Great. However is there some point at which a query string \n> becomes\n> ridiculously too long and affects performance? Here's my particular case:\n> consider an INSERT statement where you're using the new multi-row VALUES\n> clause or SELECT ... UNION ALL to group together tuples. Is it always\n> better to group as many together as possible?\n\nI'm sure you'll reach a point of diminishing returns, and eventually a \nceiling where you run out of memory etc, but I don't know what the limit \nwould be.\n\nThe most efficient way to do bulk inserts is to stream the data with COPY.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 21 Jun 2007 19:41:55 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very long SQL strings" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> It looks like Postgres does not enforce a limit on the length of an SQL\n> string. Great. However is there some point at which a query string becomes\n> ridiculously too long and affects performance?\n\nYes, but it'll depend a whole lot on context; I'd suggest\nexperimentation if you want to derive a number for your particular\nsituation. 
For starters, whether you are on 32- or 64-bit hardware\nis hugely relevant.\n\nFYI, when we developed multi-row-VALUES quite a bit of thought was\nput into maintaining performance with lots of rows, and IIRC we saw\nreasonable performance up into the tens of thousands of rows (depending\non how wide the rows are). Other ways of making a query long, such as\nlots of WHERE clauses, might send performance into the tank a lot\nquicker.\n\nSo the short answer is it all depends.\n\n\t\t\tregards, tom lane\n\nPS: for the record, there is a hard limit at 1GB of query text, owing\nto restrictions built into palloc. But I think you'd hit other\nmemory limits or performance bottlenecks before that one.\n", "msg_date": "Thu, 21 Jun 2007 14:45:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very long SQL strings " }, { "msg_contents": "Steven Flatt <[email protected]> schrieb:\n> For example, on a toy table with two columns, I noticed about a 20% increase\n> when bulking together 1000 tuples in one INSERT statement as opposed to doing\n> 1000 individual INSERTS. Would this be the same for 10000? 100000? Does it\n> depend on the width of the tuples or the data types?\n\nI guess you can obtain the same if you pack all INSERTs into one\ntransaction. \n\nAnd, faster than INSERT: COPY.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Thu, 21 Jun 2007 20:48:33 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very long SQL strings" }, { "msg_contents": "[email protected] (Tom Lane) writes:\n> PS: for the record, there is a hard limit at 1GB of query text, owing\n> to restrictions built into palloc. 
But I think you'd hit other\n> memory limits or performance bottlenecks before that one.\n\nIt would be much funnier to set a hard limit of 640K of query text.\nThe reasoning should be obvious :-).\n\nI once ran into the situation where Slony-I generated a query that\nmade the parser blow out (some sort of memory problem / running out of\nstack space somewhere thing); it was just short of 640K long, and so\nwe figured that evidently it was wrong to conclude that \"640K ought to\nbe enough for anybody.\"\n\nNeil Conway was an observer; he was speculating that, with some\n(possibly nontrivial) change to the parser, we should have been able\nto cope with it.\n\nThe query consisted mostly of a NOT IN clause where the list had some\natrocious number of entries in it (all integers).\n\n(Aside: I wound up writing a \"query compressor\" (now in 1.2) which\nwould read that list and, if it was at all large, try to squeeze any\nsets of consecutive integers into sets of \"NOT BETWEEN\" clauses.\nUsually, the lists, of XIDs, were more or less consecutive, and\nfrequently, in the cases where the query got to MBs in size, there\nwould be sets of hundreds or even thousands of consecutive integers\nsuch that we'd be left with a tiny query after this...)\n-- \nselect 'cbbrowne' || '@' || 'linuxfinances.info';\nhttp://linuxfinances.info/info/linux.html\nAs of next Monday, MACLISP will no longer support list structure.\nPlease downgrade your programs.\n", "msg_date": "Thu, 21 Jun 2007 15:52:24 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very long SQL strings" }, { "msg_contents": "Thanks everyone for your responses. I don't think it's realistic to change\nour application infrastructure to use COPY from a stream at this point.\nIt's good to know that multi-row-VALUES is good up into the thousands of\nrows (depending on various things, of course). That's a good enough answer\nfor what I was looking for and we can revisit this if performance does start\nto hurt.\n\nOn 6/21/07, Andreas Kretschmer <[email protected]> wrote:\n>\n> I guess you can obtain the same if you pack all INSERTs into one\n> transaction.\n\n\nWell the 20% gain I referred to was when all individual INSERTs were within\none transaction. When each INSERT does its own commit, it's significantly\nslower.\n\nSteve\n\nThanks everyone for your responses.  I don't think it's realistic to change our application infrastructure to use COPY from a stream at this point.  It's good to know that multi-row-VALUES is good up into the thousands of rows (depending on various things, of course).  That's a good enough answer for what I was looking for and we can revisit this if performance does start to hurt.\n\nOn 6/21/07, Andreas Kretschmer <[email protected]> wrote:\nI guess you can obtain the same if you pack all INSERTs into onetransaction.\n \nWell the 20% gain I referred to was when all individual INSERTs were within one transaction.  
When each INSERT does its own commit, it's significantly slower.\n \nSteve", "msg_date": "Thu, 21 Jun 2007 16:09:31 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very long SQL strings" }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> I once ran into the situation where Slony-I generated a query that\n> made the parser blow out (some sort of memory problem / running out of\n> stack space somewhere thing); it was just short of 640K long, and so\n> we figured that evidently it was wrong to conclude that \"640K ought to\n> be enough for anybody.\"\n\n> Neil Conway was an observer; he was speculating that, with some\n> (possibly nontrivial) change to the parser, we should have been able\n> to cope with it.\n\n> The query consisted mostly of a NOT IN clause where the list had some\n> atrocious number of entries in it (all integers).\n\nFWIW, we do seem to have improved that as of 8.2. Assuming your entries\nwere 6-or-so-digit integers, that would have been on the order of 80K\nentries, and we can manage it --- not amazingly fast, but it doesn't\nblow out the stack anymore.\n\n> (Aside: I wound up writing a \"query compressor\" (now in 1.2) which\n> would read that list and, if it was at all large, try to squeeze any\n> sets of consecutive integers into sets of \"NOT BETWEEN\" clauses.\n> Usually, the lists, of XIDs, were more or less consecutive, and\n> frequently, in the cases where the query got to MBs in size, there\n> would be sets of hundreds or even thousands of consecutive integers\n> such that we'd be left with a tiny query after this...)\n\nProbably still a win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2007 16:59:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very long SQL strings " } ]
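As a rough illustration of the alternatives weighed in this thread, the sketch below loads a toy two-column table three ways (table name and values are placeholders): row-by-row INSERTs pay a per-statement cost even inside one transaction, multi-row VALUES (new in 8.2) amortises it, and COPY streams the data with the least overhead:

CREATE TEMP TABLE load_demo (id integer, val text);   -- toy table; names and values are placeholders

-- one statement per row: slowest, even when wrapped in a single transaction
BEGIN;
INSERT INTO load_demo VALUES (1, 'a');
INSERT INTO load_demo VALUES (2, 'b');
-- ... and so on for each row ...
COMMIT;

-- one multi-row VALUES statement; the list can run into the thousands of rows
INSERT INTO load_demo VALUES (3, 'c'), (4, 'd'), (5, 'e');

-- COPY from the client, usually the fastest option for bulk loads
COPY load_demo FROM STDIN WITH CSV;
6,f
7,g
\.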
[ { "msg_contents": "Hi -\n I'm looking at ways to do clean PITR backups. Currently we're \npg_dumping our data in some cases when compressed is about 100GB. \nNeedless to say it's slow and IO intensive on both the host and the \nbackup server.\n\n All of our databases are on NetApp storage and I have been looking \nat SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume \nreplica) for backing up our databases. The problem is because there \nis no write-suspend or even a 'hot backup mode' for postgres it's \nvery plausible that the database has data in RAM that hasn't been \nwritten and will corrupt the data. NetApp suggested that if we do a \nSnapMirror, we do a couple in succession ( < 1s) so should one be \ncorrupt, we try the next one. They said oracle does something similar.\n\n Is there a better way to quiesce the database without shutting it \ndown? Some of our databases are doing about 250,000 commits/min.\n\nBest Regards,\nDan Gorman\n\n", "msg_date": "Thu, 21 Jun 2007 16:31:32 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "PITR Backups" }, { "msg_contents": "Dan Gorman <[email protected]> writes:\n> All of our databases are on NetApp storage and I have been looking \n> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume \n> replica) for backing up our databases. The problem is because there \n> is no write-suspend or even a 'hot backup mode' for postgres it's \n> very plausible that the database has data in RAM that hasn't been \n> written and will corrupt the data.\n\nI think you need to read the fine manual a bit more closely:\nhttp://www.postgresql.org/docs/8.2/static/backup-file.html\nIf the NetApp does provide an instantaneous-snapshot operation then\nit will work fine; you just have to be sure the snap covers both\ndata and WAL files.\n\nAlternatively, you can use a PITR base backup as suggested here:\nhttp://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n\nIn either case, the key point is that you need both the data files\nand matching WAL files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Jun 2007 20:26:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "\nTom Lane wrote:\n> Dan Gorman <[email protected]> writes:\n>> All of our databases are on NetApp storage and I have been looking \n>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume \n>> replica) for backing up our databases. The problem is because there \n>> is no write-suspend or even a 'hot backup mode' for postgres it's \n>> very plausible that the database has data in RAM that hasn't been \n>> written and will corrupt the data.\n\n> Alternatively, you can use a PITR base backup as suggested here:\n> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n\nI think Dan's problem is important if we use PostgreSQL to a large size database:\n\n- When we take a PITR base backup with hardware level snapshot operation\n (not filesystem level) which a lot of storage vender provide, the backup data\n can be corrupted as Dan said. 
During recovery we can't even read it,\n especially if meta-data was corrupted.\n\n- If we don't use hardware level snapshot operation, it takes long time to take\n a large backup data, and a lot of full-page-written WAL files are made.\n\nSo, I think users need a new feature not to write out heap pages during taking a\nbackup.\n\nAny comments?\n\nBest regards,\n\n-- \nToru SHIMOGAKI<[email protected]>\nNTT Open Source Software Center\n\n", "msg_date": "Fri, 22 Jun 2007 11:30:49 +0900", "msg_from": "Toru SHIMOGAKI <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Toru SHIMOGAKI wrote:\n> Tom Lane wrote:\n\n> - When we take a PITR base backup with hardware level snapshot operation\n> (not filesystem level) which a lot of storage vender provide, the backup data\n> can be corrupted as Dan said. During recovery we can't even read it,\n> especially if meta-data was corrupted.\n> \n> - If we don't use hardware level snapshot operation, it takes long time to take\n> a large backup data, and a lot of full-page-written WAL files are made.\n\nDoes it? I have done it with fairly large databases without issue.\n\nJoshua D. Drake\n\n\n> \n> So, I think users need a new feature not to write out heap pages during taking a\n> backup.\n> \n> Any comments?\n> \n> Best regards,\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 21 Jun 2007 20:10:30 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\nOn Jun 21, 2007, at 7:30 PM, Toru SHIMOGAKI wrote:\n\n>\n> Tom Lane wrote:\n>> Dan Gorman <[email protected]> writes:\n>>> All of our databases are on NetApp storage and I have been \n>>> looking\n>>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume\n>>> replica) for backing up our databases. The problem is because there\n>>> is no write-suspend or even a 'hot backup mode' for postgres it's\n>>> very plausible that the database has data in RAM that hasn't been\n>>> written and will corrupt the data.\n>\n>> Alternatively, you can use a PITR base backup as suggested here:\n>> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n>\n> I think Dan's problem is important if we use PostgreSQL to a large \n> size database:\n>\n> - When we take a PITR base backup with hardware level snapshot \n> operation\n> (not filesystem level) which a lot of storage vender provide, the \n> backup data\n> can be corrupted as Dan said. During recovery we can't even read it,\n> especially if meta-data was corrupted.\n\nI can't see any explanation for how this could happen, other\nthan your hardware vendor is lying about snapshot ability.\n\nWhat problems have you actually seen?\n\nCheers,\n Steve\n\n\n\n\n", "msg_date": "Thu, 21 Jun 2007 21:01:41 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\nSteve Atkins wrote:\n\n>> - When we take a PITR base backup with hardware level snapshot operation\n>> (not filesystem level) which a lot of storage vender provide, the \n>> backup data\n>> can be corrupted as Dan said. 
During recovery we can't even read it,\n>> especially if meta-data was corrupted.\n> \n> I can't see any explanation for how this could happen, other\n> than your hardware vendor is lying about snapshot ability.\n\nAll of the hardware vendors I asked always said:\n\n \"The hardware level snapshot has nothing to do with filesystem condition and \nof course with what data has been written from operating system chache to the \nhard disk platter. It just copies byte data on storage to the other volume.\n\nSo, if any data is written during taking snapshot, we can't assurance data \ncorrectness *strictly* .\n\nIn Oracle, no table data is written between BEGIN BACKUP and END BACKUP, and it \nis not a problem REDO is written...\"\n\nI'd like to know the correct information if the explanation has any mistakes, or \na good way to avoid the probrem.\n\nI think there are users who want to migrate Oracle to PostgreSQL but can't \nbecause of the problem as above.\n\n\nBest regards,\n\n-- \nToru SHIMOGAKI<[email protected]>\nNTT Open Source Software Center\n\n", "msg_date": "Fri, 22 Jun 2007 16:30:17 +0900", "msg_from": "Toru SHIMOGAKI <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\nJoshua D. Drake wrote:\n\n>> - If we don't use hardware level snapshot operation, it takes long time to take\n>> a large backup data, and a lot of full-page-written WAL files are made.\n> \n> Does it? I have done it with fairly large databases without issue.\n\nYou mean hardware snapshot? I know taking a backup using rsync(or tar, cp?) as a\nn online backup method is not so a big problem as documented. But it just take a\nlong time if we handle a terabyte database. We have to VACUUM and other batch\nprocesses to the large database as well, so we don't want to take a long time\nto take a backup...\n\nRegards,\n\n-- \nToru SHIMOGAKI<[email protected]>\nNTT Open Source Software Center\n\n", "msg_date": "Fri, 22 Jun 2007 16:41:24 +0900", "msg_from": "Toru SHIMOGAKI <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Here is an example. 
Most of the snap shots worked fine, but I did get \nthis once:\n\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n00:39:43 PDTLOG: redo done at 71/99870670\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [11-1] 2007-06-21 \n00:39:43 PDTWARNING: page 13626 of relation 1663/16384/76716 did not \nexist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [12-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28904 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [13-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26711 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [14-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28900 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [15-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535208 of relation 1663/16384/33190 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [16-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28917 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [17-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535207 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [18-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28916 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [19-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28911 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [20-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26708 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [21-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28914 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [22-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28909 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [23-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28908 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [24-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28913 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [25-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26712 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [26-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28918 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [27-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28912 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [28-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535209 of relation 1663/16384/33190 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [29-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28907 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [30-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28906 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [31-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26713 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 
postgres[3506]: [32-1] 2007-06-21 \n00:39:43 PDTWARNING: page 17306 of relation 1663/16384/76710 did not \nexist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [33-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26706 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [34-1] 2007-06-21 \n00:39:43 PDTWARNING: page 800226 of relation 1663/16384/33204 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [35-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28915 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [36-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26710 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [37-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28903 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [38-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28902 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [39-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28910 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [40-1] 2007-06-21 \n00:39:43 PDTPANIC: WAL contains references to invalid pages\nJun 21 00:39:43 sfmedstorageha001 postgres[3503]: [1-1] 2007-06-21 \n00:39:43 PDTLOG: startup process (PID 3506) was terminated by signal 6\nJun 21 00:39:43 sfmedstorageha001 postgres[3503]: [2-1] 2007-06-21 \n00:39:43 PDTLOG: aborting startup due to startup process failure\nJun 21 00:39:43 sfmedstorageha001 postgres[3505]: [1-1] 2007-06-21 \n00:39:43 PDTLOG: logger shutting down\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-1] 2007-06-21 \n00:40:39 PDTLOG: database system was interrupted while in recovery \nat 2007-06-21 00:36:40 PDT\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-2] 2007-06-21 \n00:40:39 PDTHINT: This probably means that some data is corrupted \nand you will have to use the last backup for\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-3] recovery.\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [2-1] 2007-06-21 \n00:40:39 PDTLOG: checkpoint record is at 71/9881E928\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [3-1] 2007-06-21 \n00:40:39 PDTLOG: redo record is at 71/986BF148; undo record is at \n0/0; shutdown FALSE\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [4-1] 2007-06-21 \n00:40:39 PDTLOG: next transaction ID: 0/2871389429; next OID: 83795\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [5-1] 2007-06-21 \n00:40:39 PDTLOG: next MultiXactId: 1; next MultiXactOffset: 0\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [6-1] 2007-06-21 \n00:40:39 PDTLOG: database system was not properly shut down; \nautomatic recovery in progress\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [7-1] 2007-06-21 \n00:40:39 PDTLOG: redo starts at 71/986BF148\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [8-1] 2007-06-21 \n00:40:39 PDTLOG: record with zero length at 71/998706A8\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [9-1] 2007-06-21 \n00:40:39 PDTLOG: redo done at 71/99870670\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [10-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28905 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [11-1] 2007-06-21 \n00:40:39 PDTWARNING: page 13626 of relation 1663/16384/76716 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [12-1] 2007-06-21 
\n00:40:39 PDTWARNING: page 28904 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [13-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26711 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [14-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28900 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [15-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535208 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [16-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28917 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [17-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535207 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [18-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28916 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [19-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28911 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [20-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26708 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [21-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28914 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [22-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28909 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [23-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28908 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [24-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28913 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [25-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26712 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [26-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28918 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [27-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28912 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [28-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535209 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [29-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28907 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [30-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28906 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [31-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26713 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [32-1] 2007-06-21 \n00:40:39 PDTWARNING: page 17306 of relation 1663/16384/76710 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [33-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26706 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [34-1] 2007-06-21 \n00:40:39 PDTWARNING: page 800226 of relation 1663/16384/33204 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [35-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28915 of relation 
1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [36-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26710 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [37-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28903 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [38-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28902 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [39-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28910 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [40-1] 2007-06-21 \n00:40:39 PDTPANIC: WAL contains references to invalid pages\nJun 21 00:40:39 sfmedstorageha001 postgres[3755]: [1-1] 2007-06-21 \n00:40:39 PDTLOG: startup process (PID 3757) was terminated by signal 6\nJun 21 00:40:39 sfmedstorageha001 postgres[3755]: [2-1] 2007-06-21 \n00:40:39 PDTLOG: aborting startup due to startup process failure\nJun 21 00:40:39 sfmedstorageha001 postgres[3756]: [1-1] 2007-06-21 \n00:40:39 PDTLOG: logger shutting down\n\n\nOn Jun 22, 2007, at 12:30 AM, Toru SHIMOGAKI wrote:\n\n>\n> Steve Atkins wrote:\n>\n>>> - When we take a PITR base backup with hardware level snapshot \n>>> operation\n>>> (not filesystem level) which a lot of storage vender provide, \n>>> the backup data\n>>> can be corrupted as Dan said. During recovery we can't even \n>>> read it,\n>>> especially if meta-data was corrupted.\n>> I can't see any explanation for how this could happen, other\n>> than your hardware vendor is lying about snapshot ability.\n>\n> All of the hardware vendors I asked always said:\n>\n> \"The hardware level snapshot has nothing to do with filesystem \n> condition and of course with what data has been written from \n> operating system chache to the hard disk platter. It just copies \n> byte data on storage to the other volume.\n>\n> So, if any data is written during taking snapshot, we can't \n> assurance data correctness *strictly* .\n>\n> In Oracle, no table data is written between BEGIN BACKUP and END \n> BACKUP, and it is not a problem REDO is written...\"\n>\n> I'd like to know the correct information if the explanation has any \n> mistakes, or a good way to avoid the probrem.\n>\n> I think there are users who want to migrate Oracle to PostgreSQL \n> but can't because of the problem as above.\n>\n>\n> Best regards,\n>\n> -- \n> Toru SHIMOGAKI<[email protected]>\n> NTT Open Source Software Center\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Fri, 22 Jun 2007 00:43:13 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\nDan Gorman wrote:\n> Here is an example. Most of the snap shots worked fine, but I did get \n> this once:\n\nThank you for your example. 
I'd appreciate it if I'd get any responses; whether \nwe should tackle the problem for 8.4?\n\nRegards,\n\n-- \nToru SHIMOGAKI<[email protected]>\nNTT Open Source Software Center\n\n", "msg_date": "Fri, 22 Jun 2007 17:23:50 +0900", "msg_from": "Toru SHIMOGAKI <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:\n> Tom Lane wrote:\n> > Dan Gorman <[email protected]> writes:\n> >> All of our databases are on NetApp storage and I have been looking \n> >> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume \n> >> replica) for backing up our databases. The problem is because there \n> >> is no write-suspend or even a 'hot backup mode' for postgres it's \n> >> very plausible that the database has data in RAM that hasn't been \n> >> written and will corrupt the data.\n> \n> > Alternatively, you can use a PITR base backup as suggested here:\n> > http://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n> \n> I think Dan's problem is important if we use PostgreSQL to a large size database:\n> \n> - When we take a PITR base backup with hardware level snapshot operation\n> (not filesystem level) which a lot of storage vender provide, the backup data\n> can be corrupted as Dan said. During recovery we can't even read it,\n> especially if meta-data was corrupted.\n> \n> - If we don't use hardware level snapshot operation, it takes long time to take\n> a large backup data, and a lot of full-page-written WAL files are made.\n> \n> So, I think users need a new feature not to write out heap pages during taking a\n> backup.\n\nYour worries are unwarranted, IMHO. It appears Dan was taking a snapshot\nwithout having read the procedure as clearly outlined in the manual.\n\npg_start_backup() flushes all currently dirty blocks to disk as part of\na checkpoint. If you snapshot after that point, then you will have all\nthe data blocks required from which to correctly roll forward. On its\nown, the snapshot is an inconsistent backup and will give errors as Dan\nshows. It is only when the snapshot is used as the base backup in a full\ncontinuous recovery that the inconsistencies are removed and the\ndatabase is fully and correctly restored.\n\npg_start_backup() is the direct analogue of Oracle's ALTER DATABASE\nBEGIN BACKUP. Snapshots work with Oracle too, in much the same way.\n\nAfter reviewing the manual, if you honestly think there is a problem,\nplease let me know and I'll work with you to investigate.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jun 2007 11:55:47 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "This snapshot is done at the LUN (filer) level, postgres is un-aware \nwe're creating a backup, so I'm not sure how pg_start_backup() plays \ninto this ...\n\nRegards,\nDan Gorman\n\nOn Jun 22, 2007, at 3:55 AM, Simon Riggs wrote:\n\n> On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:\n>> Tom Lane wrote:\n>>> Dan Gorman <[email protected]> writes:\n>>>> All of our databases are on NetApp storage and I have been \n>>>> looking\n>>>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume\n>>>> replica) for backing up our databases. 
The problem is because there\n>>>> is no write-suspend or even a 'hot backup mode' for postgres it's\n>>>> very plausible that the database has data in RAM that hasn't been\n>>>> written and will corrupt the data.\n>>\n>>> Alternatively, you can use a PITR base backup as suggested here:\n>>> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n>>\n>> I think Dan's problem is important if we use PostgreSQL to a large \n>> size database:\n>>\n>> - When we take a PITR base backup with hardware level snapshot \n>> operation\n>> (not filesystem level) which a lot of storage vender provide, \n>> the backup data\n>> can be corrupted as Dan said. During recovery we can't even read \n>> it,\n>> especially if meta-data was corrupted.\n>>\n>> - If we don't use hardware level snapshot operation, it takes long \n>> time to take\n>> a large backup data, and a lot of full-page-written WAL files \n>> are made.\n>>\n>> So, I think users need a new feature not to write out heap pages \n>> during taking a\n>> backup.\n>\n> Your worries are unwarranted, IMHO. It appears Dan was taking a \n> snapshot\n> without having read the procedure as clearly outlined in the manual.\n>\n> pg_start_backup() flushes all currently dirty blocks to disk as \n> part of\n> a checkpoint. If you snapshot after that point, then you will have all\n> the data blocks required from which to correctly roll forward. On its\n> own, the snapshot is an inconsistent backup and will give errors as \n> Dan\n> shows. It is only when the snapshot is used as the base backup in a \n> full\n> continuous recovery that the inconsistencies are removed and the\n> database is fully and correctly restored.\n>\n> pg_start_backup() is the direct analogue of Oracle's ALTER DATABASE\n> BEGIN BACKUP. Snapshots work with Oracle too, in much the same way.\n>\n> After reviewing the manual, if you honestly think there is a problem,\n> please let me know and I'll work with you to investigate.\n>\n> -- \n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n", "msg_date": "Fri, 22 Jun 2007 04:10:36 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:\n> This snapshot is done at the LUN (filer) level, postgres is un-aware \n> we're creating a backup, so I'm not sure how pg_start_backup() plays \n> into this ...\n\nPostgres *is* completely unaware that you intend to take a backup, that\nis *exactly* why you must tell the server you intend to make a backup,\nusing pg_start_backup() and pg_stop_backup(). That way Postgres will\nflush its buffers, so that they are present on storage when you make the\nbackup.\n\nIs the procedure for Oracle or any other transactional RDBMS any\ndifferent? \n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jun 2007 12:38:09 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Ah okay. I understand now. So how can I signal postgres I'm about to \ntake a backup ? (read doc from previous email ? 
)\n\nRegards,\nDan Gorman\n\nOn Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:\n\n> On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:\n>> This snapshot is done at the LUN (filer) level, postgres is un-aware\n>> we're creating a backup, so I'm not sure how pg_start_backup() plays\n>> into this ...\n>\n> Postgres *is* completely unaware that you intend to take a backup, \n> that\n> is *exactly* why you must tell the server you intend to make a backup,\n> using pg_start_backup() and pg_stop_backup(). That way Postgres will\n> flush its buffers, so that they are present on storage when you \n> make the\n> backup.\n>\n> Is the procedure for Oracle or any other transactional RDBMS any\n> different?\n>\n> -- \n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n", "msg_date": "Fri, 22 Jun 2007 04:51:19 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "You can use the psql command line to run:\n\n\"select pg_start_backup();\"\n\n...then when you're done,\n\n\"select pg_stop_backup();\"\n\nif you want an example from the unix command line:\n\npsql -c \"select pg_start_backup();\" database_name\n\nthen\n\npsql -c \"select pg_stop_backup();\" database_name\n\n/kurt\n\n\nOn Jun 22, 2007, at 7:51 AM, Dan Gorman wrote:\n\n> Ah okay. I understand now. So how can I signal postgres I'm about \n> to take a backup ? (read doc from previous email ? )\n>\n> Regards,\n> Dan Gorman\n>\n> On Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:\n>\n>> On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:\n>>> This snapshot is done at the LUN (filer) level, postgres is un-aware\n>>> we're creating a backup, so I'm not sure how pg_start_backup() plays\n>>> into this ...\n>>\n>> Postgres *is* completely unaware that you intend to take a backup, \n>> that\n>> is *exactly* why you must tell the server you intend to make a \n>> backup,\n>> using pg_start_backup() and pg_stop_backup(). 
That way Postgres will\n>> flush its buffers, so that they are present on storage when you \n>> make the\n>> backup.\n>>\n>> Is the procedure for Oracle or any other transactional RDBMS any\n>> different?\n>>\n>> -- \n>> Simon Riggs\n>> EnterpriseDB http://www.enterprisedb.com\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Fri, 22 Jun 2007 08:10:46 -0400", "msg_from": "Kurt Overberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWasn't it select pg_start_backup('backuplabel');?\n\nAndreas\n\nKurt Overberg wrote:\n> You can use the psql command line to run:\n> \n> \"select pg_start_backup();\"\n> \n> ...then when you're done,\n> \n> \"select pg_stop_backup();\"\n> \n> if you want an example from the unix command line:\n> \n> psql -c \"select pg_start_backup();\" database_name\n> \n> then\n> \n> psql -c \"select pg_stop_backup();\" database_name\n> \n> /kurt\n> \n> \n> On Jun 22, 2007, at 7:51 AM, Dan Gorman wrote:\n> \n>> Ah okay. I understand now. So how can I signal postgres I'm about to\n>> take a backup ? (read doc from previous email ? )\n>>\n>> Regards,\n>> Dan Gorman\n>>\n>> On Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:\n>>\n>>> On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:\n>>>> This snapshot is done at the LUN (filer) level, postgres is un-aware\n>>>> we're creating a backup, so I'm not sure how pg_start_backup() plays\n>>>> into this ...\n>>>\n>>> Postgres *is* completely unaware that you intend to take a backup, that\n>>> is *exactly* why you must tell the server you intend to make a backup,\n>>> using pg_start_backup() and pg_stop_backup(). That way Postgres will\n>>> flush its buffers, so that they are present on storage when you make the\n>>> backup.\n>>>\n>>> Is the procedure for Oracle or any other transactional RDBMS any\n>>> different?\n>>>\n>>> -- Simon Riggs\n>>> EnterpriseDB http://www.enterprisedb.com\n>>>\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>> choose an index scan if your joining column's datatypes do not\n>>> match\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGe7zyHJdudm4KnO0RAgyaAJ9Vz52izICKYkep/wZpJMFPkfAiuQCfZcjB\nyUYM6rYu18HmTAs3F4VaGJo=\n=n3vX\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 22 Jun 2007 14:13:38 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "On Fri, 2007-06-22 at 17:23 +0900, Toru SHIMOGAKI wrote:\n> Dan Gorman wrote:\n> > Here is an example. 
Most of the snap shots worked fine, but I did get \n> > this once:\n> \n> Thank you for your example. I'd appreciate it if I'd get any responses; whether \n> we should tackle the problem for 8.4?\n\nIf you see a problem, please explain what it is, after careful review of\nthe manual.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jun 2007 13:14:46 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Toru SHIMOGAKI wrote:\n> Joshua D. Drake wrote:\n> \n>>> - If we don't use hardware level snapshot operation, it takes long time to take\n>>> a large backup data, and a lot of full-page-written WAL files are made.\n>> Does it? I have done it with fairly large databases without issue.\n> \n> You mean hardware snapshot?\n\nOh goodness no. :)\n\n> I know taking a backup using rsync(or tar, cp?) as a\n> n online backup method is not so a big problem as documented. But it just take a\n\nI use rsync with pg_start/stop_backup and it works very well. Even on\ndatabases that are TB in size.\n\n> long time if we handle a terabyte database. We have to VACUUM and other batch\n> processes to the large database as well, so we don't want to take a long time\n> to take a backup...\n\nAhh o.k. that makes sense. The difference here is probably how often we\ntake the snapshot. We take them very often to insure we don't have a ton\nof logs we have to pull over.\n\nJoshua D. Drake\n\n> \n> Regards,\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 22 Jun 2007 06:53:14 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Toru SHIMOGAKI wrote:\n> \n> Steve Atkins wrote:\n> \n>>> - When we take a PITR base backup with hardware level snapshot operation\n>>> (not filesystem level) which a lot of storage vender provide, the \n>>> backup data\n>>> can be corrupted as Dan said. During recovery we can't even read it,\n>>> especially if meta-data was corrupted.\n>>\n>> I can't see any explanation for how this could happen, other\n>> than your hardware vendor is lying about snapshot ability.\n> \n> All of the hardware vendors I asked always said:\n> \n> \"The hardware level snapshot has nothing to do with filesystem \n> condition and of course with what data has been written from operating \n> system chache to the hard disk platter. It just copies byte data on \n> storage to the other volume.\n\nRight that has been my understanding as well.\n\nJoshua D. Drake\n\n> \n> So, if any data is written during taking snapshot, we can't assurance \n> data correctness *strictly* .\n> \n> In Oracle, no table data is written between BEGIN BACKUP and END BACKUP, \n> and it is not a problem REDO is written...\"\n> \n> I'd like to know the correct information if the explanation has any \n> mistakes, or a good way to avoid the probrem.\n> \n> I think there are users who want to migrate Oracle to PostgreSQL but \n> can't because of the problem as above.\n> \n> \n> Best regards,\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 22 Jun 2007 06:54:01 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\n>> So, if any data is written during taking snapshot, we can't assurance data\n>> correctness *strictly* .\n\nThat sounds nothing like what I've heard called a \"snapshot\" before. Some\n\"filesystems\" which aren't really filesystems but are also storage layer\ndrivers like Veritas (and ZFS?) allow you to take a snapshot which they\nguarantee is atomic. You can do them while you have concurrent i/o and be sure\nto get a single consistent view of the filesystem.\n\nIf you're just copying blocks from a device without any atomic snapshot\nguarantee then you're going to get garbage. Even in Postgres wasn't writing\nanything the OS might still choose to flush blocks during that time, possibly\nnot even Postgres data blocks but filesystem meta-information blocks.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Fri, 22 Jun 2007 16:31:42 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Dan Gorman <[email protected]> writes:\n> This snapshot is done at the LUN (filer) level, postgres is un-aware \n> we're creating a backup, so I'm not sure how pg_start_backup() plays \n> into this ...\n\nThat method works too, as long as you snapshot both the data files and\nWAL files --- when you start PG from the backup, it will think it\ncrashed and recover by replaying WAL. So, assuming that the snapshot\ntechnology really works, it should be exactly as reliable as crash\nrecovery is. If you saw a problem I'd be inclined to question whether\nthere is some upstream component (OS or disk controller) that's\nreordering writes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2007 13:12:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:\n> Dan Gorman <[email protected]> writes:\n> > This snapshot is done at the LUN (filer) level, postgres is un-aware \n> > we're creating a backup, so I'm not sure how pg_start_backup() plays \n> > into this ...\n> \n> That method works too, as long as you snapshot both the data files and\n> WAL files --- when you start PG from the backup, it will think it\n> crashed and recover by replaying WAL. So, assuming that the snapshot\n> technology really works, it should be exactly as reliable as crash\n> recovery is. 
\n\n> If you saw a problem I'd be inclined to question whether\n> there is some upstream component (OS or disk controller) that's\n> reordering writes.\n\nGiven thats exactly what they do, constantly, I don't think its safe to\nsay that it works since we cannot verify whether that has happened or\nnot.\n\nAt the very least, you should issue a CHECKPOINT prior to taking the\nsnapshot, to ensure that the write barriers have gone through.\n\nBut that being said, I'm not quite sure why following the Continuous\nArchiving procedures is a problem, since they don't add any significant\noverhead, over and above the checkpoint command.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jun 2007 18:45:57 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:\n>> If you saw a problem I'd be inclined to question whether\n>> there is some upstream component (OS or disk controller) that's\n>> reordering writes.\n\n> Given thats exactly what they do, constantly, I don't think its safe to\n> say that it works since we cannot verify whether that has happened or\n> not.\n\nIf he's trying to snapshot at a level of hardware that's behind a\nwrite-caching disk controller, I agree that that's untrustworthy.\n\nIf not, ie if he's snapshotting the actual durable state of the storage\nsystem, then any problems in the snapshot indicate a problem with the\ndatabase's ability to recover from a crash. So I don't think you should\ntell him to not worry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jun 2007 14:02:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "Hi,\n\nYear, I agree we should carefully follow how Done really did a backup. \nMy point is PostgreSQL may have to extend the file during the hot backup \nto write to the new block. It is slightly different from Oracle's case. \n Oracle allocates all the database space in advance so that there could \nbe no risk to modify the metadata on the fly. In our case, because SAN \nbased storage snapshot is device level, not file system level, even a \nfile system does not know that the snapshot is being taken and we might \nencounter the case where metadata and/or user data are not consistent. \nSuch snapshot (whole filesystem) might be corrupted and cause file \nsystem level error.\n\nI'm interested in this. Any further comment/openion is welcome.\n\nRegards;\n\nSimon Riggs Wrote:\n> On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:\n>> Tom Lane wrote:\n>>> Dan Gorman <[email protected]> writes:\n>>>> All of our databases are on NetApp storage and I have been looking \n>>>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume \n>>>> replica) for backing up our databases. 
The problem is because there \n>>>> is no write-suspend or even a 'hot backup mode' for postgres it's \n>>>> very plausible that the database has data in RAM that hasn't been \n>>>> written and will corrupt the data.\n>>> Alternatively, you can use a PITR base backup as suggested here:\n>>> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html\n>> I think Dan's problem is important if we use PostgreSQL to a large size database:\n>>\n>> - When we take a PITR base backup with hardware level snapshot operation\n>> (not filesystem level) which a lot of storage vender provide, the backup data\n>> can be corrupted as Dan said. During recovery we can't even read it,\n>> especially if meta-data was corrupted.\n>>\n>> - If we don't use hardware level snapshot operation, it takes long time to take\n>> a large backup data, and a lot of full-page-written WAL files are made.\n>>\n>> So, I think users need a new feature not to write out heap pages during taking a\n>> backup.\n> \n> Your worries are unwarranted, IMHO. It appears Dan was taking a snapshot\n> without having read the procedure as clearly outlined in the manual.\n> \n> pg_start_backup() flushes all currently dirty blocks to disk as part of\n> a checkpoint. If you snapshot after that point, then you will have all\n> the data blocks required from which to correctly roll forward. On its\n> own, the snapshot is an inconsistent backup and will give errors as Dan\n> shows. It is only when the snapshot is used as the base backup in a full\n> continuous recovery that the inconsistencies are removed and the\n> database is fully and correctly restored.\n> \n> pg_start_backup() is the direct analogue of Oracle's ALTER DATABASE\n> BEGIN BACKUP. Snapshots work with Oracle too, in much the same way.\n> \n> After reviewing the manual, if you honestly think there is a problem,\n> please let me know and I'll work with you to investigate.\n> \n\n\n-- \n-------------\nKoichi Suzuki\n", "msg_date": "Mon, 25 Jun 2007 19:06:07 +0900", "msg_from": "Koichi Suzuki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "On Mon, 2007-06-25 at 19:06 +0900, Koichi Suzuki wrote:\n\n> Year, I agree we should carefully follow how Done really did a backup. \n\n> My point is PostgreSQL may have to extend the file during the hot backup \n> to write to the new block. \n\nIf the snapshot is a consistent, point-in-time copy then I don't see how\nany I/O at all makes a difference. To my knowledge, both EMC and NetApp\nproduce snapshots like this. IIRC, EMC calls these instant snapshots,\nNetApp calls them frozen snapshots.\n\n> It is slightly different from Oracle's case. \n> Oracle allocates all the database space in advance so that there could \n> be no risk to modify the metadata on the fly. \n\nNot really sure its different.\n\nOracle allows dynamic file extensions and I've got no evidence that file\nextension is prevented from occurring during backup simply as a result\nof issuing the start hot backup command.\n\nOracle and DB2 both support a stop-I/O-to-the-database mode. My\nunderstanding is that isn't required any more if you do an instant\nsnapshot, so if people are using instant snapshots it should certainly\nbe the case that they are safe to do this with PostgreSQL also.\n\nOracle is certainly more picky about snapshotted files than PostgreSQL\nis. In Oracle, each file has a header with the LSN of the last\ncheckpoint in it. 
This is used at recovery time to ensure the backup is\nconsistent by having exactly equal LSNs across all files. PostgreSQL\ndoesn't use file headers and we don't store the LSN on a per-file basis,\nthough we do store the LSN in the control file for the whole server.\n\n> In our case, because SAN \n> based storage snapshot is device level, not file system level, even a \n> file system does not know that the snapshot is being taken and we might \n> encounter the case where metadata and/or user data are not consistent. \n> Such snapshot (whole filesystem) might be corrupted and cause file \n> system level error.\n> \n> I'm interested in this. Any further comment/openion is welcome.\n\nIf you can show me either \n \ni) an error that occurs after the full and correct PostgreSQL hot backup\nprocedures have been executed, or\n\nii) present a conjecture that explains in detail how a device level\nerror might occur\n\nthen I will look into this further.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jun 2007 14:26:02 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Koichi Suzuki <[email protected]> writes:\n> Year, I agree we should carefully follow how Done really did a backup. \n> My point is PostgreSQL may have to extend the file during the hot backup \n> to write to the new block. It is slightly different from Oracle's case. \n> Oracle allocates all the database space in advance so that there could \n> be no risk to modify the metadata on the fly. In our case, because SAN \n> based storage snapshot is device level, not file system level, even a \n> file system does not know that the snapshot is being taken and we might \n> encounter the case where metadata and/or user data are not consistent. \n> Such snapshot (whole filesystem) might be corrupted and cause file \n> system level error.\n\nSurely a hot-backup technique that cannot even produce a consistent\nstate of filesystem metadata is too broken to be considered a backup\ntechnique at all.\n\nAFAIK, actually workable methods of this type depend on filesystem\ncooperation, and are able to produce coherent snapshots of the logical\n(not necessarily physical) filesystem content at a specific instant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 10:31:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> writes:\n\n> AFAIK, actually workable methods of this type depend on filesystem\n> cooperation, and are able to produce coherent snapshots of the logical\n> (not necessarily physical) filesystem content at a specific instant.\n\nI think you need filesystem cooperation in order to provide access to the\nsnapshot somewhere. But the actual snapshotting is done at a very low level by\nintercepting any block writes and stashing away the old version before writing\nor alternately by noting the new version and redirecting any reads to the new\nversion.\n\nI concur that anything that doesn't allow concurrent i/o while the\nsnapshotting is happening is worthless. 
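\n\n(For reference, the pg_start_backup()/pg_stop_backup() sequence described\nupthread -- just a minimal sketch, assuming WAL archiving is already\nconfigured and using 'lun_snap' as a made-up label -- is roughly:\n\n    SELECT pg_start_backup('lun_snap');  -- forces a checkpoint, writes a backup label\n    -- take the storage-level snapshot of the data directory *and* pg_xlog\n    -- here, with whatever tool the array provides (outside the database)\n    SELECT pg_stop_backup();             -- ends backup mode\n\nbut even that doesn't help if the block-level copy itself isn't atomic.)\n\n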
It sounds like you're just dd'ing from\nthe device which is pretty much guaranteed not to work.\n\nEven if Postgres didn't do any i/o there's nothing stopping the OS and\nfilesystem from issuing i/o.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 25 Jun 2007 16:00:25 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "It's the latter, is snapshot of the durable state of the storage \nsystem (e.g. it will never be corrupted)\n\nRegards,\nDan Gorman\n\nOn Jun 22, 2007, at 11:02 AM, Tom Lane wrote:\n\n> \"Simon Riggs\" <[email protected]> writes:\n>> On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:\n>>> If you saw a problem I'd be inclined to question whether\n>>> there is some upstream component (OS or disk controller) that's\n>>> reordering writes.\n>\n>> Given thats exactly what they do, constantly, I don't think its \n>> safe to\n>> say that it works since we cannot verify whether that has happened or\n>> not.\n>\n> If he's trying to snapshot at a level of hardware that's behind a\n> write-caching disk controller, I agree that that's untrustworthy.\n>\n> If not, ie if he's snapshotting the actual durable state of the \n> storage\n> system, then any problems in the snapshot indicate a problem with the\n> database's ability to recover from a crash. So I don't think you \n> should\n> tell him to not worry.\n>\n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jun 2007 08:26:52 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "I took several snapshots. In all cases the FS was fine. In one case \nthe db looked like on recovery it thought there were outstanding \npages to be written to disk as seen below and the db wouldn't start.\n\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n00:39:43 PDTLOG: redo done at 71/99870670\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [11-1] 2007-06-21 \n00:39:43 PDTWARNING: page 13626 of relation 1663/16384/76716 did not \nexist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [12-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28904 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [13-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26711 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [14-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28900 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [15-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535208 of relation 1663/16384/33190 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [16-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28917 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [17-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535207 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [18-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28916 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [19-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28911 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: 
[20-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26708 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [21-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28914 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [22-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28909 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [23-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28908 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [24-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28913 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [25-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26712 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [26-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28918 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [27-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28912 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [28-1] 2007-06-21 \n00:39:43 PDTWARNING: page 3535209 of relation 1663/16384/33190 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [29-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28907 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [30-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28906 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [31-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26713 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [32-1] 2007-06-21 \n00:39:43 PDTWARNING: page 17306 of relation 1663/16384/76710 did not \nexist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [33-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26706 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [34-1] 2007-06-21 \n00:39:43 PDTWARNING: page 800226 of relation 1663/16384/33204 did \nnot exist\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [35-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28915 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [36-1] 2007-06-21 \n00:39:43 PDTWARNING: page 26710 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [37-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28903 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [38-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28902 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [39-1] 2007-06-21 \n00:39:43 PDTWARNING: page 28910 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:39:43 sfmedstorageha001 postgres[3506]: [40-1] 2007-06-21 \n00:39:43 PDTPANIC: WAL contains references to invalid pages\nJun 21 00:39:43 sfmedstorageha001 postgres[3503]: [1-1] 2007-06-21 \n00:39:43 PDTLOG: startup process (PID 3506) was terminated by signal 6\nJun 21 00:39:43 sfmedstorageha001 postgres[3503]: [2-1] 2007-06-21 \n00:39:43 PDTLOG: aborting startup due to startup process failure\nJun 21 00:39:43 sfmedstorageha001 postgres[3505]: [1-1] 2007-06-21 \n00:39:43 PDTLOG: logger shutting down\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-1] 
2007-06-21 \n00:40:39 PDTLOG: database system was interrupted while in recovery \nat 2007-06-21 00:36:40 PDT\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-2] 2007-06-21 \n00:40:39 PDTHINT: This probably means that some data is corrupted \nand you will have to use the last backup for\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [1-3] recovery.\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [2-1] 2007-06-21 \n00:40:39 PDTLOG: checkpoint record is at 71/9881E928\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [3-1] 2007-06-21 \n00:40:39 PDTLOG: redo record is at 71/986BF148; undo record is at \n0/0; shutdown FALSE\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [4-1] 2007-06-21 \n00:40:39 PDTLOG: next transaction ID: 0/2871389429; next OID: 83795\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [5-1] 2007-06-21 \n00:40:39 PDTLOG: next MultiXactId: 1; next MultiXactOffset: 0\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [6-1] 2007-06-21 \n00:40:39 PDTLOG: database system was not properly shut down; \nautomatic recovery in progress\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [7-1] 2007-06-21 \n00:40:39 PDTLOG: redo starts at 71/986BF148\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [8-1] 2007-06-21 \n00:40:39 PDTLOG: record with zero length at 71/998706A8\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [9-1] 2007-06-21 \n00:40:39 PDTLOG: redo done at 71/99870670\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [10-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28905 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [11-1] 2007-06-21 \n00:40:39 PDTWARNING: page 13626 of relation 1663/16384/76716 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [12-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28904 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [13-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26711 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [14-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28900 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [15-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535208 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [16-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28917 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [17-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535207 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [18-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28916 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [19-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28911 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [20-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26708 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [21-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28914 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [22-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28909 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [23-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28908 of relation 1663/16384/76718 was 
\nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [24-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28913 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [25-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26712 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [26-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28918 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [27-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28912 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [28-1] 2007-06-21 \n00:40:39 PDTWARNING: page 3535209 of relation 1663/16384/33190 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [29-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28907 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [30-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28906 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [31-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26713 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [32-1] 2007-06-21 \n00:40:39 PDTWARNING: page 17306 of relation 1663/16384/76710 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [33-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26706 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [34-1] 2007-06-21 \n00:40:39 PDTWARNING: page 800226 of relation 1663/16384/33204 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [35-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28915 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [36-1] 2007-06-21 \n00:40:39 PDTWARNING: page 26710 of relation 1663/16384/76719 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [37-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28903 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [38-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28902 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [39-1] 2007-06-21 \n00:40:39 PDTWARNING: page 28910 of relation 1663/16384/76718 was \nuninitialized\nJun 21 00:40:39 sfmedstorageha001 postgres[3757]: [40-1] 2007-06-21 \n00:40:39 PDTPANIC: WAL contains references to invalid pages\nJun 21 00:40:39 sfmedstorageha001 postgres[3755]: [1-1] 2007-06-21 \n00:40:39 PDTLOG: startup process (PID 3757) was terminated by signal 6\nJun 21 00:40:39 sfmedstorageha001 postgres[3755]: [2-1] 2007-06-21 \n00:40:39 PDTLOG: aborting startup due to startup process failure\nJun 21 00:40:39 sfmedstorageha001 postgres[3756]: [1-1] 2007-06-21 \n00:40:39 PDTLOG: logger shutting down\n\n\n\n\nOn Jun 25, 2007, at 6:26 AM, Simon Riggs wrote:\n\n> On Mon, 2007-06-25 at 19:06 +0900, Koichi Suzuki wrote:\n>\n>> Year, I agree we should carefully follow how Done really did a \n>> backup.\n>\n>> My point is PostgreSQL may have to extend the file during the hot \n>> backup\n>> to write to the new block.\n>\n> If the snapshot is a consistent, point-in-time copy then I don't \n> see how\n> any I/O at all makes a difference. To my knowledge, both EMC and \n> NetApp\n> produce snapshots like this. 
IIRC, EMC calls these instant snapshots,\n> NetApp calls them frozen snapshots.\n>\n>> It is slightly different from Oracle's case.\n>> Oracle allocates all the database space in advance so that there \n>> could\n>> be no risk to modify the metadata on the fly.\n>\n> Not really sure its different.\n>\n> Oracle allows dynamic file extensions and I've got no evidence that \n> file\n> extension is prevented from occurring during backup simply as a result\n> of issuing the start hot backup command.\n>\n> Oracle and DB2 both support a stop-I/O-to-the-database mode. My\n> understanding is that isn't required any more if you do an instant\n> snapshot, so if people are using instant snapshots it should certainly\n> be the case that they are safe to do this with PostgreSQL also.\n>\n> Oracle is certainly more picky about snapshotted files than PostgreSQL\n> is. In Oracle, each file has a header with the LSN of the last\n> checkpoint in it. This is used at recovery time to ensure the \n> backup is\n> consistent by having exactly equal LSNs across all files. PostgreSQL\n> doesn't use file headers and we don't store the LSN on a per-file \n> basis,\n> though we do store the LSN in the control file for the whole server.\n>\n>> In our case, because SAN\n>> based storage snapshot is device level, not file system level, even a\n>> file system does not know that the snapshot is being taken and we \n>> might\n>> encounter the case where metadata and/or user data are not \n>> consistent.\n>> Such snapshot (whole filesystem) might be corrupted and cause file\n>> system level error.\n>>\n>> I'm interested in this. Any further comment/openion is welcome.\n>\n> If you can show me either\n>\n> i) an error that occurs after the full and correct PostgreSQL hot \n> backup\n> procedures have been executed, or\n>\n> ii) present a conjecture that explains in detail how a device level\n> error might occur\n>\n> then I will look into this further.\n>\n> -- \n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n\n\n", "msg_date": "Mon, 25 Jun 2007 08:28:51 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\"Dan Gorman\" <[email protected]> writes:\n\n> I took several snapshots. In all cases the FS was fine. In one case the db\n> looked like on recovery it thought there were outstanding pages to be written\n> to disk as seen below and the db wouldn't start.\n>\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 00:39:43\n> PDTLOG: redo done at 71/99870670\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 00:39:43\n> PDTWARNING: page 28905 of relation 1663/16384/76718 was uninitialized\n\nWhat version of Postgres did you say this was?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 25 Jun 2007 17:02:53 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Greg,\n\nPG 8.2.4\n\nRegards,\nDan Gorman\n\nOn Jun 25, 2007, at 9:02 AM, Gregory Stark wrote:\n\n> \"Dan Gorman\" <[email protected]> writes:\n>\n>> I took several snapshots. In all cases the FS was fine. 
In one \n>> case the db\n>> looked like on recovery it thought there were outstanding pages \n>> to be written\n>> to disk as seen below and the db wouldn't start.\n>>\n>> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n>> 00:39:43\n>> PDTLOG: redo done at 71/99870670\n>> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] \n>> 2007-06-21 00:39:43\n>> PDTWARNING: page 28905 of relation 1663/16384/76718 was \n>> uninitialized\n>\n> What version of Postgres did you say this was?\n>\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n", "msg_date": "Mon, 25 Jun 2007 09:05:12 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "On Mon, 2007-06-25 at 08:28 -0700, Dan Gorman wrote:\n> I took several snapshots. In all cases the FS was fine. In one case \n> the db looked like on recovery it thought there were outstanding \n> pages to be written to disk as seen below and the db wouldn't start.\n> \n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n> 00:39:43 PDTLOG: redo done at 71/99870670\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 \n> 00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was \n> uninitialized\n\nOK, please put log_min_messages = DEBUG2 and re-run the recovery please.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jun 2007 17:23:18 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Dan Gorman <[email protected]> writes:\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n> 00:39:43 PDTLOG: redo done at 71/99870670\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 \n> 00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was \n> uninitialized\n> ... 
lots of these ...\n> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [40-1] 2007-06-21 \n> 00:39:43 PDTPANIC: WAL contains references to invalid pages\n\n(BTW, you'll find putting a space at the end of log_line_prefix\ndoes wonders for log readability.)\n\nReformatting and sorting, we have\n\nWARNING: page 3535207 of relation 1663/16384/33190 was uninitialized\nWARNING: page 3535208 of relation 1663/16384/33190 did not exist\nWARNING: page 3535209 of relation 1663/16384/33190 did not exist\n\nWARNING: page 800226 of relation 1663/16384/33204 did not exist\n\nWARNING: page 17306 of relation 1663/16384/76710 did not exist\n\nWARNING: page 13626 of relation 1663/16384/76716 did not exist\n\nWARNING: page 28900 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28902 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28903 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28904 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28905 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28906 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28907 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28908 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28909 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28910 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28911 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28912 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28913 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28914 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28915 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28916 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28917 of relation 1663/16384/76718 was uninitialized\nWARNING: page 28918 of relation 1663/16384/76718 was uninitialized\n\nWARNING: page 26706 of relation 1663/16384/76719 was uninitialized\nWARNING: page 26708 of relation 1663/16384/76719 was uninitialized\nWARNING: page 26710 of relation 1663/16384/76719 was uninitialized\nWARNING: page 26711 of relation 1663/16384/76719 was uninitialized\nWARNING: page 26712 of relation 1663/16384/76719 was uninitialized\nWARNING: page 26713 of relation 1663/16384/76719 was uninitialized\n\nSo the problems were pretty localized, probably at the ends of these\nfiles. Can you go back to the source database and check which\ntables these are --- match the last number cited in each line\nagainst pg_class.relfilenode? Are they tables or indexes, and\nabout how big are they?\n\nA possible explanation is we stopped scanning WAL before reaching\nrecords that truncated or dropped these tables. But it's not clear why.\nCould we see the last few log lines before the \"redo done\" one?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 12:34:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "On Mon, 2007-06-25 at 12:34 -0400, Tom Lane wrote:\n> Dan Gorman <[email protected]> writes:\n> > Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21 \n> > 00:39:43 PDTLOG: redo done at 71/99870670\n\nThis is mid-way through an xlog file.\n\n> > Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21 \n> > 00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was \n> > uninitialized\n> > ... 
lots of these ...\n> > Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [40-1] 2007-06-21 \n> > 00:39:43 PDTPANIC: WAL contains references to invalid pages\n> \n> (BTW, you'll find putting a space at the end of log_line_prefix\n> does wonders for log readability.)\n> \n> Reformatting and sorting, we have\n> \n> WARNING: page 28900 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28902 of relation 1663/16384/76718 was uninitialized\n\n> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized\n\nThose two are interesting because we appear to have two valid pages in\nthe middle of some uninitialized ones. That implies were not looking at\nan unapplied truncation.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jun 2007 17:56:06 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Thanks for the pointers to a) make it readable and b) log min messages\n\nI didn't however keep the snapshots around. I could try and re-set \nthis scenario up. I was in the middle of doing some data migration \nwith Netapp and wanted to just 'test' it to make sure it was sane.\n\nIf you guys would like me to try to 'break' it again and keep the db \naround for further testing let me know.\n\nRegards,\nDan Gorman\n\n\nOn Jun 25, 2007, at 9:34 AM, Tom Lane wrote:\n\n> Dan Gorman <[email protected]> writes:\n>> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21\n>> 00:39:43 PDTLOG: redo done at 71/99870670\n>> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21\n>> 00:39:43 PDTWARNING: page 28905 of relation 1663/16384/76718 was\n>> uninitialized\n>> ... 
lots of these ...\n>> Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [40-1] 2007-06-21\n>> 00:39:43 PDTPANIC: WAL contains references to invalid pages\n>\n> (BTW, you'll find putting a space at the end of log_line_prefix\n> does wonders for log readability.)\n>\n> Reformatting and sorting, we have\n>\n> WARNING: page 3535207 of relation 1663/16384/33190 was uninitialized\n> WARNING: page 3535208 of relation 1663/16384/33190 did not exist\n> WARNING: page 3535209 of relation 1663/16384/33190 did not exist\n>\n> WARNING: page 800226 of relation 1663/16384/33204 did not exist\n>\n> WARNING: page 17306 of relation 1663/16384/76710 did not exist\n>\n> WARNING: page 13626 of relation 1663/16384/76716 did not exist\n>\n> WARNING: page 28900 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28902 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28903 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28904 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28905 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28906 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28907 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28908 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28909 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28910 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28911 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28912 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28913 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28914 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28915 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28916 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28917 of relation 1663/16384/76718 was uninitialized\n> WARNING: page 28918 of relation 1663/16384/76718 was uninitialized\n>\n> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26710 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26711 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26712 of relation 1663/16384/76719 was uninitialized\n> WARNING: page 26713 of relation 1663/16384/76719 was uninitialized\n>\n> So the problems were pretty localized, probably at the ends of these\n> files. Can you go back to the source database and check which\n> tables these are --- match the last number cited in each line\n> against pg_class.relfilenode? Are they tables or indexes, and\n> about how big are they?\n>\n> A possible explanation is we stopped scanning WAL before reaching\n> records that truncated or dropped these tables. 
But it's not clear \n> why.\n> Could we see the last few log lines before the \"redo done\" one?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n", "msg_date": "Mon, 25 Jun 2007 10:04:48 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n\n>> WARNING: page 28900 of relation 1663/16384/76718 was uninitialized\n>> WARNING: page 28902 of relation 1663/16384/76718 was uninitialized\n>\n>> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized\n>> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized\n>\n> Those two are interesting because we appear to have two valid pages in\n> the middle of some uninitialized ones. That implies were not looking at\n> an unapplied truncation.\n\nYou don't have fsync off do you? That could explain missing pages at the end\nof a file like this too. And it would explain how you could have two written\nin the midst of others that are missing.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Mon, 25 Jun 2007 18:07:24 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n>> Reformatting and sorting, we have\n>> \n>> WARNING: page 28900 of relation 1663/16384/76718 was uninitialized\n>> WARNING: page 28902 of relation 1663/16384/76718 was uninitialized\n\n>> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized\n>> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized\n\n> Those two are interesting because we appear to have two valid pages in\n> the middle of some uninitialized ones. That implies were not looking at\n> an unapplied truncation.\n\nNot necessarily --- it's possible the WAL sequence simply didn't touch\nthose pages.\n\nYour suggestion to rerun the recovery with higher log_min_messages\nis a good one, because that way we'd get some detail about what the\nWAL records that touched the pages were. I think DEBUG1 would be\nsufficient for that, though, and DEBUG2 might be pretty durn verbose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 13:10:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " }, { "msg_contents": "No, however, I will attach the postgreql.conf so everyone can look at \nother settings just in case.\n\n\n\nRegards,\nDan Gorman\n\nOn Jun 25, 2007, at 10:07 AM, Gregory Stark wrote:\n\n> \"Simon Riggs\" <[email protected]> writes:\n>\n>>> WARNING: page 28900 of relation 1663/16384/76718 was uninitialized\n>>> WARNING: page 28902 of relation 1663/16384/76718 was uninitialized\n>>\n>>> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized\n>>> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized\n>>\n>> Those two are interesting because we appear to have two valid \n>> pages in\n>> the middle of some uninitialized ones. That implies were not \n>> looking at\n>> an unapplied truncation.\n>\n> You don't have fsync off do you? That could explain missing pages \n> at the end\n> of a file like this too. 
And it would explain how you could have \n> two written\n> in the midst of others that are missing.\n>\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>", "msg_date": "Mon, 25 Jun 2007 10:10:55 -0700", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PITR Backups" }, { "msg_contents": "Dan Gorman <[email protected]> writes:\n> I didn't however keep the snapshots around. I could try and re-set \n> this scenario up. I was in the middle of doing some data migration \n> with Netapp and wanted to just 'test' it to make sure it was sane.\n\n> If you guys would like me to try to 'break' it again and keep the db \n> around for further testing let me know.\n\nYeah, please do. It's not entirely clear whether you've found a bug\nor not, and it'd be good to determine that ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 13:12:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PITR Backups " } ]
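A minimal sketch of the pg_class lookup Tom Lane asks for in the thread above, to identify which relations the WARNING lines refer to and roughly how large they are. The relfilenode values are copied from the quoted log output, and the query is assumed to be run while connected to the affected database (the one with OID 16384):

    -- Map the trailing number of each 1663/16384/NNNNN warning back to a
    -- table or index name; relkind is 'r' for a table, 'i' for an index,
    -- and relpages gives an approximate size in 8 kB pages.
    SELECT oid, relname, relkind, relpages
    FROM pg_class
    WHERE relfilenode IN (33190, 33204, 76710, 76716, 76718, 76719);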
[ { "msg_contents": "I have a query that runs about 30-50 seconds.  The query is a join between 2 tables (customer and address), each table with about 400,000 rows.  My customer table has fields like first_name and last_name where the address table has city, state, etc.  I'm using \"like\" in most of the query columns, which all have indexes.  The actual query is:SELECT p.party_id, p.first_name, p.last_name, pli.address1, pli.city, pli.state FROM customer as p JOIN address as pli ON ( p.party_id = pli.party_id ) WHERE ( p.void_flag IS NULL OR p.void_flag = false )  AND  (first_name like 'B%') AND (last_name like 'S%') AND (pli.state like 'M%') AND (pli.city like 'AL%') ORDER BY last_name, first_name LIMIT 51\nWhen the query runs, the hard drive lights up for the duration.  (I'm confused by this as 'top' reports only 24k of swap in use).  My SUSE 9 test machine has 512 Meg of RAM with 300 Meg used by a Java app.  Postmaster reports 56 Meg under \"top\" and has a 52 Meg segment under \"ipcs\".  I've played with the cache size, shared buffers, and OS shmmax with little change in the query performance.\nQ: Would this query benefit from using a view between these two tables?\nQ: Any idea why the reported swap usage is so low, yet the query slams the drive?  Is postgres not caching this data?  If I run the query with the same arguments, it comes right back the second time.  If I change the args and re-run, it goes back to the hard drive and takes 30-50 seconds.  \nSuggestions very welcome,\nTom\n  Who's that on the Red Carpet? Play & win glamorous prizes. \n", "msg_date": "Fri, 22 Jun 2007 18:32:15 +0000", "msg_from": "\"Tom Tamulewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow join query" }, { "msg_contents": "\nOn Jun 22, 2007, at 13:32 , Tom Tamulewicz wrote:\n> ( p.void_flag IS NULL OR p.void_flag = false )\nJust a note: you can rewrite (a IS NULL or a = false) as (a IS NOT \nTRUE). Shouldn't affect performance, but might make your query easier \nto read.\n\nWhat's the EXPLAIN ANALYZE output for this query?\n> When the query runs, the hard drive lights up for the duration. \n> (I'm confused by this as 'top' reports only 24k of swap in use). \n> My SUSE 9 test machine has 512 Meg of RAM with 300 Meg used by a \n> Java app. Postmaster reports 56 Meg under \"top\" and has a 52 Meg \n> segment under \"ipcs\". I've played with the cache size, shared \n> buffers, and OS shmmax with little change in the query performance.\n>\n> Q: Would this query benefit from using a view between these two \n> tables?\nI doubt it, as views are just pre-parsed queries: no data is \nmaterialized for the view.\n> Q: Any idea why the reported swap usage is so low, yet the query \n> slams the drive? Is postgres not caching this data? If I run the \n> query with the same arguments, it comes right back the second \n> time. If I change the args and re-run, it goes back to the hard \n> drive and takes 30-50 seconds.\nHow much is cached depends on shared_buffers, I believe. If the \nresult is still cached, that'd explain why running the query with the \nsame arguments returns so quickly. You might see some improvement \nusing a prepared query, as the server shouldn't have to reparse and \nreplan the query. Of course, if you change the arguments, it can't \nuse the result that's cached from the previous run.\n\nTake this all with an appropriate amount of salt. 
I'm learning about \nthis, too.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Fri, 22 Jun 2007 14:51:32 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join query" }, { "msg_contents": "The explain is as follows...\n                                                                                                           QUERY PLAN                                        \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.00..96.48 rows=1 width=2450)   ->  Nested Loop  (cost=0.00..96.48 rows=1 width=2450)         ->  Index Scan using idx_last_name on customer p  (cost=0.00..50.22 rows=1 width=1209)               Index Cond: (((last_name)::text >= 'S'::character varying) AND ((last_name)::text < 'T'::character varying) AND ((first_name)::text >= 'B'::character varying) AND ((first_name)::text < 'C'::character \nvarying))               Filter: (((void_flag IS NULL) OR (void_flag = false)) AND ((first_name)::text ~~ 'B%'::text) AND ((last_name)::text ~~ 'S%'::text))         ->  Index Scan using address_pkey on address pli  (cost=0.00..46.23 rows=1 width=1257)               Index Cond: ((\"outer\".party_id = pli.party_id))               Filter: (((state)::text ~~ 'M%'::text) AND ((city)::text ~~ 'AL%'::text))\n \n\n\nFrom: Michael Glaesemann <[email protected]>To: Tom Tamulewicz <[email protected]>CC: [email protected]: Re: [PERFORM] Slow join queryDate: Fri, 22 Jun 2007 14:51:32 -0500>>On Jun 22, 2007, at 13:32 , Tom Tamulewicz wrote:>>( p.void_flag IS NULL OR p.void_flag = false )>Just a note: you can rewrite (a IS NULL or a = false) as (a IS NOT >TRUE). Shouldn't affect performance, but might make your query >easier to read.>>What's the EXPLAIN ANALYZE output for this query?>>When the query runs, the hard drive lights up for the duration. >>(I'm confused by this as 'top' reports only 24k of swap in use). >>My SUSE 9 test machine has 512 Meg of RAM with 300 Meg used by a >>Java app. Postmaster reports 56 Meg under \n\"top\" and has a 52 Meg >>segment under \"ipcs\". I've played with the cache size, shared >>buffers, and OS shmmax with little change in the query performance.>>>>Q: Would this query benefit from using a view between these two >>tables?>I doubt it, as views are just pre-parsed queries: no data is >materialized for the view.>>Q: Any idea why the reported swap usage is so low, yet the query >>slams the drive? Is postgres not caching this data? If I run the >>query with the same arguments, it comes right back the second >>time. If I change the args and re-run, it goes back to the hard >>drive and takes 30-50 seconds.>How much is cached depends on shared_buffers, I believe. If the >result is still cached, that'd explain why running the query with >the same \narguments returns so quickly. You might see some >improvement using a prepared query, as the server shouldn't have to >reparse and replan the query. Of course, if you change the >arguments, it can't use the result that's cached from the previous >run.>>Take this all with an appropriate amount of salt. 
I'm learning about > this, too.>>Michael Glaesemann>grzm seespotcode net>>>>---------------------------(end of >broadcast)--------------------------->TIP 9: In versions below 8.0, the planner will ignore your desire to> choose an index scan if your joining column's datatypes do >not> match Picture this � share your photos and you could win big! \n", "msg_date": "Fri, 22 Jun 2007 21:25:37 +0000", "msg_from": "\"Tom Tamulewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow join query" }, { "msg_contents": "[Please don't top post as it makes the discussion more difficult to \nfollow.]\n\nOn Jun 22, 2007, at 16:25 , Tom Tamulewicz wrote:\n> The explain is as follows...\nEXPLAIN ANALYZE, please. (And for convenience, it helps if you \ninclude the query :) )\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Fri, 22 Jun 2007 16:30:56 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join query" }, { "msg_contents": " \n\n\n\n\n\nFrom: Michael Glaesemann <[email protected]>To: Tom Tamulewicz <[email protected]>CC: [email protected]: Re: [PERFORM] Slow join queryDate: Fri, 22 Jun 2007 14:51:32 -0500>>On Jun 22, 2007, at 13:32 , Tom Tamulewicz wrote:>>( p.void_flag IS NULL OR p.void_flag = false )>Just a note: you can rewrite (a IS NULL or a = false) as (a IS NOT >TRUE). Shouldn't affect performance, but might make your query >easier to read.>>What's the EXPLAIN ANALYZE output for this query?>>When the query runs, the hard drive lights up for the duration. >>(I'm confused by this as 'top' reports only 24k of swap in use). >>My SUSE 9 test machine has 512 Meg of RAM with 300 Meg used by a >>Java app. Postmaster reports 56 Meg \nunder \"top\" and has a 52 Meg >>segment under \"ipcs\". I've played with the cache size, shared >>buffers, and OS shmmax with little change in the query performance.>>>>Q: Would this query benefit from using a view between these two >>tables?>I doubt it, as views are just pre-parsed queries: no data is >materialized for the view.>>Q: Any idea why the reported swap usage is so low, yet the query >>slams the drive? Is postgres not caching this data? If I run the >>query with the same arguments, it comes right back the second >>time. If I change the args and re-run, it goes back to the hard >>drive and takes 30-50 seconds.>How much is cached depends on shared_buffers, I believe. If the >result is still cached, that'd explain why running the query with >the same \narguments returns so quickly. You might see some >improvement using a prepared query, as the server shouldn't have to >reparse and replan the query. Of course, if you change the >arguments, it can't use the result that's cached from the previous >run.>>Take this all with an appropriate amount of salt. 
I'm learning about > this, too.>>Michael Glaesemann>grzm seespotcode net>>>>---------------------------(end of >broadcast)--------------------------->TIP 9: In versions below 8.0, the planner will ignore your desire to> choose an index scan if your joining column's datatypes do >not> match\n \nSELECT p.party_id, p.first_name, p.last_name, pli.address1, pli.city, pli.state FROM customer as p JOIN address as pli ON ( p.party_id = pli.party_id ) WHERE ( p.void_flag IS NULL OR p.void_flag = false )  AND  (first_name like 'B%') AND (last_name like 'S%') AND (pli.state like 'M%') AND (pli.city like 'AL%') ORDER BY last_name, first_name LIMIT 51  \n                                                                                                     QUERY PLAN                                        \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.00..96.48 rows=1 width=2450) (actual time=13459.814..13459.814 rows=0 loops=1)   ->  Nested Loop  (cost=0.00..96.48 rows=1 width=2450) (actual time=13459.804..13459.804 rows=0 loops=1)         ->  Index Scan using idx_last_name on customer p  (cost=0.00..50.22 rows=1 width=1209) (actual time=57.812..13048.524 rows=2474 loops=1)               Index Cond: (((last_name)::text >= 'S'::character varying) AND ((last_name)::text < 'T'::character varying) AND ((first_name)::text \n>= 'B'::character varying) AND ((first_name)::text < 'C'::character varying))               Filter: (((void_flag IS NULL) OR (void_flag = false)) AND ((first_name)::text ~~ 'B%'::text) AND ((last_name)::text ~~ 'S%'::text))         ->  Index Scan using address_pkey on address pli  (cost=0.00..46.23 rows=1 width=1257) (actual time=0.149..0.149 rows=0 loops=2474)               Index Cond: ((\"outer\".party_id = pli.party_id))               Filter: (((state)::text ~~ 'M%'::text) AND ((city)::text ~~ 'AL%'::text)) Total runtime: 13460.292 ms\n\n\nPicture this � share your photos and you could win big! 
Get a preview of Live Earth, the hottest event this summer - only on MSN \n", "msg_date": "Fri, 22 Jun 2007 21:53:17 +0000", "msg_from": "\"Tom Tamulewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow join query" }, { "msg_contents": "Tom Tamulewicz wrote:\n>\n> \n>\n> ------------------------------------------------------------------------\n>\n> SELECT p.party_id, p.first_name, p.last_name, pli.address1,\n> pli.city, pli.state FROM customer as p JOIN address as pli ON (\n> p.party_id = pli.party_id ) WHERE ( p.void_flag IS NULL OR\n> p.void_flag = false ) AND (first_name like 'B%') AND (last_name\n> like 'S%') AND (pli.state like 'M%') AND (pli.city like 'AL%')\n> ORDER BY last_name, first_name LIMIT 51 \n>\n> \n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..96.48 rows=1 width=2450) (actual\n> time=13459.814..13459.814 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..96.48 rows=1 width=2450) (actual\n> time=13459.804..13459.804 rows=0 loops=1)\n> -> Index Scan using idx_last_name on customer p \n> (cost=0.00..50.22 rows=1 width=1209) (actual\n> time=57.812..13048.524 rows=2474 loops=1)\n> Index Cond: (((last_name)::text >= 'S'::character\n> varying) AND ((last_name)::text < 'T'::character varying) AND\n> ((first_name)::text >= 'B'::character varying) AND\n> ((first_name)::text < 'C'::character varying))\n> Filter: (((void_flag IS NULL) OR (void_flag =\n> false)) AND ((first_name)::text ~~ 'B%'::text) AND\n> ((last_name)::text ~~ 'S%'::text))\n> -> Index Scan using address_pkey on address pli \n> (cost=0.00..46.23 rows=1 width=1257) (actual time=0.149..0.149\n> rows=0 loops=2474)\n> Index Cond: ((\"outer\".party_id = pli.party_id))\n> Filter: (((state)::text ~~ 'M%'::text) AND\n> ((city)::text ~~ 'AL%'::text))\n> Total runtime: 13460.292 ms\n>\n\nThe problem here is this bit:\n\n-> Index Scan using idx_last_name on customer p (cost=0.00..50.22 \nrows=1 width=1209) (actual time=57.812..13048.524 rows=2474 loops=1)\n Index Cond: (((last_name)::text >= 'S'::character \nvarying) AND ((last_name)::text < 'T'::character varying) AND \n((first_name)::text >= 'B'::character varying) AND ((first_name)::text < \n'C'::character varying))\n Filter: (((void_flag IS NULL) OR (void_flag = false)) AND \n((first_name)::text ~~ 'B%'::text) AND ((last_name)::text ~~ 'S%'::text))\n\nNote that you're getting back 2474 rows, but the planner expects 1. Not \nthe actual time going from 57 to 13048, it's spending all it's time \nlooking up each tuple in the index, then in the table. Using a seq scan \nwould be much faster.\n\nHave you analyzed this table? If so, you might need to up the stats \ntarget on last_name and see if that helps.\n", "msg_date": "Fri, 22 Jun 2007 17:24:07 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join query" } ]
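A hedged sketch of the fix Scott Marlowe points at above: raise the per-column statistics target on the columns whose row estimate was off (1 expected vs. 2474 actual), re-analyze, and re-check the plan. The target of 100 is only an illustrative starting value:

    ALTER TABLE customer ALTER COLUMN last_name SET STATISTICS 100;
    ALTER TABLE customer ALTER COLUMN first_name SET STATISTICS 100;
    ANALYZE customer;
    -- then re-run the EXPLAIN ANALYZE above and check whether the estimate
    -- for the idx_last_name scan moves closer to the 2474 rows actually found

With a better estimate the planner is more likely to switch to the sequential scan Scott expects to be faster here.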
[ { "msg_contents": "\n\tSuppose a web application with persistent database connections.\n\tI have some queries which take longer to plan than to execute !\n\n\tI with there was a way to issue a PREPARE (like \"PERSISTENT PREPARE\").\n\tNow all Postgres connections would know that prepared statement foo( $1, \n$2, $3 ) corresponds to some SQL query, but it wouldn't plan it yet. Just \nlike a SQL function.\n\tWhen invoking EXECUTE foo( 1,2,3 ) on any given connection the statement \nwould get prepared and planned. Then on subsequent invocations I'd just \nget the previously prepared plan.\n\n\tIs this planned ?\n", "msg_date": "Sat, 23 Jun 2007 23:30:06 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "PREPARE and stuff" }, { "msg_contents": "PFC wrote:\n> \n> Suppose a web application with persistent database connections.\n> I have some queries which take longer to plan than to execute !\n> \n> I with there was a way to issue a PREPARE (like \"PERSISTENT PREPARE\").\n> Now all Postgres connections would know that prepared statement foo( \n> $1, $2, $3 ) corresponds to some SQL query, but it wouldn't plan it yet. \n> Just like a SQL function.\n> When invoking EXECUTE foo( 1,2,3 ) on any given connection the \n> statement would get prepared and planned. Then on subsequent invocations \n> I'd just get the previously prepared plan.\n\nHow would that be different from the current PREPARE/EXECUTE? Do you \nmean you could PREPARE in one connection, and EXECUTE in another? If \nyou're using persistent connections, it wouldn't be any faster than \ndoing a PREPARE once in each connection.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 23 Jun 2007 22:51:00 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PREPARE and stuff" }, { "msg_contents": "\"PFC\" <[email protected]> writes:\n\n> \tSuppose a web application with persistent database connections.\n> \tI have some queries which take longer to plan than to execute !\n\nThere have periodically been discussions about a shared plan cache but\ngenerally the feeling is that it would do more harm than good and there are no\nplans to implement anything like that.\n\nFor a web application though you would expect to be executing the same queries\nover and over again since you would be executing the same pages over and over\nagain. So just a regular prepared query ought to good for your needs.\n\nYou do not want to be reconnecting to the database for each page fetch.\nReplanning queries is the least of the problems with that approach.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Sat, 23 Jun 2007 23:02:25 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PREPARE and stuff" } ]
[ { "msg_contents": "Well, that's not completely trivial => the plan might depend upon the concrete value of $1,$2 and $3.\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\t[PERFORM] PREPARE and stuff\nVon:\tPFC <[email protected]>\nDatum:\t\t23.06.2007 21:31\n\n\n\tSuppose a web application with persistent database connections.\n\tI have some queries which take longer to plan than to execute !\n\n\tI with there was a way to issue a PREPARE (like \"PERSISTENT PREPARE\").\n\tNow all Postgres connections would know that prepared statement foo( $1, \n$2, $3 ) corresponds to some SQL query, but it wouldn't plan it yet. Just \nlike a SQL function.\n\tWhen invoking EXECUTE foo( 1,2,3 ) on any given connection the statement \nwould get prepared and planned. Then on subsequent invocations I'd just \nget the previously prepared plan.\n\n\tIs this planned ?\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Sat, 23 Jun 2007 23:55:49 +0200", "msg_from": "\"Andreas Kostyrka\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PREPARE and stuff" }, { "msg_contents": "\n> Well, that's not completely trivial => the plan might depend upon the \n> concrete value of $1,$2 and $3.\n\n\tWhen you use PREPARE, it doesn't. I could live with that.\n\tThe purpose of this would be to have a library of \"persistent prepared \nstatements\" (just like lightweight functions) for your application, and \nmaximize the performance of persistent connections.\n", "msg_date": "Sun, 24 Jun 2007 00:27:32 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PREPARE and stuff" } ]
[ { "msg_contents": "Hello, I am wondering if it is safe to assume that\nspecifying cio mount option is safe with PostgreSQL.\n\nAs far as I understand the CIO (AIX Concurrent I/O) means that\nfilesystem does not serialize access to file blocks. In other\nwords multiple threads can simultaneously read and write\nthe file block, which means that it is possible that reader\nreads stale data. Now, if database enforces its own serialization\n(and as far as I can tell, bufmgr does it exactly), this option\nshould be totally safe to use, probably giving boost since\nkernel has less overhead while multiple processes access\nthe database file.\n\nIs my thinking correct?\n\n Regards,\n Dawid\n", "msg_date": "Mon, 25 Jun 2007 12:01:44 +0200", "msg_from": "\"Dawid Kuroczko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is AIX Concurrent IO safe with PostgreSQL?" } ]
[ { "msg_contents": "We have a search facility in our database that uses full text indexing to\nsearch about 300,000 records spread across 2 tables. Nothing fancy there.\n\nThe problem is, whenever we restart the database (system crash, lost\nconnectivity to SAN, upgrade, configuration change, etc.) our data is not\ncached and query performance is really sketchy the first five to ten minutes\nor so after the restart. This is particularly problematic because the only\nway the data gets cached in memory is if somebody actively searches for it,\nand the first few people who visit our site after a restart are pretty much\nscrewed.\n\nI'd like to know what are the recommended strategies for dealing with this\nproblem. We need our search queries to be near instantaneous, and we just\ncan't afford the startup penalty.\n\nI'm also concerned that Postgres may not be pulling data off the SAN as\nefficiently as theory dictates. What's the best way I can diagnose if the\nSAN is performing up to spec? I've been using iostat, and some of what I'm\nseeing concerns me. Here's a typical iostat output (iostat -m -d 1):\n\nDevice: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 102.97 2.03 0.00 2 0\nsdc 0.00 0.00 0.00 0 0\nsdd 0.00 0.00 0.00 0 0\n\nsda is the os partitionn (local), sdb is the primary database partion (SAN),\nsdc is the log file partition (SAN), and sdd is used only for backups\n(SAN). I very rarely seen sdb MB_read/s much above 2, and most of the time\nit hovers around 1 or lower. This seems awfully goddamn slow to me, but\nmaybe I just don't fully understand what iostat is telling me. I've seen\nsdc writes get as high as 10 during a database restore.\n\nA few bits of information about our setup:\n\nDebian Linux 2.6.18-4-amd64 (stable)\n4x Intel(R) Xeon(R) CPU 5110 @ 1.60GHz (100% dedicated to database)\nRAID 1+0 iSCSI partitions over Gig/E MTU 9000 (99% dedicated to database)\n8GB RAM\nPostgres v8.1.9\n\nThe database is only about 4GB in size and the key tables total about 700MB.\nPrimary keys are CHAR(32) GUIDs\n\nThanks,\nBryan\n\nWe have a search facility in our database that uses full text indexing to search about 300,000 records spread across 2 tables.  Nothing fancy there.The problem is, whenever we restart the database (system crash, lost connectivity to SAN, upgrade, configuration change, etc.) our data is not cached and query performance is really sketchy the first five to ten minutes or so after the restart.  This is particularly problematic because the only way the data gets cached in memory is if somebody actively searches for it, and the first few people who visit our site after a restart are pretty much screwed.\nI'd like to know what are the recommended strategies for dealing with this problem.  We need our search queries to be near instantaneous, and we just can't afford the startup penalty. I'm also concerned that Postgres may not be pulling data off the SAN as efficiently as theory dictates.  What's the best way I can diagnose if the SAN is performing up to spec?  I've been using iostat, and some of what I'm seeing concerns me.  
Here's a typical iostat output (iostat -m -d 1):\nDevice:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtnsda               0.00         0.00         0.00          0          0sdb             102.97         2.03         0.00          2          0\nsdc               0.00         0.00         0.00          0          0sdd               0.00         0.00         0.00          0          0sda is the os partitionn (local), sdb is the primary database partion (SAN), sdc is the log file partition (SAN), and sdd is used only for backups (SAN).  I very rarely seen sdb MB_read/s much above 2, and most of the time it hovers around 1 or lower.  This seems awfully goddamn slow to me, but maybe I just don't fully understand what iostat is telling me.  I've seen sdc writes get as high as 10 during a database restore.\nA few bits of information about our setup:Debian Linux 2.6.18-4-amd64 (stable)4x Intel(R) Xeon(R) CPU 5110 @ 1.60GHz (100% dedicated to database)\nRAID 1+0 iSCSI partitions over Gig/E MTU 9000 (99% dedicated to database)8GB RAM\nPostgres v8.1.9\nThe database is only about 4GB in size and the key tables total about 700MB.Primary keys are CHAR(32) GUIDsThanks,Bryan", "msg_date": "Mon, 25 Jun 2007 15:18:43 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": true, "msg_subject": "startup caching suggestions" }, { "msg_contents": "On Mon, 25 Jun 2007, Bryan Murphy wrote:\n\n> We have a search facility in our database that uses full text indexing to\n> search about 300,000 records spread across 2 tables. Nothing fancy there.\n>\n> The problem is, whenever we restart the database (system crash, lost\n> connectivity to SAN, upgrade, configuration change, etc.) our data is not\n> cached and query performance is really sketchy the first five to ten minutes\n> or so after the restart. This is particularly problematic because the only\n> way the data gets cached in memory is if somebody actively searches for it,\n> and the first few people who visit our site after a restart are pretty much\n> screwed.\n>\n> I'd like to know what are the recommended strategies for dealing with this\n> problem. We need our search queries to be near instantaneous, and we just\n> can't afford the startup penalty.\n\nBryan, did you try 'dd if=/path/to/your/table of=/dev/null' trick ?\nIt will very fast read you data into kernel's buffers.\n\n>\n> I'm also concerned that Postgres may not be pulling data off the SAN as\n> efficiently as theory dictates. What's the best way I can diagnose if the\n> SAN is performing up to spec? I've been using iostat, and some of what I'm\n> seeing concerns me. Here's a typical iostat output (iostat -m -d 1):\n>\n> Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 102.97 2.03 0.00 2 0\n> sdc 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n>\n> sda is the os partitionn (local), sdb is the primary database partion (SAN),\n> sdc is the log file partition (SAN), and sdd is used only for backups\n> (SAN). I very rarely seen sdb MB_read/s much above 2, and most of the time\n> it hovers around 1 or lower. This seems awfully goddamn slow to me, but\n> maybe I just don't fully understand what iostat is telling me. 
I've seen\n> sdc writes get as high as 10 during a database restore.\n>\n> A few bits of information about our setup:\n>\n> Debian Linux 2.6.18-4-amd64 (stable)\n> 4x Intel(R) Xeon(R) CPU 5110 @ 1.60GHz (100% dedicated to database)\n> RAID 1+0 iSCSI partitions over Gig/E MTU 9000 (99% dedicated to database)\n> 8GB RAM\n> Postgres v8.1.9\n>\n> The database is only about 4GB in size and the key tables total about 700MB.\n> Primary keys are CHAR(32) GUIDs\n>\n> Thanks,\n> Bryan\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Tue, 26 Jun 2007 02:16:16 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: startup caching suggestions" }, { "msg_contents": "No, but I was just informed of that trick earlier and intend to try it\nsoon. Sometimes, the solution is so simple it's TOO obvious... :)\n\nBryan\n\nOn 6/25/07, Oleg Bartunov <[email protected]> wrote:\n>\n> On Mon, 25 Jun 2007, Bryan Murphy wrote:\n>\n> > We have a search facility in our database that uses full text indexing\n> to\n> > search about 300,000 records spread across 2 tables. Nothing fancy\n> there.\n> >\n> > The problem is, whenever we restart the database (system crash, lost\n> > connectivity to SAN, upgrade, configuration change, etc.) our data is\n> not\n> > cached and query performance is really sketchy the first five to ten\n> minutes\n> > or so after the restart. This is particularly problematic because the\n> only\n> > way the data gets cached in memory is if somebody actively searches for\n> it,\n> > and the first few people who visit our site after a restart are pretty\n> much\n> > screwed.\n> >\n> > I'd like to know what are the recommended strategies for dealing with\n> this\n> > problem. We need our search queries to be near instantaneous, and we\n> just\n> > can't afford the startup penalty.\n>\n> Bryan, did you try 'dd if=/path/to/your/table of=/dev/null' trick ?\n> It will very fast read you data into kernel's buffers.\n>\n> >\n> > I'm also concerned that Postgres may not be pulling data off the SAN as\n> > efficiently as theory dictates. What's the best way I can diagnose if\n> the\n> > SAN is performing up to spec? I've been using iostat, and some of what\n> I'm\n> > seeing concerns me. Here's a typical iostat output (iostat -m -d 1):\n> >\n> > Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\n> > sda 0.00 0.00 0.00 0 0\n> > sdb 102.97 2.03 0.00 2 0\n> > sdc 0.00 0.00 0.00 0 0\n> > sdd 0.00 0.00 0.00 0 0\n> >\n> > sda is the os partitionn (local), sdb is the primary database partion\n> (SAN),\n> > sdc is the log file partition (SAN), and sdd is used only for backups\n> > (SAN). I very rarely seen sdb MB_read/s much above 2, and most of the\n> time\n> > it hovers around 1 or lower. This seems awfully goddamn slow to me, but\n> > maybe I just don't fully understand what iostat is telling me. 
I've\n> seen\n> > sdc writes get as high as 10 during a database restore.\n> >\n> > A few bits of information about our setup:\n> >\n> > Debian Linux 2.6.18-4-amd64 (stable)\n> > 4x Intel(R) Xeon(R) CPU 5110 @ 1.60GHz (100% dedicated to database)\n> > RAID 1+0 iSCSI partitions over Gig/E MTU 9000 (99% dedicated to\n> database)\n> > 8GB RAM\n> > Postgres v8.1.9\n> >\n> > The database is only about 4GB in size and the key tables total about\n> 700MB.\n> > Primary keys are CHAR(32) GUIDs\n> >\n> > Thanks,\n> > Bryan\n> >\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n\nNo, but I was just informed of that trick earlier and intend to try it soon.  Sometimes, the solution is so simple it's TOO obvious... :)BryanOn 6/25/07, \nOleg Bartunov <[email protected]> wrote:\nOn Mon, 25 Jun 2007, Bryan Murphy wrote:> We have a search facility in our database that uses full text indexing to> search about 300,000 records spread across 2 tables.  Nothing fancy there.>\n> The problem is, whenever we restart the database (system crash, lost> connectivity to SAN, upgrade, configuration change, etc.) our data is not> cached and query performance is really sketchy the first five to ten minutes\n> or so after the restart.  This is particularly problematic because the only> way the data gets cached in memory is if somebody actively searches for it,> and the first few people who visit our site after a restart are pretty much\n> screwed.>> I'd like to know what are the recommended strategies for dealing with this> problem.  We need our search queries to be near instantaneous, and we just> can't afford the startup penalty.\nBryan, did you try 'dd if=/path/to/your/table of=/dev/null' trick ?It will very fast read you data into kernel's buffers.>> I'm also concerned that Postgres may not be pulling data off the SAN as\n> efficiently as theory dictates.  What's the best way I can diagnose if the> SAN is performing up to spec?  I've been using iostat, and some of what I'm> seeing concerns me.  Here's a typical iostat output (iostat -m -d 1):\n>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn> sda               0.00         0.00         0.00          0          0> sdb             102.97         2.03         0.00\n          2          0> sdc               0.00         0.00         0.00          0          0> sdd               0.00         0.00         0.00          0          0>> sda is the os partitionn (local), sdb is the primary database partion (SAN),\n> sdc is the log file partition (SAN), and sdd is used only for backups> (SAN).  I very rarely seen sdb MB_read/s much above 2, and most of the time> it hovers around 1 or lower.  This seems awfully goddamn slow to me, but\n> maybe I just don't fully understand what iostat is telling me.  
I've seen> sdc writes get as high as 10 during a database restore.>> A few bits of information about our setup:>\n> Debian Linux 2.6.18-4-amd64 (stable)> 4x Intel(R) Xeon(R) CPU 5110 @ 1.60GHz (100% dedicated to database)> RAID 1+0 iSCSI partitions over Gig/E MTU 9000 (99% dedicated to database)> 8GB RAM\n> Postgres v8.1.9>> The database is only about 4GB in size and the key tables total about 700MB.> Primary keys are CHAR(32) GUIDs>> Thanks,> Bryan>        Regards,\n                Oleg_____________________________________________________________Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),Sternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/phone: +007(495)939-16-83, +007(495)939-23-83", "msg_date": "Mon, 25 Jun 2007 17:20:17 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: startup caching suggestions" } ]
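For completeness, a hedged in-database alternative to the dd trick Oleg mentions: run throwaway sequential scans over the hot tables right after a restart so their heap blocks get pulled into the OS cache before the first visitor searches. The table names are placeholders for the two tables behind the search facility:

    -- warm-up scans issued from a startup script right after the restart
    SELECT count(*) FROM search_table_1;   -- hypothetical name
    SELECT count(*) FROM search_table_2;   -- hypothetical name

Note that this only touches the table heaps, not the full text indexes, so reading the relation files directly with dd remains the more thorough option.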
[ { "msg_contents": "Hey All,\n\nI am testing upgrading our database from version 8.1 to 8.2. I ran our\nworst performing query on this table, an outer join with an \"is null\"\ncondition, and I was happy to see it ran over four times faster. I also\nnoticed the explain analyze showed the planner chose to do sequential\nscans on both tables. I realized I had forgotten to increase\ndefault_statistics_target from the default 10, so I increased it to 100,\nand ran \"analyze\". In 8.1 this sped things up significantly, but in 8.2\nwhen I ran the query again it was actually slower. These tests were\ndone with 8.2.3.1 so I also loaded 8.2.4.1 for comparison. Here is the\nexplain analyze with default_statistics_target set to 10 on 8.2.3.1:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Hash Left Join (cost=7259801.24..12768737.71 rows=1 width=8) (actual\ntime=651003.121..1312717.249 rows=10411 loops=1)\n Hash Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Seq Scan on backupobjects (cost=0.00..466835.63 rows=13716963\nwidth=8) (actual time=0.030..95981.895 rows=13706121 loops=1)\n -> Hash (cost=3520915.44..3520915.44 rows=215090944 width=8)\n(actual time=527345.024..527345.024 rows=215090786 loops=1)\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=0.048..333944.886 rows=215090786\nloops=1)\n Total runtime: 1312727.200 ms\n\nAnd again with default_statistics_target set to 100 on 8.2.3.1:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Merge Left Join (cost=38173940.88..41468823.19 rows=1 width=8) (actual\ntime=3256548.988..4299922.345 rows=10411 loops=1)\n Merge Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Sort (cost=2258416.72..2292675.79 rows=13703629 width=8) (actual\ntime=74450.897..85651.707 rows=13706121 loops=1)\n Sort Key: backupobjects.record_id\n -> Seq Scan on backupobjects (cost=0.00..466702.29\nrows=13703629 width=8) (actual time=0.024..40939.762 rows=13706121\nloops=1)\n -> Sort (cost=35915524.17..36453251.53 rows=215090944 width=8)\n(actual time=3182075.661..4094748.788 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=17.905..790499.303 rows=215090786\nloops=1)\n Total runtime: 4302591.325 ms\n\nWith 8.2.4.1 I get the same plan and performance with\ndefault_statistics_target set to either 10 or 100:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Merge Left Join (cost=37615266.46..40910145.54 rows=1 width=8) (actual\ntime=2765729.582..3768519.658 rows=10411 
loops=1)\n Merge Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Sort (cost=2224866.79..2259124.25 rows=13702985 width=8) (actual\ntime=101118.216..113245.942 rows=13706121 loops=1)\n Sort Key: backupobjects.record_id\n -> Seq Scan on backupobjects (cost=0.00..466695.85\nrows=13702985 width=8) (actual time=10.003..67604.564 rows=13706121\nloops=1)\n -> Sort (cost=35390399.67..35928127.03 rows=215090944 width=8)\n(actual time=2664596.049..3540048.500 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=7.110..246561.900 rows=215090786\nloops=1)\n Total runtime: 3770428.750 ms\n\nAnd for reference here is the same query with 8.1.5.6:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------\n Merge Left Join (cost=37490897.67..41269533.13 rows=13705356 width=8)\n(actual time=5096492.430..6588745.386 rows=10411 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".record_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..518007.92 rows=13705356 width=8) (actual\ntime=32.020..404517.133 rows=13706121 loops=1)\n -> Sort (cost=37490897.67..38028625.03 rows=215090944 width=8)\n(actual time=5096460.396..6058937.259 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=0.020..389038.442 rows=215090786\nloops=1)\n Total runtime: 6599215.268 ms\n(8 rows)\n\nBased on all this we will be going with 8.2.4.1, but it seems like\ncurrently the query planner isn't choosing the best plan for this case.\n\nThanks,\nEd\n", "msg_date": "Mon, 25 Jun 2007 14:28:32 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "* Tyrrill, Ed ([email protected]) wrote:\n> Based on all this we will be going with 8.2.4.1, but it seems like\n> currently the query planner isn't choosing the best plan for this case.\n\nWas the 'work_mem' set to the same thing on all these runs? Also, you\nmight try increasing the 'work_mem' under 8.2.4, at least for this query\n(you can set it by just doing: set work_mem = '2GB'; or similar in psql,\nor you can change the default in postgresql.conf).\n\nThe big thing of note, it seems, is that you've got enough memory and\nit's coming out faster when doing a hash-join vs. a sort + merge-join.\nCould likely be because it doesn't think there's enough work memory\navailable for the hash, which might change based on the values it gets\nfrom the statistics on how frequently something shows up, etc.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Mon, 25 Jun 2007 17:56:54 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\"Tyrrill, Ed\" <[email protected]> writes:\n> ... With 8.2.4.1 I get the same plan and performance with\n> default_statistics_target set to either 10 or 100:\n\nThere's something fishy about that, because AFAICS from the CVS logs,\nthere are no relevant planner changes between 8.2.3 and 8.2.4. 
You\nshould have gotten exactly the same behavior with both. Maybe the\nversion difference you think you see is due to noise in ANALYZE's\nrandom sampling --- are the plan choices stable if you repeat ANALYZE\nseveral times at the same statistics target?\n\nI'm also noticing some rather large variation in what ought to be\nessentially the same seqscan cost:\n\n> -> Seq Scan on backup_location (cost=0.00..3520915.44\n> rows=215090944 width=8) (actual time=0.048..333944.886 rows=215090786\n> loops=1)\n\n> -> Seq Scan on backup_location (cost=0.00..3520915.44\n> rows=215090944 width=8) (actual time=17.905..790499.303 rows=215090786\n> loops=1)\n\n> -> Seq Scan on backup_location (cost=0.00..3520915.44\n> rows=215090944 width=8) (actual time=7.110..246561.900 rows=215090786\n> loops=1)\n\nGot any idea what's up with that --- heavy background activity maybe,\nor partially cached table data? It's pretty tough to blame the plan for\na 3x variation in the cost of reading data.\n\nAlso, what do you have work_mem set to? Have you changed any of the\nplanner cost parameters from their defaults?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 18:10:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Mon, 2007-06-25 at 17:56 -0400, Stephen Frost wrote:\n> Was the 'work_mem' set to the same thing on all these runs? Also, you\n> might try increasing the 'work_mem' under 8.2.4, at least for this query\n> (you can set it by just doing: set work_mem = '2GB'; or similar in psql,\n> or you can change the default in postgresql.conf).\n> \n> The big thing of note, it seems, is that you've got enough memory and\n> it's coming out faster when doing a hash-join vs. a sort + merge-join.\n> Could likely be because it doesn't think there's enough work memory\n> available for the hash, which might change based on the values it gets\n> from the statistics on how frequently something shows up, etc.\n> \n> \tEnjoy,\n> \n> \t\tStephen\n\nYes, work_mem was set to 128MB for all runs. All settings were the same\nexcept for the change to default_statistics_target. I'm certainly\nmemory constrained, but giving 2GB to one one session doesn't allow\nother sessions to do anything. Possibly when we upgrade to 16GB. :-)\n\n\n\n", "msg_date": "Mon, 25 Jun 2007 16:39:02 -0700", "msg_from": "Ed Tyrrill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "* Ed Tyrrill ([email protected]) wrote:\n> Yes, work_mem was set to 128MB for all runs. All settings were the same\n> except for the change to default_statistics_target. I'm certainly\n> memory constrained, but giving 2GB to one one session doesn't allow\n> other sessions to do anything. Possibly when we upgrade to 16GB. :-)\n\nYou might consider a smaller increase, say to 256MB, to see if that'll\nswitch it to a hash join (and then watch the *actual* memory usage, of\ncourse), if you're looking for performance for this query at least.\n\nYeah, 2GB is what I typically run on our data warehouse box, which is a\nnice dual-proc/dual-core DL385 w/ 16GB of ram. :) The annoying thing is\nthat I can still run it out of memory sometimes, even w/ 16GB. :/\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 25 Jun 2007 19:52:09 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Mon, 2007-06-25 at 18:10 -0400, Tom Lane wrote:\n> \"Tyrrill, Ed\" <[email protected]> writes:\n> > ... 
With 8.2.4.1 I get the same plan and performance with\n> > default_statistics_target set to either 10 or 100:\n> \n> There's something fishy about that, because AFAICS from the CVS logs,\n> there are no relevant planner changes between 8.2.3 and 8.2.4. You\n> should have gotten exactly the same behavior with both. Maybe the\n> version difference you think you see is due to noise in ANALYZE's\n> random sampling --- are the plan choices stable if you repeat ANALYZE\n> several times at the same statistics target?\n> \n> I'm also noticing some rather large variation in what ought to be\n> essentially the same seqscan cost:\n> \n> > -> Seq Scan on backup_location (cost=0.00..3520915.44\n> > rows=215090944 width=8) (actual time=0.048..333944.886 rows=215090786\n> > loops=1)\n> \n> > -> Seq Scan on backup_location (cost=0.00..3520915.44\n> > rows=215090944 width=8) (actual time=17.905..790499.303 rows=215090786\n> > loops=1)\n> \n> > -> Seq Scan on backup_location (cost=0.00..3520915.44\n> > rows=215090944 width=8) (actual time=7.110..246561.900 rows=215090786\n> > loops=1)\n> \n> Got any idea what's up with that --- heavy background activity maybe,\n> or partially cached table data? It's pretty tough to blame the plan for\n> a 3x variation in the cost of reading data.\n> \n> Also, what do you have work_mem set to? Have you changed any of the\n> planner cost parameters from their defaults?\n> \n> \t\t\tregards, tom lane\n\nI would expect the seqscan actual time to go down from the first explain\nto the second because at least some of the data should be in the file\ncache. But the time goes up for the second run. There are no other\napplications running on this machine besides linux services, though it's\npossible that one or more of them was doing something, but none of those\nshould have this major of an impact.\n\nAfter loading the data dump from 8.1 I ran analyze once, ran the first\nquery, changed default_statistics_target to 100 in postgresql.conf, and\nrestarted postmaster, analyzed again, and ran the second query. I then\ndid the same with 8.2.4.1, and the third explain analyze shows the run\nwith default_statistics_target set to 100. The run with\ndefault_statistics_target set to 10 with 8.2.4.1 was very similar to\nwhen set to 100 so I didn't include it.\n\nwork_mem was set to 128MB for all runs. I also have random_page_cost =\n2.\n\nIt seems to me that the first plan is the optimal one for this case, but\nwhen the planner has more information about the table it chooses not to\nuse it. Do you think that if work_mem were higher it might choose the\nfirst plan again?\n\nThanks,\nEd\n", "msg_date": "Mon, 25 Jun 2007 17:09:58 -0700", "msg_from": "Ed Tyrrill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "* Ed Tyrrill ([email protected]) wrote:\n> It seems to me that the first plan is the optimal one for this case, but\n> when the planner has more information about the table it chooses not to\n> use it. Do you think that if work_mem were higher it might choose the\n> first plan again?\n\nSeems likely to me. You understand that you can set the work_mem\nwhenever you want, right? 
It's a GUC, so you could issue a 'set\nwork_mem = blah' in the application code right before and right after\n(assuming you're going to continue using the session) this particular\nquery, or just do it in a seperate session using 'explain' to play\naround with what the planner does given different arguments.\n\n'explain's pretty cheap/easy, and you can play around with various\nsettings to see what PG will do in various cases. Of course, you won't\nknow the runtimes without doing 'explain analyze', but I think you have\na good idea of the best plan for this query already...\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 25 Jun 2007 20:33:39 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "Ed Tyrrill <[email protected]> writes:\n> It seems to me that the first plan is the optimal one for this case, but\n> when the planner has more information about the table it chooses not to\n> use it. Do you think that if work_mem were higher it might choose the\n> first plan again?\n\nIt's worth fooling around with work_mem just to see what happens. The\nother thing that would be interesting is to force the other plan (set\nenable_mergejoin = off) just to see what the planner is costing it at.\nMy suspicion is that the estimated costs are pretty close.\n\nThe ANALYZE stats affect this choice only in second-order ways AFAIR.\nThe planner penalizes hashes if it thinks there will be a lot of\nduplicate values in the inner relation, but IIRC there is also a penalty\nfor inner duplicates in the mergejoin cost estimate. So I'm a bit\nsurprised that there'd be a change.\n\nCan you show us the pg_stats rows for the join columns after analyzing\nat target 10 and target 100?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Jun 2007 21:07:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Mon, 2007-06-25 at 21:07 -0400, Tom Lane wrote:\n> It's worth fooling around with work_mem just to see what happens. The\n> other thing that would be interesting is to force the other plan (set\n> enable_mergejoin = off) just to see what the planner is costing it at.\n> My suspicion is that the estimated costs are pretty close.\n> \n> The ANALYZE stats affect this choice only in second-order ways AFAIR.\n> The planner penalizes hashes if it thinks there will be a lot of\n> duplicate values in the inner relation, but IIRC there is also a penalty\n> for inner duplicates in the mergejoin cost estimate. So I'm a bit\n> surprised that there'd be a change.\n> \n> Can you show us the pg_stats rows for the join columns after analyzing\n> at target 10 and target 100?\n> \n> \t\t\tregards, tom lane\n\nI wasn't able to work on this for a couple days, but now I am back on it\nagain. I increased work_mem to 1GB, and decreased\ndefault_statistics_target to 10. postmaster takes 74.8% of RAM (out of\n4GB) with shared_memory = 1GB as well. I have not been able to get the\ndatabase to use the plan that was really fast the first time. 
So\nperhaps the random sample factor is what caused it to choose the faster\nplan the first time.\n\nTom, as you requested here are the pg_stats rows with\ndefault_statistics_target = 10:\n\nmdsdb=# select * from pg_stats where attname = 'record_id';\n schemaname | tablename | attname | null_frac | avg_width |\nn_distinct | most_common_vals | most_common_freqs |\nhistogram_bounds | correlation\n------------+-----------------+-----------+-----------+-----------\n+-------------+------------------+-------------------\n+----------------------------------------------------------------------------------------------+-------------\n public | backup_location | record_id | 0 | 8 |\n4.40637e+06 | {6053595} | {0.000666667} |\n{25859,1940711,2973201,4592467,5975199,8836423,10021178,10261007,11058355,12087662,14349748} | 0.165715\n public | backupobjects | record_id | 0 | 8 |\n-1 | | |\n{10565,1440580,2736075,4140418,5600863,7412501,8824407,10136590,11560512,13069900,14456128} | 0.902336\n\nand default_statistics_target = 100:\n\nmdsdb=# select * from pg_stats where attname = 'record_id';\n schemaname | tablename | attname | null_frac | avg_width |\nn_distinct |\nmost_common_vals |\nmost_common_freqs |\nhistogram_bounds\n| correlation\n------------+-----------------+-----------+-----------+-----------\n+-------------\n+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------!\n ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------!\n --------------------------------------------------------------!\n 
--------\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | backup_location | record_id | 0 | 8 |\n5.82337e+06 |\n{235096,295262,1553025,1612535,1635617,1803461,2000641,2036309,2507381,2904177,2921981,3089088,3146908,3224744,3253356,3580055,3647668,4660094,4661032,4752775,4801371,5116051,5173423,9891458,9895966,9897668,9905497,9907478,9908664,9913842,9916856,9929495,9946579,9957084,9962904,9963807,9971068,9980253,9985117,9985892,10007476,10010352,10010808,10025192,10075013,10103597,10115103,10116781,10120165,10137641,10141427,10144210,10148637,10369082,10395553,10418593,10435057,10441855,10497439,10499683,10509766,10515351,10521300,10522302,10525281,10538714,10542612,10544981,10546440,10678033,10995462,11101727,11132055,12664343,12967575} | {6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.6!\n 6667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05} | {11545,295289,430988,565826,912519,1179853,1442886,1590646,1649901,1709198,1773893,1831398,1966887,2026312,2087874,2151518,2316639,2474245,2571004,2769480,2863837,2952117,3100933,3182580,3259831,3338022,3412802,3517509,3671705,3758894,4106521,4549800,4620521,4699748,4772724,4851063,4927467,5028209,5105421,5183582,5364296,5454952,5965286,6081539,6528031,6798065,7192136,7518897,7854942,8169821,8527085,8867514,9318637,9812968,9896732,9!\n 915321,9933027,9950345,9969581,9987324,10004114,10022269,10040!\n 935,1005\n9618,10077611,10096111,10114682,10132165,10151207,10168791,10232857,10299111,10370156,10441842,10497303,10514993,10531984,10678040,10953841,11030018,11088408,11153327,11214573,11443648,11507997,11566711,11615011,11683984,11909042,12014715,12106151,12194283,12284176,12373145,12456035,12545752,12628686,12723672,13022336,13621556,14449465} | 0.210513\n public | backupobjects | record_id | 0 | 8 |\n-1 | 
|\n|\n{621,167329,364075,495055,629237,768429,906683,1036819,1168225,1304782,1446441,1635583,1776623,1919568,2058804,2213573,2384816,2516367,2654165,2777015,2913726,3045319,3179436,3326044,3449751,3584737,3705100,3849567,3983587,4119532,4255086,4400700,4522294,4676257,4803235,4930094,5065599,5212568,5341881,5476010,5610455,5750156,5876952,6009086,6341074,6663749,6792397,6913638,7035450,7166345,7309759,7449436,7579067,7717768,7852692,7992611,8107334,8232850,8376448,8510463,8654839,8785467,8930354,9065437,9219398,9347145,9497479,9694222,9829935,9962878,10107465,10246453,10406586,10548493,10690983,10827832,10978600,11111459,11257696,11462706,11593369,11738262,11918473,12065317,12208496,12340088,12483168,12631769,12754208,12907042,13037605,13176218,13312853,13440791,13600318,13749132,13884632,14018915,14174415,14328234,14458641} | 0.911416\n\n\n", "msg_date": "Fri, 29 Jun 2007 14:01:03 -0700", "msg_from": "Ed Tyrrill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
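The experiments suggested in the exchange above can be tried straight from a psql session; nothing below requires editing postgresql.conf. This is only a sketch: the work_mem value is a placeholder, and it assumes the backupobjects/backup_location tables and record_id join column that appear in the statistics output.

-- Session-local planner experiments (8.2 accepts memory units here):
SET work_mem = '512MB';              -- placeholder value, try several
EXPLAIN ANALYZE
SELECT backupobjects.record_id
FROM backupobjects
LEFT OUTER JOIN backup_location USING (record_id)
WHERE backup_location.record_id IS NULL;

SET enable_mergejoin = off;          -- see what the hash plan is costed at
-- (re-run the same EXPLAIN ANALYZE here and compare estimated costs)
RESET enable_mergejoin;
RESET work_mem;

-- The statistics target can also be raised for just the join column
-- rather than globally, followed by a fresh ANALYZE of that column:
ALTER TABLE backup_location ALTER COLUMN record_id SET STATISTICS 100;
ANALYZE backup_location (record_id);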
[ { "msg_contents": "Sorry to repost this, but I forgot the subject the first time around.\n\nHey All,\n\nI am testing upgrading our database from version 8.1 to 8.2. I ran our\nworst performing query on this table, an outer join with an \"is null\"\ncondition, and I was happy to see it ran over four times faster. I also\nnoticed the explain analyze showed the planner chose to do sequential\nscans on both tables. I realized I had forgotten to increase\ndefault_statistics_target from the default 10, so I increased it to 100,\nand ran \"analyze\". In 8.1 this sped things up significantly, but in 8.2\nwhen I ran the query again it was actually slower. These tests were\ndone with 8.2.3.1 so I also loaded 8.2.4.1 for comparison. Here is the\nexplain analyze with default_statistics_target set to 10 on 8.2.3.1:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Hash Left Join (cost=7259801.24..12768737.71 rows=1 width=8) (actual\ntime=651003.121..1312717.249 rows=10411 loops=1)\n Hash Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Seq Scan on backupobjects (cost=0.00..466835.63 rows=13716963\nwidth=8) (actual time=0.030..95981.895 rows=13706121 loops=1)\n -> Hash (cost=3520915.44..3520915.44 rows=215090944 width=8)\n(actual time=527345.024..527345.024 rows=215090786 loops=1)\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=0.048..333944.886 rows=215090786\nloops=1)\n Total runtime: 1312727.200 ms\n\nAnd again with default_statistics_target set to 100 on 8.2.3.1:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Merge Left Join (cost=38173940.88..41468823.19 rows=1 width=8) (actual\ntime=3256548.988..4299922.345 rows=10411 loops=1)\n Merge Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Sort (cost=2258416.72..2292675.79 rows=13703629 width=8) (actual\ntime=74450.897..85651.707 rows=13706121 loops=1)\n Sort Key: backupobjects.record_id\n -> Seq Scan on backupobjects (cost=0.00..466702.29\nrows=13703629 width=8) (actual time=0.024..40939.762 rows=13706121\nloops=1)\n -> Sort (cost=35915524.17..36453251.53 rows=215090944 width=8)\n(actual time=3182075.661..4094748.788 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=17.905..790499.303 rows=215090786\nloops=1)\n Total runtime: 4302591.325 ms\n\nWith 8.2.4.1 I get the same plan and performance with\ndefault_statistics_target set to either 10 or 100:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n QUERY\nPLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Merge Left Join 
(cost=37615266.46..40910145.54 rows=1 width=8) (actual\ntime=2765729.582..3768519.658 rows=10411 loops=1)\n Merge Cond: (backupobjects.record_id = backup_location.record_id)\n Filter: (backup_location.record_id IS NULL)\n -> Sort (cost=2224866.79..2259124.25 rows=13702985 width=8) (actual\ntime=101118.216..113245.942 rows=13706121 loops=1)\n Sort Key: backupobjects.record_id\n -> Seq Scan on backupobjects (cost=0.00..466695.85\nrows=13702985 width=8) (actual time=10.003..67604.564 rows=13706121\nloops=1)\n -> Sort (cost=35390399.67..35928127.03 rows=215090944 width=8)\n(actual time=2664596.049..3540048.500 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=7.110..246561.900 rows=215090786\nloops=1)\n Total runtime: 3770428.750 ms\n\nAnd for reference here is the same query with 8.1.5.6:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using (record_id) where\nbackup_location.record_id is null;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------\n Merge Left Join (cost=37490897.67..41269533.13 rows=13705356 width=8)\n(actual time=5096492.430..6588745.386 rows=10411 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".record_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..518007.92 rows=13705356 width=8) (actual\ntime=32.020..404517.133 rows=13706121 loops=1)\n -> Sort (cost=37490897.67..38028625.03 rows=215090944 width=8)\n(actual time=5096460.396..6058937.259 rows=215090786 loops=1)\n Sort Key: backup_location.record_id\n -> Seq Scan on backup_location (cost=0.00..3520915.44\nrows=215090944 width=8) (actual time=0.020..389038.442 rows=215090786\nloops=1)\n Total runtime: 6599215.268 ms\n(8 rows)\n\nBased on all this we will be going with 8.2.4.1, but it seems like\ncurrently the query planner isn't choosing the best plan for this case.\n\nThanks,\nEd\n", "msg_date": "Mon, 25 Jun 2007 14:48:55 -0700", "msg_from": "\"Tyrrill, Ed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Non-optimal query plan with 8.2" } ]
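For comparison, the same anti-join can be written with NOT EXISTS. This is a hedged sketch, not something tried in the post above: it assumes record_id is never NULL in backup_location, and on 8.2 (which has no dedicated anti-join plan) the subquery form becomes roughly one index probe per backupobjects row, which may or may not beat the big sort, so it is worth an EXPLAIN ANALYZE before relying on it. The index name below is made up; skip that statement if an index on backup_location.record_id already exists.

-- Equivalent formulation of the "rows with no backup_location" check:
EXPLAIN ANALYZE
SELECT b.record_id
FROM backupobjects b
WHERE NOT EXISTS (
    SELECT 1
    FROM backup_location l
    WHERE l.record_id = b.record_id
);

-- Hypothetical supporting index for the probe side:
CREATE INDEX backup_location_record_id_idx ON backup_location (record_id);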
[ { "msg_contents": "Hi,\n\n \n\nIn version 8.1.5, I have an rtree index on a 1.5 GB table. The size of\nthis index is 500 MB. After migrating to 8.2.3, the size of this index\nhas increased to 35GB. I've dropped are recreated the index and got the\nsame result. In 8.2.3 the index type is gist, does this have something\nto do with it? At 35GB we have seen a decrease in performance. Any\nhelp or hints are appreciated.\n\n \n\nCREATE INDEX binloc_boxrange\n\n ON featureloc\n\n USING rtree\n\n (boxrange(fmin, fmax));\n\n \n\n \n\nCREATE TABLE featureloc\n\n(\n\n featureloc_id serial NOT NULL,\n\n feature_id integer NOT NULL,\n\n srcfeature_id integer,\n\n fmin integer,\n\n is_fmin_partial boolean NOT NULL DEFAULT false,\n\n fmax integer,\n\n is_fmax_partial boolean NOT NULL DEFAULT false,\n\n strand smallint,\n\n phase integer,\n\n residue_info text,\n\n locgroup integer NOT NULL DEFAULT 0,\n\n rank integer NOT NULL DEFAULT 0,\n\n....\n\n \n\n \n\nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n\n RETURNS box AS\n\n'SELECT box (create_point(0, $1), create_point($2,500000000))'\n\n LANGUAGE 'sql' IMMUTABLE;\n\nALTER FUNCTION boxrange(integer, integer) OWNER TO cjm;\n\n \n\n \n\nCREATE OR REPLACE FUNCTION create_point(integer, integer)\n\n RETURNS point AS\n\n'SELECT point ($1, $2)'\n\n LANGUAGE 'sql' VOLATILE;\n\nALTER FUNCTION create_point(integer, integer) OWNER TO cjm;\n\n \n\n \n\n \n\nThanks,\n\nTom\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nIn version 8.1.5, I have an rtree index on a 1.5 GB table. \nThe size of this index is 500 MB.  After migrating to 8.2.3, the size of\nthis index has increased to 35GB.  I’ve dropped are recreated the\nindex and got the same result.  In 8.2.3 the index type is gist, does this\nhave something to do with it?  At 35GB we have seen a decrease in performance. \nAny help or hints are appreciated.\n \nCREATE INDEX binloc_boxrange\n  ON featureloc\n  USING rtree\n  (boxrange(fmin, fmax));\n \n \nCREATE TABLE featureloc\n(\n  featureloc_id serial NOT NULL,\n  feature_id integer NOT NULL,\n  srcfeature_id integer,\n  fmin integer,\n  is_fmin_partial boolean NOT NULL DEFAULT false,\n  fmax integer,\n  is_fmax_partial boolean NOT NULL DEFAULT false,\n  strand smallint,\n  phase integer,\n  residue_info text,\n  locgroup integer NOT NULL DEFAULT 0,\n  rank integer NOT NULL DEFAULT 0,\n….\n \n \nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n  RETURNS box AS\n'SELECT box (create_point(0, $1),\ncreate_point($2,500000000))'\n  LANGUAGE 'sql' IMMUTABLE;\nALTER FUNCTION boxrange(integer, integer) OWNER TO cjm;\n \n \nCREATE OR REPLACE FUNCTION create_point(integer, integer)\n  RETURNS point AS\n'SELECT point ($1, $2)'\n  LANGUAGE 'sql' VOLATILE;\nALTER FUNCTION create_point(integer, integer) OWNER TO cjm;\n \n \n \nThanks,\nTom", "msg_date": "Wed, 27 Jun 2007 11:17:48 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "rtree/gist index taking enormous amount of space in 8.2.3" }, { "msg_contents": "\"Dolafi, Tom\" <[email protected]> writes:\n> In version 8.1.5, I have an rtree index on a 1.5 GB table. The size of\n> this index is 500 MB. After migrating to 8.2.3, the size of this index\n> has increased to 35GB. I've dropped are recreated the index and got the\n> same result. 
In 8.2.3 the index type is gist, does this have something\n> to do with it?\n\nWe dropped rtree in 8.2 on the strength of experiments that seemed to\nshow gist was always better, but you seem to have an outlier case...\n\nCan you tell us something about the distribution of the data values\n(fmin and fmax)? Is there anything particularly magic about the two\nconstants you're using in the boxes? I suppose you've hit some bad\ncorner case in the gist box opclass, but it's not clear what.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jun 2007 12:08:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "\nmin(fmin) | max(fmin) | avg(fmin) \n 1 | 55296469 | 11423945 \n\nmin(fmax) | max(fmax) | avg(fmax)\n 18 | 55553288 | 11424491\n\nThere are 5,704,211 rows in the table.\n\nThis application has been inherited by us. As far as I can tell the\nmagic of the two constants seem to represent the notion of infinity.\n\nThanks,\nTom\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, June 28, 2007 12:09 PM\nTo: Dolafi, Tom\nCc: [email protected]\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3 \n\n\"Dolafi, Tom\" <[email protected]> writes:\n> In version 8.1.5, I have an rtree index on a 1.5 GB table. The size\nof\n> this index is 500 MB. After migrating to 8.2.3, the size of this\nindex\n> has increased to 35GB. I've dropped are recreated the index and got\nthe\n> same result. In 8.2.3 the index type is gist, does this have\nsomething\n> to do with it?\n\nWe dropped rtree in 8.2 on the strength of experiments that seemed to\nshow gist was always better, but you seem to have an outlier case...\n\nCan you tell us something about the distribution of the data values\n(fmin and fmax)? Is there anything particularly magic about the two\nconstants you're using in the boxes? I suppose you've hit some bad\ncorner case in the gist box opclass, but it's not clear what.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2007 09:51:19 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "Dolafi, Tom wrote:\n> min(fmin) | max(fmin) | avg(fmin) \n> 1 | 55296469 | 11423945 \n> \n> min(fmax) | max(fmax) | avg(fmax)\n> 18 | 55553288 | 11424491\n> \n> There are 5,704,211 rows in the table.\n\nWhen you're looking for weird index problems, it's more interesting to know if there are certain numbers that occur a LOT. From your statistics above, each number occurs about 10 times in the table. But do some particular numbers occur thousands, or even millions, of times?\n\nHere is a query that will print a list of the highest-occuring values. 
You might expect a few occurances of 20, and maybe 30, but if you have thousands or millions of occurances of certain numbers, then that can screw up an index.\n\n select fmax, c from\n (select fmax, count(fmax) as c from your_table group by fmax) as foo\n where c > 3 order by c desc;\n\nCraig\n\n", "msg_date": "Fri, 29 Jun 2007 09:14:08 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rtree/gist index taking enormous amount of space in\n 8.2.3" }, { "msg_contents": "\"Dolafi, Tom\" <[email protected]> writes:\n> min(fmin) | max(fmin) | avg(fmin) \n> 1 | 55296469 | 11423945 \n> min(fmax) | max(fmax) | avg(fmax)\n> 18 | 55553288 | 11424491\n\nOK, I was able to reproduce a problem after making the further guess\nthat fmax is usually a little bit greater than fmin. The attached test\nscript generates an rtree index of around 800 pages on 8.1.9, and the\nindex build time is about 6 seconds on my machine. On CVS HEAD, the\nscript generates a gist index of over 30000 pages and the build time is\nover 60 seconds. Since I'm using random() the numbers move around a\nbit, but they're consistently awful. I experimented with a few other\ndistributions, such as fmin and fmax chosen independently in the same\nrange, and saw gist build time usually better than rtree and index size\nonly somewhat larger, so this particular distribution apparently fools\ngist_box_picksplit rather badly. The problem seems nonlinear too ---\nI had originally tried it with 1 million test rows instead of 100000,\nand gave up waiting for the index build after more than an hour.\n\nOleg, Teodor, can this be improved?\n\n\t\t\tregards, tom lane\n\ndrop table featureloc;\n\nCREATE TABLE featureloc\n(\n fmin integer,\n fmax integer\n);\n\ninsert into featureloc\n select r1, r1 + 1 + random() * 1000 from\n (select 1 + random() * 55000000 as r1, 1 + random() * 55000000 as r2\n from generate_series(1,100000) offset 0) as ss;\n\nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n RETURNS box AS\n 'SELECT box (point(0, $1), point($2, 500000000))'\n LANGUAGE 'sql' STRICT IMMUTABLE;\n\nCREATE INDEX binloc_boxrange\n ON featureloc\n USING rtree\n (boxrange(fmin, fmax));\n\nvacuum verbose featureloc;\n", "msg_date": "Fri, 29 Jun 2007 13:57:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "The data is not distributed well...\n\nTop 20 occurrences of fmin and fmax:\n fmin | count\n----------+--------\n 0 | 214476\n 19281576 | 2870\n 2490005 | 2290\n 1266332 | 2261\n 15539680 | 2086\n 11022233 | 2022\n 25559658 | 1923\n 3054411 | 1906\n 10237885 | 1890\n 13827272 | 1876\n 19187021 | 1847\n 18101335 | 1845\n 1518230 | 1843\n 21199488 | 1842\n 1922518 | 1826\n 1216144 | 1798\n 25802126 | 1762\n 8307335 | 1745\n 21271866 | 1736\n 8361667 | 1721\n\n\n fmax | count\n----------+--------\n 25 | 197551\n 21272002 | 547\n 21271988 | 335\n 21271969 | 321\n 6045781 | 247\n 1339301 | 243\n 21669151 | 235\n 7779506 | 232\n 2571422 | 229\n 7715946 | 228\n 27421323 | 222\n 7048089 | 221\n 87364 | 219\n 13656535 | 217\n 26034147 | 214\n 19184612 | 213\n 7048451 | 213\n 21668877 | 213\n 6587492 | 212\n 9484598 | 212\n\nAlso, out of 5.7 million rows there are 1.6 million unique fmin and 1.6\nmillion unique fmax values.\n\nThanks,\nTom \n\n-----Original Message-----\nFrom: Craig James [mailto:[email protected]] \nSent: Friday, June 29, 2007 12:14 PM\nTo: Dolafi, Tom\nCc: Tom Lane; 
[email protected]\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3\n\nDolafi, Tom wrote:\n> min(fmin) | max(fmin) | avg(fmin) \n> 1 | 55296469 | 11423945 \n> \n> min(fmax) | max(fmax) | avg(fmax)\n> 18 | 55553288 | 11424491\n> \n> There are 5,704,211 rows in the table.\n\nWhen you're looking for weird index problems, it's more interesting to\nknow if there are certain numbers that occur a LOT. From your\nstatistics above, each number occurs about 10 times in the table. But\ndo some particular numbers occur thousands, or even millions, of times?\n\nHere is a query that will print a list of the highest-occuring values.\nYou might expect a few occurances of 20, and maybe 30, but if you have\nthousands or millions of occurances of certain numbers, then that can\nscrew up an index.\n\n select fmax, c from\n (select fmax, count(fmax) as c from your_table group by fmax) as foo\n where c > 3 order by c desc;\n\nCraig\n", "msg_date": "Fri, 29 Jun 2007 14:05:30 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3" }, { "msg_contents": "Thanks for looking into this and reproducing a similar result. The\nindex took 6 hours to complete on a 1.5GB table resulting in 35GB of\nstorage, and it took 36 hours to vacuum... I'm patient :-)\n\nIn the mean time I've dropped the index which has resulted in overall\nperformance gain on queries against the table, but we have not tested\nthe part of the application which would utilize this index.\n\n- Tom\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, June 29, 2007 1:58 PM\nTo: Dolafi, Tom\nCc: [email protected]; Oleg Bartunov; Teodor Sigaev\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3 \n\n\"Dolafi, Tom\" <[email protected]> writes:\n> min(fmin) | max(fmin) | avg(fmin) \n> 1 | 55296469 | 11423945 \n> min(fmax) | max(fmax) | avg(fmax)\n> 18 | 55553288 | 11424491\n\nOK, I was able to reproduce a problem after making the further guess\nthat fmax is usually a little bit greater than fmin. The attached test\nscript generates an rtree index of around 800 pages on 8.1.9, and the\nindex build time is about 6 seconds on my machine. On CVS HEAD, the\nscript generates a gist index of over 30000 pages and the build time is\nover 60 seconds. Since I'm using random() the numbers move around a\nbit, but they're consistently awful. I experimented with a few other\ndistributions, such as fmin and fmax chosen independently in the same\nrange, and saw gist build time usually better than rtree and index size\nonly somewhat larger, so this particular distribution apparently fools\ngist_box_picksplit rather badly. 
The problem seems nonlinear too ---\nI had originally tried it with 1 million test rows instead of 100000,\nand gave up waiting for the index build after more than an hour.\n\nOleg, Teodor, can this be improved?\n\n\t\t\tregards, tom lane\n\ndrop table featureloc;\n\nCREATE TABLE featureloc\n(\n fmin integer,\n fmax integer\n);\n\ninsert into featureloc\n select r1, r1 + 1 + random() * 1000 from\n (select 1 + random() * 55000000 as r1, 1 + random() * 55000000 as r2\n from generate_series(1,100000) offset 0) as ss;\n\nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n RETURNS box AS\n 'SELECT box (point(0, $1), point($2, 500000000))'\n LANGUAGE 'sql' STRICT IMMUTABLE;\n\nCREATE INDEX binloc_boxrange\n ON featureloc\n USING rtree\n (boxrange(fmin, fmax));\n\nvacuum verbose featureloc;\n", "msg_date": "Fri, 29 Jun 2007 14:13:23 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "\"Dolafi, Tom\" <[email protected]> writes:\n> The data is not distributed well...\n\nCan you show us min, max, and avg of fmax minus fmin? I'd like to\ncheck my guess about that being a fairly narrow range.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2007 14:30:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "\"Dolafi, Tom\" <[email protected]> writes:\n> In the mean time I've dropped the index which has resulted in overall\n> performance gain on queries against the table, but we have not tested\n> the part of the application which would utilize this index.\n\nI noted that with the same (guessed-at) distribution of fmin/fmax, the\nindex size remains reasonable if you change the derived boxes to\n\nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n RETURNS box AS\n 'SELECT box (point($1, $1), point($2, $2))'\n LANGUAGE 'sql' STRICT IMMUTABLE;\n\nwhich makes sense from the point of view of geometric intuition: instead\nof a bunch of very tall, mostly very narrow, mostly overlapping boxes,\nyou have a bunch of small square boxes spread out along a line. So it\nstands to reason that a geometrically-motivated index structure would\nwork a lot better on the latter. I don't know though whether your\nqueries can be adapted to work with this. What was the index being used\nfor, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2007 14:38:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "(fmax-fmin)...\n min | max | avg\n---------+---------+----------------------\n 1 | 2278225 | 546 \n\nI noticed 3000 occurrences where fmax is less than fmin. I excluded\nthese values to get the min difference between the two. Also, there are\n20 \"invalid\"/\"bogus\" rows with negative values which were excluded from\nthe queries.\n\nThanks,\nTom\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, June 29, 2007 2:30 PM\nTo: Dolafi, Tom\nCc: Craig James; [email protected]\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3 \n\n\"Dolafi, Tom\" <[email protected]> writes:\n> The data is not distributed well...\n\nCan you show us min, max, and avg of fmax minus fmin? 
I'd like to\ncheck my guess about that being a fairly narrow range.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2007 14:44:31 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "The application need is to determine genomic features present in a\nuser-defined portion of a chromosome. My guess is that features (boxes)\nare overlapping along a line (chromosome), and there is a need to\nrepresent them as being stacked. Since I'm not certain of its exact\nuse, I've emailed the application owner to find the motivation as to why\na geometric index structure is used, and why the boxes are tall and\noverlapping. As a side note, the data model for our application is\nbased on a popular bioinformatics open source project called chado.\n\nThanks,\nTom\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, June 29, 2007 2:38 PM\nTo: Dolafi, Tom\nCc: [email protected]; Oleg Bartunov; Teodor Sigaev\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3 \n\n\"Dolafi, Tom\" <[email protected]> writes:\n> In the mean time I've dropped the index which has resulted in overall\n> performance gain on queries against the table, but we have not tested\n> the part of the application which would utilize this index.\n\nI noted that with the same (guessed-at) distribution of fmin/fmax, the\nindex size remains reasonable if you change the derived boxes to\n\nCREATE OR REPLACE FUNCTION boxrange(integer, integer)\n RETURNS box AS\n 'SELECT box (point($1, $1), point($2, $2))'\n LANGUAGE 'sql' STRICT IMMUTABLE;\n\nwhich makes sense from the point of view of geometric intuition: instead\nof a bunch of very tall, mostly very narrow, mostly overlapping boxes,\nyou have a bunch of small square boxes spread out along a line. So it\nstands to reason that a geometrically-motivated index structure would\nwork a lot better on the latter. I don't know though whether your\nqueries can be adapted to work with this. What was the index being used\nfor, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jun 2007 15:40:30 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3 " }, { "msg_contents": "Thank you for the patch. The index size is back down to 500MB and there\nare no performance issues with queries against the table.\n \n-----Original Message-----\nFrom: Teodor Sigaev [mailto:[email protected]] \nSent: Friday, July 06, 2007 8:08 AM\nTo: Tom Lane\nCc: Dolafi, Tom; [email protected]; Oleg Bartunov\nSubject: Re: [PERFORM] rtree/gist index taking enormous amount of space\nin 8.2.3\n\n> Oleg, Teodor, can this be improved?\nAttached patch improves creation of index for similar corner cases. And\nsplit \nalgorithm still demonstrates O(n).\n\nIt possible to make fallback to Guttman's split algorithm in corner\ncases, but I \n don't like this: used linear algorithm is much faster and usually has\nbetter \nperformance in search.\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW:\nhttp://www.sigaev.ru/\n", "msg_date": "Mon, 9 Jul 2007 17:48:21 -0400", "msg_from": "\"Dolafi, Tom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rtree/gist index taking enormous amount of space in 8.2.3" } ]
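For anyone checking whether they are hitting the same picksplit corner case, the index size is easy to watch from SQL, and the reshaped boxes described in the suggestion above can be built alongside the original for comparison. The function and index names below (boxrange2, binloc_boxrange2) are made up for this sketch, and queries written against the original tall boxes would need to be re-checked against the new box shape.

-- On-disk size of the existing index (works on 8.1 and later):
SELECT pg_size_pretty(pg_relation_size('binloc_boxrange'));

-- The reshaped helper: small square boxes along the diagonal instead
-- of tall, mostly overlapping strips.
CREATE OR REPLACE FUNCTION boxrange2(integer, integer)
  RETURNS box AS
'SELECT box (point($1, $1), point($2, $2))'
  LANGUAGE 'sql' STRICT IMMUTABLE;

CREATE INDEX binloc_boxrange2
  ON featureloc
  USING gist
  (boxrange2(fmin, fmax));

SELECT pg_size_pretty(pg_relation_size('binloc_boxrange2'));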
[ { "msg_contents": "I was wondering if you guys have some suggested settings for our server, i\nthink we are not hardware limited but the configureation is set up\nincorrectly. For some reason our database seems to have trouble handling\n10+ inserts per second which seems to be a pretty trivial load for this\nhardware, we're seeing very high %iowait, this is a pretty typical output\nfor #iostat -m 5\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.41 0.00 0.41 96.28 0.00 2.90\n\nDevice: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\nsda 90.63 0.08 0.56 0 2\nsdc 0.00 0.00 0.00 0 0\nsdd 94.09 0.19 1.74 0 8\n\n\nsda = 2x320GB 7200rpm in RAID1\nsdc = 2x150GB 10krpm in RAID1 (transaction log is on this array)\nsdd = 6x150GB 10krpm in RAID 10 (database is on the array)\n\nraid controller = 3ware 9650 12port - 256MB cache\n\n8GB RAM, core 2 duo - quad core\n\nit would seem like the io subsystem is the limiting factor, but i feel like\nwe should be barely hitting a wall, you can see from the example its writing\n< 2MB/s to the array\n\nHere's some of our settings\n\nshared_buffers = 256MB # min 128kB or max_connections*16kB\ntemp_buffers = 32MB # min 800kB\nmax_prepared_transactions = 50 # can be 0 or more\nwork_mem = 32MB # min 64kB\nmaintenance_work_mem = 32MB # min 1MB\nmax_stack_depth = 7MB # min 100kB\n\nmax_fsm_pages = 512000 # min max_fsm_relations*16, 6 bytes\n\nfsync = off # turns forced synchronization on or\noff\n\n\nIf you guys have any suggestions it would be greatly appreciated\n\nI was wondering if you guys have some suggested settings for our server, i think we are not hardware limited but the configureation is set up incorrectly.  For some reason our database seems to have trouble handling 10+ inserts per second which seems to be a pretty trivial load for this hardware, we're seeing very high %iowait, this is a pretty typical output for #iostat -m 5 \navg-cpu:  %user   %nice %system %iowait  %steal   %idle                 0.41    0.00     0.41       96.28       0.00     2.90Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtnsda              \n90.63         0.08         0.56          0          2sdc               0.00         0.00         0.00          0          0sdd              94.09         0.19         1.74          0          8sda = 2x320GB 7200rpm in RAID1\nsdc = 2x150GB 10krpm in RAID1    (transaction log is on this array)sdd = 6x150GB 10krpm in RAID 10 (database is on the array)raid controller = 3ware 9650 12port - 256MB cache \n8GB RAM, core 2 duo - quad core it would seem like the io subsystem is the limiting factor, but i feel like we should be barely hitting a wall, you can see from the example its writing < 2MB/s to the array \nHere's some of our settingsshared_buffers = 256MB                  # min 128kB or max_connections*16kBtemp_buffers = 32MB                     # min 800kBmax_prepared_transactions = 50          # can be 0 or more \nwork_mem = 32MB                         # min 64kBmaintenance_work_mem = 32MB             # min 1MBmax_stack_depth = 7MB                   # min 100kBmax_fsm_pages = 512000          # min max_fsm_relations*16, 6 bytes \nfsync = off                             # turns forced synchronization on or offIf you guys have any suggestions it would be greatly appreciated", "msg_date": "Wed, 27 Jun 2007 20:38:11 -0400", "msg_from": "\"Evan Reiser\" <[email protected]>", "msg_from_op": true, "msg_subject": "High IOWAIT times, low iops? 
Need Help with configuration" }, { "msg_contents": "Evan Reiser wrote:\n> I was wondering if you guys have some suggested settings for our server, i\n> think we are not hardware limited but the configureation is set up\n> incorrectly. For some reason our database seems to have trouble handling\n> 10+ inserts per second which seems to be a pretty trivial load for this\n> hardware, we're seeing very high %iowait, this is a pretty typical output\n> for #iostat -m 5\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 0.41 0.00 0.41 96.28 0.00 2.90\n> \n> Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\n> sda 90.63 0.08 0.56 0 2\n> sdc 0.00 0.00 0.00 0 0\n> sdd 94.09 0.19 1.74 0 8\n> \n> \n> sda = 2x320GB 7200rpm in RAID1\n> sdc = 2x150GB 10krpm in RAID1 (transaction log is on this array)\n> sdd = 6x150GB 10krpm in RAID 10 (database is on the array)\n\nOK, so no write activity on the transaction log, and hardly any reading \non sdd. Your disks are practically idle, and yet iowait is at 96% - very \nstrange.\n\n> raid controller = 3ware 9650 12port - 256MB cache\n> \n> 8GB RAM, core 2 duo - quad core\n> \n> it would seem like the io subsystem is the limiting factor, but i feel like\n> we should be barely hitting a wall, you can see from the example its \n> writing\n> < 2MB/s to the array\n> \n> Here's some of our settings\n> \n> shared_buffers = 256MB # min 128kB or max_connections*16kB\n> temp_buffers = 32MB # min 800kB\n> max_prepared_transactions = 50 # can be 0 or more\n> work_mem = 32MB # min 64kB\n> maintenance_work_mem = 32MB # min 1MB\n> max_stack_depth = 7MB # min 100kB\n> \n> max_fsm_pages = 512000 # min max_fsm_relations*16, 6 bytes\n\nWell, you might want to tweak these, but they're not going to completely \nkill your io.\n\n> fsync = off # turns forced synchronization \n\nYou'll be turning this back on in production, I take it?\n\nHmm - ideas\n1. Run a VACUUM FULL on your database(s) and see what happens with your \nio then\n2. Test a block copy, something like (but a directory on sdd):\n dd if=/dev/zero of=/tmp/empty count=1000000\n That should show an upper limit for your write speed.\n3. Google around and check there aren't any issues with your raid \ncontroller and kernel/driver versions.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 08:17:38 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High IOWAIT times, low iops? Need Help with configuration" }, { "msg_contents": "we've tried benchmarking the array, the data array can write at\n800mb/s for files less than 256mb (raid write cache), after which it\ncan sustain 300mb/s, it seems like it can also handle 6-700 iops when\nbenchmarking. it seems to work as expected outside of postgres, I\nguess we can look at the drivers, let me know if you guys have any\nother suggestions, thanks for your help, -evan\n\nOn 6/28/07, Richard Huxton <[email protected]> wrote:\n> Evan Reiser wrote:\n> > I was wondering if you guys have some suggested settings for our server, i\n> > think we are not hardware limited but the configureation is set up\n> > incorrectly. 
For some reason our database seems to have trouble handling\n> > 10+ inserts per second which seems to be a pretty trivial load for this\n> > hardware, we're seeing very high %iowait, this is a pretty typical output\n> > for #iostat -m 5\n> >\n> > avg-cpu: %user %nice %system %iowait %steal %idle\n> > 0.41 0.00 0.41 96.28 0.00 2.90\n> >\n> > Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn\n> > sda 90.63 0.08 0.56 0 2\n> > sdc 0.00 0.00 0.00 0 0\n> > sdd 94.09 0.19 1.74 0 8\n> >\n> >\n> > sda = 2x320GB 7200rpm in RAID1\n> > sdc = 2x150GB 10krpm in RAID1 (transaction log is on this array)\n> > sdd = 6x150GB 10krpm in RAID 10 (database is on the array)\n>\n> OK, so no write activity on the transaction log, and hardly any reading\n> on sdd. Your disks are practically idle, and yet iowait is at 96% - very\n> strange.\n>\n> > raid controller = 3ware 9650 12port - 256MB cache\n> >\n> > 8GB RAM, core 2 duo - quad core\n> >\n> > it would seem like the io subsystem is the limiting factor, but i feel\n> like\n> > we should be barely hitting a wall, you can see from the example its\n> > writing\n> > < 2MB/s to the array\n> >\n> > Here's some of our settings\n> >\n> > shared_buffers = 256MB # min 128kB or\n> max_connections*16kB\n> > temp_buffers = 32MB # min 800kB\n> > max_prepared_transactions = 50 # can be 0 or more\n> > work_mem = 32MB # min 64kB\n> > maintenance_work_mem = 32MB # min 1MB\n> > max_stack_depth = 7MB # min 100kB\n> >\n> > max_fsm_pages = 512000 # min max_fsm_relations*16, 6 bytes\n>\n> Well, you might want to tweak these, but they're not going to completely\n> kill your io.\n>\n> > fsync = off # turns forced synchronization\n>\n> You'll be turning this back on in production, I take it?\n>\n> Hmm - ideas\n> 1. Run a VACUUM FULL on your database(s) and see what happens with your\n> io then\n> 2. Test a block copy, something like (but a directory on sdd):\n> dd if=/dev/zero of=/tmp/empty count=1000000\n> That should show an upper limit for your write speed.\n> 3. Google around and check there aren't any issues with your raid\n> controller and kernel/driver versions.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n", "msg_date": "Thu, 28 Jun 2007 09:17:59 -0400", "msg_from": "\"Evan Reiser\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High IOWAIT times, low iops? Need Help with configuration" }, { "msg_contents": "On Wed, 27 Jun 2007, Evan Reiser wrote:\n\n> For some reason our database seems to have trouble handling 10+ inserts \n> per second which seems to be a pretty trivial load for this hardware, \n> we're seeing very high %iowait\n\nTwo things come to mind:\n\n1) Is the table you're inserting into very complicated, with lots of \nindexes or triggers on it? Low I/O rates but high wait times are typical \nof when the data needed to update is spread out across the disk \nconsiderably, so there's lots of disk seeking involved even though the \nwrites are relatively small. That can happen if there are lots of index \nblocks to be updated every time you do an insert. Taking a look at VACCUM \nVERBOSE ANALYZE may either fix the problem or give you an idea what's \ngoing on. You might want to cluster your indexes at some point to help \nout with this.\n\n2) If you still have checkpoint_segments at the default of 3, your system \ncould be basically in a continuous checkpoint. 
Try making that 10X bigger \nas a start just to see if it improves things; you may end up settling for \na much larger value before you're done.\n\n> 8GB RAM, core 2 duo - quad core\n> shared_buffers = 256MB # min 128kB or max_connections*16kB\n\nAnd while not necessarily causing the problem you asked about, this is off \nby an order of magnitude if this server is mainly for PostgreSQL, and you \nshould be setting effective_cache_size as well if you're not doing that. \nSee http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm for a \nquick intro to things to consider.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 28 Jun 2007 09:39:38 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High IOWAIT times, low iops? Need Help with configuration" } ]
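Pulling together the configuration advice in this thread for an 8 GB machine, a hedged starting point might look like the lines below, written in the same postgresql.conf format as the settings quoted in the original post. The numbers are illustrative only and assume 8.2-style memory units; they need to be validated against the real workload, and fsync should stay on wherever the data matters.

shared_buffers = 2GB              # was 256MB; low by an order of magnitude for 8GB RAM
effective_cache_size = 5GB        # rough size of the OS cache available to the database
checkpoint_segments = 30          # roughly 10x the default of 3
fsync = on                        # fsync = off is only for disposable benchmark runs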
[ { "msg_contents": "Hi all,\n\nI'm trying to do an update of a reasonably large table and it's taking \nway too long so I'm trying to work out why and if I need to tweak any \nsettings to speed it up.\n\nThe table is around 3.5 million records.\n\nThe query is\n\nupdate table set domainname=substring(emailaddress from position('@' in \nemailaddress));\n\nI've left it running for over 20 minutes and it hasn't finished so I'm \ndoing something terribly bad but I have no idea what ;)\n\nMaybe there's another way to write the query but I'm not sure how to \nmake it better.\n\nMost settings are default, I have bumped up shared_buffers a bit to \n65536 - I think that works out to 512Meg ? The machine has 2G ram.\n\nRunning version 8.1.9.\n\nAny pointers about settings etc are most welcome.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 15:03:32 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "update query taking too long" }, { "msg_contents": "am Thu, dem 28.06.2007, um 15:03:32 +1000 mailte Chris folgendes:\n> Hi all,\n> \n> I'm trying to do an update of a reasonably large table and it's taking \n> way too long so I'm trying to work out why and if I need to tweak any \n> settings to speed it up.\n> \n> The table is around 3.5 million records.\n> \n> The query is\n> \n> update table set domainname=substring(emailaddress from position('@' in \n> emailaddress));\n\nI think, this is a bad idea.\nBecause, first, you have 2 columns with nearly identical data\n(mailaddres includes the domain and a extra domain field)\n\nAnd, after the UPDATE you have every row twice, because of MVCC: the\nlive tuple and a dead tuple.\n\n\n> Any pointers about settings etc are most welcome.\n\nI think, you should better use a VIEW.\n\nCREATE VIEW my_view_on_table as SELECT mailaddres, substring(...) as\ndomain, ...\n\nor, use the substring(...) in your regular queries instead the extra\ncolumn.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 28 Jun 2007 07:37:25 +0200", "msg_from": "\"A. 
Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Chris <[email protected]> writes:\n> I'm trying to do an update of a reasonably large table and it's taking \n> way too long so I'm trying to work out why and if I need to tweak any \n> settings to speed it up.\n\nAny foreign keys leading to or from that table?\n\n3.5 million row updates are not exactly gonna be instantaneous anyway,\nbut only FK checks or really slow user-written triggers would make it\ntake upwards of an hour ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jun 2007 01:41:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long " }, { "msg_contents": "Tom Lane wrote:\n> Chris <[email protected]> writes:\n>> I'm trying to do an update of a reasonably large table and it's taking \n>> way too long so I'm trying to work out why and if I need to tweak any \n>> settings to speed it up.\n> \n> Any foreign keys leading to or from that table?\n\nNope :(\n\n> 3.5 million row updates are not exactly gonna be instantaneous anyway,\n> but only FK checks or really slow user-written triggers would make it\n> take upwards of an hour ...\n\nNo triggers, functions.\n\nTable is pretty basic.\n\nI have a few indexes (one on the primary key, one on emailaddress etc) \nbut the 'domainname' column is a new one not referenced by any of the \nindexes.\n\nFWIW (while the other update is still going in another window):\n\nselect SUBSTRING(emailaddress FROM POSITION('@' IN emailaddress)) from \ntable;\nTime: 28140.399 ms\n\nIs there a better way to write the update? I thought about something \nlike this (but couldn't get it working - guess I don't have the right \nsyntax):\n\nupdate t1 set domainname=(select id, SUBSTRING(emailaddress FROM \nPOSITION('@' IN emailaddress)) from table t2) AS t2 where t1.id=t2.id\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 16:16:50 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "A. Kretschmer wrote:\n> am Thu, dem 28.06.2007, um 15:03:32 +1000 mailte Chris folgendes:\n>> Hi all,\n>>\n>> I'm trying to do an update of a reasonably large table and it's taking \n>> way too long so I'm trying to work out why and if I need to tweak any \n>> settings to speed it up.\n>>\n>> The table is around 3.5 million records.\n>>\n>> The query is\n>>\n>> update table set domainname=substring(emailaddress from position('@' in \n>> emailaddress));\n> \n> I think, this is a bad idea.\n> Because, first, you have 2 columns with nearly identical data\n> (mailaddres includes the domain and a extra domain field)\n\nYeh I know. I might have to go back to the drawing board on this one. \nThe app has to work in mysql & postgres so I'm a bit limited in some of \nmy approaches.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 16:20:59 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "am Thu, dem 28.06.2007, um 16:16:50 +1000 mailte Chris folgendes:\n> Is there a better way to write the update? 
I thought about something \n> like this (but couldn't get it working - guess I don't have the right \n> syntax):\n> \n> update t1 set domainname=(select id, SUBSTRING(emailaddress FROM \n> POSITION('@' IN emailaddress)) from table t2) AS t2 where t1.id=t2.id\n\ntest=# select * from foo;\n id | mail | domain\n----+-------------+--------\n 1 | [email protected] |\n 2 | [email protected] |\n(2 rows)\n\ntest=*# update foo set domain=SUBSTRING(mail FROM (POSITION('@' IN\nmail)+1));\nUPDATE 2\ntest=*# select * from foo;\n id | mail | domain\n----+-------------+---------\n 1 | [email protected] | foo.tld\n 2 | [email protected] | bar.tld\n(2 rows)\n\n\n(without the @ in the domain...)\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 28 Jun 2007 08:28:26 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "A. Kretschmer wrote:\n> am Thu, dem 28.06.2007, um 16:16:50 +1000 mailte Chris folgendes:\n>> Is there a better way to write the update? I thought about something \n>> like this (but couldn't get it working - guess I don't have the right \n>> syntax):\n>>\n>> update t1 set domainname=(select id, SUBSTRING(emailaddress FROM \n>> POSITION('@' IN emailaddress)) from table t2) AS t2 where t1.id=t2.id\n> \n> test=# select * from foo;\n> id | mail | domain\n> ----+-------------+--------\n> 1 | [email protected] |\n> 2 | [email protected] |\n> (2 rows)\n> \n> test=*# update foo set domain=SUBSTRING(mail FROM (POSITION('@' IN\n> mail)+1));\n\nThat's what my original query is (apart from the +1 at the end) ;)\n\nI was just trying to approach it differently with the other attempt.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 16:37:43 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Chris wrote:\n> Tom Lane wrote:\n>> Any foreign keys leading to or from that table?\n> \n> Nope :(\n> \n>> 3.5 million row updates are not exactly gonna be instantaneous anyway,\n>> but only FK checks or really slow user-written triggers would make it\n>> take upwards of an hour ...\n> \n> No triggers, functions.\n\nOf course you really want a trigger on this, since presumably domainname \nshould always be kept in sync with emailaddress. But that's not the \nimmediate issue.\n\n> Table is pretty basic.\n> \n> I have a few indexes (one on the primary key, one on emailaddress etc) \n> but the 'domainname' column is a new one not referenced by any of the \n> indexes.\n> \n> FWIW (while the other update is still going in another window):\n\nWhat's saturated? Is the system I/O limited or CPU limited? You *should* \nbe limited by the write speed of your disk with something simple like this.\n\nWhat happens if you do the following?\nCREATE TABLE email_upd_test (id SERIAL, email text, domainname text, \nPRIMARY KEY (id));\n\nINSERT INTO email_upd_test (email) SELECT n::text || '@' || n::text FROM \n(SELECT generate_series(1,1000000) AS n) AS numbers;\nANALYSE email_upd_test;\n\n\\timing\nUPDATE email_upd_test SET domainname=substring(email from position('@' \nin email));\nUPDATE 1000000\nTime: 35056.125 ms\n\nThat 35 seconds is on a simple single-disk IDE disk. 
No particular \ntuning done on that box.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 07:39:46 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Richard Huxton wrote:\n> Chris wrote:\n>> Tom Lane wrote:\n>>> Any foreign keys leading to or from that table?\n>>\n>> Nope :(\n>>\n>>> 3.5 million row updates are not exactly gonna be instantaneous anyway,\n>>> but only FK checks or really slow user-written triggers would make it\n>>> take upwards of an hour ...\n>>\n>> No triggers, functions.\n> \n> Of course you really want a trigger on this, since presumably domainname \n> should always be kept in sync with emailaddress. But that's not the \n> immediate issue.\n> \n>> Table is pretty basic.\n>>\n>> I have a few indexes (one on the primary key, one on emailaddress etc) \n>> but the 'domainname' column is a new one not referenced by any of the \n>> indexes.\n>>\n>> FWIW (while the other update is still going in another window):\n> \n> What's saturated? Is the system I/O limited or CPU limited? You *should* \n> be limited by the write speed of your disk with something simple like this.\n> \n> What happens if you do the following?\n\ndb=# CREATE TABLE email_upd_test (id SERIAL, email text, domainname \ntext, PRIMARY KEY (id));\nNOTICE: CREATE TABLE will create implicit sequence \n\"email_upd_test_id_seq\" for serial column \"email_upd_test.id\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n\"email_upd_test_pkey\" for table \"email_upd_test\"\nCREATE TABLE\nTime: 276.500 ms\ndb=# INSERT INTO email_upd_test (email) SELECT n::text || '@' || n::text \nFROM (SELECT generate_series(1,1000000) AS n) AS numbers;\nINSERT 0 1000000\nTime: 14104.663 ms\ndb=# ANALYSE email_upd_test;\nANALYZE\nTime: 121.775 ms\ndb=# UPDATE email_upd_test SET domainname=substring(email from \nposition('@' in email));\nUPDATE 1000000\nTime: 43796.030 ms\n\n\nI think I'm I/O bound from my very limited understanding of vmstat.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 16:49:58 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Chris wrote:\n> db=# UPDATE email_upd_test SET domainname=substring(email from \n> position('@' in email));\n> UPDATE 1000000\n> Time: 43796.030 ms\n> \n> I think I'm I/O bound from my very limited understanding of vmstat.\n\nWell, 43 seconds to update 1 million rows suggests your real query \nshould be complete in a few minutes, even if your real table has more \ncolumns.\n\nCould you check again and just make sure you don't have a foreign key \nreferencing this table? I suspect a large table without an index on the \nreferencing column.\n\nIf you can't see anything, cancel the long-running query, run VACUUM \nFULL VERBOSE on the table, ANALYSE VERBOSE and then try it again. 
\nThere's something very odd here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 08:02:44 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Richard Huxton wrote:\n> Chris wrote:\n>> db=# UPDATE email_upd_test SET domainname=substring(email from \n>> position('@' in email));\n>> UPDATE 1000000\n>> Time: 43796.030 ms\n>>\n>> I think I'm I/O bound from my very limited understanding of vmstat.\n> \n> Well, 43 seconds to update 1 million rows suggests your real query \n> should be complete in a few minutes, even if your real table has more \n> columns.\n\nYep.\n\nI think I have solved it though - the server was checkpointing so much \nnot much else was going on.\n\nI didn't have logging set up before but it's up and running now and I \nwas getting\n\nLOG: checkpoints are occurring too frequently (26 seconds apart)\nHINT: Consider increasing the configuration parameter \n\"checkpoint_segments\".\n\nSo I increased that from 10 to 30 and it finished:\n\nUPDATE 3500101\nTime: 146513.349 ms\n\nThanks for all the help :)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 28 Jun 2007 18:18:16 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Chris wrote (in part):\n\n> I didn't have logging set up before but it's up and running now and I\n> was getting\n> \n> LOG: checkpoints are occurring too frequently (26 seconds apart)\n> HINT: Consider increasing the configuration parameter\n> \"checkpoint_segments\".\n> \n> So I increased that from 10 to 30 and it finished:\n> \n> UPDATE 3500101\n> Time: 146513.349 ms\n> \nI have not used postgreSQL since I tried it once in about 1998 (when I found\nit unsatisfactory, but much has changed since then), but I am going to try\nit again. What would be a good checkpointing interval? I would guess 26\nseconds is too often. What considerations go into picking a checkpointing\ninterval?\n\nI note, from the book \"PostgreSQL\" second edition by Douglas and Doublas,\nthe following parameters are available:\n\nWAL_BUFFERS The default is 8.\nCHECKPOINT_SEGMENTS The default is 3. This would have been too low for the\n O.P. Would it make sense to start with a higher value\n or is this a good value and just not appropriate for\n the O.P.? Should CHECKPOINT_SEGMENTS be raised until\n the checkpointing is about half CHECKPOINT_TIMEOUT,\n e.g., 150 seconds while the dbms is running typical\n work?\nCHECKPOINT_TIMEOUT The default is 300 seconds.\nCHECKPOINT_WARNING The default is 30 seconds.\n\nMy machine has 8 GBytes RAM and it worked perfectly well (very very little\npaging) when it had 4 GBytes RAM. I doubled it because it was cheap at the\ntime and I was afraid it would become unavailable later. It is usually\nbetween 2/3 and 3/4 used by the cache. When I run IBM DB2 on it, the choke\npoint is the IO time spent writing the logfiles.\n\n\n-- \n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 07:20:01 up 7 days, 14:55, 3 users, load average: 4.26, 4.15, 4.07\n", "msg_date": "Thu, 28 Jun 2007 07:44:57 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Jean-David Beyer wrote:\n> I have not used postgreSQL since I tried it once in about 1998 (when I found\n> it unsatisfactory, but much has changed since then), but I am going to try\n> it again. What would be a good checkpointing interval? I would guess 26\n> seconds is too often. What considerations go into picking a checkpointing\n> interval?\n\nWelcome back.\n\nLonger is better when the system is running. But on recovery, longer \ncheckpoint interval means a longer wait until the database is up again. \nLonger checkpoint interval also means that more WAL needs to be kept \naround, but that's not usually a concern on normal server hardware with \nplenty of disk space.\n\n> WAL_BUFFERS The default is 8.\n\nIncreasing this can increase the performance of bulk load operations but \nit doesn't make much difference otherwise.\n\n> CHECKPOINT_SEGMENTS The default is 3. This would have been too low for the\n> O.P. Would it make sense to start with a higher value\n> or is this a good value and just not appropriate for\n> the O.P.? Should CHECKPOINT_SEGMENTS be raised until\n> the checkpointing is about half CHECKPOINT_TIMEOUT,\n> e.g., 150 seconds while the dbms is running typical\n> work?\n> CHECKPOINT_TIMEOUT The default is 300 seconds.\n\nYou have to decide if you want to use checkpoint_timeout or \ncheckpoint_segments as the primary means of controlling your checkpoint \ninterval. checkpoint_timeout is easier to understand and tune, so I \nwould suggest using that. Depending on how long recovery times you can \nlive with, set it to something like 15 minutes - 60 minutes. Then set \ncheckpoint_segments to a high value; it's purpose in this scheme is \nbasically to just protect you from running out of disk space on the \nfilesystem WAL is located in.\n\nNote that unlike on DB2, the size of your transactions isn't limited by \nthe amount of transaction log you keep around; this is all about \nperformance.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 28 Jun 2007 13:01:03 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" }, { "msg_contents": "Jean-David Beyer wrote:\n> Chris wrote (in part):\n> \n>> I didn't have logging set up before but it's up and running now and I\n>> was getting\n>>\n>> LOG: checkpoints are occurring too frequently (26 seconds apart)\n>> HINT: Consider increasing the configuration parameter\n>> \"checkpoint_segments\".\n>>\n>> So I increased that from 10 to 30 and it finished:\n>>\n>> UPDATE 3500101\n>> Time: 146513.349 ms\n>>\n> I have not used postgreSQL since I tried it once in about 1998 (when I found\n> it unsatisfactory, but much has changed since then), but I am going to try\n> it again. What would be a good checkpointing interval? I would guess 26\n> seconds is too often. What considerations go into picking a checkpointing\n> interval?\n\nBasically, it depends on the amount of updates you have and whether you \nwant to minimise total writes or keep the load even. 
Lots of \ncheckpointing means you'll do more writing, but in smaller chunks. The \nonly way to find out the right value for you is to test on a realistic \nsetup I'm afraid.\n\n> \n> I note, from the book \"PostgreSQL\" second edition by Douglas and Doublas,\n> the following parameters are available:\n> \n> WAL_BUFFERS The default is 8.\n> CHECKPOINT_SEGMENTS The default is 3. This would have been too low for the\n> O.P. Would it make sense to start with a higher value\n> or is this a good value and just not appropriate for\n> the O.P.? Should CHECKPOINT_SEGMENTS be raised until\n> the checkpointing is about half CHECKPOINT_TIMEOUT,\n> e.g., 150 seconds while the dbms is running typical\n> work?\n> CHECKPOINT_TIMEOUT The default is 300 seconds.\n> CHECKPOINT_WARNING The default is 30 seconds.\n\nIf your updates are large (rather than having lots of small ones) then \nincreasing wal_buffers might be useful.\n\nIf you have a lot of updates, you'll want to increase \ncheckpoint_segments at least. You'll see mention in the logs when PG \nthinks checkpoints are too close together (checkpoint_timeout/warning).\n\nOf course, a lot of people will have PostgreSQL installed on a PC or \nlaptop along with the rest of the Linux distro. They'll not want to \nallocate too many resources.\n\n> My machine has 8 GBytes RAM and it worked perfectly well (very very little\n> paging) when it had 4 GBytes RAM. I doubled it because it was cheap at the\n> time and I was afraid it would become unavailable later. It is usually\n> between 2/3 and 3/4 used by the cache. When I run IBM DB2 on it, the choke\n> point is the IO time spent writing the logfiles.\n\nIf DB2 was I/O saturated with its transaction log, I'd be surprised if \nPG isn't too.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 13:03:39 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query taking too long" } ]
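As a rough illustration of the checkpoint advice in the thread above, a bulk-update-friendly postgresql.conf fragment might look like the sketch below. The numbers are assumptions to be tuned against your own workload and acceptable crash-recovery time, not values recommended by any of the posters; checkpoint_timeout is given in plain seconds so it also works on releases older than 8.2.

    # postgresql.conf sketch -- illustrative values only
    checkpoint_segments = 30     # keep more WAL between checkpoints, as Chris did
    checkpoint_timeout = 900     # 15 minutes; the primary knob per Heikki's advice
    checkpoint_warning = 30      # still warn in the log if checkpoints bunch up
    wal_buffers = 64             # before 8.4 this is a count of 8 kB pages (512 kB here)

Raising checkpoint_segments trades more WAL disk space and a longer crash recovery for fewer checkpoint I/O spikes during bulk updates, which is exactly the trade-off discussed above.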
[ { "msg_contents": "Hi,\n\n I am new for postgresql server. And now i work on a projects which\nrequires postgreSQL 8.0 and Java. I don't know why the server occasionally\nslow down a bit for every 3 minutes.\nI have changed the log configuration so that it logs all statement\ntransaction > 1000 ms and the result shown below :\n\n============================================================================\n<elf2 2007-06-28 14:30:25 HKT 46835574.7a64> LOG: duration: 1494.109 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:34 HKT 468354a8.7415> LOG: duration: 1048.429 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:35 HKT 468354a9.7418> LOG: duration: 1580.120 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:37 HKT 468354a9.7418> LOG: duration: 1453.620 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:51 HKT 468354a9.7419> LOG: duration: 1430.019 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:53 HKT 468354a9.7418> LOG: duration: 1243.886 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:54 HKT 468354a9.7419> LOG: duration: 1491.821 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:54 HKT 468354a9.7418> LOG: duration: 1266.516 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:40:54 HKT 468354a9.741b> LOG: duration: 1776.466 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:40:54 HKT 468357ec.d5a> LOG: duration: 1500.132 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:44:07 HKT 46835477.73b7> LOG: duration: 1011.216 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:12 HKT 46835477.73b7> LOG: duration: 1009.187 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:13 HKT 468352f9.7194> LOG: duration: 1086.769 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:14 HKT 46835477.73b7> LOG: duration: 1481.627 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:47:44 HKT 468354a9.7419> LOG: duration: 10513.208 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:48:22 HKT 468354a9.7419> LOG: duration: 38126.708 ms\nstatement: commit;begin;\n\n============================================================================\n\nFor each 3 ~ 4 minutes , there are many transactions which requires (>1\nseconds) for execution. It is strange for me seems the tables size is quite\nsmall (~ 10K < 20K row). I can said the rate of incoming transactions is\nquite steady through our the testing. So i am quite confusing why the\nperformance degrades for every 3 ~ 4 minutes. I am wondering if there is any\ndefault scheduled task in the postgreSQL 8.0\n\nThe configurations which i have amended in postgresql.conf.\n\nmax_fsm_pages = 100000\nvacuum_cost_delay = 10\n\nThe machine using :\n512 RAM\nGentoo Linux\n\nDo anyone can help me about this ? or any resolution for a sudden\nperformance degrade ( because the application i need to develop is quite\ntime-critical).\n\nThank.\nTwinsen\n\nHi,   I am new for postgresql server. And now i work on a projects which requires postgreSQL 8.0 and Java. I don't know why the server occasionally slow down a bit for every 3 minutes. 
I have changed the log configuration so that it logs all statement transaction > 1000 ms and the result shown below :\n============================================================================<elf2 2007-06-28 14:30:25 HKT 46835574.7a64> LOG:  duration: 1494.109 ms  statement: commit;begin;<elf2 2007-06-28 14:33:34 HKT \n468354a8.7415> LOG:  duration: 1048.429 ms  statement: commit;begin;<elf2 2007-06-28 14:33:35 HKT 468354a9.7418> LOG:  duration: 1580.120 ms  statement: commit;begin;<elf2 2007-06-28 14:33:37 HKT 468354a9.7418\n> LOG:  duration: 1453.620 ms  statement: commit;begin;<elf2 2007-06-28 14:36:51 HKT 468354a9.7419> LOG:  duration: 1430.019 ms  statement: commit;begin;<elf2 2007-06-28 14:36:53 HKT 468354a9.7418> LOG:  duration: \n1243.886 ms  statement: commit;begin;<elf2 2007-06-28 14:36:54 HKT 468354a9.7419> LOG:  duration: 1491.821 ms  statement: commit;begin;<elf2 2007-06-28 14:36:54 HKT 468354a9.7418> LOG:  duration: 1266.516\n ms  statement: commit;begin;    ...    ...<elf2 2007-06-28 14:40:54 HKT 468354a9.741b> LOG:  duration: 1776.466 ms  statement: commit;begin;<elf2 2007-06-28 14:40:54 HKT 468357ec.d5a> LOG:  duration: \n1500.132 ms  statement: commit;begin;    ...    ...<elf2 2007-06-28 14:44:07 HKT 46835477.73b7> LOG:  duration: 1011.216 ms  statement: commit;begin;<elf2 2007-06-28 14:44:12 HKT 46835477.73b7> LOG:  duration: \n1009.187 ms  statement: commit;begin;<elf2 2007-06-28 14:44:13 HKT 468352f9.7194> LOG:  duration: 1086.769 ms  statement: commit;begin;<elf2 2007-06-28 14:44:14 HKT 46835477.73b7> LOG:  duration: 1481.627\n ms  statement: commit;begin;   ...   ...<elf2 2007-06-28 14:47:44 HKT 468354a9.7419> LOG:  duration: 10513.208 ms  statement: commit;begin;<elf2 2007-06-28 14:48:22 HKT 468354a9.7419> LOG:  duration: \n38126.708 ms  statement: commit;begin;============================================================================For each 3 ~ 4 minutes , there are many transactions which requires (>1 seconds) for execution. It is strange for me seems the tables size is quite small (~ 10K < 20K row). I can said the rate of incoming transactions is quite steady through our the testing. So i am quite confusing why the performance degrades for every 3 ~ 4 minutes. I am wondering if there is any default scheduled task in the postgreSQL \n8.0 The configurations which i have amended in postgresql.conf.max_fsm_pages = 100000         vacuum_cost_delay = 10          The machine using :512 RAMGentoo LinuxDo anyone can help me about this ? or any resolution for a sudden performance degrade ( because the application i need to develop is quite time-critical).\nThank.Twinsen", "msg_date": "Thu, 28 Jun 2007 14:54:31 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Ho Fat Tsang wrote:\n> \n> I am new for postgresql server. And now i work on a projects which\n> requires postgreSQL 8.0 and Java. I don't know why the server occasionally\n> slow down a bit for every 3 minutes.\n\n> Do anyone can help me about this ? or any resolution for a sudden\n> performance degrade ( because the application i need to develop is quite\n> time-critical).\n\nIt's probably checkpointing. PG will write updates to the transaction \nlog (WAL) immediately and update the main data files later. Every so \noften it makes sure the data files are up-to-date and this is called \ncheckpointing.\n\nYou want checkpointing to happen more often, not less. 
That way the load \nwill be less each time it happens. See the manual for details.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 08:22:17 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Hi Richard,\n\n I have tuned the checkpoint_timeout to 30 second which is ten times less\nthan default and the issue is still reproduced. Do you have any recommended\nconfiguration for WAL ?\n\nThanks\nTwinsen\n\n2007/6/28, Richard Huxton <[email protected]>:\n>\n> Ho Fat Tsang wrote:\n> >\n> > I am new for postgresql server. And now i work on a projects which\n> > requires postgreSQL 8.0 and Java. I don't know why the server\n> occasionally\n> > slow down a bit for every 3 minutes.\n>\n> > Do anyone can help me about this ? or any resolution for a sudden\n> > performance degrade ( because the application i need to develop is quite\n> > time-critical).\n>\n> It's probably checkpointing. PG will write updates to the transaction\n> log (WAL) immediately and update the main data files later. Every so\n> often it makes sure the data files are up-to-date and this is called\n> checkpointing.\n>\n> You want checkpointing to happen more often, not less. That way the load\n> will be less each time it happens. See the manual for details.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nHi Richard,    I have tuned the checkpoint_timeout to 30 second which is ten times less than default and the issue is still reproduced. Do you have any recommended configuration for WAL ? \n  Thanks Twinsen2007/6/28, Richard Huxton <[email protected]>:\nHo Fat Tsang wrote:>>   I am new for postgresql server. And now i work on a projects which> requires postgreSQL 8.0 and Java. I don't know why the server occasionally> slow down a bit for every 3 minutes.\n> Do anyone can help me about this ? or any resolution for a sudden> performance degrade ( because the application i need to develop is quite> time-critical).It's probably checkpointing. PG will write updates to the transaction\nlog (WAL) immediately and update the main data files later. Every sooften it makes sure the data files are up-to-date and this is calledcheckpointing.You want checkpointing to happen more often, not less. That way the load\nwill be less each time it happens. See the manual for details.--   Richard Huxton   Archonet Ltd", "msg_date": "Thu, 28 Jun 2007 16:01:30 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Ho Fat Tsang wrote:\n> Hi Richard,\n> \n> I have tuned the checkpoint_timeout to 30 second which is ten times less\n> than default and the issue is still reproduced. Do you have any recommended\n> configuration for WAL ?\n\nIf you look at the output of \"vmstat 10\" and \"iostat -m 10\" (I'm \nassuming you're on Linux) does it show your I/O peaking every three \nminutes? I might have been wrong about the cause.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 09:03:50 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Hi Richard,\n\n Thank for your prompt reply. 
I have used the command \"vmstat 10\" to\ninvestigate the I/O issue and listed below :\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 0 0 26848 8376 2208 595796 0 0 16 16 14 13 5 2 91\n2\n 1 0 26848 8024 2128 596324 0 0 1595 620 2006 3489 45 7 39\n9\n 2 0 26848 8432 2024 595988 0 0 1399 163 1953 3830 38 8 47\n7\n 2 0 26936 8488 2008 596092 0 0 1696 636 1973 7423 52 8 31\n9\n 1 0 26936 8476 2008 596148 0 0 1237 660 1618 1863 34 6 50\n11 <-- The starting time when the pgsql log transaction due to long\nexecution duration.\n 0 0 26936 8024 1980 596756 0 0 1983 228 1985 2241 52 8 31\n10\n 0 2 26936 8312 2040 595904 0 0 405 16674 1449 1675 17 6 1\n76 <-- The intermediate time reaching I/O peak.\n 0 0 26936 8544 2088 594964 0 0 1191 8295 680 1038 30 4 13\n53\n 2 0 26936 8368 2124 595032 0 0 517 935 866 985 14 3 79\n4\n 0 0 26936 8368 2064 595228 0 0 1706 190 1979 2356 45 7 38\n9\n 0 0 26936 8196 2132 595452 0 0 1713 642 1913 2238 44 8 37\n11\n 1 1 26936 8164 2168 595512 0 0 1652 666 2011 2542 45 7 38\n10\n 0 1 26936 8840 2160 594592 0 0 1524 228 1846 2116 42 8 43\n7\n 0 0 26936 7384 2200 596304 0 0 1584 604 1972 2137 41 7 40\n11\n\nAs you said, it seems for each 3~4 minutes, there is a I/O peak. But what is\nthe problem indicating by it ?\n\nThanks for help.\nTwinsen\n\n2007/6/28, Richard Huxton <[email protected]>:\n>\n> Ho Fat Tsang wrote:\n> > Hi Richard,\n> >\n> > I have tuned the checkpoint_timeout to 30 second which is ten times\n> less\n> > than default and the issue is still reproduced. Do you have any\n> recommended\n> > configuration for WAL ?\n>\n> If you look at the output of \"vmstat 10\" and \"iostat -m 10\" (I'm\n> assuming you're on Linux) does it show your I/O peaking every three\n> minutes? I might have been wrong about the cause.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nHi Richard,    Thank for your prompt reply. I have used the command \"vmstat 10\" to investigate the I/O issue and listed below :procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa 0  0  26848   8376   2208 595796    0    0    16    16   14    13  5  2 91  2 1  0  26848   8024   2128 596324    0    0  1595   620 2006  3489 45  7 39  9\n 2  0  26848   8432   2024 595988    0    0  1399   163 1953  3830 38  8 47  7 2  0  26936   8488   2008 596092    0    0  1696   636 1973  7423 52  8 31  9 1  0  26936   8476   2008 596148    0    0  1237   660 1618  1863 34  6 50 11 <-- The starting time when the pgsql log transaction due to long execution duration.\n 0  0  26936   8024   1980 596756    0    0  1983   228 1985  2241 52  8 31 10 0  2  26936   8312   2040 595904    0    0   405 16674 1449  1675 17  6  1 76 <-- The intermediate time reaching I/O peak. 0  0  26936   8544   2088 594964    0    0  1191  8295  680  1038 30  4 13 53\n 2  0  26936   8368   2124 595032    0    0   517   935  866   985 14  3 79  4 0  0  26936   8368   2064 595228    0    0  1706   190 1979  2356 45  7 38  9 0  0  26936   8196   2132 595452    0    0  1713   642 1913  2238 44  8 37 11\n 1  1  26936   8164   2168 595512    0    0  1652   666 2011  2542 45  7 38 10 0  1  26936   8840   2160 594592    0    0  1524   228 1846  2116 42  8 43  7 0  0  26936   7384   2200 596304    0    0  1584   604 1972  2137 41  7 40 11\nAs you said, it seems for each 3~4 minutes, there is a I/O peak. But what is the problem indicating by it ? 
Thanks for help.Twinsen2007/6/28, Richard Huxton <\[email protected]>:Ho Fat Tsang wrote:> Hi Richard,>>   I have tuned the checkpoint_timeout to 30 second which is ten times less\n> than default and the issue is still reproduced. Do you have any recommended> configuration for WAL ?If you look at the output of \"vmstat 10\" and \"iostat -m 10\" (I'massuming you're on Linux) does it show your I/O peaking every three\nminutes? I might have been wrong about the cause.--   Richard Huxton   Archonet Ltd", "msg_date": "Thu, 28 Jun 2007 16:17:28 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Ho Fat Tsang wrote:\n> Hi Richard,\n> \n> Thank for your prompt reply. I have used the command \"vmstat 10\" to\n> investigate the I/O issue and listed below :\n> \n> procs -----------memory---------- ---swap-- -----io---- --system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 0 26848 8376 2208 595796 0 0 16 16 14 13 5 2 91\n> 2\n[etc]\n> 1 0 26936 8476 2008 596148 0 0 1237 660 1618 1863 34 6 50\n> 11 <-- The starting time when the pgsql log transaction due to long\n> execution duration.\n> 0 0 26936 8024 1980 596756 0 0 1983 228 1985 2241 52 8 31\n> 10\n> 0 2 26936 8312 2040 595904 0 0 405 16674 1449 1675 17 6 1\n> 76 <-- The intermediate time reaching I/O peak.\n[etc]\n> As you said, it seems for each 3~4 minutes, there is a I/O peak. But \n> what is\n> the problem indicating by it ?\n\nIt's a burst of writing too (bo=blocks out for those who aren't familiar \nwith vmstat).\n\nWell, there are four possibilities:\n1. Something outside of PostgreSQL\n2. An increase in update queries\n3. Checkpoints\n4. Vacuum\n\nIf you keep an eye on \"top\" at the same time as vmstat, that should show \nwhether it is another process.\n\nYou would have mentioned if this co-incided with more queries, so we can \nprobably rule that out.\n\nYou've changed checkpointing timeouts and that's not affected this.\n\nWe can see if it's autovacuum by disabling it in postgresql.conf and \nrestarting PG. Try that and see if it alters things.\n\nIt might be you need to vacuum more often (so you do less on each run) \nor it might be you need more/faster disks to keep up with your update \nactivity.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Jun 2007 10:16:00 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "On Thu, 28 Jun 2007, Ho Fat Tsang wrote:\n\n> I have tuned the checkpoint_timeout to 30 second which is ten times less \n> than default and the issue is still reproduced.\n\nDoing a checkpoint every 30 seconds is crazy; no wonder your system is \npausing so much. Put the timeout back to the default. What you should do \nhere is edit your config file and set checkpoint_warning to its maximum of \n3600. After that, take a look at the log files; you'll then get a warning \nmessage every time a checkpoint happens. If those line up with when \nyou're getting the slowdowns, then at least you'll have narrowed the cause \nof your problem, and you can get some advice here on how to make the \noverhead of checkpoints less painful.\n\nThe hint it will give is probably the first thing to try: increase \ncheckpoint_segments from the default to something much larger (if it's at \n3 now, try 10 instead to start), and see if the problem stops happening as \nfrequently. 
Your problem looks exactly like a pause at every checkpoint, \nand I'm not sure what Richard was thinking when he suggested having them \nmore often would improve things.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 28 Jun 2007 09:52:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "\"Greg Smith\" <[email protected]> writes:\n\n> On Thu, 28 Jun 2007, Ho Fat Tsang wrote:\n>\n>> I have tuned the checkpoint_timeout to 30 second which is ten times less than\n>> default and the issue is still reproduced.\n>\n> Your problem looks exactly like a pause at every checkpoint, and I'm not\n> sure what Richard was thinking when he suggested having them more often\n> would improve things.\n\nHaving frequent checkpoints is bad for overall performance but should reduce\nthe severity of the checkpoint impact. I interpreted his comment as saying he\nlowered it just as an experiment to test if it was checkpoint causing the\nproblems not as a permanent measure.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 28 Jun 2007 15:16:49 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": ">>> On Thu, Jun 28, 2007 at 1:54 AM, in message\n<[email protected]>, \"Ho Fat Tsang\"\n<[email protected]> wrote: \n> \n> I don't know why the server occasionally\n> slow down a bit for every 3 minutes.\n \nIf the problem is checkpoints, try making your background writer more aggressive. This allows more of the pages to be written to disk before the checkpoint starts. I'll show the settings which have eliminated similar problems for us, but your best settings will depend on hardware and are almost certainly going to be different. In particular, we have a battery backed caching RAID controller, which seems to change the dynamics of these sorts of issues quite a bit.\n \n#bgwriter_delay = 200ms # 10-10000ms between rounds\nbgwriter_lru_percent = 20.0 # 0-100% of LRU buffers scanned/round\nbgwriter_lru_maxpages = 200 # 0-1000 buffers max written/round\nbgwriter_all_percent = 10.0 # 0-100% of all buffers scanned/round\nbgwriter_all_maxpages = 600 # 0-1000 buffers max written/round\n \nWe also adjust a couple other WAL-related settings: \n \nwal_buffers = 160kB # min 32kB\n # (change requires restart)\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\n \nSince you're on 8.0 I think you'll need to specify wal-buffers as a number of 8KB pages.\n \n-Kevin\n \n\n\n", "msg_date": "Thu, 28 Jun 2007 09:19:11 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Hi Richard,\n\n I've tested again according your suggestion. I noticed that for each\ntime the pgsql slow down, there is a short period a process called \"pdflush\"\neating up lot of I/O. I've goolgled and know it is a process for writing\ndirty pages back to the disk by the Linux kernel. I will have further\ninvestigation on this process with my limited knowledge on Linux kernel.\n\n Correct me if i am wrong. It seems postgresql 8.0 does not bundle\nauto-vacuum by default. So all vacuum and analyse are done manually ? 
So\nwhat i have tested related to vaccuum is running auto-vacuum (a executeable\nlocated in /bin) parallel under normal production load but it seems won't\nhelp.\n\nThanks for help.\nTwinsen\n\n2007/6/28, Richard Huxton <[email protected]>:\n>\n> Ho Fat Tsang wrote:\n> > Hi Richard,\n> >\n> > Thank for your prompt reply. I have used the command \"vmstat 10\" to\n> > investigate the I/O issue and listed below :\n> >\n> > procs -----------memory---------- ---swap-- -----io---- --system--\n> > ----cpu----\n> > r b swpd free buff cache si so bi bo in cs us sy\n> id\n> > wa\n> > 0 0 26848 8376 2208 595796 0 0 16 16 14 13 5 2\n> 91\n> > 2\n> [etc]\n> > 1 0 26936 8476 2008 596148 0 0 1237 660 1618 1863 34 6\n> 50\n> > 11 <-- The starting time when the pgsql log transaction due to long\n> > execution duration.\n> > 0 0 26936 8024 1980 596756 0 0 1983 228 1985 2241 52 8\n> 31\n> > 10\n> > 0 2 26936 8312 2040 595904 0 0 405 16674 1449 1675\n> 17 6 1\n> > 76 <-- The intermediate time reaching I/O peak.\n> [etc]\n> > As you said, it seems for each 3~4 minutes, there is a I/O peak. But\n> > what is\n> > the problem indicating by it ?\n>\n> It's a burst of writing too (bo=blocks out for those who aren't familiar\n> with vmstat).\n>\n> Well, there are four possibilities:\n> 1. Something outside of PostgreSQL\n> 2. An increase in update queries\n> 3. Checkpoints\n> 4. Vacuum\n>\n> If you keep an eye on \"top\" at the same time as vmstat, that should show\n> whether it is another process.\n>\n> You would have mentioned if this co-incided with more queries, so we can\n> probably rule that out.\n>\n> You've changed checkpointing timeouts and that's not affected this.\n>\n> We can see if it's autovacuum by disabling it in postgresql.conf and\n> restarting PG. Try that and see if it alters things.\n>\n> It might be you need to vacuum more often (so you do less on each run)\n> or it might be you need more/faster disks to keep up with your update\n> activity.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nHi Richard,    I've tested again according your suggestion. I noticed that for each time the pgsql slow down, there is a short period a process called \"pdflush\" eating up lot of I/O. I've goolgled and know it is a process for writing dirty pages back to the disk by the Linux kernel. I will have further investigation on this process with my limited knowledge on Linux kernel.\n   Correct me if i am wrong. It seems postgresql 8.0 does not bundle auto-vacuum by default. So all vacuum and analyse are done manually ? So what i have tested related to vaccuum is running auto-vacuum (a executeable located in /bin) parallel under normal production load but it seems won't help. \nThanks for help.Twinsen2007/6/28, Richard Huxton <[email protected]>:\nHo Fat Tsang wrote:> Hi Richard,>>   Thank for your prompt reply. 
I have used the command \"vmstat 10\" to> investigate the I/O issue and listed below :>> procs -----------memory---------- ---swap-- -----io---- --system--\n> ----cpu----> r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id> wa> 0  0  26848   8376   2208 595796    0    0    16    16   14    13  5  2 91> 2[etc]> 1  0  26936   8476   2008 596148    0    0  1237   660 1618  1863 34  6 50\n> 11 <-- The starting time when the pgsql log transaction due to long> execution duration.> 0  0  26936   8024   1980 596756    0    0  1983   228 1985  2241 52  8 31> 10> 0  2  26936   8312   2040 595904    0    0   405 16674 1449  1675 17  6  1\n> 76 <-- The intermediate time reaching I/O peak.[etc]> As you said, it seems for each 3~4 minutes, there is a I/O peak. But> what is> the problem indicating by it ?It's a burst of writing too (bo=blocks out for those who aren't familiar\nwith vmstat).Well, there are four possibilities:1. Something outside of PostgreSQL2. An increase in update queries3. Checkpoints4. VacuumIf you keep an eye on \"top\" at the same time as vmstat, that should show\nwhether it is another process.You would have mentioned if this co-incided with more queries, so we canprobably rule that out.You've changed checkpointing timeouts and that's not affected this.\nWe can see if it's autovacuum by disabling it in postgresql.conf andrestarting PG. Try that and see if it alters things.It might be you need to vacuum more often (so you do less on each run)or it might be you need more/faster disks to keep up with your update\nactivity.--   Richard Huxton   Archonet Ltd", "msg_date": "Fri, 29 Jun 2007 10:04:41 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Ho Fat Tsang wrote:\n> Hi Richard,\n> \n> I've tested again according your suggestion. I noticed that for each\n> time the pgsql slow down, there is a short period a process called \n> \"pdflush\"\n> eating up lot of I/O. I've goolgled and know it is a process for writing\n> dirty pages back to the disk by the Linux kernel. I will have further\n> investigation on this process with my limited knowledge on Linux kernel.\n\nWell, pdflush is responsible for flushing dirty pages to disk on behalf \nof all processes.\n\nIf it's doing it every 3 minutes while checkpoints are happening every \n30 seconds then I don't see how it's PG that's responsible.\n\nThere are three possibilities:\n1. PG isn't actually checkpointing every 30 seconds.\n2. There is a burst of query activity every 3 minutes that causes a lot \nof writing.\n3. Some other process is responsible.\n\n\n\n> Correct me if i am wrong. It seems postgresql 8.0 does not bundle\n> auto-vacuum by default. So all vacuum and analyse are done manually ? So\n> what i have tested related to vaccuum is running auto-vacuum (a executeable\n> located in /bin) parallel under normal production load but it seems won't\n> help.\n\nCan't remember whether 8.0 had autovacuum bundled and turned off or not \nbundled at all. If it's not running it can't be causing this problem though.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 29 Jun 2007 10:21:22 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Richard Huxton escribi�:\n> Ho Fat Tsang wrote:\n\n> > Correct me if i am wrong. It seems postgresql 8.0 does not bundle\n> >auto-vacuum by default. 
So all vacuum and analyse are done manually ? So\n> >what i have tested related to vaccuum is running auto-vacuum (a executeable\n> >located in /bin) parallel under normal production load but it seems won't\n> >help.\n> \n> Can't remember whether 8.0 had autovacuum bundled and turned off or not \n> bundled at all. If it's not running it can't be causing this problem though.\n\nThe separate binary he found is the contrib pg_autovacuum. Integrated\nautovac got into 8.1.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 29 Jun 2007 09:33:51 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "On Fri, 29 Jun 2007, Ho Fat Tsang wrote:\n\n> I noticed that for each time the pgsql slow down, there is a short \n> period a process called \"pdflush\" eating up lot of I/O. I've goolgled \n> and know it is a process for writing dirty pages back to the disk by the \n> Linux kernel.\n\nThe pdflush documentation is really spread out, you may find my paper at \nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm a good place to \nstart looking into that.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 2 Jul 2007 18:21:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Hi Greg.\n\n2007/6/28, Greg Smith <[email protected]>:\n>\n> On Thu, 28 Jun 2007, Ho Fat Tsang wrote:\n>\n> > I have tuned the checkpoint_timeout to 30 second which is ten times less\n> > than default and the issue is still reproduced.\n\nDoing a checkpoint every 30 seconds is crazy; no wonder your system is\n> pausing so much. Put the timeout back to the default. What you should do\n> here is edit your config file and set checkpoint_warning to its maximum of\n> 3600. After that, take a look at the log files; you'll then get a warning\n> message every time a checkpoint happens. If those line up with when\n> you're getting the slowdowns, then at least you'll have narrowed the cause\n> of your problem, and you can get some advice here on how to make the\n> overhead of checkpoints less painful.\n>\n> The hint it will give is probably the first thing to try: increase\n> checkpoint_segments from the default to something much larger (if it's at\n> 3 now, try 10 instead to start), and see if the problem stops happening as\n> frequently. Your problem looks exactly like a pause at every checkpoint,\n> and I'm not sure what Richard was thinking when he suggested having them\n> more often would improve things.\n\n\nYes, Thank you for your suggestion. i have found that the slowdown time does\nnot align to checkpoint after i turned on the warning. 
The issue is related\nwhat Richard has been mentioned - Something outsides PG doing many write\noperations to pages.\n\n--\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nHi Greg.2007/6/28, Greg Smith <[email protected]>:\nOn Thu, 28 Jun 2007, Ho Fat Tsang wrote:> I have tuned the checkpoint_timeout to 30 second which is ten times less> than default and the issue is still reproduced.\nDoing a checkpoint every 30 seconds is crazy; no wonder your system ispausing so much.  Put the timeout back to the default.  What you should dohere is edit your config file and set checkpoint_warning to its maximum of\n3600.  After that, take a look at the log files; you'll then get a warningmessage every time a checkpoint happens.  If those line up with whenyou're getting the slowdowns, then at least you'll have narrowed the cause\nof your problem, and you can get some advice here on how to make theoverhead of checkpoints less painful.The hint it will give is probably the first thing to try: increasecheckpoint_segments from the default to something much larger (if it's at\n3 now, try 10 instead to start), and see if the problem stops happening asfrequently.  Your problem looks exactly like a pause at every checkpoint,and I'm not sure what Richard was thinking when he suggested having them\nmore often would improve things.Yes, Thank you for your suggestion. i have found that the slowdown time does not align to checkpoint after i turned on the warning. The issue is related what Richard has been mentioned - Something outsides PG doing many write operations to pages.\n--* Greg Smith [email protected]\nhttp://www.gregsmith.com Baltimore, MD---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to\n       choose an index scan if your joining column's datatypes do not       match", "msg_date": "Wed, 4 Jul 2007 01:03:17 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "Hi Kevin,\n\n Thank for your configuration. I have tested with this configuration\n(amended a bit) and it helps a bit. But i have found the root cause is\nrelated to the application that using PG.\n But yet i can learn much to tune the PG in my restricted environment !.\n\nRegards,\nTwinsen\n\n2007/6/28, Kevin Grittner <[email protected]>:\n>\n> >>> On Thu, Jun 28, 2007 at 1:54 AM, in message\n> <[email protected]>, \"Ho Fat\n> Tsang\"\n> <[email protected]> wrote:\n> >\n> > I don't know why the server occasionally\n> > slow down a bit for every 3 minutes.\n>\n> If the problem is checkpoints, try making your background writer more\n> aggressive. This allows more of the pages to be written to disk before the\n> checkpoint starts. I'll show the settings which have eliminated similar\n> problems for us, but your best settings will depend on hardware and are\n> almost certainly going to be different. 
In particular, we have a battery\n> backed caching RAID controller, which seems to change the dynamics of these\n> sorts of issues quite a bit.\n>\n> #bgwriter_delay = 200ms # 10-10000ms between rounds\n> bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\n> scanned/round\n> bgwriter_lru_maxpages = 200 # 0-1000 buffers max written/round\n> bgwriter_all_percent = 10.0 # 0-100% of all buffers\n> scanned/round\n> bgwriter_all_maxpages = 600 # 0-1000 buffers max written/round\n>\n> We also adjust a couple other WAL-related settings:\n>\n> wal_buffers = 160kB # min 32kB\n> # (change requires restart)\n> checkpoint_segments = 10 # in logfile segments, min 1, 16MB\n> each\n>\n> Since you're on 8.0 I think you'll need to specify wal-buffers as a number\n> of 8KB pages.\n>\n> -Kevin\n>\n>\n>\n>\n\nHi Kevin,     Thank for your configuration. I have tested with this configuration (amended a bit) and it helps a bit. But i have found the root cause is related to the application that using PG.     But yet i can learn much to tune the PG in my restricted environment !.\nRegards,Twinsen2007/6/28, Kevin Grittner <[email protected]>:\n>>> On Thu, Jun 28, 2007 at  1:54 AM, in message<[email protected]>, \"Ho Fat Tsang\"\n<[email protected]> wrote:>> I don't know why the server occasionally> slow down a bit for every 3 minutes.If the problem is checkpoints, try making your background writer more aggressive.  This allows more of the pages to be written to disk before the checkpoint starts.  I'll show the settings which have eliminated similar problems for us, but your best settings will depend on hardware and are almost certainly going to be different.  In particular, we have a battery backed caching RAID controller, which seems to change the dynamics of these sorts of issues quite a bit.\n#bgwriter_delay = 200ms                 # 10-10000ms between roundsbgwriter_lru_percent = 20.0             # 0-100% of LRU buffers scanned/roundbgwriter_lru_maxpages = 200             # 0-1000 buffers max written/round\nbgwriter_all_percent = 10.0             # 0-100% of all buffers scanned/roundbgwriter_all_maxpages = 600             # 0-1000 buffers max written/roundWe also adjust a couple other WAL-related settings:\nwal_buffers = 160kB                     # min 32kB                                        # (change requires restart)checkpoint_segments = 10                # in logfile segments, min 1, 16MB eachSince you're on \n8.0 I think you'll need to specify wal-buffers as a number of 8KB pages.-Kevin", "msg_date": "Wed, 4 Jul 2007 01:07:55 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "2007/6/29, Richard Huxton <[email protected]>:\n>\n> Ho Fat Tsang wrote:\n> > Hi Richard,\n> >\n> > I've tested again according your suggestion. I noticed that for each\n> > time the pgsql slow down, there is a short period a process called\n> > \"pdflush\"\n> > eating up lot of I/O. I've goolgled and know it is a process for writing\n> > dirty pages back to the disk by the Linux kernel. I will have further\n> > investigation on this process with my limited knowledge on Linux kernel.\n>\n> Well, pdflush is responsible for flushing dirty pages to disk on behalf\n> of all processes.\n>\n> If it's doing it every 3 minutes while checkpoints are happening every\n> 30 seconds then I don't see how it's PG that's responsible.\n>\n> There are three possibilities:\n> 1. PG isn't actually checkpointing every 30 seconds.\n> 2. 
There is a burst of query activity every 3 minutes that causes a lot\n> of writing.\n> 3. Some other process is responsible.\n\n\nExactly ! you are right, finally i have found that the root cause for this\nis the application that use PG. There is memory leak using MappedByteBuffer\n(writing in java), it leads high I/O loading and finally reaches the ratio\nthat pdflush is being kicked start in the kernel.\n\nThank you for helping a lot in digging out this issue ! learn much for you\nguys !\n\n> Correct me if i am wrong. It seems postgresql 8.0 does not bundle\n> > auto-vacuum by default. So all vacuum and analyse are done manually ? So\n> > what i have tested related to vaccuum is running auto-vacuum (a\n> executeable\n> > located in /bin) parallel under normal production load but it seems\n> won't\n> > help.\n>\n> Can't remember whether 8.0 had autovacuum bundled and turned off or not\n> bundled at all. If it's not running it can't be causing this problem\n> though.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n2007/6/29, Richard Huxton <[email protected]>:\nHo Fat Tsang wrote:> Hi Richard,>>    I've tested again according your suggestion. I noticed that for each> time the pgsql slow down, there is a short period a process called> \"pdflush\"\n> eating up lot of I/O. I've goolgled and know it is a process for writing> dirty pages back to the disk by the Linux kernel. I will have further> investigation on this process with my limited knowledge on Linux kernel.\nWell, pdflush is responsible for flushing dirty pages to disk on behalfof all processes.If it's doing it every 3 minutes while checkpoints are happening every30 seconds then I don't see how it's PG that's responsible.\nThere are three possibilities:1. PG isn't actually checkpointing every 30 seconds.2. There is a burst of query activity every 3 minutes that causes a lotof writing.3. Some other process is responsible.\nExactly ! you are right, finally i have found that the root cause for this is the application that use PG. There is memory leak using MappedByteBuffer (writing in java), it leads high I/O loading and finally reaches the ratio that pdflush is being kicked start in the kernel. \nThank you for helping a lot in digging out this issue ! learn much for you guys ! \n>   Correct me if i am wrong. It seems postgresql 8.0 does not bundle> auto-vacuum by default. So all vacuum and analyse are done manually ? So> what i have tested related to vaccuum is running auto-vacuum (a executeable\n> located in /bin) parallel under normal production load but it seems won't> help.Can't remember whether 8.0 had autovacuum bundled and turned off or notbundled at all. If it's not running it can't be causing this problem though.\n--   Richard Huxton   Archonet Ltd", "msg_date": "Wed, 4 Jul 2007 01:14:05 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" }, { "msg_contents": "2007/7/3, Greg Smith <[email protected]>:\n>\n> On Fri, 29 Jun 2007, Ho Fat Tsang wrote:\n>\n> > I noticed that for each time the pgsql slow down, there is a short\n> > period a process called \"pdflush\" eating up lot of I/O. 
I've goolgled\n> > and know it is a process for writing dirty pages back to the disk by the\n> > Linux kernel.\n>\n> The pdflush documentation is really spread out, you may find my paper at\n> http://www.westnet.com/~gsmith/content/linux-pdflush.htm a good place to\n> start looking into that.\n\n\nWhen i found the \"pdflush\" process is the major clue of PG slow down, i\ngoogled and found this article !\nit is a really good one for tuning pdflush ! Thank a lot !\n\n--\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n2007/7/3, Greg Smith <[email protected]>:\nOn Fri, 29 Jun 2007, Ho Fat Tsang wrote:> I noticed that for each time the pgsql slow down, there is a short> period a process called \"pdflush\" eating up lot of I/O. I've goolgled> and know it is a process for writing dirty pages back to the disk by the\n> Linux kernel.The pdflush documentation is really spread out, you may find my paper athttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n a good place tostart looking into that.When i found the \"pdflush\" process is the major clue of PG slow down, i googled and found this article !it is a really good one for tuning pdflush ! Thank a lot ! \n--* Greg Smith [email protected]\nhttp://www.gregsmith.com Baltimore, MD---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to\n       choose an index scan if your joining column's datatypes do not       match", "msg_date": "Wed, 4 Jul 2007 01:21:25 +0800", "msg_from": "\"Ho Fat Tsang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" } ]
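For readers who, like the poster, end up staring at pdflush: the kernel's writeback behaviour is driven by a few vm.* sysctls, and lowering the background threshold makes pdflush start writing earlier, in smaller bursts, instead of one large spike. The values below are only an illustrative sketch and are not proposed anywhere in this thread; sensible numbers depend on RAM size and disk speed (Greg Smith's pdflush paper linked above covers this in detail).

    # /etc/sysctl.conf sketch -- illustrative values only
    vm.dirty_background_ratio = 5    # begin background writeback at 5% dirty memory
    vm.dirty_ratio = 10              # block writers once 10% of memory is dirty
    # apply without a reboot: sysctl -p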
[ { "msg_contents": "Two points:\n\n* need more information about the circumstances.\n\n* could it be that autovaccum hits you?\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\t[PERFORM] PostgreSQL 8.0 occasionally slow down\nVon:\t\"Ho Fat Tsang\" <[email protected]>\nDatum:\t\t28.06.2007 06:56\n\nHi,\n\n I am new for postgresql server. And now i work on a projects which\nrequires postgreSQL 8.0 and Java. I don't know why the server occasionally\nslow down a bit for every 3 minutes.\nI have changed the log configuration so that it logs all statement\ntransaction > 1000 ms and the result shown below :\n\n============================================================================\n<elf2 2007-06-28 14:30:25 HKT 46835574.7a64> LOG: duration: 1494.109 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:34 HKT 468354a8.7415> LOG: duration: 1048.429 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:35 HKT 468354a9.7418> LOG: duration: 1580.120 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:33:37 HKT 468354a9.7418> LOG: duration: 1453.620 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:51 HKT 468354a9.7419> LOG: duration: 1430.019 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:53 HKT 468354a9.7418> LOG: duration: 1243.886 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:54 HKT 468354a9.7419> LOG: duration: 1491.821 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:36:54 HKT 468354a9.7418> LOG: duration: 1266.516 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:40:54 HKT 468354a9.741b> LOG: duration: 1776.466 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:40:54 HKT 468357ec.d5a> LOG: duration: 1500.132 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:44:07 HKT 46835477.73b7> LOG: duration: 1011.216 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:12 HKT 46835477.73b7> LOG: duration: 1009.187 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:13 HKT 468352f9.7194> LOG: duration: 1086.769 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:44:14 HKT 46835477.73b7> LOG: duration: 1481.627 ms\nstatement: commit;begin;\n ...\n ...\n<elf2 2007-06-28 14:47:44 HKT 468354a9.7419> LOG: duration: 10513.208 ms\nstatement: commit;begin;\n<elf2 2007-06-28 14:48:22 HKT 468354a9.7419> LOG: duration: 38126.708 ms\nstatement: commit;begin;\n\n============================================================================\n\nFor each 3 ~ 4 minutes , there are many transactions which requires (>1\nseconds) for execution. It is strange for me seems the tables size is quite\nsmall (~ 10K < 20K row). I can said the rate of incoming transactions is\nquite steady through our the testing. So i am quite confusing why the\nperformance degrades for every 3 ~ 4 minutes. I am wondering if there is any\ndefault scheduled task in the postgreSQL 8.0\n\nThe configurations which i have amended in postgresql.conf.\n\nmax_fsm_pages = 100000\nvacuum_cost_delay = 10\n\nThe machine using :\n512 RAM\nGentoo Linux\n\nDo anyone can help me about this ? or any resolution for a sudden\nperformance degrade ( because the application i need to develop is quite\ntime-critical).\n\nThank.\nTwinsen\n\n", "msg_date": "Thu, 28 Jun 2007 09:17:59 +0200", "msg_from": "\"Andreas Kostyrka\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.0 occasionally slow down" } ]
[ { "msg_contents": "Hi,\n\nI am new to PostgreSQL database. Can anybody help me (or point me the\nrelated post) to install PostgreSQL on windows XP from command line.\n(From .bat file)\n\n \n\nThanks\n\nSachi\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi,\nI am new to PostgreSQL database. Can anybody help me (or\npoint me the related post) to install PostgreSQL on windows XP from command\nline. (From .bat file)\n \nThanks\nSachi", "msg_date": "Thu, 28 Jun 2007 15:09:45 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to install Postgresql 8.2.x on windows XP silently" }, { "msg_contents": "Sachchida Ojha wrote:\n> Hi,\n> \n> I am new to PostgreSQL database. Can anybody help me (or point me the\n> related post) to install PostgreSQL on windows XP from command line.\n> (From .bat file)\n\nhttp://pginstaller.projects.postgresql.org/silent.html\n\n//Magnus\n", "msg_date": "Thu, 28 Jun 2007 21:44:59 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to install Postgresql 8.2.x on windows XP silently" } ]
[ { "msg_contents": "Hi,\n\nI have found some discussions about that issue, but did not find the\nanswer actually.\n\nIs there a way to be sure, that some indexes are alway in memory? My\ntests bringing them to the memory based file system (ramfs) tablespace\nshowed really a very significant performance gain. But a perspective\nto have a corrupted database after each machine restart does not\nreally make me feel good.\n\nWith best regards,\n\n-- Valentine Gogichashvili\n\n", "msg_date": "Mon, 02 Jul 2007 10:04:46 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] is it possible to force an index to be held in memory?" } ]
[ { "msg_contents": "Hi all,\n\n I need a very urgent help from you all in below case.\n\n I have a query\n\n SELECT amp.campaign_id, dam.allocation_map_id,amp.optimize_type,\namp.optimize_by_days, amp.rate, amp.action_id,amp.actions_delta,\namp.vearned_today, amp.creative_id, amp.channel_code,SUM(CASE\ndam.sqldatewhen 20070701 then\ndam.actions_delivered else 0 end) as action_yest,SUM(CASE sign(20070624 -\ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as\naction_wk1,SUM(CASE sign(20070617 - dam.sqldate) when -1 then\ndam.actions_delivered else 0 end) as action_wk2,SUM(CASE sign(20070610 -\ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as\naction_wk3,SUM(CASE sign(20070603 - dam.sqldate) when -1 then\ndam.actions_delivered else 0 end) as action_wk4,SUM(CASE sign(20070527 -\ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as\naction_wk5,SUM(CASE sign(20070520 - dam.sqldate) when -1 then\ndam.actions_delivered else 0 end) as action_wk6,SUM(CASE sign(20070513 -\ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as\naction_wk7,SUM(CASE sign(20070506 - dam.sqldate) when -1 then\ndam.actions_delivered else 0 end) as action_wk8,SUM(CASE dam.sqldate when\n20070701 then dam.vearned_total else 0 end) as earned_yest,SUM(CASE\nsign(20070624 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as\nvearned_wk1,SUM(CASE sign(20070617 - dam.sqldate) when -1 then\ndam.vearned_total else 0 end) as vearned_wk2,SUM(CASE sign(20070610 -\ndam.sqldate) when -1 then dam.vearned_total else 0 end) as\nvearned_wk3,SUM(CASE sign(20070603 - dam.sqldate) when -1 then\ndam.vearned_total else 0 end) as vearned_wk4,SUM(CASE sign(20070527 -\ndam.sqldate) when -1 then dam.vearned_total else 0 end) as\nvearned_wk5,SUM(CASE sign(20070520 - dam.sqldate) when -1 then\ndam.vearned_total else 0 end) as vearned_wk6,SUM(CASE sign(20070513 -\ndam.sqldate) when -1 then dam.vearned_total else 0 end) as\nvearned_wk7,SUM(CASE sign(20070506 - dam.sqldate) when -1 then\ndam.vearned_total else 0 end) as vearned_wk8,SUM(CASE dam.sqldate when\n20070701 then dam.vactions_delivered else 0 end) as vactions_yest,SUM(CASE\nsign(20070624 - dam.sqldate) when -1 then dam.vactions_delivered else 0 end)\nas vactionsdel1,SUM(CASE sign(20070617 - dam.sqldate ) when -1 then\ndam.vactions_delivered else 0 end) as vactionsdel2,SUM(CASE sign(20070610 -\ndam.sqldate) when -1 then dam.vactions_delivered else 0 end) as\nvactionsdel3,SUM(CASE sign(20070603 - dam.sqldate) when -1 then\ndam.vactions_delivered else 0 end) as vactionsdel4,SUM(CASE sign(20070527 -\ndam.sqldate) when -1 then dam.vactions_delivered else 0 end) as\nvactionsdel5, SUM(CASE sign(20070520 - dam.sqldate) when -1 then\ndam.vactions_delivered else 0 end) as vactionsdel6,SUM(CASE sign(20070513 -\ndam.sqldate) when -1 then dam.vactions_delivered else 0 end) as\nvactionsdel7,SUM(CASE sign(20070506 - dam.sqldate) when -1 then\ndam.vactions_delivered else 0 end) as vactionsdel8 FROM delivered_action_map\ndam INNER JOIN (SELECT a.campaign_id, a.optimize_type,a.optimize_by_days,\na.rate, a.action_id, am.creative_id, am.channel_code, amt.actions_delta,\namt.vearned_today, am.id AS allocation_map_id FROM (SELECT c.campaign_id ,\nc.optimize_type, c.optimize_by_days, a1.rate, a1.id AS action_id FROM action\na1 INNER JOIN (SELECT c1.asset_id AS campaign_id, ca.value AS\noptimize_type,c1.optimize_by_days AS optimize_by_days FROM campaign c1 INNER\nJOIN (SELECT ca2.campaign_id AS campaign_id, ca3.value AS value FROM\ncampaign_attributes ca2, 
campaign_attributes ca3 WHERE ca2.campaign_id =\nca3.campaign_id AND ca2.attribute='OPTIMIZE_STATUS' AND ca2.value = '1'AND\nca3. attribute ='OPTIMIZE_TYPE') as ca ON c1.asset_id=ca.campaign_id AND\n20070702 BETWEEN (c1.start_date - interval '1 day') AND\n(c1.end_date+interval '1day') AND\nc1.status = 'A' AND c1.revenue_type != 'FOC' AND c1.action_type >= 1 AND\nc1.optimize_by_days > 0) AS c ON a1.campaign_id = c.campaign_id AND\na1.status = 'A') AS a, allocation_map am, action_metrics amt WHERE\na.action_id = amt.action_id AND am.id = amt.allocation_map_id AND\nam.status= 'A') AS amp ON\ndam.allocation_map_id= amp.allocation_map_id AND dam.action_id =\namp.action_id GROUP BY amp.campaign_id, amp.optimize_type,\namp.optimize_by_days, amp.rate, amp.action_id, amp.actions_delta ,\namp.creative_id, amp.channel_code, dam.allocation_map_id, amp.vearned_today;\n\nafter vacuuming the db it has become very very slow ... 100 times slow.\n\nPlease suggest ?\n\nRegards\nVidhya\n\n\nHi all,\n \n   I need a very urgent help from you all in below case.\n \n   I have a query \n \n  SELECT amp.campaign_id, dam.allocation_map_id,amp.optimize_type,amp.optimize_by_days, amp.rate, amp.action_id,amp.actions_delta, amp.vearned_today, amp.creative_id, amp.channel_code,SUM(CASE dam.sqldate when 20070701 then \ndam.actions_delivered else 0 end) as action_yest,SUM(CASE sign(20070624 - dam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk1,SUM(CASE sign(20070617 - dam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk2,SUM(CASE sign(20070610 - \ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk3,SUM(CASE sign(20070603 - dam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk4,SUM(CASE sign(20070527 - dam.sqldate) when -1 then \ndam.actions_delivered else 0 end) as action_wk5,SUM(CASE sign(20070520 - dam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk6,SUM(CASE sign(20070513 - dam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk7,SUM(CASE sign(20070506 - \ndam.sqldate) when -1 then dam.actions_delivered else 0 end) as action_wk8,SUM(CASE dam.sqldate when 20070701 then dam.vearned_total else 0 end) as earned_yest,SUM(CASE sign(20070624 - dam.sqldate) when -1 then dam.vearned_total\n else 0 end) as vearned_wk1,SUM(CASE sign(20070617 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk2,SUM(CASE sign(20070610 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk3,SUM(CASE sign(20070603 - \ndam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk4,SUM(CASE sign(20070527 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk5,SUM(CASE sign(20070520 - dam.sqldate) when -1 then dam.vearned_total\n else 0 end) as vearned_wk6,SUM(CASE sign(20070513 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk7,SUM(CASE sign(20070506 - dam.sqldate) when -1 then dam.vearned_total else 0 end) as vearned_wk8,SUM(CASE \ndam.sqldate when 20070701 then dam.vactions_delivered else 0 end) as vactions_yest,SUM(CASE sign(20070624 - dam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel1,SUM(CASE sign(20070617 - dam.sqldate\n ) when -1 then dam.vactions_delivered else 0 end) as vactionsdel2,SUM(CASE sign(20070610 - dam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel3,SUM(CASE sign(20070603 - dam.sqldate) when -1 then dam.vactions_delivered\n else 0 end) as vactionsdel4,SUM(CASE sign(20070527 - 
dam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel5, SUM(CASE sign(20070520 - dam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel6,SUM(CASE sign(20070513 - \ndam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel7,SUM(CASE sign(20070506 - dam.sqldate) when -1 then dam.vactions_delivered else 0 end) as vactionsdel8 FROM delivered_action_map dam  INNER JOIN  (SELECT \na.campaign_id, a.optimize_type,a.optimize_by_days,a.rate, a.action_id, am.creative_id, am.channel_code,  amt.actions_delta, amt.vearned_today, \nam.id AS allocation_map_id FROM  (SELECT c.campaign_id , c.optimize_type, c.optimize_by_days, a1.rate, a1.id AS action_id FROM action a1 INNER JOIN  (SELECT \nc1.asset_id AS campaign_id, ca.value AS optimize_type,c1.optimize_by_days AS optimize_by_days FROM campaign c1 INNER JOIN (SELECT ca2.campaign_id AS campaign_id, ca3.value AS value FROM campaign_attributes ca2, campaign_attributes ca3 WHERE \nca2.campaign_id = ca3.campaign_id AND ca2.attribute='OPTIMIZE_STATUS' AND ca2.value = '1'AND ca3. attribute ='OPTIMIZE_TYPE') as ca ON c1.asset_id=ca.campaign_id  AND 20070702 BETWEEN (c1.start_date\n - interval '1 day') AND (c1.end_date +interval '1day') AND c1.status = 'A' AND c1.revenue_type != 'FOC' AND c1.action_type >= 1 AND c1.optimize_by_days > 0) AS c ON a1.campaign_id = \nc.campaign_id AND a1.status = 'A') AS a, allocation_map am, action_metrics amt WHERE a.action_id = amt.action_id AND am.id\n = amt.allocation_map_id AND am.status = 'A') AS amp ON dam.allocation_map_id= amp.allocation_map_id AND dam.action_id = amp.action_id GROUP BY amp.campaign_id, amp.optimize_type, amp.optimize_by_days, amp.rate\n, amp.action_id, amp.actions_delta , amp.creative_id, amp.channel_code, dam.allocation_map_id, amp.vearned_today; \nafter vacuuming the db it has become very very slow ... 100 times slow.\n \nPlease suggest ?\n \nRegards\nVidhya", "msg_date": "Mon, 2 Jul 2007 15:37:04 +0530", "msg_from": "\"Vidhya Bondre\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow query" }, { "msg_contents": "Vidhya Bondre skrev:\n\n> Hi all,\n> \n> I need a very urgent help from you all in below case.\n> \n> I have a query\n\n[snipped]\n\n> after vacuuming the db it has become very very slow ... 100 times slow.\n> \n> Please suggest ?\n\nSuggestions for getting more/better responses:\n\n- Format your query nicely before posting it.\n- Post the relevant table definitions, including indices\n- Tell us what the query is supposed to do.\n\nSuggestions for finding the cause of your problem:\n\n- Run \"EXPLAIN ANALYZE\" on the query.\n- Try to \"remove bits\" of the query to see which bits slow it down - try\nto find a \"minimal query\" which shows the performance problem. If you\ncan, use the output of \"EXPLAIN ANALYZE\" obtained above. For instance,\nall the SUMs in the SELECT clause are unlikely to significantly affect\nthe running time.\n- Run \"EXPLAIN ANALYZE\" on the \"minimal query\", post the results.\n\nNis\n\n", "msg_date": "Mon, 02 Jul 2007 15:20:37 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" } ]
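To make Nis' checklist concrete: after a slowdown that follows a vacuum, the usual first steps are to refresh the planner statistics and capture a plan for a stripped-down piece of the query. The statements below are only a generic sketch; the two-table join is an arbitrary simplification of the query above, not a tested "minimal query".

    VACUUM ANALYZE delivered_action_map;
    VACUUM ANALYZE action_metrics;
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM delivered_action_map dam
    JOIN action_metrics amt ON amt.action_id = dam.action_id;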
[ { "msg_contents": "I have the same schema in two different databases. In \"smalldb\", the two tables of interest have about 430,000 rows, in \"bigdb\", the two tables each contain about 5.5 million rows. I'm processing the data, and for various reasons it works out well to process it in 100,000 row chunks. However, it turns out for the big schema, selecting 100,000 rows is the longest single step of the processing.\n\nBelow is the explain/analyze output of the query from each database. Since both tables are indexed on the joined columns, I don't understand why the big table should be so much slower -- I hoped this would scale well, or at least O(log(N)), not O(N).\n\nWhat's going on here? I don't know if I'm reading this right, but it looks like the sort is taking all the time, but that doesn't make sense because in both cases it's sorting 100,000 rows.\n\nThanks,\nCraig\n\n\nbigdb=> explain analyze\nbigdb-> select r.row_num, m.molkeys from my_rownum r\nbigdb-> join my_molkeys m on (r.version_id = m.version_id)\nbigdb-> where r.row_num >= 100000 AND r.row_num < 200000\nbigdb-> order by r.row_num;\n\n Sort (cost=431000.85..431248.23 rows=98951 width=363) (actual time=46306.748..46417.448 rows=100000 loops=1)\n Sort Key: r.row_num\n -> Hash Join (cost=2583.59..422790.68 rows=98951 width=363) (actual time=469.010..45752.131 rows=100000 loops=1)\n Hash Cond: (\"outer\".version_id = \"inner\".version_id)\n -> Seq Scan on my_molkeys m (cost=0.00..323448.30 rows=5472530 width=363) (actual time=11.243..33299.933 rows=5472532 loops=1)\n -> Hash (cost=2336.21..2336.21 rows=98951 width=8) (actual time=442.260..442.260 rows=100000 loops=1)\n -> Index Scan using i_chm_rownum_row_num on my_rownum r (cost=0.00..2336.21 rows=98951 width=8) (actual time=47.551..278.736 rows=100000 loops=1)\n Index Cond: ((row_num >= 100000) AND (row_num < 200000))\n Total runtime: 46543.163 ms\n\n\nsmalldb=> explain analyze\nsmalldb-> select r.row_num, m.molkeys from my_rownum r\nsmalldb-> join my_molkeys m on (r.version_id = m.version_id)\nsmalldb-> where r.row_num >= 100000 AND r.row_num < 200000\nsmalldb-> order by r.row_num;\n\n Sort (cost=43598.23..43853.38 rows=102059 width=295) (actual time=4097.180..4207.733 rows=100000 loops=1)\n Sort Key: r.row_num\n -> Hash Join (cost=2665.09..35107.41 rows=102059 width=295) (actual time=411.635..3629.756 rows=100000 loops=1)\n Hash Cond: (\"outer\".version_id = \"inner\".version_id)\n -> Seq Scan on my_molkeys m (cost=0.00..23378.90 rows=459590 width=295) (actual time=8.563..2011.455 rows=459590 loops=1)\n -> Hash (cost=2409.95..2409.95 rows=102059 width=8) (actual time=402.867..402.867 rows=100000 loops=1)\n -> Index Scan using i_chm_rownum_row_num_8525 on my_rownum r (cost=0.00..2409.95 rows=102059 width=8) (actual time=37.122..242.528 rows=100000 loops=1)\n Index Cond: ((row_num >= 100000) AND (row_num < 200000))\n Total runtime: 4333.501 ms\n\n\n\nTable \"bigdb.my_rownum\"\n Column | Type | Modifiers \n------------+---------+-----------\n version_id | integer | \n parent_id | integer | \n row_num | integer | \nIndexes:\n \"i_chm_rownum_row_num\" UNIQUE, btree (row_num)\n \"i_chm_rownum_version_id\" UNIQUE, btree (version_id)\n \"i_chm_rownum_parent_id\" btree (parent_id)\n\n\n\nTable \"bigdb.my_molkeys\"\n Column | Type | Modifiers \n------------+---------+-----------\n version_id | integer | \n molkeys | text | \nIndexes:\n \"i_chm_molkeys_version_id\" UNIQUE, btree (version_id)\n", "msg_date": "Mon, 02 Jul 2007 16:30:59 -0700", "msg_from": "Craig James <[email 
protected]>", "msg_from_op": true, "msg_subject": "Join with lower/upper limits doesn't scale well" }, { "msg_contents": "\"Craig James\" <[email protected]> writes:\n\n> Below is the explain/analyze output of the query from each database. Since\n> both tables are indexed on the joined columns, I don't understand why the\n> big table should be so much slower -- I hoped this would scale well, or at\n> least O(log(N)), not O(N).\n...\n> Sort (cost=431000.85..431248.23 rows=98951 width=363) (actual time=46306.748..46417.448 rows=100000 loops=1)\n> Sort Key: r.row_num\n> -> Hash Join (cost=2583.59..422790.68 rows=98951 width=363) (actual time=469.010..45752.131 rows=100000 loops=1)\n> Hash Cond: (\"outer\".version_id = \"inner\".version_id)\n> -> Seq Scan on my_molkeys m (cost=0.00..323448.30 rows=5472530 width=363) (actual time=11.243..33299.933 rows=5472532 loops=1)\n> -> Hash (cost=2336.21..2336.21 rows=98951 width=8) (actual time=442.260..442.260 rows=100000 loops=1)\n> -> Index Scan using i_chm_rownum_row_num on my_rownum r (cost=0.00..2336.21 rows=98951 width=8) (actual time=47.551..278.736 rows=100000 loops=1)\n> Index Cond: ((row_num >= 100000) AND (row_num < 200000))\n> Total runtime: 46543.163 ms\n\nIt looks like most of the time is being spent in the sequential scan of the\nmy_molkeys at least 33 seconds out of 46 seconds is. The actual sort is taking\nunder a second (the hash join finishes after 45.7s and the sort finishes after\n46.4s). \n\nThe rest of the time (about 13s) is actually being spent in the hash join\nwhich makes me think it's overflowing work_mem and having to process the join\nin two batches. You might be able to speed it up by raising work_mem for this\nquery (you can set work_mem locally using SET)\n\nThe row_num where clause only narrows down the set of rows coming from the\nmy_rownum table. If you want sublinear performance you would have to provide\nsome way for Postgres to narrow down the rows from my_molkeys without actually\nhaving to read them all in.\n\nWith the query as is the only way I can see that happening would be if you had\nan index on \"my_molkey(version_id)\" and \"my_rownum(version_id) WHERE row_num\nbetween 100000 and 200000\". Then it could do a merge join between two index\nscans.\n\nNote than even then I'm surprised the optimizer is bothering with the index\nfor these queries, at least for the 400k case. Do you have enable_seqscan=off\nor random_page_cost dialled way down?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Tue, 03 Jul 2007 01:04:46 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join with lower/upper limits doesn't scale well" }, { "msg_contents": "The two queries below produce different plans.\n\nselect r.version_id, r.row_num, m.molkeys from my_rownum r\njoin my_molkeys m on (r.version_id = m.version_id)\nwhere r.version_id >= 3200000\nand r.version_id < 3300000\norder by r.version_id;\n\n\nselect r.version_id, r.row_num, m.molkeys from my_rownum r\njoin my_molkeys m on (r.version_id = m.version_id)\nwhere r.version_id >= 3200000\nand r.version_id < 3300000\nand m.version_id >= 3200000\nand m.version_id < 3300000\norder by r.version_id;\n\nI discovered this while looking at the plans for the first query. It seemed to be ignoring the fact that it could push the \"between\" condition along to the second table, since the condition and the join are on the same indexed columns. So, I added a redundant condition, and bingo, it was a lot faster. 
In the analysis shown below, the timing (about 1.0 and 1.5 seconds respectively) are for a \"hot\" database that's been queried a couple of times. In real life on a \"cold\" database, the times are more like 10 seconds and 21 seconds, so it's quite significant.\n\nThanks,\nCraig\n\n\n\ndb=> explain analyze \ndb-> select r.version_id, r.row_num, m.molkeys from my_rownum r\ndb-> join my_molkeys m on (r.version_id = m.version_id)\ndb-> where r.version_id >= 3200000\ndb-> and r.version_id < 3300000\ndb-> order by r.version_id;\n\n Sort (cost=264979.51..265091.06 rows=44620 width=366) (actual time=1424.126..1476.048 rows=46947 loops=1)\n Sort Key: r.version_id\n -> Nested Loop (cost=366.72..261533.64 rows=44620 width=366) (actual time=41.649..1186.331 rows=46947 loops=1)\n -> Bitmap Heap Scan on my_rownum r (cost=366.72..41168.37 rows=44620 width=8) (actual time=41.616..431.783 rows=46947 loops=1)\n Recheck Cond: ((version_id >= 3200000) AND (version_id < 3300000))\n -> Bitmap Index Scan on i_chm_rownum_version_id_4998 (cost=0.00..366.72 rows=44620 width=0) (actual time=21.244..21.244 rows=46947 loops=1)\n Index Cond: ((version_id >= 3200000) AND (version_id < 3300000))\n -> Index Scan using i_chm_molkeys_version_id on my_molkeys m (cost=0.00..4.93 rows=1 width=362) (actual time=0.009..0.010 rows=1 loops=46947)\n Index Cond: (\"outer\".version_id = m.version_id)\n Total runtime: 1534.638 ms\n(10 rows)\n\n\ndb=> explain analyze \ndb-> select r.version_id, r.row_num, m.molkeys from my_rownum r\ndb-> join my_molkeys m on (r.version_id = m.version_id)\ndb-> where r.version_id >= 3200000\ndb-> and r.version_id < 3300000\ndb-> and m.version_id >= 3200000\ndb-> and m.version_id < 3300000\ndb-> order by r.version_id;\n\n Sort (cost=157732.20..157732.95 rows=298 width=366) (actual time=985.383..1037.423 rows=46947 loops=1)\n Sort Key: r.version_id\n -> Hash Join (cost=41279.92..157719.95 rows=298 width=366) (actual time=502.875..805.402 rows=46947 loops=1)\n Hash Cond: (\"outer\".version_id = \"inner\".version_id)\n -> Index Scan using i_chm_molkeys_version_id on my_molkeys m (cost=0.00..115717.85 rows=47947 width=362) (actual time=0.023..117.270 rows=46947 loops=1)\n Index Cond: ((version_id >= 3200000) AND (version_id < 3300000))\n -> Hash (cost=41168.37..41168.37 rows=44620 width=8) (actual time=502.813..502.813 rows=46947 loops=1)\n -> Bitmap Heap Scan on my_rownum r (cost=366.72..41168.37 rows=44620 width=8) (actual time=41.621..417.508 rows=46947 loops=1)\n Recheck Cond: ((version_id >= 3200000) AND (version_id < 3300000))\n -> Bitmap Index Scan on i_chm_rownum_version_id_4998 (cost=0.00..366.72 rows=44620 width=0) (actual time=21.174..21.174 rows=46947 loops=1)\n Index Cond: ((version_id >= 3200000) AND (version_id < 3300000))\n Total runtime: 1096.031 ms\n(12 rows)\n", "msg_date": "Tue, 10 Jul 2007 17:53:17 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Equivalent queries produce different plans" }, { "msg_contents": "Sorry, I forgot to mention: This is 8.1.4, with a fairly ordinary configuration on a 4 GB system.\n\nCraig\n\n\nCraig James wrote:\n> The two queries below produce different plans.\n> \n> select r.version_id, r.row_num, m.molkeys from my_rownum r\n> join my_molkeys m on (r.version_id = m.version_id)\n> where r.version_id >= 3200000\n> and r.version_id < 3300000\n> order by r.version_id;\n> \n> \n> select r.version_id, r.row_num, m.molkeys from my_rownum r\n> join my_molkeys m on (r.version_id = m.version_id)\n> where r.version_id >= 
3200000\n> and r.version_id < 3300000\n> and m.version_id >= 3200000\n> and m.version_id < 3300000\n> order by r.version_id;\n> \n> I discovered this while looking at the plans for the first query. It \n> seemed to be ignoring the fact that it could push the \"between\" \n> condition along to the second table, since the condition and the join \n> are on the same indexed columns. So, I added a redundant condition, and \n> bingo, it was a lot faster. In the analysis shown below, the timing \n> (about 1.0 and 1.5 seconds respectively) are for a \"hot\" database that's \n> been queried a couple of times. In real life on a \"cold\" database, the \n> times are more like 10 seconds and 21 seconds, so it's quite significant.\n> \n> Thanks,\n> Craig\n> \n> \n> \n> db=> explain analyze db-> select r.version_id, r.row_num, m.molkeys from \n> my_rownum r\n> db-> join my_molkeys m on (r.version_id = m.version_id)\n> db-> where r.version_id >= 3200000\n> db-> and r.version_id < 3300000\n> db-> order by r.version_id;\n> \n> Sort (cost=264979.51..265091.06 rows=44620 width=366) (actual \n> time=1424.126..1476.048 rows=46947 loops=1)\n> Sort Key: r.version_id\n> -> Nested Loop (cost=366.72..261533.64 rows=44620 width=366) (actual \n> time=41.649..1186.331 rows=46947 loops=1)\n> -> Bitmap Heap Scan on my_rownum r (cost=366.72..41168.37 \n> rows=44620 width=8) (actual time=41.616..431.783 rows=46947 loops=1)\n> Recheck Cond: ((version_id >= 3200000) AND (version_id < \n> 3300000))\n> -> Bitmap Index Scan on i_chm_rownum_version_id_4998 \n> (cost=0.00..366.72 rows=44620 width=0) (actual time=21.244..21.244 \n> rows=46947 loops=1)\n> Index Cond: ((version_id >= 3200000) AND (version_id \n> < 3300000))\n> -> Index Scan using i_chm_molkeys_version_id on my_molkeys m \n> (cost=0.00..4.93 rows=1 width=362) (actual time=0.009..0.010 rows=1 \n> loops=46947)\n> Index Cond: (\"outer\".version_id = m.version_id)\n> Total runtime: 1534.638 ms\n> (10 rows)\n> \n> \n> db=> explain analyze db-> select r.version_id, r.row_num, m.molkeys from \n> my_rownum r\n> db-> join my_molkeys m on (r.version_id = m.version_id)\n> db-> where r.version_id >= 3200000\n> db-> and r.version_id < 3300000\n> db-> and m.version_id >= 3200000\n> db-> and m.version_id < 3300000\n> db-> order by r.version_id;\n> \n> Sort (cost=157732.20..157732.95 rows=298 width=366) (actual \n> time=985.383..1037.423 rows=46947 loops=1)\n> Sort Key: r.version_id\n> -> Hash Join (cost=41279.92..157719.95 rows=298 width=366) (actual \n> time=502.875..805.402 rows=46947 loops=1)\n> Hash Cond: (\"outer\".version_id = \"inner\".version_id)\n> -> Index Scan using i_chm_molkeys_version_id on my_molkeys m \n> (cost=0.00..115717.85 rows=47947 width=362) (actual time=0.023..117.270 \n> rows=46947 loops=1)\n> Index Cond: ((version_id >= 3200000) AND (version_id < \n> 3300000))\n> -> Hash (cost=41168.37..41168.37 rows=44620 width=8) (actual \n> time=502.813..502.813 rows=46947 loops=1)\n> -> Bitmap Heap Scan on my_rownum r \n> (cost=366.72..41168.37 rows=44620 width=8) (actual time=41.621..417.508 \n> rows=46947 loops=1)\n> Recheck Cond: ((version_id >= 3200000) AND \n> (version_id < 3300000))\n> -> Bitmap Index Scan on \n> i_chm_rownum_version_id_4998 (cost=0.00..366.72 rows=44620 width=0) \n> (actual time=21.174..21.174 rows=46947 loops=1)\n> Index Cond: ((version_id >= 3200000) AND \n> (version_id < 3300000))\n> Total runtime: 1096.031 ms\n> (12 rows)\n> \n\n\n", "msg_date": "Tue, 10 Jul 2007 18:06:31 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": 
true, "msg_subject": "Re: Equivalent queries produce different plans" }, { "msg_contents": "Craig James <[email protected]> writes:\n> The two queries below produce different plans.\n\n> select r.version_id, r.row_num, m.molkeys from my_rownum r\n> join my_molkeys m on (r.version_id = m.version_id)\n> where r.version_id >= 3200000\n> and r.version_id < 3300000\n> order by r.version_id;\n\n> select r.version_id, r.row_num, m.molkeys from my_rownum r\n> join my_molkeys m on (r.version_id = m.version_id)\n> where r.version_id >= 3200000\n> and r.version_id < 3300000\n> and m.version_id >= 3200000\n> and m.version_id < 3300000\n> order by r.version_id;\n\nYeah, the planner does not make any attempt to infer implied\ninequalities, so it will not generate the last two clauses for you.\nThere is machinery in there to infer implied *equalities*, which\nis cheaper (fewer operators to consider) and much more useful across\ntypical queries such as multiway joins on the same keys. I'm pretty\ndubious that it'd be worth the cycles to search for implied\ninequalities.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2007 21:25:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalent queries produce different plans " }, { "msg_contents": "Here's an oddity. I have 10 databases, each with about a dozen connections to Postgres (about 120 connections total), and at midnight they're all idle. These are mod_perl programs (like a FastCGI -- they stay connected so they're ready for instant service). So using \"ps -ef\" and grep, we find one of the databases looks like this:\n\npostgres 22708 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46915) idle\npostgres 22709 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46916) idle\npostgres 22710 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46917) idle\npostgres 22711 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46918) idle\npostgres 22712 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(46919) idle\npostgres 22724 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42440) idle\npostgres 22725 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42441) idle\npostgres 22726 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42442) idle\npostgres 22727 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42443) idle\npostgres 22728 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42444) idle\npostgres 22731 7619 0 Jul11 ? 00:00:06 postgres: chemmega chemmega 192.168.10.192(42447) idle\n\nNow here's the weird thing. I'm running a pg_restore of a database (on the order of 4GB compressed, maybe 34M rows of ordinary data, and 15M rows in one BLOB table that's typically 2K per blob). When I do this, ALL of the postgress backends start working at about 1% CPU apiece. This means that the 120 \"idle\" postgres backends are together using almost 100% of one CPU on top of the 100% CPU being used by pg_restore. See the output of top(1) below.\n\nIs this normal? 
All I can guess at is that something's going on in shared memory that every Postgres backend has to respond to.\n\nThanks,\nCraig\n\n\n\nTasks: 305 total, 1 running, 304 sleeping, 0 stopped, 0 zombie\nCpu(s): 33.5% us, 1.5% sy, 0.0% ni, 57.8% id, 6.6% wa, 0.2% hi, 0.4% si\nMem: 4151456k total, 4011020k used, 140436k free, 10096k buffers\nSwap: 2104504k total, 94136k used, 2010368k free, 3168596k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 6681 postgres 16 0 217m 188m 161m D 50.4 4.6 4:29.30 postmaster \n 1577 root 10 -5 0 0 0 S 1.0 0.0 108:01.97 md0_raid1 \n 8487 postgres 15 0 187m 8704 4996 S 1.0 0.2 0:06.56 postmaster \n 8506 postgres 15 0 187m 8604 4892 S 1.0 0.2 0:06.37 postmaster \n 8507 postgres 15 0 187m 8708 5004 S 1.0 0.2 0:06.42 postmaster \n 8512 postgres 15 0 187m 8612 4904 S 1.0 0.2 0:06.65 postmaster \n 8751 postgres 15 0 187m 10m 7520 S 1.0 0.3 0:07.95 postmaster \n 8752 postgres 15 0 187m 10m 7492 S 1.0 0.3 0:07.84 postmaster \n14053 postgres 15 0 187m 8752 5044 S 1.0 0.2 0:06.53 postmaster \n16515 postgres 15 0 187m 8156 4452 S 1.0 0.2 0:06.33 postmaster \n25351 postgres 15 0 187m 9772 6064 S 1.0 0.2 0:06.75 postmaster \n25387 postgres 15 0 187m 8444 4752 S 1.0 0.2 0:06.45 postmaster \n25425 postgres 15 0 187m 9.8m 6340 S 1.0 0.2 0:06.75 postmaster \n30626 postgres 15 0 187m 8472 4792 S 1.0 0.2 0:06.52 postmaster \n30628 postgres 15 0 187m 8536 4840 S 1.0 0.2 0:06.50 postmaster \n30630 postgres 15 0 187m 8524 4844 S 1.0 0.2 0:06.49 postmaster \n30637 postgres 15 0 187m 8692 4880 S 1.0 0.2 0:06.25 postmaster \n31679 postgres 15 0 187m 8544 4860 S 1.0 0.2 0:06.39 postmaster \n31681 postgres 15 0 187m 8528 4848 S 1.0 0.2 0:06.25 postmaster \n 1751 postgres 15 0 187m 8432 4748 S 1.0 0.2 0:06.26 postmaster \n11620 postgres 15 0 187m 8344 4644 S 1.0 0.2 0:06.23 postmaster \n11654 postgres 15 0 187m 8316 4624 S 1.0 0.2 0:06.36 postmaster \n19173 postgres 15 0 187m 9372 5668 S 1.0 0.2 0:06.49 postmaster \n19670 postgres 15 0 187m 9236 5528 S 1.0 0.2 0:06.29 postmaster \n20380 postgres 15 0 187m 8656 4956 S 1.0 0.2 0:06.20 postmaster \n20649 postgres 15 0 187m 8280 4584 S 1.0 0.2 0:06.16 postmaster \n22731 postgres 15 0 187m 8408 4700 S 1.0 0.2 0:06.03 postmaster \n11045 postgres 15 0 185m 71m 68m S 0.7 1.8 0:19.35 postmaster \n 6408 postgres 15 0 187m 11m 7520 S 0.7 0.3 0:07.89 postmaster \n 6410 postgres 15 0 187m 10m 7348 S 0.7 0.3 0:07.53 postmaster \n 6411 postgres 15 0 187m 10m 7380 S 0.7 0.3 0:07.83 postmaster \n 6904 postgres 15 0 187m 8644 4788 S 0.7 0.2 0:06.15 postmaster \n 6905 postgres 15 0 187m 8288 4596 S 0.7 0.2 0:06.15 postmaster \n 6906 postgres 15 0 187m 8488 4764 S 0.7 0.2 0:06.18 postmaster \n 6907 postgres 15 0 187m 8580 4856 S 0.7 0.2 0:06.37 postmaster \n 7049 postgres 15 0 187m 8488 4800 S 0.7 0.2 0:06.07 postmaster \n 7054 postgres 15 0 187m 8376 4672 S 0.7 0.2 0:06.28 postmaster \n 7188 postgres 15 0 187m 8588 4868 S 0.7 0.2 0:06.39 postmaster \n 7190 postgres 15 0 187m 8832 5120 S 0.7 0.2 0:06.52 postmaster \n 7191 postgres 15 0 187m 8632 4916 S 0.7 0.2 0:06.48 postmaster \n 7192 postgres 15 0 187m 8884 5176 S 0.7 0.2 0:06.55 postmaster \n 8511 postgres 15 0 187m 8612 4904 S 0.7 0.2 0:06.39 postmaster \n 8513 postgres 15 0 187m 8776 5064 S 0.7 0.2 0:06.60 postmaster \n 8750 postgres 15 0 187m 10m 7220 S 0.7 0.3 0:07.72 postmaster \n 8768 postgres 15 0 187m 10m 7508 S 0.7 0.3 0:07.77 postmaster \n 8769 postgres 15 0 187m 10m 7448 S 0.7 0.3 0:07.81 postmaster \n 8775 postgres 15 0 187m 10m 7064 S 0.7 0.3 0:07.72 postmaster \n 8782 
postgres 15 0 187m 10m 7316 S 0.7 0.3 0:07.84 postmaster \n13947 postgres 15 0 187m 8500 4780 S 0.7 0.2 0:06.36 postmaster \n13949 postgres 15 0 187m 8536 4824 S 0.7 0.2 0:06.36 postmaster \n13951 postgres 15 0 187m 8504 4804 S 0.7 0.2 0:06.35 postmaster \n14041 postgres 15 0 187m 8548 4828 S 0.7 0.2 0:06.45 postmaster \n14046 postgres 15 0 187m 8560 4812 S 0.7 0.2 0:06.39 postmaster \n14052 postgres 15 0 187m 8744 5024 S 0.7 0.2 0:06.54 postmaster \n14055 postgres 15 0 187m 8580 4868 S 0.7 0.2 0:06.52 postmaster \n14061 postgres 15 0 187m 8464 4760 S 0.7 0.2 0:06.45 postmaster \n14092 postgres 15 0 187m 8624 4920 S 0.7 0.2 0:06.52 postmaster \n16358 postgres 15 0 187m 8284 4596 S 0.7 0.2 0:06.54 postmaster \n16367 postgres 15 0 187m 8392 4568 S 0.7 0.2 0:06.24 postmaster \n\n\n", "msg_date": "Thu, 12 Jul 2007 00:34:45 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore causes 100" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Now here's the weird thing. I'm running a pg_restore of a database\n> (on the order of 4GB compressed, maybe 34M rows of ordinary data, and\n> 15M rows in one BLOB table that's typically 2K per blob). When I do\n> this, ALL of the postgress backends start working at about 1% CPU\n> apiece.\n\nIt's not surprising that they'd all start eating some CPU, if that's a\nschema restore and not just bulk data loading. Any catalog change is\ngoing to broadcast \"shared cache inval\" messages that all the backends\nhave to process to make sure they get rid of any now-obsolete cached\ncatalog information.\n\n> This means that the 120 \"idle\" postgres backends are together\n> using almost 100% of one CPU on top of the 100% CPU being used by\n> pg_restore. See the output of top(1) below.\n\nPerhaps you need to try to cut down the number of idle processes ...\n\nI don't think anyone's ever spent any time trying to micro-optimize\nthe shared cache inval code paths. It could be we could cut your\n1% figure some, if we were willing to put effort into that. But it's\nnot going to go to zero.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Jul 2007 10:27:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore causes 100 " } ]
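To make the workaround Tom describes easy to reuse: because the planner infers implied equalities but not implied inequalities, the range predicate has to be repeated on both sides of the join by hand. This is simply Craig's faster second query restated with the redundant clause called out; plans and timings will of course vary by installation:

SELECT r.version_id, r.row_num, m.molkeys
FROM my_rownum r
JOIN my_molkeys m ON r.version_id = m.version_id
WHERE r.version_id >= 3200000 AND r.version_id < 3300000
  -- redundant copy of the range on the other join column, added manually
  -- so the planner can restrict the index scan on my_molkeys as well:
  AND m.version_id >= 3200000 AND m.version_id < 3300000
ORDER BY r.version_id;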
[ { "msg_contents": "Hi,\n I've dbase with about 80 relations.\n On deleting a user, this cascades through all the tables.\n This is very slow, for 20 users it takes 4 hours, with exclusive \naccess to the dbase.\n No other users connected to the dbase.\n\n Ok I know there will be somewhere a relation with a FK without \nindex, which\n is being scanned sequentially. But how can I find out what postgres \nis doing\n while it is handling the transaction?\n\n Is there a way I can find out what postgres does, and where it hangs \naround, so I know\n where the FK might not be indexed. (The dbase is to big to analyze \nit by hand).\n\n The way I do it now is to check the pg_locks relation, but this is \nnot very representative.\n\n Is there profiling method for triggers/constraints, or a method \nwhich gives me a hint\n why it is taking so long?\n\nthanks in advance\n \n \n", "msg_date": "Tue, 03 Jul 2007 08:05:27 +0200", "msg_from": "Patric de Waha <[email protected]>", "msg_from_op": true, "msg_subject": "Delete Cascade FK speed issue" }, { "msg_contents": "On Tue, Jul 03, 2007 at 08:05:27AM +0200, Patric de Waha wrote:\n> Is there a way I can find out what postgres does, and where it hangs \n> around, so I know where the FK might not be indexed. (The dbase is\n> to big to analyze it by hand).\n\nYou could query the system catalogs to look for foreign key constraints\nthat don't have an index on the referencing column(s). Something like\nthe following should work for single-column foreign keys:\n\nselect n1.nspname,\n c1.relname,\n a1.attname,\n t.conname,\n n2.nspname as fnspname,\n c2.relname as frelname,\n a2.attname as fattname\n from pg_constraint t\n join pg_attribute a1 on a1.attrelid = t.conrelid and a1.attnum = t.conkey[1]\n join pg_class c1 on c1.oid = t.conrelid\n join pg_namespace n1 on n1.oid = c1.relnamespace\n join pg_class c2 on c2.oid = t.confrelid\n join pg_namespace n2 on n2.oid = c2.relnamespace\n join pg_attribute a2 on a2.attrelid = t.confrelid and a2.attnum = t.confkey[1]\n where t.contype = 'f'\n and not exists (\n select 1\n from pg_index i\n where i.indrelid = t.conrelid\n and i.indkey[0] = t.conkey[1]\n )\n order by n1.nspname,\n c1.relname,\n a1.attname;\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 3 Jul 2007 05:33:42 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete Cascade FK speed issue" }, { "msg_contents": "On Tue, 2007-07-03 at 08:05 +0200, Patric de Waha wrote:\n> Hi,\n> I've dbase with about 80 relations.\n> On deleting a user, this cascades through all the tables.\n> This is very slow, for 20 users it takes 4 hours, with exclusive \n> access to the dbase.\n> No other users connected to the dbase.\n> \n> Ok I know there will be somewhere a relation with a FK without \n> index, which\n> is being scanned sequentially. But how can I find out what postgres \n> is doing\n> while it is handling the transaction?\n> \n> Is there a way I can find out what postgres does, and where it hangs \n> around, so I know\n> where the FK might not be indexed. (The dbase is to big to analyze \n> it by hand).\n> \n> The way I do it now is to check the pg_locks relation, but this is \n> not very representative.\n> \n> Is there profiling method for triggers/constraints, or a method \n> which gives me a hint\n> why it is taking so long?\n\nIn 8.1 and later, an EXPLAIN ANALYZE of the delete will show you the\namount of time spent in each trigger. 
Remember that it will still\nperform the delete, so if you want to be able to re-run the DELETE over\nand over as you add missing indexes, run it in a transaction and\nrollback each time. That will tell you which foreign key constraint\nchecks are taking up time. The output will not be nearly as useful if\nyou don't name your foreign key constraints, but is still better than\nnothing.\n\nAlternatively, you can just dump the schema to a text file and spend 30\nminutes and some text searching to reconstruct your foreign key\ndependency graph rooted at the table in question and check each column\nfor proper indexes. We recently did this for a 150 relation database,\nit's not as painful as you seem to think it is. An 80 relation database\nis by no means \"too big to analyze\" :)\n\n-- Mark Lewis\n", "msg_date": "Tue, 03 Jul 2007 09:01:05 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete Cascade FK speed issue" }, { "msg_contents": "Am 03.07.2007 um 13:33 schrieb Michael Fuhr:\n\n> Something like\n> the following should work for single-column foreign keys:\nNice query. Found immediately 2 missing indexes. (-;)\n\nAxel\n---------------------------------------------------------------------\nAxel Rau, ☀Frankfurt , Germany +49 69 9514 18 0\n\n\n\nAm 03.07.2007 um 13:33 schrieb Michael Fuhr: Something like the following should work for single-column foreign keys: Nice query. Found immediately 2 missing indexes. (-;)Axel ---------------------------------------------------------------------Axel Rau, ☀Frankfurt , Germany                       +49 69 9514 18 0", "msg_date": "Wed, 4 Jul 2007 09:30:24 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete Cascade FK speed issue" } ]
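A short sketch combining Mark's rollback trick with the fix the thread converges on. The table and column names here ("users", "orders", "user_id") are hypothetical and only show the shape of the commands:

BEGIN;
EXPLAIN ANALYZE DELETE FROM users WHERE id = 42;  -- on 8.1+ the output lists time spent per trigger
ROLLBACK;                                          -- nothing is actually deleted

-- For each referencing column that Michael's catalog query (or the slow
-- trigger timings) shows as unindexed, add the missing index, e.g.:
CREATE INDEX orders_user_id_idx ON orders (user_id);

Named foreign key constraints make the per-trigger timings much easier to read, as Mark notes.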
[ { "msg_contents": "All,\n\nI'm very curious to know if we may expect or guarantee any data\nconsistency with WAL sync=OFF but using file system mounted in Direct\nI/O mode (means every write() system call called by PG really writes\nto disk before return)...\n\nSo may we expect data consistency:\n - none?\n - per checkpoint basis?\n - full?...\n\nThanks a lot for any info!\n\nRgds,\n-Dimitri\n", "msg_date": "Tue, 3 Jul 2007 16:54:38 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "Dimitri wrote:\n> I'm very curious to know if we may expect or guarantee any data\n> consistency with WAL sync=OFF but using file system mounted in Direct\n> I/O mode (means every write() system call called by PG really writes\n> to disk before return)...\n\nYou'd have to turn that mode on on the data drives as well to get \nconsistency, because fsync=off disables checkpoint fsyncs of the data \nfiles as well.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 03 Jul 2007 16:06:29 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "Yes, disk drives are also having cache disabled or having cache on\ncontrollers and battery protected (in case of more high-level\nstorage) - but is it enough to expect data consistency?... (I was\nsurprised about checkpoint sync, but does it always calls write()\nanyway? because in this way it should work without fsync)...\n\nOn 7/3/07, Heikki Linnakangas <[email protected]> wrote:\n> Dimitri wrote:\n> > I'm very curious to know if we may expect or guarantee any data\n> > consistency with WAL sync=OFF but using file system mounted in Direct\n> > I/O mode (means every write() system call called by PG really writes\n> > to disk before return)...\n>\n> You'd have to turn that mode on on the data drives as well to get\n> consistency, because fsync=off disables checkpoint fsyncs of the data\n> files as well.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 3 Jul 2007 17:26:55 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "\n\"Dimitri\" <[email protected]> writes:\n\n> Yes, disk drives are also having cache disabled or having cache on\n> controllers and battery protected (in case of more high-level\n> storage) - but is it enough to expect data consistency?... (I was\n> surprised about checkpoint sync, but does it always calls write()\n> anyway? because in this way it should work without fsync)...\n\nWell if everything is mounted in sync mode then I suppose you have the same\nguarantee as if fsync were called after every single write. If that's true\nthen surely that's at least as good. I'm curious how it performs though.\n\nActually it seems like in that configuration fsync should be basically\nzero-cost. 
In other words, you should be able to leave fsync=on and get the\nsame performance (whatever that is) and not have to worry about any risks.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 04 Jul 2007 02:13:02 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "Yes Gregory, that's why I'm asking, because from 1800 transactions/sec\nI'm jumping to 2800 transactions/sec! and it's more than important\nperformance level increase :))\n\nRgds,\n-Dimitri\n\nOn 7/4/07, Gregory Stark <[email protected]> wrote:\n>\n> \"Dimitri\" <[email protected]> writes:\n>\n> > Yes, disk drives are also having cache disabled or having cache on\n> > controllers and battery protected (in case of more high-level\n> > storage) - but is it enough to expect data consistency?... (I was\n> > surprised about checkpoint sync, but does it always calls write()\n> > anyway? because in this way it should work without fsync)...\n>\n> Well if everything is mounted in sync mode then I suppose you have the same\n> guarantee as if fsync were called after every single write. If that's true\n> then surely that's at least as good. I'm curious how it performs though.\n>\n> Actually it seems like in that configuration fsync should be basically\n> zero-cost. In other words, you should be able to leave fsync=on and get the\n> same performance (whatever that is) and not have to worry about any risks.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n", "msg_date": "Wed, 4 Jul 2007 12:26:10 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "\n\"Dimitri\" <[email protected]> writes:\n\n> Yes Gregory, that's why I'm asking, because from 1800 transactions/sec\n> I'm jumping to 2800 transactions/sec! and it's more than important\n> performance level increase :))\n\nwow. That's kind of suspicious though. Does the new configuration take\nadvantage of the lack of the filesystem cache by increasing the size of\nshared_buffers? Even then I wouldn't expect such a big boost unless you got\nvery lucky with the size of your working set compared to the two sizes of\nshared_buffers.\n\nIt seems likely that somehow this change is not providing the same guarantees\nas fsync. Perhaps fsync is actually implementing IDE write barriers and the\nsync mode is just flushing buffers to the hard drive cache and then returning.\n\nWhat transaction rate do you get if you just have a single connection\nstreaming inserts in autocommit mode? What kind of transaction rate do you get\nwith both sync mode on and fsync=on in Postgres?\n\nAnd did you say this with a battery backed cache? In theory fsync=on/off and\nshouldn't make much difference at all with a battery backed cache. Stranger\nand stranger.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 04 Jul 2007 11:45:01 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "Gregory, thanks for good questions! :))\nI got more lights on my throughput here :))\n\nThe running OS is Solaris9 (customer is still not ready to upgrade to\nSol10), and I think the main \"sync\" issue is coming from the old UFS\nimplementation... 
UFS mounted with 'forcedirectio' option uses\ndifferent \"sync\" logic as well accepting concurrent writing to the\nsame file which is giving here a higher performance level. I did not\nexpect really so big gain, so did not think to replay the same test\nwith direct I/O on and fsync=on too. For my big surprise - it also\nreached 2800 tps as with fsync=off !!! So, initial question is no more\nvalid :))\n\nAs well my tests are executed just to validate server + storage\ncapabilities, and honestly it's really pity to see them used under old\nSolaris version :))\nbut well, at least we know what kind of performance they may expect\ncurrently, and think about migration before the end of this year...\n\nSeeing at least 10.000 random writes/sec on storage sub-system during\nlive database test was very pleasant to customer and make feel them\ncomfortable for their production...\n\nThanks a lot for all your help!\n\nBest regards!\n-Dimitri\n\nOn 7/4/07, Gregory Stark <[email protected]> wrote:\n>\n> \"Dimitri\" <[email protected]> writes:\n>\n> > Yes Gregory, that's why I'm asking, because from 1800 transactions/sec\n> > I'm jumping to 2800 transactions/sec! and it's more than important\n> > performance level increase :))\n>\n> wow. That's kind of suspicious though. Does the new configuration take\n> advantage of the lack of the filesystem cache by increasing the size of\n> shared_buffers? Even then I wouldn't expect such a big boost unless you got\n> very lucky with the size of your working set compared to the two sizes of\n> shared_buffers.\n>\n> It seems likely that somehow this change is not providing the same\n> guarantees\n> as fsync. Perhaps fsync is actually implementing IDE write barriers and the\n> sync mode is just flushing buffers to the hard drive cache and then\n> returning.\n>\n> What transaction rate do you get if you just have a single connection\n> streaming inserts in autocommit mode? What kind of transaction rate do you\n> get\n> with both sync mode on and fsync=on in Postgres?\n>\n> And did you say this with a battery backed cache? In theory fsync=on/off and\n> shouldn't make much difference at all with a battery backed cache. Stranger\n> and stranger.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n", "msg_date": "Thu, 5 Jul 2007 14:01:14 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "On Tue, Jul 03, 2007 at 04:06:29PM +0100, Heikki Linnakangas wrote:\n> Dimitri wrote:\n> >I'm very curious to know if we may expect or guarantee any data\n> >consistency with WAL sync=OFF but using file system mounted in Direct\n> >I/O mode (means every write() system call called by PG really writes\n> >to disk before return)...\n> \n> You'd have to turn that mode on on the data drives as well to get \n> consistency, because fsync=off disables checkpoint fsyncs of the data \n> files as well.\n\nBTW, it might be worth trying the different wal_sync_methods. IIRC,\nJonah's seen some good results from open_datasync.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 9 Jul 2007 15:32:20 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "On 7/9/07, Jim C. Nasby <[email protected]> wrote:\n> BTW, it might be worth trying the different wal_sync_methods. 
IIRC,\n> Jonah's seen some good results from open_datasync.\n\nOn Linux, using ext3, reiser, or jfs, I've seen open_sync perform\nquite better than fsync/fdatasync in most of my tests. But, I haven't\ndone significant testing with direct I/O lately.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 9 Jul 2007 16:46:32 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" }, { "msg_contents": "Yes, I tried all WAL sync methods, but there was no difference...\nHowever, there was a huge difference when I run the same tests under\nSolaris10 - 'fdatasync' option gave the best performance level. On the\nsame time direct I/O did not make difference on Solaris 10 :)\n\nSo the main rule - there is no universal rule :)\njust adapt system options according your workload...\n\nDirect I/O will generally speed-up write operation due avoiding buffer\nflashing overhead as well concurrent writing (breaking POSIX\nlimitation of single writer per given file on the same time). But on\nthe same time it may slow-down your read operations, and you may need\n64bit PG version to use big cache to still keep same performance level\non SELECT queries. And again, there are other file systems like QFS\n(for ex.) which may give you the best of both worlds: direct write and\nbuffered read on the same time! etc. etc. etc. :)\n\nRgds,\n-Dimitri\n\nOn 7/9/07, Jonah H. Harris <[email protected]> wrote:\n> On 7/9/07, Jim C. Nasby <[email protected]> wrote:\n> > BTW, it might be worth trying the different wal_sync_methods. IIRC,\n> > Jonah's seen some good results from open_datasync.\n>\n> On Linux, using ext3, reiser, or jfs, I've seen open_sync perform\n> quite better than fsync/fdatasync in most of my tests. But, I haven't\n> done significant testing with direct I/O lately.\n>\n> --\n> Jonah H. Harris, Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 33 Wood Ave S, 3rd Floor | [email protected]\n> Iselin, New Jersey 08830 | http://www.enterprisedb.com/\n>\n", "msg_date": "Tue, 10 Jul 2007 14:53:18 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem Direct I/O and WAL sync option" } ]
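Pulling the settings from this thread together as a postgresql.conf sketch. These are the combinations the posters reported, not a recommendation; which wal_sync_method wins is platform- and filesystem-specific, so each option should be benchmarked on the target hardware:

# Keep fsync on; with forcedirectio-mounted UFS Dimitri saw ~2800 tps either way
fsync = on
# fdatasync was fastest for Dimitri on Solaris 10;
# Jonah saw open_sync do well on Linux ext3/reiser/jfs
wal_sync_method = fdatasync

The Solaris 9/10 tests above used UFS mounted with the forcedirectio option (a mount option, set in /etc/vfstab or via mount -o forcedirectio), with disk/controller caches either disabled or battery-backed.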
[ { "msg_contents": "\nThis query is taking less than 5 minutes on 7.4 but over 5 hours on 8.1...\n\nPostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n4.1.0 (SUSE Linux)\nTotal runtime: 20448310.101 ms = 5.6800862 hour\n(132 rows)\n\n--postgresql.conf:\n\nshared_buffers = 114688 # min 16 or max_connections*2, 8KB\neach\n#temp_buffers = 20000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10240 # size in KB\nmaintenance_work_mem = 64384 # min 1024, size in KB\nmax_stack_depth = 4096 # min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 500000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between\nrounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\nscanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers\nscanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or\noff\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 8 # min 4, 8KB each\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 12 # in logfile segments, min 1, 16MB\neach\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a\nlogfile\n # segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\nenable_bitmapscan = off\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = off\nenable_sort = on\nenable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 10000 # typically 8KB each\nrandom_page_cost = 4 # units are one sequential page\nfetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * 
(max_connections + max_prepared_transactions)\n# lock table slots.\n\n-- \nView this message in context: http://www.nabble.com/Query-is-taking-5-HOURS-to-Complete-on-8.1-version-tf4019778.html#a11416966\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 3 Jul 2007 10:25:31 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "In response to smiley2211 <[email protected]>:\n> \n> This query is taking less than 5 minutes on 7.4 but over 5 hours on 8.1...\n> \n> PostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.0 (SUSE Linux)\n> Total runtime: 20448310.101 ms = 5.6800862 hour\n> (132 rows)\n\nWhen was the last time you vacuum analyzed the database?\n\nAlso, you don't even provide the query. I can't imagine how you'd expect\nanyone to help you. If vacuum analyze doesn't fix the problem, please\nprovide the query, explain output of the query, and the schema of any\ntables involved, including information on indexes.\n\n> \n> --postgresql.conf:\n> \n> shared_buffers = 114688 # min 16 or max_connections*2, 8KB\n> each\n> #temp_buffers = 20000 # min 100, 8KB each\n> #max_prepared_transactions = 5 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 10240 # size in KB\n> maintenance_work_mem = 64384 # min 1024, size in KB\n> max_stack_depth = 4096 # min 100, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 500000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 1000 # min 100, ~70 bytes each\n> \n> # - Kernel Resource Usage -\n> \n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n> \n> # - Cost-Based Vacuum Delay -\n> \n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n> \n> # - Background writer -\n> \n> #bgwriter_delay = 200 # 10-10000 milliseconds between\n> rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\n> scanned/round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers\n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n> \n> \n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> \n> # - Settings -\n> \n> #fsync = on # turns forced synchronization on or\n> off\n> #wal_sync_method = fsync # the default is the first option\n> # supported by the operating system:\n> # open_datasync\n> # fdatasync\n> # fsync\n> # fsync_writethrough\n> # open_sync\n> #full_page_writes = on # recover from partial page writes\n> #wal_buffers = 8 # min 4, 8KB each\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n> \n> # - Checkpoints -\n> \n> checkpoint_segments = 12 # in logfile segments, min 1, 16MB\n> each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # in seconds, 0 is off\n> \n> # - Archiving -\n> \n> #archive_command = '' # command to use to archive a\n> logfile\n> # segment\n> \n> \n> 
#---------------------------------------------------------------------------\n> # QUERY TUNING\n> #---------------------------------------------------------------------------\n> \n> # - Planner Method Configuration -\n> \n> enable_bitmapscan = off\n> enable_hashagg = on\n> enable_hashjoin = on\n> enable_indexscan = on\n> enable_mergejoin = on\n> enable_nestloop = on\n> enable_seqscan = off\n> enable_sort = on\n> enable_tidscan = on\n> \n> # - Planner Cost Constants -\n> \n> effective_cache_size = 10000 # typically 8KB each\n> random_page_cost = 4 # units are one sequential page\n> fetch\n> # cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n> #---------------------------------------------------------------------------\n> # LOCK MANAGEMENT\n> #---------------------------------------------------------------------------\n> \n> #deadlock_timeout = 1000 # in milliseconds\n> #max_locks_per_transaction = 64 # min 10\n> # note: each lock table slot uses ~220 bytes of shared memory, and there are\n> # max_locks_per_transaction * (max_connections + max_prepared_transactions)\n> # lock table slots.\n> \n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 3 Jul 2007 13:32:48 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "\nHere is the EXPLAIN after I changed some conf file - now I am running another\nEXPLAIN ANALYZE which may take 5 or more hours to complete :,(\n\neffective_cache = 170000\nenable_seqscan = on\nenable _bitmapscan = on\n\n\n QUERY PLAN \n \n \n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n-\n Limit (cost=27674.12..27674.21 rows=1 width=8)\n -> Subquery Scan people_consent (cost=27674.12..27978.41 rows=3121\nwidth=8)\n -> Unique (cost=27674.12..27947.20 rows=3121 width=816)\n -> Sort (cost=27674.12..27681.92 rows=3121 width=816)\n Sort Key: id, firstname, lastname, homephone,\nworkphone, al\ntphone, eligibilityzipcode, address1, address2, city, state, zipcode1,\nzipcode2,\n email, dayofbirth, monthofbirth, yearofbirth, ethnic_detail, external_id,\nhighe\nstlevelofeducation_id, ethnicgroup_id, ethnicotherrace, entered_at,\nentered_by, \nbesttimetoreach_id, language_id, otherlanguage, gender_id,\nhispaniclatino_id, ca\n\nnscheduleapt_id, mayweleaveamessage_id, ethnictribe, ethnicasian,\nethnicislander\n -> Append (cost=13595.19..27492.98 rows=3121\nwidth=816)\n -> Nested Loop (cost=13595.19..13602.61 rows=2\nwidt\nh=816)\n -> Unique (cost=13595.19..13595.20 rows=2\nwid\nth=8)\n -> Sort (cost=13595.19..13595.19\nrows=2\n width=8)\n Sort Key: temp_consent2.id\n -> Unique \n(cost=13595.14..13595.1\n6 rows=2 width=16)\n -> Sort \n(cost=13595.14..135\n95.15 rows=2 width=16)\n Sort Key:\ntemp_consent.\ndaterecorded, temp_consent.id\n -> Subquery Scan\ntemp_\nconsent (cost=13595.09..13595.13 rows=2 width=16)\n -> Unique \n(cost\n=13595.09..13595.11 rows=2 
width=36)\n -> \nSort (\ncost=13595.09..13595.10 rows=2 width=36)\n \nSort \nKey: id, daterecorded, answer\n\n \n-> A\nppend (cost=13506.81..13595.08 rows=2 width=36)\n \n -> HashAggregate (cost=13506.81..13506.83 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13506.81 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13503.10 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13499.67 rows=1 width=24)\n \n -> Nested Loop (cost=58.47..13496.64 rows=1\nwidth=24)\n \n Join Filter: (\"inner\".question_answer_id =\n\"outer\n\".id)\n \n -> Nested Loop (cost=58.47..78.41 rows=1\nwidth=\n28)\n \n -> Index Scan using answers_answer_un\non a\nnswers a (cost=0.00..4.01 rows=1 width=28)\n \n Index Cond: ((answer)::text =\n'Yes'::\ntext)\n \n -> Bitmap Heap Scan on\nquestions_answers q\na (cost=58.47..74.30 rows=8 width=16)\n \n Recheck Cond: ((qa.answer_id =\n\"outer\n\".id) AND (((qa.question_tag)::text = 'consentTransfer'::text) OR\n((qa.question_\ntag)::text = 'shareWithEval'::text)))\n \n -> BitmapAnd (cost=58.47..58.47\nrow\ns=8 width=0)\n \n -> Bitmap Index Scan on\nqs_as_\nanswer_id (cost=0.00..5.37 rows=677 width=0)\n \n Index Cond:\n(qa.answer_id\n = \"outer\".id)\n \n -> BitmapOr \n(cost=52.85..52.8\n5 rows=6530 width=0)\n \n -> Bitmap Index Scan\non \nqs_as_qtag (cost=0.00..26.43 rows=3265 width=0)\n\n \n Index Cond:\n((quest\nion_tag)::text = 'consentTransfer'::text)\n \n -> Bitmap Index Scan\non \nqs_as_qtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((quest\nion_tag)::text = 'shareWithEval'::text)\n \n -> Seq Scan on encounters_questions_answers\neqa \n (cost=0.00..7608.66 rows=464766 width=8)\n \n -> Index Scan using encounters_id on encounters ec \n(c\nost=0.00..3.01 rows=1 width=8)\n \n Index Cond: (ec.id = \"outer\".encounter_id)\n \n -> Index Scan using enrollements_pk on enrollments en \n(cost\n=0.00..3.42 rows=1 width=20)\n \n Index Cond: (\"outer\".enrollment_id = en.id) \n\n -> Index Scan using people_pk on people p (cost=0.00..3.69\nrows=1\n width=8)\n \n Index Cond: (p.id = \"outer\".person_id)\n \n -> HashAggregate (cost=88.22..88.24 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..88.22 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..84.51 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..81.43 rows=1 width=24)\n \n -> Nested Loop (cost=58.47..78.41 rows=1\nwidth=28)\n \n -> Index Scan using answers_answer_un on\nanswers\n a (cost=0.00..4.01 rows=1 width=28)\n \n Index Cond: ((answer)::text =\n'Yes'::text)\n \n -> Bitmap Heap Scan on questions_answers qa \n(co\nst=58.47..74.30 rows=8 width=16) \n Recheck Cond: ((qa.answer_id =\n\"outer\".id) \nAND (((qa.question_tag)::text = 'consentTransfer'::text) OR\n((qa.question_tag)::\ntext = 'shareWithEval'::text)))\n \n -> BitmapAnd (cost=58.47..58.47\nrows=8 wi\ndth=0)\n \n -> Bitmap Index Scan on\nqs_as_answer\n_id (cost=0.00..5.37 rows=677 width=0)\n \n Index Cond: (qa.answer_id =\n\"ou\nter\".id)\n \n -> BitmapOr (cost=52.85..52.85\nrows\n=6530 width=0)\n \n -> Bitmap Index Scan on\nqs_as_\nqtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((question_ta\ng)::text = 'consentTransfer'::text)\n \n -> Bitmap Index Scan on\nqs_as_\n\nqtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((question_ta\ng)::text = 'shareWithEval'::text)\n \n -> Index Scan using ctccalls_qs_as_qaid on\nctccalls_qu\nestions_answers cqa (cost=0.00..3.02 rows=1 width=8)\n \n Index Cond: (cqa.question_answer_id =\n\"outer\".id)\n \n -> Index Scan using ctccalls_pk on ctccalls c \n(cost=0.00..3\n.06 rows=1 width=20)\n \n 
Index Cond: (c.id = \"outer\".call_id)\n \n -> Index Scan using people_pk on people p (cost=0.00..3.69\nrows=1\n width=8)\n \n Index Cond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost\n=0.00..3.69 rows=1 width=816)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=13595.18..13890\n.35 rows=3119 width=677)\n -> Seq Scan on people \n(cost=13595.18..13859.1\n6 rows=3119 width=677)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Subquery Scan temp_consent2 \n(cost=\n13595.14..13595.18 rows=2 width=8)\n -> Unique \n(cost=13595.14..13595\n.16 rows=2 width=16)\n -> Sort \n(cost=13595.14..1\n3595.15 rows=2 width=16)\n Sort Key:\ntemp_consen\nt.daterecorded, temp_consent.id\n -> Subquery Scan\ntem\np_consent (cost=13595.09..13595.13 rows=2 width=16)\n -> Unique \n(co\nst=13595.09..13595.11 rows=2 width=36)\n -> \nSort \n (cost=13595.09..13595.10 rows=2 width=36)\n \nSor\nt Key: id, daterecorded, answer\n \n-> \n Append (cost=13506.81..13595.08 rows=2 width=36)\n\n \n -> HashAggregate (cost=13506.81..13506.83 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13506.81 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13503.10 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..13499.67 rows=1 width=24)\n \n -> Nested Loop (cost=58.47..13496.64 rows=1\nwidth=2\n4)\n \n Join Filter: (\"inner\".question_answer_id =\n\"out\ner\".id)\n \n -> Nested Loop (cost=58.47..78.41 rows=1\nwidt\nh=28)\n \n -> Index Scan using\nanswers_answer_un on\n answers a (cost=0.00..4.01 rows=1 width=28)\n \n Index Cond: ((answer)::text =\n'Yes'\n::text) \n \n -> Bitmap Heap Scan on\nquestions_answers\n qa (cost=58.47..74.30 rows=8 width=16)\n \n Recheck Cond: ((qa.answer_id =\n\"out\ner\".id) AND (((qa.question_tag)::text = 'consentTransfer'::text) OR\n((qa.questio\nn_tag)::text = 'shareWithEval'::text)))\n \n -> BitmapAnd \n(cost=58.47..58.47 r\nows=8 width=0)\n \n -> Bitmap Index Scan on\nqs_a\ns_answer_id (cost=0.00..5.37 rows=677 width=0)\n \n Index Cond:\n(qa.answer_\nid = \"outer\".id)\n \n -> BitmapOr \n(cost=52.85..52\n.85 rows=6530 width=0)\n \n -> Bitmap Index\nScan o\nn qs_as_qtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((que\nstion_tag)::text = 'consentTransfer'::text)\n \n -> Bitmap Index\nScan o\nn qs_as_qtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((que\nstion_tag)::text = 'shareWithEval'::text)\n \n -> Seq Scan on\nencounters_questions_answers eq\na (cost=0.00..7608.66 rows=464766 width=8)\n \n -> Index Scan using encounters_id on encounters\nec \n(cost=0.00..3.01 rows=1 width=8)\n \n Index Cond: (ec.id = \"outer\".encounter_id)\n \n -> Index Scan using enrollements_pk on enrollments en \n(co\nst=0.00..3.42 rows=1 width=20)\n \n Index Cond: (\"outer\".enrollment_id = en.id)\n \n -> Index Scan using people_pk on people p (cost=0.00..3.69\nrows\n\n=1 width=8)\n \n Index Cond: (p.id = \"outer\".person_id)\n \n -> HashAggregate (cost=88.22..88.24 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..88.22 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..84.51 rows=1 width=36)\n \n -> Nested Loop (cost=58.47..81.43 rows=1 width=24)\n \n -> Nested Loop (cost=58.47..78.41 rows=1\nwidth=28)\n \n -> Index Scan using answers_answer_un on\nanswe\nrs a (cost=0.00..4.01 rows=1 width=28)\n \n Index Cond: ((answer)::text =\n'Yes'::text\n)\n \n -> Bitmap Heap Scan on questions_answers\nqa (\ncost=58.47..74.30 rows=8 width=16)\n \n Recheck Cond: ((qa.answer_id =\n\"outer\".id\n) AND (((qa.question_tag)::text = 'consentTransfer'::text) 
OR\n((qa.question_tag)\n::text = 'shareWithEval'::text)))\n \n -> BitmapAnd (cost=58.47..58.47\nrows=8 \nwidth=0)\n \n -> Bitmap Index Scan on\nqs_as_answ\ner_id (cost=0.00..5.37 rows=677 width=0)\n \n Index Cond: (qa.answer_id\n= \"\nouter\".id)\n \n -> BitmapOr \n(cost=52.85..52.85 ro\nws=6530 width=0)\n \n -> Bitmap Index Scan on\nqs_a\ns_qtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((question_\ntag)::text = 'consentTransfer'::text)\n \n -> Bitmap Index Scan on\nqs_a\n\n -> Bitmap Index Scan on\nqs_a\ns_qtag (cost=0.00..26.43 rows=3265 width=0)\n \n Index Cond:\n((question_\ntag)::text = 'shareWithEval'::text)\n \n -> Index Scan using ctccalls_qs_as_qaid on\nctccalls_\nquestions_answers cqa (cost=0.00..3.02 rows=1 width=8)\n \n Index Cond: (cqa.question_answer_id =\n\"outer\".i\nd)\n \n -> Index Scan using ctccalls_pk on ctccalls c \n(cost=0.00.\n.3.06 rows=1 width=20)\n \n Index Cond: (c.id = \"outer\".call_id)\n \n -> Index Scan using people_pk on people p (cost=0.00..3.69\nrows\n=1 width=8)\n \n Index Cond: (p.id = \"outer\".person_id)\n(131 rows)\n\n\n\n\n-- \nView this message in context: http://www.nabble.com/Query-is-taking-5-HOURS-to-Complete-on-8.1-version-tf4019778.html#a11418557\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 3 Jul 2007 11:55:27 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "smiley2211 wrote:\n> Here is the EXPLAIN after I changed some conf file - now I am running another\n> EXPLAIN ANALYZE which may take 5 or more hours to complete :,(\n> \n> effective_cache = 170000\n\nWhy has effective_cache changed from 80,000 to 170,000 - have you \nstopped running some other application?\n\n> enable_seqscan = on\n> enable _bitmapscan = on\n\nWhy were these disabled before? What were you trying to achieve? What \nhas now changed?\n\n> QUERY PLAN \n\nYou still haven't supplied the query. However, looking at the explain \nI'd guess there's a lot of sorting going on? You might want to increase \nwork_mem just for this query:\n\nSET work_mem = ...;\nSELECT ...\n\nHowever, that's just a blind guess because you haven't supplied the \nabsolutely vital information:\n1. The query\n2. An idea of how many rows are in the relevant tables\n3. The \"I have vacuumed and analysed recently\" disclaimer\n4. The explain analyse (which you are running - good, make sure you save \na copy of it somwhere).\n\nEven then it'll be difficult to get a quick answer because it looks like \na large query. So - you can speed things along by looking for oddities \nyourself.\n\nThe explain analyse will have two values for \"rows\" on each line, the \npredicted and the actual - look for where they are wildly different. If \nthe planner is expecting 2 matches and seeing 2000 it might make the \nwrong choice. You can usually cut down the large query to test just this \nsection. Then you might want to read up about \"ALTER TABLE ... SET \nSTATISTICS\" - that might give the planner more to work with.\n\nThe other thing to look for is the time. The explain analyse has two \nfigures for \"actual time\". These are startup and total time for that \nnode (if \"loops\" is > 1 then multiply the time by the number of loop \niterations). 
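(As a hedged illustration of that arithmetic, with invented numbers rather than ones from this plan: a node reported as actual time=0.050..0.700 rows=1 loops=450000 looks cheap per execution, but 0.7 ms multiplied by 450,000 iterations is roughly 315 seconds spent in that one node overall.)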
It might be there are one or two nodes that are taking a \nlong time and we can find out why then.\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 03 Jul 2007 20:18:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "\nHere are the VIEWS in question: query = (explain analyze select id from\npeople_consent LIMIT 1;)\n\nCREATE OR REPLACE VIEW temp_consent AS \n SELECT p.id, max(en.enrolled_at) AS daterecorded, a.answer\n FROM people p, enrollments en, encounters ec,\nencounters_questions_answers eqa, questions_answers qa, answers a\n WHERE (qa.question_tag::text = 'consentTransfer'::text OR\nqa.question_tag::text = 'shareWithEval'::text) AND eqa.question_answer_id =\nqa.id AND ec.id = eqa.encounter_id AND ec.enrollment_id = en.id AND p.id =\nen.person_id AND qa.answer_id = a.id\n GROUP BY p.id, a.answer\nUNION \n SELECT p.id, max(c.entered_at) AS daterecorded, a.answer\n FROM people p, ctccalls c, ctccalls_questions_answers cqa,\nquestions_answers qa, answers a\n WHERE (qa.question_tag::text = 'consentTransfer'::text OR\nqa.question_tag::text = 'shareWithEval'::text) AND cqa.question_answer_id =\nqa.id AND c.id = cqa.call_id AND p.id = c.person_id AND qa.answer_id = a.id\n GROUP BY p.id, a.answer;\n\n\nCREATE OR REPLACE VIEW temp_consent2 AS \n SELECT DISTINCT temp_consent.id, temp_consent.daterecorded\n FROM temp_consent\n WHERE temp_consent.answer::text = 'Yes'::text\n ORDER BY temp_consent.daterecorded DESC, temp_consent.id;\n\n\nCREATE OR REPLACE VIEW people_consent AS \n SELECT people.id, people.firstname, people.lastname, people.homephone,\npeople.workphone, people.altphone, people.eligibilityzipcode,\npeople.address1, people.address2, people.city, people.state,\npeople.zipcode1, people.zipcode2, people.email, people.dayofbirth,\npeople.monthofbirth, people.yearofbirth, people.ethnic_detail,\npeople.external_id, people.highestlevelofeducation_id,\npeople.ethnicgroup_id, people.ethnicotherrace, people.entered_at,\npeople.entered_by, people.besttimetoreach_id, people.language_id,\npeople.otherlanguage, people.gender_id, people.hispaniclatino_id,\npeople.canscheduleapt_id, people.mayweleaveamessage_id, people.ethnictribe,\npeople.ethnicasian, people.ethnicislander\n FROM people\n WHERE (people.id IN ( SELECT temp_consent2.id\n FROM temp_consent2))\nUNION \n SELECT people.id, '***MASKED***' AS firstname, '***MASKED***' AS lastname,\n'***MASKED***' AS homephone, '***MASKED***' AS workphone, '***MASKED***' AS\naltphone, '***MASKED***' AS eligibilityzipcode, '***MASKED***' AS address1,\n'***MASKED***' AS address2, '***MASKED***' AS city, '***MASKED***' AS state,\n'***MASKED***' AS zipcode1, '***MASKED***' AS zipcode2, people.email,\n'***MASKED***' AS dayofbirth, '***MASKED***' AS monthofbirth, '***MASKED***'\nAS yearofbirth, people.ethnic_detail, people.external_id,\npeople.highestlevelofeducation_id, people.ethnicgroup_id,\npeople.ethnicotherrace, people.entered_at, people.entered_by,\npeople.besttimetoreach_id, people.language_id, people.otherlanguage,\npeople.gender_id, people.hispaniclatino_id, people.canscheduleapt_id,\npeople.mayweleaveamessage_id, people.ethnictribe, people.ethnicasian,\npeople.ethnicislander\n FROM people\n WHERE NOT (people.id IN ( SELECT temp_consent2.id\n FROM temp_consent2));\n\n-- \nView this message in context: http://www.nabble.com/Query-is-taking-5-HOURS-to-Complete-on-8.1-version-tf4019778.html#a11418991\nSent from 
the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 3 Jul 2007 12:23:44 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "On 7/3/07, smiley2211 <[email protected]> wrote:\n\n> CREATE OR REPLACE VIEW temp_consent2 AS\n> SELECT DISTINCT temp_consent.id, temp_consent.daterecorded\n> FROM temp_consent\n> WHERE temp_consent.answer::text = 'Yes'::text\n> ORDER BY temp_consent.daterecorded DESC, temp_consent.id;\n\n\n\nGet rid of the order by on this view. It is a waste of resources. If you\nneed it ordered else where, order it on the fly i.e. select * from\ntemp_consent2 order by .....\n\nCREATE OR REPLACE VIEW people_consent AS\n> SELECT people.id, people.firstname, people.lastname, people.homephone,\n> people.workphone, people.altphone, people.eligibilityzipcode,\n> people.address1, people.address2, people.city, people.state,\n> people.zipcode1, people.zipcode2, people.email, people.dayofbirth,\n> people.monthofbirth, people.yearofbirth, people.ethnic_detail,\n> people.external_id, people.highestlevelofeducation_id,\n> people.ethnicgroup_id, people.ethnicotherrace, people.entered_at,\n> people.entered_by, people.besttimetoreach_id, people.language_id,\n> people.otherlanguage, people.gender_id, people.hispaniclatino_id,\n> people.canscheduleapt_id, people.mayweleaveamessage_id, people.ethnictribe\n> ,\n> people.ethnicasian, people.ethnicislander\n> FROM people\n> WHERE (people.id IN ( SELECT temp_consent2.id\n> FROM temp_consent2))\n> UNION\n> SELECT people.id, '***MASKED***' AS firstname, '***MASKED***' AS lastname,\n> '***MASKED***' AS homephone, '***MASKED***' AS workphone, '***MASKED***'\n> AS\n> altphone, '***MASKED***' AS eligibilityzipcode, '***MASKED***' AS\n> address1,\n> '***MASKED***' AS address2, '***MASKED***' AS city, '***MASKED***' AS\n> state,\n> '***MASKED***' AS zipcode1, '***MASKED***' AS zipcode2, people.email,\n> '***MASKED***' AS dayofbirth, '***MASKED***' AS monthofbirth,\n> '***MASKED***'\n> AS yearofbirth, people.ethnic_detail, people.external_id,\n> people.highestlevelofeducation_id, people.ethnicgroup_id,\n> people.ethnicotherrace, people.entered_at, people.entered_by,\n> people.besttimetoreach_id, people.language_id, people.otherlanguage,\n> people.gender_id, people.hispaniclatino_id, people.canscheduleapt_id,\n> people.mayweleaveamessage_id, people.ethnictribe, people.ethnicasian,\n> people.ethnicislander\n> FROM people\n> WHERE NOT (people.id IN ( SELECT temp_consent2.id\n> FROM temp_consent2));\n\n\n\nTry linking the people and temp_consent2 like this\nwhere people.id not in (select temp_consent2.id from temp_consent2 where\ntemp_consent2.id = people.id)\n\nThat will help a lot.\n\nHTH,\n\nChris\n\nOn 7/3/07, smiley2211 <[email protected]> wrote:\nCREATE OR REPLACE VIEW temp_consent2 AS SELECT DISTINCT temp_consent.id, temp_consent.daterecorded   FROM temp_consent  WHERE temp_consent.answer::text = 'Yes'::text  ORDER BY temp_consent.daterecorded DESC, temp_consent.id;\nGet rid of the order by on this view.  It is  a waste of resources.  If you need it ordered else where, order it on the fly i.e. 
select * from temp_consent2 order by .....\nCREATE OR REPLACE VIEW people_consent AS SELECT people.id, people.firstname, people.lastname, people.homephone,people.workphone, people.altphone, people.eligibilityzipcode,people.address1\n, people.address2, people.city, people.state,people.zipcode1, people.zipcode2, people.email, people.dayofbirth,people.monthofbirth, people.yearofbirth, people.ethnic_detail,people.external_id, people.highestlevelofeducation_id\n,people.ethnicgroup_id, people.ethnicotherrace, people.entered_at,people.entered_by, people.besttimetoreach_id, people.language_id,people.otherlanguage, people.gender_id, people.hispaniclatino_id,people.canscheduleapt_id\n, people.mayweleaveamessage_id, people.ethnictribe,people.ethnicasian, people.ethnicislander   FROM people  WHERE (people.id IN ( SELECT temp_consent2.id           FROM temp_consent2))\nUNION SELECT people.id, '***MASKED***' AS firstname, '***MASKED***' AS lastname,'***MASKED***' AS homephone, '***MASKED***' AS workphone, '***MASKED***' AS\naltphone, '***MASKED***' AS eligibilityzipcode, '***MASKED***' AS address1,'***MASKED***' AS address2, '***MASKED***' AS city, '***MASKED***' AS state,'***MASKED***' AS zipcode1, '***MASKED***' AS zipcode2, \npeople.email,'***MASKED***' AS dayofbirth, '***MASKED***' AS monthofbirth, '***MASKED***'AS yearofbirth, people.ethnic_detail, people.external_id,people.highestlevelofeducation_id, people.ethnicgroup_id\n,people.ethnicotherrace, people.entered_at, people.entered_by,people.besttimetoreach_id, people.language_id, people.otherlanguage,people.gender_id, people.hispaniclatino_id, people.canscheduleapt_id,people.mayweleaveamessage_id\n, people.ethnictribe, people.ethnicasian,people.ethnicislander   FROM people  WHERE NOT (people.id IN ( SELECT temp_consent2.id           FROM temp_consent2));\nTry linking the people and temp_consent2 like thiswhere people.id not in (select temp_consent2.id from temp_consent2 where temp_consent2.id = people.id\n)That will help a lot.HTH,Chris", "msg_date": "Tue, 3 Jul 2007 16:16:50 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "\nTOP shows CPU at 100% while executed the EXPLAIN ANALYZE...what does this\nmean?\n\n17519 postgres 25 0 3470m 43m 39m R 100 0.3 28:50.53 postmaster \n-- \nView this message in context: http://www.nabble.com/Query-is-taking-5-HOURS-to-Complete-on-8.1-version-tf4019778.html#a11419885\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 3 Jul 2007 13:18:38 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "smiley2211 wrote:\n> Here are the VIEWS in question: query = (explain analyze select id from\n> people_consent LIMIT 1;)\n\nFirst thing I notice - you don't have any ordering, so the LIMIT isn't \nreturning a well-defined record. 
Might not matter in your particular \ncontext.\n\n> CREATE OR REPLACE VIEW temp_consent AS \n> SELECT p.id, max(en.enrolled_at) AS daterecorded, a.answer\n> FROM people p, enrollments en, encounters ec,\n> encounters_questions_answers eqa, questions_answers qa, answers a\n> WHERE (qa.question_tag::text = 'consentTransfer'::text OR\n> qa.question_tag::text = 'shareWithEval'::text) AND eqa.question_answer_id =\n> qa.id AND ec.id = eqa.encounter_id AND ec.enrollment_id = en.id AND p.id =\n> en.person_id AND qa.answer_id = a.id\n> GROUP BY p.id, a.answer\n> UNION \n\nI think you might be able to make this \"UNION ALL\" - a UNION will check \nfor duplicates and eliminate them. That's a match on \n(id,daterecorded,answer) from both sub-queries - can that happen and do \nyou care?\n\n> SELECT p.id, max(c.entered_at) AS daterecorded, a.answer\n> FROM people p, ctccalls c, ctccalls_questions_answers cqa,\n> questions_answers qa, answers a\n> WHERE (qa.question_tag::text = 'consentTransfer'::text OR\n> qa.question_tag::text = 'shareWithEval'::text) AND cqa.question_answer_id =\n> qa.id AND c.id = cqa.call_id AND p.id = c.person_id AND qa.answer_id = a.id\n> GROUP BY p.id, a.answer;\n> \n> \n> CREATE OR REPLACE VIEW temp_consent2 AS \n> SELECT DISTINCT temp_consent.id, temp_consent.daterecorded\n> FROM temp_consent\n> WHERE temp_consent.answer::text = 'Yes'::text\n> ORDER BY temp_consent.daterecorded DESC, temp_consent.id;\n\nNot sure what the DISTINCT is doing for us here. You've eliminated \nduplicates in the previous view and so you can't have more than one \n(id,daterecorded) for any given answer. (Assuming you keep the previous \nUNION in)\n\n> CREATE OR REPLACE VIEW people_consent AS \n> SELECT people.id, people.firstname, people.lastname, people.homephone,\n> people.workphone, people.altphone, people.eligibilityzipcode,\n> people.address1, people.address2, people.city, people.state,\n> people.zipcode1, people.zipcode2, people.email, people.dayofbirth,\n> people.monthofbirth, people.yearofbirth, people.ethnic_detail,\n> people.external_id, people.highestlevelofeducation_id,\n> people.ethnicgroup_id, people.ethnicotherrace, people.entered_at,\n> people.entered_by, people.besttimetoreach_id, people.language_id,\n> people.otherlanguage, people.gender_id, people.hispaniclatino_id,\n> people.canscheduleapt_id, people.mayweleaveamessage_id, people.ethnictribe,\n> people.ethnicasian, people.ethnicislander\n> FROM people\n> WHERE (people.id IN ( SELECT temp_consent2.id\n> FROM temp_consent2))\n> UNION \n> SELECT people.id, '***MASKED***' AS firstname, '***MASKED***' AS lastname,\n> '***MASKED***' AS homephone, '***MASKED***' AS workphone, '***MASKED***' AS\n> altphone, '***MASKED***' AS eligibilityzipcode, '***MASKED***' AS address1,\n> '***MASKED***' AS address2, '***MASKED***' AS city, '***MASKED***' AS state,\n> '***MASKED***' AS zipcode1, '***MASKED***' AS zipcode2, people.email,\n> '***MASKED***' AS dayofbirth, '***MASKED***' AS monthofbirth, '***MASKED***'\n> AS yearofbirth, people.ethnic_detail, people.external_id,\n> people.highestlevelofeducation_id, people.ethnicgroup_id,\n> people.ethnicotherrace, people.entered_at, people.entered_by> people.besttimetoreach_id, people.language_id, people.otherlanguage,\n> people.gender_id, people.hispaniclatino_id, people.canscheduleapt_id,\n> people.mayweleaveamessage_id, people.ethnictribe, people.ethnicasian,\n> people.ethnicislander\n> FROM people\n> WHERE NOT (people.id IN ( SELECT temp_consent2.id\n> FROM temp_consent2));\n\nOK, well the UNION here can 
certainly be UNION ALL.\n1. You're using \"***MASKED***\" for a bunch of fields, so unless they're \noccurring naturally in \"people\" you won't get duplicates.\n2. Your WHERE clauses are the complement of each other.\n\nOne other point NOT (people.id IN...) would perhaps be usually written \nas \"people.id NOT IN (...)\". The planner should realise they're the same \nthough.\n\nHowever, there's one obvious thing you can do. As it stands you're \ntesting against temp_consent2 twice. You could rewrite the query \nsomething like:\n\nSELECT\n people.id,\n CASE WHEN temp_consent2.id IS NULL\n THEN '***MASKED***'\n ELSE people.firstname\n END AS firstname\n ...\nFROM\n people LEFT JOIN temp_consent2 ON people.id=temp_consent2.id\n;\n\nYou might want to try these tweaks, but I'd start by working with \ntemp_consent and seeing how long that takes to execute. Then work out.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 03 Jul 2007 21:27:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "smiley2211 wrote:\n> TOP shows CPU at 100% while executed the EXPLAIN ANALYZE...what does this\n> mean?\n> \n> 17519 postgres 25 0 3470m 43m 39m R 100 0.3 28:50.53 postmaster \n\nIt means it's busy. Probably sorting/eliminating duplicates (see my \nanswer posted just before this one).\n\nKeep an eye on \"vmstat\" too and see if there's much disk activity.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 03 Jul 2007 21:28:48 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "\nHello all,\n\nI've made the changes to view to use UNION ALL and the where NOT IN\nsuggestions...the query now takes a little under 3 hours instead of 5 --\nhere is the EXPLAIN ANALYZE:\n\n*********************************\n \nQUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------\n Limit (cost=100013612.76..299939413.70 rows=1 width=8) (actual\ntime=10084289.859..10084289.861 rows=1 loops=1)\n -> Subquery Scan people_consent (cost=100013612.76..624068438343.99\nrows=3121 width=8) (actual time=10084289.853..10084289.853 rows=1 loops=1)\n -> Append (cost=100013612.76..624068438312.78 rows=3121\nwidth=815) (actual time=10084289.849..10084289.849 rows=1 loops=1)\n -> Nested Loop (cost=100013612.76..100013621.50 rows=2\nwidth=815) (actual time=10084289.846..10084289.846 rows=1 loops=1)\n -> Unique (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.817..10084289.817 rows=1 loops=1)\n -> Sort (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.814..10084289.814 rows=1 loops=1)\n Sort Key: temp_consent.id\n -> Unique \n(cost=100013612.71..100013612.73 rows=2 width=36) (actual\ntime=10084245.195..10084277.468 rows=7292 loops=1)\n -> Sort \n(cost=100013612.71..100013612.72 rows=2 width=36) (actual\ntime=10084245.191..10084254.425 rows=7292 loops=1)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100013515.80..100013612.70 rows=2 width=36) (actual\ntime=10083991.226..10084228.613 rows=7292 loops=1)\n -> HashAggregate \n(cost=100013515.80..100013515.82 rows=1 width=36) (actual\ntime=10083991.223..10083998.046 rows=3666 
loops=1)\n -> Nested Loop \n(cost=100000060.61..100013515.80 rows=1 width=36) (actual\ntime=388.263..10083961.330 rows=3702 loops=1)\n -> Nested\nLoop (cost=100000060.61..100013511.43 rows=1 width=36) (actual\ntime=388.237..10083897.268 rows=3702 loops=1)\n -> \nNested Loop (cost=100000060.61..100013507.59 rows=1 width=24) (actual\ntime=388.209..10083833.870 rows=3702 loops=1)\n \n-> Nested Loop (cost=100000060.61..100013504.56 rows=1 width=24) (actual\ntime=388.173..10083731.122 rows=3702 loops=1)\n \nJoin Filter: (\"inner\".question_answer_id = \"outer\".id)\n \n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.978..114.768 rows=7430 loops=1)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (actual time=0.084..0.088 rows=1 loops=1)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (actual time=13.881..87.112 rows=7430 loops=1)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'share\nWithEval'::text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=13.198..13.198 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(actual time=9.689..9.689 rows=57804 loops=1)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.563..2.563 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=1.923..1.923 rows=6237 loops=1)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=0.634..0.634 rows=2047 loops=1)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n-> Seq Scan on encounters_questions_answers eqa \n(cost=100000000.00..100007608.66 rows=464766 width=8) (actual\ntime=0.003..735.934 rows=464766 loop\ns=7430)\n \n-> Index Scan using encounters_id on encounters ec (cost=0.00..3.02 rows=1\nwidth=8) (actual time=0.016..0.018 rows=1 loops=3702)\n \nIndex Cond: (ec.id = \"outer\".encounter_id)\n -> \nIndex Scan using enrollements_pk on enrollments en (cost=0.00..3.82 rows=1\nwidth=20) (actual time=0.008..0.010 rows=1 loops=3702)\n \nIndex Cond: (\"outer\".enrollment_id = en.id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.008..0.010 rows=1 loops=3702)\n Index\nCond: (p.id = \"outer\".person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (actual time=205.471..212.207 rows=3626\nloops=1)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (actual time=13.163..196.421 rows=3722\nloops=1)\n -> Nested\nLoop (cost=60.61..92.48 rows=1 width=36) (actual time=13.149..158.112\nrows=3722 loops=1)\n -> \nNested Loop (cost=60.61..89.36 rows=1 width=24) (actual\ntime=13.125..120.021 rows=3722 loops=1)\n \n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.013..48.460 rows=7430 loops=1)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (actual time=0.030..0.032 rows=1 loops=1)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (actual time=12.965..28.902 rows=7430 loops=1)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 
'shareWithEv\nal'::text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=12.288..12.288 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(actual time=8.985..8.985 rows=57804 loops=1)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.344..2.344 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=1.762..1.762 rows=6237 loops=1)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=0.578..0.578 rows=2047 loops=1)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n-> Index Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n(cost=0.00..3.02 rows=1 width=8) (actual time=0.005..0.006 rows=1\nloops=7430)\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\nlines 1-69/129 56%\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\n -> Unique \n(cost=100010968.94..100010968.96 rows=2 width=36) (never executed)\n -> Sort \n(cost=100010968.94..100010968.95 rows=2 width=36) (never executed)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100010872.03..100010968.93 rows=2 width=36) (never executed)\n -> HashAggregate \n(cost=100010872.03..100010872.04 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=100000907.99..100010872.02 rows=1 width=36) (never executed)\n Join\nFilter: (\"inner\".question_answer_id = \"outer\".id)\n -> Nested\nLoop (cost=60.61..90.69 rows=1 width=36) (never executed)\n -> \nNested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n(never executed)\n \nIndex Cond: (id = $0)\n \n-> Index Scan 
using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (never executed)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n -> \nBitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text =\n'shareWithEval'::text)\n))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(never executed)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> Hash\nJoin (cost=100000847.38..100010780.52 rows=65 width=20) (never executed)\n Hash\nCond: (\"outer\".encounter_id = \"inner\".id)\n -> \nSeq Scan on encounters_questions_answers eqa \n(cost=100000000.00..100007608.66 rows=464766 width=8) (never executed)\n -> \nHash (cost=847.37..847.37 rows=3 width=20) (never executed)\n \n-> Hash Join (cost=214.73..847.37 rows=3 width=20) (never executed)\n \nHash Cond: (\"outer\".enrollment_id = \"inner\".id)\n \n-> Index Scan using encounters_id on encounters ec (cost=0.00..524.72\nrows=21578 width=8) (never executed)\n \n-> Hash (cost=214.73..214.73 rows=1 width=20) (never executed)\n \n-> Index Scan using enrollements_pk on enrollments en (cost=0.00..214.73\nrows=1 width=20) (never executed)\n \nFilter: ($0 = person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (never executed)\n -> Nested\nLoop (cost=60.61..93.72 rows=1 width=32) (never executed)\n -> \nNested Loop (cost=60.61..90.69 rows=1 width=36) (never executed)\n \n-> Nested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n(never executed)\n \nIndex Cond: (id = $0)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (never executed)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEval':\n:text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(never executed)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> \nIndex Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n(cost=0.00..3.02 rows=1 width=8) (never executed)\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> Index\nScan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1 
width=20)\n(never executed)\n Index\nCond: (c.id = \"outer\".call_id)\n \nFilter: ($0 = person_id)\n Total runtime: 10084292.497 ms\n(125 rows)\n\n\n\n\n\n\n\n\nsmiley2211 wrote:\n> \n> This query is taking less than 5 minutes on 7.4 but over 5 hours on 8.1...\n> \n> PostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.0 (SUSE Linux)\n> Total runtime: 20448310.101 ms = 5.6800862 hour\n> (132 rows)\n> \n> --postgresql.conf:\n> \n> shared_buffers = 114688 # min 16 or max_connections*2, 8KB\n> each\n> #temp_buffers = 20000 # min 100, 8KB each\n> #max_prepared_transactions = 5 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 10240 # size in KB\n> maintenance_work_mem = 64384 # min 1024, size in KB\n> max_stack_depth = 4096 # min 100, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 500000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 1000 # min 100, ~70 bytes each\n> \n> # - Kernel Resource Usage -\n> \n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n> \n> # - Cost-Based Vacuum Delay -\n> \n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n> \n> # - Background writer -\n> \n> #bgwriter_delay = 200 # 10-10000 milliseconds between\n> rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers\n> scanned/round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers\n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n> \n> \n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> \n> # - Settings -\n> \n> #fsync = on # turns forced synchronization on\n> or off\n> #wal_sync_method = fsync # the default is the first option\n> # supported by the operating\n> system:\n> # open_datasync\n> # fdatasync\n> # fsync\n> # fsync_writethrough\n> # open_sync\n> #full_page_writes = on # recover from partial page writes\n> #wal_buffers = 8 # min 4, 8KB each\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n> \n> # - Checkpoints -\n> \n> checkpoint_segments = 12 # in logfile segments, min 1, 16MB\n> each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # in seconds, 0 is off\n> \n> # - Archiving -\n> \n> #archive_command = '' # command to use to archive a\n> logfile\n> # segment\n> \n> \n> #---------------------------------------------------------------------------\n> # QUERY TUNING\n> #---------------------------------------------------------------------------\n> \n> # - Planner Method Configuration -\n> \n> enable_bitmapscan = off\n> enable_hashagg = on\n> enable_hashjoin = on\n> enable_indexscan = on\n> enable_mergejoin = on\n> enable_nestloop = on\n> enable_seqscan = off\n> enable_sort = on\n> enable_tidscan = on\n> \n> # - Planner Cost Constants -\n> \n> effective_cache_size = 10000 # typically 8KB each\n> random_page_cost = 4 # units are one sequential page\n> fetch\n> # cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n> 
#---------------------------------------------------------------------------\n> # LOCK MANAGEMENT\n> #---------------------------------------------------------------------------\n> \n> #deadlock_timeout = 1000 # in milliseconds\n> #max_locks_per_transaction = 64 # min 10\n> # note: each lock table slot uses ~220 bytes of shared memory, and there\n> are\n> # max_locks_per_transaction * (max_connections +\n> max_prepared_transactions)\n> # lock table slots.\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Query-is-taking-5-HOURS-to-Complete-on-8.1-version-tf4019778.html#a11451397\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 5 Jul 2007 10:49:24 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" }, { "msg_contents": "smiley2211 wrote:\n> \n> Hello all,\n> \n> I've made the changes to view to use UNION ALL and the where NOT IN\n> suggestions...the query now takes a little under 3 hours instead of 5 --\n> here is the EXPLAIN ANALYZE:\n\nIt seems you have disabled nested loops --- why? Try turning them back\non and let us see the EXPLAIN ANALYZE again.\n\nIt would be extremely helpful if you saved it in a file and attached it\nseparately so that the indentation and whitespace is not mangled by your\nemail system. It would be a lot more readable that way.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 5 Jul 2007 14:05:47 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is taking 5 HOURS to Complete on 8.1 version" } ]
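To make the main advice from the thread above concrete, here is a hedged sketch
(not from the original posts) of the single-pass people_consent rewrite Richard
outlined: one LEFT JOIN against the ids coming out of temp_consent2 plus CASE
expressions, instead of two UNIONed branches doing IN and NOT IN. Only a couple
of the masked columns are shown; the remaining ones follow the same pattern, and
all names are taken from the views posted earlier in the thread.

CREATE OR REPLACE VIEW people_consent AS
SELECT p.id,
       CASE WHEN tc.id IS NULL THEN '***MASKED***' ELSE p.firstname END AS firstname,
       CASE WHEN tc.id IS NULL THEN '***MASKED***' ELSE p.lastname  END AS lastname,
       -- ... repeat the CASE pattern for the other masked columns ...
       p.email,
       p.entered_at,
       p.entered_by
  FROM people p
  LEFT JOIN (SELECT DISTINCT id FROM temp_consent2) tc ON tc.id = p.id;

-- Per the advice at the end of the thread, re-enable the scan types that the
-- posted postgresql.conf had turned off (and make sure nested loops are on)
-- before timing the query again:
SET enable_seqscan = on;
SET enable_bitmapscan = on;
SET enable_nestloop = on;

The DISTINCT subselect is there only because temp_consent2 emits (id, daterecorded)
pairs, so the same id could appear more than once and duplicate people rows in a
plain join; if that view is changed to return unique ids, Richard's simpler
LEFT JOIN temp_consent2 ON people.id = temp_consent2.id is enough.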
[ { "msg_contents": "Hi\n\nI have the following scenario for a database that I need to design, and\nwould like some hints on what to improve or do differently to achieve the\ndesired performance goal, disregarding hardware and postgres tuning.\n\nThe premise is an attribute database that stores about 100 different\nattribute types as attribute values. Every X seconds, Y number of new\nattribute values are stored in the database. X is constant and currently \nbetween 6 and 20 seconds, depending on the setup. In the future X could\nbecome as low as 3 seconds. Y can, within the next 5-10 years, become as\nhigh as 200 000.\n\nThat means that for example, every 6 seconds 100 000 attributes needs to\nbe written to the database.\n\nAt the same time, somewhere between 5-20 users needs to read parts of\nthose newly written attributes, maybe in total 30 000 attributes.\n\nThis continues for the duration of the field operation, which could be\n18hrs a day for 6 weeks. So the total db size is up towards 200 gigs.\n\nNow here is how I suggest doing this:\n\n1- the tables\n\ntable attribute_values:\n\tid \t\tint\n\tattr_type \tint ( references attribute_types(id) )\n\tposX\t\tint\n\tposY\t\tint\n\tdata_type\tint\n\tvalue\t\tvarchar(50)\n\ntable attribute_types:\n\tid\t\tint\n\tname\t\tvarchar(200);\n\n\n\n2- function\n\n a function that receives an array of data and inserts each attribute.\n perhaps one array per attribute data (type, posX, posY, data_type,\n value) so five arrays as in parameters ot the function\n\n3- java client\n\n the client receives the data from a corba request, and splits it\n into, say 4 equally sized blocks and executes 4 threads that insert\n each block (this seems to be more efficient than just using one\n thread.)\n\nNow I am wondering if this is the most efficient way of doing it?\n\n- I know that I could group the attributes so that each type of attribute\ngets its own table with all attributes in one row. But I am not sure if\nthat is any more efficient than ont attribute per row since I pass\neverything to the function as an array.\nWith the above design a change in attribute types only requires changing\nthe data in a table instead of having to modify the client, the function\nand the tables.\n\n- I am also wondering if writing the client and function in C would create\na more efficient solution.\n\nany comments?\n\nps, I am currently running postgres 8.1, but could probably use 8.2 if it\nis needed for functionality or performance reasons. It will run on a sparc\nmachine with solaris 10 and perhaps 4-6 processors, as many GB of RAM as\nnecessary and SCSI disks ( perhaps in raid 0 ).\n\nregards\n\nthomas\n\n\n\n", "msg_date": "Thu, 5 Jul 2007 14:15:48 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "improvement suggestions for performance design" }, { "msg_contents": "I would strongly suggest that you use a proper relational schema, \ninstead of storing everything in two tables. I don't know your \napplication, but a schema like that is called an Entity-Attribute-Value \n(though your entity seems to be just posx and posy) and it should raise \na big red flag in the mind of any database designer. In particular, \nconstructing queries against an EAV schema is a major pain in the ass. \nThis has been discussed before on postgresql lists as well, you might \nwant to search and read the previous discussions.\n\nIgnoring the EAV issue for a moment, it's hard to give advice without \nknowing what kind of queries are going to executed. 
Are the lookups \nalways going to be by id? By posx/posy perhaps? By attribute?\n\[email protected] wrote:\n> Hi\n> \n> I have the following scenario for a database that I need to design, and\n> would like some hints on what to improve or do differently to achieve the\n> desired performance goal, disregarding hardware and postgres tuning.\n> \n> The premise is an attribute database that stores about 100 different\n> attribute types as attribute values. Every X seconds, Y number of new\n> attribute values are stored in the database. X is constant and currently \n> between 6 and 20 seconds, depending on the setup. In the future X could\n> become as low as 3 seconds. Y can, within the next 5-10 years, become as\n> high as 200 000.\n> \n> That means that for example, every 6 seconds 100 000 attributes needs to\n> be written to the database.\n> \n> At the same time, somewhere between 5-20 users needs to read parts of\n> those newly written attributes, maybe in total 30 000 attributes.\n> \n> This continues for the duration of the field operation, which could be\n> 18hrs a day for 6 weeks. So the total db size is up towards 200 gigs.\n> \n> Now here is how I suggest doing this:\n> \n> 1- the tables\n> \n> table attribute_values:\n> \tid \t\tint\n> \tattr_type \tint ( references attribute_types(id) )\n> \tposX\t\tint\n> \tposY\t\tint\n> \tdata_type\tint\n> \tvalue\t\tvarchar(50)\n> \n> table attribute_types:\n> \tid\t\tint\n> \tname\t\tvarchar(200);\n> \n> \n> \n> 2- function\n> \n> a function that receives an array of data and inserts each attribute.\n> perhaps one array per attribute data (type, posX, posY, data_type,\n> value) so five arrays as in parameters ot the function\n> \n> 3- java client\n> \n> the client receives the data from a corba request, and splits it\n> into, say 4 equally sized blocks and executes 4 threads that insert\n> each block (this seems to be more efficient than just using one\n> thread.)\n> \n> Now I am wondering if this is the most efficient way of doing it?\n> \n> - I know that I could group the attributes so that each type of attribute\n> gets its own table with all attributes in one row. But I am not sure if\n> that is any more efficient than ont attribute per row since I pass\n> everything to the function as an array.\n> With the above design a change in attribute types only requires changing\n> the data in a table instead of having to modify the client, the function\n> and the tables.\n> \n> - I am also wondering if writing the client and function in C would create\n> a more efficient solution.\n> \n> any comments?\n> \n> ps, I am currently running postgres 8.1, but could probably use 8.2 if it\n> is needed for functionality or performance reasons. It will run on a sparc\n> machine with solaris 10 and perhaps 4-6 processors, as many GB of RAM as\n> necessary and SCSI disks ( perhaps in raid 0 ).\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 05 Jul 2007 13:59:30 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "> I would strongly suggest that you use a proper relational schema,\n> instead of storing everything in two tables. I don't know your\n> application, but a schema like that is called an Entity-Attribute-Value\n> (though your entity seems to be just posx and posy) and it should raise\n> a big red flag in the mind of any database designer. 
In particular,\n> constructing queries against an EAV schema is a major pain in the ass.\n> This has been discussed before on postgresql lists as well, you might\n> want to search and read the previous discussions.\n\nI get your point, but the thing is the attributes have no particular\nrelation to each other, other than belonging to same attribute groups.\nThere are no specific rules that states that certain attributes are always\nused together, such as with an address record. It depends on what\nattributes the operator wants to study. This is why I don't find any\nreason to group the attributes into separate tables and columns.\n\nI am still looking into the design of the tables, but I need to get at\nproper test harness running before I can start ruling things out. And a\npart of that, is for example, efficient ways of transferring the insert\ndata from the client to the db, instead of just single command inserts.\nThis is where bulk transfer by arrays probably would be preferable.\n\n> Ignoring the EAV issue for a moment, it's hard to give advice without\n> knowing what kind of queries are going to executed. Are the lookups\n> always going to be by id? By posx/posy perhaps? By attribute?\n\nthe query will be by attribute type and posx/y. So for position x,y, give\nme the following attributes...\n\nthomas\n\n\n\n", "msg_date": "Thu, 5 Jul 2007 15:35:57 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "On 7/5/07, [email protected] <[email protected]> wrote:\n>\n> > I would strongly suggest that you use a proper relational schema,\n> > instead of storing everything in two tables. I don't know your\n> > application, but a schema like that is called an Entity-Attribute-Value\n> > (though your entity seems to be just posx and posy) and it should raise\n> > a big red flag in the mind of any database designer. In particular,\n> > constructing queries against an EAV schema is a major pain in the ass.\n> > This has been discussed before on postgresql lists as well, you might\n> > want to search and read the previous discussions.\n>\n> I get your point, but the thing is the attributes have no particular\n> relation to each other, other than belonging to same attribute groups.\n> There are no specific rules that states that certain attributes are always\n> used together, such as with an address record. It depends on what\n> attributes the operator wants to study. This is why I don't find any\n> reason to group the attributes into separate tables and columns.\n>\n> I am still looking into the design of the tables, but I need to get at\n> proper test harness running before I can start ruling things out. And a\n> part of that, is for example, efficient ways of transferring the insert\n> data from the client to the db, instead of just single command inserts.\n> This is where bulk transfer by arrays probably would be preferable.\n>\n> > Ignoring the EAV issue for a moment, it's hard to give advice without\n> > knowing what kind of queries are going to executed. Are the lookups\n> > always going to be by id? By posx/posy perhaps? By attribute?\n>\n> the query will be by attribute type and posx/y. So for position x,y, give\n> me the following attributes...\n>\n> thomas\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nI don't know much about this EAV stuff. 
Except to say that my company is in\na situation with a lot of adds and bulk deletes and I wish the tables were\ndesigned with partitioning in mind. That is if you know how much, order of\nmagnitude, data each table will hold or will pass through (add and delete),\nyou may want to design the table with partitioning in mind. I have not done\nany partitioning so I cannot give you details but can tell you that mass\ndeletes are a breeze because you just \"drop\" that part of the table. I think\nit is a sub table. And that alleviates table bloat and excessive vacuuming.\n\nGood luck.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 7/5/07, [email protected] <[email protected]> wrote:\n> I would strongly suggest that you use a proper relational schema,> instead of storing everything in two tables. I don't know your\n> application, but a schema like that is called an Entity-Attribute-Value> (though your entity seems to be just posx and posy) and it should raise> a big red flag in the mind of any database designer. In particular,\n> constructing queries against an EAV schema is a major pain in the ass.> This has been discussed before on postgresql lists as well, you might> want to search and read the previous discussions.\nI get your point, but the thing is the attributes have no particularrelation to each other, other than belonging to same attribute groups.There are no specific rules that states that certain attributes are always\nused together, such as with an address record. It depends on whatattributes the operator wants to study. This is why I don't find anyreason to group the attributes into separate tables and columns.I am still looking into the design of the tables, but I need to get at\nproper test harness running before I can start ruling things out. And apart of that, is for example, efficient ways of transferring the insertdata from the client to the db, instead of just single command inserts.\nThis is where bulk transfer by arrays probably would be preferable.> Ignoring the EAV issue for a moment, it's hard to give advice without> knowing what kind of queries are going to executed. Are the lookups\n> always going to be by id? By posx/posy perhaps? By attribute?the query will be by attribute type and posx/y. So for position x,y, giveme the following attributes...thomas---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?               http://archives.postgresql.orgI\ndon't know much about this EAV stuff. Except to say that my company is\nin a situation with a lot of adds and bulk deletes and I wish the\ntables were designed with partitioning in mind. That is if you know how\nmuch, order of magnitude, data each table will hold or will pass\nthrough (add and delete), you may want to design the table with\npartitioning in mind. I have not done any partitioning so I cannot give\nyou details but can tell you that mass deletes are a breeze because you\njust \"drop\" that part of the table. I think it is a sub table. And that\nalleviates table bloat and excessive vacuuming.\n\nGood luck.\n\n-- Yudhvir Singh Sidhu408 375 3134 cell", "msg_date": "Thu, 5 Jul 2007 07:57:07 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "> On 7/5/07, [email protected] <[email protected]> wrote:\n>\n> I don't know much about this EAV stuff. 
Except to say that my company is\n> in\n> a situation with a lot of adds and bulk deletes and I wish the tables were\n> designed with partitioning in mind. That is if you know how much, order of\n> magnitude, data each table will hold or will pass through (add and\n> delete),\n> you may want to design the table with partitioning in mind. I have not\n> done\n> any partitioning so I cannot give you details but can tell you that mass\n> deletes are a breeze because you just \"drop\" that part of the table. I\n> think\n> it is a sub table. And that alleviates table bloat and excessive\n> vacuuming.\n\nBy partitioning, do you mean some sort of internal db table partitioning\nscheme or just me dividing the data into different tables?\n\nThere want be many deletes, but there might of course be some.\nAdditionally, because of the\nperformance requirements, there wont be time to run vacuum in between the\ninsert, except for in non-operational periods. which will only be a couple\nof hours during the day. So vacuum will have to be scheduled at those\ntimes, instead of the normal intervals.\n\n\n", "msg_date": "Thu, 5 Jul 2007 17:49:29 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "On 7/5/07, [email protected] <[email protected]> wrote:\n>\n> > On 7/5/07, [email protected] <[email protected]> wrote:\n> >\n> > I don't know much about this EAV stuff. Except to say that my company is\n> > in\n> > a situation with a lot of adds and bulk deletes and I wish the tables\n> were\n> > designed with partitioning in mind. That is if you know how much, order\n> of\n> > magnitude, data each table will hold or will pass through (add and\n> > delete),\n> > you may want to design the table with partitioning in mind. I have not\n> > done\n> > any partitioning so I cannot give you details but can tell you that mass\n> > deletes are a breeze because you just \"drop\" that part of the table. I\n> > think\n> > it is a sub table. And that alleviates table bloat and excessive\n> > vacuuming.\n>\n> By partitioning, do you mean some sort of internal db table partitioning\n> scheme or just me dividing the data into different tables?\n>\n> There want be many deletes, but there might of course be some.\n> Additionally, because of the\n> performance requirements, there wont be time to run vacuum in between the\n> insert, except for in non-operational periods. which will only be a couple\n> of hours during the day. So vacuum will have to be scheduled at those\n> times, instead of the normal intervals.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\nInternal db table partitioning. Check out:\nhttp://www.postgresql.org/docs/8.2/static/ddl-partitioning.html\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nOn 7/5/07, [email protected] <[email protected]> wrote:\n> On 7/5/07, [email protected] <\[email protected]> wrote:>> I don't know much about this EAV stuff. Except to say that my company is> in> a situation with a lot of adds and bulk deletes and I wish the tables were\n> designed with partitioning in mind. That is if you know how much, order of> magnitude, data each table will hold or will pass through (add and> delete),> you may want to design the table with partitioning in mind. 
I have not\n> done> any partitioning so I cannot give you details but can tell you that mass> deletes are a breeze because you just \"drop\" that part of the table. I> think> it is a sub table. And that alleviates table bloat and excessive\n> vacuuming.By partitioning, do you mean some sort of internal db table partitioningscheme or just me dividing the data into different tables?There want be many deletes, but there might of course be some.\nAdditionally, because of theperformance requirements, there wont be time to run vacuum in between theinsert, except for in non-operational periods. which will only be a coupleof hours during the day. So vacuum will have to be scheduled at those\ntimes, instead of the normal intervals.---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at                \nhttp://www.postgresql.org/about/donate\nInternal db table partitioning. Check out: http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html-- Yudhvir Singh Sidhu\n408 375 3134 cell", "msg_date": "Thu, 5 Jul 2007 09:10:21 -0700", "msg_from": "\"Y Sidhu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "[email protected] wrote:\n>> I would strongly suggest that you use a proper relational schema,\n>> instead of storing everything in two tables. I don't know your\n>> application, but a schema like that is called an Entity-Attribute-Value\n>> (though your entity seems to be just posx and posy) and it should raise\n>> a big red flag in the mind of any database designer. In particular,\n>> constructing queries against an EAV schema is a major pain in the ass.\n>> This has been discussed before on postgresql lists as well, you might\n>> want to search and read the previous discussions.\n> \n> I get your point, but the thing is the attributes have no particular\n> relation to each other, other than belonging to same attribute groups.\n> There are no specific rules that states that certain attributes are always\n> used together, such as with an address record. It depends on what\n> attributes the operator wants to study. This is why I don't find any\n> reason to group the attributes into separate tables and columns.\n\nISTM that a properly normalized schema would look something like this:\n\ncreate table position (\n posX int not null,\n posY int not null,\n primary key (posX, posY)\n);\n\ncreate table colour (\n posX int not null,\n posY int not null,\n colour varchar(50) not null,\n primary key (posX, posY),\n foreign key (posX, posY) references position (posX, posY)\n);\n\ncreate table population (\n posX int not null,\n posY int not null,\n population int notn u,\n primary key (posX, posY),\n foreign key (posX, posY) references position (posX, posY)\n);\n\nwhere colour and population are examples of attributes you want to \nstore. If you have 100 different attributes, you'll have 100 tables like \nthat. That may sound like a lot, but it's not.\n\nThis allows you to use proper data types for the attributes, as well as \nconstraints and all the other goodies a good relational data model gives you\n\nIt also allows you to build proper indexes on the attributes. 
For \nexample, if you store populations as text, you're going to have a hard \ntime building an index that allows you to query for positions with a \npopulation between 100-2000 efficiently.\n\nThese are all imaginary examples, but what I'm trying to point out here \nis that a proper relational schema allows you to manage and query your \ndata much more easily and with more flexibility, allows for future \nextensions.\n\nA normalized schema will also take less space, which means less I/O and \nmore performance, because there's no need to store metadata like the \ndata_type, attr_type on every row. For performance reasons, you might \nactually want to not store the position-table at all in the above schema.\n\nAn alternative design would be to have a single table, with one column \nper attribute:\n\ncreate table position (\n posX int not null,\n posY int not null,\n colour varchar(50),\n population int,\n ...\n primary key (posX, posY)\n)\n\nThis is more space-efficient, especially if you have a lot of attributes \non same coordinates. You can easily add and drop columns as needed, \nusing ALTER TABLE.\n\n> I am still looking into the design of the tables, but I need to get at\n> proper test harness running before I can start ruling things out. And a\n> part of that, is for example, efficient ways of transferring the insert\n> data from the client to the db, instead of just single command inserts.\n> This is where bulk transfer by arrays probably would be preferable.\n\nBefore you start fiddling with functions, I'd suggest that you try \nbatching the inserts with the JDBC PreparedStatement batch facility.\n\nSplitting the inserts into multiple threads in your application sounds \nmessy. The inserts would have to be in separate transactions, for \nexample. Presumably your CORBA ORB will spawn multiple threads for you \nwhen there's a lot requests coming in, so the overall throughput should \nbe the same with a single thread per request.\n\nBTW, I concur with Y Sidhu that with data volumes as high as you have, \npartitioning is a good idea. It's a lot easier to manage 20 10 GB table \npartitions, than one 200 GB table. For example, VACUUM, CLUSTER, CREATE \nINDEX can be done partition per partition, instead of as a single huge \noperatio that runs for hours. Though if you choose to have just one \ntable per attribute type, each table might be conveniently small by \nnature, so that no partitioning is required.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 05 Jul 2007 19:39:51 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "\nHeikki Linnakangas wrote:\n> ISTM that a properly normalized schema would look something like this:\n>\n> create table position (\n> posX int not null,\n> posY int not null,\n> primary key (posX, posY)\n> );\n> \n> create table colour (\n> posX int not null,\n> posY int not null,\n> colour varchar(50) not null,\n> primary key (posX, posY),\n> foreign key (posX, posY) references position (posX, posY)\n> );\n> \n> create table population (\n> posX int not null,\n> posY int not null,\n> population int notn u,\n> primary key (posX, posY),\n> foreign key (posX, posY) references position (posX, posY)\n> );\n\nI agree that this is a way it could be done.\n\n> where colour and population are examples of attributes you want to \n> store. 
If you have 100 different attributes, you'll have 100 tables like \n> that. That may sound like a lot, but it's not.\n\nIn any case, there is no point in having one table per attribute, as \nsome attributes are logically grouped and can therefore be grouped \ntoghether in the table. Since there are 5-10 groups of attributes, 5-10 \ntables would be enough.\n\n> \n> This allows you to use proper data types for the attributes, as well as \n> constraints and all the other goodies a good relational data model gives \n> you\n> \n> It also allows you to build proper indexes on the attributes. For \n> example, if you store populations as text, you're going to have a hard \n> time building an index that allows you to query for positions with a \n> population between 100-2000 efficiently.\n\nPerforming queries on the attribute value is of no interrest, so that \ndoes not matter,\n\n> These are all imaginary examples, but what I'm trying to point out here \n> is that a proper relational schema allows you to manage and query your \n> data much more easily and with more flexibility, allows for future \n> extensions.\n\nThey have been treating their data this way for the last 20 years, and \nthere is nothing on the horizon that tells neither them nor me that it \nwill be any different the next 10 years. So I am not sure I need to plan \nfor that.\n\n> A normalized schema will also take less space, which means less I/O and \n> more performance, \n\nThat is what I am trying to find out, if it is true for this scenario as \nwell.\n\n> because there's no need to store metadata like the \n> data_type, attr_type on every row. \n\ndata_type and attr_type are not decorative meta_data, they are actively \nused as query parameters for each attribute, if they where not there I \nwould not be able to perform the queries I need to do.\n\nFor performance reasons, you might\n> actually want to not store the position-table at all in the above schema.\n> \n> An alternative design would be to have a single table, with one column \n> per attribute:\n> \n> create table position (\n> posX int not null,\n> posY int not null,\n> colour varchar(50),\n> population int,\n> ...\n> primary key (posX, posY)\n> )\n> \n> This is more space-efficient, especially if you have a lot of attributes \n> on same coordinates. You can easily add and drop columns as needed, \n> using ALTER TABLE.\n> \n>> I am still looking into the design of the tables, but I need to get at\n>> proper test harness running before I can start ruling things out. And a\n>> part of that, is for example, efficient ways of transferring the insert\n>> data from the client to the db, instead of just single command inserts.\n>> This is where bulk transfer by arrays probably would be preferable.\n> \n> Before you start fiddling with functions, I'd suggest that you try \n> batching the inserts with the JDBC PreparedStatement batch facility.\n\nI have done that, now I need to have something to compare it against, \npreferably a function written in plpgsql and one in c.\nSo any other suggestions on how to efficiently bulk transfer the data to \nthe db for insertion?\n\n> Splitting the inserts into multiple threads in your application sounds \n> messy. \n\nWell, it has been tested and showed to make postgres perform much \nbetter, ie. 100 000 inserts separated between 4 threads performed much \nfaster than with a single thread alone.\n\n> BTW, I concur with Y Sidhu that with data volumes as high as you have, \n> partitioning is a good idea. 
\n\nYes, I will be looking into to it.\n\nregards\n\nthomas\n\n", "msg_date": "Fri, 06 Jul 2007 00:23:36 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "Hi Thomas & all,\n\n2007/7/6, Thomas Finneid <[email protected]>:\n>\n> Heikki Linnakangas wrote:\n> > ISTM that a properly normalized schema would look something like this:\n\n[example of tables per attr referencing main table containing only primary key]\n\n> I agree that this is a way it could be done.\n\nIndeed. Another way is postgresql-specific:\n\ncreate table position (\n posX int not null,\n posY int not null,\n);\n\ncreate table population INHERITS position ( population int not null );\n-- optionally:\ncreate unique index population_idx(posX,posY,population);\n\nThis leads to each attr table inheriting posX, posY from position; you\nnever insert anything to position itself, but you can use it to list\nall positions that have any attributes in any of the inherited tables\n(in that sense it's a view over all its children).\n\n> In any case, there is no point in having one table per attribute, as\n> some attributes are logically grouped and can therefore be grouped\n> toghether in the table. Since there are 5-10 groups of attributes, 5-10\n> tables would be enough.\n\nThis sounds very sensible. This way you would send only 1 table (or\nprocedure, or prepared statement) name instead of as many attr_types\nas you have attributes in a group.\n\nSo instead of calling 'your_procedure(type, posX, posY, data_type,\nvalue)' for each 5 values separately you would call\n'attrgroup_and_datatype_specific_procedure(posX, posY, value1, value2,\nvalue3, value4, value5)'. Inside the procedure the inserts change from\n'insert into attribute_values values (type, posX, posY, data_type,\nvalue)' to 'insert into attrgroup_and_datatype_specific_table values\n(posX, posY, value1, value2, value3, value4, value5)' -- so you save\nfour inserts and for each value inserted you use 2/5 extra fields\ninstead of 4. You are allowed to use shorter names for the tables and\nprocedures ;)\n\nIt should be trivial to hide this separation in client; you could even\ncreate new tables for new kinds of attribute-datatype combinations\nautomatically on the fly.\n\n> They have been treating their data this way for the last 20 years, and\n> there is nothing on the horizon that tells neither them nor me that it\n> will be any different the next 10 years. 
So I am not sure I need to plan\n> for that.\n\nIs it possible that normalization has been skipped originally because\nof lack of resources or knowledge of the nature of data to be\nimported, or lack of dynamism on the part of the original tools (such\nas creation of type specific tables on the fly), that would now be\navailable, or at least worth a second look?\n\n> > A normalized schema will also take less space, which means less I/O and\n> > more performance,\n>\n> That is what I am trying to find out, if it is true for this scenario as\n> well.\n\nWell, you're saving four extra ints per each value, when you only need\ntwo per 5-10 values.\n\nIf you happen to save numerical data as the value in the text field\nfor some data_types, you are losing a lot more.\n\n> > because there's no need to store metadata like the\n> > data_type, attr_type on every row.\n>\n> data_type and attr_type are not decorative meta_data, they are actively\n> used as query parameters for each attribute, if they where not there I\n> would not be able to perform the queries I need to do.\n\nYou can still express them as table or column names rather than extra\ndata per row.\n\n> > Before you start fiddling with functions, I'd suggest that you try\n> > batching the inserts with the JDBC PreparedStatement batch facility.\n>\n> I have done that, now I need to have something to compare it against,\n> preferably a function written in plpgsql and one in c.\n> So any other suggestions on how to efficiently bulk transfer the data to\n> the db for insertion?\n\nCOPY is plentitudes faster than INSERT:\nhttp://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n\nIf you can't just push the data straight into the final table with\nCOPY, push it into a temporary table that you go through with the\ndatabase procedure.\n\nShameless plug: If you use Java and miss COPY functionality in the\ndriver, it's available at\n\nhttp://kato.iki.fi/sw/db/postgresql/jdbc/copy/\n\nI was able to practically nullify time spent inserting with that.\n\n> Well, it has been tested and showed to make postgres perform much\n> better, ie. 100 000 inserts separated between 4 threads performed much\n> faster than with a single thread alone.\n\nSounds interesting. The results should still end up written into the\nsame table, so are you sure this didn't end up using the same time at\nserver end - would that even matter to you?\n\nWe ended up having best results with sequential batches of around 10\n000 rows each.\n\n> > BTW, I concur with Y Sidhu that with data volumes as high as you have,\n> > partitioning is a good idea.\n>\n> Yes, I will be looking into to it.\n\nDepending on distribution of your data, saving each attribute group\n(or datatype, or both) to its own table will take you some way to the\nsame direction.\n\nIf you have no indexes and do no deletes (like it seems to be in your\ncase), size of table might not matter much.\n\nIt might make sense in your case, though, to name tables with times,\nlike attribute_values_YYYYMMDD, and create a new table for each chosen\nperiod, be it month, day or even per batch. (It takes a few seconds to\ncreate a table though.)\n\nTo keep viewing the data as your customer is used to, you can hide the\nseparation of data into partitions by inheriting each partial table\nfrom an identical ancestor table, that then serves as a view over all\nits children -- but that's already explained in the docs. 
Separation\ninto tables by attribute groups you have to hide with a view, or\nprocedures, preferably server-side.\n\nCheers,\n\n-- \nKalle Hallivuori +358-41-5053073 http://korpiq.iki.fi/\n", "msg_date": "Fri, 6 Jul 2007 14:30:30 +0300", "msg_from": "\"Kalle Hallivuori\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "\nKalle Hallivuori wrote:\n\n > COPY is plentitudes faster than INSERT:\n > http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n >\n > If you can't just push the data straight into the final table with\n > COPY, push it into a temporary table that you go through with the\n > database procedure.\n >\n > Shameless plug: If you use Java and miss COPY functionality in the\n > driver, it's available at\n >\n > http://kato.iki.fi/sw/db/postgresql/jdbc/copy/\n >\n > I was able to practically nullify time spent inserting with that.\n\nInterresting, I will definately have a look at it.\nWhat is the maturity level of the code at this point? and what is \npotentially missing to bring it up to production quality? (stability is \nof the utmost importance in my usage scenario.)\n\n >\n >> Well, it has been tested and showed to make postgres perform much\n >> better, ie. 100 000 inserts separated between 4 threads performed much\n >> faster than with a single thread alone.\n >\n > Sounds interesting. The results should still end up written into the\n > same table, so are you sure this didn't end up using the same time at\n > server end - would that even matter to you?\n\nyes it would matter, because a number of clients are waiting to read the \ndata before the next batch of data is inserted. (in essence every 6 \nseconds 40000 attributes must be written, and after that write 8-16 \nclients read most of that data based on query criteria.and this is just \ntoday, in the future, 5-10 years, it might be as high as 2-300 000 \nattributes per 3 seconds.\n\nthomas\n", "msg_date": "Sun, 08 Jul 2007 12:06:29 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "Hi.\n\n2007/7/8, Thomas Finneid <[email protected]>:\n>\n> Kalle Hallivuori wrote:\n>\n> > COPY is plentitudes faster than INSERT:\n> > http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n> >\n> > If you can't just push the data straight into the final table with\n> > COPY, push it into a temporary table that you go through with the\n> > database procedure.\n> >\n> > Shameless plug: If you use Java and miss COPY functionality in the\n> > driver, it's available at\n> >\n> > http://kato.iki.fi/sw/db/postgresql/jdbc/copy/\n> >\n> > I was able to practically nullify time spent inserting with that.\n>\n> Interresting, I will definately have a look at it.\n> What is the maturity level of the code at this point? and what is\n> potentially missing to bring it up to production quality? (stability is\n> of the utmost importance in my usage scenario.)\n\nIt's my third implementation, based on earlier work by Kris Jurka, a\nmaintainer of the JDBC driver. (It is really quite short so it's easy\nto keep it clear.) I consider it mature enough to have accommodated it\nas part of an upcoming large application, but I'd really like to hear\nothers' opinions. 
Documentation I should add one of these days, maybe\neven rewrite the javadoc.\n\nYou can use COPY as is on the server side without the patch, but then\nyou need to get the data as CSV or TSV files onto the database server\nmachine, and use db superuser privileges to import it. My patch just\nadds the ability to feed data from client with normal user privileges.\n\n-- \nKalle Hallivuori +358-41-5053073 http://korpiq.iki.fi/\n", "msg_date": "Sun, 8 Jul 2007 19:25:26 +0300", "msg_from": "\"Kalle Hallivuori\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "\nKalle Hallivuori wrote:\n> Hi.\n> \n> 2007/7/8, Thomas Finneid <[email protected]>:\n>>\n>> Kalle Hallivuori wrote:\n>>\n>> > COPY is plentitudes faster than INSERT:\n>> > http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n>> >\n>> > If you can't just push the data straight into the final table with\n>> > COPY, push it into a temporary table that you go through with the\n>> > database procedure.\n>> >\n>> > Shameless plug: If you use Java and miss COPY functionality in the\n>> > driver, it's available at\n>> >\n>> > http://kato.iki.fi/sw/db/postgresql/jdbc/copy/\n>> >\n>> > I was able to practically nullify time spent inserting with that.\n>>\n>> Interresting, I will definately have a look at it.\n>> What is the maturity level of the code at this point? and what is\n>> potentially missing to bring it up to production quality? (stability is\n>> of the utmost importance in my usage scenario.)\n> \n> It's my third implementation, based on earlier work by Kris Jurka, a\n> maintainer of the JDBC driver. (It is really quite short so it's easy\n> to keep it clear.) I consider it mature enough to have accommodated it\n> as part of an upcoming large application, but I'd really like to hear\n> others' opinions. Documentation I should add one of these days, maybe\n> even rewrite the javadoc.\n\nHi I have tested your COPY patch (actually I tested \npostgresql-jdbc-8.2-505-copy-20070716.jdbc3.jar) and it is really fast, \nactually just as fast as serverside COPY (boths tests was performed on \nlocal machine).\n\nThis means I am interrested in using it in my project, but I have some \nconcerns that needs to be adressed, (and I am prepared to help in any \nway I can). 
The following are the concerns I have\n\n- While testing I got some errors, which needs to be fixed (detailed below)\n- The patch must be of production grade quality\n- I would like the patch to be part of the official pg JDBC driver.\n\n\nThe error I got the most is :\n\nThis command runs a single run, single thread and generates 10000 rows \nof data\n\ntofi@duplo:~/svn/pores$ java -server -Xms20m -Xmx256m -cp \n/usr/java/jdk1.5.0_06/jre/lib/rt.jar:.:src/:test/:conf/:lib/postgresql-jdbc-8.2-505-copy-20070716.jdbc3.jar \nwg.daemon.Main -m SINGLE_WRITE -t 1 -r 1 -c 10000 -p CPBulk\nInitialising connection...\nPerforming insert...\nBuild bulk data time: 0s 211ms\ntoString() bulk data time: 0s 4ms\ntime: 0s 205ms\norg.postgresql.util.PSQLException: Unexpected command completion while \ncopying: COPY\n at \norg.postgresql.core.v3.QueryExecutorImpl.executeCopy(QueryExecutorImpl.java:706)\n at org.postgresql.copy.CopyManager.copyIntoDB(CopyManager.java:50)\n at org.postgresql.copy.CopyManager.copyIntoDB(CopyManager.java:37)\n at wg.storage.pores1.CPBulk.addAttributes(CPBulk.java:72)\n at wg.daemon.Daemon.run(Daemon.java:57)\ntofi@duplo:~/svn/pores$ ls -al lib/\n\n\nregards\n\nthomas\n\n", "msg_date": "Wed, 18 Jul 2007 21:24:18 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" }, { "msg_contents": "Hi all!\n\n2007/7/18, Thomas Finneid <[email protected]>:\n> Hi I have tested your COPY patch (actually I tested\n> postgresql-jdbc-8.2-505-copy-20070716.jdbc3.jar) and it is really fast,\n> actually just as fast as serverside COPY (boths tests was performed on\n> local machine).\n\nHappy to hear there's interest toward this solution.\n\n> This means I am interrested in using it in my project, but I have some\n> concerns that needs to be adressed, (and I am prepared to help in any\n> way I can). The following are the concerns I have\n>\n> - While testing I got some errors, which needs to be fixed (detailed below)\n> - The patch must be of production grade quality\n> - I would like the patch to be part of the official pg JDBC driver.\n\nDefinitely agreed, those are my requirements as well. We can discuss\nbug fixing among ourselves; new versions I'll announce on pgsql-jdbc\nlist.\n\n-- \nKalle Hallivuori +358-41-5053073 http://korpiq.iki.fi/\n", "msg_date": "Thu, 19 Jul 2007 10:23:38 +0300", "msg_from": "\"Kalle Hallivuori\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improvement suggestions for performance design" } ]
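A minimal SQL sketch of the staging-table approach suggested in the thread above, assuming one table per attribute group as discussed there (the table and column names below are invented for illustration, not taken from the actual schema): load each batch with COPY into a temporary table, then move it into the real table with a single INSERT ... SELECT.

    -- staging table matching one hypothetical attribute-group table
    CREATE TEMP TABLE staging_group1 (
        posx   int NOT NULL,
        posy   int NOT NULL,
        value1 int,
        value2 int
    );

    -- bulk load one batch; \copy is the psql client-side form, while the JDBC
    -- CopyManager patch discussed above streams the same data over the wire
    \copy staging_group1 FROM 'group1_batch.tsv'

    -- move the whole batch into the target table in one statement
    INSERT INTO attr_group1 (posx, posy, value1, value2)
    SELECT posx, posy, value1, value2
    FROM staging_group1;

If the rows need no intermediate processing, the staging step can be dropped and COPY can target attr_group1 directly.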
[ { "msg_contents": "\nHello,\nHow can I know my PostgreSQL 8 is using direct I/O or buffered I/O? If using buffered I/O, how can I enable direct I/O? What is the performance difference of them?\nThis is urgent, Thanks.\n_________________________________________________________________\nWindows Live Spaces is here! It嚙踝蕭s easy to create your own personal Web site. \nhttp://spaces.live.com/?mkt=en-my\n", "msg_date": "Fri, 6 Jul 2007 02:28:49 +0000", "msg_from": "lai yoke hman <[email protected]>", "msg_from_op": true, "msg_subject": "Direct I/O" }, { "msg_contents": "lai yoke hman wrote:\n\n> How can I know my PostgreSQL 8 is using direct I/O or buffered I/O? If\n> using buffered I/O, how can I enable direct I/O? What is the\n> performance difference of them?\n\n1. it is buffered\n2. you can't\n3. there isn't any because there isn't direct I/O\n\nUnless you mess with the filesystem features, at which point I shut up\nor they shut me down.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 6 Jul 2007 10:40:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct I/O" }, { "msg_contents": "On Solaris you just look at the mount options on the file system and see\nif there is a forcedirectio option enabled. Generally since PostgreSQL\ndoesn't use any special options for enabling directio that's a known way\nto figure it out on Solaris. Atleast on Solaris the performance over\nbuffered filesystem is better for many workloads but not always. Plus\nyou typically see a small reduction in CPU usage (system) and ofcourse\nmemory.\n\nHowever depending on workload, you may see increased latency in writes\nbut generally that's not the problem in many workloads since its the\nmultiple writes to the same file which is better using concurrentio\n(modified directio) in Solaris.\n\nAs for Linux I will leave that to other experts ..\n\n-Jignesh\n\n\nlai yoke hman wrote:\n> Hello,\n> How can I know my PostgreSQL 8 is using direct I/O or buffered I/O? If using buffered I/O, how can I enable direct I/O? What is the performance difference of them?\n> This is urgent, Thanks.\n> _________________________________________________________________\n> Windows Live Spaces is here! It嚙踝蕭s easy to create your own personal Web site. \n> http://spaces.live.com/?mkt=en-my\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Fri, 06 Jul 2007 12:03:12 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct I/O" } ]
[ { "msg_contents": "\nHello,\nI have seen some performance testing indicates that apparently the PostgreSQL 8 is faster in writing data while seems like Oracle 10g is better in reading data from database, can any one tell me why? Or is there anyone done performance benchmark on them before?\nThis is urgent.\nThanks.\n_________________________________________________________________\nCall friends with PC-to-PC calling for free!\nhttp://get.live.com/messenger/overview\n", "msg_date": "Fri, 6 Jul 2007 02:44:09 +0000", "msg_from": "lai yoke hman <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of PostgreSQL and Oracle" }, { "msg_contents": "lai yoke hman <[email protected]> writes:\n> I have seen some performance testing indicates that apparently the PostgreSQL 8 is faster in writing data while seems like Oracle 10g is better in reading data from database, can any one tell me why? Or is there anyone done performance benchmark on them before?\n\nYou won't find anything particularly definitive on this, because Oracle\nforbids publishing benchmarks of their software.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Jul 2007 23:59:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of PostgreSQL and Oracle " } ]
[ { "msg_contents": "Hi all,\n\n I have the following scenario, I have users and groups where a user \ncan belong to n groups, and a group can have n users. A user must belogn \nat least to a group. So when I delete a group I must check that there \nisn't any orphan. To do this I have something like that:\n\n CREATE TABLE users\n (\n user_id SERIAL8 PRIMARY KEY\n user_name VARCHAR(50)\n )\n\n CREATE TABLE groups\n (\n group_id SERIAL8 PRIMARY KEY,\n group_name VARCHAR(50)\n )\n\n CREATE TABLE user_groups\n (\n user_id INT8 REFERENCES users(user_id),\n group_id INT8 REFERENCE groups(group_id),\n CONSTRAINT pk PRIMARY_KEY ( user_id, group_id)\n )\n\n CREATE INDEX idx_user_id ON user_groups( user_id );\n CREATE INDEX idx_group_id ON user_groups( group_id );\n\n FUNCTION delete_group( INT8 )\n DECLARE\n p_groupid ALIAS FOR $1;\n v_deleted INTEGER;\n v_count INTEGER;\n result RECORD;\n\n BEGIN\n v_deleted = 0;\n\n FOR result IN SELECT user_id FROM user_groups WHERE group_id = \np_groupid\n LOOP\n\n SELECT INTO v_count COUNT(user_id) FROM user_groups WHERE user_id \n= result.user_id LIMIT 2;\n\n IF v_count = 1 THEN\n DELETE FROM users WHERE user_id = result.user_id;\n v_deleted = v_deleted + 1;\n END IF;\n\n END LOOP;\n\n DELETE FROM groups WHERE group_id = p_groupid;\n\n RETURN v_deleted;\n END;\n\n\n This works quite fast with small groups but when the group has an \nimportant number of users, it takes too much time. The delete_group \naction is fired from the user interface of the application.\n\n Do you have any idea about how I could improve the performance of this?\n\nThanks all\n-- \nArnau\n", "msg_date": "Fri, 06 Jul 2007 16:42:31 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Advice about how to delete" }, { "msg_contents": "\nOn Jul 6, 2007, at 9:42 , Arnau wrote:\n\n> I have the following scenario, I have users and groups where a \n> user can belong to n groups, and a group can have n users. A user \n> must belogn at least to a group. So when I delete a group I must \n> check that there isn't any orphan. To do this I have something like \n> that:\n\n\n> IF v_count = 1 THEN\n> DELETE FROM users WHERE user_id = result.user_id;\n> v_deleted = v_deleted + 1;\n> END IF;\n\nAm I right in reading that you're deleting any users that would be \norphans? If so, you can just delete the orphans after rather than \ndelete them beforehand (untested):\n\n-- delete user_group \nDELETE FROM user_groups\nWHERE user_group_id = p_group_id;\n\n-- delete users that don't belong to any group\nDELETE FROM users\nWHERE user_id IN (\n SELECT user_id\n LEFT JOIN user_groups\n WHERE group_id IS NULL);\n\nThis should execute pretty quickly. You don't need to loop over any \nresults. Remember, SQL is a set-based language, so if you can pose \nyour question in a set-based way, you can probably find a pretty \ngood, efficient solution.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Fri, 6 Jul 2007 10:04:54 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice about how to delete" }, { "msg_contents": "Hi Michael,\n\nMichael Glaesemann wrote:\n> \n> On Jul 6, 2007, at 9:42 , Arnau wrote:\n> \n>> I have the following scenario, I have users and groups where a user \n>> can belong to n groups, and a group can have n users. A user must \n>> belogn at least to a group. So when I delete a group I must check that \n>> there isn't any orphan. 
To do this I have something like that:\n> \n> \n>> IF v_count = 1 THEN\n>> DELETE FROM users WHERE user_id = result.user_id;\n>> v_deleted = v_deleted + 1;\n>> END IF;\n> \n> Am I right in reading that you're deleting any users that would be \n> orphans? If so, you can just delete the orphans after rather than delete \n> them beforehand (untested):\n> \n> -- delete user_groupDELETE FROM user_groups\n> WHERE user_group_id = p_group_id;\n> \n> -- delete users that don't belong to any group\n> DELETE FROM users\n> WHERE user_id IN (\n> SELECT user_id\n> LEFT JOIN user_groups\n> WHERE group_id IS NULL);\n> \n> This should execute pretty quickly. You don't need to loop over any \n> results. Remember, SQL is a set-based language, so if you can pose your \n> question in a set-based way, you can probably find a pretty good, \n> efficient solution.\n\n I have tested your solution and it's much worse than mine.\n\n My test database has about 254000 users and about 30 groups. The test \nI have done is remove a group with 258 users, my solution has taken \nabout 3 seconds and your solution after 20seconds didn't finished. Of \ncourse the test machine is an old celeron with few MB of RAM, but as \ntest machine does the job.\n\nThank you very much\n-- \nArnau\n", "msg_date": "Fri, 06 Jul 2007 17:56:21 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advice about how to delete" }, { "msg_contents": "Arnau wrote:\n> CREATE TABLE user_groups\n> (\n> user_id INT8 REFERENCES users(user_id),\n> group_id INT8 REFERENCE groups(group_id),\n> CONSTRAINT pk PRIMARY_KEY ( user_id, group_id)\n> )\n> \n> CREATE INDEX idx_user_id ON user_groups( user_id );\n\nThe primary key implicitly creates an index on (user_id, group_id), so \nyou probably don't need this additional index.\n\n> This works quite fast with small groups but when the group has an \n> important number of users, it takes too much time. The delete_group \n> action is fired from the user interface of the application.\n\nIt looks like you're not deleting rows from user_groups when a group is \ndeleted. Perhaps the table definition you posted misses ON DELETE \nCASCADE on the foreign key declarations?\n\nI would implement this with triggers. Use the ON DELETE CASCADE to take \ncare of deleting rows from user_groups and create an ON DELETE trigger \non user_groups to delete orphan rows. Like this:\n\nCREATE OR REPLACE FUNCTION delete_orphan_users () RETURNS trigger AS $$\n DECLARE\n BEGIN\n PERFORM * FROM user_groups ug WHERE ug.user_id = OLD.user_id;\n IF NOT FOUND THEN\n DELETE FROM users WHERE users.user_id = OLD.user_id;\n END IF;\n\n RETURN NULL;\n END;\n$$ LANGUAGE 'plpgsql';\n\nDROP TRIGGER IF EXISTS d_usergroup ON user_groups;\nCREATE TRIGGER d_usergroup AFTER DELETE ON user_groups FOR EACH ROW \nEXECUTE PROCEDURE delete_orphan_users();\n\nThis might not be significantly faster, but it's easier to work with.\n\n> Do you have any idea about how I could improve the performance of this?\n\nMichael Glaesemann's idea of using a single statement to delete all \norphan users with one statement is a good one, though you want to refine \nit a bit so that you don't need to do a full table scan every time. 
\nPerhaps like this, before deleting rows from user_groups:\n\nDELETE FROM users WHERE user_id IN (\n SELECT u.user_id FROM users u\n LEFT OUTER JOIN user_groups ug ON (u.user_id = ug.user_id AND \nug.group_id <> 10)\n WHERE group_id IS NULL\n AND u.user_id IN (SELECT user_id FROM user_groups where group_id = 10)\n);\n\nOr maybe you could just leave the orphans in the table, and delete them \nlater in batch?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 06 Jul 2007 17:18:53 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice about how to delete" } ]
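A short sketch combining the suggestions from this thread; it assumes the user_groups foreign keys are redeclared with ON DELETE CASCADE, which the original table definition does not have, and it reuses group 10 from Heikki's example.

    -- memberships disappear through the cascade when the group is removed
    DELETE FROM groups WHERE group_id = 10;

    -- batch cleanup of users that no longer belong to any group
    DELETE FROM users
    WHERE NOT EXISTS (SELECT 1
                      FROM user_groups ug
                      WHERE ug.user_id = users.user_id);

The second statement scans the users table, so it fits the delete-the-orphans-later-in-batch variant better than running it after every single group deletion.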
[ { "msg_contents": "Hello all,\n\nI think this result will be useful for performance discussions of \npostgresql against other databases.\n\nhttp://www.spec.org/jAppServer2004/results/res2007q3/\n\nMore on Josh Berkus's blog:\n\nhttp://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470\n\nRegards,\nJignesh\n\n", "msg_date": "Mon, 09 Jul 2007 11:57:13 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL publishes first real benchmark" }, { "msg_contents": "On 7/9/07, Jignesh K. Shah <[email protected]> wrote:\n> I think this result will be useful for performance discussions of\n> postgresql against other databases.\n\nI'm happy to see an industry-standard benchmark result for PostgreSQL.\n Great work guys!\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 9 Jul 2007 12:21:46 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Jignesh K. Shah wrote:\n> Hello all,\n> \n> I think this result will be useful for performance discussions of \n> postgresql against other databases.\n> \n> http://www.spec.org/jAppServer2004/results/res2007q3/\n> \n> More on Josh Berkus's blog:\n> \n> http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470 \n\nThat's really exciting news!\n\nI'm sure you spent a lot of time tweaking the settings, so let me ask \nyou something topical:\n\nHow did you end up with the bgwriter settings you used? Did you \nexperiment with different values? How much difference did it make?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 09 Jul 2007 17:27:24 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 7/9/07, Jignesh K. Shah <[email protected]> wrote:\n>> I think this result will be useful for performance discussions of\n>> postgresql against other databases.\n> \n> I'm happy to see an industry-standard benchmark result for PostgreSQL.\n> Great work guys!\n\nI would note that if you track through the other results that we indeed \nbeat MySQL ;)\n\nJoshua D. Drake\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 09 Jul 2007 09:27:52 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On Mon, Jul 09, 2007 at 11:57:13AM -0400, Jignesh K. Shah wrote:\n> I think this result will be useful for performance discussions of \n> postgresql against other databases.\n>\n> http://www.spec.org/jAppServer2004/results/res2007q3/\n\nAm I right if this is for a T2000 (Niagara) database server? 
It sure is\ninteresting, but I can't help thinking it's not a very common\nconfiguration...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 9 Jul 2007 18:52:19 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Mon, Jul 09, 2007 at 11:57:13AM -0400, Jignesh K. Shah wrote:\n>> I think this result will be useful for performance discussions of \n>> postgresql against other databases.\n>>\n>> http://www.spec.org/jAppServer2004/results/res2007q3/\n> \n> Am I right if this is for a T2000 (Niagara) database server? It sure is\n> interesting, but I can't help thinking it's not a very common\n> configuration...\n\nI have yet to see a benchmark that was valid on a common configuration. \nCommon configurations don't make people in suits go... \"oooohhhh sexy!\". \nIt is also the reason that those in the know typically ignore all \nbenchmarks and do their own testing.\n\nJoshua D. Drake\n\n\n> \n> /* Steinar */\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 09 Jul 2007 10:02:10 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\nHi Heikki,\n\nHeikki Linnakangas wrote:\n>\n> That's really exciting news!\n>\n> I'm sure you spent a lot of time tweaking the settings, so let me ask \n> you something topical:\n>\n> How did you end up with the bgwriter settings you used? Did you \n> experiment with different values? How much difference did it make?\n>\n\nBackground writer is still a pain to get it right.. I say it is a \nnecessary evil since you are trying to balance it with trying to level \nwrites to the disks and lock contentions caused by the writer itself to \nthe postgresql connections. Our typical problem will arise at the high \nnumber of users where all users are suddenly locked due to the bgwriter \nholding the locks.. Using the hotuser script (which uses pearl/Dtrace \ncombination) we ran quite a bit of numbers trying to see which ones \nresults in less overall time spent in PGLock* calls and yet gave good \nuniform writes to the disks. After reaching the published settings, \neverynow and then we would try playing with different values to see if \nit improves but generally seemed to degrade if changed.. (Of course your \nmileage will vary depending on config, workload, etc).\n\nStill I believe the locking mechanism needs to be revisited at some \npoint since that seems to be the one which will eventually limit the \nnumber of users in such a workload. (Specially if you dont hit the right \nsettings for your workload)\n\nHopefully soon we will get access to bigger capacity servers and redo \nSMP tests on it with the background writer.\n\nRegards,\nJignesh\n\n", "msg_date": "Mon, 09 Jul 2007 13:48:44 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Joshua D. Drake wrote:\n> Steinar H. 
Gunderson wrote:\n>> On Mon, Jul 09, 2007 at 11:57:13AM -0400, Jignesh K. Shah wrote:\n>>> I think this result will be useful for performance discussions of\n>>> postgresql against other databases.\n>>>\n>>> http://www.spec.org/jAppServer2004/results/res2007q3/\n>>\n>> Am I right if this is for a T2000 (Niagara) database server? It sure is\n>> interesting, but I can't help thinking it's not a very common\n>> configuration...\n> \n> I have yet to see a benchmark that was valid on a common configuration.\n> Common configurations don't make people in suits go... \"oooohhhh sexy!\".\n> It is also the reason that those in the know typically ignore all\n> benchmarks and do their own testing.\n\nThis kind of benchmarks are primarily a marketing thing. And as such,\nit's a very good one to have, because there are certainly a large\namounts of PHBs who will just dispose a solution that's not \"proven\" (we\nall know what that means, but they don't) without even looking at the\ndetails.\n\n From a *technical* perspective it doesn't mean anything that you can\napply directly to your application. All it says is \"yes, you can make it\nfast enough\".\n\nBut again, as a marketing thing, it's great.\n\n//Magnus\n\n", "msg_date": "Mon, 09 Jul 2007 20:23:50 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\nOn Jul 9, 2007, at 1:02 PM, Joshua D. Drake wrote:\n\n> It is also the reason that those in the know typically ignore all \n> benchmarks and do their own testing.\n\nHeresy!\n\n", "msg_date": "Mon, 9 Jul 2007 14:38:08 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On Mon, 9 Jul 2007, Joshua D. Drake wrote:\n\n> I would note that if you track through the other results that we indeed beat \n> MySQL ;)\n\nThere's just enough hardware differences between the two configurations \nthat it's not quite a fair fight. For example, the MySQL test had 10K RPM \ndrives in the database server storage array, while the PostgreSQL one had \n15K RPM ones. A few other small differences as well if you dig into the \nconfigurations, all of which I noted favored the PG system.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 9 Jul 2007 14:55:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On 7/9/07, Greg Smith <[email protected]> wrote:\n> There's just enough hardware differences between the two configurations\n> that it's not quite a fair fight. For example, the MySQL test had 10K RPM\n> drives in the database server storage array, while the PostgreSQL one had\n> 15K RPM ones. A few other small differences as well if you dig into the\n> configurations, all of which I noted favored the PG system.\n\nAgreed.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 9 Jul 2007 15:04:53 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Jonah H. 
Harris wrote:\n> On 7/9/07, Greg Smith <[email protected]> wrote:\n>> There's just enough hardware differences between the two configurations\n>> that it's not quite a fair fight. For example, the MySQL test had 10K \n>> RPM\n>> drives in the database server storage array, while the PostgreSQL one had\n>> 15K RPM ones. A few other small differences as well if you dig into the\n>> configurations, all of which I noted favored the PG system.\n> \n> Agreed.\n> \n\nPostgreSQL still beats MySQL ;)\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Mon, 09 Jul 2007 12:17:35 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On 7/9/07, Joshua D. Drake <[email protected]> wrote:\n> PostgreSQL still beats MySQL ;)\n\nAgreed.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 9 Jul 2007 15:17:58 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On Mon, Jul 09, 2007 at 01:48:44PM -0400, Jignesh K. Shah wrote:\n> \n> Hi Heikki,\n> \n> Heikki Linnakangas wrote:\n> >\n> >That's really exciting news!\n> >\n> >I'm sure you spent a lot of time tweaking the settings, so let me ask \n> >you something topical:\n> >\n> >How did you end up with the bgwriter settings you used? Did you \n> >experiment with different values? How much difference did it make?\n> >\n> \n> Background writer is still a pain to get it right.. I say it is a \n> necessary evil since you are trying to balance it with trying to level \n> writes to the disks and lock contentions caused by the writer itself to \n> the postgresql connections. Our typical problem will arise at the high \n> number of users where all users are suddenly locked due to the bgwriter \n> holding the locks.. Using the hotuser script (which uses pearl/Dtrace \n> combination) we ran quite a bit of numbers trying to see which ones \n> results in less overall time spent in PGLock* calls and yet gave good \n> uniform writes to the disks. After reaching the published settings, \n> everynow and then we would try playing with different values to see if \n> it improves but generally seemed to degrade if changed.. (Of course your \n> mileage will vary depending on config, workload, etc).\n> \n> Still I believe the locking mechanism needs to be revisited at some \n> point since that seems to be the one which will eventually limit the \n> number of users in such a workload. (Specially if you dont hit the right \n> settings for your workload)\n\nDo you know specifically what locks were posing the problem? I have a\ntheory that having individual backends run the clock sweep limits\nconcurrency and I'm wondering if you were seeing any of that. 
The lock\nin question would be BufFreelistLock.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 9 Jul 2007 15:53:08 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Hi,\n\n> I think this result will be useful for performance discussions of \n> postgresql against other databases.\n>\n> http://www.spec.org/jAppServer2004/results/res2007q3/\n>\n> More on Josh Berkus's blog:\n>\n> http://blogs.ittoolbox.com/database/soup/archives/postgresql- \n> publishes-first-real-benchmark-17470\n>\nCongrats to everyone that worked to make this happen.\nWhile I will never get customers to buy that nice hardware (and I \nwould recommend the JBoss Appserver anyway :-),\nit really is a big sign telling \"yes postgres can be really fast\" - \nas oposed to the urban legends around.\n\n Heiko\n\n-- \n Reg. Adresse: Red Hat GmbH, Hauptst�tter Strasse 58, 70178 Stuttgart\n Handelsregister: Amtsgericht Stuttgart HRB 153243\n Gesch�ftsf�hrer: Brendan Lane, Charlie Peters, Michael \nCunningham, Werner Knoblich\n\n\n", "msg_date": "Tue, 10 Jul 2007 10:00:43 +0200", "msg_from": "\"Heiko W.Rupp\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Jignesh K. Shah wrote:\n> Hello all,\n> \n> I think this result will be useful for performance discussions of \n> postgresql against other databases.\n> \n> http://www.spec.org/jAppServer2004/results/res2007q3/\n> \n> More on Josh Berkus's blog:\n> \n> http://blogs.ittoolbox.com/database/soup/archives/postgresql-publishes-first-real-benchmark-17470 \n\n\nMay I ask you why you set max_prepared_transactions to 450, while you're \napparently not using prepared transactions, according to this quote:\n\n> Recoverable 2-phase transactions were used to coordinate the interaction between\n> the database server and JMS server using Sun's Last Agent Logging\n> Optimization; the 1PC database transactions and transaction log records are\n> written to the database in a single transaction.\n\nDid you perhaps use 2PC at first, but didn't bother to change the config \nafter switching to the last agent optimization?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 11 Jul 2007 16:59:32 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\n\nHeikki Linnakangas wrote:\n>\n>\n> May I ask you why you set max_prepared_transactions to 450, while \n> you're apparently not using prepared transactions, according to this \n> quote:\n>\n>> Recoverable 2-phase transactions were used to coordinate the \n>> interaction between\n>> the database server and JMS server using Sun's Last Agent Logging\n>> Optimization; the 1PC database transactions and transaction log \n>> records are\n>> written to the database in a single transaction.\n>\n> Did you perhaps use 2PC at first, but didn't bother to change the \n> config after switching to the last agent optimization?\n>\n\nYep.. one of the things that we didn't revert back and got strayed out \nthere.\n\n-Jignesh\n\n\n", "msg_date": "Wed, 11 Jul 2007 12:33:01 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\"Jignesh K. 
Shah\" <[email protected]> writes:\n> Heikki Linnakangas wrote:\n>> May I ask you why you set max_prepared_transactions to 450, while \n>> you're apparently not using prepared transactions, according to this \n>> quote:\n\n> Yep.. one of the things that we didn't revert back and got strayed out \n> there.\n\nThere were quite a few settings in that list that looked like random\nexperimentation rather than recommendable good practice to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jul 2007 13:07:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark " }, { "msg_contents": "Can you list others that seemed out of place?\n\nThanks.\nRegards,\nJignesh\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> Heikki Linnakangas wrote:\n>> \n>>> May I ask you why you set max_prepared_transactions to 450, while \n>>> you're apparently not using prepared transactions, according to this \n>>> quote:\n>>> \n>\n> \n>> Yep.. one of the things that we didn't revert back and got strayed out \n>> there.\n>> \n>\n> There were quite a few settings in that list that looked like random\n> experimentation rather than recommendable good practice to me.\n>\n> \t\t\tregards, tom lane\n> \n", "msg_date": "Wed, 11 Jul 2007 14:34:13 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "Jignesh K. Shah wrote:\n> Can you list others that seemed out of place?\n\nwell to me the ones that look most questionable are:\n\nwork_mem=100MB - so this benchmark is really low concurrency(which does\nnot fit with max_connections=1000) and with trivial queries ?\n\nenable_seqscan = off - why ?\n\neffective_cache_size = 40GB - on a box with 16GB this seems wrong\nespecially since there are some indications out there that suggest that\nwhile overestimating effective_cache_size was not a problem in versions\n<8.2 it might not be so in 8.2 and up\n\nwal_buffers = 2300 - there have been some numbers reported that going\nover the default of 8 helps but it is generally considered that going\nbeyond 500 or maybe 1000 does not help at all ...\n\n\nand one more is that you claim you used \"-fast -O4 -xtarget=ultraT1\"\nwhich is something we explicitly advise against in our own\nFAQ(http://www.postgresql.org/docs/faqs.FAQ_Solaris.html):\n\n\"Do not use any flags that modify behavior of floating point operations\nand errno processing (e.g.,-fast). These flags could raise some\nnonstandard PostgreSQL behavior for example in the date/time computing.\"\n\n\n\nStefan\n", "msg_date": "Thu, 12 Jul 2007 11:08:38 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\n\"Stefan Kaltenbrunner\" <[email protected]> writes:\n\n> Jignesh K. Shah wrote:\n>> Can you list others that seemed out of place?\n\nThe one which surprised me the most was the commit_delay setting. What results\nled you to set that? The common wisdom on this setting is that it doesn't\naccomplish its goals and does more harm than good for most cases and should be\nreplaced with something more effective.\n\nIn any case I wouldn't think the use case for a feature like this would\nactually apply in the case of a benchmark. 
The use case where something like\nthis is needed is where there are not enough concurrent requests to keep the\nserver busy during the fsync of the wal. If for example each query does 5ms of\nactual work and fsyncs take 15ms then you could be committing up to 3\ntransactions in one fsync and need another 3 busy connections to keep the\nserver busy during that fsync so you would need at least 6 concurrently busy\nconnections. If you have a more cpu-bound system then that number might be\nhigher but 100+ connections ought to be enough and in any case I would expect\na benchmark to be mostly disk-bound.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 12 Jul 2007 11:45:16 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "On Thu, 12 Jul 2007, Gregory Stark wrote:\n\n> In any case I wouldn't think the use case for a feature like this would\n> actually apply in the case of a benchmark.\n\nI've also seen a tiny setting for commit_delay (like the 10 they used) as \nhelping improve throughput under a heavy commit load with many processors. \nI'm not sure why a quick yield of the processor at that point helps, but \nthere seem to be cases where it does. Don't think it has anything to do \nwith the originally intended use for this parameter, probably some sort of \nOS scheduler quirk.\n\n> The use case where something like this is needed is where there are not \n> enough concurrent requests to keep the server busy during the fsync of \n> the wal.\n\nI've actually finished an long investigation of this recently that will be \non my web page soon. On a non-caching controller where you'd think \nthere's the most benefit here, I was only able to get about 10% more \ncommits at low client loads by setting the delay to about 1/2 of the fsync \ntime, and a few percent more at high loads by setting a delay longer than \nthe fsync time. It's really a slippery setting though--very easy to set \nin a way that will degrade performance significantly if you're not very \nsystematic about testing it many times at various client counts.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 12 Jul 2007 11:00:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" }, { "msg_contents": "\n\nI had such great hopes for this thread. \"Alas, poor Yorick! I \nknew him, Horatio ...\"\n\n\n\n\nOn Thu, Jul 12, 2007 at 11:00:54AM -0400, Greg Smith wrote:\n> On Thu, 12 Jul 2007, Gregory Stark wrote:\n> \n> >In any case I wouldn't think the use case for a feature like this would\n> >actually apply in the case of a benchmark.\n> \n> I've also seen a tiny setting for commit_delay (like the 10 they used) as \n> helping improve throughput under a heavy commit load with many processors. \n> I'm not sure why a quick yield of the processor at that point helps, but \n> there seem to be cases where it does. Don't think it has anything to do \n> with the originally intended use for this parameter, probably some sort of \n> OS scheduler quirk.\n> \n> >The use case where something like this is needed is where there are not \n> >enough concurrent requests to keep the server busy during the fsync of \n> >the wal.\n> \n> I've actually finished an long investigation of this recently that will be \n> on my web page soon. 
On a non-caching controller where you'd think \n> there's the most benefit here, I was only able to get about 10% more \n> commits at low client loads by setting the delay to about 1/2 of the fsync \n> time, and a few percent more at high loads by setting a delay longer than \n> the fsync time. It's really a slippery setting though--very easy to set \n> in a way that will degrade performance significantly if you're not very \n> systematic about testing it many times at various client counts.\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Fri, 13 Jul 2007 08:08:13 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL publishes first real benchmark" } ]
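The settings questioned in this thread (work_mem, effective_cache_size, enable_seqscan, commit_delay) can all be inspected and trialled from a psql session before anything is written into postgresql.conf. Below is a minimal sketch of that workflow; the numbers are illustrative assumptions for a 16GB machine, not recommendations made by anyone in the thread. wal_buffers is left out because it can only be changed with a server restart, and changing commit_delay may require superuser rights depending on the PostgreSQL version.

-- Inspect the current values of the settings discussed above.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('work_mem', 'effective_cache_size', 'enable_seqscan',
               'wal_buffers', 'commit_delay', 'commit_siblings');

-- Try more conservative values for the current session only
-- (illustrative figures for a 16GB box, not tuned recommendations).
SET work_mem = '16MB';
SET effective_cache_size = '12GB';
SET enable_seqscan = on;   -- let the planner consider sequential scans again
SET commit_delay = 0;      -- back to the default while measuring commit throughput

Session-level SET makes it easy to benchmark one change at a time at several client counts, which is the kind of systematic testing described above, before committing any value to the server configuration.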
[ { "msg_contents": "Hi List,\n\nIs there anyway so as to indicate the Query Analyser not to use the\nplan which it is using regularly, and use a new plan ?\n\n From where do the Query Analyser gets the all info to prepare a plan?\nIs it only from the pg_statistics table or are there anyother tables\nwhich have this info. stored?\n\nAnd can we change the statistic??\n\nThanx in advance\n-- \nRegards\nGauri\n", "msg_date": "Tue, 10 Jul 2007 20:17:05 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Analyser" }, { "msg_contents": "On Tue, Jul 10, 2007 at 08:17:05PM +0530, Gauri Kanekar wrote:\n> Is there anyway so as to indicate the Query Analyser not to use the\n> plan which it is using regularly, and use a new plan ?\n\nYou can't dictate the query plan but you can influence the planner's\ndecisions with various configuration settings.\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-query.html\n\nDisabling planner methods (enable_seqscan, etc.) should be a last\nresort -- before doing so make sure that settings like shared_buffers\nand effective_cache_size are appropriately sized for your system,\nthat you're gathering enough statistics (see below), and that the\nstatistics are current (run ANALYZE or VACUUM ANALYZE). After all\nthat, if you still think you need to disable a planner method then\nconsider posting the query and the EXPLAIN ANALYZE output to\npgsql-performance to see if anybody has other suggestions.\n\n> From where do the Query Analyser gets the all info to prepare a plan?\n> Is it only from the pg_statistics table or are there anyother tables\n> which have this info. stored?\n\nThe planner also uses pg_class.{reltuples,relpages}.\n\nhttp://www.postgresql.org/docs/8.2/interactive/planner-stats.html\nhttp://www.postgresql.org/docs/8.2/interactive/planner-stats-details.html\n\n> And can we change the statistic??\n\nYou can increase the amount of statistics gathered for a specific\ncolumn with ALTER TABLE SET STATISTICS or system-wide by adjusting\ndefault_statistics_target.\n\nhttp://www.postgresql.org/docs/8.2/interactive/sql-altertable.html\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jul 2007 08:05:57 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Analyser" } ]
[ { "msg_contents": "Hello,\n\nI have a simple table id/value, and a function that returns the id of a\ngiven value, inserting the later if not yet present. The probability\nthat a value already exist within the reference table is very high.\n\nDifferent db users may have their own reference table with different\ncontent, but as the table definition is identical, I've defined a public\nfunction to maintain these tables. \n\nCan I optimize this function with:\n\na) remove the EXCEPTION clause (Is there an underlying lock that prevent\nconcurrent inserts ?)\n\nb) declare the function being IMMUTABLE ?\n \n - although it may insert a new raw, the returned id is invariant for\na given user\n (I don't really understand the holdability ov immutable functions;\nare the results cached only for the livetime of a prepared statement ?,\nor can they be shared by different sessions ?)\n\n\nThanks,\n\nMarc\n\n\n\n\n--Table definition:\n\ncreate table ref_table (\n id serial NOT NULL, \n v varchar NOT NULL, \n constraint ref_table_pk primary key (id)\n) without oids;\n\ncreate unique index ref_table_uk on ref_table(v);\n\n\n-- Function:\n\nCREATE OR REPLACE FUNCTION public.get_or_insert_value(\"varchar\") RETURNS\nINT AS \n$BODY$\n\nDECLARE\n id_value INT;\n\nBEGIN\n\n SELECT INTO id_value id FROM ref_table WHERE v = $1;\n\n IF FOUND THEN\n\n RETURN id_value;\n\n ELSE --new value to be inserted\n\n DECLARE\n rec record;\n \n BEGIN\n \n FOR rec in INSERT INTO ref_table (v) VALUES ($1) RETURNING id\n LOOP\n return rec.id; \n END LOOP;\n\n EXCEPTION --concurrent access ?\n WHEN unique_violation THEN\n RETURN(SELECT id FROM ref_table WHERE v = $1);\n\n END;\n\n END IF;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n\n\n\n\n\n\ntuning a function to insert/retrieve values from a reference table\n\n\n\n\nHello,\n\nI have a simple table id/value, and a function that returns the id of a given value, inserting the later if not yet present. The probability that a value already exist within the reference table is very high.\nDifferent db users may have their own reference table with different content, but as the table definition is identical, I've defined a public function to maintain these tables. 
", "msg_date": "Tue, 10 Jul 2007 17:03:40 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "tuning a function to insert/retrieve values from a reference table" }, { "msg_contents": "\"Marc Mamin\" <[email protected]> writes:\n> Can I optimize this function with:\n\n> a) remove the EXCEPTION clause (Is there an underlying lock that prevent\n> concurrent inserts ?)\n\nNo.\n\n> b) declare the function being IMMUTABLE ?\n\nCertainly not --- it's got side-effects.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2007 11:41:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning a function to insert/retrieve values from a reference\n\ttable" } ]
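Since the handler cannot simply be removed, as confirmed above, a common variant of this pattern keeps the unique_violation handler but wraps the whole lookup/insert in a loop, so that a concurrent insert just triggers another pass over the SELECT instead of a second lookup inside the handler. The sketch below is only an illustration of that retry idiom (the same one used by the merge example in the PostgreSQL documentation); it assumes PostgreSQL 8.2 or later for INSERT ... RETURNING ... INTO and is not necessarily what the original poster ended up using.

CREATE OR REPLACE FUNCTION public.get_or_insert_value(varchar)
RETURNS int AS
$BODY$
DECLARE
    v_id int;
BEGIN
    LOOP
        -- Fast path: the value is usually already present.
        SELECT id INTO v_id FROM ref_table WHERE v = $1;
        IF FOUND THEN
            RETURN v_id;
        END IF;

        -- Not there yet: try to insert it ourselves.
        BEGIN
            INSERT INTO ref_table (v) VALUES ($1) RETURNING id INTO v_id;
            RETURN v_id;
        EXCEPTION WHEN unique_violation THEN
            -- Another session inserted the same value first; loop back
            -- and pick up its id with the SELECT above.
        END;
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

The unique index on ref_table(v) is what detects the race, so the EXCEPTION block stays; the loop merely reuses the existing SELECT to fetch the id inserted by the other session, and the function remains VOLATILE because it can write to the table.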