[ { "msg_contents": "\nI have a table which has e.g.\n\nCREATE TABLE portstats\n(\n id serial,\n logtime TIMESTAMP,\n cluster VARCHAR(40),\n element VARCHAR(40),\n port INT,\n rxOctets BIGINT,\n txOctets BIGINT\n);\n\nwhich is used for logging statistics from network equipment.\ncluster is like the location.\nrxOctets, txOctets are numbers which increase over time.\n\nNow, i would like to generate a chart which shows the \nbitrate. So i need subtract rxOctets from a previous\nvalue and divide by the time range.\n\nTo be efficient, and avoid fetching too many points, i\nwant the interval between points I select to be a function\nof the time range. E.g., when I'm doing a 1-day chart, \ni would like to select points that are 15min apart.\nWhen I'm doing a 1-yr query, I would like to select\npoints that are e.g. 4hours apart. I can make this \ndetermination in the script that generates the statement.\n\nThe problem i'm having is that this is \na) a very slow operation\nb) selects all data points on t1, and then the interval\n apart one on t2... so i still end up with too many \n points.\n\npoints are logged every ~5 minutes, but there is \nsome small variation on the interval (and some observations\nmight be missing due to eg communication loss to db).\n[ a process goes along later and decimates out points\nas they age to prevent the db from becoming very large].\t\n\nThe query I have is below. The question is ... what\nis the best strategy for an operation of this nature?\n\nSELECT\n t1.port,\n t1.logtime AS start,\n t2.logtime AS end,\n t1.cluster,\n t1.element,\n (8.0 * (t2.rxoctets - t1.rxoctets) /\n (extract(EPOCH FROM(t2.logtime - t1.logtime))))::int8 AS rxbps,\n (8.0 * (t2.txoctets - t1.txoctets) /\n (extract(EPOCH FROM(t2.logtime - t1.logtime))))::int8 AS txbps\nFROM\n portstats t1\n INNER JOIN portstats t2\n ON t2.cluster = t1.cluster\n AND t2.element = t1.element\n AND t2.port = t1.port\n AND t2.logtime =\n (SELECT logtime\n FROM portstats t3\n WHERE t3.cluster = t1.cluster\n AND t3.element = t1.element\n AND t3.port = t1.port\n AND t3.logtime > t1.logtime + '00:15:00'\nORDER BY cluster ASC,\n element ASC,\n port ASC,\n logtime ASC\n LIMIT 1)\nWHERE t1.cluster = 'somecluster'\n AND (t1.element = 'somelement')\n AND (t1.logtime BETWEEN '2004-01-07 00:00' AND '2004-02-08 00:00')\nORDER BY\n t1.cluster ASC,\n t1.element ASC,\n t1.port ASC,\n t1.logtime ASC\n;\n\nThe query plan for 1 week is below, this takes ~2s to operate. 
It gets very\nslow for 1yr.\n\nSort (cost=14055.35..14067.74 rows=4956 width=176) (actual\ntime=1523.956..1538.354 rows=5943 loops=1)\n Sort Key: t1.svcluster, t1.element, t1.port, t1.logtime\n -> Merge Join (cost=2304.49..13751.18 rows=4956 width=176) (actual\ntime=1008.620..1329.766 rows=5943 loops=1)\n Merge Cond: ((\"outer\".\"?column10?\" = \"inner\".logtime) AND\n(\"outer\".port = \"inner\".port))\n -> Sort (cost=977.39..992.25 rows=5944 width=136) (actual\ntime=678.564..692.974 rows=5943 loops=1)\n Sort Key: (subplan), t1.port\n -> Index Scan using portstats_element_idx on portstats t1\n(cost=0.00..604.78 rows=5944 width=136) (actual time=0.191..581.311 ro\nws=5943 loops=1)\n Index Cond: (element = 'my-element.mydomain.net'::bpchar)\n Filter: ((svcluster = 'my-cluster'::bpchar) AND (logtime\n>= '2004-01-07 00:00:00-05'::timestamp with time zone) AND (logtime\n <= '2004-02-08 00:00:00-05'::timestamp with time zone))\n SubPlan\n -> Limit (cost=0.00..0.62 rows=1 width=104) (actual\ntime=0.064..0.066 rows=1 loops=5943)\n -> Index Scan using www6 on portstats t3\n(cost=0.00..399.28 rows=643 width=104) (actual time=0.054..0.054 rows=1 l\noops=5943)\n Index Cond: ((svcluster = $1) AND (element\n= $2) AND (port = $3) AND (logtime > ($4 + '00:15:00'::interval)))\n -> Sort (cost=1327.10..1356.00 rows=11560 width=136) (actual\ntime=289.168..321.522 rows=11771 loops=1)\n Sort Key: t2.logtime, t2.port\n -> Index Scan using portstats_element_idx on portstats t2\n(cost=0.00..546.98 rows=11560 width=136) (actual time=0.103..192.027 r\nows=11560 loops=1)\n Index Cond: ('my-element.mydomain.net'::bpchar = element)\n Filter: (('my-cluster'::bpchar = svcluster))\nTotal runtime: 1609.411 ms\n(19 rows)\n\n", "msg_date": "Sat, 21 Feb 2004 16:12:24 -0500", "msg_from": "Don Bowman <[email protected]>", "msg_from_op": true, "msg_subject": "conceptual method to create high performance query involving time" } ]
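A possible way to attack point (b) above — thinning the data to one sample per interval *before* the self-join — sketched against the portstats schema from the first message. This is untested illustration only: the 900-second bucket width stands in for whatever value the reporting script would pick, and a gap in the data simply produces no output row for that bucket.

CREATE TEMP TABLE sampled AS
SELECT DISTINCT ON (cluster, element, port,
                    floor(extract(EPOCH FROM logtime) / 900))
       cluster, element, port, logtime, rxoctets, txoctets,
       floor(extract(EPOCH FROM logtime) / 900) AS bucket
FROM portstats
WHERE cluster = 'somecluster'
  AND element = 'somelement'
  AND logtime BETWEEN '2004-01-07 00:00' AND '2004-02-08 00:00'
ORDER BY cluster, element, port,
         floor(extract(EPOCH FROM logtime) / 900), logtime;

SELECT s1.port, s1.logtime AS start, s2.logtime AS "end",
       (8.0 * (s2.rxoctets - s1.rxoctets)
            / extract(EPOCH FROM (s2.logtime - s1.logtime)))::int8 AS rxbps,
       (8.0 * (s2.txoctets - s1.txoctets)
            / extract(EPOCH FROM (s2.logtime - s1.logtime)))::int8 AS txbps
FROM sampled s1
JOIN sampled s2 ON s2.cluster = s1.cluster
               AND s2.element = s1.element
               AND s2.port    = s1.port
               AND s2.bucket  = s1.bucket + 1   -- adjacent buckets only; a gap drops that pair
ORDER BY s1.port, s1.logtime;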
[ { "msg_contents": "This is a follow-up to an old thread of mine, but I can't find it now \nso I'll just re-summarize.\n\nI have a ~1 million row table that I mostly want to query by date \nrange. The rows are pretty uniformly spread over a 3 year date range. \nI have an index on the date column, but it wasn't always used in the \npast. I disabled the seqscan plan before running my query as a first \nfix, but it bothered me that I had to do that.\n\nNext, thanks to my earlier thread, I clustered the table on the date \ncolumn and then \"SET STATISTICS\" on the date column to be 100. That \ndid the trick, and I stopped explicitly disabling seqscan.\n\nToday, I noticed that Postgres (still 7.4) stopped using the date index \nagain. I checked the correlation for the date column and it was down \nto 0.4. So I guess that stat does drift away from 1.0 after \nclustering. That's a bummer, because clustering locks up the table \nwhile it works, which I can't really afford to do often. Even at a \ncorrelation of 0.4 on the date column, using the date index was still \nmuch faster than the seqscan plan that Postgres was choosing. Anyway, \nit's reclustering now.\n\nA common query looks like this:\n\nSELECT\n SUM(amount),\n SUM(quantity),\n date_trunc('day', date) AS date\nFROM\n mytable\nWHERE\n col1 IS NOT NULL AND\n col2 = 'foo' AND\n col3 = 'bar' AND\n date BETWEEN '2004-02-01 00:00:00' AND '2004-02-28 23:59:59'\nGROUP BY\n date_trunc('day', date)\nORDER BY\n date;\n\nThe EXPLAIN ANALYZE output should look like this:\n\n Sort (cost=4781.75..4824.15 rows=16963 width=23) (actual \ntime=2243.595..2243.619 rows=21 loops=1)\n Sort Key: date_trunc('day'::text, date)\n -> HashAggregate (cost=3462.87..3590.09 rows=16963 width=23) \n(actual time=2241.773..2243.454 rows=21 loops=1)\n -> Index Scan using mytable_date_idx on mytable \n(cost=0.00..3071.70 rows=52155 width=23) (actual time=2.610..1688.111 \nrows=49679 loops=1)\n Index Cond: ((date >= '2004-02-01 00:00:00'::timestamp \nwithout time zone) AND (date <= '2004-02-28 23:59:59'::timestamp \nwithout time zone))\n Filter: ((col1 IS NOT NULL) AND ((col2)::text = \n'foo'::text) AND ((col3)::text = 'bar'::text))\n Total runtime: 2244.391 ms\n\nUnfortunately, since I just re-clustered, I can't get the old EXPLAIN \noutput, but just imagine \"Seq Scan\" in place of \"Index Scan using \nmytable_date_idx\" to get the idea.\n\nMy question is: what other options do I have? Should I \"SET \nSTATISTICS\" on the date column to 200? 500? The maximum value of 1000? \n I want to do something that will convince Postgres that using the date \nindex is, by far, the best plan when running my queries, even when the \ndate column correlation stat drops well below 1.0.\n\n-John\n\n", "msg_date": "Sat, 21 Feb 2004 19:18:04 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Column correlation drifts, index ignored again" }, { "msg_contents": "On Saturday 21 February 2004 16:18, John Siracusa wrote:\nJohn,\n\n> Next, thanks to my earlier thread, I clustered the table on the date\n> column and then \"SET STATISTICS\" on the date column to be 100. That\n> did the trick, and I stopped explicitly disabling seqscan.\n\n100? Are you sure you don't mean some other number? 100 is not very high \nfor problem analyze issues. You might try 500. 
Generally when I have a \nproblem query I raise stats to something like 1000 and drop it down until the \nproblem behaviour starts re-appearing.\n\n> date_trunc('day', date) AS date\n\nHave you tried putting an index on date_trunc('day', date) and querying on \nthat instead of using this:\n\n> date BETWEEN '2004-02-01 00:00:00' AND '2004-02-28 23:59:59'\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 22 Feb 2004 11:05:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "On 2/22/04 2:05 PM, Josh Berkus wrote:\n> On Saturday 21 February 2004 16:18, John Siracusa wrote:\n>> Next, thanks to my earlier thread, I clustered the table on the date\n>> column and then \"SET STATISTICS\" on the date column to be 100. That\n>> did the trick, and I stopped explicitly disabling seqscan.\n> \n> 100? Are you sure you don't mean some other number? 100 is not very high\n> for problem analyze issues. You might try 500.\n\nIIRC, 100 was the number suggested in the earlier thread. I did set it to\n500 yesterday, I believe. We'll see how that goes.\n\n> Generally when I have a problem query I raise stats to something like 1000 and\n> drop it down until the problem behaviour starts re-appearing.\n\nSince this problem takes a long time to appear (months), that cycle could\ntake a long time... :)\n\n>> date_trunc('day', date) AS date\n> \n> Have you tried putting an index on date_trunc('day', date) and querying on\n> that instead of using this:\n> \n>> date BETWEEN '2004-02-01 00:00:00' AND '2004-02-28 23:59:59'\n\nNo, but then I'd just have a different index to persuade the planner to use\n:) Not every query does date_trunc() stuff, but they all do date ranges,\noften at a granularity of seconds.\n\n-John\n\n", "msg_date": "Sun, 22 Feb 2004 16:58:30 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> I want to do something that will convince Postgres that using the date \n> index is, by far, the best plan when running my queries, even when the \n> date column correlation stat drops well below 1.0.\n\nHave you tried experimenting with random_page_cost? Seems like your\nresults suggest that you need to lower it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Feb 2004 17:06:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "On 2/22/04 5:06 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> I want to do something that will convince Postgres that using the date\n>> index is, by far, the best plan when running my queries, even when the\n>> date column correlation stat drops well below 1.0.\n> \n> Have you tried experimenting with random_page_cost? 
Seems like your\n> results suggest that you need to lower it.\n\nI don't want to do anything that \"universal\" if I can help it, because I\ndon't want to adversely affect any other queries that the planner currently\naces.\n\nI'm guessing that the reason using the date index is always so much faster\nis that doing so only reads the rows in the date range (say, 1,000 of them)\ninstead of reading every single row in the table (1,000,000) as in a seqscan\nplan.\n\nI think the key is to get the planner to correctly ballpark the number of\nrows in the date range. If it does, I can't imagine it ever deciding to\nread 1,000,000 rows instead of 1,000 with any sane \"cost\" setting. I'm\nassuming the defaults are sane :)\n\n-John\n\n", "msg_date": "Sun, 22 Feb 2004 17:34:47 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> I think the key is to get the planner to correctly ballpark the number of\n> rows in the date range.\n\nI thought it was. What you showed was\n\n-> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows=52155 width=23) (actual time=2.610..1688.111 rows=49679 loops=1)\n\nwhich seemed plenty close enough to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Feb 2004 18:40:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "On 2/22/04 6:40 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> I think the key is to get the planner to correctly ballpark the number of\n>> rows in the date range.\n> \n> I thought it was. What you showed was\n> \n> -> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows=52155\n> width=23) (actual time=2.610..1688.111 rows=49679 loops=1)\n> \n> which seemed plenty close enough to me.\n\nThat's after the planner correctly chooses the date index. Unfortunately, I\nforgot to save the EXPLAIN output from when it was choosing seqscan instead.\nDoes the planner get estimates from both plans before deciding whether or\nnot to use the one that references the date index?\n\n-John\n\n", "msg_date": "Sun, 22 Feb 2004 19:09:07 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "John,\n\n> I think the key is to get the planner to correctly ballpark the number of\n> rows in the date range. If it does, I can't imagine it ever deciding to\n> read 1,000,000 rows instead of 1,000 with any sane \"cost\" setting. I'm\n> assuming the defaults are sane :)\n\nThe default for random_page_cost is sane, but very conservative; it's pretty \nmuch assuming tables that are bigger than RAM and a single IDE disk. If \nyour setup is better than that, you can lower it.\n\nFor example, in the ideal case (database fits in RAM, fast RAM, CPU, and \nrandom seek on the disk), you can lower it to 1.5. 
For less ideal \nsituations, 1.8 to 2.5 is reasonable on high-end hardware.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sun, 22 Feb 2004 16:17:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> Does the planner get estimates from both plans before deciding whether or\n> not to use the one that references the date index?\n\nThe rowcount estimate is made prior to the plan cost estimate, much less\nthe plan selection. So you'd see the same number either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Feb 2004 22:38:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "Josh Berkus wrote:\n> John,\n> \n> > I think the key is to get the planner to correctly ballpark the number of\n> > rows in the date range. If it does, I can't imagine it ever deciding to\n> > read 1,000,000 rows instead of 1,000 with any sane \"cost\" setting. I'm\n> > assuming the defaults are sane :)\n> \n> The default for random_page_cost is sane, but very conservative; it's pretty \n> much assuming tables that are bigger than RAM and a single IDE disk. If \n> your setup is better than that, you can lower it.\n> \n> For example, in the ideal case (database fits in RAM, fast RAM, CPU, and \n> random seek on the disk), you can lower it to 1.5. For less ideal \n> situations, 1.8 to 2.5 is reasonable on high-end hardware.\n\nI suspect this ultimately depends on the types of queries you do, the\nsize of the tables involved, disk cache, etc.\n\nFor instance, if you don't have sort_mem set high enough, then things\nlike large hash joins will spill to disk and almost certainly cause a\nlot of contention (random access patterns) even if a sequential scan is\nbeing used to read the table data. The fix there is, of course, to\nincrease sort_mem if possible (as long as you don't cause paging during\nthe operation, which will also slow things down), but you might not\nreally have that option -- in which case you might see some improvement\nby tweaking random_page_cost.\n\nOn a system where the PG data is stored on a disk that does other things,\nyou'll actually want random_page_cost to be *closer* to 1 rather than\nfurther away. The reason is that the average access time of a sequential\npage in that case is much closer to that of a random page than it would\nbe if the disk in question were dedicated to PG duty. This also goes for\nlarge RAID setups where multiple types of data (e.g., home directories,\nlog files, etc.) are stored along with the PG data -- such disk setups\nwill have more random activity happening on the disk while PG activity\nis happening, thus making the PG sequential access pattern appear more\nlike random access.\n\n\nThe best way I can think of to tune random_page_cost is to do EXPLAIN\nANALYZE on the queries you want to optimize the most under the\ncircumstances the queries are most likely to be run, then do the same\nwith enable_seqscan off. Then look at the ratio of predicted and actual\ntimes for the scans themselves. 
Once you've done that, you can tweak\nrandom_page_cost up or down and do further EXPLAINs (with enable_seqscan\noff and without ANALYZE) until the ratio of the estimated index scan time\nto the actual index scan time of the same query (gotten previously via\nEXPLAIN ANALYZE) is the same as the ratio of the estimated sequential\nscan time (which won't change based on random_page_cost) to the actual\nsequential scan time.\n\nSo:\n\n1. set enable_seqscan = on\n2. set random_page_cost = <some really high value to force seqscans>\n3. EXPLAIN ANALYZE query\n4. record the ratio of estimated to actual scan times.\n5. set enable_seqscan = off\n6. set random_page_cost = <rough estimate of what it should be>\n7. EXPLAIN ANALYZE query\n8. record the actual index scan time(s)\n9. tweak random_page_cost\n10. EXPLAIN query\n11. If ratio of estimate to actual (recorded in step 8) is much\n different than that recorded in step 4, then go back to step 9.\n Reduce random_page_cost if the random ratio is larger than the\n sequential ratio, increase if it's smaller.\n\n\nAs a result, I ended up setting my random_page_cost to 1.5 on my system.\nI suspect that the amount of pain you'll suffer when the planner\nincorrectly chooses a sequential scan is much greater on average than\nthe amount of pain if it incorrectly chooses an index scan, so I'd tend\nto favor erring on the low side for random_page_cost.\n\n\nI'll know tomorrow whether or not my tweaking worked properly, as I have\na job that kicks off every night that scans the entire filesystem and\nstores all the inode information about every file in a newly-created table,\nthen \"merges\" it into the existing file information table. Each table\nis about 2.5 million rows...\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Mon, 23 Feb 2004 19:56:02 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "Kevin,\n\n> 1. set enable_seqscan = on\n> 2. set random_page_cost = <some really high value to force seqscans>\n> 3. EXPLAIN ANALYZE query\n> 4. record the ratio of estimated to actual scan times.\n> 5. set enable_seqscan = off\n> 6. set random_page_cost = <rough estimate of what it should be>\n> 7. EXPLAIN ANALYZE query\n> 8. record the actual index scan time(s)\n> 9. tweak random_page_cost\n> 10. EXPLAIN query\n> 11. If ratio of estimate to actual (recorded in step 8) is much\n> different than that recorded in step 4, then go back to step 9.\n> Reduce random_page_cost if the random ratio is larger than the\n> sequential ratio, increase if it's smaller.\n\nNice, we ought to post that somewhere people can find it in the future.\n\nI'm also glad that your new job allows you to continue doing PostgreSQL stuff.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 24 Feb 2004 08:59:32 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Kevin,\n>> 1. set enable_seqscan = on\n>> 2. set random_page_cost = <some really high value to force seqscans>\n>> 3. EXPLAIN ANALYZE query\n>> 4. record the ratio of estimated to actual scan times.\n>> 5. set enable_seqscan = off\n>> 6. set random_page_cost = <rough estimate of what it should be>\n>> 7. EXPLAIN ANALYZE query\n>> 8. record the actual index scan time(s)\n>> 9. tweak random_page_cost\n>> 10. 
EXPLAIN query\n>> 11. If ratio of estimate to actual (recorded in step 8) is much\n>> different than that recorded in step 4, then go back to step 9.\n>> Reduce random_page_cost if the random ratio is larger than the\n>> sequential ratio, increase if it's smaller.\n\n> Nice, we ought to post that somewhere people can find it in the future.\n\nIf we post it as recommended procedure we had better put big caveat\nnotices on it. The pitfalls with doing this are:\n\n1. If you repeat the sequence exactly as given, you will be homing in on\na RANDOM_PAGE_COST that describes your system's behavior with a fully\ncached query. It is to be expected that you will end up with 1.0 or\nsomething very close to it. The only way to avoid that is to use a\nquery that is large enough to blow out your kernel's RAM cache; which of\ncourse will take long enough that iterating step 10 will be no fun,\nand people will be mighty tempted to take shortcuts.\n\n2. Of course, you are computing a RANDOM_PAGE_COST that is relevant to\njust this single query. Prudence would suggest repeating the process\nwith several different queries and taking some sort of average.\n\nWhen I did the experiments that led up to choosing 4.0 as the default,\nsome years ago, it took several days of thrashing the disks on a couple\nof different machines before I had numbers that I didn't think were\nmostly noise :-(. I am *real* suspicious of any replacement numbers\nthat have been derived in just a few minutes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Feb 2004 13:29:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again " }, { "msg_contents": "Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > Kevin,\n> >> 1. set enable_seqscan = on\n> >> 2. set random_page_cost = <some really high value to force seqscans>\n> >> 3. EXPLAIN ANALYZE query\n> >> 4. record the ratio of estimated to actual scan times.\n> >> 5. set enable_seqscan = off\n> >> 6. set random_page_cost = <rough estimate of what it should be>\n> >> 7. EXPLAIN ANALYZE query\n> >> 8. record the actual index scan time(s)\n> >> 9. tweak random_page_cost\n> >> 10. EXPLAIN query\n> >> 11. If ratio of estimate to actual (recorded in step 8) is much\n> >> different than that recorded in step 4, then go back to step 9.\n> >> Reduce random_page_cost if the random ratio is larger than the\n> >> sequential ratio, increase if it's smaller.\n> \n> > Nice, we ought to post that somewhere people can find it in the future.\n> \n> If we post it as recommended procedure we had better put big caveat\n> notices on it. The pitfalls with doing this are:\n> \n> 1. If you repeat the sequence exactly as given, you will be homing in on\n> a RANDOM_PAGE_COST that describes your system's behavior with a fully\n> cached query. It is to be expected that you will end up with 1.0 or\n> something very close to it. The only way to avoid that is to use a\n> query that is large enough to blow out your kernel's RAM cache; which of\n> course will take long enough that iterating step 10 will be no fun,\n> and people will be mighty tempted to take shortcuts.\n\nOops. You're right. I did this on my system, but forgot to put it in\nthe list of things to do:\n\n0. Fill the page cache with something other than PG data, e.g. by\n repeatedly catting several large files and redirecting the output to\n /dev/null. 
The sum total size of the files should exceed the amount\n of memory on the system.\n\nThe reason you might not have to do this between EXPLAIN ANALYZE queries\nis that the first query will scan the table itself while the second one\nwill scan the index. But that was probably more specific to the query I\nwas doing. If the one you're doing is complex enough the system may have\nto read data pages from the table itself after fetching the index page,\nin which case you'll want to fill the page cache between the queries.\n\n> 2. Of course, you are computing a RANDOM_PAGE_COST that is relevant to\n> just this single query. Prudence would suggest repeating the process\n> with several different queries and taking some sort of average.\n\nRight. And the average should probably be weighted based on the\nrelative frequency that the query in question will be executed.\n\nIn my case, the query I was experimenting with was by far the biggest\nquery that occurs on my system (though it turns out that there are\nothers in that same process that I should look at as well).\n\n> When I did the experiments that led up to choosing 4.0 as the default,\n> some years ago, it took several days of thrashing the disks on a couple\n> of different machines before I had numbers that I didn't think were\n> mostly noise :-(. I am *real* suspicious of any replacement numbers\n> that have been derived in just a few minutes.\n\nOne problem I've been running into is the merge join spilling to disk\nbecause sort_mem isn't big enough. The problem isn't that this is\nhappening, it's that I think the planner is underestimating the impact\nthat doing this will have on the time the merge join takes. Does the\nplanner even account for the possibility that a sort or join will spill\nto disk? Spilling to disk like that will suddenly cause sequential\nreads to perform much more like random reads, unless the sequential\nscans are performed in their entirety between sorts/merges.\n\n\nIn any case, one thing that none of this really accounts for is that\nit's better to set random_page_cost too low than too high. The reason is\nthat index scans are more selective than sequential scans: a sequential\nscan will read the entire table every time, whereas an index scan will\nread only the index pages (and their parents) that match the query.\nMy experience is that when the planner improperly computes the selectivity\nof the query (e.g., by not having good enough or sufficiently up to\ndate statistics), it generally computes a lower selectivity than the\nquery actually represents, and thus selects a sequential scan when an\nindex scan would be more efficient.\n\nThe auto vacuum daemon helps in this regard, by keeping the statistics\nmore up-to-date.\n\nCertainly you shouldn't go overboard by setting random_page_cost too low\n\"just in case\", but it does mean that if you go through the process of\nrunning tests to determine the proper value for random_page_cost, you\nshould probably select a random_page_cost that's in the lower part of\nthe range of values you got.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Tue, 24 Feb 2004 12:14:20 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "On Tuesday February 24 2004 1:14, Kevin Brown wrote:\n>\n> One problem I've been running into is the merge join spilling to disk\n> because sort_mem isn't big enough. 
The problem isn't that this is\n> happening, it's that I think the planner is underestimating the impact\n> that doing this will have on the time the merge join takes. Does the\n> planner even account for the possibility that a sort or join will spill\n> to disk? Spilling to disk like that will suddenly cause sequential\n> reads to perform much more like random reads, unless the sequential\n> scans are performed in their entirety between sorts/merges.\n\nHow do you know the merge join is spilling to disk? How are you identifying \nthat? Just assuming from vmstat? iostat?\n", "msg_date": "Tue, 24 Feb 2004 13:54:15 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "Ed L. wrote:\n> How do you know the merge join is spilling to disk? How are you identifying \n> that? Just assuming from vmstat? iostat?\n\nThe existence of files in $PG_DATA/base/<db-oid>/pgsql_tmp while the\nquery is running, combined with the EXPLAIN output (which shows what\nsorts and joins are being performed).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Tue, 24 Feb 2004 13:24:19 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> One problem I've been running into is the merge join spilling to disk\n> because sort_mem isn't big enough. The problem isn't that this is\n> happening, it's that I think the planner is underestimating the impact\n> that doing this will have on the time the merge join takes. Does the\n> planner even account for the possibility that a sort or join will spill\n> to disk?\n\nYes it does. I thought it was making a pretty good estimate, actually.\nThe only obvious hole in the assumptions is\n\n * The disk traffic is assumed to be half sequential and half random\n * accesses (XXX can't we refine that guess?)\n\nBecause of the way that tuplesort.c works, the first merge pass should\nbe pretty well sequential, but I think the passes after that might be\nmostly random from the kernel's viewpoint :-(. Possibly the I/O cost\nshould be adjusted depending on how many merge passes we expect.\n\n\n> In any case, one thing that none of this really accounts for is that\n> it's better to set random_page_cost too low than too high.\n\nThat depends on what you are doing, although I will concede that a lot\nof people are doing things where indexscans should be favored.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Feb 2004 17:16:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Column correlation drifts, index ignored again " } ]
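Kevin's calibration recipe above, written out as a psql session for reference — the SELECT is a placeholder for one of your own heavy queries, and the numbers are only starting points:

-- step 0: push PostgreSQL data out of the OS cache first, e.g. by cat'ing
-- several large unrelated files (totalling more than RAM) to /dev/null
SET enable_seqscan = on;
SET random_page_cost = 100;      -- absurdly high: forces the sequential-scan plan
EXPLAIN ANALYZE SELECT ...;      -- record the ratio of estimated to actual scan time
SET enable_seqscan = off;
SET random_page_cost = 2.0;      -- rough first guess
EXPLAIN ANALYZE SELECT ...;      -- record the actual index scan time once
-- then iterate with plain EXPLAIN, nudging random_page_cost until the
-- estimated-to-actual ratio for the index scan matches the seqscan ratio:
-- reduce it if the index ratio is larger, increase it if smaller
SET random_page_cost = 1.8;
EXPLAIN SELECT ...;

As Tom cautions, this should be repeated for several representative queries, and fully cached runs will drag the answer toward 1.0.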
[ { "msg_contents": "I'm trying to join two tables on an inet column, where one of the\ncolumns may contain a subnet rather than a single host. Somehow the\noperation isn't completing quite fast enough, even though neither table\nis very large:\n\n table | rows\n--------------------+--------\n clients | 115472\n clients_commercial | 11670\n\n\nFirst attempt, cancelled after running for half an hour:\n\nSELECT\n c.address AS address,\n cc.address AS network\nFROM\n clients c\n JOIN clients_commercial cc ON (c.address <<= cc.address)\n;\n\n Nested Loop\n (cost=189.00..27359887.76 rows=607947200 width=22)\n Join Filter: (\"outer\".address <<= \"inner\".address)\n -> Seq Scan on clients c\n (cost=0.00..2074.76 rows=102176 width=11)\n -> Materialize\n (cost=189.00..308.00 rows=11900 width=11)\n -> Seq Scan on clients_commercial cc\n (cost=0.00..189.00 rows=11900 width=11)\n\n\n\nSecond attempt, completes within 10 min:\n\nSELECT\n c.address AS address,\n cc.address AS network\nFROM\n clients c,\n clients_commercial cc\nWHERE\n c.commercial IS NULL\n AND c.address <<= cc.address\n;\n\n Nested Loop\n (cost=189.00..139084.01 rows=3040450 width=22)\n Join Filter: (\"outer\".address <<= \"inner\".address)\n -> Seq Scan on clients c\n (cost=0.00..2074.76 rows=511 width=11)\n Filter: (commercial IS NULL)\n -> Materialize\n (cost=189.00..308.00 rows=11900 width=11)\n -> Seq Scan on clients_commercial cc\n (cost=0.00..189.00 rows=11900 width=11)\n\n\nThird attempt; provided some indexes, which unfortunately don't get\nused, making the query twice as slow as the previous one:\n\nSELECT\n c.address AS address,\n cc.address AS network\nFROM\n clients c,\n clients_commercial cc\nWHERE\n c.commercial IS NULL\n AND set_masklen(c.address, masklen(cc.address)) = cc.address\n;\n\nCREATE INDEX clients_commercial_masklen_idx\nON clients_commercial((masklen(address)));\n\nCREATE INDEX clients_32_idx\nON clients((set_masklen(address, 32)));\n\nCREATE INDEX clients_24_idx\nON clients((set_masklen(address, 24)));\n\nCREATE INDEX clients_16_idx\nON clients((set_masklen(address, 16)));\n\n Nested Loop\n (cost=189.00..169488.51 rows=479 width=22)\n Join Filter: (set_masklen(\"outer\".address, masklen(\"inner\".address))\n= \"inner\".address)\n -> Seq Scan on clients c\n (cost=0.00..2074.76 rows=511 width=11)\n Filter: (commercial IS NULL)\n -> Materialize\n (cost=189.00..308.00 rows=11900 width=11)\n -> Seq Scan on clients_commercial cc\n (cost=0.00..189.00 rows=11900 width=11)\n\n\nAnything else I could try? BTREE indexes don't seem to work with the <<=\noperator; is this not possible in principal, or simply something that\nhas not been implmented yet?\n\n", "msg_date": "Mon, 23 Feb 2004 12:48:02 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow join using network address function" }, { "msg_contents": "On Mon, Feb 23, 2004 at 12:48:02PM +0100, Eric Jain wrote:\n> I'm trying to join two tables on an inet column, where one of the\n> columns may contain a subnet rather than a single host. Somehow the\n> operation isn't completing quite fast enough, even though neither table\n> is very large:\n> \n> table | rows\n> --------------------+--------\n> clients | 115472\n> clients_commercial | 11670\n\n[snip]\n\n> Anything else I could try? BTREE indexes don't seem to work with the <<=\n> operator; is this not possible in principal, or simply something that\n> has not been implmented yet?\n\nI've been looking at a similar problem for a while. 
I found that the inet\ntype didn't really give me the flexibility I needed, and indexing it in\na way that worked with CIDR blocks didn't seem easy (and maybe not possible).\n\nSo I rolled my own, based on the seg sample.\n\n<http://word-to-the-wise.com/ipr.tgz> is a datatype that contains a range\nof IPv4 addresses, and which has the various operators to make it GIST\nindexable. Untar it into contrib and make as usual.\n\nInput is of the form '10.11.12.13' or '10.11.12.13.0/25' or\n'10.11.12.13-10.11.12.13.127'. The function display() takes an\nipr type and returns it formatted for display (as a dotted-quad if\na /32, as CIDR format if possible, as a range of dotted-quads otherwise).\n\nA bunch of operators are included, but '&&' returns true if two ipr\nfields intersect.\n\nBugs include:\n\n 0.0.0.0/0 doesn't do what it should on input.\n No documentation.\n No cast operators between ipr and inet types.\n No documentation.\n\nI was planning on doing some docs before releasing it, but here it\nis anyway.\n\nCheers,\n Steve\n-- \n-- Steve Atkins -- [email protected]\n", "msg_date": "Mon, 23 Feb 2004 08:07:34 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "Eric,\n\n> Nested Loop\n> (cost=189.00..27359887.76 rows=607947200 width=22)\n> Join Filter: (\"outer\".address <<= \"inner\".address)\n> -> Seq Scan on clients c\n> (cost=0.00..2074.76 rows=102176 width=11)\n> -> Materialize\n> (cost=189.00..308.00 rows=11900 width=11)\n> -> Seq Scan on clients_commercial cc\n> (cost=0.00..189.00 rows=11900 width=11)\n\nTo help you, we need EXPLAIN ANALYZE, not just EXPLAIN. Thanks!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 23 Feb 2004 12:04:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "On Пнд, 2004-02-23 at 12:04 -0800, Josh Berkus wrote:\n> Eric,\n> \n> > Nested Loop\n> > (cost=189.00..27359887.76 rows=607947200 width=22)\n> > Join Filter: (\"outer\".address <<= \"inner\".address)\n> > -> Seq Scan on clients c\n> > (cost=0.00..2074.76 rows=102176 width=11)\n> > -> Materialize\n> > (cost=189.00..308.00 rows=11900 width=11)\n> > -> Seq Scan on clients_commercial cc\n> > (cost=0.00..189.00 rows=11900 width=11)\n> \n> To help you, we need EXPLAIN ANALYZE, not just EXPLAIN. 
Thanks!\n\nHe said he cancelled the query.\n\n-- \nMarkus Bertheau <[email protected]>\n\n", "msg_date": "Tue, 24 Feb 2004 02:23:27 +0100", "msg_from": "Markus Bertheau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "> <http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n> a range of IPv4 addresses, and which has the various operators to \n> make it GIST indexable.\n\nGreat, this looks very promising.\n\n\n> No cast operators between ipr and inet types.\n\nAny way to work around this, short of dumping and reloading tables?\n\nSELECT ipr '1.2.3.4'; -- Okay\nSELECT ipr text(inet '1.2.3.4'); -- Syntax error, of course\nSELECT ipr(text(inet '1.2.3.4')); -- Function does not exist, of course\n...\n\n", "msg_date": "Tue, 24 Feb 2004 13:07:10 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "\"Eric Jain\" <[email protected]> writes:\n>> <http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n>> a range of IPv4 addresses, and which has the various operators to \n>> make it GIST indexable.\n\n> Great, this looks very promising.\n\n>> No cast operators between ipr and inet types.\n\n> Any way to work around this, short of dumping and reloading tables?\n\nWouldn't it be better to implement the GIST indexing operators of that\npackage on the standard datatypes? It wasn't apparent to me what \"range\nof IP addresses\" does for you that isn't covered by \"CIDR subnet\" for\nreal-world cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Feb 2004 10:23:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function " }, { "msg_contents": "Tom Lane wrote:\n\n>\"Eric Jain\" <[email protected]> writes:\n> \n>\n>>><http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n>>>a range of IPv4 addresses, and which has the various operators to \n>>>make it GIST indexable.\n>>> \n>>>\n>\n> \n>\n>>Great, this looks very promising.\n>> \n>>\n>\n> \n>\n>>>No cast operators between ipr and inet types.\n>>> \n>>>\n>\n> \n>\n>>Any way to work around this, short of dumping and reloading tables?\n>> \n>>\n>\n>Wouldn't it be better to implement the GIST indexing operators of that\n>package on the standard datatypes? It wasn't apparent to me what \"range\n>of IP addresses\" does for you that isn't covered by \"CIDR subnet\" for\n>real-world cases.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n> \n>\nWe currently only allow access to one of our apps based on IP address. \nThese IPs are stored one per row in a single table, but often represent \na contiguous piece of IP space, but does not represent a full subnet. \nThe current CIDR subnet has the limitation that it will only allow full \nsubnets, i.e. every IP address in 192.168.1.0/24. For example:\n\n192.168.1.15 -> 192.168.1.31\n\nThis range cannot be represented by a CIDR subnet, or it might be able \nto but I really dont want to figure it out each time. However this new \ntype allows us to store this range as one row. It allows an arbitrary \nrange of IP addresses, not just those in a specific subnet. 
I would see \nthis as a useful inclusion whether in the main src tree or in contrib \nand we will probably be using it when we get to \"mess\" with the database \nschema for this app in the next few months, in fact I have already \ninserted it into our PG source tree ;-).\n\nNick\n\nP.S. We are not responsible for the IP address ranges, we just get told \nwhat they are.\n\n\n\n\n", "msg_date": "Tue, 24 Feb 2004 16:44:57 +0000", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "On Tue, Feb 24, 2004 at 10:23:22AM -0500, Tom Lane wrote:\n> \"Eric Jain\" <[email protected]> writes:\n> >> <http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n> >> a range of IPv4 addresses, and which has the various operators to \n> >> make it GIST indexable.\n> \n> > Great, this looks very promising.\n> \n> >> No cast operators between ipr and inet types.\n> \n> > Any way to work around this, short of dumping and reloading tables?\n> \n> Wouldn't it be better to implement the GIST indexing operators of that\n> package on the standard datatypes? It wasn't apparent to me what \"range\n> of IP addresses\" does for you that isn't covered by \"CIDR subnet\" for\n> real-world cases.\n\nWell, maybe.\n\nHowever, many of the cases where people want to use this sort of\nfunctionality (address range ownership, email blacklists etc) an\nentity is likely to associated with one or a small number of ranges\nof contiguous addresses. Those ranges are often not simple CIDR\nblocks, and deaggregating them into a sequence of CIDR blocks\ndoesn't buy anything and complicates the problem.\n\nI also managed to convince myself that it wasn't possible to do\na useful GIST index of a CIDR datatype - as the union between two\nadjacent CIDR blocks as a CIDR block is often far, far larger than\nthe actual range involved - consider 63.255.255.255/32 and 64.0.0.0/32.\nThat seemed to break the indexing algorithms. I'd like to be proven\nwrong on that, but would still find ipr a more useful datatype than\ninet for my applications.\n\nCheers,\n Steve\n\n", "msg_date": "Tue, 24 Feb 2004 09:10:26 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "On Tue, Feb 24, 2004 at 01:07:10PM +0100, Eric Jain wrote:\n> > <http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n> > a range of IPv4 addresses, and which has the various operators to \n> > make it GIST indexable.\n> \n> Great, this looks very promising.\n> \n> > No cast operators between ipr and inet types.\n> \n> Any way to work around this, short of dumping and reloading tables?\n> \n> SELECT ipr '1.2.3.4'; -- Okay\n> SELECT ipr text(inet '1.2.3.4'); -- Syntax error, of course\n> SELECT ipr(text(inet '1.2.3.4')); -- Function does not exist, of course\n\nThere's probably some horrible SQL hack that would let you do it, but\nI should add some casting code anyway. 
Shouldn't be too painful to do -\nI'll try and get that, and some minimal documentation out today.\n\nCheers,\n Steve\n", "msg_date": "Tue, 24 Feb 2004 09:14:42 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" }, { "msg_contents": "On Tue, Feb 24, 2004 at 09:14:42AM -0800, Steve Atkins wrote:\n> On Tue, Feb 24, 2004 at 01:07:10PM +0100, Eric Jain wrote:\n> > > <http://word-to-the-wise.com/ipr.tgz> is a datatype that contains \n> > > a range of IPv4 addresses, and which has the various operators to \n> > > make it GIST indexable.\n> > \n> > Great, this looks very promising.\n> > \n> > > No cast operators between ipr and inet types.\n> > \n> > Any way to work around this, short of dumping and reloading tables?\n> \n> There's probably some horrible SQL hack that would let you do it, but\n> I should add some casting code anyway. Shouldn't be too painful to do -\n> I'll try and get that, and some minimal documentation out today.\n\nDone. <http://word-to-the-wise.com/ipr/>\n\nThis really isn't pgsql-performance content, so this is the last time\nI'll mention it here.\n\nCheers,\n Steve\n", "msg_date": "Tue, 24 Feb 2004 22:28:37 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join using network address function" } ]
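For orientation, here is roughly what the original join would look like once the address columns are migrated to the ipr type — a sketch only, assuming the contrib module installs a default GiST operator class for ipr (as the seg example it is based on does), and using the && "ranges intersect" operator described above:

CREATE INDEX clients_commercial_addr_gist
    ON clients_commercial USING gist (address);

SELECT c.address, cc.address AS network
FROM clients c
JOIN clients_commercial cc ON c.address && cc.address
WHERE c.commercial IS NULL;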
[ { "msg_contents": "EXPLAIN\nINSERT INTO public.historical_price ( security_serial_id, [7 fields of proprietary data])\nSELECT public.security_series.security_serial_id, [7 fields of data],\nFROM obsolete.datadb_fix INNER JOIN (obsolete.calcdb INNER JOIN public.security_series ON obsolete.calcdb.serial=public.security_series.legacy_calcdb_id) ON obsolete.datadb_fix.id=public.security_series.legacy_calcdb_id;\n\ndatadb_fix is about 5.5MM records. The other two tables are about 15K records.\n\n Hash Join (cost=1151.63..225863.54 rows=5535794 width=53)\n Hash Cond: (\"outer\".id = \"inner\".serial)\n -> Seq Scan on datadb_fix (cost=0.00..121867.99 rows=6729299 width=28)\n -> Hash (cost=1115.54..1115.54 rows=14438 width=25)\n -> Hash Join (cost=609.96..1115.54 rows=14438 width=25)\n Hash Cond: (\"outer\".legacy_calcdb_id = \"inner\".serial)\n -> Seq Scan on security_series (cost=0.00..247.40 rows=15540 width=13)\n -> Hash (cost=572.37..572.37 rows=15037 width=12)\n -> Seq Scan on calcdb (cost=0.00..572.37 rows=15037 width=12)\n\npim_new-# Table \"obsolete.datadb_fix\"\npim_new-# Column | Type | Modifiers\npim_new-# -------------+------------------+-----------\npim_new-# serial | integer |\npim_new-# id | integer |\npim_new-# date | date |\n[4 fields deleted]\npim_new-# Indexes: sb_data_pkey unique btree (id, date),\npim_new-# datadb1_id btree (id),\n\n\npim_new=# \\d obsolete.calcdb\n Table \"obsolete.calcdb\"\n Column | Type | Modifiers\n\n--------------------+----------------------+------------------------------------\n-------------------\n serial | integer | not null default nextval('\"calcdb_s\nerial_seq\"'::text)\n[...30 proprietary fields]\n\nIndexes: calcdb_serial_key unique btree (serial),\n[...5 other indexes]\n\npim_new=# \\d security_series\n Table \"public.security_series\"\n Column | Type | Modifiers\n--------------------+--------------+-----------\n security_serial_id | integer | not null\n period | character(1) | not null\n legacy_calcdb_id | integer |\nIndexes: security_series_pkey primary key btree (security_serial_id, period),\n secseries_legacy_id_idx1 btree (legacy_calcdb_id)\n\nThe target table has three indexes on it, so I suppose that accounts for SOME extra time. I ended up cancelling the query, running the select on a faster machine into an unindexed temp table, then using COPY out and in. That process took about 2.5 hours total. Machine: Linux, PG 7.3.4, 1.1GHz, 768MB RAM, unfortunately running other stuff. The first try which didn't finish in 24 hours was on Mac OS X Jaguar, PG 7.3.3, 1GHz, 256MB (please don't laugh). Yes, hardware upgrades are coming, but I need to estimate how much more I have to squeeze out of the DB and client applications.\n", "msg_date": "Mon, 23 Feb 2004 12:00:53 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: JOIN order, 15K, 15K, 7MM rows" } ]
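The workaround described in the last paragraph, as a rough psql outline — the staging table name and file path are invented, and the column list is whatever the INSERT ... SELECT above produces. Dropping historical_price's three indexes before the load and recreating them afterwards would also avoid maintaining them row by row:

CREATE TABLE stage_price AS
SELECT ...;                       -- the same join as the INSERT ... SELECT above,
                                  -- but into a table with no indexes or constraints
\copy stage_price to '/tmp/stage_price.copy'
-- then, connected to the production database:
\copy public.historical_price from '/tmp/stage_price.copy'
-- recreate the indexes on historical_price here if they were dropped first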
[ { "msg_contents": "\nA 7.3.4 question...\n\nI want to \"expire\" some data after 90 days, but not delete too\nmuch at once so as not to overwhelm a system with precariously\nbalanced disk I/O and on a table with millions of rows. If I \ncould say it the way I think for a simple example, it'd be\nlike this:\n\n\tdelete from mytable\n\twhere posteddatetime < now() - '90 days'\n\tlimit 100;\n\nOf course, that's not legal 7.3.4 syntax. These are both too\nslow due to sequential scan of table:\n\n\tdelete from mytable where key in (\n\t\tselect key\n\t\tfrom mytable\n\t\twhere posteddatetime < now() - '90 days'\n\t\tlimit 100);\nor \n\tdelete from mytable where exists (\n\t\tselect m.key\n\t\tfrom mytable m\n\t\twhere m.key = mytable.key\n\t\t and m.posteddatetime < now() - '90 days'\n\t\tlimit 100);\n\nTried to use a cursor, but couldn't figure out the syntax\nfor select-for-delete yet, or find appropriate example on\ngoogle. Any clues?\n\nTIA.\n\n", "msg_date": "Mon, 23 Feb 2004 19:10:57 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] slow small delete on large table" }, { "msg_contents": "> Of course, that's not legal 7.3.4 syntax. These are both too\n> slow due to sequential scan of table:\n> \n> \tdelete from mytable where key in (\n> \t\tselect key\n> \t\tfrom mytable\n> \t\twhere posteddatetime < now() - '90 days'\n> \t\tlimit 100);\n\nUpgrade to 7.4 - the query above will be vastly faster.\n\n> \tdelete from mytable where exists (\n> \t\tselect m.key\n> \t\tfrom mytable m\n> \t\twhere m.key = mytable.key\n> \t\t and m.posteddatetime < now() - '90 days'\n> \t\tlimit 100);\n\nThat one I used to use on 7.3 - I seem to recall it indexed nicely.\n\nChris\n", "msg_date": "Tue, 24 Feb 2004 10:34:00 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table" }, { "msg_contents": "On Mon, Feb 23, 2004 at 19:10:57 -0700,\n \"Ed L.\" <[email protected]> wrote:\n> \n> A 7.3.4 question...\n> \n> I want to \"expire\" some data after 90 days, but not delete too\n> much at once so as not to overwhelm a system with precariously\n> balanced disk I/O and on a table with millions of rows. If I \n> could say it the way I think for a simple example, it'd be\n> like this:\n\nIf there aren't foreign keys into the table from which rows are being\ndeleted, then a delete shouldn't have a big impact on the system.\nIf you do the expires frequently, then there won't be as many records\nto delete at one time. The other response showed you how to avoid the\nsequential scan, which is the other part of the problem.\n", "msg_date": "Mon, 23 Feb 2004 21:00:17 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table" }, { "msg_contents": "[email protected] (\"Ed L.\") wrote:\n> A 7.3.4 question...\n>\n> I want to \"expire\" some data after 90 days, but not delete too\n> much at once so as not to overwhelm a system with precariously\n> balanced disk I/O and on a table with millions of rows. If I \n> could say it the way I think for a simple example, it'd be\n> like this:\n>\n> \tdelete from mytable\n> \twhere posteddatetime < now() - '90 days'\n> \tlimit 100;\n>\n> Of course, that's not legal 7.3.4 syntax. 
These are both too\n> slow due to sequential scan of table:\n>\n> \tdelete from mytable where key in (\n> \t\tselect key\n> \t\tfrom mytable\n> \t\twhere posteddatetime < now() - '90 days'\n> \t\tlimit 100);\n> or \n> \tdelete from mytable where exists (\n> \t\tselect m.key\n> \t\tfrom mytable m\n> \t\twhere m.key = mytable.key\n> \t\t and m.posteddatetime < now() - '90 days'\n> \t\tlimit 100);\n>\n> Tried to use a cursor, but couldn't figure out the syntax\n> for select-for-delete yet, or find appropriate example on\n> google. Any clues?\n\nI'm hoping that there's an index on posteddatetime, right?\n\nThere are several approaches that would be quite sensible to consider...\n\n1. Delete records as often as possible, so that the number deleted at\nany given time stays small.\n\n2. Or find an hour at which the system isn't busy, and blow through a\nlot of them then.\n\n3. Open a cursor querying records in your acceptable range, e.g.\n\n declare nukem cursor for select key from mytable where posteddate <\n now() - '90 days'::interval;\n\n Fetch 100 entries from the cursor, and submit, across another\n connection, delete requests for the 100 entries, all as one\n transaction, which you commit.\n\n Sleep a bit, and fetch another 100.\n\n Note that the cursor will draw groups of 100 entries into memory;\n it's good to immediately delete them, as they'll be in buffers.\n Keeping the number of rows deleted small, and sleeping a bit, means\n you're not trashing buffers too badly. The query doesn't enforce\n any particular order on things; it effect chews out old entries in\n any order the query finds them. If you can't keep up with\n insertions, there could be rather old entries that would linger\n around...\n\n This parallels the \"sleepy vacuum\" that takes a similar strategy to\n keeping vacuums from destroying performance.\n\n4. Rotor tables.\n\nHave \"mytable\" be a view on a sequence of tables.\n\ncreate view mytable as \n select * from mytable1 \n union all \n select * from mytable2\n union all \n select * from mytable3 \n union all \n select * from mytable4 \n union all \n select * from mytable5 \n union all \n select * from mytable6 \n union all \n select * from mytable7 \n union all \n select * from mytable8\n union all \n select * from mytable9 \n union all\n select * from mytable10\n\nA rule can choose an appropriate table from the 9 to _actually_ insert\ninto.\n\nEvery 3 days, you truncate the eldest table and rotate on to insert\ninto the next table. \n\nThat will take mere moments, which is real helpful to save you I/O on\nthe deletes.\n\nThere is an unfortunate other problem with this; joins against mytable\nare pretty bad, and self-joins effectively turn into a union all\nacross 100 joins. (Table 1 against 1-10, Table 2 against 1-10, and so\nforth...)\n\nFor this not to suck rather incredibly requires fairly carefully\nstructuring queries on the table. That may or may not be compatible\nwith your needs...\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/x.html\nA Linux machine! because a 486 is a terrible thing to waste! 
\n-- <[email protected]> Joe Sloan\n", "msg_date": "Mon, 23 Feb 2004 22:48:29 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table" }, { "msg_contents": "\"Ed L.\" <[email protected]> writes:\n> If I could say it the way I think for a simple example, it'd be\n> like this:\n\n> \tdelete from mytable\n> \twhere posteddatetime < now() - '90 days'\n> \tlimit 100;\n\n> Of course, that's not legal 7.3.4 syntax.\n\nAssuming you have a primary key on the table, consider this:\n\nCREATE TEMP TABLE doomed AS\n SELECT key FROM mytable WHERE posteddatetime < now() - '90 days'\n LIMIT 100;\n\nDELETE FROM mytable WHERE key = doomed.key;\n\nDROP TABLE doomed;\n\nDepending on the size of mytable, you might need an \"ANALYZE doomed\"\nin there, but I'm suspecting not. A quick experiment suggests that\nyou'll get a plan with an inner indexscan on mytable.key, which is\nexactly what you need.\n\nSee also Chris Browne's excellent suggestions nearby, if you are willing\nto make larger readjustments in your thinking...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Feb 2004 00:23:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table " }, { "msg_contents": "On Monday February 23 2004 10:23, Tom Lane wrote:\n> \"Ed L.\" <[email protected]> writes:\n> > If I could say it the way I think for a simple example, it'd be\n> > like this:\n> >\n> > \tdelete from mytable\n> > \twhere posteddatetime < now() - '90 days'\n> > \tlimit 100;\n> >\n> > Of course, that's not legal 7.3.4 syntax.\n>\n> Assuming you have a primary key on the table, consider this:\n>\n> CREATE TEMP TABLE doomed AS\n> SELECT key FROM mytable WHERE posteddatetime < now() - '90 days'\n> LIMIT 100;\n>\n> DELETE FROM mytable WHERE key = doomed.key;\n>\n> DROP TABLE doomed;\n>\n> Depending on the size of mytable, you might need an \"ANALYZE doomed\"\n> in there, but I'm suspecting not. A quick experiment suggests that\n> you'll get a plan with an inner indexscan on mytable.key, which is\n> exactly what you need.\n\nI didn't mention I'd written a trigger to do delete N rows on each new \ninsert (with a delay governor preventing deletion avalanches). The \napproach looks a little heavy to be done from within a trigger with the \nresponse time I need, but I'll try it. Cantchajust toss in that \"limit N\" \nfunctionality to delete clauses? How hard could that be? ;)\n\n> See also Chris Browne's excellent suggestions nearby, if you are willing\n> to make larger readjustments in your thinking...\n\nI did a search for articles by Chris Browne, didn't see one that appeared \nrelevant. What is the thread subject to which you refer?\n\n", "msg_date": "Tue, 24 Feb 2004 11:36:08 -0700", "msg_from": "\"Ed L.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table" }, { "msg_contents": "After a long battle with technology, [email protected] (\"Ed L.\"), an earthling, wrote:\n> On Monday February 23 2004 10:23, Tom Lane wrote:\n>> \"Ed L.\" <[email protected]> writes:\n>> Depending on the size of mytable, you might need an \"ANALYZE doomed\"\n>> in there, but I'm suspecting not. 
A quick experiment suggests that\n>> you'll get a plan with an inner indexscan on mytable.key, which is\n>> exactly what you need.\n>\n> I didn't mention I'd written a trigger to do delete N rows on each new \n> insert (with a delay governor preventing deletion avalanches). The \n> approach looks a little heavy to be done from within a trigger with the \n> response time I need, but I'll try it. Cantchajust toss in that \"limit N\" \n> functionality to delete clauses? How hard could that be? ;)\n\nIt's nonstandard, which will get you a certain amount of opposition\n\"for free;\" the problem with nonstandard behaviour is that sometimes\nthe implications haven't been thought out...\n\n>> See also Chris Browne's excellent suggestions nearby, if you are willing\n>> to make larger readjustments in your thinking...\n>\n> I did a search for articles by Chris Browne, didn't see one that\n> appeared relevant. What is the thread subject to which you refer?\n\nIt's in the same thread. I suggested having a daemon running a cursor\n(amounting to a slightly more expensive version of Tom's \"doomed temp\ntable\" approach), or using \"rotor\" tables where you could TRUNCATE a\ntable every few days which would be _really_ cheap...\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/emacs.html\nExpect the unexpected.\n-- The Hitchhiker's Guide to the Galaxy, page 7023\n", "msg_date": "Tue, 24 Feb 2004 14:12:54 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] slow small delete on large table" } ]
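A minimal sketch of the batch-delete approach discussed in this thread, assuming the names used above (mytable, key, posteddatetime) and an index on both key and posteddatetime; it is meant to be run repeatedly (e.g. from cron) until a batch deletes zero rows. The join form of the DELETE is taken directly from the suggestion above and relies on the implicit FROM-clause behaviour of 7.x so the planner can use an inner index scan on mytable.key:

    BEGIN;
    -- collect the next small batch of expired keys; LIMIT keeps each run cheap
    CREATE TEMP TABLE doomed AS
        SELECT key
        FROM mytable
        WHERE posteddatetime < now() - '90 days'::interval
        LIMIT 100;
    -- delete only those rows, via the primary-key index on mytable
    DELETE FROM mytable WHERE key = doomed.key;
    DROP TABLE doomed;
    COMMIT;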
[ { "msg_contents": "I have a query that I think should run faster. The machine is P2/400 \nwith enough ram (384MB), but still, maybe the query could be tuned up.\npostgresql.conf is stock with these values changed:\n\nfsync=false\nshared_buffers = 5000\nsort_mem = 8192\nvacuum_mem = 16384\n\nThis is a development machine, the production will be dual P3, 1GHz, 1GB \nRAM, but I fear that the execution will still be slow, as the tables \nwill get bigger.\n\nI've pasted information about the database, and the explain output, but \nthe text is horribly wrapped so there's a clean copy on the web in \nhttp://geri.cc.fer.hr/~ivoras/query.txt\n\nThe intention is: there is a table called cl_log which records events \nfrom various sources, some of which also have data in data_kat_id and \ndata_user_id fields, some of which don't (hence the outer joins). The \nquery is report-style, and tries to collect as much data as possible \nabout the events. Tables cl_source, cl_handler and cl_event_type hold \ninformation about the type of event. They are small (currently 1-3 \nrecords in each, will grow to about 10 records).\n\n\n\nferweb=> explain analyze SELECT cl_log.*, cl_source.name AS source_name, \ncl_source.description AS source_description,\n\tcl_handler.name AS handler_name, cl_handler.description AS \nhandler_description, cl_event_type.name AS event_type_name,\n\tcl_event_type.description as event_type_description, users.jime, \nkategorija.knaziv\n\tFROM cl_log\n\t\tINNER JOIN cl_source ON source_id=cl_source.id\n\t\tINNER JOIN cl_handler ON cl_source.handler_id=cl_handler.id\n\t\tINNER JOIN cl_event_type ON event_type_id=cl_event_type.id\n\t\tLEFT OUTER JOIN kategorija ON data_kat_id=kategorija.id\n\t\tLEFT OUTER JOIN users ON data_user_id=users.id\n\t\tORDER BY time desc LIMIT 30;\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=788.78..788.79 rows=2 width=500) (actual \ntime=23229.78..23230.44 rows=30 loops=1)\n -> Sort (cost=788.78..788.79 rows=3 width=500) (actual \ntime=23229.75..23230.10 rows=31 loops=1)\n Sort Key: cl_log.\"time\"\n -> Nested Loop (cost=1.04..788.76 rows=3 width=500) (actual \ntime=4078.85..20185.89 rows=38999 loops=1)\n -> Nested Loop (cost=1.04..771.27 rows=3 width=485) \n(actual time=4078.71..14673.27 rows=38999 loops=1)\n -> Hash Join (cost=1.04..754.21 rows=3 \nwidth=417) (actual time=4078.54..8974.08 rows=38999 loops=1)\n Hash Cond: (\"outer\".event_type_id = \"inner\".id)\n -> Nested Loop (cost=0.00..752.16 rows=195 \nwidth=288) (actual time=4078.20..6702.17 rows=38999 loops=1)\n Join Filter: (\"inner\".handler_id = \n\"outer\".id)\n -> Seq Scan on cl_handler \n(cost=0.00..1.01 rows=1 width=104) (actual time=0.02..0.04 rows=1 loops=1)\n -> Materialize (cost=748.72..748.72 \nrows=195 width=184) (actual time=4078.08..4751.52 rows=38999 loops=1)\n -> Nested Loop \n(cost=0.00..748.72 rows=195 width=184) (actual time=0.21..3197.16 \nrows=38999 loops=1)\n -> Seq Scan on cl_source \n (cost=0.00..1.01 rows=1 width=108) (actual time=0.05..0.06 rows=1 loops=1)\n -> Index Scan using \ncl_log_source on cl_log (cost=0.00..745.27 rows=195 width=76) (actual \ntime=0.11..1467.08 rows=38999 loops=1)\n Index Cond: \n(cl_log.source_id = \"outer\".id)\n -> Hash (cost=1.03..1.03 rows=3 width=129) \n(actual time=0.12..0.12 rows=0 loops=1)\n -> Seq Scan on cl_event_type \n(cost=0.00..1.03 rows=3 width=129) (actual time=0.04..0.08 
rows=3 loops=1)\n -> Index Scan using kategorija_pkey on kategorija \n (cost=0.00..5.82 rows=1 width=68) (actual time=0.05..0.07 rows=1 \nloops=38999)\n Index Cond: (\"outer\".data_kat_id = \nkategorija.id)\n -> Index Scan using users_pkey on users \n(cost=0.00..5.97 rows=1 width=15) (actual time=0.05..0.07 rows=1 \nloops=38999)\n Index Cond: (\"outer\".data_user_id = users.id)\n Total runtime: 23267.25 msec\n(22 rows)\n\nferweb=> select count(*) from cl_log;\n count\n-------\n 38999\n(1 row)\n\nferweb=> select count(*) from cl_handler;\n count\n-------\n 1\n(1 row)\n\nferweb=> select count(*) from cl_source;\n count\n-------\n 1\n(1 row)\n\nferweb=> select count(*) from cl_event_type;\n count\n-------\n 3\n(1 row)\n\nferweb=> select count(*) from users;\n count\n-------\n 2636\n(1 row)\n\nferweb=> select count(*) from kategorija;\n count\n-------\n 1928\n(1 row)\n\n\n\n-- \nEvery sufficiently advanced magic is indistinguishable from technology\n - Arthur C Anticlarke\n\n", "msg_date": "Tue, 24 Feb 2004 14:17:18 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query" }, { "msg_contents": "On Tue, 24 Feb 2004, Ivan Voras wrote:\n\n> -> Nested Loop (cost=1.04..788.76 rows=3 width=500) (actual \n> time=4078.85..20185.89 rows=38999 loops=1)\n> -> Nested Loop (cost=1.04..771.27 rows=3 width=485) \n> (actual time=4078.71..14673.27 rows=38999 loops=1)\n> -> Nested Loop (cost=0.00..752.16 rows=195 \n> width=288) (actual time=4078.20..6702.17 rows=38999 loops=1)\n> -> Nested Loop \n> (cost=0.00..748.72 rows=195 width=184) (actual time=0.21..3197.16 \n> rows=38999 loops=1)\n\nNote those nested loops up there. They think that you are going to be \noperating on 3,3,195, and 195 rows respectively, when they actually are \noperating on 38999, 38999, 38999, and 38999 in reality.\n\nset enable_nestloop = off\n\nand see if that helps. If so, see if altering the responsible columns \ndefault stats to something higher (100 is a good start) and reanalyze to \nsee if you get a better plan. As long as those estimates are that far \noff, you're gonna get a poorly performing query when the planner is \nallowed to use nested loops.\n\n", "msg_date": "Mon, 1 Mar 2004 09:26:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" } ]
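A short sketch of the diagnosis suggested above. Which columns actually need better statistics is a guess here (the join keys on cl_log look like the misestimated ones in the plan); the point is to first confirm the nested-loop plan is the problem, then fix the estimates rather than leaving enable_nestloop off permanently:

    -- 1. quick test: forbid nested loops for this session only,
    --    then re-run EXPLAIN ANALYZE on the report query and compare
    SET enable_nestloop = off;
    -- ... EXPLAIN ANALYZE <the report query above> ...
    RESET enable_nestloop;

    -- 2. if the plan improved, raise the statistics target on the suspect
    --    join columns, re-analyze, and retry with the default settings
    ALTER TABLE cl_log ALTER COLUMN source_id SET STATISTICS 100;
    ALTER TABLE cl_log ALTER COLUMN event_type_id SET STATISTICS 100;
    ANALYZE cl_log;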
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI've written a summary of my findings on implementing and using \nmaterialized views in PostgreSQL. I've already deployed eagerly updating \nmaterialized views on several views in a production environment for a \ncompany called RedWeek: http://redweek.com/. As a result, some queries \nthat were taking longer than 30 seconds to run now run in a fraction of a \nmillisecond.\n\nYou can view my summary at \nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nComments and suggestions are definitely welcome.\n\n- -- \nJonathan Gardner\[email protected]\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (GNU/Linux)\n\niD8DBQFAO3eZqp6r/MVGlwwRAnpEAKC8+/lFyPBbXetPEfFLwgUvJZLCmgCfYlmR\n0vZmCcbGSNT/m/W8QOIhufk=\n=snCu\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 24 Feb 2004 08:11:03 -0800", "msg_from": "\"Jonathan M. Gardner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Materialized View Summary" }, { "msg_contents": "On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n>\n> I've written a summary of my findings on implementing and using\n> materialized views in PostgreSQL. I've already deployed eagerly updating\n> materialized views on several views in a production environment for a\n> company called RedWeek: http://redweek.com/. As a result, some queries\n> that were taking longer than 30 seconds to run now run in a fraction of a\n> millisecond.\n>\n> You can view my summary at\n> http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nInteresting (and well written) summary. Even if not a \"built in\" feature, I'm \nsure that plenty of people will find this useful. Make sure it gets linked to \nfrom techdocs.\n\nIf you could identify candidate keys on a view, you could conceivably automate \nthe process even more. That's got to be possible in some cases, but I'm not \nsure how difficult it is to do in all cases.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 24 Feb 2004 17:11:20 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Materialized View Summary" }, { "msg_contents": "Richard Huxton wrote:\n> On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n> \n>>I've written a summary of my findings on implementing and using\n>>materialized views in PostgreSQL. I've already deployed eagerly updating\n>>materialized views on several views in a production environment for a\n>>company called RedWeek: http://redweek.com/. As a result, some queries\n>>that were taking longer than 30 seconds to run now run in a fraction of a\n>>millisecond.\n>>\n>>You can view my summary at\n>>http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n> \n> \n> Interesting (and well written) summary. Even if not a \"built in\" feature, I'm \n> sure that plenty of people will find this useful. Make sure it gets linked to \n> from techdocs.\n> \n> If you could identify candidate keys on a view, you could conceivably automate \n> the process even more. That's got to be possible in some cases, but I'm not \n> sure how difficult it is to do in all cases.\n> \n\n\n\nAre there any plans to rewrite that in C and add proper support for SQL \ncommands? (e.g. 
\"CREATE MATERIALIZED VIEW\", \"DROP VIEW\", ...).\n\n\n\tBest regards,\n\n\t\tHans\n\n-- \nCybertec Geschwinde u Schoenig\nSchoengrabern 134, A-2020 Hollabrunn, Austria\nTel: +43/2952/30706 or +43/664/233 90 75\nwww.cybertec.at, www.postgresql.at, kernel.cybertec.at\n\n", "msg_date": "Tue, 24 Feb 2004 18:40:25 +0100", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [SQL] Materialized View Summary" }, { "msg_contents": "Tsearch2 comes with its own tsearch2 trigger function. You pass column names to\nit, and it puts a vanilla tsvector into the column names in TG_ARGV[0] (zero\nbased, yes?). Not only can you pass column names to it, but you can pass simple\nfunctions to it as well. This is magical to me. :)\n\nI'm trying to figure out how to do the same thing, except instead of returning\na vanilla tsvector, I want to return a specially weighted tsvector. I've\ncreated a function that can do this:\n\ncreate or replace function name_vector (text) returns tsvector as '\nselect setweight(to_tsvector(substr($1,1,strpos($1,'',''))),''C'') ||\nto_tsvector(substr($1,strpos($1,'','')+1,length($1)));\n' language 'sql';\n\nso... \n\nPlain:\n\nselect to_tsvector('Einstein, Albert');\n to_tsvector\n-------------------------\n 'albert':2 'einstein':1\n\nWeighted:\n\nselect name_vector('Einstein, Albert');\n name_vector\n--------------------------\n 'albert':2 'einstein':1C\n\n\nNow, to somehow package that into a magical trigger function... \n\nAll the examples for creating trigger functions that I've found use static\ncolumn names, NEW and OLD ... I would like to create a generic trigger\nfunction, as the tsearch2 trigger function does, to return the specially\nweighted tsvector.\n\nIts like a lighter to a caveman. Can anyone lend a hand?\n\nCG\n\n__________________________________\nDo you Yahoo!?\nYahoo! Mail SpamGuard - Read only the mail you want.\nhttp://antispam.yahoo.com/tools\n", "msg_date": "Tue, 24 Feb 2004 10:58:06 -0800 (PST)", "msg_from": "Chris Gamache <[email protected]>", "msg_from_op": false, "msg_subject": "tsearch2 trigger alternative" }, { "msg_contents": "On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:\n> On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n> >\n> > I've written a summary of my findings on implementing and using\n> > materialized views in PostgreSQL. I've already deployed eagerly updating\n> > materialized views on several views in a production environment for a\n> > company called RedWeek: http://redweek.com/. As a result, some queries\n> > that were taking longer than 30 seconds to run now run in a fraction of a\n> > millisecond.\n> >\n> > You can view my summary at\n> > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\n\nhave you done much concurrency testing on your snapshot views? I\nimplemented a similar scheme in one of my databases but found problems\nwhen I had concurrent \"refresh attempts\". I ended up serializing the\ncalls view LOCKing, which was ok for my needs, but I thought potentially\nproblematic in other cases.\n\n> \n> Interesting (and well written) summary. Even if not a \"built in\" feature, I'm \n> sure that plenty of people will find this useful. Make sure it gets linked to \n> from techdocs.\n\nDone. :-)\n\n> \n> If you could identify candidate keys on a view, you could conceivably automate \n> the process even more. 
That's got to be possible in some cases, but I'm not \n> sure how difficult it is to do in all cases.\n>\n\nit seems somewhere between Joe Conways work work arrays and polymorphic\nfunctions in 7.4 this should be feasible. \n\n \nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "24 Feb 2004 16:48:49 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [SQL] Materialized View Summary" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 24 February 2004 01:48 pm, Robert Treat wrote:\n> On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:\n> > On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n> > > I've written a summary of my findings on implementing and using\n> > > materialized views in PostgreSQL. I've already deployed eagerly\n> > > updating materialized views on several views in a production\n> > > environment for a company called RedWeek: http://redweek.com/. As a\n> > > result, some queries that were taking longer than 30 seconds to run\n> > > now run in a fraction of a millisecond.\n> > >\n> > > You can view my summary at\n> > > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.htm\n> > >l\n>\n> have you done much concurrency testing on your snapshot views? I\n> implemented a similar scheme in one of my databases but found problems\n> when I had concurrent \"refresh attempts\". I ended up serializing the\n> calls view LOCKing, which was ok for my needs, but I thought potentially\n> problematic in other cases.\n>\n\nI don't actually use snapshot views in production. I would imagine that if \nyou had two seperate processes trying to update the views simultaneously, \nthat would be a problem. All I can say is \"don't do that\". I think you'd \nwant to lock the table before we go and start messing with it on that \nscale.\n\nWe are running into some deadlock issues and some other problems with eager \nmvs, but they are very rare and hard to reproduce. I think we are going to \nstart locking the row before updating it and see if that solves it. We also \njust discovered the \"debug_deadlock\" feature.\n\nI'll post my findings and summaries of the information I am getting here \nsoon.\n\nI'm interested in whatever you've been working on WRT materialized views. \nWhat cases do you think will be problematic? Do you have ideas on how to \nwork around them? Are there issues that I'm not addressing but should be?\n\n> > Interesting (and well written) summary. Even if not a \"built in\"\n> > feature, I'm sure that plenty of people will find this useful. Make\n> > sure it gets linked to from techdocs.\n>\n> Done. :-)\n>\n\n*blush*\n\n> > If you could identify candidate keys on a view, you could conceivably\n> > automate the process even more. That's got to be possible in some\n> > cases, but I'm not sure how difficult it is to do in all cases.\n>\n> it seems somewhere between Joe Conways work work arrays and polymorphic\n> functions in 7.4 this should be feasible.\n>\n\nI'll have to look at what he is doing in more detail.\n\n- -- \nJonathan M. 
Gardner\nWeb Developer, Amazon.com\[email protected] - (206) 266-2906\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (GNU/Linux)\n\niD8DBQFAO837BFeYcclU5Q0RAhonAKDBY7Svz9/vxmerS+y/h2mLgV1ZZQCdFlnd\n7aMPFvRx4O8qg+sJfWkaBh8=\n=zdhL\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 24 Feb 2004 14:19:39 -0800", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [SQL] Materialized View Summary" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI'm not sure if my original reply made it through. Ignore the last one if \nit did.\n\nOn Tuesday 24 February 2004 1:48 pm, Robert Treat wrote:\n> On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:\n> > On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n> > > I've written a summary of my findings on implementing and using\n> > > materialized views in PostgreSQL. I've already deployed eagerly\n> > > updating materialized views on several views in a production\n> > > environment for a company called RedWeek: http://redweek.com/. As a\n> > > result, some queries that were taking longer than 30 seconds to run\n> > > now run in a fraction of a millisecond.\n> > >\n> > > You can view my summary at\n> > > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.h\n> > >tml\n>\n> have you done much concurrency testing on your snapshot views? I\n> implemented a similar scheme in one of my databases but found problems\n> when I had concurrent \"refresh attempts\". I ended up serializing the\n> calls view LOCKing, which was ok for my needs, but I thought\n> potentially problematic in other cases.\n>\n\nWe are running into some small problems with deadlocks and multiple \ninserts. It's not a problem unless we do a mass update to the data or \nsomething like that. I'm interested in how you solved your problem.\n\nI am playing with an exclusive lock scheme that will lock all the \nmaterialized views with an exclusive lock (see Section 12.3 for a \nreminder on what exactly this means). The locks have to occur in order, \nso I use a recursive function to traverse a dependency tree to the root \nand then lock from there. Right now, we only have one materialized view \ntree, but I can see some schemas having multiple seperate trees with \nmultiple roots. So I put in an ordering to lock the tables in a \npre-defined order.\n\nBut if the two dependency trees are totally seperate, it is possible for \none transaction to lock tree A and then tree B, and for another to lock \ntree B and then tree A, causing deadlock.\n\nUnfortunately, I can't force any update to the underlying tables to force \nthis locking function to be called. So we will probably call this \nmanually before we touch any of those tables.\n\nIn the future, it would be nice to have a hook into the locking mechanism \nso any kind of lock on the underlying tables can trigger this.\n\nAlso, building the dependency trees is completely manual. Until I can get \nsome functions to actually assemble the triggers and such, automatic \nbuilding of the trees will be difficult.\n\n\n- -- \nJonathan Gardner\[email protected]\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (GNU/Linux)\n\niD8DBQFAPFqRqp6r/MVGlwwRAnvPAJ90lEEyaBzAfUoLZU93ZDvkojaAwwCdGjaA\nYBlO57OiZidZuQ5/S0u6wXM=\n=bMYE\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 25 Feb 2004 00:19:29 -0800", "msg_from": "\"Jonathan M. 
Gardner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [SQL] Materialized View Summary" }, { "msg_contents": "Jonathan M. Gardner wrote:\n\n> You can view my summary at\n> http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n>\n> Comments and suggestions are definitely welcome.\n>\nFantastic, I was planning on a bit of materialized view investigations \nmyself\nwhen time permits, I'm pleased to see you've started the ball rolling.\n\nI was thinking about your problem with mutable functions used in a \nmaterialized view.\n\nHow about eliminating the mutable functions as much as possible from the \nunderlying\nview definition, and create another view on top of the materialized view \nthat has the mutable bits!\nGiving you the best of both worlds.\n\nI haven't tried this or thought it through very much - too busy - but \nI'd thought I'd throw\nit in for a bit o' head scratching, and chin stroking :)\n\nCheers\n-- \nMark Gibson <gibsonm |AT| cromwell |DOT| co |DOT| uk>\nWeb Developer & Database Admin\nCromwell Tools Ltd.\nLeicester, England.\n\n", "msg_date": "Wed, 25 Feb 2004 11:35:47 +0000", "msg_from": "Mark Gibson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Materialized View Summary" }, { "msg_contents": "On Wed, 2004-02-25 at 03:19, Jonathan M. Gardner wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> I'm not sure if my original reply made it through. Ignore the last one if \n> it did.\n\nBut I liked the last one :-)\n\n> \n> On Tuesday 24 February 2004 1:48 pm, Robert Treat wrote:\n> > On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:\n> > > On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:\n> > > > I've written a summary of my findings on implementing and using\n> > > > materialized views in PostgreSQL. I've already deployed eagerly\n> > > > updating materialized views on several views in a production\n> > > > environment for a company called RedWeek: http://redweek.com/. As a\n> > > > result, some queries that were taking longer than 30 seconds to run\n> > > > now run in a fraction of a millisecond.\n> > > >\n> > > > You can view my summary at\n> > > > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.h\n> > > >tml\n> >\n> > have you done much concurrency testing on your snapshot views? I\n> > implemented a similar scheme in one of my databases but found problems\n> > when I had concurrent \"refresh attempts\". I ended up serializing the\n> > calls view LOCKing, which was ok for my needs, but I thought\n> > potentially problematic in other cases.\n> >\n> \n> We are running into some small problems with deadlocks and multiple \n> inserts. It's not a problem unless we do a mass update to the data or \n> something like that. I'm interested in how you solved your problem.\n> \n\nWell, I have two different cases actually. In one case I have a master\ntable with what are essentially 4 or 5 matviews based off of that. I\ndon't allow updates to the matviews, only to the master table, and only\nvia stored procedures. This would work better if locking semantics\ninside of pl functions worked properly, but currently we have the\napplication lock the table in exclusive access mode and then call the\nfunction to make the data changes which then fires off a function to\nupdate the matviews. Since it's all within a transaction, readers of\nthe matviews are oblivious to the change. 
IMO this whole method is a\nwizardry in database hack jobs that I would love to replace.\n\nThe second case, and this one being much simpler, started out as a view\nthat does aggregation across several other views and tables, which is\npretty resource intensive but only returns 4 rows. I refresh the matview\nvia a cron job which basically does a SELECT * FOR UPDATE on the\nmatview, deletes the entire contents, then does an INSERT INTO matview\nSELECT * FROM view. Again since it's in a transaction, readers of the\nmatview are happy (and apps are only granted select on the matview). \nConcurrency is kept because the cron job must wait to get a LOCK on the\ntable before it can proceed with the delete/update. I have a feeling\nthat this method could fall over given a high enough number of\nconcurrent updaters, but works pretty well for our needs. \n\n> I am playing with an exclusive lock scheme that will lock all the \n> materialized views with an exclusive lock (see Section 12.3 for a \n> reminder on what exactly this means). The locks have to occur in order, \n> so I use a recursive function to traverse a dependency tree to the root \n> and then lock from there. Right now, we only have one materialized view \n> tree, but I can see some schemas having multiple seperate trees with \n> multiple roots. So I put in an ordering to lock the tables in a \n> pre-defined order.\n> \n> But if the two dependency trees are totally seperate, it is possible for \n> one transaction to lock tree A and then tree B, and for another to lock \n> tree B and then tree A, causing deadlock.\n> \n> Unfortunately, I can't force any update to the underlying tables to force \n> this locking function to be called. So we will probably call this \n> manually before we touch any of those tables.\n\nYeah, I ran into similar problems as this, but ISTM you could do a\nbefore update trigger on the matview to do the locking (though I'd guess\nthis would end in trouble due to plpgsql lock semantics, so maybe i\nshouldn't send you down a troubled road...)\n\n> \n> In the future, it would be nice to have a hook into the locking mechanism \n> so any kind of lock on the underlying tables can trigger this.\n> \n> Also, building the dependency trees is completely manual. Until I can get \n> some functions to actually assemble the triggers and such, automatic \n> building of the trees will be difficult.\n> \n\nI just noticed that your summary doesn't make use of postgresql RULES in\nany way, how much have you traveled down that path? We had cooked up a\nscheme for our second case where we would have a table that held an\nentry for the matview and then a timestamp of the last update/insert\ninto any of the base tables the matview depended on. when then would\ncreate rules on all the base tables to do an update to the refresh table\nany time they were updated/inserted/deleted. We would then put a\ncorresponding rule on the matview so that on each select from the\nmatview, it would check to see if any of it's base tables had changed\nand if so fire off a refresh of itself. We ended up abandoning this\nidea as the complexity seemed to high when the simple scheme above\nworked equally well for our needs. 
\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "25 Feb 2004 10:46:16 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [SQL] Materialized View Summary" }, { "msg_contents": "\nCan we get this URL added to the techdocs site please? Thanks.\n\n---------------------------------------------------------------------------\n\nJonathan M. Gardner wrote:\n[ PGP not available, raw data follows ]\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> I've written a summary of my findings on implementing and using \n> materialized views in PostgreSQL. I've already deployed eagerly updating \n> materialized views on several views in a production environment for a \n> company called RedWeek: http://redweek.com/. As a result, some queries \n> that were taking longer than 30 seconds to run now run in a fraction of a \n> millisecond.\n> \n> You can view my summary at \n> http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n> \n> Comments and suggestions are definitely welcome.\n> \n> - -- \n> Jonathan Gardner\n> [email protected]\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.2.3 (GNU/Linux)\n> \n> iD8DBQFAO3eZqp6r/MVGlwwRAnpEAKC8+/lFyPBbXetPEfFLwgUvJZLCmgCfYlmR\n> 0vZmCcbGSNT/m/W8QOIhufk=\n> =snCu\n> -----END PGP SIGNATURE-----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n[ End of raw data]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 2 Mar 2004 11:40:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Materialized View Summary" }, { "msg_contents": "Already there, under \"Technical Guides and Documents\", I gave it a new\nsection \"Materialized Views\"\n\nRobert Treat\n\nOn Tue, 2004-03-02 at 11:40, Bruce Momjian wrote:\n> \n> Can we get this URL added to the techdocs site please? Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> Jonathan M. Gardner wrote:\n> [ PGP not available, raw data follows ]\n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > Hash: SHA1\n> > \n> > I've written a summary of my findings on implementing and using \n> > materialized views in PostgreSQL. I've already deployed eagerly updating \n> > materialized views on several views in a production environment for a \n> > company called RedWeek: http://redweek.com/. 
As a result, some queries \n> > that were taking longer than 30 seconds to run now run in a fraction of a \n> > millisecond.\n> > \n> > You can view my summary at \n> > http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n> > \n> > Comments and suggestions are definitely welcome.\n> > \n> > - -- \n> > Jonathan Gardner\n> > [email protected]\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: GnuPG v1.2.3 (GNU/Linux)\n> > \n> > iD8DBQFAO3eZqp6r/MVGlwwRAnpEAKC8+/lFyPBbXetPEfFLwgUvJZLCmgCfYlmR\n> > 0vZmCcbGSNT/m/W8QOIhufk=\n> > =snCu\n> > -----END PGP SIGNATURE-----\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > \n> [ End of raw data]\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "02 Mar 2004 11:54:12 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Materialized View Summary" } ]
[ { "msg_contents": "", "msg_date": "Tue, 24 Feb 2004 22:10:16 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up a function?CREATE TABLE readings ( \"when\" \n\tTIMESTAMP DEFAULT timeofday()::timestamp NOT\n\tNULL PRIMARY KEY, \"barometer\" FLOAT DEFAULT NULL," }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> I've implemented a couple of functions ala date_trunc (listed at the bottom)\n> [ and they're too slow ]\n\nWell, it's hardly surprising that a function that invokes date_trunc and\nhalf a dozen other comparably-expensive operations should be half a\ndozen times as expensive as date_trunc. Not to mention that plpgsql is\ninherently far slower than C.\n\nAssuming that you don't want to descend to writing C, I'd suggest doing\narithmetic on the Unix-epoch version of the timestamp. Perhaps\nsomething along the lines of\n\nselect 'epoch'::timestamptz +\n trunc(extract(epoch from now())/(3600*24*7))*(3600*24*7) * '1sec'::interval;\n\nThis doesn't have the same roundoff behavior as what you posted, but I\nthink it could be adjusted to do so with a couple more additions and\nsubtractions, unless there's some magic I'm not seeing about the year\nboundary behavior. Certainly the five-minute-trunc problem could be\ndone this way.\n\nIf you do feel like descending to C, I don't see any fundamental reason\nwhy we accept date_part('week',...) but not date_trunc('week',...).\nFeel free to submit a patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Feb 2004 01:04:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up a function?CREATE TABLE readings ( \"when\" TIMESTAMP\n\tDEFAULT timeofday()::timestamp NOT NULL PRIMARY KEY,\n\t\"barometer\" FLOAT DEFAULT NULL," } ]
[ { "msg_contents": "can anyone tell me what the best way to compile postgresql 7.4.1 on \nSolaris 9 (UltraSparcIII) is? I have latest gmake and gcc installed. I \nwas going to use CFLAGS=\"-O2 -fast -mcpu=ultrasparc\" based on snippets \nI've read about the place. Would using -O3 be an improvement?\n\nthanks\n", "msg_date": "Thu, 26 Feb 2004 12:46:23 +0000", "msg_from": "teknokrat <[email protected]>", "msg_from_op": true, "msg_subject": "compiling 7.4.1 on Solaris 9" }, { "msg_contents": "On Thu, Feb 26, 2004 at 12:46:23PM +0000, teknokrat wrote:\n> I've read about the place. Would using -O3 be an improvement?\n\nIn my experience, it's not only not an improvement, it sometimes\nbreaks the code. That's on 8, though, not 9.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Mon, 1 Mar 2004 09:34:57 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling 7.4.1 on Solaris 9" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Feb 26, 2004 at 12:46:23PM +0000, teknokrat wrote:\n> \n>>I've read about the place. Would using -O3 be an improvement?\n> \n> \n> In my experience, it's not only not an improvement, it sometimes\n> breaks the code. That's on 8, though, not 9.\n> \n> A\n> \n\nthanks, i remember a thread about problems with flags passed to gcc on \nsolaris. I was wondering if there had been any resolution and if the \ndefaults for 7.4 are considered Ok.\n\nthanks\n", "msg_date": "Tue, 02 Mar 2004 10:54:23 +0000", "msg_from": "teknokrat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: compiling 7.4.1 on Solaris 9" }, { "msg_contents": "On Tue, Mar 02, 2004 at 10:54:23AM +0000, teknokrat wrote:\n> thanks, i remember a thread about problems with flags passed to gcc on \n> solaris. I was wondering if there had been any resolution and if the \n> defaults for 7.4 are considered Ok.\n\nAs near as I can tell, -O2 is used by default on Solaris now. Again,\nthis is on 8, not 9. \n\nAt work, we have been doing a number of tests on 7.4. The\nperformance is such an improvement over 7.2 that the QA folks thought\nthere must be something wrong. So I suppose the defaults are ok.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n", "msg_date": "Wed, 10 Mar 2004 11:07:28 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling 7.4.1 on Solaris 9" }, { "msg_contents": "\nOn Mar 2, 2004, at 5:54 AM, teknokrat wrote:\n\n> Andrew Sullivan wrote:\n>> On Thu, Feb 26, 2004 at 12:46:23PM +0000, teknokrat wrote:\n>>> I've read about the place. Would using -O3 be an improvement?\n>> In my experience, it's not only not an improvement, it sometimes\n>> breaks the code. That's on 8, though, not 9.\n>> A\n>\n> thanks, i remember a thread about problems with flags passed to gcc on \n> solaris. I was wondering if there had been any resolution and if the \n> defaults for 7.4 are considered Ok.\n>\nYes. The compile flags on solaris were fixed on 7.4. Previously it \nwasn't using any optimization flags.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 10 Mar 2004 18:21:43 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling 7.4.1 on Solaris 9" }, { "msg_contents": "On Wed, Mar 10, 2004 at 11:07:28AM -0500, Andrew Sullivan wrote:\n\n> At work, we have been doing a number of tests on 7.4. 
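For completeness, a sketch of the same epoch-arithmetic trick applied to the five-minute case mentioned above (300 seconds per bucket; the week case swaps in 3600*24*7 plus the boundary corrections discussed):

    -- truncate "now" down to the most recent five-minute boundary
    SELECT 'epoch'::timestamptz
           + trunc(extract(epoch FROM now()) / 300) * 300 * '1 second'::interval
           AS five_minute_trunc;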
The\n> performance is such an improvement over 7.2 that the QA folks thought\n> there must be something wrong. So I suppose the defaults are ok.\n\nI know, I know, replying to myself. I just wanted to note that we\n_were_ using optimisation with 7.2. 7.4 is still a lot faster.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n", "msg_date": "Thu, 11 Mar 2004 13:29:08 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling 7.4.1 on Solaris 9" } ]
[ { "msg_contents": "Hi everybody,\n\ni'd like to know if it exists a system of cache for the results of\nqueries.\n\nWhat i'd like to do :\n\nselect whatever_things from (selection_in_cache) where special_conditions;\n\nThe interesting thing would be to have a precalculated\nselection_in_cache, especially when selection_in_cache is a very long\nlist of joins...\n\nFor example, a real case:\nSELECT \n\tp.id_prospect,\n\tp.id_personne1,\n\tINITCAP(p1.nom) AS nom,\n\tINITCAP(p1.prenom) AS prenom,\n\ta1.no_tel,\n\ta1.no_portable,\n\tp.dernier_contact,\n\tcn.id_contact,\n\tcn.id_vendeur,\n\tcn.id_operation,\n\tCASE WHEN p.dernier_contact IS NOT NULL THEN cn.date_contact::ABSTIME::INT4 ELSE p.cree_le::ABSTIME::INT4 END AS date_contact,\n\tcn.id_moyen_de_contact,\n\tcn.id_type_evenement,\n\tcn.nouveau_rdv::ABSTIME::INT4 AS nouveau_rdv,\n\tcn.date_echeance::ABSTIME::INT4 AS date_echeance,\n\tcn.date_reponse::ABSTIME::INT4 AS date_reponse,\n\t(CASE WHEN lower(cn.type_reponse) = '.qs( 'abandon' ).' AND cn.id_vendeur = '.qs( $login ).' THEN '.qs( 'O').' ELSE p.abandon END) AS abandon\nFROM\n\tprospect p\n\tJOIN personne p1 ON (p.id_personne1 = p1.id_personne)\n\tJOIN adresse a1 ON (a1.id_adresse = p1.id_adresse_principale)\n\tLEFT JOIN contact cn ON (p.dernier_contact = cn.id_contact)\n\t'.( $type_orig ? 'LEFT JOIN orig_pub op ON ( p.id_orig_pub = op.id_orig_pub )' : '' ).'\nWHERE\n\t( '.(\n\t\t\t$abandon\n\t\t\t\t? ''\n\t\t\t\t: '(\n\t\t\t\t\t\t(cn.type_reponse IS NULL OR lower(cn.type_reponse) != ' .qs( 'abandon' ) .' OR cn.id_vendeur != ' .qs( $login ) .')\n\t\t\t\t\t\tAND (p.abandon != ' .qs( 'O' ) .' OR p.abandon IS NULL)) AND '\n\t\t).' TRUE '.$condition.')\nORDER BY\n\t'.$primary_sort.',\n\t'.secondary_sort.'\nLIMIT 30\nOFFSET '.$offset*$page_length\n\nThere is some perl inside to generate the query ; for non-perl-people,\n'.' is used for concatenation and '( a ? b : c)' means 'if a then b else c'.\n\n$condition is a precalculated set of conditions.\n\nHere i have a very heavy query with 4 very heavy JOIN.\nThat's why i'd like to have a precalculated query.\nA view wouldn't help, because it would calculate the whole query each\ntime.\n\nAny idea ?\nThanks in advance for any input.\n\n-- \[email protected] 01.46.47.21.33 fax: 01.45.20.17.98\n", "msg_date": "Thu, 26 Feb 2004 14:30:38 +0100", "msg_from": "David Pradier <[email protected]>", "msg_from_op": true, "msg_subject": "A cache for the results of queries ?" }, { "msg_contents": "On Thursday 26 February 2004 13:30, David Pradier wrote:\n> Hi everybody,\n>\n> i'd like to know if it exists a system of cache for the results of\n> queries.\n>\n> What i'd like to do :\n>\n> select whatever_things from (selection_in_cache) where special_conditions;\n>\n> The interesting thing would be to have a precalculated\n> selection_in_cache, especially when selection_in_cache is a very long\n> list of joins...\n\nYou might want to search the archives for the -sql list for a message \n\"Materialized View Summary\" - some time this week. That's almost exactly what \nyou want.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 26 Feb 2004 14:26:00 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A cache for the results of queries ?" }, { "msg_contents": "On Thu, 26 Feb 2004, David Pradier wrote:\n\n> Hi everybody,\n> \n> i'd like to know if it exists a system of cache for the results of\n> queries.\n\nI believe there are some external libs that provide this at the \napplication level. 
PHP's adodb is purported to do so.\n\n", "msg_date": "Thu, 26 Feb 2004 10:15:54 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A cache for the results of queries ?" }, { "msg_contents": "David Pradier wrote:\n> i'd like to know if it exists a system of cache for the results of\n> queries.\n\nIf you are willing to do this at an application level, you could \ncalculate a MD5 for every query you plan to run and then SELECT INTO a \ntemporary table that's based on the MD5 sum (e.g. TMP_CACHE_45123). Next \ntime somebody runs a query, check to see if that table exists already. \nThen you just have to figure out some way to know when results should be \nexpired.\n", "msg_date": "Thu, 26 Feb 2004 16:40:25 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A cache for the results of queries ?" } ]
[ { "msg_contents": "Hi,\n\nWe have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\nrepeated advice on the mailing lists we configured effective_cache_size\n= 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n\nWhich results in using 200Megs for disk caching. \n\nIs there a reason not to increase the hibufspace beyond the 200 megs and\nprovide a bigger cache to postgres? I looked both on the postgres and\nfreebsd mailing lists and couldn't find a good answer to this.\n\nIf yes, any suggestions on what would be a good size on a 2 Gig machine?\n\nRegards,\n\nDror\n\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 26 Feb 2004 10:03:45 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "FreeBSD config" }, { "msg_contents": "On Thu, 26 Feb 2004, Dror Matalon wrote:\n\n> Hi,\n> \n> We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\n> repeated advice on the mailing lists we configured effective_cache_size\n> = 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n> \n> Which results in using 200Megs for disk caching. \n> \n> Is there a reason not to increase the hibufspace beyond the 200 megs and\n> provide a bigger cache to postgres? I looked both on the postgres and\n> freebsd mailing lists and couldn't find a good answer to this.\n\nActually, I think you're confusing effective_cache_size with \nshared_buffers.\n\neffective_cache_size changes no cache settings for postgresql, it simply \nacts as a hint to the planner on about how much of the dataset your OS / \nKernel / Disk cache can hold.\n\nMaking it bigger only tells the query planny it's more likely the data \nit's looking for will be in cache.\n\nshared_buffers, OTOH, sets the amount of cache that postgresql uses. It's \ngenerall considered that 256 Megs or 1/4 of memory, whichever is LESS, is \na good setting for production database servers.\n\n", "msg_date": "Thu, 26 Feb 2004 11:55:31 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Thu, Feb 26, 2004 at 11:55:31AM -0700, scott.marlowe wrote:\n> On Thu, 26 Feb 2004, Dror Matalon wrote:\n> \n> > Hi,\n> > \n> > We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\n> > repeated advice on the mailing lists we configured effective_cache_size\n> > = 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n> > \n> > Which results in using 200Megs for disk caching. \n> > \n> > Is there a reason not to increase the hibufspace beyond the 200 megs and\n> > provide a bigger cache to postgres? I looked both on the postgres and\n> > freebsd mailing lists and couldn't find a good answer to this.\n> \n> Actually, I think you're confusing effective_cache_size with \n> shared_buffers.\n\nNo, I'm not.\n\n> \n> effective_cache_size changes no cache settings for postgresql, it simply \n> acts as a hint to the planner on about how much of the dataset your OS / \n> Kernel / Disk cache can hold.\n\nI understand that. The question is why have the OS, in this case FreeBsd\nuse only 200 Megs for disk cache and not more. Why not double the\nvfs.hibufspace to 418119680 and double the effective_cache_size to 51040.\n\n> \n> Making it bigger only tells the query planny it's more likely the data \n> it's looking for will be in cache.\n> \n> shared_buffers, OTOH, sets the amount of cache that postgresql uses. 
It's \n> generall considered that 256 Megs or 1/4 of memory, whichever is LESS, is \n> a good setting for production database servers.\n> \n\nActually last I looked, I thought that the recommended max shared\nbuffers was 10,000, 80MB, even on machines with large amounts of memory.\n\nRegards,\n\nDror\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 26 Feb 2004 11:16:16 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Thu, 26 Feb 2004, Dror Matalon wrote:\n\n> On Thu, Feb 26, 2004 at 11:55:31AM -0700, scott.marlowe wrote:\n> > On Thu, 26 Feb 2004, Dror Matalon wrote:\n> > \n> > > Hi,\n> > > \n> > > We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\n> > > repeated advice on the mailing lists we configured effective_cache_size\n> > > = 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n> > > \n> > > Which results in using 200Megs for disk caching. \n> > > \n> > > Is there a reason not to increase the hibufspace beyond the 200 megs and\n> > > provide a bigger cache to postgres? I looked both on the postgres and\n> > > freebsd mailing lists and couldn't find a good answer to this.\n> > \n> > Actually, I think you're confusing effective_cache_size with \n> > shared_buffers.\n> \n> No, I'm not.\n\nOK, sorry, I wasn't sure which you meant.\n\n> > effective_cache_size changes no cache settings for postgresql, it simply \n> > acts as a hint to the planner on about how much of the dataset your OS / \n> > Kernel / Disk cache can hold.\n> \n> I understand that. The question is why have the OS, in this case FreeBsd\n> use only 200 Megs for disk cache and not more. Why not double the\n> vfs.hibufspace to 418119680 and double the effective_cache_size to 51040.\n\nDoesn't the kernel just use the spare memory to buffer anyway?\n\nI'd say if you got 2 megs memory and nothing else on the box, give a big \nchunk (1 gig or so) to the kernel to manage. Unless large kernel caches \ncause some issues in FreeBSD.\n\n> > Making it bigger only tells the query planny it's more likely the data \n> > it's looking for will be in cache.\n> > \n> > shared_buffers, OTOH, sets the amount of cache that postgresql uses. It's \n> > generall considered that 256 Megs or 1/4 of memory, whichever is LESS, is \n> > a good setting for production database servers.\n> > \n> \n> Actually last I looked, I thought that the recommended max shared\n> buffers was 10,000, 80MB, even on machines with large amounts of memory.\n\nIt really depends on what you're doing. For loads involving very large \ndata sets, up to 256 Megs has resulted in improvements, but anything after \nthat has only had advantages in very limited types of applications.\n\n", "msg_date": "Thu, 26 Feb 2004 12:36:44 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "> We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\n> repeated advice on the mailing lists we configured effective_cache_size\n> = 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n>\n> Which results in using 200Megs for disk caching.\n\neffective_cache_size does nothing of the sort. CHeck your\nshared_buffers value...\n\n> Is there a reason not to increase the hibufspace beyond the 200 megs and\n> provide a bigger cache to postgres? 
I looked both on the postgres and\n> freebsd mailing lists and couldn't find a good answer to this.\n\nWell, maybe butnot necessarily. It's better to leave the OS to look after\nmost of your RAM.\n\nChris\n\n", "msg_date": "Fri, 27 Feb 2004 05:47:47 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Fri, Feb 27, 2004 at 05:47:47AM +0800, Christopher Kings-Lynne wrote:\n> > We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per\n> > repeated advice on the mailing lists we configured effective_cache_size\n> > = 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192\n> >\n> > Which results in using 200Megs for disk caching.\n> \n> effective_cache_size does nothing of the sort. CHeck your\n> shared_buffers value...\n\nSigh.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\neffective_cache_size\n\tSets the optimizer's assumption about the effective size of the disk\n\tcache (that is, the portion of the kernel's disk cache that will be\n\tused for PostgreSQL data files). This is measured in disk pages, which\n\tare normally 8 kB each.\n\n\nhttp://archives.postgresql.org/pgsql-performance/2003-07/msg00159.php\ntalks about how to programmatically determine the right setting for\neffective_cache_size:\n\tcase `uname` in \"FreeBSD\")\n\techo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n\t\t\t\t\t;;\n\t\t\t\t\t*)\n\techo \"Unable to automatically determine the effective cache size\" >> /dev/stderr\n\t\t\t\t;;\n\t\t\t\tesac\n\nwhich brings me back to my question why not make Freebsd use more of its\nmemory for disk caching and then tell postgres about it. \n\n\n\n> \n> > Is there a reason not to increase the hibufspace beyond the 200 megs and\n> > provide a bigger cache to postgres? I looked both on the postgres and\n> > freebsd mailing lists and couldn't find a good answer to this.\n> \n> Well, maybe butnot necessarily. It's better to leave the OS to look after\n> most of your RAM.\n> \n> Chris\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 26 Feb 2004 13:58:38 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On 26 Feb 2004 at 13:58, Dror Matalon wrote:\n\n> \n> which brings me back to my question why not make Freebsd use more of its\n> memory for disk caching and then tell postgres about it. \n> \n\nI think there is some confusion about maxbufsize and hibufspace. I looking at a \ncomment in the FreeBSB source 4.9 that explains this. I think you will want to \nincrease effective_cache to match maxbufsize not hibufspace but I could be wrong.\n\n$FreeBSD: src/sys/kern/vfs_bio.c,v 1.242.2.21 line 363\n\n\n\n\n\n\n\nOn 26 Feb 2004 at 13:58, Dror Matalon wrote:\n\n> \n> which brings me back to my question why not make Freebsd use more of its\n> memory for disk caching and then tell postgres about it. \n> \n\n\nI think there is some confusion about maxbufsize and hibufspace.  I looking at a \ncomment in the FreeBSB  source 4.9 that explains this.  
I think you will want to \nincrease effective_cache to match maxbufsize not hibufspace but I could be wrong.\n\n\n$FreeBSD: src/sys/kern/vfs_bio.c,v 1.242.2.21 line 363", "msg_date": "Thu, 26 Feb 2004 16:36:01 -0600", "msg_from": "\"Kevin Barnard\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "\nThanks for the pointer. So \n\nmaxbufspace = nbuf * BKVASIZE;\n\nWhich is confirmed in \nhttp://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2003-09/0045.html\n\nand it looks like there's a patch by Sean Chittenden at\nhttp://people.freebsd.org/~seanc/patches/patch-HEAD-kern.nbuf\n\nthat does what I was asking. Seems a little on the bleeding edge. Has\nanyone tried this?\n\n\nOn Thu, Feb 26, 2004 at 04:36:01PM -0600, Kevin Barnard wrote:\n> On 26 Feb 2004 at 13:58, Dror Matalon wrote:\n> \n> > \n> > which brings me back to my question why not make Freebsd use more of its\n> > memory for disk caching and then tell postgres about it. \n> > \n> \n> I think there is some confusion about maxbufsize and hibufspace. I looking at a \n> comment in the FreeBSB source 4.9 that explains this. I think you will want to \n> increase effective_cache to match maxbufsize not hibufspace but I could be wrong.\n> \n> $FreeBSD: src/sys/kern/vfs_bio.c,v 1.242.2.21 line 363\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 26 Feb 2004 15:02:40 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On 02/26/2004-11:16AM, Dror Matalon wrote:\n> > \n> > effective_cache_size changes no cache settings for postgresql, it simply \n> > acts as a hint to the planner on about how much of the dataset your OS / \n> > Kernel / Disk cache can hold.\n> \n> I understand that. The question is why have the OS, in this case FreeBsd\n> use only 200 Megs for disk cache and not more. Why not double the\n> vfs.hibufspace to 418119680 and double the effective_cache_size to 51040.\n> \n\nFreeBSD uses ALL ram that isn't being used for something else as\nits disk cache. The \"effective_cache_size\" in the PostGreSQL config\nhas no effect on how the OS chooses to use memory, it is just hint\nto the PostGreSQL planner so it can guess the the likelyhood of\nwhat it is looking for being in the cache.\n\n", "msg_date": "Thu, 26 Feb 2004 18:06:06 -0500", "msg_from": "Christopher Weimann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On 02/26/2004-01:58PM, Dror Matalon wrote:\n> \n> Sigh.\n> \n\nSigh, right back at you.\n\n> which brings me back to my question why not make Freebsd use more of its\n> memory for disk caching and then tell postgres about it. \n> \n\nBecause you can't. It already uses ALL RAM that isn't in use for\nsomething else.\n\n", "msg_date": "Thu, 26 Feb 2004 18:10:13 -0500", "msg_from": "Christopher Weimann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Thu, Feb 26, 2004 at 06:06:06PM -0500, Christopher Weimann wrote:\n> On 02/26/2004-11:16AM, Dror Matalon wrote:\n> > > \n> > > effective_cache_size changes no cache settings for postgresql, it simply \n> > > acts as a hint to the planner on about how much of the dataset your OS / \n> > > Kernel / Disk cache can hold.\n> > \n> > I understand that. 
The question is why have the OS, in this case FreeBsd\n> > use only 200 Megs for disk cache and not more. Why not double the\n> > vfs.hibufspace to 418119680 and double the effective_cache_size to 51040.\n> > \n> \n> FreeBSD uses ALL ram that isn't being used for something else as\n> its disk cache. The \"effective_cache_size\" in the PostGreSQL config\n> has no effect on how the OS chooses to use memory, it is just hint\n> to the PostGreSQL planner so it can guess the the likelyhood of\n> what it is looking for being in the cache.\n\nLet me try and say it again. I know that setting effective_cache_size\ndoesn't affect the OS' cache. I know it just gives Postgres the *idea*\nof how much cache the OS is using. I know that. I also know that a\ncorrect hint helps performance.\n\nI've read Matt Dillon's discussion about the freebsd VM at\nhttp://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\nsaying that Freebsd uses all the free RAM for disk cache. Would you care\nto provide a URL pointing to that?\n\nAssuming you are correct, why has the ongoing recommendation been to use\nhibufspace/8192 as the effective_cache_size? Seems like it would be\nquite a bit more on machines with lots of RAM.\n\nRegards,\n\nDror\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Thu, 26 Feb 2004 15:42:46 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "Dror Matalon wrote:\n\n> Let me try and say it again. I know that setting effective_cache_size\n> doesn't affect the OS' cache. I know it just gives Postgres the *idea*\n> of how much cache the OS is using. I know that. I also know that a\n> correct hint helps performance.\n> \n> I've read Matt Dillon's discussion about the freebsd VM at\n> http://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\n> saying that Freebsd uses all the free RAM for disk cache. Would you care\n> to provide a URL pointing to that?\n\nI don't believe freeBSD yses everything available unlike linux. It is actually a \ngood thing. If you have 1GB RAM and kernel buffers set at 600MB, you are \nguaranteed to have some mmory in crunch situations.\n\nAs far you original questions, I think you can increase the kernel buffer sizes \nfor VFS safely. However remembet that more to dedicate to kernel buffers, less \nspace you have in case of crunch for whatever reasons.\n\nFreeBSD gives you a control which linux does not. Use it to best of your advantage..\n\n Shridhar\n", "msg_date": "Fri, 27 Feb 2004 12:46:08 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Fri, 27 Feb 2004, Shridhar Daithankar wrote:\n\n> Dror Matalon wrote:\n> \n> > Let me try and say it again. I know that setting effective_cache_size\n> > doesn't affect the OS' cache. I know it just gives Postgres the *idea*\n> > of how much cache the OS is using. I know that. I also know that a\n> > correct hint helps performance.\n> > \n> > I've read Matt Dillon's discussion about the freebsd VM at\n> > http://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\n> > saying that Freebsd uses all the free RAM for disk cache. Would you care\n> > to provide a URL pointing to that?\n> \n> I don't believe freeBSD yses everything available unlike linux. It is actually a \n> good thing. 
If you have 1GB RAM and kernel buffers set at 600MB, you are \n> guaranteed to have some mmory in crunch situations.\n\nLinux doesn't work with a pre-assigned size for kernel cache. \nIt just grabs whatever's free, minus a few megs for easily launching new \nprograms or allocating more memory for programs, and uses that for the \ncache. then, when a request comes in for more memory than is free, it \ndumps some of the least used buffers and gives them back. \n\nIt would seem to work very well underneath a mixed load server like an \nLAPP box.\n\n", "msg_date": "Fri, 27 Feb 2004 08:33:40 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "I guess the thing to do is to move this topic over to a freebsd list\nwhere we can get more definitive answers on how disk caching is handled.\nI asked here since I know that FreeBsd is often recommended,\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html#\nas a good platform for postgres, and with Modern machines often having\nGigabytes of memory the issue of, possibly, having a disk cache of 200MB\nwould be one often asked.\n\nOn Fri, Feb 27, 2004 at 12:46:08PM +0530, Shridhar Daithankar wrote:\n> Dror Matalon wrote:\n> \n> >Let me try and say it again. I know that setting effective_cache_size\n> >doesn't affect the OS' cache. I know it just gives Postgres the *idea*\n> >of how much cache the OS is using. I know that. I also know that a\n> >correct hint helps performance.\n> >\n> >I've read Matt Dillon's discussion about the freebsd VM at\n> >http://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\n> >saying that Freebsd uses all the free RAM for disk cache. Would you care\n> >to provide a URL pointing to that?\n> \n> I don't believe freeBSD yses everything available unlike linux. It is \n> actually a good thing. If you have 1GB RAM and kernel buffers set at 600MB, \n> you are guaranteed to have some mmory in crunch situations.\n> \n> As far you original questions, I think you can increase the kernel buffer \n> sizes for VFS safely. However remembet that more to dedicate to kernel \n> buffers, less space you have in case of crunch for whatever reasons.\n> \n> FreeBSD gives you a control which linux does not. Use it to best of your \n> advantage..\n> \n> Shridhar\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Fri, 27 Feb 2004 10:27:26 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": ">>>>> \"DM\" == Dror Matalon <[email protected]> writes:\n\nDM> which brings me back to my question why not make Freebsd use more of its\nDM> memory for disk caching and then tell postgres about it. \n\nBecause this is a painfully hard thing to do ;-(\n\nIt involves hacking a system header file and recompiling the kernel.\nIt is not a simple tunable. It has side effects regarding some other\nsizing parameter as well, but I don't recall the details. Details\nhave been posted to this list at least once before by Sean Chittenden.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. 
Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Fri, 27 Feb 2004 13:33:33 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Friday 27 February 2004 21:03, scott.marlowe wrote:\n> Linux doesn't work with a pre-assigned size for kernel cache.\n> It just grabs whatever's free, minus a few megs for easily launching new\n> programs or allocating more memory for programs, and uses that for the\n> cache. then, when a request comes in for more memory than is free, it\n> dumps some of the least used buffers and gives them back.\n>\n> It would seem to work very well underneath a mixed load server like an\n> LAPP box.\n\nI was just pointing out that freeBSD is different than linux nd for one thing \nit is good because if there is a bug in freeSD VM, it won't run rampant \nbecause you can explicitly limit kernel cache and other parameter.\n\nOTOH, freeBSD VM anyways works. And running unlimited kernel cache allowed \nlinux to iron out some of corner cases bugs.\n\nNot a concern anymore I believe but having choice is always great..\n \nShridhar\n", "msg_date": "Sat, 28 Feb 2004 14:08:41 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "\n\nShridhar Daithankar wrote:\n\n> Dror Matalon wrote:\n>\n>> I've read Matt Dillon's discussion about the freebsd VM at\n>> http://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\n>> saying that Freebsd uses all the free RAM for disk cache. Would you care\n>> to provide a URL pointing to that?\n>\n>\n>\nQuoting from http://www.daemonnews.org/200001/freebsd_vm.html :\n\n<snip>*\nWhen To Free a Page*\n\nSince the VM system uses all available memory for disk caching, there \nare usually very few truly-free pages...\n</snip>\n\nGot to say - a very interesting discussion you have all being having, I \nam now quite confused about what those vfs.*buf* variables actually do...\n\nPlease feed back any clarfications from the FreeBSD experts to this list!\n\nregards\n\nMark\n\n", "msg_date": "Mon, 01 Mar 2004 20:30:58 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "On Mon, Mar 01, 2004 at 08:30:58PM +1300, Mark Kirkwood wrote:\n> \n> \n> Shridhar Daithankar wrote:\n> \n> >Dror Matalon wrote:\n> >\n> >>I've read Matt Dillon's discussion about the freebsd VM at\n> >>http://www.daemonnews.org/200001/freebsd_vm.html and I didn't see him\n> >>saying that Freebsd uses all the free RAM for disk cache. Would you care\n> >>to provide a URL pointing to that?\n> >\n> >\n> >\n\nI noticed this passage too, but ...\n> Quoting from http://www.daemonnews.org/200001/freebsd_vm.html :\n> \n> <snip>*\n> When To Free a Page*\n> \n> Since the VM system uses all available memory for disk caching, there \n ^^^^^^^^^^^^^\n\nThe VM system, as you can see from the article, is focused on paging and\ncaching the programs and program data. 
Is the cache for disk reads and\nwrites thrown into the mix as well?\n\n> are usually very few truly-free pages...\n> </snip>\n\n> \n> Got to say - a very interesting discussion you have all being having, I \n> am now quite confused about what those vfs.*buf* variables actually do...\n\nSame here.\n\n> \n> Please feed back any clarfications from the FreeBSD experts to this list!\n> \n> regards\n> \n> Mark\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n", "msg_date": "Sun, 29 Feb 2004 23:39:48 -0800", "msg_from": "Dror Matalon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": "\n\n>I noticed this passage too, but ...\n> \n>\n>>Quoting from http://www.daemonnews.org/200001/freebsd_vm.html :\n>>\n>><snip>*\n>>When To Free a Page*\n>>\n>>Since the VM system uses all available memory for disk caching, there \n>> \n>>\n> ^^^^^^^^^^^^^\n>\n>The VM system, as you can see from the article, is focused on paging and\n>caching the programs and program data. Is the cache for disk reads and\n>writes thrown into the mix as well?\n>\n> \n>\nYes - that is the real question. The following link :\n\nhttp://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/vm-cache.html\n\nand the few \"next\" pages afterward talk about a unified buffer cache i.e \nfile buffer cache is part of the KVM system which is part of the VM \nsystem - but this does not seem to preclude those vfs.*buf* variables \nlimiting the size of the file buffer cache... hmm ... so no real \ndecrease of confusion at this end.. :-)\n\nMark\n\n\n", "msg_date": "Mon, 01 Mar 2004 21:19:31 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" }, { "msg_contents": ">>>>> \"CW\" == Christopher Weimann <[email protected]> writes:\n\n>> which brings me back to my question why not make Freebsd use more of its\n>> memory for disk caching and then tell postgres about it. \n>> \n\nCW> Because you can't. It already uses ALL RAM that isn't in use for\nCW> something else.\n\nNo, it does not.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 02 Mar 2004 15:45:31 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD config" } ]
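A minimal sketch of the arithmetic behind the rule of thumb the thread above keeps returning to: effective_cache_size, expressed in 8 kB disk pages, is taken as vfs.hibufspace / 8192. The figures are the ones quoted in the discussion (418119680 is the proposed doubled hibufspace, and half of it is roughly the 200 MB actually reported); whether hibufspace or maxbufspace is the better basis is left open here, exactly as it is in the thread.

-- hibufspace / 8192, using the doubled figure quoted above
SELECT 418119680 / 8192 AS effective_cache_size_pages;   -- 51040
-- the ~200 MB reported for the current setup works out to about 25520 pages

-- effective_cache_size is only a planner hint, so a candidate value can be
-- tried for a single session before postgresql.conf is touched:
SET effective_cache_size = 51040;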
[ { "msg_contents": "I sent this to the admin list the other day and got no responses. Maybe this\nlist can give me some pointers.\n\nHello\n\n\tI am working on installing and configuring a Postgres database\nserver. I am running Redhat Enterprise ES 3.0 and Redhat Database 3.0.\n\"Postgres version 7.3.4-11\". This server will host 150-200 users. There will\nbe about 9 databases in our cluster ranging anywhere from 500MB to 3GB The\nhardware is a dual Xeon running at 2.8GHZ, 4GB RAM, Ultra 320 SCSI hard\ndrives running on Adaptec Ultra Raid Controllers. \n\tI am planning on separating the OS, Data, WAL on to separate drives\nwhich will be mirrored. I am looking for input on setting kernel parameters,\nand Postgres server runtime parameters and other settings relating to\ntuning. Also is there any benchmarking tools available that will help me\ntune this server.\n\n\nThanks\n\nJohn Allgood - ESC\nSystem Administrator\n770.535.5049\n\n\n", "msg_date": "Thu, 26 Feb 2004 16:28:07 -0500", "msg_from": "\"John Allgood\" <[email protected]>", "msg_from_op": true, "msg_subject": "Database Server Tuning" }, { "msg_contents": "On Thu, 26 Feb 2004 16:28:07 -0500, John Allgood wrote:\n\n> \tI am planning on separating the OS, Data, WAL on to separate drives\n> which will be mirrored.\n\nHave you considered RAID-10 in stead of RAID-1?\n\n> I am looking for input on setting kernel\n> parameters, and Postgres server runtime parameters and other settings\n> relating to tuning.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n(See the \"Performance\" section.)\n\n-- \nGreetings from Troels Arvin, Copenhagen, Denmark\n\n\n", "msg_date": "Thu, 26 Feb 2004 23:01:13 +0100", "msg_from": "Troels Arvin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": "John,\n\n> and Postgres server runtime parameters and other settings relating to\n> tuning. Also is there any benchmarking tools available that will help me\n> tune this server.\n\nCheck out \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nAlso, I'd like to see what you get under heavy load for context-switching. \nWe've been having issues with RH+Xeon with really large queries.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 26 Feb 2004 14:48:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": "Josh Berkus wrote:\n\n>John,\n>\n> \n>\n>>and Postgres server runtime parameters and other settings relating to\n>>tuning. Also is there any benchmarking tools available that will help me\n>>tune this server.\n>> \n>>\n>\n>Check out \n>http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n>Also, I'd like to see what you get under heavy load for context-switching. \n>We've been having issues with RH+Xeon with really large queries.\n>\n> \n>\nThis is exactly what I was looking for. I will keep you posted on what \nkinda results I get when I start putting a load on this server.\n\nThanks\nJohn Allgood - ESC\nSystems Administrator\n770.535.5049\n", "msg_date": "Thu, 26 Feb 2004 22:07:49 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": ">>>>> \"JA\" == John Allgood <[email protected]> writes:\n\nJA> \tI am planning on separating the OS, Data, WAL on to separate drives\nJA> which will be mirrored. 
I am looking for input on setting kernel parameters,\nJA> and Postgres server runtime parameters and other settings relating to\n\nI did a bunch of testing with different RAID levels on a 14 disk\narray. I finally settled on this: RAID5 across 14 disks for the\ndata, the OS (including syslog directory) and WAL on a RAID1 pair on\nthe other channel of the same controller (I didn't want to spring for\ndual RAID controllers). The biggest bumps in performance came from\nincreasing the checkpoint_buffers since my DB is heavily written to,\nand increasing sort_mem.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 02 Mar 2004 15:51:26 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": "Vivek,\n\n> I did a bunch of testing with different RAID levels on a 14 disk\n> array. I finally settled on this: RAID5 across 14 disks for the\n> data, the OS (including syslog directory) and WAL on a RAID1 pair on\n> the other channel of the same controller (I didn't want to spring for\n> dual RAID controllers). The biggest bumps in performance came from\n> increasing the checkpoint_buffers since my DB is heavily written to,\n> and increasing sort_mem.\n\nWith large RAID, have you found that having WAL on a seperate array actually \nboosts performance? The empirical tests we've seen so far don't seem to \nsupport this.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 2 Mar 2004 13:27:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": "\nOn Mar 2, 2004, at 4:27 PM, Josh Berkus wrote:\n\n> Vivek,\n>\n>> I did a bunch of testing with different RAID levels on a 14 disk\n>> array. I finally settled on this: RAID5 across 14 disks for the\n>> data, the OS (including syslog directory) and WAL on a RAID1 pair on\n>> the other channel of the same controller (I didn't want to spring for\n>\n\n\n> With large RAID, have you found that having WAL on a seperate array \n> actually\n> boosts performance? The empirical tests we've seen so far don't seem \n> to\n> support this.\n\nYes, it was a noticeable improvement.\n\n", "msg_date": "Tue, 2 Mar 2004 16:32:03 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" }, { "msg_contents": "Vivek,\n\n> > With large RAID, have you found that having WAL on a seperate array \n> > actually\n> > boosts performance? The empirical tests we've seen so far don't seem \n> > to\n> > support this.\n> \n> Yes, it was a noticeable improvement.\n\nDo you have any stats? This would be useful for your talk, as well.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 3 Mar 2004 10:59:19 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Server Tuning" } ]
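Vivek's note above, that the biggest gains on that box came from raising sort_mem and the WAL buffering, suggests a low-risk way to experiment: sort_mem can be changed for a single session, so a heavy report or index build can be tried at a few values before anything is made permanent. A small sketch, with 64 MB used purely as an illustrative figure rather than a recommendation for any particular machine:

SHOW sort_mem;           -- current per-sort allowance, in kB
SET sort_mem = 65536;    -- 64 MB, for this session only (illustrative value)
-- run the large sort-heavy query or CREATE INDEX here
RESET sort_mem;          -- back to the server-wide default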
[ { "msg_contents": "Hi,\n\nwhat is the most performant way to select for example the first 99 rows of a table and insert them into another table...\n\nat the moment i do this:\n\nfor userrecord in select * from table where account_id = a_account_id and counter_id = userrecord.counter_id and visitortable_id between a_minid and a_maxid limit 99 loop\n\tinsert into lastusers (account_id, counter_id, date, ip, hostname) values(a_account_id,userrecord.counter_id,userrecord.date,userrecord.ip,userrecord.hostname);\nend loop;\n\ni think \"limit\" is a performance killer, is that right? but what to do instead\n\nthanks\nbye\n", "msg_date": "Fri, 27 Feb 2004 17:52:39 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Select-Insert-Query " }, { "msg_contents": "On Friday 27 February 2004 16:52, [email protected] wrote:\n\n*please* don't post HTML-only messages.\n\n<br><br>what is the most performant way to select for\n> example the first 99 rows of a table and insert them into another\n> table...<br><br>at the moment i do this:<br><br>\n\n> for userrecord in select *\n> from table where account_id = a_account_id and counter_id =\n> userrecord.counter_id and visitortable_id between a_minid and a_maxid limit\n> 99 loop\n>\tinsert into lastusers (account_id, counter_id, date, ip,\n> hostname)\n> values(a_account_id,userrecord.counter_id,userrecord.date,userrecord.ip,\n> userrecord.hostname);\n>end loop;\n\nIf that is the actual query, I'm puzzled as to what you're doing, since you \ndon't know what it is you just inserted. Anyway, you can do this as a single \nquery\n\nINSERT INTO lastusers (account_id ... hostname)\nSELECT a_account_id, counter_id...\nFROM table where...\n\nThe LIMIT shouldn't take any time in itself, although if you are sorting then \nPG may need to sort all the rows before discarding all except the first 99.\n\nIf this new query is no better, make sure you have vacuum analyse'd the tables \nand post the output of EXPLAIN ANALYSE for the query.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 27 Feb 2004 19:47:13 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select-Insert-Query" } ]
[ { "msg_contents": "Hi,\n\nThere alot here, so skip to the middle from my WAL settings if you like.\n\nI'm currently investigating the performance on a large database which \nconsumes email designated as SPAM for the perusal of customers wishing \nto check. This incorporates a number of subprocesses - several delivery \ndaemons, an expiry daemon and a UI which performs large selects. A \nconsiderable amount of UPDATE, SELECT and DELETE are performed continually.\n\nStarting with a stock pg config, I've well understood the importance \nincreased shared mem, effective cache size and low random_page_cost as \ndetailed in \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html. After some \nsystem analysis with vmstat and sar we've been able to determin that the \nmain problem is IO bound and IMO this is due to lots of updates \nrequiring high drive contention - the array is a RAID0 mirror and the \ndataset originally 79GB. Alot of SPAM is being sent from our mail \nscanners and coupled with the UI is creating an increasingly lagging system.\n\nTypically all our db servers have these sort of enhancements - 1GB ram, \nSMP boxen with SCSI 160 disks :\neffective_cache_size = 95694\nrandom_page_cost = 0.5\nsort_mem=65536\nmax_connections = 128\nshared_buffers = 15732\n\nMy focus today has been on WAL - I've not looked at WAL before. By \nincreasing the settings thus :\n\nwal_buffers = 64 # need to determin WAL usage\nwal_files = 64 # range 0-64\nwal_sync_method = fsync # the default varies across platforms:\nwal_debug = 0 # range 0-16\n\n# hopefully this should see less LogFlushes per LogInsert - use more WAL \nthough.\ncommit_delay = 10000 # range 0-100000\ncommit_siblings = 2 # range 1-1000\ncheckpoint_segments = 16 # in logfile segments (16MB each), min 1\ncheckpoint_timeout = 300 # in seconds, range 30-3600\nfsync = true\n\ngreat improvements have been seen. A vacuumdb -f -a -z went from \nprocessing 1 table in 10 minutes to 10 tables in 1 minute. :) I actually \nstopped it after 80 tables (48 hours runtime) because the projected end \ntime would have been next week. Once I restarted the postmaster with the \nabove WAL settings, vacuumdb -f -a -z completed all 650 tables by the \nfollowing day.\n\nMy thinking is therefore to reduce disk context switching as best as \npossible within the current hardware limitiations. I'm aiming at keeping \nthe checkpoint subprocess happy that other backends are about to commit \n- hence keep siblings low at 2 - and create a sufficient gap between \ninternal commital so many commits can be done in a single sync. From the \nabove config, I believe I've gone some way to acheive this and the \nperformance I'm now seeing suggests this.\n\nBut I think we can get more out of this as the above setting were picked \nfrom thin air and my concern here is being able to determin WAL file \nusage and if the system is caught out on the other extreme that we're \nnot commiting fast enough. Currently I've read that WAL files shouldn't \nbe more than 2*checkpoint_segments+1 however my pg_xlog directory \ncontains 74 files. This suggests I'm using more logfiles than I should. \nAlso I'm not sure what wal_buffers really should be set to.\n\nCan I get any feedback on this ? How to look into pg's WAL usage would \nbe what I'm looking for. BTW this is an old install I'm afraid 7.2.2 - \nit's been impossible to upgrade up until now because it's been too slow. 
\nI have moved the pg_xlog onto the root SCSI disk - it doesn't appear to \nhave made a huge difference but it could be on the same cable.\n\nAdditional information as a bit of background :\nI can supply sar output if required. I'm currently running our expiry \ndaemon which scans all mail for each domain (ie each table) and this \nseems to take a few hours to run on a 26GB archive. It's alot faster \nthan it ever was. Load gets to about 8 as backends are all busy doing \nselects, updates and deletes. This process has recently already been run \nso it shouldn't be doing too much deleting. Still seems IO bound, and I \ndon't think I'm going to solve that without a better disk arrangement, \nbut this is essentially what I'm doing now - exhausting other possibilities.\n\n$ sar -B -s 16:00:00\n\n16:35:55 pgpgin/s pgpgout/s activepg inadtypg inaclnpg inatarpg\n16:36:00 3601.60 754.40 143492 87791 10230 48302\n16:36:05 5766.40 552.80 143947 88039 10170 48431\n16:36:10 3663.20 715.20 144578 88354 9075 48401\n16:36:15 3634.40 412.00 144335 88405 9427 48433\n16:36:20 5578.40 447.20 143626 88545 9817 48397\n16:36:25 4154.40 469.60 143640 88654 10388 48536\n16:36:30 3504.00 635.20 143538 88763 9992 48458\n16:36:35 3540.80 456.00 142515 88949 10444 48381\n16:36:40 3334.40 1067.20 143268 89244 9832 48468\n\n$ vmstat 5\n procs memory swap io \nsystem cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 7 1 29588 10592 15700 809060 1 0 97 75 0 103 \n13 9 79\n 3 8 0 29588 11680 15736 807620 0 0 3313 438 1838 3559 19 \n13 68\n 2 13 1 29588 12808 15404 800328 0 0 4470 445 1515 1752 \n7 7 86\n 0 9 1 29588 10992 15728 806476 0 0 2933 781 1246 2686 14 \n10 76\n 2 5 1 29588 11336 15956 807884 0 0 3354 662 1773 5211 27 \n17 57\n 4 5 0 29696 13072 16020 813872 0 24 4282 306 2632 7862 45 \n25 31\n 4 6 1 29696 10400 16116 815084 0 0 5086 314 2668 7893 47 \n26 27\n 9 2 1 29696 13060 16308 814232 27 0 3927 748 2586 7836 48 \n29 23\n 3 8 1 29696 10444 16232 812816 3 0 4015 433 2443 7180 47 \n28 25\n 8 4 0 29696 10904 16432 812488 0 0 4537 500 2616 8418 46 \n30 24\n 4 6 2 29696 11048 16320 810276 0 0 6076 569 1893 3919 20 \n14 66\n 0 5 0 29696 10480 16600 813788 0 0 4595 435 2400 6215 33 \n21 46\n 3 6 0 29696 10536 16376 812248 0 0 3802 504 2417 7921 43 \n25 32\n 1 6 1 29696 11236 16500 809636 0 0 3691 357 2171 5199 24 \n15 61\n 0 14 1 29696 10228 16036 801368 0 0 4038 561 1566 3288 16 \n12 72\n\nSorry it's so long but I thought some brief info would be better than \nnot. Thanks for reading,\n\n-- \n\nRob Fielding\nDevelopment\nDesigner Servers Ltd\n\n", "msg_date": "Sat, 28 Feb 2004 18:40:12 +0000", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": true, "msg_subject": "WAL Optimisation - configuration and usage" }, { "msg_contents": "Rob\n\nSir - I have to congratulate you on having the most coherently summarised and \nyet complex list query I have ever seen. \n\nI fear that I will be learning from this problem rather than helping, but one \nthing did puzzle me - you've set your random_page_cost to 0.5? I'm not sure \nthis is sensible - you may be compensating for some other parameter \nout-of-range.\n\n\n-- \n Richard Huxton\n", "msg_date": "Sat, 28 Feb 2004 19:37:26 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "> random_page_cost = 0.5\n\nNot likely. 
The lowest this value should ever be is 1, and thats if\nyou're using something like a ram drive.\n\nIf you're drives are doing a ton of extra random IO due to the above\n(rather than sequential reads) it would lower the throughput quite a\nbit.\n\nTry a value of 2 for a while.\n\n\n", "msg_date": "Sat, 28 Feb 2004 19:52:52 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "Rod Taylor wrote:\n\n>>random_page_cost = 0.5\n>> \n>>\n>\n>Try a value of 2 for a while.\n>\n> \n>\nOK thanks Richard and Rod. I've upped this to 2. I think I left this \nover from a previous play with setttings on my IDE RAID 0 workstation. \nIt seemed to have a good effect being set as a low float so it stuck.\n\nI've set it to 2.\n\n From another post off list, I've also bumped up\n\nmax_fsm_relations = 1000 # min 10, fsm\nmax_fsm_pages = 20000 # min 1000, fs\nvacuum_mem = 32768 # min 1024\n\nas they did seem a little low. I'm hesitant to set them too high at this \nstage as I'd prefer to keep as much RAM available for runtime at this time.\n\nI'm still hoping that perhaps the uber-pgadmin Mr Lane might reply about \nmy WAL issue :) however I'm getting the feeling now the server is \nrunning with a much higher level of performance than it has been. Won't \nknow until tomorrow thought.\n\nCheers,\n\n-- \nRob Fielding\nDevelopment\nDesigner Servers Ltd\n\n", "msg_date": "Sun, 29 Feb 2004 13:08:01 +0000", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "\nFurther update to my WAL experimentation. pg_xlog files have increased \nto 81, and checking today up to 84. Currently nothing much going on with \nthe server save a background process running a select every 30 seconds \nwith almost no impact (according to IO from vmstats).\n\nThis in itself is a good sign - an improvement on running last week, but \nI'd still like to get clarification on WAL file usage if possible.\n\nLog file tailing has nothing more interesting than a whole set of \n\"recycled transaction log file\" entries :\n\n2004-03-01 16:01:55 DEBUG: recycled transaction log file 0000007100000017\n2004-03-01 16:07:01 DEBUG: recycled transaction log file 0000007100000018\n2004-03-01 16:17:14 DEBUG: recycled transaction log file 0000007100000019\n2004-03-01 16:22:20 DEBUG: recycled transaction log file 000000710000001A\n2004-03-01 16:32:31 DEBUG: recycled transaction log file 000000710000001B\n2004-03-01 16:37:36 DEBUG: recycled transaction log file 000000710000001C\n2004-03-01 16:47:48 DEBUG: recycled transaction log file 000000710000001D\n2004-03-01 16:52:54 DEBUG: recycled transaction log file 000000710000001E\n2004-03-01 17:03:05 DEBUG: recycled transaction log file 000000710000001F\n\nLooks kinda automated, but the times aren't quite even at around 6-10 \nminutes apart.\n\ncheers,\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n", "msg_date": "Mon, 01 Mar 2004 17:30:41 +0000", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": ">Rob Fielding wrote:\n> My focus today has been on WAL - I've not looked at WAL before. 
By\n> increasing the settings thus :\n> \n> wal_buffers = 64 # need to determin WAL usage\n> wal_files = 64 # range 0-64\n> wal_sync_method = fsync # the default varies across platforms:\n> wal_debug = 0 # range 0-16\n> \n> # hopefully this should see less LogFlushes per LogInsert - use more\nWAL\n> though.\n> commit_delay = 10000 # range 0-100000\n> commit_siblings = 2 # range 1-1000\n> checkpoint_segments = 16 # in logfile segments (16MB each), min 1\n> checkpoint_timeout = 300 # in seconds, range 30-3600\n> fsync = true\n\n> But I think we can get more out of this as the above setting were\npicked\n> from thin air and my concern here is being able to determin WAL file\n> usage and if the system is caught out on the other extreme that we're\n> not commiting fast enough. Currently I've read that WAL files\nshouldn't\n> be more than 2*checkpoint_segments+1 however my pg_xlog directory\n> contains 74 files. This suggests I'm using more logfiles than I\nshould.\n> Also I'm not sure what wal_buffers really should be set to.\n\nAs Richard Huxton says, we're all learning...I'm looking at WAL logic\nfor other reasons right now...\n\nThis is based upon my reading of the code; I think the manual contains\nat least one confusion that has not assisted your understanding (or\nmine):\n \nThe WAL files limit of 2*checkpoint_segments+1 refers to the number of\nfiles allocated-in-advance of the current log, not the total number of\nfiles in use. pg uses a cycle of logs, reusing older ones when all the\ntransactions in those log files have been checkpointed. The limit is set\nto allow checkpoint to release segments and have them all be reused at\nonce. Pg stores them up for use again later when workload hots up again.\n\nIf it cannot recycle a file because there is a still-current txn on the\nend of the cycle, then it will allocate a new file and use this instead,\nbut still keeping everything in a cycle. Thus if transactions are\nparticularly long running, then the number of files in the cycle will\ngrow. So overall, normal behaviour so far. I don't think there's\nanything to worry about in having that many files in your xlog cycle.\n\nThat behaviour is usually seen with occasional long running txns. When a\nlong running transaction is over, pg will try to reduce the number of\nfiles in the cycle until its back to target. \n\nYou seem to be reusing one file in the cycle every 10 mins - this is\nhappening as the result of a checkpoint timeout - \"kinda automated\" as\nyou say. [A checkpoint is the only time you can get the messages you're\ngetting] At one file per checkpoint, it will take 16*2+1=33\ncheckpoints*10 mins = 5 hours before it hits the advance allocation file\nlimit and then starts to reduce number of files. That's why they appear\nto stay constant...\n\nIf you want to check whether this is correct, manually issue a number of\nCHECKPOINT statements. The messages should change from \"recycled\" to\n\"removing\" transaction log file once you've got to 33 checkpoints - the\nnumber of WAL log files should start to go down also? If so, then\nthere's nothing too strange going on, just pg being a little slow in\nreducing the number of wal log files.\n\nSo, it seems that you are running occasional very long transactions.\nDuring that period you run up to 60-80 wal files. That's just on the\nedge of your wal_buffers limit, which means you start to write wal\nquicker than you'd like past that point. 
Your checkpoint_timeout is 300\nseconds, but a checkpoint will also be called every checkpoint_segments,\nor currently every 16 wal files. Since you go as high as 60-80 then you\nare checkpointing 4-5 times during the heavy transaction period -\nassuming it's all one block of work. In the end, each checkpoint is\ncausing a huge I/O storm, during which not much work happens. \n\nI would suggest that you reduce the effect of checkpointing by either:\n- re-write app to do scan deletes in smaller chunks in quieter periods\nor\n- increase checkpoint_segments to 128, though this may effect your\nrecoverability\n\nYou can of course only do so much with the memory available to you. If\nyou increase one allocation of memory, you may have to reduce another\nparameter and that may be counter productive.\n\n[An alternative view is that you should go for more frequent, not less\nfrequent checkpoints in this situation, smoothing out the effect of the\ncheckpoints, rather than trying to avoid them at all. On the other hand,\nthat approach also increases total WAL log volume, which means you'll\nmake poor use of I/O and memory buffering. I'd stay high.]\n\nHowever, I'm not sure \n- why checkpoint interval of 300 secs causes them to happen every 10\nmins in quieter periods; is that an occaisional update occurring?\n- why checkpoint only releases single Wal file each time - but that\nmaybe me just reading the code incorrectly.\n\nPlease set WAL_DEBUG to 1 so we can see a bit more info: thanks.\n\n> Can I get any feedback on this ? How to look into pg's WAL usage would\n> be what I'm looking for. BTW this is an old install I'm afraid 7.2.2 -\n> it's been impossible to upgrade up until now because it's been too\nslow.\n> I have moved the pg_xlog onto the root SCSI disk - it doesn't appear\nto\n> have made a huge difference but it could be on the same cable.\n\nMy advice is don't touch WAL_SYNC_METHOD...\n\nI **think** the WAL behaviour is still the same in 7.4.1, so no rush to\nupgrade on that account - unless you're using temporary tables....\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 2 Mar 2004 00:27:51 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> - why checkpoint interval of 300 secs causes them to happen every 10\n> mins in quieter periods; is that an occaisional update occurring?\n\nThere is code in there to suppress a checkpoint if no WAL-loggable\nactivity has happened since the last checkpoint. 
Not sure if that's\nrelevant to the issue or not though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Mar 2004 20:01:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage " }, { "msg_contents": "Simon,\n\n> Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.\n\nI'm pretty sure that WAL_DEBUG requires a compile-time option.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 1 Mar 2004 17:10:07 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": ">Tom Lane\n> \"Simon Riggs\" <[email protected]> writes:\n> > - why checkpoint interval of 300 secs causes them to happen every 10\n> > mins in quieter periods; is that an occaisional update occurring?\n> \n> There is code in there to suppress a checkpoint if no WAL-loggable\n> activity has happened since the last checkpoint. Not sure if that's\n> relevant to the issue or not though.\n\nThanks Tom, at least that clears up why the checkpoints are off.\n\nI must admit, I'm taken aback though:\n\nI'd prefer it if it DIDN'T do that. If the system is quiet, the odd\ncheckpoint doesn't matter that much - however, taking a long time to\nreturn the xlog files to the *desired* state of having many\npre-allocated log files is not a great thing.\n\nWhat do you think about continuing to checkpoint normally until the\nnumber of xlog files has returned to 2*checkpoint_segments+1, then\nallowing a slow down of checkpoints when quiet? It would be easy enough\nto set a variable true while rearranging files to the limit, then set it\nfalse when the limit has been hit and then using that to activate the\nslow-down code (not that I know where that is mind...). However, that\nwould require some backend to postmaster ipc, which may be a problem.\n\nOr perhaps the real problem is only recycling one file at a time - if\nwe're running this as the checkpoint process it wouldn't be a problem to\nrecycle more than one at the same time would it?\n \nThe reason for my interest is: when archiving logs for PITR, there may\nbe occasional long pauses while waiting for tape mounts (typically 30\nminutes from notification to change). These pauses could (and therefore\nwill eventually for some people) cause severe log file build up, and I'm\ninterested in making sure this build up doesn't take too long to clear.\nForgetting the archival API stuff for a second, this is roughly the same\nsituation as Rob is experiencing (or at least causing him to pause and\nthink).\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 2 Mar 2004 22:16:12 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] WAL Optimisation - configuration and usage " }, { "msg_contents": ">Josh Berkus wrote\n> >Simon Riggs wrote\n> > Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.\n> \n> I'm pretty sure that WAL_DEBUG requires a compile-time option.\n\nIn my naiveté, I just set and use it. I discovered it in the code, then\nset it to take advantage.\n\nI'm surprised, but you are right, the manual does SAY this requires a\ncompile time option; it is unfortunately not correct. Maybe this was\nonce as you say? 
If Rob is using 7.2.2 maybe this still applies to him?\nI don’t know.\n\nSetting wal_debug > 0 in postgresql.conf sets a variable XLOGDEBUG,\nwhich although it is all capitals is not in fact a compiler directive,\nas it looks. This variable is used within xlog.c to output **too much**\ninformation to the log. However, it is the only option at present.\n\nThis prompts me however to consider the idea of having various levels of\nWAL debug output, or using some kind of log_checkpoint mechanism to\nbetter understand what is going on.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 2 Mar 2004 22:16:12 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "Simon Riggs wrote:\n>>Josh Berkus wrote\n>>\n>>>Simon Riggs wrote\n>>>Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.\n>>\n>>I'm pretty sure that WAL_DEBUG requires a compile-time option.\n> \n> I'm surprised, but you are right, the manual does SAY this requires a\n> compile time option; it is unfortunately not correct.\n\nActually, the manual is correct: in 7.4 and earlier releases, enabling \nwal_debug can be done without also setting a compile-time #ifdef. As \nof current CVS HEAD, the WAL_DEBUG #ifdef must be defined before this \nvariable is available.\n\n-Neil\n", "msg_date": "Tue, 02 Mar 2004 19:01:11 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "Neil,\n\n> Actually, the manual is correct: in 7.4 and earlier releases, enabling \n> wal_debug can be done without also setting a compile-time #ifdef. As \n> of current CVS HEAD, the WAL_DEBUG #ifdef must be defined before this \n> variable is available.\n\nHmmm. I was told that it was this way for 7.4 as well; that's why it's in \nthe docs that way.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 3 Mar 2004 10:58:20 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": ">Neil Conway\n> Simon Riggs wrote:\n> >>Josh Berkus wrote\n> >>\n> >>>Simon Riggs wrote\n> >>>Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.\n> >>\n> >>I'm pretty sure that WAL_DEBUG requires a compile-time option.\n> >\n> > I'm surprised, but you are right, the manual does SAY this requires\na\n> > compile time option; it is unfortunately not correct.\n> \n> Actually, the manual is correct: in 7.4 and earlier releases, enabling\n> wal_debug can be done without also setting a compile-time #ifdef. As\n> of current CVS HEAD, the WAL_DEBUG #ifdef must be defined before this\n> variable is available.\n\nTouche! I stand corrected, thank you both. My suggestion does work for\nRob, then.\n\n[This also implies I have a screwed version on my machine, so thank you\nalso for flushing that lurking issue out for me. I'd had a suspicion for\na few weeks. Lucky I'm still just prototyping.]\n\nOn the other hand, I was just about to change the wal_debug behaviour to\nallow better debugging of PITR features as they're added. I think it is\nvery important to be able to put the system fairly easily into debug\nmode; a recompile is easy enough, but it would be even better to avoid\nthis completely. 
This would mean reversing the change you describe:\nhere's the design:\n\nThe behaviour I wish to add is:\nKeep wal_debug as a value between 0 and 16.\nIf =0 then no debug output (default).\nUse following bitmasks against the value\nMask 1 = XLOG Checkpoints get logged\nMask 2 = Archive API calls get logged\nMask 4 = Transaction - commits get logged\nMask 8 = Flush & INSERTs get logged\n\nThat way it should be fairly straightforward to control the amount and\ntype of information available to administrators. The existing design\nproduces too much info to be easily usable, mostly requiring a perl\nprogram to filter out the info overload and do record counts. This\nsuggested design allows you to control the volume of messages, since the\nbitmasks are arranged in volume/frequency order and brings the wal_debug\noption back into something useful for problem diagnosis on live systems,\nnot just hacking the code.\n\nAnybody object to these mods, or have better/different ideas? Getting\nthe diagnostics right is fairly important, IMHO, to making PITR become\nreal.\n\nBest regards, Simon Riggs\n\n", "msg_date": "Wed, 3 Mar 2004 21:40:09 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] WAL Optimisation - configuration and usage" }, { "msg_contents": "Josh Berkus wrote:\n> Hmmm. I was told that it was this way for 7.4 as well; that's why it's in \n> the docs that way.\n\nNo such statement is made in the docs AFAIK: they merely say \"If \nnonzero, turn on WAL-related debugging output.\"\n\nI invented a new #ifdef symbol when making this change in CVS HEAD, so \nI think you are misremembering.\n\n-Neil\n", "msg_date": "Wed, 03 Mar 2004 18:44:40 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL Optimisation - configuration and usage" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> The behaviour I wish to add is:\n> Keep wal_debug as a value between 0 and 16.\n> If =0 then no debug output (default).\n> Use following bitmasks against the value\n> Mask 1 = XLOG Checkpoints get logged\n> Mask 2 = Archive API calls get logged\n> Mask 4 = Transaction - commits get logged\n> Mask 8 = Flush & INSERTs get logged\n\nI see no value in reverting Neil's change. The above looks way too much\nlike old-line assembler-programmer thinking to me, anyway. Why not\ninvent a separate, appropriately named boolean variable for each thing\nyou want to control? Even C programmers manage to avoid doing the sort\nof mental arithmetic that the above would force onto DBAs.\n\nAs for whether it should be #ifdef'd or not, I'd have no objection to\nturning WAL_DEBUG on by default in pg_config_manual.h for the duration\nof PITR development. One should not however confuse short-term\ndebugging needs with features that the average user is going to need\nindefinitely. (It was not too long ago that there was still debugging\ncode for btree index building in there, for crissakes.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Mar 2004 18:46:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PERFORM] WAL Optimisation - configuration and usage " }, { "msg_contents": "Simon Riggs wrote:\n> On the other hand, I was just about to change the wal_debug behaviour to\n> allow better debugging of PITR features as they're added.\n\nThat's a development activity. 
Enabling the WAL_DEBUG #ifdef by \ndefault during the 7.5 development cycle would be uncontroversial, I \nthink.\n\n> I think it is very important to be able to put the system fairly\n> easily into debug mode\n\nIt is? Why would this be useful for non-development activities?\n\n(It may well be the case that we ought to report more or better \ninformation about the status of the WAL subsystem; but WAL_DEBUG is \nsurely not the right mechanism for emitting information intended for \nan administrator.)\n\n-Neil\n", "msg_date": "Wed, 03 Mar 2004 18:50:04 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] WAL Optimisation - configuration and usage" }, { "msg_contents": ">Tom Lane\n> \"Simon Riggs\" <[email protected]> writes:\n> > The behaviour I wish to add is:\n> > Keep wal_debug as a value between 0 and 16.\n> > If =0 then no debug output (default).\n> > Use following bitmasks against the value\n> > Mask 1 = XLOG Checkpoints get logged\n> > Mask 2 = Archive API calls get logged\n> > Mask 4 = Transaction - commits get logged\n> > Mask 8 = Flush & INSERTs get logged\n> \n> I see no value in reverting Neil's change. The above looks way too\nmuch\n> like old-line assembler-programmer thinking to me, anyway. Why not\n> invent a separate, appropriately named boolean variable for each thing\n> you want to control? Even C programmers manage to avoid doing the\nsort\n> of mental arithmetic that the above would force onto DBAs.\n> \n> As for whether it should be #ifdef'd or not, I'd have no objection to\n> turning WAL_DEBUG on by default in pg_config_manual.h for the duration\n> of PITR development. One should not however confuse short-term\n> debugging needs with features that the average user is going to need\n> indefinitely. (It was not too long ago that there was still debugging\n> code for btree index building in there, for crissakes.)\n\n...erm, I guess you didn't like that one then? ;}\n\n> As for whether it should be #ifdef'd or not, I'd have no objection to\n> turning WAL_DEBUG on by default in pg_config_manual.h for the duration\n> of PITR development. \n\nYes OK, thank you.\n\n> Why not\n> invent a separate, appropriately named boolean variable for each thing\n> you want to control?\n\nYes, OK, will do.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Mon, 8 Mar 2004 23:28:25 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] WAL Optimisation - configuration and usage " }, { "msg_contents": ">Neil Conway\n> Simon Riggs wrote:\n> > On the other hand, I was just about to change the wal_debug\nbehaviour to\n> > allow better debugging of PITR features as they're added.\n> \n> That's a development activity. Enabling the WAL_DEBUG #ifdef by\n> default during the 7.5 development cycle would be uncontroversial, I\n> think.\n\nYes that's the best proposal. Can I leave that with you?\n\n> > I think it is very important to be able to put the system fairly\n> > easily into debug mode\n> \n> It is? Why would this be useful for non-development activities?\n> \n> (It may well be the case that we ought to report more or better\n> information about the status of the WAL subsystem; but WAL_DEBUG is\n> surely not the right mechanism for emitting information intended for\n> an administrator.)\n\nRight again. 
I guess my proposal amounted to quick-and-dirty logging.\n\nI'll think some more.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 9 Mar 2004 20:26:55 -0000", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] WAL Optimisation - configuration and usage" } ]
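A small sketch of the arithmetic behind Simon's explanation, just to put the observed file counts in context: with checkpoint_segments = 16 and 16 MB per segment, the advance-allocation target is 2*16+1 = 33 files, while the 81 to 84 files Rob reports correspond to well over a gigabyte of pg_xlog.

-- steady-state target for the settings used in this thread
SELECT 2 * 16 + 1         AS target_xlog_files,     -- 33
       (2 * 16 + 1) * 16  AS target_pg_xlog_mb,     -- 528 MB
       84 * 16            AS observed_pg_xlog_mb;   -- roughly 1.3 GB at 84 files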
[ { "msg_contents": "All:\n \nWe have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9,\nPG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.\n \nWe are expecting a pretty high load, a few thousands of 'concurrent'\nusers executing either select, insert, update, statments.\n \nWhat is the next step up in terms of handling very heavy loads?\nClustering? \n \nAre there any standard, recommended clustering options?\n \nHow about this? http://c-jdbc.objectweb.org\n<http://c-jdbc.objectweb.org> \n \nAlso, in terms of hardware, overall, what benefits more, a SunFire 880\n(6 or 8 CPUs, lots of RAM, internal FC Drives) type of machine, or an\nIA-64 architecture?\n \nAppreciate any inputs,\n \nThanks,\nAnjan\n************************************************************************\n** \n\nThis e-mail and any files transmitted with it are intended for the use\nof the addressee(s) only and may be confidential and covered by the\nattorney/client and other privileges. If you received this e-mail in\nerror, please notify the sender; do not disclose, copy, distribute, or\ntake any action in reliance on the contents of this information; and\ndelete it from your system. Any other use of this e-mail is prohibited.\n\n \n\nMessage\n\n\n\nAll:\n \nWe have a \nQuad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9, PG 7.4.0. There's \nan internal U320, 10K RPM RAID-10 setup on 4 drives.\n \nWe are expecting \na pretty high load, a few thousands of 'concurrent' users executing \neither select, insert, update, statments.\n \nWhat is the next \nstep up in terms of  handling very heavy loads? Clustering? \n\n \nAre there any \nstandard, recommended clustering options?\n \nHow about this? http://c-jdbc.objectweb.org\n \nAlso, in terms of \nhardware, overall, what benefits more, a SunFire 880 (6 or 8 CPUs, lots of RAM, \ninternal FC Drives) type of machine, or an IA-64 \narchitecture?\n \nAppreciate any \ninputs,\n \nThanks,Anjan\n\n************************************************************************** \n\nThis e-mail and any files transmitted with it are intended for the use of the \naddressee(s) only and may be confidential and covered by the attorney/client and \nother privileges. If you received this e-mail in error, please notify the \nsender; do not disclose, copy, distribute, or take any action in reliance on the \ncontents of this information; and delete it from your system. Any other use of \nthis e-mail is prohibited.", "msg_date": "Mon, 1 Mar 2004 10:35:30 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Scaling further up" }, { "msg_contents": "On Tue, 2004-03-02 at 17:42, William Yu wrote:\n> Anjan Dave wrote:\n> > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9, \n> > PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.\n> > \n> > We are expecting a pretty high load, a few thousands of 'concurrent' \n> > users executing either select, insert, update, statments.\n> \n> The quick and dirty method would be to upgrade to the recently announced \n> 3GHz Xeon MPs with 4MB of L3. My semi-educated guess is that you'd get \n> another +60% there due to the huge L3 hiding the Xeon's shared bus penalty.\n\nIf you are going to have thousands of 'concurrent' users you should\nseriously consider the 2.6 kernel if you are running Linux or as an\nalternative going with FreeBSD. 
You will need to load test your system\nand become an expert on tuning Postgres to get the absolute maximum\nperformance from each and every query you have.\n\nAnd you will need lots of hard drives. By lots I mean dozen(s) in a\nraid 10 array with a good controller. Thousands of concurrent users\nmeans hundreds or thousands of transactions per second. I've personally\nseen it scale that far but in my opinion you will need a lot more hard\ndrives and ram than cpu.\n\n", "msg_date": "Tue, 02 Mar 2004 10:57:27 +0000", "msg_from": "Fred Moyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Anjan Dave wrote:\n> We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9, \n> PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.\n> \n> We are expecting a pretty high load, a few thousands of 'concurrent' \n> users executing either select, insert, update, statments.\n\nThe quick and dirty method would be to upgrade to the recently announced \n3GHz Xeon MPs with 4MB of L3. My semi-educated guess is that you'd get \nanother +60% there due to the huge L3 hiding the Xeon's shared bus penalty.\n", "msg_date": "Tue, 02 Mar 2004 09:42:03 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
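Fred's point above about transaction rates can be made concrete with a back-of-the-envelope figure. The numbers below are assumptions for illustration only (a few thousand nominally concurrent users, one statement every few seconds of think time), not measurements from this system; the point is simply that the sustained statement rate lands in the hundreds to thousands per second, which is why the advice concentrates on spindles and RAM rather than CPU.

-- illustrative only: 3000 users, one statement per user every 3 seconds
SELECT 3000 / 3 AS rough_statements_per_second;   -- 1000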
[ { "msg_contents": "> All:\n> \n> We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running\nRH9, PG 7.4.0. There's \n> an internal U320, 10K RPM RAID-10 setup on 4 drives.\n> \n> We are expecting a pretty high load, a few thousands of 'concurrent'\nusers executing either \n> select, insert, update, statments.\n\n> What is the next step up in terms of handling very heavy loads?\nClustering? \n\nI'd look at adding more disks first. Depending on what type of query\nload you get, that box sounds like it will be very much I/O bound. More\nspindles = more parallell operations = faster under load. Consider\nadding 15KRPM disks as well, they're not all that much more expensive,\nand should give you better performance than 10KRPM.\n\nAlso, make sure you put your WAL disks on a separate RAIDset if possible\n(not just a separate partition on existing RAIDset).\n\nFinally, if you don't already have it, look for a battery-backed RAID\ncontroller that can do writeback-cacheing, and enable that. (Don't even\nthink about enabling it unless it's battery backed!) And add as much RAM\nas you can to that controller.\n\n\n//Magnus\n", "msg_date": "Mon, 1 Mar 2004 21:54:20 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "For the disks part - I am looking at a SAN implementation, and I will be\nplanning a separate RAID group for the WALs.\n\nThe controller is a PERC, with 128MB cache, and I think it is writeback.\n\nOther than the disks, I am curious what other people are using in terms\nof the horsepower needed. The Quad server has been keeping up, but we\nare expecting quite high loads in the near future, and I am not sure if\njust by having the disks on a high-end storage will do it.\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Magnus Hagander [mailto:[email protected]] \nSent: Monday, March 01, 2004 3:54 PM\nTo: Anjan Dave; [email protected]\nSubject: RE: [PERFORM] Scaling further up\n\n\n> All:\n> \n> We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running\nRH9, PG 7.4.0. There's \n> an internal U320, 10K RPM RAID-10 setup on 4 drives.\n> \n> We are expecting a pretty high load, a few thousands of 'concurrent'\nusers executing either \n> select, insert, update, statments.\n\n> What is the next step up in terms of handling very heavy loads?\nClustering? \n\nI'd look at adding more disks first. Depending on what type of query\nload you get, that box sounds like it will be very much I/O bound. More\nspindles = more parallell operations = faster under load. Consider\nadding 15KRPM disks as well, they're not all that much more expensive,\nand should give you better performance than 10KRPM.\n\nAlso, make sure you put your WAL disks on a separate RAIDset if possible\n(not just a separate partition on existing RAIDset).\n\nFinally, if you don't already have it, look for a battery-backed RAID\ncontroller that can do writeback-cacheing, and enable that. (Don't even\nthink about enabling it unless it's battery backed!) And add as much RAM\nas you can to that controller.\n\n\n//Magnus\n", "msg_date": "Mon, 1 Mar 2004 16:02:27 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Anjan,\n\n> Other than the disks, I am curious what other people are using in terms\n> of the horsepower needed. The Quad server has been keeping up, but we\n> are expecting quite high loads in the near future, and I am not sure if\n> just by having the disks on a high-end storage will do it.\n\nDo a performance analysis of RH9. My experience with RH on Xeon has been \nquite discouraging lately, and I've been recommending swapping stock kernels \nfor the RH kernel.\n\nOf course, if this is RHES, rather than the standard, then test & talk to RH \ninstead.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 1 Mar 2004 17:12:21 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "After a long battle with technology, [email protected] (Josh Berkus), an earthling, wrote:\n>> Other than the disks, I am curious what other people are using in\n>> terms of the horsepower needed. The Quad server has been keeping\n>> up, but we are expecting quite high loads in the near future, and I\n>> am not sure if just by having the disks on a high-end storage will\n>> do it.\n>\n> Do a performance analysis of RH9. 
My experience with RH on Xeon has\n> been quite discouraging lately, and I've been recommending swapping\n> stock kernels for the RH kernel.\n\nBy that, you mean that you recommend that RHAT kernels be replaced by\n\"stock\" ones?\n\n> Of course, if this is RHES, rather than the standard, then test &\n> talk to RH instead.\n\nIf you're spending the money, better demand value from the vendor...\n\n(And if RHAT is going to charge the big bucks, they'll have to provide\nservice...)\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\n\"I take it all back. Microsoft Exchange is RFC compliant.\nRFC 1925, point three.\" -- Author unknown\n", "msg_date": "Mon, 01 Mar 2004 21:43:18 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "Hi,\n\nnobody has an idea? :-(\n\n-----Urspr�ngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]] Im Auftrag von [email protected]\nGesendet: Freitag, 27. Februar 2004 17:53\nAn: [email protected]\nBetreff: [PERFORM] Select-Insert-Query \n\nHi,\n\nwhat is the most performant way to select for example the first 99 rows of a table and insert them into another table...\n\nat the moment i do this:\n\nfor userrecord in select * from table where account_id = a_account_id and counter_id = userrecord.counter_id and visitortable_id between a_minid and a_maxid limit 99 loop\ninsert into lastusers (account_id, counter_id, date, ip, hostname) values(a_account_id,userrecord.counter_id,userrecord.date\n ,userrecord.ip,userrecord.hostname);\nend loop;\n\ni think \"limit\" is a performance killer, is that right? but what to do instead\n\nthanks\nbye\n\n", "msg_date": "Tue, 2 Mar 2004 01:49:03 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Select-Insert-Query " }, { "msg_contents": "On Tue, 2004-03-02 at 00:49, [email protected] wrote:\n\n> what is the most performant way to select for example the first 99\n> rows of a table and insert them into another table... \n> \n> at the moment i do this: \n> \n> for userrecord in select * from table where account_id = a_account_id\n> and counter_id = userrecord.counter_id and visitortable_id between\n> a_minid and a_maxid limit 99 loop \n\nUsing LIMIT without ORDER BY will give a selection that is dependent on\nthe physical location of rows in the table; this will change whenever\none of them is UPDATEd.\n\n> insert into lastusers (account_id, counter_id, date, ip, hostname)\n> values(a_account_id,userrecord.counter_id,userrecord.date\n> ,userrecord.ip,userrecord.hostname); \n> end loop; \n> \n> i think \"limit\" is a performance killer, is that right? but what to do\n> instead \n\nI'm sure it is the loop that is the killer. Use a query in the INSERT\nstatement:\n\nINSERT INTO lastusers (account_id, counter_id, date, ip, hostname)\n SELECT * FROM table\n WHERE account_id = a_account_id AND\n counter_id = userrecord.counter_id AND\n visitortable_id between a_minid and a_maxid\n ORDER BY date DESC\n LIMIT 99;\n\n-- \nOliver Elphick <[email protected]>\nLFIX Ltd\n\n", "msg_date": "Tue, 02 Mar 2004 11:48:06 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select-Insert-Query" } ]
[ { "msg_contents": "\"By lots I mean dozen(s) in a raid 10 array with a good controller.\" \n\nI believe, for RAID-10, I will need even number of drives. Currently,\nthe size of the database is about 13GB, and is not expected to grow\nexponentially with thousands of concurrent users, so total space is not\nof paramount importance compared to performance.\n\nDoes this sound reasonable setup?\n10x36GB FC drives on RAID-10\n4x36GB FC drives for the logs on RAID-10 (not sure if this is the\ncorrect ratio)?\n1 hotspare\nTotal=15 Drives per enclosure.\n\nTentatively, I am looking at an entry-level EMC CX300 product with 2GB\nRAID cache, etc.\n\nQuestion - Are 73GB drives supposed to give better performance because\nof higher number of platters?\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Fred Moyer [mailto:[email protected]] \nSent: Tuesday, March 02, 2004 5:57 AM\nTo: William Yu; Anjan Dave\nCc: [email protected]\nSubject: Re: [PERFORM] Scaling further up\n\n\nOn Tue, 2004-03-02 at 17:42, William Yu wrote:\n> Anjan Dave wrote:\n> > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running \n> > RH9,\n> > PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4\ndrives.\n> > \n> > We are expecting a pretty high load, a few thousands of 'concurrent'\n> > users executing either select, insert, update, statments.\n> \n> The quick and dirty method would be to upgrade to the recently \n> announced\n> 3GHz Xeon MPs with 4MB of L3. My semi-educated guess is that you'd get\n\n> another +60% there due to the huge L3 hiding the Xeon's shared bus\npenalty.\n\nIf you are going to have thousands of 'concurrent' users you should\nseriously consider the 2.6 kernel if you are running Linux or as an\nalternative going with FreeBSD. You will need to load test your system\nand become an expert on tuning Postgres to get the absolute maximum\nperformance from each and every query you have.\n\nAnd you will need lots of hard drives. By lots I mean dozen(s) in a\nraid 10 array with a good controller. Thousands of concurrent users\nmeans hundreds or thousands of transactions per second. I've personally\nseen it scale that far but in my opinion you will need a lot more hard\ndrives and ram than cpu.\n\n", "msg_date": "Tue, 2 Mar 2004 14:41:12 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, 2 Mar 2004, Anjan Dave wrote:\n\n> \"By lots I mean dozen(s) in a raid 10 array with a good controller.\" \n> \n> I believe, for RAID-10, I will need even number of drives.\n\nCorrect.\n\n> Currently,\n> the size of the database is about 13GB, and is not expected to grow\n> exponentially with thousands of concurrent users, so total space is not\n> of paramount importance compared to performance.\n> \n> Does this sound reasonable setup?\n> 10x36GB FC drives on RAID-10\n> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the\n> correct ratio)?\n> 1 hotspare\n> Total=15 Drives per enclosure.\n\nPutting the Logs on RAID-10 is likely to be slower than, or no faster than \nputting them on RAID-1, since the RAID-10 will have to write to 4 drives, \nwhile the RAID-1 will only have to write to two drives. now, if you were \nreading in the logs a lot, it might help to have the RAID-10.\n\n> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB\n> RAID cache, etc.\n\nPick up a spare, I'll get you my home address, etc... :-)\n\nSeriously, that's huge. 
At that point you may well find that putting \nEVERYTHING on a big old RAID-5 performs best, since you've got lots of \ncaching / write buffering going on.\n\n> Question - Are 73GB drives supposed to give better performance because\n> of higher number of platters?\n\nGenerally, larger hard drives perform better than smaller hard drives \nbecause they a: have more heads and / or b: have a higher areal density.\n\nIt's a common misconception that faster RPM drives are a lot faster, when, \nin fact, their only speed advantage is slight faster seeks. The areal \ndensity of faster spinning hard drives tends to be somewhat less than the \nslower spinning drives, since the maximum frequency the heads can work in \non both drives, assuming the same technology, is the same. I.e. the speed \nat which you can read data off of the platter doesn't usually go up with a \nhigher RPM drive, only the speed with which you can get to the first \nsector.\n\n", "msg_date": "Tue, 2 Mar 2004 14:16:24 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Hi all,\n\nIf you have a DB of 'only' 13 GB and you do not expect it to grow much, it \nmight be advisable to have enough memory (RAM) to hold the entire DB in \nshared memory (everything is cached). If you have a server with say 24 GB or \nmemory and can allocate 20 GB for cache, you don't care about the speed of \ndisks any more - all you worry about is the speed of your memory and your \nnetwork connection.\nI believe, this not possible using 32-bit technology, you would have to go to \nsome 64-bit platform, but if it's speed you want ...\nYou can also try solid state hard disk drives. These are actually just meory, \nthere are no moving parts, but the look and behave like very very fast disk \ndrives. I have seen them at capacities of 73 GB - but they didn't mention the \nprice (I'd probably have a heart attack when I look at the price tag).\n\nBest regards,\nChris\n\n\nOn Tuesday 02 March 2004 14:41, Anjan Dave wrote:\n> \"By lots I mean dozen(s) in a raid 10 array with a good controller.\"\n>\n> I believe, for RAID-10, I will need even number of drives. Currently,\n> the size of the database is about 13GB, and is not expected to grow\n> exponentially with thousands of concurrent users, so total space is not\n> of paramount importance compared to performance.\n>\n> Does this sound reasonable setup?\n> 10x36GB FC drives on RAID-10\n> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the\n> correct ratio)?\n> 1 hotspare\n> Total=15 Drives per enclosure.\n>\n> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB\n> RAID cache, etc.\n>\n> Question - Are 73GB drives supposed to give better performance because\n> of higher number of platters?\n>\n> Thanks,\n> Anjan\n>\n>\n> -----Original Message-----\n> From: Fred Moyer [mailto:[email protected]]\n> Sent: Tuesday, March 02, 2004 5:57 AM\n> To: William Yu; Anjan Dave\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Scaling further up\n>\n> On Tue, 2004-03-02 at 17:42, William Yu wrote:\n> > Anjan Dave wrote:\n> > > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running\n> > > RH9,\n> > > PG 7.4.0. 
There's an internal U320, 10K RPM RAID-10 setup on 4\n>\n> drives.\n>\n> > > We are expecting a pretty high load, a few thousands of 'concurrent'\n> > > users executing either select, insert, update, statments.\n> >\n> > The quick and dirty method would be to upgrade to the recently\n> > announced\n> > 3GHz Xeon MPs with 4MB of L3. My semi-educated guess is that you'd get\n> >\n> > another +60% there due to the huge L3 hiding the Xeon's shared bus\n>\n> penalty.\n>\n> If you are going to have thousands of 'concurrent' users you should\n> seriously consider the 2.6 kernel if you are running Linux or as an\n> alternative going with FreeBSD. You will need to load test your system\n> and become an expert on tuning Postgres to get the absolute maximum\n> performance from each and every query you have.\n>\n> And you will need lots of hard drives. By lots I mean dozen(s) in a\n> raid 10 array with a good controller. Thousands of concurrent users\n> means hundreds or thousands of transactions per second. I've personally\n> seen it scale that far but in my opinion you will need a lot more hard\n> drives and ram than cpu.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 2 Mar 2004 16:16:55 -0500", "msg_from": "Chris Ruprecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Anjan,\n\n> Question - Are 73GB drives supposed to give better performance because\n> of higher number of platters?\n\nNot for your situation, no. Your issue is random seek times for multiple \nsimultaneous seek requests and batched checkpoint updates. Things that help \nwith this are:\nMore spindles\nBetter controllers, both RAID and individual disks\nFaster drives\n\nParticularly, I'd check out stuff like reports from Tom's Hardware for \nevaluating the real speed of drives and seek times. Often a really good \n10000 RPM SCSI will beat a 15000RPM SCSI if the latter has poor onboard \nprogramming.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 2 Mar 2004 13:26:19 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, Mar 02, 2004 at 02:16:24PM -0700, scott.marlowe wrote:\n> It's a common misconception that faster RPM drives are a lot faster,\n> when, in fact, their only speed advantage is slight faster seeks.\n> The areal density of faster spinning hard drives tends to be\n> somewhat less than the slower spinning drives, since the maximum\n> frequency the heads can work in on both drives, assuming the same\n> technology, is the same. I.e. 
the speed at which you can read data\n> off of the platter doesn't usually go up with a higher RPM drive,\n> only the speed with which you can get to the first sector.\n\nThis would imply that an upgrade in drive RPM should be accompanied by\na decrease in random_page_cost, correct?\n\nrandom_page_cost should be set with the following things taken into\naccount:\n - seek speed\n - likelihood of page to be cached in memory by the kernel\n - anything else?\n\n\nSorry, i realize this pulls the thread a bit off-topic, but i've heard\nthat about RPM speeds before, and i just want some confirmation that\nmy deductions are reasonable.\n\n-johnnnnnnnnnnn\n", "msg_date": "Tue, 2 Mar 2004 17:25:41 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "\nOn 02/03/2004 23:25 johnnnnnn wrote:\n> [snip]\n> random_page_cost should be set with the following things taken into\n> account:\n> - seek speed\n\nWhich is not exactly the same thing as spindle speed as it's a combination \nof spindle speed and track-to-track speed. I think you'll find that a 15K \nrpm disk, whilst it will probably have a lower seek time than a 10K rpm \ndisk, won't have a proportionately (i.e., 2/3rds) lower seek time.\n\n> - likelihood of page to be cached in memory by the kernel\n\nThat's effective cache size.\n\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Wed, 3 Mar 2004 08:57:18 +0000", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Wed, 3 Mar 2004, Paul Thomas wrote:\n\n> \n> On 02/03/2004 23:25 johnnnnnn wrote:\n> > [snip]\n> > random_page_cost should be set with the following things taken into\n> > account:\n> > - seek speed\n> \n> Which is not exactly the same thing as spindle speed as it's a combination \n> of spindle speed and track-to-track speed. I think you'll find that a 15K \n> rpm disk, whilst it will probably have a lower seek time than a 10K rpm \n> disk, won't have a proportionately (i.e., 2/3rds) lower seek time.\n\nThere are three factors that affect how fast you can get to the next \nsector:\n\nseek time\nsettle time\nrotational latency\n\nMost drives only list the first, and don't bother to mention the other \ntwo.\n\nOn many modern drives, the seek times are around 5 to 10 milliseconds.\nThe settle time varies as well. the longer the seek, the longer the \nsettle, generally. This is the time it takes for the head to stop shaking \nand rest quietly over a particular track.\nRotational Latency is the amount of time you have to wait, on average, for \nthe sector you want to come under the heads.\n\nAssuming an 8 ms seek, and 2 ms settle (typical numbers), and that the \nrotational latency on average is 1/2 of a rotation: At 10k rpm, a \nrotation takes 1/166.667 of a second, or 6 mS. So, a half a rotation is \napproximately 3 mS. By going to a 15k rpm drive, the latency drops to 2 \nmS. 
So, if we add them up, on the same basic drive, one being 10k and one \nbeing 15k, we get:\n\n10krpm: 8+2+3 = 13 mS\n15krpm: 8+2+2 = 12 mS\n\nSo, based on the decrease in rotational latency being the only advantage \nthe 15krpm drive has over the 10krpm drive, we get an decrease in access \ntime of only 1 mS, or only about an 8% decrease in actual seek time.\n\nSo, if you're random page cost on 10krpm drives was 1.7, you'd need to \ndrop it to 1.57 or so to reflect the speed increase from 15krpm drives.\n\nI.e. it's much more likely that going from 1 gig to 2 gigs of ram will \nmake a noticeable difference than going from 10k to 15k drives.\n\n", "msg_date": "Wed, 3 Mar 2004 11:23:11 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "John,\n\n> This would imply that an upgrade in drive RPM should be accompanied by\n> a decrease in random_page_cost, correct?\n\nMaybe. Maybe not. Tom's Hardware did some Bonnie++ testing with a variety \nof new drives last year. They were moderately surprised to find that there \nwere \"faster\" drives (i.e. higher RPM) which had lower real throughput due to \npoor onboard software and hardware, such as a small and slow onboard cache.\n\nSo, it would be reasonable to assume that a 10,000 RPM Barracuda could support \nmarginally lower random_page_cost than a 7,200 RPM Barracuda ... but that \ntells you nothing about a 10,000 RPM Maxtor Diamond (as an example).\n\nAlso, many other factors influence real random_page_cost; the size and access \npattern of your database is probably much more important than your RPM.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 3 Mar 2004 11:09:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On 03/03/2004 18:23 scott.marlowe wrote:\n> [snip]\n> There are three factors that affect how fast you can get to the next\n> sector:\n> \n> seek time\n> settle time\n> rotational latency\n> \n> Most drives only list the first, and don't bother to mention the other\n> two.\n\nAh yes, one of my (very) few still functioning brain cells was nagging \nabout another bit of time in the equation :)\n\n> On many modern drives, the seek times are around 5 to 10 milliseconds.\n> [snip]\n\nGoing back to the OPs posting about random_page_cost, imagine I have 2 \nservers identical in every way except the disk drive. Server A has a 10K \nrpm drive and server B has a 15K rpm drive. Seek/settle times aren't \nspectacularly different between the 2 drives. I'm wondering if drive B \nmight actually merit a _higher_ random_page_cost than drive A as, once it \ngets settled on a disk track, it can suck the data off a lot faster. 
\nopinions/experiences anyone?\n\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Thu, 4 Mar 2004 00:52:08 +0000", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Thu, 4 Mar 2004, Paul Thomas wrote:\n\n> On 03/03/2004 18:23 scott.marlowe wrote:\n> > [snip]\n> > There are three factors that affect how fast you can get to the next\n> > sector:\n> > \n> > seek time\n> > settle time\n> > rotational latency\n> > \n> > Most drives only list the first, and don't bother to mention the other\n> > two.\n> \n> Ah yes, one of my (very) few still functioning brain cells was nagging \n> about another bit of time in the equation :)\n> \n> > On many modern drives, the seek times are around 5 to 10 milliseconds.\n> > [snip]\n> \n> Going back to the OPs posting about random_page_cost, imagine I have 2 \n> servers identical in every way except the disk drive. Server A has a 10K \n> rpm drive and server B has a 15K rpm drive. Seek/settle times aren't \n> spectacularly different between the 2 drives. I'm wondering if drive B \n> might actually merit a _higher_ random_page_cost than drive A as, once it \n> gets settled on a disk track, it can suck the data off a lot faster. \n> opinions/experiences anyone?\n\nIt might well be that you have higher settle times that offset the small \ngain in rotational latency. I haven't looked into it, so I don't know one \nway or the other, but it seems a reasonable assumption.\n\nHowever, a common misconception is that the higher angular velocity of \nthe 15krpm drives would allow you to read data faster. In fact, the limit \nof how fast you can read is set by the head. There's a maximum frequency \nthat it can read, and the areal density / rpm have to be such that you \ndon't exceed that frequency. OFten, the speed at which you read off the \nplatters is exactly the same between a 10k and 15k of the same family. \n\nThe required lower areal density is the reason 15krpm drives show up in \nthe lower capacities first.\n\n", "msg_date": "Thu, 4 Mar 2004 10:12:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
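Following the arithmetic above, a 15K rpm drive of the same family only buys roughly an 8% reduction in access time, so any adjustment to random_page_cost should be similarly modest. A minimal way to sanity-check such a change on a live session is sketched below; the table name and both cost values are assumptions for illustration, not recommendations.

    SHOW random_page_cost;

    SET random_page_cost = 1.7;    -- assumed baseline for the 10K rpm drives
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;   -- hypothetical table

    SET random_page_cost = 1.57;   -- roughly 8% lower, per the estimate above
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

Comparing the two plans usually shows a far smaller effect than, say, doubling RAM, which is the point made earlier in the thread.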
[ { "msg_contents": "That was part of my original question - whether it makes sense to go for\na mid-range SunFire machine (64bit HW, 64bit OS), which is scalable to\nhigh amounts of memory, and shouldn't have any issues addressing it all.\nI've had that kind of setup once temporarily on a V480 (quad UltraSparc,\n16GB RAM) machine, and it did well in production use. Without having the\ntime/resources to do extensive testing, I am not sure if\nPostgres/Solaris9 is really suggested by the community for\nhigh-performance, as opposed to a XEON/Linux setup. Storage being a\nseparate discussion.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Chris Ruprecht [mailto:[email protected]] \nSent: Tuesday, March 02, 2004 4:17 PM\nTo: Anjan Dave; [email protected]; William Yu\nCc: [email protected]\nSubject: Re: [PERFORM] Scaling further up\n\n\nHi all,\n\nIf you have a DB of 'only' 13 GB and you do not expect it to grow much,\nit \nmight be advisable to have enough memory (RAM) to hold the entire DB in \nshared memory (everything is cached). If you have a server with say 24\nGB or \nmemory and can allocate 20 GB for cache, you don't care about the speed\nof \ndisks any more - all you worry about is the speed of your memory and\nyour \nnetwork connection.\nI believe, this not possible using 32-bit technology, you would have to\ngo to \nsome 64-bit platform, but if it's speed you want ...\nYou can also try solid state hard disk drives. These are actually just\nmeory, \nthere are no moving parts, but the look and behave like very very fast\ndisk \ndrives. I have seen them at capacities of 73 GB - but they didn't\nmention the \nprice (I'd probably have a heart attack when I look at the price tag).\n\nBest regards,\nChris\n\n\nOn Tuesday 02 March 2004 14:41, Anjan Dave wrote:\n> \"By lots I mean dozen(s) in a raid 10 array with a good controller.\"\n>\n> I believe, for RAID-10, I will need even number of drives. Currently, \n> the size of the database is about 13GB, and is not expected to grow \n> exponentially with thousands of concurrent users, so total space is \n> not of paramount importance compared to performance.\n>\n> Does this sound reasonable setup?\n> 10x36GB FC drives on RAID-10\n> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the \n> correct ratio)? 1 hotspare\n> Total=15 Drives per enclosure.\n>\n> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB\n\n> RAID cache, etc.\n>\n> Question - Are 73GB drives supposed to give better performance because\n\n> of higher number of platters?\n>\n> Thanks,\n> Anjan\n>\n>\n> -----Original Message-----\n> From: Fred Moyer [mailto:[email protected]]\n> Sent: Tuesday, March 02, 2004 5:57 AM\n> To: William Yu; Anjan Dave\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Scaling further up\n>\n> On Tue, 2004-03-02 at 17:42, William Yu wrote:\n> > Anjan Dave wrote:\n> > > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running\n\n> > > RH9, PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on \n> > > 4\n>\n> drives.\n>\n> > > We are expecting a pretty high load, a few thousands of \n> > > 'concurrent' users executing either select, insert, update, \n> > > statments.\n> >\n> > The quick and dirty method would be to upgrade to the recently \n> > announced 3GHz Xeon MPs with 4MB of L3. 
My semi-educated guess is \n> > that you'd get\n> >\n> > another +60% there due to the huge L3 hiding the Xeon's shared bus\n>\n> penalty.\n>\n> If you are going to have thousands of 'concurrent' users you should \n> seriously consider the 2.6 kernel if you are running Linux or as an \n> alternative going with FreeBSD. You will need to load test your \n> system and become an expert on tuning Postgres to get the absolute \n> maximum performance from each and every query you have.\n>\n> And you will need lots of hard drives. By lots I mean dozen(s) in a \n> raid 10 array with a good controller. Thousands of concurrent users \n> means hundreds or thousands of transactions per second. I've \n> personally seen it scale that far but in my opinion you will need a \n> lot more hard drives and ram than cpu.\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 2 Mar 2004 16:50:04 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, 2 Mar 2004, Anjan Dave wrote:\n\n> That was part of my original question - whether it makes sense to go for\n> a mid-range SunFire machine (64bit HW, 64bit OS), which is scalable to\n> high amounts of memory, and shouldn't have any issues addressing it all.\n> I've had that kind of setup once temporarily on a V480 (quad UltraSparc,\n> 16GB RAM) machine, and it did well in production use. Without having the\n> time/resources to do extensive testing, I am not sure if\n> Postgres/Solaris9 is really suggested by the community for\n> high-performance, as opposed to a XEON/Linux setup. Storage being a\n> separate discussion.\n\nSome folks on the list have experience with Postgresql on Solaris, and \nthey generally say they use Solaris not for performance reasons, but for \nreliability reasons. I.e. the bigger Sun hardware is fault tolerant.\n\nFor speed, the X86 32 and 64 bit architectures seem to be noticeable \nfaster than Sparc. However, running Linux or BSD on Sparc make them \npretty fast too, but you lose the fault tolerant support for things like \nhot swappable CPUs or memory.\n\n\n", "msg_date": "Tue, 2 Mar 2004 15:36:52 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "> For speed, the X86 32 and 64 bit architectures seem to be noticeable \n> faster than Sparc. However, running Linux or BSD on Sparc make them \n> pretty fast too, but you lose the fault tolerant support for things like \n> hot swappable CPUs or memory.\n\nAgreed.. You can get a Quad Opteron with 16GB memory for around 20K.\n\nGrab 3, a cheap SAN and setup a little master/slave replication with\nfailover (how is Slony coming?), and you're all set.\n\n\n", "msg_date": "Tue, 02 Mar 2004 17:56:32 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "\nOn Mar 2, 2004, at 5:36 PM, scott.marlowe wrote:\n\n> Some folks on the list have experience with Postgresql on Solaris, and\n> they generally say they use Solaris not for performance reasons, but \n> for\n> reliability reasons. I.e. the bigger Sun hardware is fault tolerant.\n>\nSolaris isn't nearly as bad for PG as it used to be.\n\nBut as you say - the #1 reason to use sun is reliability. 
(In my case, \nit was because we had a giant sun laying around :)\n\nI'm trying to remember exactly what happens.. but I know on sun if it \nhad a severe memory error it kills off processes with data on that dimm \n(Since it has no idea if it is bad or not. Thanks to ECC this is very \nrare, but it can happen.). I want to say if a CPU dies any processes \nrunning on it at that moment are also killed. but the more I think \nabout that th emore I don't think that is the case.\n\nAs for x86.. if ram or a cpu goes bad you're SOL.\n\nAlthough opterons are sexy you need to remember they really are brand \nnew cpus - I'm sure AMD has done tons of testing but sun ultrasparc's \nhave been in every situation conceivable in production. If you are \ngoing to really have thousands of users you probably want to bet the \nfarm on something proven.\n\nlots and lots of spindles\nlots and lots of ram\n\nYou may also want to look into a replication solution as a hot backup.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Tue, 2 Mar 2004 21:07:10 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, Mar 02, 2004 at 04:50:04PM -0500, Anjan Dave wrote:\n> time/resources to do extensive testing, I am not sure if\n> Postgres/Solaris9 is really suggested by the community for\n> high-performance, as opposed to a XEON/Linux setup. Storage being a\n> separate discussion.\n\nI can tell you from experience that performance on Solaris is nowhere\nclose to what you'd expect, given the coin you're forking over for\nit. I think the reason to use Solaris is its support for all the\nnifty hot-swappable hardware, and not for its speed or any putative\nbenefit you might get from having 64 bits at your disposal.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Wed, 3 Mar 2004 06:32:22 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "Here's what I recorded today from iostat (linux, iostat -x -k, sda3 is\nthe pg slice, logs included) during peak time on the RAID-10 array -\nWhat i see is mostly writes, and sometimes, quite a bit of writing,\nduring which the average wait times shoot up.\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\n/dev/sda3 18.81 113.21 3.90 36.33 181.54 1207.75 90.77 603.88\n34.54 0.49 0.73 0.22 0.87\n/dev/sda3 0.00 208.00 0.00 150.00 0.00 2884.00 0.00 1442.00\n19.23 0.75 0.50 0.33 5.00\n/dev/sda3 0.00 239.00 0.00 169.00 0.00 3264.00 0.00 1632.00\n19.31 2.15 1.27 0.33 5.50\n/dev/sda3 0.00 224.50 0.00 158.00 0.00 3060.00 0.00 1530.00\n19.37 1.90 1.20 0.28 4.50\n/dev/sda3 0.00 157.00 0.00 117.00 0.00 2192.00 0.00 1096.00\n18.74 0.40 0.34 0.30 3.50\n/dev/sda3 0.00 249.50 0.00 179.00 0.00 3596.00 0.00 1798.00\n20.09 21.40 10.78 0.39 7.00\n/dev/sda3 0.00 637.50 0.00 620.50 0.00 9936.00 0.00 4968.00\n16.01 1137.15 183.55 1.85 115.00\n/dev/sda3 0.00 690.00 0.00 548.50 0.00 9924.00 0.00 4962.00\n18.09 43.10 7.82 0.46 25.50\n/dev/sda3 0.00 485.00 0.00 392.00 0.00 7028.00 0.00 3514.00\n17.93 86.90 22.21 1.14 44.50\n/dev/sda3 0.00 312.50 0.00 206.50 0.00 4156.00 0.00 2078.00\n20.13 3.50 1.69 0.53 11.00\n/dev/sda3 0.00 386.50 0.00 275.50 0.00 5336.00 0.00 2668.00\n19.37 16.80 6.10 0.60 16.50\n/dev/sda3 0.00 259.00 0.00 176.50 0.00 3492.00 0.00 1746.00\n19.78 3.25 1.84 0.40 7.00\n/dev/sda3 0.00 196.00 0.00 99.00 0.00 2360.00 0.00 1180.00\n23.84 0.10 0.10 0.10 1.00\n/dev/sda3 0.00 147.00 0.00 100.00 0.00 1976.00 0.00 988.00\n19.76 0.50 0.50 0.45 4.50\n/dev/sda3 0.00 126.50 0.00 94.50 0.00 1768.00 0.00 884.00\n18.71 0.20 0.21 0.21 2.00\n/dev/sda3 0.00 133.50 0.00 106.50 0.00 1920.00 0.00 960.00\n18.03 0.50 0.47 0.47 5.00\n/dev/sda3 0.00 146.50 0.00 118.00 0.00 2116.00 0.00 1058.00\n17.93 0.20 0.21 0.17 2.00\n/dev/sda3 0.00 156.00 0.00 128.50 0.00 2276.00 0.00 1138.00\n17.71 0.35 0.27 0.27 3.50\n/dev/sda3 0.00 145.00 0.00 105.00 0.00 2000.00 0.00 1000.00\n19.05 0.25 0.24 0.24 2.50\n/dev/sda3 0.00 72.96 0.00 54.51 0.00 1019.74 0.00 509.87\n18.71 0.17 0.31 0.31 1.72\n/dev/sda3 0.00 168.50 0.00 139.50 0.00 2464.00 0.00 1232.00\n17.66 0.65 0.47 0.39 5.50\n/dev/sda3 0.00 130.50 0.00 100.00 0.00 1844.00 0.00 922.00\n18.44 0.00 0.00 0.00 0.00\n/dev/sda3 0.00 122.00 0.00 101.00 0.00 1784.00 0.00 892.00\n17.66 0.25 0.25 0.25 2.50\n/dev/sda3 0.00 143.00 0.00 121.50 0.00 2116.00 0.00 1058.00\n17.42 0.25 0.21 0.21 2.50\n/dev/sda3 0.00 134.50 0.00 96.50 0.00 1848.00 0.00 924.00\n19.15 0.35 0.36 0.36 3.50\n/dev/sda3 0.00 153.50 0.00 115.00 0.00 2148.00 0.00 1074.00\n18.68 0.35 0.30 0.30 3.50\n/dev/sda3 0.00 101.50 0.00 80.00 0.00 1452.00 0.00 726.00\n18.15 0.20 0.25 0.25 2.00\n/dev/sda3 0.00 108.50 0.00 92.50 0.00 1608.00 0.00 804.00\n17.38 0.25 0.27 0.27 2.50\n/dev/sda3 0.00 179.00 0.00 132.50 0.00 2492.00 0.00 1246.00\n18.81 0.55 0.42 0.42 5.50\n/dev/sda3 1.00 113.00 1.00 83.00 16.00 1568.00 8.00 784.00\n18.86 0.15 0.18 0.12 1.00\n/dev/sda3 0.00 117.00 0.00 97.50 0.00 1716.00 0.00 858.00\n17.60 0.20 0.21 0.21 2.00\n/dev/sda3 0.00 541.00 0.00 415.50 0.00 7696.00 0.00 3848.00\n18.52 146.50 35.09 1.37 57.00\n/dev/sda3 0.00 535.00 0.00 392.50 0.00 7404.00 0.00 3702.00\n18.86 123.70 31.67 1.31 51.50\n/dev/sda3 0.00 993.50 0.00 697.50 0.00 13544.00 0.00 6772.00\n19.42 174.25 24.98 1.25 87.00\n/dev/sda3 0.00 245.00 0.00 108.50 0.00 2832.00 0.00 1416.00\n26.10 0.55 0.51 0.51 5.50\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]] \nSent: 
Tuesday, March 02, 2004 4:16 PM\nTo: Anjan Dave\nCc: [email protected]; William Yu; [email protected]\nSubject: Re: [PERFORM] Scaling further up\n\n\nOn Tue, 2 Mar 2004, Anjan Dave wrote:\n\n> \"By lots I mean dozen(s) in a raid 10 array with a good controller.\"\n> \n> I believe, for RAID-10, I will need even number of drives.\n\nCorrect.\n\n> Currently,\n> the size of the database is about 13GB, and is not expected to grow \n> exponentially with thousands of concurrent users, so total space is \n> not of paramount importance compared to performance.\n> \n> Does this sound reasonable setup?\n> 10x36GB FC drives on RAID-10\n> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the \n> correct ratio)? 1 hotspare\n> Total=15 Drives per enclosure.\n\nPutting the Logs on RAID-10 is likely to be slower than, or no faster\nthan \nputting them on RAID-1, since the RAID-10 will have to write to 4\ndrives, \nwhile the RAID-1 will only have to write to two drives. now, if you\nwere \nreading in the logs a lot, it might help to have the RAID-10.\n\n> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB\n\n> RAID cache, etc.\n\nPick up a spare, I'll get you my home address, etc... :-)\n\nSeriously, that's huge. At that point you may well find that putting \nEVERYTHING on a big old RAID-5 performs best, since you've got lots of \ncaching / write buffering going on.\n\n> Question - Are 73GB drives supposed to give better performance because\n\n> of higher number of platters?\n\nGenerally, larger hard drives perform better than smaller hard drives \nbecause they a: have more heads and / or b: have a higher areal density.\n\nIt's a common misconception that faster RPM drives are a lot faster,\nwhen, \nin fact, their only speed advantage is slight faster seeks. The areal \ndensity of faster spinning hard drives tends to be somewhat less than\nthe \nslower spinning drives, since the maximum frequency the heads can work\nin \non both drives, assuming the same technology, is the same. I.e. the\nspeed \nat which you can read data off of the platter doesn't usually go up with\na \nhigher RPM drive, only the speed with which you can get to the first \nsector.\n\n", "msg_date": "Tue, 2 Mar 2004 17:03:32 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "Can you describe the vendors/components of a \"cheap SAN setup?\"\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Tuesday, March 02, 2004 5:57 PM\nTo: Scott Marlowe\nCc: Anjan Dave; Chris Ruprecht; [email protected]; William Yu;\nPostgresql Performance\nSubject: Re: [PERFORM] Scaling further up\n\n\n> For speed, the X86 32 and 64 bit architectures seem to be noticeable\n> faster than Sparc. However, running Linux or BSD on Sparc make them \n> pretty fast too, but you lose the fault tolerant support for things\nlike \n> hot swappable CPUs or memory.\n\nAgreed.. You can get a Quad Opteron with 16GB memory for around 20K.\n\nGrab 3, a cheap SAN and setup a little master/slave replication with\nfailover (how is Slony coming?), and you're all set.\n\n\n", "msg_date": "Tue, 2 Mar 2004 18:24:40 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, 2004-03-02 at 18:24, Anjan Dave wrote:\n> Can you describe the vendors/components of a \"cheap SAN setup?\"\n\nheh.. Excellent point.\n\nMy point was that you could get away with a smaller setup (number of\ndisks) if it doesn't have to deal with reads and writes are not time\ndependent than you will if you attempt to pull 500MB/sec off the disks.\n\nIf it is foreseeable that the database can be held in Ram, that it is\nmuch easier and cheaper way to get high IO than with physical disks.\n\n\n", "msg_date": "Tue, 02 Mar 2004 21:28:00 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "After an upgrade to 7.4.1 (from 7.3) we see a severe performance\nregression in bulk INSERTs.\n\nThis is apparently caused by constant checkpointing (every 10 to 20\nseconds). I've already increased the number of checkpoint segments to\n32, but currently, there are just 10 or 11 files in the pg_xlog\ndirectory. With 7.3, we had configured checkpoint_segements at 16, and\nthere were 33 pg_xlog files. Checkpoints happened every couple of\nminutes.\n\nHow can I reduce the checkpoint frequency?\n\n(I'd like to try that first because it's the most obvious anomaly.\nMaybe we can look at the involved table later.)\n\n-- \nCurrent mail filters: many dial-up/DSL/cable modem hosts, and the\nfollowing domains: atlas.cz, bigpond.com, freenet.de, hotmail.com,\nlibero.it, netscape.net, postino.it, tiscali.co.uk, tiscali.cz,\ntiscali.it, voila.fr, wanadoo.fr, yahoo.com.\n", "msg_date": "Wed, 3 Mar 2004 10:25:28 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": true, "msg_subject": "Bulk INSERT performance in 7.4.1" }, { "msg_contents": "Florian Weimer wrote:\n\n> After an upgrade to 7.4.1 (from 7.3) we see a severe performance\n> regression in bulk INSERTs.\n\nIn turns out that we were running the default configuration, and not the\ntuned one in /etc/postgresql. *blush*\n\nAfter increasing the number of checkpoint segments and the shared-memory\nbuffers, performance is back to the expected levels. It might even be a\nbit faster.\n\n-- \nCurrent mail filters: many dial-up/DSL/cable modem hosts, and the\nfollowing domains: atlas.cz, bigpond.com, freenet.de, hotmail.com,\nlibero.it, netscape.net, postino.it, tiscali.co.uk, tiscali.cz,\ntiscali.it, voila.fr, wanadoo.fr, yahoo.com.\n", "msg_date": "Wed, 3 Mar 2004 15:10:32 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bulk INSERT performance in 7.4.1" }, { "msg_contents": ">>>>> \"FW\" == Florian Weimer <[email protected]> writes:\n\nFW> After increasing the number of checkpoint segments and the shared-memory\nFW> buffers, performance is back to the expected levels. It might even be a\nFW> bit faster.\n\nIf you've got the time, could you try also doing the full bulk insert\ntest with the checkpoint log files on another physical disk? See if\nthat's any faster.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 03 Mar 2004 15:28:53 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk INSERT performance in 7.4.1" }, { "msg_contents": "Would turning autocommit off help?\n\n\nVivek Khera wrote:\n>>>>>>\"FW\" == Florian Weimer <[email protected]> writes:\n> \n> \n> FW> After increasing the number of checkpoint segments and the shared-memory\n> FW> buffers, performance is back to the expected levels. It might even be a\n> FW> bit faster.\n> \n> If you've got the time, could you try also doing the full bulk insert\n> test with the checkpoint log files on another physical disk? See if\n> that's any faster.\n> \n\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. 
Focus.\n\n\n", "msg_date": "Wed, 03 Mar 2004 16:37:48 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk INSERT performance in 7.4.1" }, { "msg_contents": "\nOn Mar 3, 2004, at 4:37 PM, Greg Spiegelberg wrote:\n\n> Would turning autocommit off help?\n>\n\ndoubtful, since the bulk insert is all one transaction.\n\n", "msg_date": "Wed, 3 Mar 2004 16:39:37 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bulk INSERT performance in 7.4.1" }, { "msg_contents": "Vivek Khera wrote:\n\n> If you've got the time, could you try also doing the full bulk insert\n> test with the checkpoint log files on another physical disk? See if\n> that's any faster.\n\nWe have been doing that for a few weeks, but the performance\nimprovements are less than what we expected. There is hardly any disk\nactivity on the log RAID, even during checkpointing.\n\nAfter I activated the tuned configuration, we are again mostly CPU-bound\n(it seems that updating all four indices is quite expensive). The\nbulk INSERT process runs single-threaded right now, and if we switched\nto multiple processes for that, we could reach some 1,500 INSERTs per\nsecond, I believe. This is more than sufficient for us; our real-time\ndata collector is tuned to emit about 150 records per second, on the\naverage. (There is an on-disk queue to compensate temporary problems,\nsuch as spikes in the data rate and database updates gone awry.)\n\n-- \nCurrent mail filters: many dial-up/DSL/cable modem hosts, and the\nfollowing domains: atlas.cz, bigpond.com, freenet.de, hotmail.com,\nlibero.it, netscape.net, postino.it, tiscali.co.uk, tiscali.cz,\ntiscali.it, voila.fr, wanadoo.fr, yahoo.com.\n", "msg_date": "Thu, 4 Mar 2004 11:06:08 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bulk INSERT performance in 7.4.1" } ]
[ { "msg_contents": "Given an index like this:\n\n CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n\nand a query like this:\n\n SELECT * FROM t1 WHERE c1 = 123;\n\nI'd like the planner to be smart enough to use an index scan using i1. Yes,\nI can change the query to this:\n\n SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;\n\nIn which case the index will be used, but I shouldn't have to. More\npractically, since a lot of my SQL is auto-generated, it's difficult to make\nthis query change just in the cases where I need it. And I'm loathe to\nchange every \"column = value\" pair in my auto-generated SQL into a double\npair of \"(column = value and column is not null)\" It's redundant and looks\npretty silly, IMO.\n\nThanks for you consideration :)\n\n-John\n\n", "msg_date": "Wed, 03 Mar 2004 14:56:06 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Feature request: smarter use of conditional indexes" }, { "msg_contents": "hi,\n\nJohn Siracusa wrote, On 3/3/2004 20:56:\n\n> Given an index like this:\n> \n> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n> \n> and a query like this:\n> \n> SELECT * FROM t1 WHERE c1 = 123;\n> \n> I'd like the planner to be smart enough to use an index scan using i1. Yes,\n> I can change the query to this:\n> \n> SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;\n> \n> In which case the index will be used, but I shouldn't have to. More\n> practically, since a lot of my SQL is auto-generated, it's difficult to make\n> this query change just in the cases where I need it. And I'm loathe to\n> change every \"column = value\" pair in my auto-generated SQL into a double\n> pair of \"(column = value and column is not null)\" It's redundant and looks\n> pretty silly, IMO.\n\nhow about: CREATE UNIQUE INDEX i1 ON t1 (c1);\nWHERE c1 IS NOT NULL in this case what is the point of doing this?\nYou do not need this condition.\n\nC.\n", "msg_date": "Thu, 04 Mar 2004 00:37:46 +0100", "msg_from": "CoL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes" }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> Given an index like this:\n> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n> and a query like this:\n> SELECT * FROM t1 WHERE c1 = 123;\n> I'd like the planner to be smart enough to use an index scan using i1.\n\nSend a patch ;-)\n\nThe routine you want to teach about this is pred_test_simple_clause() in\nsrc/backend/optimizer/path/indxpath.c. ISTM that it's legitimate to\nconclude that \"foo IS NOT NULL\" is implied by \"foo op anything\" or\n\"anything op foo\" if the operator is marked strict.\n\nNote: please patch against CVS head, as that code got rewritten since\n7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Mar 2004 18:53:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "\n>>Given an index like this:\n>> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n>>and a query like this:\n>> SELECT * FROM t1 WHERE c1 = 123;\n>>I'd like the planner to be smart enough to use an index scan using i1.\n> \n> \n> Send a patch ;-)\n> \n> The routine you want to teach about this is pred_test_simple_clause() in\n> src/backend/optimizer/path/indxpath.c. 
ISTM that it's legitimate to\n> conclude that \"foo IS NOT NULL\" is implied by \"foo op anything\" or\n> \"anything op foo\" if the operator is marked strict.\n\nI've actually mentioned this one before in that of all the partial \nindexes I have, almost all of then are a WHERE x IS NOT NULL format. I \ndon't know if that's a common use, but if it is, then maybe it's worth \njust adding the knowledge for IS NOT NULL...\n\nThe other thing is that at the moment, cascading foreign keys will not \nuse partial indexes even if they match the predicate. Maybe an IS NOT \nNULL hack will help there...\n\nChris\n\n", "msg_date": "Thu, 04 Mar 2004 09:31:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes" }, { "msg_contents": "On 3/3/04 6:53 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> Given an index like this:\n>> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n>> and a query like this:\n>> SELECT * FROM t1 WHERE c1 = 123;\n>> I'd like the planner to be smart enough to use an index scan using i1.\n> \n> Send a patch ;-)\n> \n> The routine you want to teach about this is pred_test_simple_clause() in\n> src/backend/optimizer/path/indxpath.c. ISTM that it's legitimate to\n> conclude that \"foo IS NOT NULL\" is implied by \"foo op anything\" or\n> \"anything op foo\" if the operator is marked strict.\n\nGack, C is not my forte...\n\nSo...I'm noodling around in pred_test_simple_clause() and my test query of:\n\n SELECT * FROM t1 WHERE c1 = 123;\n\nlands me in pred_test_simple_clause() with a \"predicate\" with a NodeTag of\nNullTest, and a \"clause\" with a NodeTag of OpExpr. The clause \"rightop\"\nIsA() Const. So far, it seems to make sense. It's comparing the clause \"c1\n= 123\" with the predicate on the \"i1\" index (\"IS NOT NULL\") to see if one\nimplies the other.\n\nBut now I'm stuck, because IsA(predicate, NullTest) is *also* true if the\nindex i1 is dropped and index i2 is created like this:\n\n CREATE UNIQUE INDEX i2 ON t1 (c1) WHERE c1 IS NOT NULL;\n\nIOW, both \"IS NOT NULL\" and \"IS NULL\" lead to IsA(predicate, NullTest) being\ntrue. I found this, which looked promising:\n\ntypedef enum BoolTestType\n{\n IS_TRUE, IS_NOT_TRUE, IS_FALSE, IS_NOT_FALSE, IS_UNKNOWN, IS_NOT_UNKNOWN\n} BoolTestType;\n\ntypedef struct BooleanTest\n{\n Expr xpr;\n Expr *arg; /* input expression */\n BoolTestType booltesttype; /* test type */\n} BooleanTest;\n\nBut then I realized that \"predicate\" is \"Expr *\" inside the\npred_test_simple_clause() function, and Expr seems only to have a single\nfield, which is tested by IsA()\n\ntypedef struct Expr\n{\n NodeTag type;\n} Expr;\n\nSo apparently all I can do is find out if it's a null test, but not if it is\nspecifically \"IS NOT NULL\"\n\nNow I'm stuck, and thinking that I'd have to modify more than\npred_test_simple_clause() to make this work. Any additional pointers? :)\n\n-John\n\n", "msg_date": "Sat, 06 Mar 2004 15:51:47 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "John Siracusa <[email protected]> writes:\n> So apparently all I can do is find out if it's a null test, but not if it is\n> specifically \"IS NOT NULL\"\n\nNo, because once you have determined that the node really IsA NullTest,\nyou can cast the pointer to (NullTest *) and look at the\nNullTest-specific fields. 
Think of this as poor man's object-oriented\nprogramming: Node is the supertype of Expr which is the supertype of\nNullTest (and a lot of other kinds of nodes, too).\n\nIt'd look something like\n\n\tif (IsA(predicate, NullTest) &&\n\t ((NullTest *) predicate)->nulltesttype == IS_NOT_NULL)\n\t{\n\t /* check to see if arg field matches either side of opclause,\n\t * and if so check whether operator is strict ...\n\t */\n\t}\n\nYou can find plenty of examples of this programming pattern throughout\nthe backend. In fact pred_test_simple_clause is doing exactly this\nto check that what it's given is an OpExpr and not some other node type.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Mar 2004 16:06:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "On 3/6/04 4:06 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> So apparently all I can do is find out if it's a null test, but not if it is\n>> specifically \"IS NOT NULL\"\n> \n> No, because once you have determined that the node really IsA NullTest,\n> you can cast the pointer to (NullTest *) and look at the\n> NullTest-specific fields.\n\nI tried casting, but stupidly tried to access the type name (BoolTestType)\ninstead of the field name (nulltesttype). Duh! :)\n\n> Think of this as poor man's object-oriented programming: Node is the supertype\n> of Expr which is the supertype of NullTest (and a lot of other kinds of nodes,\n> too).\n\nYeah, I read that in the comments but was defeated by my devious brain ;)\nThanks, I'll see how much farther I can go before getting stuck again :)\n\n-John\n\n", "msg_date": "Sat, 06 Mar 2004 16:55:33 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Feature request: smarter use of conditional indexes " } ]
[ { "msg_contents": "Hello all,\n\nI have a performance issue that I cannot seem to solve and am hoping that \nsomeone might be able to make some suggestions.\n\nFirst some background information. We are using PostgreSQL 7.3.4 on Linux \nwith kernel 2.4.19. The box is a single P4 2.4Ghz proc with 1G ram and uw \nscsi drives in a hardware raid setup.\n\nWe have a transactioninfo table with about 163k records. psql describes the \ntable as:\n\n\\d transactioninfo\n Table \"public.transactioninfo\"\n Column | Type | Modifiers\n---------------+--------------------------+--------------------------------------------------------\n transactionid | integer | not null default \nnextval('transaction_sequence'::text)\n userid | integer |\n programid | integer |\n time | timestamp with time zone |\n comment | text |\n undoable | boolean |\n del | boolean |\nIndexes: transactioninfo_pkey primary key btree (transactionid),\n delidx btree (del),\n transactioninfo_date btree (\"time\", programid, userid)\nTriggers: RI_ConstraintTrigger_6672989,\n RI_ConstraintTrigger_6672990,\n RI_ConstraintTrigger_6672992,\n--snip--\n--snip--\n RI_ConstraintTrigger_6673121,\n RI_ConstraintTrigger_6673122\n\nThere are about 67 inherited tables that inherit the fields from this table, \nhence the 134 constraint triggers. \n\nThere is a related table transactionlog which has a fk(foreign key) to \ntransactioninfo. It contains about 600k records.\n\nThere are 67 hist_tablename tables, each with a different structure. Then an \nadditional 67 tables called hist_tablename_log which inherit from the \ntransactionlog table and appropriate hist_tablename table. By the automagic \nof inheritance, since the transactionlog has a fk to transactioninfo, each of \nthe hist_tablename_log tables does as well (if I am reading the pg_trigger \ntable correctly).\n\nOnce a day we run a sql select statement to clear out all records in \ntransactioninfo that don't have a matching record in transactionlog. We \naccumulate between 5k-10k records a day that need clearing from \ntransactioninfo. That clear ran this morning for 5 hours and 45 minutes.\n\nToday I am working on streamlining the sql to try and get the delete down to a \nmanageable time frame. The original delete statement was quite inefficent. \nSo, far, I've found that it appears to be much faster to break the task into \ntwo pieces. The first is to update a flag on transactioninfo to mark empty \ntransactions and then a followup delete which clears based on that flag. The \nupdate takes about a minute or so.\n\nupdate only transactioninfo set del=TRUE where\n not exists (select transactionid from transactionlog l where \nl.transactionid=transactioninfo.transactionid);\nUPDATE 6911\nTime: 59763.26 ms\n\n Now if I delete a single transactioninfo record found by selecting del=true \nlimit 1 I get\n\nexplain analyze delete from only transactioninfo where transactionid=734607;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using transactioninfo_pkey on transactioninfo (cost=0.00..6.01 \nrows=1 width=6) (actual time=0.18..0.18 rows=1 loops=1)\n Index Cond: (transactionid = 734607)\n Total runtime: 0.41 msec\n(3 rows)\n\nTime: 855.08 ms\n\nWith the 7000 records to delete and a delete time of 0.855s, we are looking at \n1.5hrs to do the clear which is a great improvement from the 6 hours we have \nbeen seeing. 
But it still seems like it should run faster.\n\nThe actual clear statement used in the clear is as follows:\nexplain delete from transactioninfo where del=true;\n QUERY PLAN\n----------------------------------------------------------------------\n Seq Scan on transactioninfo (cost=0.00..6177.21 rows=78528 width=6)\n Filter: (del = true)\n(2 rows)\n \nAnother interesting observation is that the raid subsystem shows very low \nactivity during the clear. The backend process is almost entirely cpu bound.\n\nSome of the documentation implies that inherited tables cause deletes to be \nvery slow on the parent table, so I did the following experiment.\n\nvistashare=# create table transactioninfo_copy as select * from \ntransactioninfo;\nSELECT\nTime: 6876.88 ms\nvistashare=# create index transinfo_copy_del_idx on transactioninfo_copy(del);\nCREATE INDEX\nTime: 446.20 ms\nvistashare=# delete from transactioninfo_copy where del=true;\nDELETE 6904\nTime: 202.33 ms\n\nWhich certainly points to the triggers being the culprit. In reading the \ndocumentation, it seems like the \"delete from only...\" statement should \nignore the constraint triggers. But it seems quite obvious from the \nexperiments that it is not. Also, the fact that the query plan doesn't show \nthe actual time used when analyze is used seems to again point to the after \ndelete triggers as being the culprit.\n\nIs there any other way to make this faster then to drop and rebuild all the \nattached constraints? Is there a way to \"disable\" the constraints for a \nsingle statement. Because of the unique nature of the data, we know that the \ninherited tables don't need to be inspected. The table structure has worked \nquite well up till now and we are hoping to not have to drop our foreign keys \nand inheritance if possible. Any ideas?\n\nThanks for your time,\n\n-Chris\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\n", "msg_date": "Wed, 3 Mar 2004 16:49:44 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Delete performance on delete from table with inherited tables" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> There are about 67 inherited tables that inherit the fields from this table, \n> hence the 134 constraint triggers. \n\nWhy \"hence\"? Inheritance doesn't create any FK relationships. You must\nhave done so. What are those FK constraints exactly?\n\n> Some of the documentation implies that inherited tables cause deletes to be \n> very slow on the parent table, so I did the following experiment.\n\nNo, but foreign keys linked from tables that don't have indexes can be\npretty slow.\n\n> it seems like the \"delete from only...\" statement should \n> ignore the constraint triggers.\n\nWhy would you expect that?\n\nIt appears to me that this table is the referenced table for a large\nnumber of foreign-key relationships, and thus when you delete a row from\nit, many other tables have to be checked to verify that they do not\ncontain entries matching that row. That's going to be relatively slow,\neven with indexes on the other tables. 
It's not very clear from your\ndescription what the FK relationships actually do in your database\nschema, but I would suggest looking at redesigning the schema so that\nyou do not need them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Mar 2004 19:07:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete performance on delete from table with inherited tables " }, { "msg_contents": "\nOn Wed, 3 Mar 2004, Chris Kratz wrote:\n\n> Which certainly points to the triggers being the culprit. In reading the\n> documentation, it seems like the \"delete from only...\" statement should\n> ignore the constraint triggers. But it seems quite obvious from the\n\nDelete from only merely means that children of the table being deleted\nwill not have their rows checked against any where conditions and removed\nfor that reason. It does not affect constraint triggers at all.\n\nGiven I'm guessing it's going to be running about 7000 * 67 queries to\ncheck the validity of the delete for 7000 rows each having 67 foreign\nkeys, I'm not sure there's much to do other than hack around the issue\nright now.\n\nIf you're a superuser, you could temporarily hack reltriggers on the\ntable's pg_class row to 0, run the delete and then set it back to the\ncorrect number. I'm guessing from your message that there's never any\nchance of a concurrent transaction putting in a matching row in a way that\nsomething is marked as deletable when it isn't?\n\n\n\n", "msg_date": "Tue, 9 Mar 2004 16:18:22 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete performance on delete from table with inherited" }, { "msg_contents": "Thanks Stephan and Tom for your responses. We have been busy, so I haven't \nhad time to do any further research on this till yesterday. I found that the \nlarge number of triggers on the parent or master table were foreign key \ntriggers for each table back to the child tables (update and delete on \nmaster, insert on child). The triggers have existed through several versions \nof postgres and as far as we can tell were automatically created using the \nreferences keyword at inception.\n\nYesterday I dropped all the current triggers on parent and children and ran a \nscript that did an alter table add foreign key constraint to each of the 67 \nchild tables with update cascade delete cascade. After this, the delete from \nthe parent where no records existed in the child tables was far more \nacceptable. Instead of taking hours to do the delete, the process ran for \nabout 5 minutes on my workstation. Removing all constraints entirely reduces \nthis time to a couple of seconds. I am currently evaluating if the foreign \nkey constraints are worth the performance penalty in this particular case.\n\nTo finish up, it appears that the foreign key implementation has changed since \nwhen these first tables were created in our database. Dropping the existing \ntriggers and re-adding the constraints on each table significantly improved \nperformance for us. I do not know enough of the internals to know why this \nhappened. But our experience seems to prove that the newer implementation of \nforeign keys is more efficient then previous versions. YMMV\n\nOne other item that was brought up was whether the child tables have the fk \ncolumn indexed, and the answer was yes. Each had a standard btree index on \nthe foreign key. Explain showed nothing as all the time was being spent in \nthe triggers. 
Time spent in triggers is not shown in the pg 7.3.4 version of \nexplain (nor would I necessarily expect it to).\n\nThanks for your time, expertise and responses.\n\n-Chris\n\nOn Tuesday 09 March 2004 7:18 pm, Stephan Szabo wrote:\n> On Wed, 3 Mar 2004, Chris Kratz wrote:\n> > Which certainly points to the triggers being the culprit. In reading the\n> > documentation, it seems like the \"delete from only...\" statement should\n> > ignore the constraint triggers. But it seems quite obvious from the\n>\n> Delete from only merely means that children of the table being deleted\n> will not have their rows checked against any where conditions and removed\n> for that reason. It does not affect constraint triggers at all.\n>\n> Given I'm guessing it's going to be running about 7000 * 67 queries to\n> check the validity of the delete for 7000 rows each having 67 foreign\n> keys, I'm not sure there's much to do other than hack around the issue\n> right now.\n>\n> If you're a superuser, you could temporarily hack reltriggers on the\n> table's pg_class row to 0, run the delete and then set it back to the\n> correct number. I'm guessing from your message that there's never any\n> chance of a concurrent transaction putting in a matching row in a way that\n> something is marked as deletable when it isn't?\n\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n", "msg_date": "Wed, 31 Mar 2004 10:42:48 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete performance on delete from table with inherited tables" } ]
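A rough sketch of the two workarounds raised in the thread above. The child table name (hist_example_log) and the exact constraint wording are placeholders, not taken from the original schema; the pg_class hack is superuser-only and, as noted, only safe when no concurrent transaction can insert rows referencing the ones being deleted.

-- Workaround 1 (Stephan Szabo's suggestion): temporarily zero the
-- trigger count on the referenced table, run the delete, then restore it.
BEGIN;
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'transactioninfo';
DELETE FROM ONLY transactioninfo WHERE del = true;
UPDATE pg_class
   SET reltriggers = (SELECT count(*) FROM pg_trigger
                       WHERE pg_trigger.tgrelid = pg_class.oid)
 WHERE relname = 'transactioninfo';
COMMIT;

-- Workaround 2 (what Chris reports doing above): drop the old-style FK
-- triggers and re-add each constraint with ALTER TABLE, for example:
ALTER TABLE hist_example_log
  ADD FOREIGN KEY (transactionid)
      REFERENCES transactioninfo (transactionid)
      ON UPDATE CASCADE ON DELETE CASCADE;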
[ { "msg_contents": "....\nI'd look at adding more disks first. Depending on what\ntype of query\nload you get, that box sounds like it will be very\nmuch I/O bound....\n\nGiven a a 13G database on a 12G system, with a low\ngrowth rate, it is likely that there is almost no I/O\nfor most activities. The exception is checkpointing.\nThe first thing I'd do is try to build a spreadsheet\nmodel of:\n- select frequency, and # logical and physical reads\ninvolved\n- insert/delete/update frequency, and # logical and\nphysical read and writes involved\n- commit frequency, etc.\n(start out with simplistic assumptions, and do it for\npeak load)\n- system events (checkpoints, vacuum)\n\nI assume that the only high I/O you will see will be\nfor logging. The RAID issue there is basically\nobviated by the sequential write nature of WAL. If\nthat is the case, EMC is not the most cost effective\nor obvious solution - since the value they provide is\nmostly manageability for disaster recovery. The goal\nin this case is to write at the application max speed,\nand with mimimal latency. Any responsible battery\nbacked up write through (mirrored) cached controller\ncan do that for you.\n\nOn the other hand, if your requests are not *all*\ntrivial, you are going to test the hardware and\nscheduling algorithms of OS and pg. Even if 0.1% of\n3,000 tps take a second - that ends up generating 3\nseconds of load.... Any, even slightly, slow\ntransactions will generate enormous queues which slow\ndown everything. \n\nIn most systems of this volume I've seen, the mix of\nactivities is constantly invalidating cache, making L2\ncaching less important. Memory to CPU bus speed is a\nlimiting factor, as well as raw CPU speed in\nprocessing the requests. Xeon is not a great\narchitecture for this because of FSB contention; I\nsuspect a 4-way will be completely FSB bottlenecked so\na more than 4 way would likely not change performance.\n\n\nI would try to get a simple model/benchmark going and\ntest against it. You should be talking to the big iron\nvendors for their take on your issues and get their\ncapacity benchmarks.\n\n__________________________________\nDo you Yahoo!?\nYahoo! Search - Find what you���re looking for faster\nhttp://search.yahoo.com\n", "msg_date": "Thu, 4 Mar 2004 05:57:39 -0800 (PST)", "msg_from": "Aaron W <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" } ]
[ { "msg_contents": "Great response, Thanks.\n\nRegarding 12GB memory and 13G db, and almost no I/O, one thing I don't\nunderstand is that even though the OS caches most of the memory and PG\ncan use it if it needs it, why would the system swap (not much, only\nduring peak times)? The SHMMAX is set to 512MB, shared_buffers is 150MB,\neffective cache size is 2GB, sort mem is 2MB, rest is default values. It\nalso happens that a large query (reporting type) can hold up the other\nqueries, and the load averages shoot up during peak times.\n\nRegarding a baseline - \n\n-We have docs and monitor for frequency of sql statements, most\nexpensive ones, etc. (IronEye)\n-I am monitoring disk reads/writes using iostat\n-How do I measure commit frequency, and system events like checkpoint?\n(vacuum is done nightly during less or no load)\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Aaron W [mailto:[email protected]] \nSent: Thursday, March 04, 2004 8:58 AM\nTo: [email protected]; Anjan Dave\nSubject: Re: Scaling further up\n\n\n....\nI'd look at adding more disks first. Depending on what\ntype of query\nload you get, that box sounds like it will be very\nmuch I/O bound....\n\nGiven a a 13G database on a 12G system, with a low\ngrowth rate, it is likely that there is almost no I/O\nfor most activities. The exception is checkpointing.\nThe first thing I'd do is try to build a spreadsheet\nmodel of:\n- select frequency, and # logical and physical reads\ninvolved\n- insert/delete/update frequency, and # logical and\nphysical read and writes involved\n- commit frequency, etc.\n(start out with simplistic assumptions, and do it for\npeak load)\n- system events (checkpoints, vacuum)\n\nI assume that the only high I/O you will see will be\nfor logging. The RAID issue there is basically\nobviated by the sequential write nature of WAL. If\nthat is the case, EMC is not the most cost effective\nor obvious solution - since the value they provide is\nmostly manageability for disaster recovery. The goal\nin this case is to write at the application max speed,\nand with mimimal latency. Any responsible battery\nbacked up write through (mirrored) cached controller\ncan do that for you.\n\nOn the other hand, if your requests are not *all*\ntrivial, you are going to test the hardware and\nscheduling algorithms of OS and pg. Even if 0.1% of\n3,000 tps take a second - that ends up generating 3\nseconds of load.... Any, even slightly, slow\ntransactions will generate enormous queues which slow\ndown everything. \n\nIn most systems of this volume I've seen, the mix of\nactivities is constantly invalidating cache, making L2\ncaching less important. Memory to CPU bus speed is a\nlimiting factor, as well as raw CPU speed in\nprocessing the requests. Xeon is not a great\narchitecture for this because of FSB contention; I\nsuspect a 4-way will be completely FSB bottlenecked so\na more than 4 way would likely not change performance.\n\n\nI would try to get a simple model/benchmark going and\ntest against it. You should be talking to the big iron\nvendors for their take on your issues and get their\ncapacity benchmarks.\n\n__________________________________\nDo you Yahoo!?\nYahoo! 
Search - Find what you're looking for faster\nhttp://search.yahoo.com\n", "msg_date": "Thu, 4 Mar 2004 10:57:03 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Anjan Dave wrote:\n> Great response, Thanks.\n> \n> Regarding 12GB memory and 13G db, and almost no I/O, one thing I don't\n> understand is that even though the OS caches most of the memory and PG\n> can use it if it needs it, why would the system swap (not much, only\n> during peak times)? The SHMMAX is set to 512MB, shared_buffers is 150MB,\n> effective cache size is 2GB, sort mem is 2MB, rest is default values. It\n> also happens that a large query (reporting type) can hold up the other\n> queries, and the load averages shoot up during peak times.\n\nIn regards to your system going to swap, the only item I see is sort_mem \nat 2MB. How many simultaneous transactions do you get? If you get \nhundreds or thousands like your first message stated, every select sort \nwould take up 2MB of memory regardless of whether it needed it or not. \nThat could cause your swap activity during peak traffic.\n\nThe only other item to bump up is the effective cache size -- I'd set it \nto 12GB.\n\n", "msg_date": "Mon, 08 Mar 2004 08:40:28 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Mon, 2004-03-08 at 11:40, William Yu wrote:\n> Anjan Dave wrote:\n> > Great response, Thanks.\n> > \n> > Regarding 12GB memory and 13G db, and almost no I/O, one thing I don't\n> > understand is that even though the OS caches most of the memory and PG\n> > can use it if it needs it, why would the system swap (not much, only\n> > during peak times)? The SHMMAX is set to 512MB, shared_buffers is 150MB,\n> > effective cache size is 2GB, sort mem is 2MB, rest is default values. It\n> > also happens that a large query (reporting type) can hold up the other\n> > queries, and the load averages shoot up during peak times.\n> \n> In regards to your system going to swap, the only item I see is sort_mem \n> at 2MB. How many simultaneous transactions do you get? If you get \n> hundreds or thousands like your first message stated, every select sort \n> would take up 2MB of memory regardless of whether it needed it or not. \n> That could cause your swap activity during peak traffic.\n> \n> The only other item to bump up is the effective cache size -- I'd set it \n> to 12GB.\n> \n\nWas surprised that no one corrected this bit of erroneous info (or at\nleast I didn't see it) so thought I would for completeness. a basic\nexplanation is that sort_mem controls how much memory a given query is\nallowed to use before spilling to disk, but it will not grab that much\nmemory if it doesn't need it. \n\nSee the docs for a more detailed explanation:\nhttp://www.postgresql.org/docs/7.4/interactive/runtime-config.html#RUNTIME-CONFIG-RESOURCE\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "12 Mar 2004 18:01:57 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
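For reference, the settings and measurements discussed in this thread translate to something like the following. The values are only the ones quoted above (7.3/7.4 units: sort_mem in kilobytes, effective_cache_size in 8 kB disk pages) and would normally live in postgresql.conf; this is a sketch, not a tuning recommendation.

-- Per-sort memory, 2 MB as in the original configuration; it is only
-- allocated when a sort actually needs it (see the correction above).
SET sort_mem = 2048;

-- Planner's idea of the OS cache; the 12 GB suggested above is roughly
-- 12 GB / 8 kB = 1572864 pages.
SET effective_cache_size = 1572864;

-- Commit frequency (asked about above) can be estimated by sampling the
-- stats collector, assuming it is enabled, and diffing two readings:
SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;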
[ { "msg_contents": "I'm guessing the answer to this is \"no\"\n\nIs there any performance advantage to using a fixed width row (with PG)?\n\nI've heard this theory a few times and I think it is based on older, \ndifferent databases and we have also some custom software here that \nuses fixed width rows to be able to hit row N in O(1), but that isn't \nwhat I'd call a real database.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Fri, 5 Mar 2004 08:35:05 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Fixed width rows faster?" }, { "msg_contents": "Jeff wrote:\n> I'm guessing the answer to this is \"no\"\n> \n> Is there any performance advantage to using a fixed width row (with PG)?\n\nNo. The user docs state that the performance is equal for char, varchar\nand text.\n\n> I've heard this theory a few times and I think it is based on older, \n> different databases\n\nMySQL used to have this issue (I don't know if it still does or not) to\nthe point that the docs once claimed that an index on a varchar was barely\nas fast as a char with no index at all.\n\n> and we have also some custom software here that uses \n> fixed width rows to be able to hit row N in O(1), but that isn't what \n> I'd call a real database.\n\nIsn't needed in modern versions of Postgres, but I don't know (historically)\nif it ever was or not.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 05 Mar 2004 09:07:34 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Fri, 5 Mar 2004, Jeff wrote:\n\n> Is there any performance advantage to using a fixed width row (with PG)?\n\nAs far as I know there is only a small win when you want to extract some\nfield from a tuple and with variable width fields you have to walk to the\ncorrect field. But this is a small performance problem unless you have\nvery many variable size columns in the table.\n\n> different databases and we have also some custom software here that \n> uses fixed width rows to be able to hit row N in O(1)\n\nThis can not happen in pg since there is no row N. Every transaction can \nhave a different view of the table, some rows are visible and some others \nare not. To find row N you have to walk from the start and inspect every \ntuple to see if it's visible to this transaction or not.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Fri, 5 Mar 2004 15:31:44 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "Jeff, Bill:\n\n> No. The user docs state that the performance is equal for char, varchar\n> and text.\n\nActually, CHAR is slightly *slower* than VARCHAR or TEXT for SELECTs in many \napplications. This is becuase of the field padding, and the frequent \nnecessity of casting CHAR::TEXT and back.\n\nFor INSERT and UPDATE, TEXT is the fastest becuase it's not checking a length \nconstraint (takes time) or padding the field out to the required CHAR length \n(even more time).\n\nFrankly, the only reason to use anything other than TEXT is compatibility with \nother databases and applications.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 5 Mar 2004 15:28:55 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" 
}, { "msg_contents": "Jeff:\n\n> As far as I know there is only a small win when you want to extract some\n> field from a tuple and with variable width fields you have to walk to the\n> correct field. But this is a small performance problem unless you have\n> very many variable size columns in the table.\n\nBTW, Dennis here is not talking about CHAR; CHAR is handled as a \nvariable-length field in Postgres. INTEGER is a fixed-width field.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 5 Mar 2004 15:37:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "> Frankly, the only reason to use anything other than TEXT is compatibility with \n> other databases and applications.\n\nYou don't consider a requirement that a field be no longer than a \ncertain length a reason not to use TEXT? \n--\nMike Nolan\n", "msg_date": "Fri, 5 Mar 2004 17:43:33 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Fri, 2004-03-05 at 18:43, Mike Nolan wrote:\n> > Frankly, the only reason to use anything other than TEXT is compatibility with \n> > other databases and applications.\n> \n> You don't consider a requirement that a field be no longer than a \n> certain length a reason not to use TEXT? \n\nActually, I don't. Good reason to have a check constraint on it though\n(hint, check constraints can be changed while column types cannot be, at\nthis moment).\n\n", "msg_date": "Fri, 05 Mar 2004 18:54:04 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": ">>You don't consider a requirement that a field be no longer than a \n>>certain length a reason not to use TEXT? \n\nCan't you just create a TEXT(255) field same as you can just create \nVARCHAR (with no length) field? I think they're basically synonyms for \neach other these days.\n\nChris\n", "msg_date": "Sat, 06 Mar 2004 12:20:29 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "> >>You don't consider a requirement that a field be no longer than a \n> >>certain length a reason not to use TEXT? \n> \n> Can't you just create a TEXT(255) field same as you can just create \n> VARCHAR (with no length) field? I think they're basically synonyms for \n> each other these days.\n\nI'll defer to the SQL standard gurus on this, as well as to the internals\nguys, but I suspect there is a difference between the standard itself \nand implementor details, such as how char, varchar, varchar2 and text \nare implemented. As long as things work as specified, I don't think \nthe standard cares much about what's happening behind the curtain.\n--\nMike Nolan\n", "msg_date": "Fri, 5 Mar 2004 22:32:14 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "Mike Nolan <[email protected]> writes:\n>> Frankly, the only reason to use anything other than TEXT is\n>> compatibility with other databases and applications.\n\n> You don't consider a requirement that a field be no longer than a \n> certain length a reason not to use TEXT? 
\n\nIf you have an actual business-logic requirement to restrict a field to\nno more than N characters, then by all means use varchar(N); that's\nwhat it's for. But I agree with what I think Josh meant: there is very\nseldom any non-broken reason to have a hard upper limit on string\nlengths. If you think you need varchar(N) you should stop and ask\nwhy exactly. If you cannot give a specific, coherent reason why the\nparticular value of N that you're using is the One True Length for the\nfield, then you really need to think twice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Mar 2004 00:42:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster? " }, { "msg_contents": "Mike Nolan <[email protected]> writes:\n>> Can't you just create a TEXT(255) field same as you can just create \n>> VARCHAR (with no length) field? I think they're basically synonyms for \n>> each other these days.\n\n> I'll defer to the SQL standard gurus on this, as well as to the internals\n> guys, but I suspect there is a difference between the standard itself \n> and implementor details, such as how char, varchar, varchar2 and text \n> are implemented. As long as things work as specified, I don't think \n> the standard cares much about what's happening behind the curtain.\n\nTEXT is not a standard datatype at all; that is, you will not find it\nin the standard, even though quite a few DBMSes have a datatype that\nthey call by that name.\n\nPostgres' interpretation of TEXT is that there is no length-limitation\noption. I don't know what other DBMSes do with their versions of TEXT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Mar 2004 00:53:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster? " }, { "msg_contents": "> If you have an actual business-logic requirement to restrict a field to\n> no more than N characters, then by all means use varchar(N); that's\n> what it's for. But I agree with what I think Josh meant: there is very\n> seldom any non-broken reason to have a hard upper limit on string\n> lengths. If you think you need varchar(N) you should stop and ask\n> why exactly. If you cannot give a specific, coherent reason why the\n> particular value of N that you're using is the One True Length for the\n> field, then you really need to think twice.\n\nOne nice reason to have like VARCHAR(4096) or whatever is that if there \nis a bug in your website and you forget to length check some user input, \nit stops them from screwing you by uploading megs and megs of data into \na 'firstname' field, say.\n\nChris\n\n", "msg_date": "Sat, 06 Mar 2004 18:01:18 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "> Frankly, the only reason to use anything other than TEXT is\n> compatibility with other databases and applications.\n\nThe main reason why I am still using VARCHAR rather than TEXT in many\nplaces is to ensure that the column can be indexed. Postgres, it seems,\nrefuses to insert a string that is longer than some value into an\nindexed column, and I'll rather have such errors flagged while inserting\na row rather than while rebuilding an index after having inserted lots\nof rows.\n\n", "msg_date": "Sat, 6 Mar 2004 14:17:35 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" 
}, { "msg_contents": "On Sat, Mar 06, 2004 at 02:17:35PM +0100, Eric Jain wrote:\n> places is to ensure that the column can be indexed. Postgres, it seems,\n> refuses to insert a string that is longer than some value into an\n> indexed column, and I'll rather have such errors flagged while inserting\n\nCare to provide some details of this? It sure sounds like a bug to\nme, if it's true. I've never run into anything like this, though.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Sat, 6 Mar 2004 11:26:58 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "\"Eric Jain\" <[email protected]> writes:\n> The main reason why I am still using VARCHAR rather than TEXT in many\n> places is to ensure that the column can be indexed. Postgres, it seems,\n> refuses to insert a string that is longer than some value into an\n> indexed column, and I'll rather have such errors flagged while inserting\n> a row rather than while rebuilding an index after having inserted lots\n> of rows.\n\nThis is bogus reasoning. The limit on index entry length will not\nchange when you rebuild the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Mar 2004 15:52:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster? " }, { "msg_contents": "> Actually, I don't. Good reason to have a check constraint on it though\n> (hint, check constraints can be changed while column types cannot be, at\n> this moment).\n\nIs there a way to copy a table INCLUDING the check constraints? If not,\nthen that information is lost, unlike varchar(n).\n--\nMike Nolan\n\n", "msg_date": "Sat, 6 Mar 2004 19:16:31 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Sat, 2004-03-06 at 20:16, Mike Nolan wrote:\n> > Actually, I don't. Good reason to have a check constraint on it though\n> > (hint, check constraints can be changed while column types cannot be, at\n> > this moment).\n> \n> Is there a way to copy a table INCLUDING the check constraints? If not,\n> then that information is lost, unlike varchar(n).\n\nNo, not constraints.\n\n", "msg_date": "Sat, 06 Mar 2004 20:22:25 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "Mike Nolan wrote:\n> Is there a way to copy a table INCLUDING the check constraints? If not,\n> then that information is lost, unlike varchar(n).\n\n\"pg_dump -t\" should work fine, unless I'm misunderstanding you.\n\n-Neil\n", "msg_date": "Sat, 06 Mar 2004 20:32:08 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "> Mike Nolan wrote:\n> > Is there a way to copy a table INCLUDING the check constraints? If not,\n> > then that information is lost, unlike varchar(n).\n> \n> \"pg_dump -t\" should work fine, unless I'm misunderstanding you.\n\nI was specifically referring to doing it in SQL. \n\nThe COPY command goes from table to file or file to table, the \nCREATE TABLE ... SELECT loses the check constraints. 
\n\nIs there no SQL command that allows me to clone a table, including check\nconstraints?\n\nSomething like COPY TABLE xxx TO TABLE yyy WITH CHECK CONSTRAINTS.\n--\nMike Nolan\n", "msg_date": "Sat, 6 Mar 2004 20:26:30 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Sat, 2004-03-06 at 21:26, Mike Nolan wrote:\n> > Mike Nolan wrote:\n> > > Is there a way to copy a table INCLUDING the check constraints? If not,\n> > > then that information is lost, unlike varchar(n).\n> > \n> > \"pg_dump -t\" should work fine, unless I'm misunderstanding you.\n> \n> I was specifically referring to doing it in SQL. \n> \n> The COPY command goes from table to file or file to table, the \n> CREATE TABLE ... SELECT loses the check constraints. \n> \n> Is there no SQL command that allows me to clone a table, including check\n> constraints?\n\nThere is not in the spec or in PostgreSQL. Although, this may be a\nrelevant extension to the LIKE structure inheritance in 200N spec\n(partly implemented 7.4).\n\n", "msg_date": "Sat, 06 Mar 2004 22:01:22 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Sat, 6 Mar 2004, Andrew Sullivan wrote:\n\n> > places is to ensure that the column can be indexed. Postgres, it seems,\n> > refuses to insert a string that is longer than some value into an\n> > indexed column, and I'll rather have such errors flagged while inserting\n> \n> Care to provide some details of this? It sure sounds like a bug to\n> me, if it's true. I've never run into anything like this, though.\n\nThere is a limit of the size of values that can be indexed. I think it's\n8k or something (a block I assume). Something for someone with an itch to\nfix in the future.\n\nThe error however comes when you try to insert the value. Doing a reindex\nwill not change the length of the value and will always work.\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Sun, 7 Mar 2004 14:42:41 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "On Sun, Mar 07, 2004 at 02:42:41PM +0100, Dennis Bjorklund wrote:\n\n> The error however comes when you try to insert the value. Doing a reindex\n> will not change the length of the value and will always work.\n\nI didn't do a good job in my quoting, but this is what I meant. It'd\nsurely be a bug if you could get a value into an indexed field, but\ncouldn't later rebuild that index.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Sun, 7 Mar 2004 09:49:07 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster?" }, { "msg_contents": "> This is bogus reasoning. The limit on index entry length will not\n> change when you rebuild the index.\n\nWhat I meant by 'rebuilding' was not issuing a REINDEX command, but\ncreating a new index after having dropped the index and inserted\nwhatever records. 
Building indexes can be slow, and I'd rather not have\nthe operation fail after several hours because record #98556761 is\ndeemed to be too long for indexing...\n\nWhile we are busy complaining, it's a pity Postgres doesn't allow us to\ndisable and later recreate all indexes on a table using a single command\n;-)\n\n", "msg_date": "Mon, 8 Mar 2004 10:39:59 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixed width rows faster? " } ]
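Pulling the thread's suggestions together, a minimal sketch of the TEXT-plus-CHECK approach looks like this; the table, column and constraint names are made up and the length limits are arbitrary examples.

CREATE TABLE people (
    firstname text CONSTRAINT firstname_len
        CHECK (char_length(firstname) <= 4096)
);

-- Unlike a varchar(N) column type, the limit can be changed later
-- without touching the column definition:
ALTER TABLE people DROP CONSTRAINT firstname_len;
ALTER TABLE people ADD CONSTRAINT firstname_len
    CHECK (char_length(firstname) <= 8192);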
[ { "msg_contents": "On 3/3/04 6:53 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> Given an index like this:\n>> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n>> and a query like this:\n>> SELECT * FROM t1 WHERE c1 = 123;\n>> I'd like the planner to be smart enough to use an index scan using \n>> i1.\n>\n> Send a patch ;-)\n\nHow does this look? It seems to do what I want without horribly \nbreaking anything as far as I can tell. I ran \"make check\" and got the \nsame result as I did before my changes (5 failures in OS X 10.3.2). \nBut then, I also got the same result when I wasn't even checking to \nmake sure that both clauses were looking at the same variable :) I'm \nnot sure how to add a test for this particular change either.\n\n% cvs diff src/backend/optimizer/path/indxpath.c\nIndex: src/backend/optimizer/path/indxpath.c\n===================================================================\nRCS file: \n/projects/cvsroot/pgsql-server/src/backend/optimizer/path/indxpath.c,v\nretrieving revision 1.156\ndiff -r1.156 indxpath.c\n1032a1033,1055\n >\t{\n >\t\t/* One last chance: \"var = const\" or \"const = var\" implies \"var is \nnot null\" */\n >\t\tif (IsA(predicate, NullTest) &&\n >\t\t\t((NullTest *) predicate)->nulltesttype == IS_NOT_NULL &&\n >\t\t\tis_opclause(clause) && op_strict(((OpExpr *) clause)->opno) &&\n >\t\t\tlength(((OpExpr *) clause)->args) == 2)\n >\t\t{\n >\t\t\tleftop = get_leftop((Expr *) clause);\n >\t\t\trightop = get_rightop((Expr *) clause);\n >\n >\t\t\t/* One of the two arguments must be a constant */\n >\t\t\tif (IsA(rightop, Const))\n >\t\t\t\tclause_var = leftop;\n >\t\t\telse if (IsA(leftop, Const))\n >\t\t\t\tclause_var = rightop;\n >\t\t\telse\n >\t\t\t\treturn false;\n >\n >\t\t\t/* Finally, make sure \"var\" is the same var in both clauses */\n >\t\t\tif (equal(((NullTest *) predicate)->arg, clause_var))\n >\t\t\t\treturn true;\n >\t\t}\n >\n1033a1057\n >\t}\n\n", "msg_date": "Sat, 6 Mar 2004 21:29:27 -0500", "msg_from": "John Siracusa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "--On Saturday, March 06, 2004 21:29:27 -0500 John Siracusa \n<[email protected]> wrote:\n\n> On 3/3/04 6:53 PM, Tom Lane wrote:\n>> John Siracusa <[email protected]> writes:\n>>> Given an index like this:\n>>> CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;\n>>> and a query like this:\n>>> SELECT * FROM t1 WHERE c1 = 123;\n>>> I'd like the planner to be smart enough to use an index scan using\n>>> i1.\n>>\n>> Send a patch ;-)\nJust a suggestion, please use diff -c format, as it makes it easier for\nthe folks who apply the patches to do so.\n[snip]\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749", "msg_date": "Sat, 06 Mar 2004 20:39:38 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> Just a suggestion, please use diff -c format, as it makes it easier for\n> the folks who apply the patches to do so.\n\nThat's not just a suggestion ... patches that aren't in diff -c (or at\nleast diff -u) format will be rejected out of hand. 
Without the context\nlines provided by these formats, applying a patch is an exercise in\nrisk-taking, because you can't be certain that you are applying the same\npatch the submitter intended.\n\nPersonally I consider -c format the only one of the three that is\nreadable for reviewing purposes, so even if I weren't intending\nimmediate application, I'd ask for -c before looking at the patch.\nThere are some folks who consider -u format readable, but I'm not\none of them ...\n\nBTW, patches really ought to go to pgsql-patches ... they're a bit\noff-topic here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Mar 2004 22:46:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes " }, { "msg_contents": "Tom Lane wrote:\n> Larry Rosenman <[email protected]> writes:\n> > Just a suggestion, please use diff -c format, as it makes it easier for\n> > the folks who apply the patches to do so.\n> \n> That's not just a suggestion ... patches that aren't in diff -c (or at\n> least diff -u) format will be rejected out of hand. Without the context\n> lines provided by these formats, applying a patch is an exercise in\n> risk-taking, because you can't be certain that you are applying the same\n> patch the submitter intended.\n\nAlso, when you get 'fuzz' output when applying the patch, you should\nreview the patch to make sure it appeared in the right place. That has\ngotten me a few times.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 8 Mar 2004 12:55:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: smarter use of conditional indexes" }, { "msg_contents": "On Sunday 07 March 2004 09:16, Tom Lane wrote:\n> Personally I consider -c format the only one of the three that is\n> readable for reviewing purposes, so even if I weren't intending\n> immediate application, I'd ask for -c before looking at the patch.\n> There are some folks who consider -u format readable, but I'm not\n> one of them ...\n\nI was wondering what people use to keep track of their personal development \nespecially when they do not have a cvs commit access.\n\nI am toying with idea of using GNU arch for personal use. It encourages \nbranching, merging and having as many repository trees as possible. 
I haven't \ntried it in field as yet but if it delivers what it promises, it could be a \ngreat assistance.\n\nI know that there are not many postgresql branches like say linux kernel needs \nbut having a good tool does not hurt, isn't it..:-)\n\n Just a thought..\n\n Shridhar\n", "msg_date": "Tue, 9 Mar 2004 12:32:43 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "[OT] Respository [was Re: [PERFORM] Feature request: smarter use of\n\tconditional indexes]" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On Sunday 07 March 2004 09:16, Tom Lane wrote:\n> > Personally I consider -c format the only one of the three that is\n> > readable for reviewing purposes, so even if I weren't intending\n> > immediate application, I'd ask for -c before looking at the patch.\n> > There are some folks who consider -u format readable, but I'm not\n> > one of them ...\n> \n> I was wondering what people use to keep track of their personal development \n> especially when they do not have a cvs commit access.\n\nSee the developer's FAQ. They usually use cporig to make copies of\nfiles they are going to modify, then difforig to send the diffs to us,\nor they copy the entire source tree, modify it, and do a recursive diff\nthemselves.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 9 Mar 2004 12:46:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Respository [was Re: [PERFORM] Feature request: smarter" }, { "msg_contents": "Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n>>I was wondering what people use to keep track of their personal development \n>>especially when they do not have a cvs commit access.\n> \n> See the developer's FAQ. They usually use cporig to make copies of\n> files they are going to modify, then difforig to send the diffs to us,\n> or they copy the entire source tree, modify it, and do a recursive diff\n> themselves.\n\nI used to use cvsup to get a full copy of the repository, and then work \nlocally out of that (check out and diff only).\n\nJoe\n\n", "msg_date": "Tue, 09 Mar 2004 10:48:33 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Respository [was Re: [PERFORM] Feature request:" } ]
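On servers that do not yet have the patch discussed above, one workaround sketch (reusing the t1/c1 names from the original request) is to repeat the index predicate in the query, which should let the planner prove that the partial index applies:

CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;

-- The redundant IS NOT NULL clause matches the index predicate exactly:
SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;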
[ { "msg_contents": "Hi,\n\n we need to optimize / speed up a simple select:\n\nexplain analyze select\n((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))\nfrom job_property t0, job_property t1\nwhere t0.id_job_profile = 5\nand t1.id_job_profile = 6\nand t1.id_job_attribute = t0.id_job_attribute\nand t1.int_value < t0.int_value;\n\nthe result from explain analyze is:\n\nfirst run:\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------------\n Merge Join (cost=8314.36..8336.21 rows=258 width=8) (actual \ntime=226.544..226.890 rows=43 loops=1)\n Merge Cond: (\"outer\".id_job_attribute = \"inner\".id_job_attribute)\n Join Filter: (\"inner\".int_value < \"outer\".int_value)\n -> Sort (cost=4157.18..4159.75 rows=1026 width=8) (actual \ntime=113.781..113.826 rows=232 loops=1)\n Sort Key: t0.id_job_attribute\n -> Index Scan using job_property__id_job_profile__fk_index on \njob_property t0 (cost=0.00..4105.87 rows=1026 width=8) (actual \ntime=0.045..113.244 rows=232 loops=1)\n Index Cond: (id_job_profile = 5)\n -> Sort (cost=4157.18..4159.75 rows=1026 width=8) (actual \ntime=112.504..112.544 rows=254 loops=1)\n Sort Key: t1.id_job_attribute\n -> Index Scan using job_property__id_job_profile__fk_index on \njob_property t1 (cost=0.00..4105.87 rows=1026 width=8) (actual \ntime=0.067..112.090 rows=254 loops=1)\n Index Cond: (id_job_profile = 6)\n Total runtime: 227.120 ms\n(12 rows)\n\nsecond run:\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----------------------------\n Merge Join (cost=8314.36..8336.21 rows=258 width=8) (actual \ntime=4.323..4.686 rows=43 loops=1)\n Merge Cond: (\"outer\".id_job_attribute = \"inner\".id_job_attribute)\n Join Filter: (\"inner\".int_value < \"outer\".int_value)\n -> Sort (cost=4157.18..4159.75 rows=1026 width=8) (actual \ntime=2.666..2.700 rows=232 loops=1)\n Sort Key: t0.id_job_attribute\n -> Index Scan using job_property__id_job_profile__fk_index on \njob_property t0 (cost=0.00..4105.87 rows=1026 width=8) (actual \ntime=0.279..2.354 rows=232 loops=1)\n Index Cond: (id_job_profile = 5)\n -> Sort (cost=4157.18..4159.75 rows=1026 width=8) (actual \ntime=1.440..1.477 rows=254 loops=1)\n Sort Key: t1.id_job_attribute\n -> Index Scan using job_property__id_job_profile__fk_index on \njob_property t1 (cost=0.00..4105.87 rows=1026 width=8) (actual \ntime=0.040..1.133 rows=254 loops=1)\n Index Cond: (id_job_profile = 6)\n Total runtime: 4.892 ms\n(12 rows)\n\n\nI have run vacuum analyze before executing the statements. I wonder now \nif there is any chance to speed this up. Could we use a C function to \naccess the indexes faster or is there any other chance to speed this \nup?\n\nThe Server is a dual G5/2GHZ with 8 GB of RAM and a 3.5 TB fiberchannel \nRAID. The job_property table is about 1 GB large (checked with dbsize) \nand has about 6.800.000 rows.\n\n\nregards David\n\n", "msg_date": "Sun, 7 Mar 2004 12:42:22 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "speeding up a select with C function?" }, { "msg_contents": "> I have run vacuum analyze before executing the statements. I wonder now \n> if there is any chance to speed this up.\n\nIs this an active table for writes? You may want to take a look at\nCLUSTER. 
In some circumstances, it can take an order of magnitude off\nthe query time by allowing fewer pages to be retrieved from disk.\n\nOther than that, if you're willing to drop performance of all queries\nnot hitting the table to speed up this one, you can pin the index and\ntable into memory (cron job running a select periodically to ensure it\nsticks).\n\nShrink the actual data size (Drop the OID column, use a smallint instead\nof an integer, etc.)\n\n\nOne final option is to alter PostgreSQL into possibly doing a\npseudo-sequential scan on the table when reading indexes, rather than\npulling data from the table in a random order as it is found in the\nindex. This is a rather complex project, but doable.\nhttp://momjian.postgresql.org/cgi-bin/pgtodo?performance\n\n", "msg_date": "Sun, 07 Mar 2004 10:02:55 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up a select with C function?" }, { "msg_contents": "> explain analyze select\n> ((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))\n> from job_property t0, job_property t1\n> where t0.id_job_profile = 5\n> and t1.id_job_profile = 6\n> and t1.id_job_attribute = t0.id_job_attribute\n> and t1.int_value < t0.int_value;\n\nDon't bother with C function, use SQL function instead. You could get a \n50% speedup.\n\nChris\n", "msg_date": "Mon, 08 Mar 2004 09:29:19 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up a select with C function?" }, { "msg_contents": "On Sun, 7 Mar 2004, David Teran wrote:\n\n> we need to optimize / speed up a simple select:\n> \n> explain analyze select\n> ((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))\n> from job_property t0, job_property t1\n> where t0.id_job_profile = 5\n> and t1.id_job_profile = 6\n> and t1.id_job_attribute = t0.id_job_attribute\n> and t1.int_value < t0.int_value;\n\nTry to add an index on (id_job_profile, id_job_attribute) or maybe even \n(id_job_profile, id_job_attribute, int_value)\n\n-- \n/Dennis Björklund\n\n", "msg_date": "Mon, 8 Mar 2004 07:22:05 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up a select with C function?" }, { "msg_contents": "Hi,\n\n\nOn 08.03.2004, at 02:29, Christopher Kings-Lynne wrote:\n\n>> explain analyze select\n>> ((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))\n>> from job_property t0, job_property t1\n>> where t0.id_job_profile = 5\n>> and t1.id_job_profile = 6\n>> and t1.id_job_attribute = t0.id_job_attribute\n>> and t1.int_value < t0.int_value;\n>\n> Don't bother with C function, use SQL function instead. You could get \n> a 50% speedup.\n>\nIs this always the case when using SQL instead of the C API to get \nvalues or only the function 'call' itself? We are thinking to use C \nfunctions which are optimized for the G5 altivec unit.\n\nregards David\n\n", "msg_date": "Tue, 9 Mar 2004 10:54:10 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up a select with C function?" 
}, { "msg_contents": "Hi Dennis,\n\n>> we need to optimize / speed up a simple select:\n>>\n>> explain analyze select\n>> ((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))\n>> from job_property t0, job_property t1\n>> where t0.id_job_profile = 5\n>> and t1.id_job_profile = 6\n>> and t1.id_job_attribute = t0.id_job_attribute\n>> and t1.int_value < t0.int_value;\n>\n> Try to add an index on (id_job_profile, id_job_attribute) or maybe even\n> (id_job_profile, id_job_attribute, int_value)\n>\n\nTried this but the index is not used. I know the same problem was true \nwith a FrontBase database so i wonder how i can force that the index is \nused. As i was not sure in which order the query is executed i decided \nto create indexes for all variations:\n\nid_job_profile, id_job_attribute, int_value\nid_job_profile, int_value, id_job_attribute\nint_value, id_job_attribute, id_job_profile,\nint_value, id_job_profile, id_job_attribute\n....\n\n\nhere is the output:\n\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n---------------------------------------\n Merge Join (cost=5369.08..5383.14 rows=150 width=4) (actual \ntime=2.527..2.874 rows=43 loops=1)\n Merge Cond: (\"outer\".id_job_attribute = \"inner\".id_job_attribute)\n Join Filter: (\"inner\".int_value < \"outer\".int_value)\n -> Sort (cost=2684.54..2686.37 rows=734 width=6) (actual \ntime=1.140..1.177 rows=232 loops=1)\n Sort Key: t0.id_job_attribute\n -> Index Scan using \njob_property_short__id_job_profile__fk_index on job_property_short t0 \n(cost=0.00..2649.60 rows=734 width=6) (actual time=0.039..0.820 \nrows=232 loops=1)\n Index Cond: (id_job_profile = 5)\n -> Sort (cost=2684.54..2686.37 rows=734 width=6) (actual \ntime=1.175..1.223 rows=254 loops=1)\n Sort Key: t1.id_job_attribute\n -> Index Scan using \njob_property_short__id_job_profile__fk_index on job_property_short t1 \n(cost=0.00..2649.60 rows=734 width=6) (actual time=0.023..0.878 \nrows=254 loops=1)\n Index Cond: (id_job_profile = 6)\n Total runtime: 3.065 ms\n(12 rows)\n\n\n\n\nSo the question is how to tell Postgres to use the index.\n\nregards David\n\n", "msg_date": "Tue, 9 Mar 2004 13:02:41 +0100", "msg_from": "David Teran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up a select with C function?" }, { "msg_contents": ">> Don't bother with C function, use SQL function instead. You could get \n>> a 50% speedup.\n>>\n> Is this always the case when using SQL instead of the C API to get \n> values or only the function 'call' itself? We are thinking to use C \n> functions which are optimized for the G5 altivec unit.\n\nSQL functions are stored prepared, so there is less per-call query \nplanning overhead. I'm not sure there'd be much advantage to doing them \nin C...\n\nChris\n\n", "msg_date": "Tue, 09 Mar 2004 22:46:22 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up a select with C function?" 
}, { "msg_contents": "David Teran <[email protected]> writes:\n> Merge Join (cost=5369.08..5383.14 rows=150 width=4) (actual \n> time=2.527..2.874 rows=43 loops=1)\n> Merge Cond: (\"outer\".id_job_attribute = \"inner\".id_job_attribute)\n> Join Filter: (\"inner\".int_value < \"outer\".int_value)\n> -> Sort (cost=2684.54..2686.37 rows=734 width=6) (actual \n> time=1.140..1.177 rows=232 loops=1)\n> Sort Key: t0.id_job_attribute\n> -> Index Scan using \n> job_property_short__id_job_profile__fk_index on job_property_short t0 \n> (cost=0.00..2649.60 rows=734 width=6) (actual time=0.039..0.820 \n> rows=232 loops=1)\n> Index Cond: (id_job_profile = 5)\n> -> Sort (cost=2684.54..2686.37 rows=734 width=6) (actual \n> time=1.175..1.223 rows=254 loops=1)\n> Sort Key: t1.id_job_attribute\n> -> Index Scan using \n> job_property_short__id_job_profile__fk_index on job_property_short t1 \n> (cost=0.00..2649.60 rows=734 width=6) (actual time=0.023..0.878 \n> rows=254 loops=1)\n> Index Cond: (id_job_profile = 6)\n> Total runtime: 3.065 ms\n> (12 rows)\n\n> So the question is how to tell Postgres to use the index.\n\nEr, which part of that do you think is not using an index?\n\nMore generally, it is not necessarily the case that a join *should* use\nan index. I'm a bit surprised that the above bothers to sort; I'd\nexpect a hash join to be more appropriate. Have you tried experimenting\nwith enable_mergejoin and the other planner-testing settings?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Mar 2004 10:37:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up a select with C function? " } ]
[ { "msg_contents": "Hi,\n\nI've recently converted a database to use bigint for the indices. Suddenly\nsimple queries like\n\nselect * from new_test_result where parent_id = 2\n\nare doing full table scans instead of using the index. The table has over 4\nmillion rows, of which only 30 or so would be selected by the query.\n\nThe database table in question was fully vacuumed and clustered on the\nindex.\n\nI tried disabling seqscan, but it still did full table scan. After browsing\naround a bit, I had a hunch it might be failing to use the index because it\nis perhaps converting the parent_id to an integer, and I don't have a\nfunctional index on that (wouldn't seem correct either).\n\nI tested my hunch by casting the constant to bigint (as can be seen below)\nand suddenly the query is using the index again.\n\nWe are currently using pg 7.3.4. Is this intended behaviour? Should the\nconstant be cast to the type of the table column where possible, or should\nit be the other way around? If this is considered a bug, is it already\nfixed, in 7.3.6 or 7.4.x?\n\nKind Regards,\nSteve Butler\n\n\n\nsteve=# \\d new_test_result;\n Table \"public.new_test_result\"\n Column | Type | Modifiers\n-----------+---------+------------------------------------------------------\n-----------\n id | bigint | not null default\nnextval('public.new_test_result_id_seq'::text)\n parent_id | bigint |\n testcode | text |\n testtype | text |\n testdesc | text |\n pass | integer |\n latency | integer |\n bytessent | integer |\n bytesrecv | integer |\n defect | text |\nIndexes: test_result_parent_id_fk btree (parent_id)\nForeign Key constraints: $1 FOREIGN KEY (parent_id) REFERENCES\nnew_test_run(id) ON UPDATE NO ACTION ON DELETE CASCADE\n\nsteve=# explain select * from new_test_result where parent_id = 2;\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on new_test_result (cost=0.00..123370.57 rows=23 width=125)\n Filter: (parent_id = 2)\n(2 rows)\n\nsteve=# explain select * from new_test_result where parent_id = 2::bigint;\n QUERY PLAN\n----------------------------------------------------------------------------\n-----------------------\n Index Scan using test_result_parent_id_fk on new_test_result\n(cost=0.00..3.32 rows=23 width=125)\n Index Cond: (parent_id = 2::bigint)\n(2 rows)\n\n", "msg_date": "Mon, 8 Mar 2004 10:26:21 +1000", "msg_from": "\"Steven Butler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using bigint needs explicit cast to use the index" }, { "msg_contents": "Steven Butler wrote:\n> I've recently converted a database to use bigint for the indices. Suddenly\n> simple queries like\n> \n> select * from new_test_result where parent_id = 2\n> \n> are doing full table scans instead of using the index.\n\nThis is fixed in CVS HEAD. In the mean time, you can enclose the \ninteger literal in single quotes, or explicitely cast it to the type \nof the column.\n\nFWIW, this is an FAQ.\n\n-Neil\n", "msg_date": "Sun, 07 Mar 2004 19:43:58 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using bigint needs explicit cast to use the index" }, { "msg_contents": "On Mon, Mar 08, 2004 at 10:26:21AM +1000, Steven Butler wrote:\n> I tested my hunch by casting the constant to bigint (as can be seen below)\n> and suddenly the query is using the index again.\n\nYes. You can make this work all the time by quoting the constant. 
\nThat is, instead of\n\n\tWHERE indexcolumn = 123\n\ndo\n\n\tWHERE indexcolumn = '123'\n\n \n> We are currently using pg 7.3.4. Is this intended behaviour? Should the\n> constant be cast to the type of the table column where possible, or should\n\n\"Intended\", no. \"Expected\", yes. This topic has had the best\nPostgres minds work on it, and so far nobody's come up with a\nsolution. There was a proposal to put in a special-case automatic\nfix for int4/int8 in 7.4, but I don't know whether it made it in.\n\nA\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Mon, 8 Mar 2004 11:05:25 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using bigint needs explicit cast to use the index" }, { "msg_contents": "Andrew Sullivan wrote:\n> \"Intended\", no. \"Expected\", yes. This topic has had the best\n> Postgres minds work on it, and so far nobody's come up with a\n> solution.\n\nActually, this has already been fixed in CVS HEAD (as I mentioned in \nthis thread yesterday). To wit:\n\nnconway=# create table t1 (a int8);\nCREATE TABLE\nnconway=# create index t1_a_idx on t1 (a);\nCREATE INDEX\nnconway=# explain select * from t1 where a = 5;\n QUERY PLAN\n--------------------------------------------------------------------\n Index Scan using t1_a_idx on t1 (cost=0.00..17.07 rows=5 width=8)\n Index Cond: (a = 5)\n(2 rows)\nnconway=# select version();\n version\n------------------------------------------------------------------------------------\n PostgreSQL 7.5devel on i686-pc-linux-gnu, compiled by GCC gcc (GCC) \n3.3.3 (Debian)\n(1 row)\n\n-Neil\n", "msg_date": "Mon, 08 Mar 2004 11:22:56 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using bigint needs explicit cast to use the index" }, { "msg_contents": "On Mon, Mar 08, 2004 at 11:05:25 -0500,\n Andrew Sullivan <[email protected]> wrote:\n> \n> \"Intended\", no. \"Expected\", yes. This topic has had the best\n> Postgres minds work on it, and so far nobody's come up with a\n> solution. There was a proposal to put in a special-case automatic\n> fix for int4/int8 in 7.4, but I don't know whether it made it in.\n\nThis is handled better in 7.5. Instead of doing things deciding what\ntypes of type conversion to do, a check is make for cross type conversion\nfunctions that could be used for an index scan. This is a general solution\nthat doesn't result in unexpected type conversions.\n", "msg_date": "Mon, 8 Mar 2004 10:33:20 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using bigint needs explicit cast to use the index" }, { "msg_contents": "On Mon, Mar 08, 2004 at 11:22:56AM -0500, Neil Conway wrote:\n> Actually, this has already been fixed in CVS HEAD (as I mentioned in \n> this thread yesterday). To wit:\n\nYes, I saw that after I sent my mail. What can I say except, \"Yay! \nGood work!\"\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Mon, 8 Mar 2004 11:46:34 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using bigint needs explicit cast to use the index" } ]
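Applied to the original query, the two equivalent workarounds described above look like this on 7.3.x/7.4 (on releases with the fix mentioned earlier neither should be needed):

-- Quoted literal, or an explicit cast; both should give the index scan
-- on test_result_parent_id_fk shown earlier in the thread.
SELECT * FROM new_test_result WHERE parent_id = '2';
SELECT * FROM new_test_result WHERE parent_id = 2::bigint;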
[ { "msg_contents": "Hi all,\n \nI've got what should be a relatively simple join between two tables that\nis taking forever and I can't work out why.\n \nVersion 7.3.4RH.\n \nIt can't be upgraded because the system is kept in sync with RedHat\nEnterprise (using up2date). Not my system otherwise I'd do that :(\n \nDatabase has been 'vacuum analyze'd.\n \nblah=> \\d sq_asset;\n Table \"public.sq_asset\"\n Column | Type | Modifiers\n\n----------------+-----------------------------+-------------------------\n-\n type_code | character varying(100) | not null\n version | character varying(20) | not null default '0.0.0'\n name | character varying(255) | not null default ''\n short_name | character varying(255) | not null default ''\n status | integer | not null default 1\n languages | character varying(50) | not null default ''\n charset | character varying(50) | not null default ''\n force_secure | character(1) | not null default '0'\n created | timestamp without time zone | not null\n updated | timestamp without time zone | not null\n created_userid | character varying(255) | not null default '0'\n updated_userid | character varying(255) | not null default '0'\n assetid | integer | not null default 0\nIndexes: sq_asset_pkey primary key btree (assetid)\n\nblah=> select count(*) from sq_asset;\n count \n-------\n 16467\n(1 row)\n \n \nblah=> \\d sq_asset_permission;\n Table \"public.sq_asset_permission\"\n Column | Type | Modifiers \n------------+------------------------+----------------------\n permission | integer | not null default 0\n access | character(1) | not null default '0'\n assetid | character varying(255) | not null default '0'\n userid | character varying(255) | not null default '0'\nIndexes: sq_asset_permission_pkey primary key btree (assetid, userid,\npermission)\n \"sq_asset_permission_access\" btree (\"access\")\n \"sq_asset_permission_assetid\" btree (assetid)\n \"sq_asset_permission_permission\" btree (permission)\n \"sq_asset_permission_userid\" btree (userid)\n\nblah=> select count(*) from sq_asset_permission;\n count \n-------\n 73715\n(1 row)\n\n \nEXPLAIN ANALYZE SELECT p.*\nFROM sq_asset a, sq_asset_permission p\nWHERE a.assetid = p.assetid\nAND p.permission = '1'\nAND p.access = '1'\nAND p.userid = '0';\n QUERY PLAN\n------------------------------------------------------------------------\n--------------------------------------------------------\n Nested Loop (cost=0.00..4743553.10 rows=2582 width=27) (actual\ntime=237.91..759310.60 rows=11393 loops=1)\n Join Filter: ((\"inner\".assetid)::text = (\"outer\".assetid)::text)\n -> Seq Scan on sq_asset_permission p (cost=0.00..1852.01 rows=2288\nwidth=23) (actual time=0.06..196.90 rows=12873 loops=1)\n Filter: ((permission = 1) AND (\"access\" = '1'::bpchar) AND\n(userid = '0'::character varying))\n -> Seq Scan on sq_asset a (cost=0.00..1825.67 rows=16467 width=4)\n(actual time=1.40..29.09 rows=16467 loops=12873)\n Total runtime: 759331.85 msec\n(6 rows)\n\n \nIt's a straight join so I can't see why it would be this slow.. The\ntables are pretty small too.\n \nThanks for any suggestions :)\n \nChris.\n \n\n\n\nMessage\n\n\nHi \nall,\n \nI've got what \nshould be a relatively simple join between two tables that is taking forever and \nI can't work out why.\n \nVersion \n7.3.4RH.\n \nIt can't be \nupgraded because the system is kept in sync with RedHat Enterprise (using \nup2date). 
Not my system otherwise I'd do that :(\n \nDatabase has been \n'vacuum analyze'd.\n \nblah=> \\d \nsq_asset;                         \nTable \"public.sq_asset\"     \nColumn     \n|            \nType             \n|        \nModifiers         \n----------------+-----------------------------+-------------------------- type_code      \n| character varying(100)      | not \nnull version        | character \nvarying(20)       | not null default \n'0.0.0' name           \n| character varying(255)      | not null default \n'' short_name     | character \nvarying(255)      | not null default \n'' status         | \ninteger                     \n| not null default 1 languages      | \ncharacter varying(50)       | not null default \n'' charset        | character \nvarying(50)       | not null default \n'' force_secure   | \ncharacter(1)                \n| not null default \n'0' created        | timestamp \nwithout time zone | not \nnull updated        | timestamp \nwithout time zone | not null created_userid | character \nvarying(255)      | not null default \n'0' updated_userid | character \nvarying(255)      | not null default \n'0' assetid        | \ninteger                     \n| not null default 0Indexes: sq_asset_pkey primary key btree \n(assetid)\n\nblah=> select \ncount(*) from sq_asset; count ------- 16467(1 \nrow)\n \n \nblah=> \\d \nsq_asset_permission;             \nTable \"public.sq_asset_permission\"   Column   \n|          \nType          \n|      Modifiers       \n------------+------------------------+---------------------- permission \n| \ninteger                \n| not null default 0 access     | \ncharacter(1)           | not \nnull default '0' assetid    | character varying(255) | \nnot null default '0' userid     | character \nvarying(255) | not null default '0'Indexes: sq_asset_permission_pkey primary \nkey btree (assetid, userid, permission)    \n\"sq_asset_permission_access\" btree (\"access\")    \n\"sq_asset_permission_assetid\" btree (assetid)    \n\"sq_asset_permission_permission\" btree (permission)    \n\"sq_asset_permission_userid\" btree (userid)\nblah=> select \ncount(*) from sq_asset_permission; count \n------- 73715(1 row)\n \nEXPLAIN ANALYZE \nSELECT p.*FROM sq_asset a, sq_asset_permission pWHERE a.assetid = \np.assetidAND p.permission = '1'AND p.access = '1'AND p.userid = \n'0';                                                           \nQUERY \nPLAN-------------------------------------------------------------------------------------------------------------------------------- Nested \nLoop  (cost=0.00..4743553.10 rows=2582 width=27) (actual \ntime=237.91..759310.60 rows=11393 loops=1)   Join Filter: \n((\"inner\".assetid)::text = (\"outer\".assetid)::text)   ->  \nSeq Scan on sq_asset_permission p  (cost=0.00..1852.01 rows=2288 width=23) \n(actual time=0.06..196.90 rows=12873 \nloops=1)         Filter: \n((permission = 1) AND (\"access\" = '1'::bpchar) AND (userid = '0'::character \nvarying))   ->  Seq Scan on sq_asset a  \n(cost=0.00..1825.67 rows=16467 width=4) (actual time=1.40..29.09 rows=16467 \nloops=12873) Total runtime: 759331.85 msec(6 \nrows)\n \nIt's a straight \njoin so I can't see why it would be this slow.. 
The tables are pretty small \ntoo.\n \nThanks for any \nsuggestions :)\n \nChris.", "msg_date": "Mon, 8 Mar 2004 17:57:09 +1100", "msg_from": "\"Chris Smith\" <[email protected]>", "msg_from_op": true, "msg_subject": "simple query join" }, { "msg_contents": "MessageLooks to me like it's because your assetid is varchar in one table and an integer in the other table. AFAIK, PG is unable to use an index join when the join types are different. The query plan shows it is doing full table scans of both tables.\n\nChange both to varchar or both to integer and see what happens.\n\nAlso make sure to vacuum analyze the tables regularly to keep the query planner statistics up-to-date.\n\nCheers,\nSteve Butler\n assetid | integer | not null default 0\n Indexes: sq_asset_pkey primary key btree (assetid)\n\n assetid | character varying(255) | not null default '0'\n EXPLAIN ANALYZE SELECT p.*\n FROM sq_asset a, sq_asset_permission p\n WHERE a.assetid = p.assetid\n AND p.permission = '1'\n AND p.access = '1'\n AND p.userid = '0';\n QUERY PLAN\n --------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..4743553.10 rows=2582 width=27) (actual time=237.91..759310.60 rows=11393 loops=1)\n Join Filter: ((\"inner\".assetid)::text = (\"outer\".assetid)::text)\n -> Seq Scan on sq_asset_permission p (cost=0.00..1852.01 rows=2288 width=23) (actual time=0.06..196.90 rows=12873 loops=1)\n Filter: ((permission = 1) AND (\"access\" = '1'::bpchar) AND (userid = '0'::character varying))\n -> Seq Scan on sq_asset a (cost=0.00..1825.67 rows=16467 width=4) (actual time=1.40..29.09 rows=16467 loops=12873)\n Total runtime: 759331.85 msec\n (6 rows)\n\nMessage\n\n\n\n\n\nLooks to me like it's because your assetid is \nvarchar in one table and an integer in the other table.  AFAIK, PG is \nunable to use an index join when the join types are different.  
The query \nplan shows it is doing full table scans of both tables.\n \nChange both to varchar or both to integer and see \nwhat happens.\n \nAlso make sure to vacuum analyze the tables \nregularly to keep the query planner statistics up-to-date.\n \nCheers,\nSteve Butler\n\n assetid        | \n integer                     \n | not null default 0Indexes: sq_asset_pkey primary key btree \n (assetid)\n\n assetid    | character varying(255) | not null \n default '0'EXPLAIN ANALYZE SELECT p.*FROM sq_asset a, sq_asset_permission \n pWHERE a.assetid = p.assetidAND p.permission = '1'AND p.access = \n '1'AND p.userid = \n '0';                                                           \n QUERY \n PLAN-------------------------------------------------------------------------------------------------------------------------------- Nested \n Loop  (cost=0.00..4743553.10 rows=2582 width=27) (actual \n time=237.91..759310.60 rows=11393 loops=1)   Join Filter: \n ((\"inner\".assetid)::text = (\"outer\".assetid)::text)   \n ->  Seq Scan on sq_asset_permission p  (cost=0.00..1852.01 \n rows=2288 width=23) (actual time=0.06..196.90 rows=12873 \n loops=1)         Filter: \n ((permission = 1) AND (\"access\" = '1'::bpchar) AND (userid = '0'::character \n varying))   ->  Seq Scan on sq_asset a  \n (cost=0.00..1825.67 rows=16467 width=4) (actual time=1.40..29.09 rows=16467 \n loops=12873) Total runtime: 759331.85 msec(6 \n rows)", "msg_date": "Mon, 8 Mar 2004 17:11:56 +1000", "msg_from": "\"Steven Butler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query join" }, { "msg_contents": "On Mon, 8 Mar 2004, Chris Smith wrote:\n\n> assetid | integer | not null default 0\n\n> assetid | character varying(255) | not null default '0'\n\nThe types above does not match, and these are the attributes you use to \njoin.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Mon, 8 Mar 2004 08:47:51 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query join" }, { "msg_contents": "Eek. Casting both to varchar makes it super quick so I'll fix up the\ntables.\n \nAdded to the list of things to check for next time...\n \nOn a side note - I tried it with 7.4.1 on another box and it handled it\nok.\n \nThanks again :)\n \nChris.\n \n\n-----Original Message-----\nFrom: Steven Butler [mailto:[email protected]] \nSent: Monday, March 08, 2004 6:12 PM\nTo: Chris Smith; [email protected]\nSubject: Re: [PERFORM] simple query join\n\n\nLooks to me like it's because your assetid is varchar in one table and\nan integer in the other table. AFAIK, PG is unable to use an index join\nwhen the join types are different. 
The query plan shows it is doing\nfull table scans of both tables.\n \nChange both to varchar or both to integer and see what happens.\n \nAlso make sure to vacuum analyze the tables regularly to keep the query\nplanner statistics up-to-date.\n \nCheers,\nSteve Butler\n\n assetid | integer | not null default 0\nIndexes: sq_asset_pkey primary key btree (assetid)\n\n\n assetid | character varying(255) | not null default '0'\nEXPLAIN ANALYZE SELECT p.*\nFROM sq_asset a, sq_asset_permission p\nWHERE a.assetid = p.assetid\nAND p.permission = '1'\nAND p.access = '1'\nAND p.userid = '0';\n QUERY PLAN\n------------------------------------------------------------------------\n--------------------------------------------------------\n Nested Loop (cost=0.00..4743553.10 rows=2582 width=27) (actual\ntime=237.91..759310.60 rows=11393 loops=1)\n Join Filter: ((\"inner\".assetid)::text = (\"outer\".assetid)::text)\n -> Seq Scan on sq_asset_permission p (cost=0.00..1852.01 rows=2288\nwidth=23) (actual time=0.06..196.90 rows=12873 loops=1)\n Filter: ((permission = 1) AND (\"access\" = '1'::bpchar) AND\n(userid = '0'::character varying))\n -> Seq Scan on sq_asset a (cost=0.00..1825.67 rows=16467 width=4)\n(actual time=1.40..29.09 rows=16467 loops=12873)\n Total runtime: 759331.85 msec\n(6 rows)\n\n\n\n\n\nMessage\n\n\n\n\nEek. \nCasting both to varchar makes it super quick so I'll \nfix up the tables.\n \nAdded to the list of things to check for next \ntime...\n \nOn a \nside note - I tried it with 7.4.1 on another box and it handled it \nok.\n \nThanks again :)\n \nChris.\n \n\n\n-----Original Message-----From: Steven Butler \n [mailto:[email protected]] Sent: Monday, March 08, 2004 6:12 \n PMTo: Chris Smith; \n [email protected]: Re: [PERFORM] simple query \n join\nLooks to me like it's because your assetid is \n varchar in one table and an integer in the other table.  AFAIK, PG is \n unable to use an index join when the join types are different.  
The query \n plan shows it is doing full table scans of both tables.\n \nChange both to varchar or both to integer and see \n what happens.\n \nAlso make sure to vacuum analyze the tables \n regularly to keep the query planner statistics up-to-date.\n \nCheers,\nSteve Butler\n\n assetid        \n | \n integer                     \n | not null default 0Indexes: sq_asset_pkey primary key btree \n (assetid)\n\n assetid    | character varying(255) | not null \n default '0'EXPLAIN ANALYZE SELECT p.*FROM sq_asset a, \n sq_asset_permission pWHERE a.assetid = p.assetidAND p.permission = \n '1'AND p.access = '1'AND p.userid = \n '0';                                                           \n QUERY \n PLAN-------------------------------------------------------------------------------------------------------------------------------- Nested \n Loop  (cost=0.00..4743553.10 rows=2582 width=27) (actual \n time=237.91..759310.60 rows=11393 loops=1)   Join Filter: \n ((\"inner\".assetid)::text = (\"outer\".assetid)::text)   \n ->  Seq Scan on sq_asset_permission p  (cost=0.00..1852.01 \n rows=2288 width=23) (actual time=0.06..196.90 rows=12873 \n loops=1)         Filter: \n ((permission = 1) AND (\"access\" = '1'::bpchar) AND (userid = '0'::character \n varying))   ->  Seq Scan on sq_asset a  \n (cost=0.00..1825.67 rows=16467 width=4) (actual time=1.40..29.09 rows=16467 \n loops=12873) Total runtime: 759331.85 msec(6 \n rows)", "msg_date": "Tue, 9 Mar 2004 08:43:40 +1100", "msg_from": "\"Chris Smith\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simple query join" } ]
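A sketch of two ways to resolve the assetid type mismatch discussed above. The cast form is one way of writing what the poster reports trying (casting both sides to varchar); the ALTER TABLE form only exists from 8.0 onward, so on the poster's 7.3 server making the column types match permanently would mean rebuilding the column instead.

    -- quick fix: make both sides of the join comparable as the same type
    SELECT p.*
      FROM sq_asset a
      JOIN sq_asset_permission p ON a.assetid::varchar = p.assetid
     WHERE p.permission = '1'
       AND p.access = '1'
       AND p.userid = '0';

    -- permanent fix, available on 8.0 and later only (not on 7.3):
    -- ALTER TABLE sq_asset_permission ALTER COLUMN assetid TYPE integer USING assetid::integer;

The poster notes that 7.4.1 copes with the mismatch on its own, but keeping the join columns the same type avoids depending on planner behaviour that differs between releases.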
[ { "msg_contents": "I have a few questions about cluster and vacuum.\nWe have a table that is 56 GB in size and after a purge based on dates 16GB\nwas made available as reported below.\nPWFPM_DEV=# vacuum full verbose analyze forecastelement;\nINFO: vacuuming \"public.forecastelement\"\nINFO: \"forecastelement\": found 93351479 removable, 219177133 nonremovable\nrow versions in 6621806 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 156 to 192 bytes long.\nThere were 611201 unused item pointers.\nTotal free space (including removable row versions) is 16296891960 bytes.\n1974172 pages are or will become empty, including 26 at the end of the\ntable.\n1990268 pages containing 15794855436 free bytes are potential move\ndestinations.\nCPU 467.29s/48.52u sec elapsed 4121.69 sec.\n\nHow can you improve the performance of cluster?\n1. BY increasing sort_mem?\n2. Does increasing vacuum_mem help?\n3. Does checkpoint_segments improve it?\n\nDan\n", "msg_date": "Tue, 9 Mar 2004 15:17:59 -0500 ", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cluster and vacuum performance" }, { "msg_contents": "Dan,\n\n> INFO: vacuuming \"public.forecastelement\"\n> INFO: \"forecastelement\": found 93351479 removable, 219177133 nonremovable\n\nThe high number of nonremovable above probably indicates that you have a \ntransaction being held open which prevents VACUUM from being effective. \nLook for long-hung processes and/or transaction management errors in your \nclient code.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 9 Mar 2004 14:20:53 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster and vacuum performance" }, { "msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n> How can you improve the performance of cluster?\n> 1. BY increasing sort_mem?\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Mar 2004 17:57:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster and vacuum performance " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> INFO: vacuuming \"public.forecastelement\"\n>> INFO: \"forecastelement\": found 93351479 removable, 219177133 nonremovable\n\n> The high number of nonremovable above probably indicates that you have a \n> transaction being held open which prevents VACUUM from being effective. \n\nYou misread it --- \"nonremovable\" doesn't mean \"dead but not removable\",\nit just means \"not removable\". Actually the next line of his log showed\nthere were zero nonremovable dead tuples, so he's not got any\nopen-transaction problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Mar 2004 18:30:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster and vacuum performance " }, { "msg_contents": "Tom,\n\n> You misread it --- \"nonremovable\" doesn't mean \"dead but not removable\",\n> it just means \"not removable\". Actually the next line of his log showed\n> there were zero nonremovable dead tuples, so he's not got any\n> open-transaction problem.\n\nOoops. Sorry, Dan.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 10 Mar 2004 09:35:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster and vacuum performance" } ]
[ { "msg_contents": "I've been waiting all day for a pg_restore to finish on a test system\nidentically configured as our production in hardware and software\nwith the exception prod is 7.3.5 and test is 7.4.1.\n\nThe file it's restoring from is about 8GB uncompressed from a\n\"pg_dump -b -F t\" and after 2 hours the directory the database is in\ncontains only 1GB. iostat reported ~2000 blocks written every 2\nseconds to the DB file system.\n\nI turned syslog off to see if it was blocking anything and in the\npast couple minutes 1GB has been restored and iostat reports ~35,000\nblocks written every 2 seconds to the DB file system.\n\nThe system is completely idle except for this restore process. Could\nsyslog the culprit?\n\nI turned syslog back on and the restore slowed down again. Turned\nit off and it sped right back up.\n\nCan anyone confirm this for me?\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n", "msg_date": "Tue, 09 Mar 2004 15:29:43 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "syslog slowing the database?" }, { "msg_contents": "On Tuesday 09 March 2004 20:29, Greg Spiegelberg wrote:\n\n> iostat reported ~2000 blocks written every 2\n> seconds to the DB file system.\n>\n> I turned syslog off to see if it was blocking anything and in the\n> past couple minutes 1GB has been restored and iostat reports ~35,000\n> blocks written every 2 seconds to the DB file system.\n\n> Can anyone confirm this for me?\n\nIf syslog is set to sync after every line and you're logging too much then it \ncould slow things down as the disk heads shift back and fore between two \nareas of disk. How many disks do you have and in what configuration?\n\nAlso - was PG logging a lot of info, or is some other application the culprit?\n\nTip: put a minus \"-\" in front of the file-path in your syslog.conf and it \nwon't sync to disk after every entry.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 9 Mar 2004 21:34:35 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "Greg Spiegelberg <[email protected]> writes:\n> I turned syslog back on and the restore slowed down again. Turned\n> it off and it sped right back up.\n\nWe have heard reports before of syslog being quite slow. What platform\nare you on exactly? Does Richard's suggestion of turning off syslog's\nfsync help?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Mar 2004 19:16:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog slowing the database? " }, { "msg_contents": "Tom Lane wrote:\n> Greg Spiegelberg <[email protected]> writes:\n> \n>>I turned syslog back on and the restore slowed down again. Turned\n>>it off and it sped right back up.\n> \n> \n> We have heard reports before of syslog being quite slow. What platform\n> are you on exactly? Does Richard's suggestion of turning off syslog's\n> fsync help?\n> \n\nAnother tip is to use a better (well atleast more optimized) syslog \nimplementation, like metalog. 
It optimizes log writes to a blocksize \nthat is better for disk throughput.\nYou can also use \"per line\" mode with those if you want, i think.\n\nI use another logger that is called multilog (see at http://cr.yp.to), \nthat's a pipe logger thing, like one per postmaster.\nIt also gives very exact timestamps to every line, has built in log \nrotation and works nice with all programs i use it for.\n\nOne thing is for sure, if you log much, standard syslog (atleast on \nlinux) sucks big time.\nI gained back approx 30% CPU on a mailserver over here by changing to \nanother logger.\n\nCheers\nMagnus\n\n\n\n", "msg_date": "Wed, 10 Mar 2004 02:06:38 +0100", "msg_from": "\"Magnus Naeslund(t)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "Might want to look at metalog, it does delayed writes, though ultimately \nyour issue is io bound and there's not much you can do to reduce io if \nyou want to keep syslog logging your pgsql queries and such.\n\nTom Lane wrote:\n\n>Greg Spiegelberg <[email protected]> writes:\n> \n>\n>>I turned syslog back on and the restore slowed down again. Turned\n>>it off and it sped right back up.\n>> \n>>\n>\n>We have heard reports before of syslog being quite slow. What platform\n>are you on exactly? Does Richard's suggestion of turning off syslog's\n>fsync help?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n\n", "msg_date": "Tue, 09 Mar 2004 17:09:13 -0800", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "On Wed, 2004-03-10 at 12:09, Gavin M. Roy wrote:\n> Might want to look at metalog, it does delayed writes, though ultimately \n> your issue is io bound and there's not much you can do to reduce io if \n> you want to keep syslog logging your pgsql queries and such.\n\nYeah, but syslog with fsync() after each line is much, much worse than\nsyslog without it, assuming anything else is on the same disk (array).\nIt just guarantees to screw up your drive head movements...\n\n-- \nStephen Norris\t [email protected]\nFarrow Norris Pty Ltd\t+61 417 243 239\n\n", "msg_date": "Wed, 10 Mar 2004 12:32:07 +1100", "msg_from": "Stephen Robert Norris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "Tom Lane wrote:\n> Greg Spiegelberg <[email protected]> writes:\n> \n>>I turned syslog back on and the restore slowed down again. Turned\n>>it off and it sped right back up.\n> \n> We have heard reports before of syslog being quite slow. What platform\n> are you on exactly? Does Richard's suggestion of turning off syslog's\n> fsync help?\n\nRedHat 7.3 w/ 2.4.24 kernel on a dual Intel PIII 1.3Ghz, 2GB memory,\nU160 internal on integrated controller, 1Gbps SAN for database.\nDatabase file being restored and the actual database are on different\ndisk and controllers than syslog files.\n\nWith the ``-'' in front of the syslog file postgres logs too gives\nme roughly 75% of the I/O the performance as reported by iostat. So,\nit helps though turning syslog off gives the optimum performance.\n\nIf the log and database were on the same disk I'd be okay with the\ncurrent workaround. 
If the ``-'' gave me near the same performance as\nturning syslog off I'd be okay with that too. However, neither of these\nare the case so there has to be something else blocking between the two\nprocesses.\n\n<2 hours and multiple test later>\n\nI've found that hardware interrupts are the culprit. Given my system\nconfig both SCSI and fibre controllers were throttling the system with\nthe interrupts required to write the data (syslog & database) and read\nthe data from the restore. I'm okay with that.\n\nIn the order of worst to best.\n\n* There were, on average about 450 interrupts/sec with the default\n config of syslog on one disk, database on the SAN and syslog using\n fsync.\n\n* Turning fsync off in syslog puts interrupts around 105/sec and.\n\n* Having syslog fsync turned off in syslog AND moving the syslog file\n to a filesystem serviced by the same fibre controller put interrupts\n at around 92/sec. I decided to do this after watching the I/O on\n the SAN with syslog turned off and found that it had bandwidth to\n spare. FYI, the system when idle generated about 50 interrupts/sec.\n\n\nI'm going with the later for now on the test system and after running\nit through it's paces with all our processes I'll make the change in\nproduction. I'll post if I run into anything else.\n\nGreg\n\n\nBTW, I like what metalog has to offer but I prefer using as many of the\ndefault tools as possible and replacing them only when absolutely\nnecessary. What I've learned with syslog here is that it is still\nviable but likely requires a minor tweak. If this tweak fails in\ntesting I'll look at metalog then.\n\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n", "msg_date": "Wed, 10 Mar 2004 10:51:50 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: syslog slowing the database?" }, { "msg_contents": "Greg Spiegelberg <[email protected]> writes:\n> If the log and database were on the same disk I'd be okay with the\n> current workaround. If the ``-'' gave me near the same performance as\n> turning syslog off I'd be okay with that too. However, neither of these\n> are the case so there has to be something else blocking between the two\n> processes.\n\nYou could also consider not using syslog at all: let the postmaster\noutput to its stderr, and pipe that into a log-rotation program.\nI believe some people use Apache's log rotator for this with good\nresults.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Mar 2004 11:53:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog slowing the database? " }, { "msg_contents": "Tom Lane wrote:\n> Greg Spiegelberg <[email protected]> writes:\n> \n>>If the log and database were on the same disk I'd be okay with the\n>>current workaround. If the ``-'' gave me near the same performance as\n>>turning syslog off I'd be okay with that too. However, neither of these\n>>are the case so there has to be something else blocking between the two\n>>processes.\n> \n> \n> You could also consider not using syslog at all: let the postmaster\n> output to its stderr, and pipe that into a log-rotation program.\n> I believe some people use Apache's log rotator for this with good\n> results.\n\nI do this... 
here's the relevant lines from my startup script:\n\nROTATE=\"/inst/apache/bin/rotatelogs $PGLOGS/postgresql 86400\"\n$PGBIN/pg_ctl start -s -D $PGDATA | $ROTATE &\n\nFollowing is a patch to rotatelogs that does two things:\n\n- makes a symbolic link 'foo.current' that points to the\n current output file.\n\n- gzips the rotated logfile\n\nIf you have gnu tools installed, you can\n tail --retry --follow=name foo.current\nand it will automatically track the most recent\nlog file.\n\nHTH,\nMark\n\n-- \nMark Harrison\nPixar Animation Studios\n\n\n*** rotatelogs.c-orig\t2004-03-10 10:24:02.000000000 -0800\n--- rotatelogs.c\t2004-03-10 11:01:55.000000000 -0800\n***************\n*** 25,30 ****\n--- 25,32 ----\n int main (int argc, char **argv)\n {\n char buf[BUFSIZE], buf2[MAX_PATH], errbuf[ERRMSGSZ];\n+ char linkbuf[MAX_PATH];\n+ char oldbuf2[MAX_PATH];\n time_t tLogEnd = 0, tRotation;\n int nLogFD = -1, nLogFDprev = -1, nMessCount = 0, nRead, nWrite;\n int utc_offset = 0;\n***************\n*** 75,80 ****\n--- 77,84 ----\n setmode(0, O_BINARY);\n #endif\n\n+ sprintf(linkbuf, \"%s.current\", szLogRoot);\n+ sprintf(oldbuf2, \"\");\n use_strftime = (strstr(szLogRoot, \"%\") != NULL);\n for (;;) {\n nRead = read(0, buf, sizeof buf);\n***************\n*** 99,104 ****\n--- 103,111 ----\n sprintf(buf2, \"%s.%010d\", szLogRoot, (int) tLogStart);\n }\n tLogEnd = tLogStart + tRotation;\n+ printf(\"oldbuf2=%s\\n\",oldbuf2);\n+ printf(\"buf2=%s\\n\",buf2);\n+ printf(\"linkbuf=%s\\n\",linkbuf);\n nLogFD = open(buf2, O_WRONLY | O_CREAT | O_APPEND, 0666);\n if (nLogFD < 0) {\n /* Uh-oh. Failed to open the new log file. Try to clear\n***************\n*** 125,130 ****\n--- 132,146 ----\n }\n else {\n close(nLogFDprev);\n+ /* use: tail --follow=name foo.current */\n+ unlink(linkbuf);\n+ symlink(buf2,linkbuf);\n+ if (strlen(oldbuf2) > 0) {\n+ char cmd[MAX_PATH+100];\n+ sprintf(cmd, \"gzip %s &\", oldbuf2);\n+ system(cmd);\n+ }\n+ strcpy(oldbuf2, buf2);\n }\n nMessCount = 0;\n }\n\n", "msg_date": "Wed, 10 Mar 2004 11:08:50 -0800", "msg_from": "Mark Harrison <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog slowing the database?" }, { "msg_contents": ">>>>> \"GS\" == Greg Spiegelberg <[email protected]> writes:\n\nGS> I've been waiting all day for a pg_restore to finish on a test system\nGS> identically configured as our production in hardware and software\nGS> with the exception prod is 7.3.5 and test is 7.4.1.\n\nGS> The file it's restoring from is about 8GB uncompressed from a\nGS> \"pg_dump -b -F t\" and after 2 hours the directory the database is in\nGS> contains only 1GB. iostat reported ~2000 blocks written every 2\nGS> seconds to the DB file system.\n\nHave you considered increasing the value of checkpoint_segments to\nsomething like 50 or 100 during your restore? It made a *dramatic*\nimprovement on my system when I did the same migration.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 10 Mar 2004 16:00:36 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog slowing the database?" 
}, { "msg_contents": "> You could also consider not using syslog at all: let the postmaster\n> output to its stderr, and pipe that into a log-rotation program.\n> I believe some people use Apache's log rotator for this with good\n> results.\n\nNot an option I'm afraid. PostgreSQL just jams and stops logging after \nthe first rotation...\n\nI've read in the docs that syslog logging is the only \"production\" \nsolution...\n\nChris\n\n", "msg_date": "Thu, 11 Mar 2004 09:34:54 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "It might depend on how you're rotating it.\n\nTry the copy/truncate method instead of moving the log file. If you move\nthe log file to another filename you usually have to restart the app\ndoing the logging before it starts logging again.\n\nChris.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Christopher\nKings-Lynne\nSent: Thursday, March 11, 2004 12:35 PM\nTo: Tom Lane\nCc: Greg Spiegelberg; PgSQL Performance ML; Postgres Admin List\nSubject: Re: [PERFORM] [ADMIN] syslog slowing the database?\n\n\n> You could also consider not using syslog at all: let the postmaster \n> output to its stderr, and pipe that into a log-rotation program. I \n> believe some people use Apache's log rotator for this with good \n> results.\n\nNot an option I'm afraid. PostgreSQL just jams and stops logging after \nthe first rotation...\n\nI've read in the docs that syslog logging is the only \"production\" \nsolution...\n\nChris\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n", "msg_date": "Thu, 11 Mar 2004 12:37:38 +1100", "msg_from": "\"Chris Smith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "On Thu, Mar 11, 2004 at 09:34:54 +0800,\n Christopher Kings-Lynne <[email protected]> wrote:\n> >You could also consider not using syslog at all: let the postmaster\n> >output to its stderr, and pipe that into a log-rotation program.\n> >I believe some people use Apache's log rotator for this with good\n> >results.\n> \n> Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> the first rotation...\n> \n> I've read in the docs that syslog logging is the only \"production\" \n> solution...\n\nI use multilog to log postgres' output and it works fine.\n", "msg_date": "Wed, 10 Mar 2004 20:03:46 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> You could also consider not using syslog at all: let the postmaster\n>> output to its stderr, and pipe that into a log-rotation program.\n>> I believe some people use Apache's log rotator for this with good\n>> results.\n\n> Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> the first rotation...\n\nI know some people use this in production. Dunno what went wrong in\nyour test, but it can be made to work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Mar 2004 23:09:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database? 
" }, { "msg_contents": ">>Not an option I'm afraid. PostgreSQL just jams and stops logging after \n>>the first rotation...\n\nAre you using a copy truncate method to rotate the logs? In RedHat add\nthe keyword COPYTRUCATE to your /etc/logrotate.d/syslog file.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> \n> I know some people use this in production. Dunno what went wrong in\n> your test, but it can be made to work.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])", "msg_date": "Thu, 11 Mar 2004 06:55:06 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "On Thu, Mar 11, 2004 at 09:34:54AM +0800, Christopher Kings-Lynne wrote:\n> >You could also consider not using syslog at all: let the postmaster\n> >output to its stderr, and pipe that into a log-rotation program.\n> \n> Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> the first rotation...\n\nActually, this is what we do. Last year we offered an (admittedly\nexpensive) bespoke log rotator written in Perl for just this purpose. \nIt was rejected on the grounds that it didn't do anything Apache's\nrotator didn't do, so I didn't pursue it. I'm willing to put it up\non gborg, though, if anyone thinks it'll be worth having around. \nFWIW, we use ours in production.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Sun, 14 Mar 2004 11:00:18 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] syslog slowing the database?" }, { "msg_contents": "On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:\n\n> > You could also consider not using syslog at all: let the postmaster\n> > output to its stderr, and pipe that into a log-rotation program.\n> > I believe some people use Apache's log rotator for this with good\n> > results.\n> \n> Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> the first rotation...\n> \n> I've read in the docs that syslog logging is the only \"production\" \n> solution...\n\nCan you use the apache log rotator? It's known to work in my environment \n(redhat 7.2, postgresql 7.2 and 7.4) with this command to start it in my \nrc.local file:\n\nsu - postgres -c 'pg_ctl start | rotatelogs $PGDATA/pglog 86400 2>1&'\n\n", "msg_date": "Mon, 15 Mar 2004 08:17:49 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "scott.marlowe wrote:\n> On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:\n> \n> > > You could also consider not using syslog at all: let the postmaster\n> > > output to its stderr, and pipe that into a log-rotation program.\n> > > I believe some people use Apache's log rotator for this with good\n> > > results.\n> > \n> > Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> > the first rotation...\n> > \n> > I've read in the docs that syslog logging is the only \"production\" \n> > solution...\n> \n> Can you use the apache log rotator? 
It's known to work in my environment \n> (redhat 7.2, postgresql 7.2 and 7.4) with this command to start it in my \n> rc.local file:\n> \n> su - postgres -c 'pg_ctl start | rotatelogs $PGDATA/pglog 86400 2>1&'\n\nSure, our documentation specifically mentions using rotatelogs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 15 Mar 2004 10:38:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" }, { "msg_contents": "On Mon, 15 Mar 2004, Bruce Momjian wrote:\n\n> scott.marlowe wrote:\n> > On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:\n> > \n> > > > You could also consider not using syslog at all: let the postmaster\n> > > > output to its stderr, and pipe that into a log-rotation program.\n> > > > I believe some people use Apache's log rotator for this with good\n> > > > results.\n> > > \n> > > Not an option I'm afraid. PostgreSQL just jams and stops logging after \n> > > the first rotation...\n> > > \n> > > I've read in the docs that syslog logging is the only \"production\" \n> > > solution...\n> > \n> > Can you use the apache log rotator? It's known to work in my environment \n> > (redhat 7.2, postgresql 7.2 and 7.4) with this command to start it in my \n> > rc.local file:\n> > \n> > su - postgres -c 'pg_ctl start | rotatelogs $PGDATA/pglog 86400 2>1&'\n> \n> Sure, our documentation specifically mentions using rotatelogs.\n\n\nhehe. What I meant was can Christopher use it, or does he have a \nlimitation in his environment where he can't get ahold of the apache log \nrotater... :-) \n\n", "msg_date": "Mon, 15 Mar 2004 08:57:25 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] syslog slowing the database?" } ]
[ { "msg_contents": "\nGuys,\n\nI got a Java program to tune. It connects to a 7.4.1 postgresql server\nrunning Linux using JDBC.\n\nThe program needs to update a counter on a somewhat large number of\nrows, about 1200 on a ~130k rows table. The query is something like\nthe following:\n\nUPDATE table SET table.par = table.par + 1\nWHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n\nThis query runs on a transaction (by issuing a call to\nsetAutoCommit(false)) and a commit() right after the query\nis sent to the backend.\n\nThe process of committing and updating the values is painfully slow\n(no surprises here). Any ideas?\n\nThanks.\n\n\n", "msg_date": "Wed, 10 Mar 2004 00:35:15 -0300 (BRT)", "msg_from": "\"Marcus Andree S. Magalhaes\" <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing large query with IN (...)" }, { "msg_contents": "> UPDATE table SET table.par = table.par + 1\n> WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n\nHow fast is the query alone, i.e. \n\n SELECT * FROM table\n WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n\n", "msg_date": "Wed, 10 Mar 2004 09:13:37 +0100", "msg_from": "\"Eric Jain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": ">>UPDATE table SET table.par = table.par + 1\n>>WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n> \n> \n> How fast is the query alone, i.e. \n> \n> SELECT * FROM table\n> WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n\nAlso, post the output of '\\d table' and EXPLAIN ANALYZE UPDATE...\n\nChris\n\n", "msg_date": "Wed, 10 Mar 2004 16:38:39 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "On Wed, Mar 10, 2004 at 12:35:15AM -0300, Marcus Andree S. Magalhaes wrote:\n> Guys,\n> \n> I got a Java program to tune. It connects to a 7.4.1 postgresql server\n> running Linux using JDBC.\n> \n> The program needs to update a counter on a somewhat large number of\n> rows, about 1200 on a ~130k rows table. The query is something like\n> the following:\n> \n> UPDATE table SET table.par = table.par + 1\n> WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n> \n> This query runs on a transaction (by issuing a call to\n> setAutoCommit(false)) and a commit() right after the query\n> is sent to the backend.\n> \n> The process of committing and updating the values is painfully slow\n> (no surprises here). Any ideas?\n\nI posted an analysis of use of IN () like this a few weeks ago on\npgsql-general.\n\nThe approach you're using is optimal for < 3 values.\n\nFor any more than that, insert value1 ... value1200 into a temporary\ntable, then do\n\n UPDATE table SET table.par = table.par + 1\n WHERE table.key IN (SELECT value from temp_table);\n\nIndexing the temporary table marginally increases the speed, but not\nsignificantly.\n\nCheers,\n Steve\n\n\n", "msg_date": "Wed, 10 Mar 2004 06:42:54 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "On Wed, 10 Mar 2004, Marcus Andree S. Magalhaes wrote:\n\n> \n> Guys,\n> \n> I got a Java program to tune. It connects to a 7.4.1 postgresql server\n> running Linux using JDBC.\n> \n> The program needs to update a counter on a somewhat large number of\n> rows, about 1200 on a ~130k rows table. 
The query is something like\n> the following:\n> \n> UPDATE table SET table.par = table.par + 1\n> WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n> \n> This query runs on a transaction (by issuing a call to\n> setAutoCommit(false)) and a commit() right after the query\n> is sent to the backend.\n> \n> The process of committing and updating the values is painfully slow\n> (no surprises here). Any ideas?\n\nThe problem, as I understand it, is that 7.4 introduced massive \nimprovements in handling moderately large in() clauses, as long as they \ncan fit in sort_mem, and are provided by a subselect.\n\nSo, creating a temp table with all the values in it and using in() on the \ntemp table may be a win:\n\nbegin;\ncreate temp table t_ids(id int);\ninsert into t_ids(id) values (123); <- repeat a few hundred times\nselect * from maintable where id in (select id from t_ids);\n...\n\n\n", "msg_date": "Wed, 10 Mar 2004 09:47:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "\nHmm... from the 'performance' point of view, since the data comes from\na quite complex select statement, Isn't it better/quicker to have this\nselect replaced by a select into and creating a temporary database?\n\n\n\n> The problem, as I understand it, is that 7.4 introduced massive\n> improvements in handling moderately large in() clauses, as long as they\n> can fit in sort_mem, and are provided by a subselect.\n>\n> So, creating a temp table with all the values in it and using in() on\n> the temp table may be a win:\n>\n> begin;\n> create temp table t_ids(id int);\n> insert into t_ids(id) values (123); <- repeat a few hundred times\n> select * from maintable where id in (select id from t_ids);\n> ...\n\n\n\n", "msg_date": "Wed, 10 Mar 2004 14:02:23 -0300 (BRT)", "msg_from": "\"Marcus Andree S. Magalhaes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "On Wed, Mar 10, 2004 at 02:02:23PM -0300, Marcus Andree S. Magalhaes wrote:\n\n> Hmm... from the 'performance' point of view, since the data comes from\n> a quite complex select statement, Isn't it better/quicker to have this\n> select replaced by a select into and creating a temporary database?\n\nDefinitely - why loop the data into the application and back out again\nif you don't need to?\n\n> > The problem, as I understand it, is that 7.4 introduced massive\n> > improvements in handling moderately large in() clauses, as long as they\n> > can fit in sort_mem, and are provided by a subselect.\n> >\n> > So, creating a temp table with all the values in it and using in() on\n> > the temp table may be a win:\n> >\n> > begin;\n> > create temp table t_ids(id int);\n> > insert into t_ids(id) values (123); <- repeat a few hundred times\n> > select * from maintable where id in (select id from t_ids);\n> > ...\n\nCheers,\n Steve\n", "msg_date": "Wed, 10 Mar 2004 09:11:04 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "I'm not sure exactly what you're saying here. If the data in the in() \nclause comes from a complex select, then just use the select in there, and \nbypass the temporary table idea.\n\nI'm not sure what a temporary database is, did you mean temporary table? \nif so, then my above comment addresses that point.\n\nOn Wed, 10 Mar 2004, Marcus Andree S. 
Magalhaes wrote:\n\n> \n> Hmm... from the 'performance' point of view, since the data comes from\n> a quite complex select statement, Isn't it better/quicker to have this\n> select replaced by a select into and creating a temporary database?\n> \n> \n> \n> > The problem, as I understand it, is that 7.4 introduced massive\n> > improvements in handling moderately large in() clauses, as long as they\n> > can fit in sort_mem, and are provided by a subselect.\n> >\n> > So, creating a temp table with all the values in it and using in() on\n> > the temp table may be a win:\n> >\n> > begin;\n> > create temp table t_ids(id int);\n> > insert into t_ids(id) values (123); <- repeat a few hundred times\n> > select * from maintable where id in (select id from t_ids);\n> > ...\n> \n> \n> \n> \n\n", "msg_date": "Wed, 10 Mar 2004 10:51:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "Marcus Andree S. Magalhaes wrote:\n> Guys,\n> \n> I got a Java program to tune. It connects to a 7.4.1 postgresql server\n> running Linux using JDBC.\n> \n> The program needs to update a counter on a somewhat large number of\n> rows, about 1200 on a ~130k rows table. The query is something like\n> the following:\n> \n> UPDATE table SET table.par = table.par + 1\n> WHERE table.key IN ('value1', 'value2', ... , 'value1200' )\n> \n\nHow often do you update this counter? Each update requires adding a new \nrow to the table and invalidating the old one. Then the old ones stick \naround until the next vacuum.\n", "msg_date": "Wed, 10 Mar 2004 19:41:45 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" }, { "msg_contents": "Marcus,\n\n> The problem, as I understand it, is that 7.4 introduced massive\n> improvements in handling moderately large in() clauses, as long as they\n> can fit in sort_mem, and are provided by a subselect.\n\nAlso, this problem may be fixed in 7.5, when it comes out. It's a known \nissue.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 12 Mar 2004 09:39:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing large query with IN (...)" } ]
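A sketch of the temp-table rewrite suggested in the thread above. The names some_table, par and key stand in for the schematic ones in the original UPDATE, and the temp table name is an assumption; on 7.4 the IN (subselect) form is planned efficiently as long as the value list fits in sort_mem, as noted above.

    BEGIN;

    -- hypothetical holding table for the ~1200 key values
    CREATE TEMP TABLE t_keys (key varchar);

    -- load the keys; from JDBC a batched prepared INSERT (or COPY) is cheaper than one giant literal list
    INSERT INTO t_keys VALUES ('value1');
    INSERT INTO t_keys VALUES ('value2');
    -- ... repeat for the remaining values

    UPDATE some_table
       SET par = par + 1
     WHERE key IN (SELECT key FROM t_keys);

    COMMIT;

If the key list is itself produced by another query, the application round-trip can be skipped entirely by feeding that query straight into the IN (...) subselect, which is the point made at the end of the thread.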
[ { "msg_contents": "I've been running Postgresql 7.4.1 for a couple weeks after upgrading\nfrom 7.2. I noticed today that the postmaster had been using 99% of\nthe dual CPUs (on a PowerEdge 2650) non-stop for the last couple days.\nI stopped all the clients, and it didn't abate---even with no\nconnections---so I restarted the postmaster. Now everything is\nrunning smoothly again.\n\nIs there anything that might accumulate after two weeks that might\ncause postgresql to thrash? I'm running pg_autovacuum, so the\ndatabase itself should be nice and clean. It isn't connections,\nbecause I restarted the clients a few times without success. I've\nbeen running a long time on 7.2 with essentially the same\nconfiguration (minus pg_autovacuum) without any problems....\n\nThanks for any help,\n\n-Mike\n", "msg_date": "Wed, 10 Mar 2004 03:43:37 GMT", "msg_from": "Mike Bridge <[email protected]>", "msg_from_op": true, "msg_subject": "High CPU with 7.4.1 after running for about 2 weeks" }, { "msg_contents": "Mike Bridge <[email protected]> writes:\n> I've been running Postgresql 7.4.1 for a couple weeks after upgrading\n> from 7.2. I noticed today that the postmaster had been using 99% of\n> the dual CPUs (on a PowerEdge 2650) non-stop for the last couple days.\n> I stopped all the clients, and it didn't abate---even with no\n> connections---so I restarted the postmaster. Now everything is\n> running smoothly again.\n\nSince the postmaster is a single unthreaded process, it's quite\nimpossible for it to take up 100% of two CPUs. Could you be more\nprecise about which processes were eating CPU, and what they were\ndoing according to the available state data? (ps auxww and\npg_stat_activity can be helpful tools.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Mar 2004 18:17:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU with 7.4.1 after running for about 2 weeks " }, { "msg_contents": ">Since the postmaster is a single unthreaded process, it's quite\n>impossible for it to take up 100% of two CPUs. Could you be more\n>precise about which processes were eating CPU, and what they were\n>doing according to the available state data? (ps auxww and\n>pg_stat_activity can be helpful tools.)\n>\n>\t\t\tregards, tom lane\n\nI shut down all our clients (all java except one in perl), and\npg_stat_activity showed that there was still one query active. That's\na good table to know about! Anyway, it didn't end until I sent it a\nTERM signal. I assume this means there's a runaway query somewhere,\nwhich I'll have to hunt down.\n\nBut if the client dies, doesn't postgresql normally terminate the\nquery that that client initiated? Or do I need to set\nstatement_timeout?\n\n(As for the 100% CPU, I was confused by the fact that I was getting\ntwo lines in \"top\" (on Linux) with 99% utilization---I assume with two\nrunaway queries.)\n\nThanks for your help!\n\n-Mike\n\n\n\n", "msg_date": "Sun, 14 Mar 2004 03:10:18 GMT", "msg_from": "Mike Bridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU with 7.4.1 after running for about 2 weeks" }, { "msg_contents": "Mike Bridge <[email protected]> writes:\n> But if the client dies, doesn't postgresql normally terminate the\n> query that that client initiated? Or do I need to set\n> statement_timeout?\n\nThe backend generally won't notice that the connection is dead until\nit next tries to fetch a command from the client. 
So if the client\nlaunches a long-running query and then goes away, the query will\nnormally complete.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 2004 16:46:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU with 7.4.1 after running for about 2 weeks " } ]
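A hedged sketch of the two mechanisms touched on above: pg_stat_activity to spot a backend still running a query after its client has gone away, and statement_timeout as a cap so such queries eventually give up on their own. The timeout value and the role name are assumptions, and current_query only shows query text when stats_command_string is enabled in postgresql.conf.

    -- list busy backends; a runaway one can then be stopped from the shell with kill -TERM <procpid>
    SELECT procpid, usename, current_query FROM pg_stat_activity;

    -- per-session cap on query run time, in milliseconds (0 disables); 10 minutes is only an example
    SET statement_timeout = 600000;

    -- or make it the default for the application's login user (7.4 syntax; the user name is hypothetical)
    ALTER USER app_user SET statement_timeout = 600000;

As explained above, a timeout does not replace fixing the client: the backend only notices a dead connection when it next tries to read a command, so long queries launched by crashed clients will otherwise run to completion.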
[ { "msg_contents": "Hello there !!!\n\nI am using postgresql7.2.1 as the backend for an E.R.P system running\non Linux Redhat 7.2(Enigma)\nThe database size is around 20-25GB\nDropping of an individual table whose size is around 200Mb takes more\nthan 7 mins, and also increases the load on our System\nThe database is vacuumed/ reindexed on a daily basis.\n\nWe have recreated the same database on a Linux Redhat release 9 OS, and\nused PostgreSQL 7.3.2, the drop here is really fast.\n\nAny suggestions as to how I could improve the performance of drop on\npostgresql7.2.1.\n\n\nThanks\nmaneesha.\n\n\n", "msg_date": "Wed, 10 Mar 2004 12:33:01 +0530", "msg_from": "Maneesha Nunes <[email protected]>", "msg_from_op": true, "msg_subject": "Drop Tables Very Slow in Postgresql 7.2.1" }, { "msg_contents": "On Wed, Mar 10, 2004 at 12:33:01PM +0530, Maneesha Nunes wrote:\n> Hello there !!!\n> \n> I am using postgresql7.2.1 as the backend for an E.R.P system running\n> on Linux Redhat 7.2(Enigma)\n\nYou should upgrade, to at the _very least_ the last release of 7.2. \nThere were bugs in earlier releases fixed in later releases; that's\nwhy there's a 7.2.4. (I'll also point out that the 7.2 series is\nmissing plenty of performance enhancements which came later. I'd get\nto work on upgrading, because 7.2 is now basically unmaintained.)\n\nBut in any case, you likely have issues on your system tables. I'd\ndo a VACUUM FULL and a complete REINDEX of the system tables next.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Sun, 14 Mar 2004 10:52:27 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop Tables Very Slow in Postgresql 7.2.1" } ]
[ { "msg_contents": "I have had a cluster failure on a table. It most likely was due to space.\nI do not not have the error message anymore, but it was indicating that it\nwas most likely a space problem. The partition was filled to 99%. The\ntable is about 56 GB and what I believe to be the new table that it was\nwriting to looks to be 40 files of 1GB. \n\nThe problem is that it did not clean itself up properly. \n\nThe oids that I believe it was writing to are still there.\nThere are 56 files of 102724113.* and 40 files of 361716097.*.\nA vacuum had indicated that there was around 16 GB of free space.\nI can not find any reference to 361716097 in the pg_class table. \nAm I going to have to manually delete the 361716097.* files myself?\n\nDan.\n", "msg_date": "Wed, 10 Mar 2004 08:56:57 -0500", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cluster failure due to space" }, { "msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n> The problem is that it did not clean itself up properly. \n\nHm. It should have done so. What were the exact filenames and sizes of\nthe not-deleted files?\n\n> I can not find any reference to 361716097 in the pg_class table. \n\nYou are looking at pg_class.relfilenode, I hope, not pg_class.oid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Mar 2004 10:24:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster failure due to space " } ]
[ { "msg_contents": "\n\n\n\"Shea,Dan [CIS]\" <[email protected]> writes:\n>> The problem is that it did not clean itself up properly. \n\n>Hm. It should have done so. What were the exact filenames and sizes of\n>the not-deleted files?\n361716097 to 361716097.39 are 1073741824 bytes.\n361716097.40 is 186105856 bytes.\n\n> I can not find any reference to 361716097 in the pg_class table. \n\n>>You are looking at pg_class.relfilenode, I hope, not pg_class.oid.\nYes I am looking through pg_class.relfilenode.\n\nDan.\n", "msg_date": "Wed, 10 Mar 2004 10:38:43 -0500", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster failure due to space " } ]
[ { "msg_contents": "I have some suggestions based on my anecdotal experience.\n\n1. This is a relatively small DB -- the working set will likely be in \nRAM at any moment in time, making read I/O time mostly irrelevant.\n\n2. The killer will be write times -- specifically log writes. Small and \nheavily synchronized writes, log and data writes, will drag down an \nimpressive hardware RAID setup. We run mirrored hardware RAID 5 arrays \nwith write back cache and are constantly seeking ways to improve write \nperformance. We do a lot of batch processing, though, so we do a lot of \nwrite I/Os.\n\n3. Be very careful with \"battery backed write cache.\" It usually works \nas advertised. More than once in the past decade I have seen \nspontaneous cache corruption after power losss. The corruption usually \nhappens when some admin, including me, has assumed that the cache will \nALWAYS survive a power failure unblemished and has no \"plan B.\" Make \nsure you have a contingency plan for corruption, or don't enable the cache.\n\n4. RAID 10 will likely have bigger stripe sizes on the RAID 0 portion of \nthe setup, and might hinder, not help small write I/O performance.\n\n5. Most (almost all) of the I/O time will be due to the access time \n(head seek + head settle + rotational latency) and very little of the \nI/O time will due to data transfer time. In other words, getting drives \nthat provide faster transfer rates will barely improve performance. The \nsecret is lowering the access time.\n\n6. A relatively cheap way to drastically drop the access time is to get \nlarge drive(s) and only use a portion of them for storage. The less \nspace used on the drive, the less area the heads need to cover for \nseeks. At one extreme, you could make the partition the size of a \nsingle cylinder. This would make access time (ignoring OS and \ncontroller overhead) identical to rotational latency, which is as low as \n4.2 ms for a cheap 7200 RPM drive.\n\n7. A drive with a 5 ms average service time, servicing 8 KB blocks, will \nyield as much as 1.6 MB/s sustained write throughput. Not bad for a \ncheap uncached solution. Any OS aggregation of writes during the \nfsync() call will further improve this number -- it is basically a lower \nbound for throughput.\n\n8. Many people, especially managers, cannot stomach buying disk space \nand only using a portion of it. In many cases, it seems more palatable \nto purchase a much more expensive solution to get to the same speeds.\n\nGood luck.\n\nscott.marlowe wrote:\n> On Wed, 3 Mar 2004, Paul Thomas wrote:\n> \n> >\n> > On 02/03/2004 23:25 johnnnnnn wrote:\n> > > [snip]\n> > > random_page_cost should be set with the following things taken into\n> > > account:\n> > > - seek speed\n> >\n> > Which is not exactly the same thing as spindle speed as it's a \n> combination\n> > of spindle speed and track-to-track speed. I think you'll find that a \n> 15K\n> > rpm disk, whilst it will probably have a lower seek time than a 10K rpm\n> > disk, won't have a proportionately (i.e., 2/3rds) lower seek time.\n> \n> There are three factors that affect how fast you can get to the next\n> sector:\n> \n> seek time\n> settle time\n> rotational latency\n> \n> Most drives only list the first, and don't bother to mention the other\n> two.\n> \n> On many modern drives, the seek times are around 5 to 10 milliseconds.\n> The settle time varies as well. the longer the seek, the longer the\n> settle, generally. 
This is the time it takes for the head to stop shaking\n> and rest quietly over a particular track.\n> Rotational Latency is the amount of time you have to wait, on average, for\n> the sector you want to come under the heads.\n> \n> Assuming an 8 ms seek, and 2 ms settle (typical numbers), and that the\n> rotational latency on average is 1/2 of a rotation: At 10k rpm, a\n> rotation takes 1/166.667 of a second, or 6 mS. So, a half a rotation is\n> approximately 3 mS. By going to a 15k rpm drive, the latency drops to 2\n> mS. So, if we add them up, on the same basic drive, one being 10k and one\n> being 15k, we get:\n> \n> 10krpm: 8+2+3 = 13 mS\n> 15krpm: 8+2+2 = 12 mS\n> \n> So, based on the decrease in rotational latency being the only advantage\n> the 15krpm drive has over the 10krpm drive, we get an decrease in access\n> time of only 1 mS, or only about an 8% decrease in actual seek time.\n> \n> So, if you're random page cost on 10krpm drives was 1.7, you'd need to\n> drop it to 1.57 or so to reflect the speed increase from 15krpm drives.\n> \n> I.e. it's much more likely that going from 1 gig to 2 gigs of ram will\n> make a noticeable difference than going from 10k to 15k drives.\n> \n> \n> \n\n\n", "msg_date": "Wed, 10 Mar 2004 16:29:55 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Sorry about not chiming in before - I've been too swamped to think. I agree\nwith most of the points, but a lot of these posts are interesting and seem\nto describe systems from an SA perspective to my DBA-centric view.\n\n----- Original Message ----- \nFrom: \"Marty Scholes\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, March 10, 2004 6:29 PM\nSubject: Re: [PERFORM] Scaling further up\n\n\n> I have some suggestions based on my anecdotal experience.\n>\n> 1. This is a relatively small DB -- the working set will likely be in\n> RAM at any moment in time, making read I/O time mostly irrelevant.\n>\n> 2. The killer will be write times -- specifically log writes. Small and\n> heavily synchronized writes, log and data writes, will drag down an\n> impressive hardware RAID setup. We run mirrored hardware RAID 5 arrays\n> with write back cache and are constantly seeking ways to improve write\n> performance. We do a lot of batch processing, though, so we do a lot of\n> write I/Os.\n\nMy experience with RAID5 for streaming sequential writes is bad. This is\nsometimes helped by the hardware caching to cover the cost of the additional\nI/Os for striping (write through RAID5 + big cache acts like RAID 1+0 until\nyou run out of cache). Batch processing is different from high concurrency\ntransactions because it needs faster volume streaming, while TP is dependant\non the speed of ack'ing (few big writes with less synchronous waits vs. lots\nof small writes which serialize everyone). (RAID 3 worked for me in the past\nfor logging, but I haven't used it in years.)\n\n>\n> 3. Be very careful with \"battery backed write cache.\" It usually works\n> as advertised. More than once in the past decade I have seen\n> spontaneous cache corruption after power losss. The corruption usually\n> happens when some admin, including me, has assumed that the cache will\n> ALWAYS survive a power failure unblemished and has no \"plan B.\" Make\n> sure you have a contingency plan for corruption, or don't enable the\ncache.\n\nI agree strongly. 
There is also the same problem with disk write back cache\nand even with SCSI controllers with write through enabled. PITR would help\nhere. A lot of these problems are due to procedural error post crash.\n\n>\n> 4. RAID 10 will likely have bigger stripe sizes on the RAID 0 portion of\n> the setup, and might hinder, not help small write I/O performance.\n\nIn a high volume system without write caching you are almost always going to\nsee queuing, which can make the larger buffer mostly irrelevant, if it's not\nhuge. Write caching thrives on big block sizes (which is a key reason why\nSymmetrix doesn't do worse than it does) by reducing I/O counts. Most shops\nI've set up or seen use mirroring or RAID 10 for logs. Note also that many\nRAID 10 controllers in a non-write cached setup allows having a race between\nthe two writers, acknowledging when the first of the two completes -\nincreasing throughput by about 1/4.\n\n>\n> 5. Most (almost all) of the I/O time will be due to the access time\n> (head seek + head settle + rotational latency) and very little of the\n> I/O time will due to data transfer time. In other words, getting drives\n> that provide faster transfer rates will barely improve performance. The\n> secret is lowering the access time.\n\nTrue. This is very much a latency story. Even in volume batch, you can see\naccess time that clearly shows some other system configuration bottleneck\nthat happens elsewhere before hitting I/O capacity.\n\n>\n> 6. A relatively cheap way to drastically drop the access time is to get\n> large drive(s) and only use a portion of them for storage. The less\n> space used on the drive, the less area the heads need to cover for\n> seeks. At one extreme, you could make the partition the size of a\n> single cylinder. This would make access time (ignoring OS and\n> controller overhead) identical to rotational latency, which is as low as\n> 4.2 ms for a cheap 7200 RPM drive.\n\nThis is a good strategy for VLDB, and may not be relevant in this case.\n\nAlso - big sequential writes and 15K rpm drives, in the case of\nwritethrough, is a beautiful thing - they look like a manufacturers' demo. A\nprimary performance role of a RDBMS is to convert random I/O to sequential\n(by buffering reads and using a streaming log to defer random writes to\ncheckpoints). RDBMS's are the prime beneficiaries of the drive speed\nimprovements - since logging, backups, and copies are about the only things\n(ignoring bad perl scripts and find commands) that generate loads of 50+\nmB/sec.\n\n/Aaron\n", "msg_date": "Sun, 14 Mar 2004 17:11:22 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
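The access-time arithmetic quoted in this thread can be checked with a throwaway query (the 8 ms seek and 2 ms settle are the example figures from the post, not measurements; one rotation takes 60000/rpm milliseconds, and the average rotational latency is half of that):

SELECT 8 + 2 + (60000.0 / 10000 / 2) AS access_ms_10krpm,  -- ~13 ms
       8 + 2 + (60000.0 / 15000 / 2) AS access_ms_15krpm;  -- ~12 ms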
[ { "msg_contents": "Hello people! \nI have a question, I am going to begin a project for the University in\nthe area of Data Warehousing and I want to use postgres.\nDo you have some recommendation to me? \n\nThanks!!\n\nGreetings, Pablo\n\n\n", "msg_date": "11 Mar 2004 10:54:03 -0300", "msg_from": "Pablo Marrero <[email protected]>", "msg_from_op": true, "msg_subject": "started Data Warehousing" }, { "msg_contents": "Pablo Marrero wrote:\n> Hello people! \n> I have a question, I am going to begin a project for the University in\n> the area of Data Warehousing and I want to use postgres.\n> Do you have some recommendation to me? \n> \n\nRegarding what? Do you have an specific questions?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> Thanks!!\n> \n> Greetings, Pablo\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly", "msg_date": "Thu, 11 Mar 2004 06:55:48 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] started Data Warehousing" }, { "msg_contents": "\nPablo.....\n\n> I have a question, I am going to begin a project for the University in\n> the area of Data Warehousing and I want to use postgres. Do you have\n> some recommendation to me?\n\nYes. Set up a linux machine if you don't have access to one so you can\nload postgresql and start learning how postgresql works by playing with\nit. Even after you get the production database running you can use it for\ntesting (or even the main machine, depending, but it's nice to have a test\nplatform entirely removed from the hot database).\n\nAnd start reading, read the pgsql newsgroups (and not just performance,\nthere are other more basic ones), read the online docs and books, and\nthere are printed books, too. A wealth of information.\n\nI'm sure others have more ideas to recommend, too......\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Musician's Online Database Exchange (The MODE Pages)\n http://www.TheMode.com\n ==========================================================================\n\n", "msg_date": "Thu, 11 Mar 2004 10:08:52 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PERFORM] started Data Warehousing" } ]
[ { "msg_contents": "Hi. I have two existing tables, A and B. A has a 'varchar(1000)' field \nand B has a 'text' field, each with btree indexes defined. When I do a \njoin between these, on this field, it seems to a hash join, as opposed \nto using the indexes, as I might expect (I'm no postgres expert, btw).\n\nMy question is: if I changed both fields to be text or varchar(1000) \nthen would the index be used?\n\nTa,\n\n-- \nMike\n\n\n", "msg_date": "Thu, 11 Mar 2004 18:25:20 +0000", "msg_from": "Mike Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Impact of varchar/text in use of indexes" }, { "msg_contents": "Mike Moran <[email protected]> writes:\n> Hi. I have two existing tables, A and B. A has a 'varchar(1000)' field \n> and B has a 'text' field, each with btree indexes defined. When I do a \n> join between these, on this field, it seems to a hash join, as opposed \n> to using the indexes, as I might expect (I'm no postgres expert, btw).\n\n> My question is: if I changed both fields to be text or varchar(1000) \n> then would the index be used?\n\nProbably not, and in any case your assumption is mistaken. Indexes are\nnot always the right way to join.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Mar 2004 14:02:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of varchar/text in use of indexes " } ]
[ { "msg_contents": "First let me explain the situation:\n\nI came into the #postgresql irc channel talking about this problem, and \nsomeone advised me to use this mailing list (so i'm not just wasting your \ntime i hope). I'm not sure how to describe the problem fully, so I'll start \nby explaining what my database does, a little about the table structure, \nthen an example of a problematic query, and some other information that \nmight be relevant.\n\nMy database is a Chinese-English/English-Chinese dictionary. It lets users \nsearch Chinese words by any character in the word, or any sequence of \ncharacters (starting anywhere the user wants). Characters are often \nsearched by their pinyin values (a romanization of the sounds). The \ndictionary has an average word length of 2 (but it sucks), but there are \nalso many words of length 4 and 6 characters. So it wouldn't be uncommon to \nsearch for something like \"a1 la1 bo yu\" (arabic). There are also some very \nlong words with 12 or more characters (where my problem becomes more \npronounced).\n\nThat being said, the most important table here is the words table:\n Table \"public.words\"\n Column | Type | Modifiers\n--------------+----------------------+-----------\nwid | integer | not null\nsequence | smallint | not null\nvariant | smallint | not null\nchar_count | smallint | not null\nunicode | character varying(5) | not null\npinyin | character varying(8) | not null\nsimpvar | character varying(5) |\nzvar | character varying(5) |\ncompatvar | character varying(5) |\ndef_exists | boolean | not null\nnum_variants | smallint |\npage_order | integer |\npinyins | character varying |\nunicodes | character varying |\nIndexes:\n \"words2_pkey\" primary key, btree (wid, variant, \"sequence\")\n \"page_index\" btree (page_order)\n \"pinyin_index\" btree (pinyin)\n \"unicode_index\" btree (unicode)\n\n\nThe best example of the problem I have when using this table is this query:\n\nSELECT\n w8.wid,\n w8.variant,\n w8.num_variants,\n sum_text(w8.unicode) as unicodes,\n sum_text(w8.pinyin) as pinyins\nFROM\n words as w0, words as w1,\n words as w2, words as w3,\n words as w4, words as w5,\n words as w6, words as w7,\n words as w8\nWHERE\n w0.wid > 0 AND\n w0.pinyin = 'zheng4' AND\n w0.def_exists = 't' AND\n w0.sequence = 0 AND\n w1.wid = w0.wid AND\n w1.pinyin LIKE 'fu_' AND\n w1.variant = w0.variant AND\n w1.sequence = (w0.sequence + 1) AND\n w2.wid = w1.wid AND\n w2.pinyin LIKE 'ji_' AND\n w2.variant = w1.variant AND\n w2.sequence = (w1.sequence + 1) AND\n w3.wid = w2.wid AND\n w3.pinyin LIKE 'guan_' AND\n w3.variant = w2.variant AND\n w3.sequence = (w2.sequence + 1) AND\n w4.wid = w3.wid AND\n w4.pinyin LIKE 'kai_' AND\n w4.variant = w3.variant AND\n w4.sequence = (w3.sequence + 1) AND\n w5.wid = w4.wid AND\n w5.pinyin LIKE 'fang_' AND\n w5.variant = w4.variant AND\n w5.sequence = (w4.sequence + 1) AND\n w6.wid = w5.wid AND\n w6.pinyin LIKE 'xi_' AND\n w6.variant = w5.variant AND\n w6.sequence = (w5.sequence + 1) AND\n w7.wid = w6.wid AND\n w7.pinyin LIKE 'tong_' AND\n w7.variant = w6.variant AND\n w7.sequence = (w6.sequence + 1) AND\n w8.wid = w7.wid AND\n w8.variant = w7.variant\nGROUP BY\n w8.wid,\n w8.variant,\n w8.num_variants,\n w8.page_order ,\n w0.sequence ,\n w1.sequence ,\n w2.sequence ,\n w3.sequence ,\n w4.sequence ,\n w5.sequence ,\n w6.sequence ,\n w7.sequence\nORDER BY\n w8.page_order;\n\n(phew!)\n\nwith the default geqo_threshold of 11, this query takes 3155ms on my machine \n(a 1ghz athlon with 384 megs of pc133 ram). 
This is very very long.\n\nif i first do prepare blah as SELECT ....., then run execute blah, the time \ngoes down to about 275ms (i had been running this query a lot, and did a \nvacuum update before all this).\n\nthe ouput from EXPLAIN ANALYZE :\n\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=54.13..54.14 rows=1 width=43) (actual time=315.357..315.357 \nrows=1 loops=1)\n Sort Key: w8.page_order\n -> HashAggregate (cost=54.12..54.12 rows=1 width=43) (actual \ntime=315.328..315.330 rows=1 loops=1)\n -> Nested Loop (cost=0.00..54.08 rows=1 width=43) (actual \ntime=6.229..314.566 rows=12 loops=1)\n Join Filter: ((\"outer\".wid = \"inner\".wid) AND \n(\"outer\".variant = \"inner\".variant) AND (\"outer\".\"sequence\" = \n(\"inner\".\"sequence\" + 1)) AND (\"inner\".\"sequence\" = (\"outer\".\"sequence\" + \n1)))\n -> Nested Loop (cost=0.00..48.07 rows=1 width=83) (actual \ntime=6.088..279.745 rows=12 loops=1)\n Join Filter: ((\"inner\".\"sequence\" = (\"outer\".\"sequence\" \n+ 1)) AND (\"outer\".\"sequence\" = (\"inner\".\"sequence\" + 1)))\n -> Nested Loop (cost=0.00..42.05 rows=1 width=75) \n(actual time=5.980..278.602 rows=12 loops=1)\n -> Nested Loop (cost=0.00..36.04 rows=1 \nwidth=48) (actual time=5.910..278.280 rows=1 loops=1)\n Join Filter: ((\"inner\".variant = \n\"outer\".variant) AND (\"inner\".wid = \"outer\".wid))\n -> Nested Loop (cost=0.00..30.04 rows=1 \nwidth=40) (actual time=3.465..275.137 rows=1 loops=1)\n Join Filter: (\"inner\".\"sequence\" = \n(\"outer\".\"sequence\" + 1))\n -> Nested Loop (cost=0.00..24.03 \nrows=1 width=32) (actual time=3.408..275.045 rows=1 loops=1)\n Join Filter: \n(\"outer\".\"sequence\" = (\"inner\".\"sequence\" + 1))\n -> Nested Loop \n(cost=0.00..18.00 rows=1 width=24) (actual time=3.350..274.948 rows=1 \nloops=1)\n -> Nested Loop \n(cost=0.00..11.99 rows=1 width=16) (actual time=3.295..274.678 rows=6 \nloops=1)\n Join Filter: \n((\"inner\".wid = \"outer\".wid) AND (\"inner\".variant = \"outer\".variant) AND \n(\"inner\".\"sequence\" = (\"outer\".\"sequence\" + 1)))\n -> Index Scan \nusing pinyin_index on words w4 (cost=0.00..5.98 rows=1 width=8) (actual \ntime=0.090..1.222 rows=165 loops=1)\n Index Cond: \n(((pinyin)::text >= 'kai'::character varying) AND ((pinyin)::text < \n'kaj'::character varying))\n Filter: \n((pinyin)::text ~~ 'kai_'::text)\n -> Index Scan \nusing pinyin_index on words w5 (cost=0.00..5.98 rows=1 width=8) (actual \ntime=0.017..1.380 rows=259 loops=165)\n Index Cond: \n(((pinyin)::text >= 'fang'::character varying) AND ((pinyin)::text < \n'fanh'::character varying))\n Filter: \n((pinyin)::text ~~ 'fang_'::text)\n -> Index Scan using \nwords2_pkey on words w1 (cost=0.00..6.00 rows=1 width=8) (actual \ntime=0.032..0.037 rows=0 loops=6)\n Index Cond: \n((\"outer\".wid = w1.wid) AND (\"outer\".variant = w1.variant))\n Filter: \n((pinyin)::text ~~ 'fu_'::text)\n -> Index Scan using \nwords2_pkey on words w0 (cost=0.00..6.01 rows=1 width=8) (actual \ntime=0.033..0.068 rows=1 loops=1)\n Index Cond: ((\"outer\".wid \n= w0.wid) AND (w0.wid > 0) AND (\"outer\".variant = w0.variant))\n Filter: (((pinyin)::text \n= 'zheng4'::text) AND (def_exists = true) AND (\"sequence\" = 0))\n -> Index Scan using words2_pkey on \nwords w2 (cost=0.00..6.00 rows=1 width=8) (actual time=0.029..0.060 rows=1 \nloops=1)\n Index Cond: ((w2.wid = \n\"outer\".wid) AND 
(w2.variant = \"outer\".variant))\n Filter: ((pinyin)::text ~~ \n'ji_'::text)\n -> Index Scan using pinyin_index on words \nw7 (cost=0.00..5.98 rows=1 width=8) (actual time=0.030..2.573 rows=338 \nloops=1)\n Index Cond: (((pinyin)::text >= \n'tong'::character varying) AND ((pinyin)::text < 'tonh'::character varying))\n Filter: ((pinyin)::text ~~ \n'tong_'::text)\n -> Index Scan using words2_pkey on words w8 \n(cost=0.00..5.99 rows=1 width=27) (actual time=0.029..0.130 rows=12 loops=1)\n Index Cond: ((w8.wid = \"outer\".wid) AND \n(w8.variant = \"outer\".variant))\n -> Index Scan using words2_pkey on words w6 \n(cost=0.00..6.00 rows=1 width=8) (actual time=0.040..0.060 rows=1 loops=12)\n Index Cond: ((w6.wid = \"outer\".wid) AND \n(w6.variant = \"outer\".variant))\n Filter: ((pinyin)::text ~~ 'xi_'::text)\n -> Index Scan using pinyin_index on words w3 \n(cost=0.00..5.98 rows=1 width=8) (actual time=0.023..2.312 rows=304 \nloops=12)\n Index Cond: (((pinyin)::text >= 'guan'::character \nvarying) AND ((pinyin)::text < 'guao'::character varying))\n Filter: ((pinyin)::text ~~ 'guan_'::text)\nTotal runtime: 316.493 ms\n(44 rows)\n\nTime: 3167.853 ms\n\n\n\nAs you can see, the two run times there are quite different... The person I \nspoke to in the irc channel said this all indicated a poor planning time, \nand I think I agree. Yesterday I tried setting geqo_threshold to 7 instead \nof the default of 11, and it seemed to help a little, but the running times \nwere still extremely high.\n\nI guess I do have a question in addition to just wanting to notify the right \npeople of this problem: Since a lot of my queries are similar to this one \n(but not similar enough to allow me to use one or two of them over and over \nwith different parameters), is there any way for me to reorganize or rewrite \nthe queries so that the planner doesn't take so long? (I would hate to have \nto take all of this out of the db's hands and iterate in code myself...)\n\nIf you guys are optimistic about someone being able to fix this problem in \npgsql, I will just wait for the bug fix.\n\n\nThanks for listening :) let me know if you need any more information (oh \nyea, this is on Linux, version 7.4.1)\n\n_________________________________________________________________\nCreate a Job Alert on MSN Careers and enter for a chance to win $1000! 
\nhttp://msn.careerbuilder.com/promo/kaday.htm?siteid=CBMSN_1K&sc_extcmp=JS_JASweep_MSNHotm2\n\n", "msg_date": "Thu, 11 Mar 2004 20:47:23 -0500", "msg_from": "\"Eric Brown\" <[email protected]>", "msg_from_op": true, "msg_subject": "severe performance issue with planner" }, { "msg_contents": "> if i first do prepare blah as SELECT ....., then run execute blah, the \n> time goes down to about 275ms (i had been running this query a lot, and \n> did a vacuum update before all this).\n\nIf you make it an SQL stored procedure, you get the speed up of the \nPREPARE command, without having to prepare manually all the time.\n\nChris\n\n", "msg_date": "Fri, 12 Mar 2004 10:42:28 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner" }, { "msg_contents": "\"Eric Brown\" <[email protected]> writes:\n> [ planning a 9-table query takes too long ]\n\nSee http://www.postgresql.org/docs/7.4/static/explicit-joins.html\nfor some useful tips.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Mar 2004 23:07:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner " }, { "msg_contents": "\nThe other posts about using explicit joins and using stored procedures are\nboth good points. But I have a few other comments to make:\n\n\"Eric Brown\" <[email protected]> writes:\n\n> WHERE\n> w0.wid > 0 AND\n> w0.pinyin = 'zheng4' AND\n> w0.def_exists = 't' AND\n> w0.sequence = 0 AND\n> w1.wid = w0.wid AND\n> w1.pinyin LIKE 'fu_' AND\n> w1.variant = w0.variant AND\n> w1.sequence = (w0.sequence + 1) AND\n\nI'm not sure it'll help the planner, but w0.sequence+1 is always just going to\nbe 1, and so on with the others. I think the planner might be able to figure\nthat out but the plan doesn't seem to show it doing so. I'm not sure it would\nhelp the plan though.\n\nSimilarly you have w1.wid=w0.wid and w2.wid=w1.wid and w3.wid=w2.wid etc. And\nalso with the \"variant\" column. You might be able to get this planned better\nby writing it as a join from w0 to all the others rather than a chain of\nw0->w1->w2->... Again I'm not sure; you would have to experiment.\n\n\nBut I wonder if there isn't a way to do this in a single pass using an\naggregate. I'm not sure I understand the schema exactly, but perhaps something\nlike this? 
\n\nselect w8.wid,\n w8.variant,\n w8.num_variants,\n sum_text(w8.unicode) as unicodes,\n sum_text(w8.pinyin) as pinyins\n from (\n select wid,variant,\n from words \n where (sequence = 0 and pinyin = 'zheng4')\n OR (sequence = 1 and pinyin like 'ji_')\n OR (sequence = 2 and pinyin like 'guan_')\n OR (sequence = 3 and pinyin like 'kai_')\n OR (sequence = 4 and pinyin like 'fang_')\n OR (sequence = 5 and pinyin like 'xi_')\n OR (sequence = 6 and pinyin like 'tong_')\n OR (sequence = 7 and pinyin like 'fu_')\n group by wid,variant\n having count(*) = 8\n ) as w\n join words as w8 using (wid,variant)\n\nThis might be helped by having an index on <sequence,pinyin> but it might not\neven need it.\n\n\n-- \ngreg\n\n", "msg_date": "12 Mar 2004 13:00:33 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner" }, { "msg_contents": "\nSorry, I forgot a key clause there:\n\nGreg Stark <[email protected]> writes:\n\n> select w8.wid,\n> w8.variant,\n> w8.num_variants,\n> sum_text(w8.unicode) as unicodes,\n> sum_text(w8.pinyin) as pinyins\n> from (\n> select wid,variant,\n> from words \n> where (sequence = 0 and pinyin = 'zheng4')\n> OR (sequence = 1 and pinyin like 'ji_')\n> OR (sequence = 2 and pinyin like 'guan_')\n> OR (sequence = 3 and pinyin like 'kai_')\n> OR (sequence = 4 and pinyin like 'fang_')\n> OR (sequence = 5 and pinyin like 'xi_')\n> OR (sequence = 6 and pinyin like 'tong_')\n> OR (sequence = 7 and pinyin like 'fu_')\n> group by wid,variant\n> having count(*) = 8\n> ) as w\n> join words as w8 using (wid,variant)\n\n where w8.sequence = 8\n\nOr perhaps that ought to be \n\n join words as w8 on ( w8.wid=w.wid \n and w8.variant=w.variant \n and w8.sequence = 8)\n\nor even\n\n join (select * from words where sequence = 8) as w8 using (wid,variant)\n\n\nI think they should all be equivalent though.\n\n\n\n\n-- \ngreg\n\n", "msg_date": "12 Mar 2004 13:14:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner" }, { "msg_contents": "\n\nOn Thu, 11 Mar 2004, Tom Lane wrote:\n\n> \"Eric Brown\" <[email protected]> writes:\n> > [ planning a 9-table query takes too long ]\n> \n> See http://www.postgresql.org/docs/7.4/static/explicit-joins.html\n> for some useful tips.\n> \n\nIs this the best answer we've got? For me with an empty table this query \ntakes 4 seconds to plan, is that the expected planning time? I know I've \ngot nine table queries that don't take that long.\n\nSetting geqo_threshold less than 9, it takes 1 second to plan. Does this \nindicate that geqo_threshold is set too high, or is it a tradeoff between \nplanning time and plan quality? If the planning time is so high because \nthe are a large number of possible join orders, should geqo_threhold be \nbased on the number of possible plans somehow instead of the number of \ntables involved?\n\nKris Jurka\n\n", "msg_date": "Sat, 13 Mar 2004 22:48:01 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner " } ]
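Following the explicit-joins page Tom points to, here is a sketch of how the join order can be pinned down on 7.4 so the planner stops searching all possible orders: write the joins explicitly and set join_collapse_limit to 1 for the session. Only w0..w2 are shown; the remaining word positions follow the same pattern.

SET join_collapse_limit = 1;   -- honour the JOIN order as written
SELECT w0.wid
FROM words AS w0
JOIN words AS w1 ON w1.wid = w0.wid AND w1.variant = w0.variant
                AND w1.sequence = 1 AND w1.pinyin LIKE 'fu_'
JOIN words AS w2 ON w2.wid = w1.wid AND w2.variant = w1.variant
                AND w2.sequence = 2 AND w2.pinyin LIKE 'ji_'
WHERE w0.pinyin = 'zheng4' AND w0.sequence = 0 AND w0.def_exists;

This trades a possibly worse plan for a much cheaper planning step, which is the bottleneck being described in the thread.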
[ { "msg_contents": "\nHello to everybody.\n\nI ask your help for a severe problem when doing a query that LEFT JOINs\none table to another ON a field, and then LEFT JOINs again to another\n\"instance\" of a table ON another field which stores the same entity, but\nwith different meaning.\n\nI include 3 EXPLAIN ANALYZEs:\n* The first one, the target (and problematic) query, which runs in 5 to 6\nminutes.\n* The second one, a variation with the second LEFT JOIN commented out,\nwhich runs in 175 to 450 ms.\n* The third one, a variation of the first one with ORDER BY removed, which\ngives me about 19 seconds.\n\nTherefore, I feel like there are two problems here the one that raises the\nclock to 6 minutes and one that raises it to 20 seconds. I expected a much\nlower time. I checked indexes and data types already, they are all fine.\nAll relevant fields have BTREEs, all PKs have UNIQUE BTREE, and all id and\next_* fields have 'integer' as data type. Each ext_* has its corresponding\nREFERENCES contraint.\n\nI translated all the table and field names to make it easier to read. I\nmade my best not to let any typo go through.\n\nI'd appreciate any help.\n\nOctavio.\n\n=== First EXPLAIN ANALYZE ===\n\nEXPLAIN ANALYZE\nSELECT\n\tt_materias_en_tira.id AS Id,\n\tt_clientes.paterno || ' ' || t_clientes.materno || ' ' ||\nt_clientes.nombre AS Alumno,\n\tt_materias.nombre AS Materia,\n\tt_materias__equivalentes.nombre AS MateriaEquivalente,\n\tt_grupos.nombre AS Grupo,\n\tcalificacion_final AS Calificacion,\n\ttipo AS Tipo,\n\teer AS EER,\n\ttotal_asistencias AS TotalAsistencias,\n\ttotal_clases As TotalClases\nFROM\n\tt_materias_en_tira\n\tLEFT JOIN t_alumnos_en_semestre ON ext_alumno_en_semestre =\nt_alumnos_en_semestre.id\n\tLEFT JOIN t_alumnos ON ext_alumno = t_alumnos.id\n\tLEFT JOIN t_clientes ON ext_cliente = t_clientes.id\n\tLEFT JOIN t_materias ON ext_materia = t_materias.id\n\tLEFT JOIN t_materias AS t_materias__equivalentes ON\next_materia__equivalencia = t_materias.id\n\tLEFT JOIN t_grupos ON ext_grupo = t_grupos.id\nWHERE\n\tt_alumnos_en_semestre.ext_ciclo = 2222\nORDER BY\n\tAlumno, Materia;\n\nThis one gave:\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=11549.08..11552.11 rows=1210 width=112) (actual\ntime=311246.000..355615.000 rows=1309321 loops=1)\n Sort Key: (((((t_clientes.paterno)::text || ' '::text) ||\n(t_clientes.materno)::text) || ' '::text) ||\n(t_clientes.nombre)::text), t_materias.nombre\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=2.000..2.000 rows=1 loops=1)\n -> Hash Left Join (cost=1089.25..11487.11 rows=1210 width=112)\n(actual time=83.000..19303.000 rows=1309321 loops=1)\n Hash Cond: (\"outer\".ext_grupo = \"inner\".id)\n -> Nested Loop Left Join (cost=1086.92..11454.53 rows=1210\nwidth=107) (actual time=82.000..9077.000 rows=1309321 loops=1)\n Join Filter: (\"outer\".ext_materia__equivalencia =\n\"outer\".id) -> Hash Left Join (cost=1078.15..1181.93\nrows=1210\nwidth=93) (actual time=82.000..275.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_materia = \"inner\".id) -> \nMerge Right Join (cost=1068.43..1154.07\nrows=1210 width=71) (actual time=81.000..213.000\nrows=3473 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".ext_cliente)\n-> Index Scan using t_clientes_pkey on\nt_clientes (cost=0.00..62.87 rows=1847\nwidth=38) (actual time=10.000..34.000 
rows=1847\nloops=1)\n -> Sort (cost=1068.43..1071.46 rows=1210\nwidth=41) (actual time=71.000..76.000 rows=3473\nloops=1)\n Sort Key: t_alumnos.ext_cliente\n -> Hash Left Join (cost=41.12..1006.48\nrows=1210 width=41) (actual\ntime=9.000..61.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_alumno =\n\"inner\".id)\n -> Nested Loop (cost=0.00..944.18\nrows=1210 width=41) (actual\ntime=3.000..36.000 rows=3473\nloops=1)\n -> Index Scan using\ni_t_alumnos_en_semestre__ext_ciclo\non t_alumnos_en_semestre\n(cost=0.00..8.63 rows=269\nwidth=8) (actual\ntime=2.000..3.000 rows=457\nloops=1)\n Index Cond: (ext_ciclo\n= $0)\n -> Index Scan using\ni_t_materias_en_tira__ext_alumno_en_semestre\non t_materias_en_tira\n(cost=0.00..3.32 rows=12\nwidth=41) (actual\ntime=0.009..0.035 rows=8\nloops=457)\n Index Cond:\n(t_materias_en_tira.ext_alumno_en_semestre\n= \"outer\".id)\n -> Hash (cost=36.50..36.50\nrows=1850 width=8) (actual\ntime=6.000..6.000 rows=0loops=1)\n -> Seq Scan on t_alumnos\n(cost=0.00..36.50 rows=1850\nwidth=8) (actual\ntime=1.000..3.000 rows=1850\nloops=1)\n -> Hash (cost=8.77..8.77 rows=377 width=26) (actual\ntime=1.000..1.000 rows=0 loops=1)\n -> Seq Scan on t_materias (cost=0.00..8.77\nrows=377 width=26) (actual time=0.000..1.000\nrows=377 loops=1)\n -> Materialize (cost=8.77..12.54 rows=377 width=22)\n(actual time=0.000..0.175 rows=377 loops=3473)\n -> Seq Scan on t_materias t_materias__equivalentes\n(cost=0.00..8.77 rows=377 width=22) (actual\ntime=0.000..1.000 rows=377 loops=1)\n -> Hash (cost=2.07..2.07 rows=107 width=13) (actual\ntime=1.000..1.000 rows=0 loops=1)\n -> Seq Scan on t_grupos (cost=0.00..2.07 rows=107\nwidth=13) (actual time=0.000..1.000 rows=107 loops=1)\n\n Total runtime: 356144.000 ms\n\n=== Second EXPLAIN ANALYZE ===\n\nEXPLAIN ANALYZE\nSELECT\n\tt_materias_en_tira.id AS Id,\n\tt_clientes.paterno || ' ' || t_clientes.materno || ' ' ||\nt_clientes.nombre AS Alumno,\n\tt_materias.nombre AS Materia,\n--\tt_materias__equivalentes.nombre AS MateriaEquivalente,\n\tt_grupos.nombre AS Grupo,\n\tcalificacion_final AS Calificacion,\n\ttipo AS Tipo,\n\teer AS EER,\n\ttotal_asistencias AS TotalAsistencias,\n\ttotal_clases As TotalClases\nFROM\n\tt_materias_en_tira\n\tLEFT JOIN t_alumnos_en_semestre ON ext_alumno_en_semestre =\nt_alumnos_en_semestre.id\n\tLEFT JOIN t_alumnos ON ext_alumno = t_alumnos.id\n\tLEFT JOIN t_clientes ON ext_cliente = t_clientes.id\n\tLEFT JOIN t_materias ON ext_materia = t_materias.id\n--\tLEFT JOIN t_materias AS t_materias__equivalentes ON\next_materia__equivalencia = t_materias.id\n\tLEFT JOIN t_grupos ON ext_grupo = t_grupos.id\nWHERE\n\tt_alumnos_en_semestre.ext_ciclo = 2222\nORDER BY\n\tAlumno, Materia;\n\nEXPLAIN ANALYZE\nSELECT\n\tt_materias_en_tira.id AS Id,\n\tt_clientes.paterno || ' ' || t_clientes.materno || ' ' ||\nt_clientes.nombre AS Alumno,\n\tt_materias.nombre AS Materia,\n\tt_materias__equivalentes.nombre AS MateriaEquivalente,\n\tt_grupos.nombre AS Grupo,\n\tcalificacion_final AS Calificacion,\n\ttipo AS Tipo,\n\teer AS EER,\n\ttotal_asistencias AS TotalAsistencias,\n\ttotal_clases As TotalClases\nFROM\n\tt_materias_en_tira\n\tLEFT JOIN t_alumnos_en_semestre ON ext_alumno_en_semestre =\nt_alumnos_en_semestre.id\n\tLEFT JOIN t_alumnos ON ext_alumno = t_alumnos.id\n\tLEFT JOIN t_clientes ON ext_cliente = t_clientes.id\n\tLEFT JOIN t_materias ON ext_materia = t_materias.id\n\tLEFT JOIN t_materias AS t_materias__equivalentes ON\next_materia__equivalencia = t_materias.id\n\tLEFT JOIN t_grupos ON ext_grupo = 
t_grupos.id\nWHERE\n\tt_alumnos_en_semestre.ext_ciclo = 2222;\n\nIt gave:\n\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1276.49..1279.51 rows=1210 width=90) (actual\ntime=341.000..341.000 rows=3473 loops=1)\n Sort Key: (((((t_clientes.paterno)::text || ' '::text) ||\n(t_clientes.materno)::text) || ' '::text) ||\n(t_clientes.nombre)::text), t_materias.nombre\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=146.000..146.000 rows=1 loops=1)\n -> Hash Left Join (cost=1080.48..1214.52 rows=1210 width=90) (actual\ntime=209.000..284.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_grupo = \"inner\".id)\n -> Hash Left Join (cost=1078.15..1181.93 rows=1210 width=85)\n(actual time=208.000..250.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_materia = \"inner\".id)\n -> Merge Right Join (cost=1068.43..1154.07 rows=1210\nwidth=67) (actual time=207.000..227.000 rows=3473loops=1)\n Merge Cond: (\"outer\".id = \"inner\".ext_cliente) -> \nIndex Scan using t_clientes_pkey on t_clientes\n(cost=0.00..62.87 rows=1847 width=38) (actual\ntime=0.000..5.000 rows=1847 loops=1)\n -> Sort (cost=1068.43..1071.46 rows=1210 width=37)\n(actual time=207.000..209.000 rows=3473 loops=1)\n Sort Key: t_alumnos.ext_cliente\n -> Hash Left Join (cost=41.12..1006.48\nrows=1210 width=37) (actual\ntime=152.000..196.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_alumno =\n\"inner\".id) -> Nested Loop \n(cost=0.00..944.18\nrows=1210 width=37) (actual\ntime=146.000..177.000 rows=3473 loops=1)\n -> Index Scan using\ni_t_alumnos_en_semestre__ext_ciclo\non t_alumnos_en_semestre\n(cost=0.00..8.63 rows=269 width=8)\n(actual time=146.000..148.000\nrows=457 loops=1)\n Index Cond: (ext_ciclo = $0)\n -> Index Scan using\ni_t_materias_en_tira__ext_alumno_en_semestre\non t_materias_en_tira\n(cost=0.00..3.32 rows=12 width=37)\n(actual time=0.009..0.022 rows=8\nloops=457)\n Index Cond:\n(t_materias_en_tira.ext_alumno_en_semestre\n= \"outer\".id)\n -> Hash (cost=36.50..36.50 rows=1850\nwidth=8) (actual time=6.000..6.000 rows=0\nloops=1)\n -> Seq Scan on t_alumnos\n(cost=0.00..36.50 rows=1850\nwidth=8) (actual time=0.000..3.000\nrows=1850 loops=1)\n -> Hash (cost=8.77..8.77 rows=377 width=26) (actual\ntime=1.000..1.000 rows=0 loops=1)\n -> Seq Scan on t_materias (cost=0.00..8.77 rows=377\nwidth=26) (actual time=0.000..0.000 rows=377loops=1)\n -> Hash (cost=2.07..2.07 rows=107 width=13) (actual\ntime=1.000..1.000 rows=0 loops=1)\n -> Seq Scan on t_grupos (cost=0.00..2.07 rows=107\nwidth=13) (actual time=0.000..0.000 rows=107 loops=1)\n\n Total runtime: 346.000 ms\n\n=== Third EXPLAIN ANALYZE ===\n\nEXPLAIN ANALYZE\nSELECT\n\tt_materias_en_tira.id AS Id,\n\tt_clientes.paterno || ' ' || t_clientes.materno || ' ' ||\nt_clientes.nombre AS Alumno,\n\tt_materias.nombre AS Materia,\n\tt_materias__equivalentes.nombre AS MateriaEquivalente,\n\tt_grupos.nombre AS Grupo,\n\tcalificacion_final AS Calificacion,\n\ttipo AS Tipo,\n\teer AS EER,\n\ttotal_asistencias AS TotalAsistencias,\n\ttotal_clases As TotalClases\nFROM\n\tt_materias_en_tira\n\tLEFT JOIN t_alumnos_en_semestre ON ext_alumno_en_semestre =\nt_alumnos_en_semestre.id\n\tLEFT JOIN t_alumnos ON ext_alumno = t_alumnos.id\n\tLEFT JOIN t_clientes ON ext_cliente = t_clientes.id\n\tLEFT JOIN t_materias ON ext_materia = t_materias.id\n\tLEFT JOIN t_materias AS t_materias__equivalentes 
ON\next_materia__equivalencia = t_materias.id\n\tLEFT JOIN t_grupos ON ext_grupo = t_grupos.id\nWHERE\n\tt_alumnos_en_semestre.ext_ciclo = 2222;\n\nResult:\n\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=484.34..4470.54 rows=459 width=112) (actual\ntime=70.000..18241.000 rows=1309321 loops=1)\n Hash Cond: (\"outer\".ext_grupo = \"inner\".id)\n -> Nested Loop Left Join (cost=482.01..4456.73 rows=459 width=107)\n(actual time=70.000..7912.000 rows=1309321 loops=1)\n Join Filter: (\"outer\".ext_materia__equivalencia = \"outer\".id) -> \nHash Left Join (cost=473.24..554.49 rows=459 width=93)\n(actual time=70.000..142.000 rows=3473 loops=1)\n Hash Cond: (\"outer\".ext_materia = \"inner\".id)\n -> Merge Right Join (cost=463.52..537.90 rows=459\nwidth=71) (actual time=67.000..109.000 rows=3473 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".ext_cliente) -> \nIndex Scan using t_clientes_pkey on t_clientes\n(cost=0.00..62.87 rows=1847 width=38) (actual\ntime=0.000..14.000 rows=1847 loops=1)\n -> Sort (cost=463.52..464.67 rows=459 width=41)\n(actual time=67.000..69.000 rows=3473 loops=1)\n Sort Key: t_alumnos.ext_cliente\n -> Merge Right Join (cost=379.40..443.23\nrows=459 width=41) (actual time=34.000..57.000\nrows=3473 loops=1)\n Merge Cond: (\"outer\".id =\n\"inner\".ext_alumno)\n -> Index Scan using t_alumnos_pkey on\nt_alumnos (cost=0.00..52.35 rows=1850\nwidth=8) (actual time=0.000..4.000\nrows=1850 loops=1)\n -> Sort (cost=379.40..380.55 rows=459\nwidth=41) (actual time=34.000..36.000\nrows=3473 loops=1)\n Sort Key:\nt_alumnos_en_semestre.ext_alumno\n -> Nested Loop (cost=0.00..359.11\nrows=459 width=41) (actual\ntime=0.000..21.000 rows=3473\nloops=1)\n -> Index Scan using\ni_t_alumnos_en_semestre__ext_ciclo\non t_alumnos_en_semestre\n(cost=0.00..4.36 rows=102\nwidth=8) (actual\ntime=0.000..1.000 rows=457\nloops=1)\n Index Cond: (ext_ciclo\n= 2222)\n -> Index Scan using\ni_t_materias_en_tira__ext_alumno_en_semestre\non t_materias_en_tira\n(cost=0.00..3.32 rows=12\nwidth=41) (actual\ntime=0.004..0.026 rows=8\nloops=457)\n Index Cond:\n(t_materias_en_tira.ext_alumno_en_semestre\n= \"outer\".id)\n -> Hash (cost=8.77..8.77 rows=377 width=26) (actual\ntime=2.000..2.000 rows=0 loops=1)\n -> Seq Scan on t_materias (cost=0.00..8.77 rows=377\nwidth=26) (actual time=0.000..2.000 rows=377 loops=1)\n -> Materialize (cost=8.77..12.54 rows=377 width=22) (actual\ntime=0.000..0.163 rows=377 loops=3473)\n -> Seq Scan on t_materias t_materias__equivalentes\n(cost=0.00..8.77 rows=377 width=22) (actual\ntime=0.000..1.000 rows=377 loops=1)\n -> Hash (cost=2.07..2.07 rows=107 width=13) (actual time=0.000..0.000\nrows=0 loops=1)\n -> Seq Scan on t_grupos (cost=0.00..2.07 rows=107 width=13)\n(actual time=0.000..0.000 rows=107 loops=1)\n\n Total runtime: 18787.000 ms\n\nSELECT count(*) FROM t_materias_en_tira;\n count\n-------\n 41059\n(1 row)\n\nSELECT count(*) FROM t_materias;\n count\n-------\n 377\n(1 row)\n\nSELECT version();;\n version\n---------------------------------------------------------------------------------------\n PostgreSQL 7.4.1 on i686-pc-cygwin, compiled by GCC gcc (GCC) 3.3.1\n(cygming special)\n(1 row)\n\n\n-- \nOctavio Alvarez.\nE-mail: [email protected].\n\nAgradezco que sus correos sean enviados siempre a esta direcci�n.\n\n\n\n-- \nOctavio Alvarez.\nE-mail: [email protected].\n\nAgradezco 
que sus correos sean enviados siempre a esta direcci�n.\n", "msg_date": "Thu, 11 Mar 2004 21:41:26 -0800 (PST)", "msg_from": "\"Octavio Alvarez\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sorting when LEFT JOINING to 2 same tables, even aliased." }, { "msg_contents": "\nOn Thu, 11 Mar 2004, Octavio Alvarez wrote:\n\n>\n> Hello to everybody.\n>\n> I ask your help for a severe problem when doing a query that LEFT JOINs\n> one table to another ON a field, and then LEFT JOINs again to another\n> \"instance\" of a table ON another field which stores the same entity, but\n> with different meaning.\n>\n> I include 3 EXPLAIN ANALYZEs:\n> * The first one, the target (and problematic) query, which runs in 5 to 6\n> minutes.\n> * The second one, a variation with the second LEFT JOIN commented out,\n> which runs in 175 to 450 ms.\n> * The third one, a variation of the first one with ORDER BY removed, which\n> gives me about 19 seconds.\n>\n> Therefore, I feel like there are two problems here the one that raises the\n> clock to 6 minutes and one that raises it to 20 seconds. I expected a much\n> lower time. I checked indexes and data types already, they are all fine.\n> All relevant fields have BTREEs, all PKs have UNIQUE BTREE, and all id and\n> ext_* fields have 'integer' as data type. Each ext_* has its corresponding\n> REFERENCES contraint.\n>\n> I translated all the table and field names to make it easier to read. I\n> made my best not to let any typo go through.\n>\n> I'd appreciate any help.\n\nThis join filter\n> Join Filter: (\"outer\".ext_materia__equivalencia =\n> \"outer\".id)\n\nwhich I believe belongs to\n\n> \tLEFT JOIN t_materias AS t_materias__equivalentes ON\n> ext_materia__equivalencia = t_materias.id\n\nseems wrong. Did you maybe mean = t_materias__equivalentes.id\nthere?\n\n", "msg_date": "Thu, 11 Mar 2004 22:09:16 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting when LEFT JOINING to 2 same tables, even" } ]
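A condensed sketch of the repair Stephan suggests: the second join must test against the alias. As written in the original query the ON clause compares two columns of the already-joined tables and never references t_materias__equivalentes at all, so whenever the condition holds every one of its 377 rows joins to the 3473 matched rows, which is where the 1,309,321-row result (3473 x 377) in the plans above comes from.

SELECT t_materias_en_tira.id,
       t_materias.nombre               AS materia,
       t_materias__equivalentes.nombre AS materia_equivalente
FROM t_materias_en_tira
LEFT JOIN t_materias
       ON ext_materia = t_materias.id
LEFT JOIN t_materias AS t_materias__equivalentes
       ON ext_materia__equivalencia = t_materias__equivalentes.id;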
[ { "msg_contents": "greetings!\non a dedicated pgsql server is putting pg_xlog\nin drive as OS almost equivalent to putting on a seperate\ndrive?\n\n\nin both case the actual data files are in a seperate\ndrive.\n\nregds\nmallah\n", "msg_date": "Sat, 13 Mar 2004 00:42:59 +0530 (IST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pg_xlog on same drive as OS" }, { "msg_contents": "Mallah,\n\n> on a dedicated pgsql server is putting pg_xlog\n> in drive as OS almost equivalent to putting on a seperate\n> drive?\n\nYes. If I have limited drives, this is what I do.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 12 Mar 2004 12:03:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog on same drive as OS" }, { "msg_contents": "In the last exciting episode, [email protected] wrote:\n> greetings!\n> on a dedicated pgsql server is putting pg_xlog\n> in drive as OS almost equivalent to putting on a seperate\n> drive?\n>\n> in both case the actual data files are in a seperate\n> drive.\n\nWell, if the OS drive is relatively inactive, then it may be \"separate\nenough\" that you won't find performance hurt too much by this.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/postgresql.html\nMICROS~1 is not the answer.\nMICROS~1 is the question.\nNO (or Linux) is the answer.\n", "msg_date": "Fri, 12 Mar 2004 15:52:49 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog on same drive as OS" } ]
[ { "msg_contents": "We upgraded from 8GB to 12GB RAM a month or so ago, but even in the\npast, I've never seen the system exhaust on it's system cache (~6GB, in\n'top'), while it's swapping.\n\nSome one had mentioned why not have the entire DB in memory? How do I\nconfigure that, for knowledge?\n\nMax connections is set to 500, and we haven't bumped it yet. (I've seen\nover 200 active queries, but the traffic is seasonal, so the high\nconnection value)\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Robert Treat [mailto:[email protected]] \nSent: Friday, March 12, 2004 6:02 PM\nTo: William Yu\nCc: [email protected]\nSubject: Re: [PERFORM] Scaling further up\n\n\nOn Mon, 2004-03-08 at 11:40, William Yu wrote:\n> Anjan Dave wrote:\n> > Great response, Thanks.\n> > \n> > Regarding 12GB memory and 13G db, and almost no I/O, one thing I \n> > don't understand is that even though the OS caches most of the \n> > memory and PG can use it if it needs it, why would the system swap \n> > (not much, only during peak times)? The SHMMAX is set to 512MB, \n> > shared_buffers is 150MB, effective cache size is 2GB, sort mem is \n> > 2MB, rest is default values. It also happens that a large query \n> > (reporting type) can hold up the other queries, and the load \n> > averages shoot up during peak times.\n> \n> In regards to your system going to swap, the only item I see is \n> sort_mem\n> at 2MB. How many simultaneous transactions do you get? If you get \n> hundreds or thousands like your first message stated, every select\nsort \n> would take up 2MB of memory regardless of whether it needed it or not.\n\n> That could cause your swap activity during peak traffic.\n> \n> The only other item to bump up is the effective cache size -- I'd set \n> it\n> to 12GB.\n> \n\nWas surprised that no one corrected this bit of erroneous info (or at\nleast I didn't see it) so thought I would for completeness. a basic\nexplanation is that sort_mem controls how much memory a given query is\nallowed to use before spilling to disk, but it will not grab that much\nmemory if it doesn't need it. \n\nSee the docs for a more detailed explanation:\nhttp://www.postgresql.org/docs/7.4/interactive/runtime-config.html#RUNTI\nME-CONFIG-RESOURCE\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n", "msg_date": "Fri, 12 Mar 2004 18:25:48 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Fri, Mar 12, 2004 at 06:25:48PM -0500, Anjan Dave wrote:\n> We upgraded from 8GB to 12GB RAM a month or so ago, but even in the\n> past, I've never seen the system exhaust on it's system cache (~6GB, in\n> 'top'), while it's swapping.\n> \n> Some one had mentioned why not have the entire DB in memory? How do I\n> configure that, for knowledge?\n\nYou don't. 
It'll automatically be in memory if (a) you have enough\nmemory, (b) you don't have anything else on the machine using the\nmemory, and (c) it's been read at least one time.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n", "msg_date": "Mon, 15 Mar 2004 15:09:31 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "Quoting Andrew Sullivan <[email protected]>:\n\n> On Fri, Mar 12, 2004 at 06:25:48PM -0500, Anjan Dave wrote:\n> > We upgraded from 8GB to 12GB RAM a month or so ago, but even in the\n> > past, I've never seen the system exhaust on it's system cache (~6GB, in\n> > 'top'), while it's swapping.\n> > \n> > Some one had mentioned why not have the entire DB in memory? How do I\n> > configure that, for knowledge?\n> \n> You don't. It'll automatically be in memory if (a) you have enough\n> memory, (b) you don't have anything else on the machine using the\n> memory, and (c) it's been read at least one time.\n\nThis is the preferred method, but you could create a memory disk if running\nlinux. This has several caveats, though.\n\n1. You may have to recompile the kernel for support.\n2. You must store the database on a hard drive partition during reboots.\n3. Because of #2 this option is generally useful if you have static content that\nis loaded to the MD upon startup of the system. \n\nYou could have some fancy methodology of shutting down the system and then\ncopying the data to a disk-based filesystem, but this is inherently bad since\nat any moment a power outage would erase any updates changes.\n\nThe option is there to start with all data in memory, but in general, this is\nprobablt not what you want. Just an FYI.\n", "msg_date": "Mon, 15 Mar 2004 13:28:35 -0700", "msg_from": "Matt Davies <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" }, { "msg_contents": "On Tue, 2004-03-16 at 07:28, Matt Davies wrote:\n> This is the preferred method, but you could create a memory disk if running\n> linux. This has several caveats, though.\n> \n> 1. You may have to recompile the kernel for support.\n> 2. You must store the database on a hard drive partition during reboots.\n> 3. Because of #2 this option is generally useful if you have static content that\n> is loaded to the MD upon startup of the system. \n\nAnd 4. You use twice as much memory - one lot for the FS, the second for\nbuffer cache.\n\nIt's generally going to be slower than simply doing some typical queries\nto preload the data into buffer cache, I think.\n\n\tStephen", "msg_date": "Tue, 16 Mar 2004 09:47:50 +1100", "msg_from": "Stephen Robert Norris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling further up" } ]
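A sketch of the pre-loading Stephen describes: simply read the hot tables once after a restart so their pages end up in the OS cache and PostgreSQL's buffers. The table names here are placeholders for whatever the working set actually is:

SELECT count(*) FROM orders;        -- a full scan pulls the whole table through the cache
SELECT count(*) FROM order_lines;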
[ { "msg_contents": "I'm trying to troubleshoot a performance issue on an application ported \nfrom Oracle to postgres. Now, I know the best way to get help is to post \nthe schema, explain analyze output, etc, etc -- unfortunately I can't do \nthat at the moment. However, maybe someone can point me in the right \ndirection to figure this out on my own. That said, here are a few details...\n\nPostgreSQL 7.4.1\nbash-2.03$ uname -a\nSunOS col65 5.8 Generic_108528-27 sun4u sparc SUNW,Sun-Fire-280R\n\nThe problem is this: the application runs an insert, that fires off a \ntrigger, that cascades into a fairly complex series of functions, that \ndo a bunch of calculations, inserts, updates, and deletes. Immediately \nafter a postmaster restart, the first insert or two take about 1.5 \nminutes (undoubtedly this could be improved, but it isn't the main \nissue). However by the second or third insert, the time increases to 7 - \n9 minutes. Restarting the postmaster causes the cycle to repeat, i.e. \nthe first one or two inserts are back to the 1.5 minute range.\n\nAny ideas spring to mind? I don't have much experience with Postgres on \nSolaris -- could it be related to that somehow?\n\nThanks for any insights.\n\nJoe\n", "msg_date": "Fri, 12 Mar 2004 17:38:37 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "rapid degradation after postmaster restart" }, { "msg_contents": "Six days ago I installed Pg 7.4.1 on Sparc Solaris 8 also. I am hopeful \nthat we as well can migrate a bunch of our apps from Oracle.\n\nAfter doing some informal benchmarks and performance testing for the \npast week I am becoming more and more impressed with what I see.\n\nI have seen similar results to what you are describing.\n\nI found that running a full vacuum:\n\nvacuumdb -fza\n\nfollowed by a checkpoint makes it run fast again.\n\nTry timing the update with and without a full vacuum.\n\nI can't help but wonder if a clean shutdown includes some vacuuming.\n\nObviously, in a production database this would be an issue.\n\nPlease post back what you learn.\n\nSincerely,\nMarty\n\nI have been doing a bunch of informat\n\nJoe Conway wrote:\n> I'm trying to troubleshoot a performance issue on an application ported \n> from Oracle to postgres. Now, I know the best way to get help is to post \n> the schema, explain analyze output, etc, etc -- unfortunately I can't do \n> that at the moment. However, maybe someone can point me in the right \n> direction to figure this out on my own. That said, here are a few \n> details...\n> \n> PostgreSQL 7.4.1\n> bash-2.03$ uname -a\n> SunOS col65 5.8 Generic_108528-27 sun4u sparc SUNW,Sun-Fire-280R\n> \n> The problem is this: the application runs an insert, that fires off a \n> trigger, that cascades into a fairly complex series of functions, that \n> do a bunch of calculations, inserts, updates, and deletes. Immediately \n> after a postmaster restart, the first insert or two take about 1.5 \n> minutes (undoubtedly this could be improved, but it isn't the main \n> issue). However by the second or third insert, the time increases to 7 - \n> 9 minutes. Restarting the postmaster causes the cycle to repeat, i.e. \n> the first one or two inserts are back to the 1.5 minute range.\n> \n> Any ideas spring to mind? 
I don't have much experience with Postgres on \n> Solaris -- could it be related to that somehow?\n> \n> Thanks for any insights.\n> \n> Joe\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n", "msg_date": "Fri, 12 Mar 2004 19:51:06 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> The problem is this: the application runs an insert, that fires off a \n> trigger, that cascades into a fairly complex series of functions, that \n> do a bunch of calculations, inserts, updates, and deletes. Immediately \n> after a postmaster restart, the first insert or two take about 1.5 \n> minutes (undoubtedly this could be improved, but it isn't the main \n> issue). However by the second or third insert, the time increases to 7 - \n> 9 minutes. Restarting the postmaster causes the cycle to repeat, i.e. \n> the first one or two inserts are back to the 1.5 minute range.\n\nI realize this question might take some patience to answer, but what\ndoes the performance curve look like beyond three trials? Does it level\noff or continue to get worse? If it doesn't level off, does the\ndegradation seem linear in the number of trials, or worse than linear?\n\nI have no ideas in mind, just trying to gather data ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Mar 2004 23:02:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart " }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> ... Immediately \n> after a postmaster restart, the first insert or two take about 1.5 \n> minutes (undoubtedly this could be improved, but it isn't the main \n> issue). However by the second or third insert, the time increases to 7 - \n> 9 minutes. Restarting the postmaster causes the cycle to repeat, i.e. \n> the first one or two inserts are back to the 1.5 minute range.\n\nJust to be clear on this: you have to restart the postmaster to bring\nthe time back down? Simply starting a fresh backend session doesn't do\nit?\n\nAre you using particularly large values for shared_buffers or any of the\nother resource parameters?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Mar 2004 10:39:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart " }, { "msg_contents": "Tom Lane wrote:\n> I realize this question might take some patience to answer, but what\n> does the performance curve look like beyond three trials? Does it level\n> off or continue to get worse? If it doesn't level off, does the\n> degradation seem linear in the number of trials, or worse than linear?\n\nI try to gather some data during the weekend and report back.\n\nThanks,\n\nJoe\n", "msg_date": "Sat, 13 Mar 2004 07:51:48 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Tom Lane wrote:\n> Just to be clear on this: you have to restart the postmaster to bring\n> the time back down? Simply starting a fresh backend session doesn't do\n> it?\n\nYes, a full postmaster restart is needed. 
It is a command line script \nthat does the insert, so each one is a new backend.\n\n> Are you using particularly large values for shared_buffers or any of the\n> other resource parameters?\n\nI'll have to look at this again (I have to vpn in to the company lan \nwhich kills all my current connections) -- the server and application \nbelong to another department at my employer.\n\nIIRC, shared buffers was reasonable, maybe 128MB. One thing that is \nworthy of note is that they are using pg_autovacuum and a very low \nvacuum_mem setting (1024). But I also believe that max_fsm_relations and \nmax_fsm_pages have been bumped up from default (something like 10000 & \n200000).\n\nI'll post the non-default postgresql.conf settings shortly. The extended \ntests discussed in the nearby post will take a bit more time to get.\n\nThanks,\n\nJoe\n\n", "msg_date": "Sat, 13 Mar 2004 08:03:25 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Marty Scholes wrote:\n> I have seen similar results to what you are describing.\n> \n> I found that running a full vacuum:\n> \n> vacuumdb -fza\n> \n> followed by a checkpoint makes it run fast again.\n> \n> Try timing the update with and without a full vacuum.\n\nWill do. I'll let you know how it goes.\n\nThanks for the reply.\n\nJoe\n\n", "msg_date": "Sat, 13 Mar 2004 08:07:12 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe,\n\n> IIRC, shared buffers was reasonable, maybe 128MB. One thing that is \n> worthy of note is that they are using pg_autovacuum and a very low \n> vacuum_mem setting (1024). But I also believe that max_fsm_relations and \n> max_fsm_pages have been bumped up from default (something like 10000 & \n> 200000).\n\npg_autovacuum may be your problem. Imagine this:\n\n1) The chain of updates and inserts called by the procedures makes enough \nchanges, on its own, to trigger pg_autovacuum.\n2) Because they have a big database, and a low vacuum_mem, a vacuum of the \nlargest table takes noticable time, like several minutes.\n3) This means that the vacuum is still running during the second and \nsucceeding events ....\n\nSomething to check by watching the process list.\n\nFWIW, I don't use pg_autovacuum for databases which have frequent large batch \nupdates; I find it results in uneven performance.\n\nFeel free to phone me if you're still stuck!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 13 Mar 2004 08:51:22 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe Conway wrote:\n\n> Tom Lane wrote:\n>\n>> Just to be clear on this: you have to restart the postmaster to bring\n>> the time back down? Simply starting a fresh backend session doesn't do\n>> it?\n>\n>\n> IIRC, shared buffers was reasonable, maybe 128MB. One thing that is \n> worthy of note is that they are using pg_autovacuum and a very low \n> vacuum_mem setting (1024). But I also believe that max_fsm_relations \n> and max_fsm_pages have been bumped up from default (something like \n> 10000 & 200000).\n>\n\npg_autovacuum could be a problem if it's vacuuming too often. Have you \nlooked to see if a vacuum or analyze is running while the server is \nslow? 
If so, have you played with the pg_autovacuum default vacuum and \nanalyze thresholds? If it appears that it is related to pg_autovacuum \nplease send me the command options used to run it and a logfile of it's \noutput running at at a debug level of -d2\n\n\nMatthew\n\n", "msg_date": "Sat, 13 Mar 2004 12:33:43 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe Conway wrote:\n\n> A few pg_autovacuum questions came out of this:\n>\n> First, the default vacuum scaling factor is 2, which I think implies\n> the big table should only get vacuumed every 56 million or so changes.\n> I didn't come anywhere near that volume in my tests, yet the table did\n> get vacuumed more than once (I was watching the pg_autovacuum log\n> output). Do I misunderstand this setting?\n\n\nI think you understand correctly. A table with 1,000,000 rows should \nget vacuumed approx every 2,000,000 changes (assuming default values for \n-V ). FYI and insert and a delete count as one change, but and update \ncounts as two.\n\nUnfortunately, the running with -d2 would show the numbers that \npg_autovacuum is using to decide if it when it should vacuum or \nanalyze. Also, are you sure that it vacuumed more than once and \nwasn't doing analyzes most of the time? \n\nAlso, I'm not sure if 2 is a good default value for the scaling factor \nbut I erred on the side of not vacuuming too often.\n\n> Second, Matthew requested pg_autovacuum run with -d2; I found that\n> with -d2 set, pg_autovacuum would immediately exit on start. -d0 and\n> -d1 work fine however.\n\n\nThat's unfortunate as that is the detail we need to see what \npg_autovacuum thinks is really going on. We had a similar sounding \ncrash on FreeBSD due to some unitialized variables that were being \nprinted out by the debug code, however that was fixed a long time ago. \nAny chance you can look into this?\n\n> That's all I can think of at the moment. I'd like to try the 7.4 patch \n> that makes vacuum sleep every few pages -- can anyone point me to the \n> latest and greatest that will apply to 7.4?\n\n\nYes I would be very curious to see the results with the vacuum delay \npatch installed (is that patch applied to HEAD?)\n\n\n", "msg_date": "Mon, 15 Mar 2004 15:40:42 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Matthew T. O'Connor wrote:\n> I think you understand correctly. A table with 1,000,000 rows should \n> get vacuumed approx every 2,000,000 changes (assuming default values for \n> -V ). FYI and insert and a delete count as one change, but and update \n> counts as two.\n> \n> Unfortunately, the running with -d2 would show the numbers that \n> pg_autovacuum is using to decide if it when it should vacuum or \n> analyze. Also, are you sure that it vacuumed more than once and \n> wasn't doing analyzes most of the time?\n\nYeah, I'm sure. 
Snippets from the log:\n\n[...lots-o-tables...]\n[2004-03-14 12:44:48 PM] added table: specdb.\"public\".\"parametric_states\"\n[2004-03-14 12:49:48 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 01:29:59 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 02:08:26 PM] Performing: ANALYZE \"public\".\"out_of_spec\"\n[2004-03-14 02:08:26 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 02:22:44 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n[2004-03-14 03:06:45 PM] Performing: VACUUM ANALYZE \"public\".\"out_of_spec\"\n[2004-03-14 03:06:45 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 03:19:51 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n[2004-03-14 03:21:09 PM] Performing: ANALYZE \"public\".\"parametric_states\"\n[2004-03-14 03:54:57 PM] Performing: ANALYZE \"public\".\"out_of_spec\"\n[2004-03-14 03:54:57 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 04:07:52 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n[2004-03-14 04:09:33 PM] Performing: ANALYZE \"public\".\"equip_status_history\"\n[2004-03-14 04:09:33 PM] Performing: VACUUM ANALYZE \n\"public\".\"parametric_states\"\n[2004-03-14 04:43:46 PM] Performing: VACUUM ANALYZE \"public\".\"out_of_spec\"\n[2004-03-14 04:43:46 PM] Performing: VACUUM ANALYZE \n\"public\".\"transaction_data\"\n[2004-03-14 04:56:35 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n[2004-03-14 04:58:32 PM] Performing: ANALYZE \"public\".\"parametric_states\"\n[2004-03-14 05:28:58 PM] added database: specdb\n\nThis is the entire period of the first test, with default autovac \nsettings. The table \"public\".\"transaction_data\" is the one with 28 \nmillion active rows. The entire test run inserts about 600 x 600 = \n360,000 rows, out of which roughly two-thirds are later deleted.\n\n> That's unfortunate as that is the detail we need to see what \n> pg_autovacuum thinks is really going on. We had a similar sounding \n> crash on FreeBSD due to some unitialized variables that were being \n> printed out by the debug code, however that was fixed a long time ago. \n> Any chance you can look into this?\n\nI can try. The server belongs to another department, and they are under \nthe gun to get back on track with their testing. Also, they compiled \nwithout debug symbols, so I need to get permission to recompile.\n\n> Yes I would be very curious to see the results with the vacuum delay \n> patch installed (is that patch applied to HEAD?)\n\nAny idea where I can get my hands on the latest version. I found the \noriginal post from Tom, but I thought there was a later version with \nboth number of pages and time to sleep as knobs.\n\nThanks,\n\nJoe\n", "msg_date": "Mon, 15 Mar 2004 20:59:16 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> Any idea where I can get my hands on the latest version. I found the \n> original post from Tom, but I thought there was a later version with \n> both number of pages and time to sleep as knobs.\n\nThat was as far as I got. 
I think Jan posted a more complex version\nthat would still be reasonable to apply to 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 00:25:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart " }, { "msg_contents": "Joe Conway wrote:\n\n> Yeah, I'm sure. Snippets from the log:\n>\n> [...lots-o-tables...]\n> [2004-03-14 12:44:48 PM] added table: specdb.\"public\".\"parametric_states\"\n> [2004-03-14 12:49:48 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 01:29:59 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 02:08:26 PM] Performing: ANALYZE \"public\".\"out_of_spec\"\n> [2004-03-14 02:08:26 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 02:22:44 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n> [2004-03-14 03:06:45 PM] Performing: VACUUM ANALYZE \n> \"public\".\"out_of_spec\"\n> [2004-03-14 03:06:45 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 03:19:51 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n> [2004-03-14 03:21:09 PM] Performing: ANALYZE \"public\".\"parametric_states\"\n> [2004-03-14 03:54:57 PM] Performing: ANALYZE \"public\".\"out_of_spec\"\n> [2004-03-14 03:54:57 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 04:07:52 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n> [2004-03-14 04:09:33 PM] Performing: ANALYZE \n> \"public\".\"equip_status_history\"\n> [2004-03-14 04:09:33 PM] Performing: VACUUM ANALYZE \n> \"public\".\"parametric_states\"\n> [2004-03-14 04:43:46 PM] Performing: VACUUM ANALYZE \n> \"public\".\"out_of_spec\"\n> [2004-03-14 04:43:46 PM] Performing: VACUUM ANALYZE \n> \"public\".\"transaction_data\"\n> [2004-03-14 04:56:35 PM] Performing: VACUUM ANALYZE \"public\".\"spc_graphs\"\n> [2004-03-14 04:58:32 PM] Performing: ANALYZE \"public\".\"parametric_states\"\n> [2004-03-14 05:28:58 PM] added database: specdb\n\n\nYeah, you're right.....\n\n> This is the entire period of the first test, with default autovac \n> settings. The table \"public\".\"transaction_data\" is the one with 28 \n> million active rows. The entire test run inserts about 600 x 600 = \n> 360,000 rows, out of which roughly two-thirds are later deleted.\n\n\nStrange... I wonder if this is some integer overflow problem. There was \none reported recently and fixed as of CVS head yesterday, you might try \nthat, however without the -d2 output I'm only guessing at why \npg_autovacuum is vacuuming so much / so often.\n\n> I can try. The server belongs to another department, and they are \n> under the gun to get back on track with their testing. Also, they \n> compiled without debug symbols, so I need to get permission to recompile.\n\n\nGood luck, I hope you can get permission. Would e nice to fix this \nlittle crash.\n\n>> Yes I would be very curious to see the results with the vacuum delay \n>> patch installed (is that patch applied to HEAD?)\n>\n>\n> Any idea where I can get my hands on the latest version. I found the \n> original post from Tom, but I thought there was a later version with \n> both number of pages and time to sleep as knobs.\n\n\nI think Jan posted one a while back.... [searches archives...] But I \nmust say I'm at a loss to find it in the archives. Anyone know where a \ngood delay patch is for 7.4? 
If we can't find one, any chance you can \ndo some testing with CVS HEAD just to see if that works any better. I \nknow there has been a fair amount of work done to improve this situation \n(not just vacuum delay, but ARC etc...)\n.\n", "msg_date": "Tue, 16 Mar 2004 00:32:27 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>>Any idea where I can get my hands on the latest version. I found the \n>>original post from Tom, but I thought there was a later version with \n>>both number of pages and time to sleep as knobs.\n> \n> That was as far as I got. I think Jan posted a more complex version\n> that would still be reasonable to apply to 7.4.\n\nI thought that too, but was having trouble finding it. I'll look again.\n\nThanks,\n\nJoe\n\n", "msg_date": "Mon, 15 Mar 2004 21:38:03 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Matthew T. O'Connor wrote:\n> Strange... I wonder if this is some integer overflow problem. There was \n> one reported recently and fixed as of CVS head yesterday, you might try \n> that, however without the -d2 output I'm only guessing at why \n> pg_autovacuum is vacuuming so much / so often.\n\nI'll see what I can do tomorrow to track it down.\n\nI have already recommended to the program manager that they switch to \n7.4.2 plus the autovacuum patch. Not sure they will be willing to make \nany changes at this stage in their release process though.\n\n> If we can't find one, any chance you can \n> do some testing with CVS HEAD just to see if that works any better. I \n> know there has been a fair amount of work done to improve this situation \n> (not just vacuum delay, but ARC etc...)\n\nI might do that, but not likely on Solaris. I can probably get a copy of \nthe current database and testing scripts, and give it a try on one of my \nown machines (all Linux, either RHAS3, RH9, or Fedora).\n\nJoe\n\n", "msg_date": "Mon, 15 Mar 2004 21:48:05 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "[moving to hackers]\n\nMatthew T. O'Connor wrote:\n> Good luck, I hope you can get permission. Would e nice to fix this \n> little crash.\n\nI went ahead and recompiled with --enable-debug, and get this trace:\n\n#0 0xfefb3218 in strlen () from /usr/lib/libc.so.1\n#1 0xff006520 in _doprnt () from /usr/lib/libc.so.1\n#2 0xff0082e8 in sprintf () from /usr/lib/libc.so.1\n#3 0x1213c in print_db_info (dbi=0x28980, print_tbl_list=0)\n at pg_autovacuum.c:681\n#4 0x120fc in print_db_list (db_list=0x25f80, print_table_lists=0)\n at pg_autovacuum.c:673\n#5 0x11b44 in init_db_list () at pg_autovacuum.c:416\n#6 0x12c58 in main (argc=154384, argv=0xff043a54) at pg_autovacuum.c:1007\n\nLine 681 is this:\n sprintf(logbuffer, \"dbname: %s Username %s Passwd %s\",\n dbi->dbname, dbi->username, dbi->password);\n\nIt appears that dbi->password is a null pointer:\n(gdb) print dbi->dbname\n$1 = 0x25f68 \"template1\"\n(gdb) print dbi->username\n$2 = 0x25b20 \"dba\"\n(gdb) print dbi->password\n$3 = 0x0\n\nProblem is, since this is a development machine, they have everything \nset to \"trust\" in pg_hba.conf. 
I added a \"-P foo\" to the command line, \nand it starts up fine now.\n\nHTH,\n\nJoe\n", "msg_date": "Mon, 15 Mar 2004 23:11:35 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] rapid degradation after postmaster restart" }, { "msg_contents": "> [moving to hackers]\n>\n> Line 681 is this:\n> sprintf(logbuffer, \"dbname: %s Username %s Passwd %s\",\n> dbi->dbname, dbi->username, dbi->password);\n>\n> It appears that dbi->password is a null pointer:\n> (gdb) print dbi->dbname\n> $1 = 0x25f68 \"template1\"\n> (gdb) print dbi->username\n> $2 = 0x25b20 \"dba\"\n> (gdb) print dbi->password\n> $3 = 0x0\n>\n> Problem is, since this is a development machine, they have everything\n> set to \"trust\" in pg_hba.conf. I added a \"-P foo\" to the command line,\n> and it starts up fine now.\n\nOk, that is about what I figured the problem would be. I will try to take\na look at this soon and submit a patch. However since you can work around\nit now, can you do another test run with -d2?\n\nThanks for tracking this down.\n\nMatthew\n\n\n\n", "msg_date": "Tue, 16 Mar 2004 10:50:06 -0500 (EST)", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] rapid degradation after postmaster restart" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n> \n>>Any idea where I can get my hands on the latest version. I found the \n>>original post from Tom, but I thought there was a later version with \n>>both number of pages and time to sleep as knobs.\n> \n> That was as far as I got. I think Jan posted a more complex version\n> that would still be reasonable to apply to 7.4.\n\nI have tested Tom's original patch now. The good news -- it works great \nin terms of reducing the load imposed by vacuum -- almost to the level \nof being unnoticeable. The bad news -- in a simulation test which loads \nan hour's worth of data, even with delay set to 1 ms, vacuum of the \nlarge table exceeds two hours (vs 12-14 minutes with delay = 0). Since \nthat hourly load is expected 7 x 24, this obviously isn't going to work.\n\nThe problem with Jan's more complex version of the patch (at least the \none I found - perhaps not the right one) is it includes a bunch of other \nexperimental stuff that I'd not want to mess with at the moment. Would \nchanging the input units (for the original patch) from milli-secs to \nmicro-secs be a bad idea? If so, I guess I'll get to extracting what I \nneed from Jan's patch.\n\nThanks,\n\nJoe\n\n", "msg_date": "Tue, 16 Mar 2004 20:49:01 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "On Tue, 2004-03-16 at 23:49, Joe Conway wrote:\n> I have tested Tom's original patch now. The good news -- it works great \n> in terms of reducing the load imposed by vacuum -- almost to the level \n> of being unnoticeable. The bad news -- in a simulation test which loads \n> an hour's worth of data, even with delay set to 1 ms, vacuum of the \n> large table exceeds two hours (vs 12-14 minutes with delay = 0). Since \n> that hourly load is expected 7 x 24, this obviously isn't going to work.\n\nIf memory serves, the problem is that you actually sleep 10ms even when\nyou set it to 1. One of the thing changed in Jan's later patch was the\nability to specify how many pages to work on before sleeping, rather\nthan how long to sleep inbetween every 1 page. 
You might be able to do\na quick hack and have it do 10 pages or so before sleeping.\n\nMatthew\n\n", "msg_date": "Wed, 17 Mar 2004 00:12:20 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I have tested Tom's original patch now. The good news -- it works great \n> in terms of reducing the load imposed by vacuum -- almost to the level \n> of being unnoticeable. The bad news -- in a simulation test which loads \n> an hour's worth of data, even with delay set to 1 ms, vacuum of the \n> large table exceeds two hours (vs 12-14 minutes with delay = 0). Since \n> that hourly load is expected 7 x 24, this obviously isn't going to work.\n\nTurns the dial down a bit too far then ...\n\n> The problem with Jan's more complex version of the patch (at least the \n> one I found - perhaps not the right one) is it includes a bunch of other \n> experimental stuff that I'd not want to mess with at the moment. Would \n> changing the input units (for the original patch) from milli-secs to \n> micro-secs be a bad idea?\n\nUnlikely to be helpful; on most kernels the minimum sleep delay is 1 or\n10 msec, so asking for a few microsec is the same as asking for some\nmillisec. I think what you need is a knob of the form \"sleep N msec\nafter each M pages of I/O\". I'm almost certain that Jan posted such a\npatch somewhere between my original and the version you refer to above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 2004 00:17:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart " }, { "msg_contents": "Matthew T. O'Connor wrote:\n> If memory serves, the problem is that you actually sleep 10ms even when\n> you set it to 1. One of the thing changed in Jan's later patch was the\n> ability to specify how many pages to work on before sleeping, rather\n> than how long to sleep inbetween every 1 page. You might be able to do\n> a quick hack and have it do 10 pages or so before sleeping.\n\nI thought I remembered something about that.\n\nIt turned out to be less difficult than I first thought to extract the \nvacuum delay stuff from Jan's performance patch. I haven't yet tried it \nout, but it's attached in case you are interested. I'll report back once \nI have some results.\n\nJoe", "msg_date": "Tue, 16 Mar 2004 21:18:27 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "> The problem with Jan's more complex version of the patch (at least the\n> one I found - perhaps not the right one) is it includes a bunch of other\n> experimental stuff that I'd not want to mess with at the moment. Would\n> changing the input units (for the original patch) from milli-secs to\n> micro-secs be a bad idea? 
If so, I guess I'll get to extracting what I\n> need from Jan's patch.\n\nJan's vacuum-delay-only patch that nobody can find is here:\n\nhttp://archives.postgresql.org/pgsql-hackers/2003-11/msg00518.php\n\nI've been using it in testing & production without any problems.\n", "msg_date": "Wed, 17 Mar 2004 09:40:35 -0600 (CST)", "msg_from": "\"Arthur Ward\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Sorry I haven't had a chance to reply to this sooner.\n\nOn Fri, Mar 12, 2004 at 05:38:37PM -0800, Joe Conway wrote:\n> The problem is this: the application runs an insert, that fires off a \n> trigger, that cascades into a fairly complex series of functions, that \n> do a bunch of calculations, inserts, updates, and deletes. Immediately \n> after a postmaster restart, the first insert or two take about 1.5 \n> minutes (undoubtedly this could be improved, but it isn't the main \n> issue). However by the second or third insert, the time increases to 7 - \n> 9 minutes. Restarting the postmaster causes the cycle to repeat, i.e. \n> the first one or two inserts are back to the 1.5 minute range.\n\nThe vacuum delay stuff that you're working on may help, but I can't\nreally believe it's your salvation if this is happening after only a\nfew minutes. No matter how much you're doing inside those functions,\nyou surely can't be causing so many dead tuples that a vacuum is\nnecessary that soon. Did you try not vacuuming for a little while to\nsee if it helps?\n\nI didn't see it anywhere in this thread, but are you quite sure that\nyou're not swapping? Note that vmstat on multiprocessor Solaris\nmachines is not notoriously useful. You may want to have a look at\nwhat the example stuff in the SE Toolkit tells you, or what you get\nfrom sar. I believe you have to use a special kernel setting on\nSolaris to mark shared memory as being ineligible for swap.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Wed, 17 Mar 2004 13:11:15 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Andrew Sullivan wrote:\n> Sorry I haven't had a chance to reply to this sooner.\n\n> The vacuum delay stuff that you're working on may help, but I can't\n> really believe it's your salvation if this is happening after only a\n> few minutes. No matter how much you're doing inside those functions,\n> you surely can't be causing so many dead tuples that a vacuum is\n> necessary that soon. Did you try not vacuuming for a little while to\n> see if it helps?\n\nI discussed it later in the thread, but we're adding about 400K rows per \nhour and deleting most of them after processing (note this is a \ncommercial app, written and maintained by another department -- I can \nrecommend changes, but this late into their release cycle they are very \nreluctant to change the app). This is 7 x 24 data collection from \nequipment, so there is no \"slow\" time to use as a maintenance window.\n\nBut since the server in question is a test machine, I was able to shut \neverything off long enough to do a full vacuum -- it took about 12 hours.\n\n> I didn't see it anywhere in this thread, but are you quite sure that\n> you're not swapping? 
Note that vmstat on multiprocessor Solaris\n> machines is not notoriously useful. You may want to have a look at\n> what the example stuff in the SE Toolkit tells you, or what you get\n> from sar. I believe you have to use a special kernel setting on\n> Solaris to mark shared memory as being ineligible for swap.\n\nI'm (reasonably) sure there is no swapping. Minimum free memory (from \ntop) is about 800 MB, and \"vmstat -S\" shows no swap-in or swap-out.\n\nI've been playing with a version of Jan's performance patch in the past \nfew hours. Based on my simulations, it appears that a 1 ms delay every \n10 pages is just about right. The performance hit is negligible (based \non overall test time, and cpu % used by the vacuum process). I still \nhave a bit more analysis to do, but this is looking pretty good. More \nlater...\n\nJoe\n", "msg_date": "Wed, 17 Mar 2004 11:19:36 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Arthur Ward wrote:\n> Jan's vacuum-delay-only patch that nobody can find is here:\n> \n> http://archives.postgresql.org/pgsql-hackers/2003-11/msg00518.php\n> \n> I've been using it in testing & production without any problems.\n\nGreat to know -- many thanks.\n\nI've hacked my own vacuum-delay-only patch form Jan's all_performance \npatch. It looks like the only difference is that it uses usleep() \ninstead of select(). So far the tests look promising.\n\nThanks,\n\nJoe\n\n", "msg_date": "Wed, 17 Mar 2004 11:38:54 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rapid degradation after postmaster restart" }, { "msg_contents": "Andrew Sullivan wrote:\n\n>The vacuum delay stuff that you're working on may help, but I can't\n>really believe it's your salvation if this is happening after only a\n>few minutes. No matter how much you're doing inside those functions,\n>you surely can't be causing so many dead tuples that a vacuum is\n>necessary that soon. Did you try not vacuuming for a little while to\n>see if it helps?\n> \n>\n\nSome of this thread was taken off line so I'm not sure it was mentioned \non the list, but a big part of the problem was that Joe was running into \nthe same bug that Cott Lang ran into a while ago which caused the vacuum \nthreshold to get set far too low resulting in vacuums far too often.. \nThis has been fixed and the patch has been committed unfortunately it \ndidn't make it into 7.4.2, but it will be in 7.4.3 / 7.5.\n\n>I didn't see it anywhere in this thread, but are you quite sure that\n>you're not swapping? Note that vmstat on multiprocessor Solaris\n>machines is not notoriously useful. You may want to have a look at\n>what the example stuff in the SE Toolkit tells you, or what you get\n>from sar. I believe you have to use a special kernel setting on\n>Solaris to mark shared memory as being ineligible for swap.\n> \n>\n\nI haven't heard from Joe how things are going with the fixed \npg_autovacuum but that in combination with the vacuum delay stuff should \nwork well.\n\nMatthew\n\n\n\n", "msg_date": "Wed, 17 Mar 2004 15:57:09 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rapid degradation after postmaster restart" } ]
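A practical way to act on Matthew's suggestion above (check whether a vacuum or analyze is what is actually running while the server is slow) is to look at pg_stat_activity during one of the slow inserts. The following is only a sketch: it assumes a 7.4-era server with stats_command_string enabled (otherwise current_query stays empty), and uses the column names that view had at the time (procpid, usename, current_query); the filter pattern is just an illustration.

SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE current_query ILIKE 'vacuum%'
    OR current_query ILIKE 'analyze%';

If a VACUUM of the big table shows up there every time the trigger chain slows down, that points at pg_autovacuum's thresholds (or the missing vacuum delay) rather than at the functions themselves.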
[ { "msg_contents": "Hi list,\n\nI was more or less toying with an idea for a project I have, which \nincludes renumbering a primary key (don't ask, it's necessary :/ )\n\nAnyway, I was looking into the usefullness of a INSERT INTO newtable \nSELECT field, field, CASE pkey WHEN x1 THEN y1 WHEN x2 THEN y2 etc END \nFROM oldtable\n\nThe resulting select was about 1.7MB of query-text, mostly composed of \nthe CASE-statement. So I discarded that idea, I still wanted to know how \nmuch time it would take on my database (MySQL) and found it to take \nabout 1100 seconds, in contrast to simply selecting the data, which'd \ntake about 0.7 seconds orso... The table I tested this on is about 30MB.\n\nOf course I wanted to know how long it'd take on postgresql, selecting \nthe pkey-field only (without the case) took also some 0.7 seconds (the \nentire table may have been more).\nBut the CASE-version took 9026139.201 ms, i.e. over 9000 seconds about 8 \ntimes slower than MySQL.\n\nWhat I'm wondering about:\nAlthough I was not expecting Postgresql to heavily beat MySQL, I was \nsurprised to see it so much slower. Is the CASE-statement in Postgresql \nthat inefficient? Or is it simply not very scalable (i.e. don't try to \nhave 100000 cases like I did)?\n\nThe database is a lightly optimised gentoo-compile of 7.4.2, the \nmysql-version was 4.0.18 in case anyone wanted to know that.\n\n\nBest regards,\n\nArjen van der Meijden\n\n\nPS, don't try to \"help improve the query\" I discarded the idea as too \ninefficient and went along with a simple left join to get the \"new pkey\" \nout of a temporary table ;)\n\n\n", "msg_date": "Sun, 14 Mar 2004 20:33:00 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Large CASE-statement is pretty slow?" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> Anyway, I was looking into the usefullness of a INSERT INTO newtable \n> SELECT field, field, CASE pkey WHEN x1 THEN y1 WHEN x2 THEN y2 etc END \n> FROM oldtable\n\n> The resulting select was about 1.7MB of query-text, mostly composed of \n> the CASE-statement.\n\nHm, you mean one single SELECT, one single CASE? How many WHEN clauses\nexactly? Exactly what did a typical clause of the CASE look like?\n\nI wouldn't be too surprised to find some bit of code that's O(N^2) in\nthe number of arms of the CASE, or something like that; it's not an area\nthat we've ever felt the need to optimize. But I'd like a fairly\nspecific test case before trying to look into it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Mar 2004 16:09:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large CASE-statement is pretty slow? " }, { "msg_contents": "Tom Lane wrote:\n\n> Arjen van der Meijden <[email protected]> writes:\n> \n>>Anyway, I was looking into the usefullness of a INSERT INTO newtable \n>>SELECT field, field, CASE pkey WHEN x1 THEN y1 WHEN x2 THEN y2 etc END \n>>FROM oldtable\n> \n> \n>>The resulting select was about 1.7MB of query-text, mostly composed of \n>>the CASE-statement.\n> \n> \n> Hm, you mean one single SELECT, one single CASE? How many WHEN clauses\n> exactly? Exactly what did a typical clause of the CASE look like?\nYes, one SELECT-query with one single CASE-statement.\nThe CASE-statement had the simple-case-structure like:\nSELECT CASE UserID WHEN 1 THEN 1 WHEN 34 THEN 2 ... 
etc\n\nI noticed, by the way, that the ordering is on the THEN y parameter, the \nx parameter (WHEN x THEN y) is \"more or less increasing\".\n\nBut some numbers:\nThe table I did my tests on has 88291 rows, I did the select on the \ninteger primary key, so the CASE was the only column in the select.\nI'm running the query again on a table that has only the primary key of \nmy original table and it seems to be as slow.\nI'm not really sure how many WHEN's there are in that CASE, but it is \nsupposed to be a relocation of all primary key-values to some other \nvalue, so it will contain some number close to that 88291.\n\n> I wouldn't be too surprised to find some bit of code that's O(N^2) in\n> the number of arms of the CASE, or something like that; it's not an area\n> that we've ever felt the need to optimize. But I'd like a fairly\n> specific test case before trying to look into it.\n\nWell, I have discarded this type of query as \"too inefficient\" and found \na better way, so don't feel the need to optimize it just because I \nnoticed it is slow with very large CASEs. Although CASEs with a few \nhundred WHENs wont be that uncommon and might improve a bit as well?\n\nI can send you the \"primary key only\"-table and the query off list if \nyou want to. That won't make me violate any privacy rule and is probably \na good test case?\n\nBest regards,\n\nArjen van der Meijden\n\n\n", "msg_date": "Mon, 15 Mar 2004 00:21:17 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large CASE-statement is pretty slow?" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n\n> \n> Of course I wanted to know how long it'd take on postgresql, selecting the\n> pkey-field only (without the case) took also some 0.7 seconds (the entire table\n> may have been more).\n> But the CASE-version took 9026139.201 ms, i.e. over 9000 seconds about 8 times\n> slower than MySQL.\n\nWas this the select with the CASE, or the update?\n\nIf you did the update and have lots of foreign key references to the table\nthen every record that's updated forces a check to make sure there are no\nreferences to that record (or an update if it's ON UPDATE CASCADE). If there\nare no indexes on the referencing table columns that will be very slow.\n\n-- \ngreg\n\n", "msg_date": "15 Mar 2004 12:15:45 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large CASE-statement is pretty slow?" }, { "msg_contents": "\nArjen van der Meijden <[email protected]> writes:\n\n> Well, I have discarded this type of query as \"too inefficient\" and found a\n> better way\n\nLoading the mapping into a table with an index and doing an update using\n\"from\" to do a join seems likely to end up being the most efficient method.\nPostgres would probably not even bother with the index and do a hash join.\n\n\n-- \ngreg\n\n", "msg_date": "15 Mar 2004 12:20:49 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large CASE-statement is pretty slow?" }, { "msg_contents": "Greg Stark wrote:\n\n> Arjen van der Meijden <[email protected]> writes:\n> \n> \n> Was this the select with the CASE, or the update?\n\nIt was just the select to see how long it'd take. 
I already anticipated \nit to be possibly a \"slow query\", so I only did the select first.\n\nBest regards,\n\nArjen van der Meijden\n\n\n", "msg_date": "Mon, 15 Mar 2004 19:54:21 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large CASE-statement is pretty slow?" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> [ huge CASE is pretty slow ]\n\nI did some profiling of the test case that Arjen was kind enough to send\nme. It seems there are two distinct problems. One is that the parser\nuses repeated lappend()'s to construct the list of CASE arms; this\nmakes building the structure O(N^2) in the number of arms. (If you\nsimply try to EXPLAIN the query, you find out that the parse time is\nabout a third of the run time :-( ... and 90% of that is spent inside\nnconc() which is the guts of lappend.) This problem is slated to be\nfixed by Neil Conway's upcoming rewrite of the list support, which will\nconvert lappend into a constant-time operation.\n\nThe other difficulty is that the evaluation machinery for arithmetic\nexpressions has a lot of overhead. The profile run shows:\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 38.15 41.92 41.92 229646 0.00 0.00 nconc\n 21.76 65.84 23.92 199054454 0.00 0.00 ExecEvalExpr\n 11.38 78.34 12.50 10000 0.00 0.00 ExecEvalCase\n 8.43 87.61 9.27 66348151 0.00 0.00 ExecEvalFuncArgs\n 8.12 96.54 8.93 66348151 0.00 0.00 ExecMakeFunctionResult\n 2.96 99.78 3.25 66348151 0.00 0.00 ExecEvalVar\n 1.23 101.14 1.36 10058 0.00 0.00 AllocSetCheck\n 1.23 102.49 1.35 66348151 0.00 0.00 ExecEvalOper\n 1.12 103.72 1.24 76537 0.00 0.00 OpernameGetCandidates\n 0.85 104.66 0.94 66424693 0.00 0.00 int4eq\n\n(Note: I added LIMIT 10000 to the query so that the CASE is only carried\nout 10000 times, rather than nearly 90000 times as in Arjen's original\ntest case. Without this, the call-counter overflows for ExecEvalExpr,\nand the time percentages seem to get confused. One must recognize\nthough that this overstates the parser overhead compared to the original\ntest case.)\n\nClearly the useful work (int4eq) is getting a bit swamped by the ExecEval\nmechanism. I have some ideas about reducing the overhead, which I'll\npost to the pghackers list in a bit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 2004 15:34:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large CASE-statement is pretty slow? " } ]
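To make Greg Stark's suggestion above concrete (load the old-to-new mapping into a table and update through a join instead of one enormous CASE), a minimal sketch could look like this. The names oldtable.pkey, keymap.old_id and keymap.new_id are invented for the example, and it assumes the new key values cannot collide with keys that have not been rewritten yet while the update runs.

CREATE TEMP TABLE keymap (old_id integer PRIMARY KEY, new_id integer NOT NULL);
-- populate keymap here (COPY or a batch of INSERTs), then let the planner see it
ANALYZE keymap;

UPDATE oldtable
   SET pkey = keymap.new_id
  FROM keymap
 WHERE oldtable.pkey = keymap.old_id;

PostgreSQL can then join the two tables in a single pass (typically a hash or merge join), instead of evaluating tens of thousands of CASE arms for every row as in the query that started this thread.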
[ { "msg_contents": "Hi,\n\nI have two functions: one is called by page visitors (roughly 2,000 times per hour)\nand the other only by the admin (say once per hour, or once per login).\nI often get a deadlock error after calling the admin function;\nyes, they both access the same table somewhere in the function code.\nThe admin function can take up to 20-30 seconds, while the visitor function takes just 20 to 30 ms.\nWhat can I do here? Do I have to study a lot about locking tables, or is it something else?\n\nThanks for your help.\nRegards,\nBoris\n", "msg_date": "Sun, 14 Mar 2004 22:43:41 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Deadlocks... " } ]
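The deadlock described above is most likely a lock-ordering problem: the slow admin function and the fast visitor function update rows in the shared table in different orders, so each one can end up waiting for a row the other already holds. The usual fixes are either to touch rows in the same order in both functions, or to let the long-running admin function take one table-level lock up front instead of collecting row locks piecemeal over 20-30 seconds. A rough sketch of the second option follows; the table name hits is invented for the example, and the trade-off is that visitor writes simply queue behind the admin run instead of deadlocking.

BEGIN;
-- taken before any row in the shared table is touched; visitor transactions
-- already in progress finish first, new ones wait, so no wait cycle can form
LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE;
-- ... the 20-30 second admin work that reads and updates hits ...
COMMIT;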
[ { "msg_contents": "We're in the throes of an MS SQL to PostgreSQL migration; our databases\ninclude a number of ~5M row tables. We decided to take this opportunity\nto clean up and slightly re-normalize our schemas, given what we've\nlearned about the data over its lifetime and such, else we wouldn't be\nexperiencing any of the following (we could instead just dump and `copy\nfrom`).\n\nWe have a temporary table, public.tempprod, containing 4.7M rows, one\nfor each row in account.cust. account.cust has, among others, two\ncolumns, prod and subprod, which we're trying to update from tempprod\njoined against prod. The update tends to take unnecessarily long--\nrather, we've had to finally kill it after its taking obscenely too\nlong.\n\nThe table:\n\n# \\d account.cust\n Table \"account.cust\"\n Column | Type | Modifiers \n-----------+-----------------------------+----------------------------------\n----\n custid | bigint | not null default\n | |\nnextval('account.custid_seq'::text)\n ownerid | integer | not null\n origid | text | not null\n pname | text |\n fname | text |\n mname | text |\n lname | text |\n suffix | text |\n addr1 | text |\n addr2 | text |\n addr3 | text |\n city | text |\n state | text |\n zip | text |\n zipplus | text |\n homeph | text |\n workph | text |\n otherph | text |\n ssn | text |\n isactive | boolean | default true\n createddt | timestamp without time zone | default now()\n prodid | bigint |\n subprodid | bigint |\nIndexes:\n \"cust_pkey\" primary key, btree (custid)\n \"ix_addr1\" btree (addr1) WHERE (addr1 IS NOT NULL)\n \"ix_addr2\" btree (addr2) WHERE (addr2 IS NOT NULL)\n \"ix_city\" btree (city) WHERE (city IS NOT NULL)\n \"ix_fname\" btree (fname) WHERE (fname IS NOT NULL)\n \"ix_homeph\" btree (homeph) WHERE (homeph IS NOT NULL)\n \"ix_lname\" btree (lname) WHERE (lname IS NOT NULL)\n \"ix_mname\" btree (mname) WHERE (mname IS NOT NULL)\n \"ix_origid\" btree (origid)\n \"ix_ssn\" btree (ssn) WHERE (ssn IS NOT NULL)\n \"ix_state\" btree (state) WHERE (state IS NOT NULL)\n \"ix_workph\" btree (workph) WHERE (workph IS NOT NULL)\n \"ix_zip\" btree (zip) WHERE (zip IS NOT NULL)\n\nWe're currently running on a dual Xeon 700 (I know, I know; it's what\nwe've got) with 2.5GB RAM and 4x36GB SCSI in hardware RAID 5 (Dell\nPerc3 something-or-other controller). 
If we can demonstrate that \nPostgreSQL will meet our needs, we'll be going production on a dual\nOpteron, maxed memory, with a 12-disk Fibre Channel array.\n\nThe query is:\n\nupdate account.cust set prodid = \n(select p.prodid from account.prod p\n\tjoin public.tempprod t on t.pool = p.origid\n\twhere custid = t.did)\n\nAnd then, upon its completion, s/prod/subprod/.\n\nThat shouldn't run overnight, should it, let alone for -days-?\n\nIn experimenting with ways of making the updates take less time, we tried\nadding product and subproduct columns to tempprod, and updating those.\nThat seemed to work marginally better:\n\nexplain analyze update public.tempprod set prodid = \n(select account.prod.prodid::bigint \n\tfrom account.prod \n\twhere public.tempprod.pool::text = account.prod.origid::text)\n\nSeq Scan on tempprod (cost=0.00..9637101.35 rows 4731410 width=56) (actual\ntime=24273.467..16090470.438 rows=4731410 loops=1)\n SubPlan\n -> Limit (cost=0.00..2.02 rows=2 width=8) (actual time=0.134..0.315\n rows=1 loops=4731410)\n -> Seq Scan on prod (cost=0.00..2.02 rows=2 width=8) (actual\n time=0.126..0.305 rows=1 loops=4731410)\n Filter: (($0)::text = (origid)::text)\nTotal runtime: 2284551.962 ms\n\nBut then going from public.tempprod to account.cust again takes days. I\njust cancelled an update that's been running since last Thursday.\nAlas, given how long the queries take to run, I can't supply an `explain\nanalyze`. The `explain` seems reasonable enough:\n\n# explain update account.cust set prodid = tempprod.prodid\n\twhere tempprod.did = origid;\n\n Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n Merge Cond: ((\"outer\".origid)::text = (\"inner\".did)::text)\n -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n rows=4731410 width=244)\n -> Index Scan using ix_did on tempprod (cost=0.00..66916.71\n rows=4731410 width=18)\n\nThe relevant bits from my postgreql.conf (note, we built with a BLCKSZ\nof 16K):\n\nshared_buffers = 4096\nsort_mem = 32768\nvacuum_mem = 32768\nwal_buffers = 16384\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\ncheckpoint_warning = 30\ncommit_delay = 50000\neffective_cache_size = 131072\n\nAny advice, suggestions or comments of the \"You bleeding idiot, why do\nyou have frob set to x?!\" sort welcome. Unfortunately, if we can't\nimprove this, significantly, the powers what be will probably pass\non PostgreSQL, even though everything we've done so far--with this\nmarked exception--performs pretty spectacularly, all told.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Mon, 15 Mar 2004 14:28:46 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "atrocious update performance" }, { "msg_contents": "> # explain update account.cust set prodid = tempprod.prodid\n> \twhere tempprod.did = origid;\n> \n> Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n> Merge Cond: ((\"outer\".origid)::text = (\"inner\".did)::text)\n> -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n> rows=4731410 width=244)\n> -> Index Scan using ix_did on tempprod (cost=0.00..66916.71\n> rows=4731410 width=18)\n\nI'm going to hazard a guess and say you have a number of foreign keys\nthat refer to account.cust.prodid? 
This is probably the time consuming\npart -- perhaps even a missing index on one of those keys that refers to\nthis field.\n\nGoing the other way should be just as good for your purposes, and much\nfaster since you're not updating several foreign key'd fields bound to\naccount.cust.prodid.\n\nUPDATE tempprod.prodid = prodid\n FROM account.cust\n WHERE temprod.did = cust.origid;\n\n\n", "msg_date": "Mon, 15 Mar 2004 16:06:33 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "> > # explain update account.cust set prodid = tempprod.prodid\n> > \twhere tempprod.did = origid;\n\n> > Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n> > Merge Cond: ((\"outer\".origid)::text = (\"inner\".did)::text)\n> > -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n> > rows=4731410 width=244)\n> > -> Index Scan using ix_did on tempprod (cost=0.00..66916.71\n> > rows=4731410 width=18)\n \n> I'm going to hazard a guess and say you have a number of foreign keys\n> that refer to account.cust.prodid? This is probably the time consuming\n> part -- perhaps even a missing index on one of those keys \n> that refers to\n> this field.\n\nActually, there are no foreign keys to those columns. Once they're\npopulated, I'll apply a foreign key constraint and they'll refer to the\nappropriate row in the prod and subprod tables, but nothing will \nreference account.cust.[sub]prodid. There are, of course, several foreign\nkeys referencing account.cust.custid.\n\n> Going the other way should be just as good for your purposes, and much\n> faster since you're not updating several foreign key'd fields bound to\n> account.cust.prodid.\n\n> UPDATE tempprod.prodid = prodid\n> FROM account.cust\n> WHERE temprod.did = cust.origid;\n\nNot quite. Without this update, acount.cust.[sub]prodid are null. The\ndata was strewn across multiple tables in MS SQL; we're normalizing it\ninto one, hence the need to populate the two columns independently.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc. \n\n", "msg_date": "Mon, 15 Mar 2004 15:15:55 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Bulk updates are generally dogs (not just in pg), so I avoid doing them by\ndoing faster selects and inserts. You can create a new table using 'create\ntable as' to produce your target results. This is real fast - avoiding the\nrow iteration in insert, allowing the select optimizer to run and no index\noverhead. Then alter/rename, add indexes and whatever else hangs off the\ntable (or if you're lazy do an insert/select into the original target\ntable). I often see 2 orders of magnitude improvement doing this, and no\nneed to vacuum.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Rosser Schwarz\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, March 15, 2004 3:28 PM\nSubject: [PERFORM] atrocious update performance\n\n\nWe're in the throes of an MS SQL to PostgreSQL migration; our databases\ninclude a number of ~5M row tables. We decided to take this opportunity\nto clean up and slightly re-normalize our schemas, given what we've\nlearned about the data over its lifetime and such, else we wouldn't be\nexperiencing any of the following (we could instead just dump and `copy\nfrom`).\n\nWe have a temporary table, public.tempprod, containing 4.7M rows, one\nfor each row in account.cust. 
account.cust has, among others, two\ncolumns, prod and subprod, which we're trying to update from tempprod\njoined against prod. The update tends to take unnecessarily long--\nrather, we've had to finally kill it after its taking obscenely too\nlong.\n\nThe table:\n\n# \\d account.cust\n Table \"account.cust\"\n Column | Type | Modifiers\n-----------+-----------------------------+----------------------------------\n----\n custid | bigint | not null default\n | |\nnextval('account.custid_seq'::text)\n ownerid | integer | not null\n origid | text | not null\n pname | text |\n fname | text |\n mname | text |\n lname | text |\n suffix | text |\n addr1 | text |\n addr2 | text |\n addr3 | text |\n city | text |\n state | text |\n zip | text |\n zipplus | text |\n homeph | text |\n workph | text |\n otherph | text |\n ssn | text |\n isactive | boolean | default true\n createddt | timestamp without time zone | default now()\n prodid | bigint |\n subprodid | bigint |\nIndexes:\n \"cust_pkey\" primary key, btree (custid)\n \"ix_addr1\" btree (addr1) WHERE (addr1 IS NOT NULL)\n \"ix_addr2\" btree (addr2) WHERE (addr2 IS NOT NULL)\n \"ix_city\" btree (city) WHERE (city IS NOT NULL)\n \"ix_fname\" btree (fname) WHERE (fname IS NOT NULL)\n \"ix_homeph\" btree (homeph) WHERE (homeph IS NOT NULL)\n \"ix_lname\" btree (lname) WHERE (lname IS NOT NULL)\n \"ix_mname\" btree (mname) WHERE (mname IS NOT NULL)\n \"ix_origid\" btree (origid)\n \"ix_ssn\" btree (ssn) WHERE (ssn IS NOT NULL)\n \"ix_state\" btree (state) WHERE (state IS NOT NULL)\n \"ix_workph\" btree (workph) WHERE (workph IS NOT NULL)\n \"ix_zip\" btree (zip) WHERE (zip IS NOT NULL)\n\nWe're currently running on a dual Xeon 700 (I know, I know; it's what\nwe've got) with 2.5GB RAM and 4x36GB SCSI in hardware RAID 5 (Dell\nPerc3 something-or-other controller). If we can demonstrate that\nPostgreSQL will meet our needs, we'll be going production on a dual\nOpteron, maxed memory, with a 12-disk Fibre Channel array.\n\nThe query is:\n\nupdate account.cust set prodid =\n(select p.prodid from account.prod p\njoin public.tempprod t on t.pool = p.origid\nwhere custid = t.did)\n\nAnd then, upon its completion, s/prod/subprod/.\n\nThat shouldn't run overnight, should it, let alone for -days-?\n\nIn experimenting with ways of making the updates take less time, we tried\nadding product and subproduct columns to tempprod, and updating those.\nThat seemed to work marginally better:\n\nexplain analyze update public.tempprod set prodid =\n(select account.prod.prodid::bigint\nfrom account.prod\nwhere public.tempprod.pool::text = account.prod.origid::text)\n\nSeq Scan on tempprod (cost=0.00..9637101.35 rows 4731410 width=56) (actual\ntime=24273.467..16090470.438 rows=4731410 loops=1)\n SubPlan\n -> Limit (cost=0.00..2.02 rows=2 width=8) (actual time=0.134..0.315\n rows=1 loops=4731410)\n -> Seq Scan on prod (cost=0.00..2.02 rows=2 width=8) (actual\n time=0.126..0.305 rows=1 loops=4731410)\n Filter: (($0)::text = (origid)::text)\nTotal runtime: 2284551.962 ms\n\nBut then going from public.tempprod to account.cust again takes days. I\njust cancelled an update that's been running since last Thursday.\nAlas, given how long the queries take to run, I can't supply an `explain\nanalyze`. 
The `explain` seems reasonable enough:\n\n# explain update account.cust set prodid = tempprod.prodid\nwhere tempprod.did = origid;\n\n Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n Merge Cond: ((\"outer\".origid)::text = (\"inner\".did)::text)\n -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n rows=4731410 width=244)\n -> Index Scan using ix_did on tempprod (cost=0.00..66916.71\n rows=4731410 width=18)\n\nThe relevant bits from my postgreql.conf (note, we built with a BLCKSZ\nof 16K):\n\nshared_buffers = 4096\nsort_mem = 32768\nvacuum_mem = 32768\nwal_buffers = 16384\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\ncheckpoint_warning = 30\ncommit_delay = 50000\neffective_cache_size = 131072\n\nAny advice, suggestions or comments of the \"You bleeding idiot, why do\nyou have frob set to x?!\" sort welcome. Unfortunately, if we can't\nimprove this, significantly, the powers what be will probably pass\non PostgreSQL, even though everything we've done so far--with this\nmarked exception--performs pretty spectacularly, all told.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Mon, 15 Mar 2004 16:29:51 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "On Mon, 2004-03-15 at 16:15, Rosser Schwarz wrote:\n> > > # explain update account.cust set prodid = tempprod.prodid\n> > > \twhere tempprod.did = origid;\n> \n> > > Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n> > > Merge Cond: ((\"outer\".origid)::text = (\"inner\".did)::text)\n> > > -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n> > > rows=4731410 width=244)\n> > > -> Index Scan using ix_did on tempprod (cost=0.00..66916.71\n> > > rows=4731410 width=18)\n> \n> > I'm going to hazard a guess and say you have a number of foreign keys\n> > that refer to account.cust.prodid? This is probably the time consuming\n> > part -- perhaps even a missing index on one of those keys \n> > that refers to\n> > this field.\n> \n> Actually, there are no foreign keys to those columns. Once they're\n> populated, I'll apply a foreign key constraint and they'll refer to the\n> appropriate row in the prod and subprod tables, but nothing will \n> reference account.cust.[sub]prodid. There are, of course, several foreign\n> keys referencing account.cust.custid.\n\nIf there are no feign keys to it, I wouldn't expect it to take more than\n10 minutes on slow hardware.\n\nFresh out of ideas here.\n\n\n", "msg_date": "Mon, 15 Mar 2004 16:54:35 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "> You can create a new table using 'create table as' to produce your\n> target results. 
This is real fast ...\n> I often see 2 orders of magnitude improvement doing this, and no\n> need to vacuum.\n\nIndeed:\n\n\"Query returned successfully with no result in 582761 ms.\"\n\nThough I must say, ten minutes is nominally more than two orders of\nmangitude performance improvement, versus several days.\n\nMany thanks, Aaron.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Mon, 15 Mar 2004 17:20:32 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n>> You can create a new table using 'create table as' to produce your\n>> target results. This is real fast ...\n>> I often see 2 orders of magnitude improvement doing this, and no\n>> need to vacuum.\n\n> Indeed:\n> \"Query returned successfully with no result in 582761 ms.\"\n> Though I must say, ten minutes is nominally more than two orders of\n> mangitude performance improvement, versus several days.\n\nHm. There is no way that inserting a row is two orders of magnitude\nfaster than updating a row --- they both require storing a new row and\nmaking whatever index entries are needed. The only additional cost of\nthe update is finding the old row (not a very big deal AFAICS in the\nexamples you gave) and marking it deleted (definitely cheap). So\nthere's something awfully fishy going on here.\n\nI'm inclined to suspect an issue with foreign-key checking. You didn't\ngive us any details about foreign key relationships your \"cust\" table is\ninvolved in --- could we see those? And the schemas of the other tables\ninvolved?\n\nAlso, exactly which PG version is this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Mar 2004 19:08:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "The original point was about a very slow update of an entire table with a\nplan that looped, and over a dozen conditional indices - vs. a 'create as'\nin a CPU starved environment. I stand by my statement about observing the\norders of magnitude difference. In theory I agree that the update should be\nin the same order of magnitude as the create as, but in practice I disagree.\nI also think something is wrong on the logical side (besides FKs, are there\nany triggers?) but was responding to the Gordian knot issue of bailing out\nof pg.\n\nCan you post a sample extract, Rosser? Otherwise, I'll try to put together a\nsample of a slow mass join update.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Rosser Schwarz\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, March 15, 2004 7:08 PM\nSubject: Re: [PERFORM] atrocious update performance\n\n\n> \"Rosser Schwarz\" <[email protected]> writes:\n> >> You can create a new table using 'create table as' to produce your\n> >> target results. This is real fast ...\n> >> I often see 2 orders of magnitude improvement doing this, and no\n> >> need to vacuum.\n>\n> > Indeed:\n> > \"Query returned successfully with no result in 582761 ms.\"\n> > Though I must say, ten minutes is nominally more than two orders of\n> > mangitude performance improvement, versus several days.\n>\n> Hm. There is no way that inserting a row is two orders of magnitude\n> faster than updating a row --- they both require storing a new row and\n> making whatever index entries are needed. 
The only additional cost of\n> the update is finding the old row (not a very big deal AFAICS in the\n> examples you gave) and marking it deleted (definitely cheap). So\n> there's something awfully fishy going on here.\n>\n> I'm inclined to suspect an issue with foreign-key checking. You didn't\n> give us any details about foreign key relationships your \"cust\" table is\n> involved in --- could we see those? And the schemas of the other tables\n> involved?\n>\n> Also, exactly which PG version is this?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Mon, 15 Mar 2004 21:42:09 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "\n\"Rosser Schwarz\" <[email protected]> writes:\n\n> Actually, there are no foreign keys to those columns. Once they're\n> populated, I'll apply a foreign key constraint and they'll refer to the\n> appropriate row in the prod and subprod tables, but nothing will \n> reference account.cust.[sub]prodid. There are, of course, several foreign\n> keys referencing account.cust.custid.\n\nJust to be clear, the foreign key constraints they're worrying about are not\nconstraints on the table you're updating. They're constraints on other tables\nreferring to the table you're updating. \n\nSince you're updating the column here postgres has to be sure nothing is\nreferring to the old value you're obliterating, and to do that it has to\nselect for possible records in the referencing tables referring to the value.\nIf there are any references in other tables referring to this column then you\nneed an index on the column in the referencing table to be able to update the\ncolumn in referenced table efficiently.\n\n-- \ngreg\n\n", "msg_date": "16 Mar 2004 02:31:06 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Rosser Schwarz wrote:\n\n > shared_buffers = 4096\n> sort_mem = 32768\n> vacuum_mem = 32768\n> wal_buffers = 16384\n> checkpoint_segments = 64\n> checkpoint_timeout = 1800\n> checkpoint_warning = 30\n> commit_delay = 50000\n> effective_cache_size = 131072\n\nYou didn't mention the OS so I would take it as either linux/freeBSD.\n\nFirst of all, your shared buffers are low. 4096 is 64MB with 16K block size. I \nwould say at least push them to 150-200MB.\n\nSecondly your sort mem is too high. Note that it is per sort per query. You \ncould build a massive swap storm with such a setting.\n\nSimilarly pull down vacuum and WAL buffers to around 512-1024 each.\n\nI know that your problem is solved by using insert rather than updates. But I \njust want to point out that you still need to analyze the table to update the \nstatistics or the further queres will not be exactly good.\n\nAnd lastly, you can bundle entire thing including creating duplicate table, \npopulating it, renaming original table etc in a single transaction and nobody \nwill notice it. I am almost sure MS-SQL can not do that. 
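[Note: a minimal sketch of the transaction-safe DDL point being made here, assuming a hypothetical cust_new already built with CREATE TABLE AS; the constraint name "$1" is the one that appears in the \d account.acct output later in the thread. Other sessions see the whole swap atomically at COMMIT, or nothing at all on ROLLBACK.]

    BEGIN;
    ALTER TABLE account.acct DROP CONSTRAINT "$1";
    ALTER TABLE account.cust RENAME TO cust_old;
    ALTER TABLE account.cust_new RENAME TO cust;
    ALTER TABLE account.acct
      ADD CONSTRAINT acct_custid_fkey FOREIGN KEY (custid)
      REFERENCES account.cust (custid)
      ON UPDATE CASCADE ON DELETE RESTRICT;
    COMMIT;

The ADD CONSTRAINT step revalidates every row in acct, so it is not free, and the note and origacct tables would need the same treatment.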
Not many databases have \ntrasact-safe DDLs out there..\n\n HTH\n\n Shridhar\n", "msg_date": "Tue, 16 Mar 2004 13:08:49 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> Rosser Schwarz wrote:\n>> shared_buffers = 4096\n>> sort_mem = 32768\n>> vacuum_mem = 32768\n>> wal_buffers = 16384\n>> checkpoint_segments = 64\n>> checkpoint_timeout = 1800\n>> checkpoint_warning = 30\n>> commit_delay = 50000\n>> effective_cache_size = 131072\n\n> First of all, your shared buffers are low. 4096 is 64MB with 16K block\n> size. I would say at least push them to 150-200MB.\n\nCheck. Much more than that isn't necessarily better though.\nshared_buffers = 10000 is frequently mentioned as a \"sweet spot\".\n\n> Secondly your sort mem is too high. Note that it is per sort per query. You \n> could build a massive swap storm with such a setting.\n\nAgreed, but I doubt that has anything to do with the immediate\nproblem, since he's not testing parallel queries.\n\n> Similarly pull down vacuum and WAL buffers to around 512-1024 each.\n\nThe vacuum_mem setting here is 32Mb, which seems okay to me, if not on\nthe low side. Again though it's not his immediate problem.\n\nI agree that the wal_buffers setting is outlandishly large; I can't see\nany plausible reason for it to be more than a few dozen. I don't know\nwhether oversized wal_buffers can directly cause any performance issues,\nbut it's certainly not a well-tested scenario.\n\nThe other setting I was going to comment on is checkpoint_warning;\nit seems mighty low in comparison to checkpoint_timeout. If you are\ntargeting a checkpoint every half hour, I'd think you'd want the system\nto complain about checkpoints spaced more closely than several minutes.\n\nBut with the possible exception of wal_buffers, I can't see anything in\nthese settings that explains the originally complained-of performance\nproblem. I'm still wondering about foreign key checks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 10:25:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> But with the possible exception of wal_buffers, I can't see \n> anything in\n> these settings that explains the originally complained-of performance\n> problem. I'm still wondering about foreign key checks.\n\nMany of the configs I posted were fairly wild values, set to gather\ndata points for further tweaking. Unfortunately, with this query\nthere hasn't been time for many passes, and I've too much else on my\nplate to try concocting demonstration cases. The postmaster's been\nhupped with more sane values, but I experienced this same issue with\nthe defaults.\n\nAs for foreign keys, three tables refer to account.cust; all of them\nrefer to account.cust.custid, the pk. One of those tables has several\nhundred thousand rows, many more to come; the others are empty. Unless\nI've woefully misunderstood, the presence or absence of a foreign key\nreferring to one column should be moot for updates writing another\ncolumn, shouldn't it?\n\nTo answer your (and others') question, Tom, 7.4.1 on 2.4.20-18.9smp.\nRed Hat, I believe. I was handed the machine, which is also in use\nfor lightweight production stuff: intranet webserver, rinky-dink\nMySQL doo-dads, &c. 
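[Note: pulling Tom's and Shridhar's numbers together, one plausible -- not prescriptive -- starting point for this 16K-BLCKSZ build might look like the following; every value is an assumption to be tested against this particular box, not something established in the thread.]

    shared_buffers = 10000          # ~160 MB at 16K blocks; the oft-cited sweet spot
    sort_mem = 16384                # per sort, per backend -- keep modest to avoid swap storms
    vacuum_mem = 32768
    wal_buffers = 32                # "a few dozen" rather than 16384
    checkpoint_segments = 64
    checkpoint_timeout = 1800
    checkpoint_warning = 300        # complain if checkpoints come closer than 5 minutes
    effective_cache_size = 131072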
I'm sure that has an impact, usurping the disk\nheads and such--maybe even more than I'd expect--but I can't imagine\nthat'd cause an update to one 4.7M row table, from another 4.7M row\ntable, both clustered on a join column that maps one-to-one between\nthem, to take days. I'm baffled; everything else is perfectly snappy,\ngiven the hardware. Anything requiring a sequential scan over one of\nthe big tables is a slog, but that's to be expected and hence all the\nindices.\n\nWatching iostat, I've observed a moderately cyclic read-big, write-\nbig pattern, wavelengths generally out of phase, interspersed with\nsmaller, almost epicycles--from the machine's other tasks, I'm sure.\ntop has postmaster's cpu usage rarely breaking 25% over the course\nof the query's execution, and spending most of its time much lower;\nmemory usage hovers somewhere north of 500MB.\n\nIn what little time I had to stare at a disturbingly matrix-esque\narray of terminals scrolling sundry metrics, I didn't notice a\ncorrelation between cpu usage spikes and peaks in the IO cycle's\nwaveforms. For whatever that's worth.\n\nThe other tables involved are:\n\n# \\d account.acct\n Table \"account.acct\"\n Column | Type | Modifiers \n------------+-----------------------------+---------------------------------\n----\n acctid | bigint | not null default\n |\nnextval('account.acctid_seq'::text)\n custid | bigint |\n acctstatid | integer | not null\n acctno | character varying(50) |\n bal | money |\n begdt | timestamp without time zone | not null\n enddt | timestamp without time zone |\n debtid | character varying(50) |\nIndexes:\n \"acct_pkey\" primary key, btree (acctid)\n \"ix_acctno\" btree (acctno) WHERE (acctno IS NOT NULL)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (custid) REFERENCES account.cust(custid)\n ON UPDATE CASCADE ON DELETE RESTRICT\n \"$2\" FOREIGN KEY (acctstatid) REFERENCES account.acctstat(acctstatid)\n ON UPDATE CASCADE ON DELETE RESTRICT\n\n# \\d account.note\n Table \"account.note\"\n Column | Type | Modifiers \n-----------+-----------------------------+----------------------------------\n---\n noteid | bigint | not null default\n |\nnextval('account.noteid_seq'::text)\n custid | bigint | not null\n note | text | not null\n createddt | timestamp without time zone | not null default now()\nIndexes:\n \"note_pkey\" primary key, btree (noteid)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (custid) REFERENCES account.cust(custid)\n ON UPDATE CASCADE ON DELETE RESTRICT\n\n# \\d account.origacct\n Table \"account.origacct\"\n Column | Type | Modifiers\n-------------+-----------------------------+-----------\n custid | bigint |\n lender | character varying(50) |\n chgoffdt | timestamp without time zone |\n opendt | timestamp without time zone |\n offbureaudt | timestamp without time zone |\n princbal | money |\n intbal | money |\n totbal | money |\n lastpayamt | money |\n lastpaydt | timestamp without time zone |\n debttype | integer |\n debtid | character varying(10) |\n acctno | character varying(50) |\nForeign-key constraints:\n \"$1\" FOREIGN KEY (custid) REFERENCES account.cust(custid)\n ON UPDATE CASCADE ON DELETE RESTRICT\n\nAnd the table we were joining to get the new values for prodid and\nsubprodid:\n\n# \\d tempprod\n Table \"public.tempprod\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n debtid | character varying(10) | not null\n pool | character varying(10) | not null\n port | character varying(10) | not null\n subprodid | bigint |\n prodid | bigint 
|\nIndexes:\n \"ix_debtid\" btree (debtid)\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Tue, 16 Mar 2004 11:32:56 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n> As for foreign keys, three tables refer to account.cust; all of them\n> refer to account.cust.custid, the pk. One of those tables has several\n> hundred thousand rows, many more to come; the others are empty. Unless\n> I've woefully misunderstood, the presence or absence of a foreign key\n> referring to one column should be moot for updates writing another\n> column, shouldn't it?\n\nWell, that is the crux of the issue, and also why I was asking about\nversions. It's only been since 7.3.4 or so that we skip checking FKs on\nupdate.\n\nLooking at the code, though, the update check is only skipped if the\nprevious version of the row predates the current transaction.\n(Otherwise we can't be sure that the value was ever checked.) This\nmeans that slow FK checks could be your problem if the application is\nset up to issue multiple UPDATEs affecting the same row(s) during a\nsingle transaction. I'm not clear on whether that applies to you or not.\n\nAnd anyway the bottom line is: have you got indexes on the columns\n*referencing* account.cust.custid? If not you probably ought to add\nthem, since without them there will definitely be some slow cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 13:04:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> ...slow FK checks could be your problem if the application is set\n> up to issue multiple UPDATEs affecting the same row(s) during a\n> single transaction. I'm not clear on whether that applies to you\n> or not.\n\nIt shouldn't. It's just one large batch update that should be hitting\nevery row serially.\n\n> And anyway the bottom line is: have you got indexes on the columns\n> *referencing* account.cust.custid?\n\nNo. I'd've sworn I had one on account.acct.custid, since that table\nis popupated (currently ~500K rows), but it's not.\n\n$ time psql tci -c \"explain analyze select * from account.acct where\ncustid = 257458\"\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on acct (cost=0.00..7166.68 rows=2 width=71) (actual\n time=1047.122..1047.122 rows=0 loops=1)\n Filter: (custid = 257458)\n Total runtime: 1047.362 ms\n(3 rows)\n\n\nreal 0m1.083s\nuser 0m0.010s\nsys 0m0.000s\n\nIf it is looking up the custid in account.acct for each row, that's,\nsay, 1 seconds per lookup, for 4.7 million lookups, for, if my math\nis right (4,731,410 / 3600 / 24) 54 days. 
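[Note: a sketch of the two fixes implied here. First, the referencing column needs an index; the name below matches the one that appears in the very next plan. Second, on 7.4 the planner will not use a bigint index for a comparison against a plain integer literal -- the literal is taken as int4 -- so even with the index in place the constant has to be cast (or quoted), which is why the cast shows up in the next query.]

    CREATE INDEX ix_fk_acct_custid ON account.acct (custid);
    ANALYZE account.acct;
    EXPLAIN ANALYZE
    SELECT * FROM account.acct WHERE custid = 257458::bigint;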
I suppose that tracks, but\nthat doesn't make sense, given what you said about the fk checks,\nabove.\n\nOf course, if I index the column and amend the query to say \"where\ncustid = 194752::bigint\" I get back much saner numbers:\n\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using ix_fk_acct_custid on acct (cost=0.00..3.34 rows=2\n width=71) (actual time=0.126..0.141 rows=2 loops=1)\n Index Cond: (custid = 194752::bigint)\n Total runtime: 0.314 ms\n(3 rows)\n\n\nreal 0m0.036s\nuser 0m0.010s\nsys 0m0.000s\n\nWhich would still take just under two days.\n\n$ time psql tci -c \"explain analyze update account.cust set prodid =\ntempprod.prodid, subprodid = tempprod.subprodid where origid =\ntempprod.debtid\"\n\nBut if I'm not touching the column referenced from account.acct, why\nwould it be looking there at all? I've got an explain analyze of the\nupdate running now, but until it finishes, I can't say for certain\nwhat it's doing. explain, alone, says:\n\n$ time psql tci -c \"explain update account.cust set prodid =\ntempprod.prodid, subprodid = tempprod.subprodid where origid =\ntempprod.debtid;\"\n QUERY PLAN\n---------------------------------------------------------------------\n Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n Merge Cond: ((\"outer\".origid)::text = (\"inner\".debtid)::text)\n -> Index Scan using ix_origid on cust (cost=0.00..94876.83 \n rows=4731410 width=236)\n -> Index Scan using ix_debtid on tempprod (cost=0.00..66916.71\n rows=4731410 width=26)\n(4 rows)\n\n\nreal 0m26.965s\nuser 0m0.010s\nsys 0m0.000s\n\nwhich shows it not hitting account.acct at all. (And why did it take\nthe planner 20-some seconds to come up with that query plan?)\n\ntempprod doesn't have an index either, but then it doesn't reference\naccount.cust; instead, the update would be done by joining the two on\ndebtid/origid, which map one-to-one, are both indexed, and with both\ntables clustered on those indices--exactly as was the CREATE TABLE AS\nAaron suggested elsethread.\n\nUnfortunately, this isn't the only large update we'll have to do. We\nreceive a daily, ~100K rows file that may have new values for any field\nof any row in account.cust, .acct or sundry other tables. The process\nof updating from that file is time-critical; it must run in minutes, at\nthe outside.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Tue, 16 Mar 2004 13:58:47 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n> But if I'm not touching the column referenced from account.acct, why\n> would it be looking there at all? I've got an explain analyze of the\n> update running now, but until it finishes, I can't say for certain\n> what it's doing. explain, alone, says:\n\nEXPLAIN won't tell you anything about triggers that might get fired\nduring the UPDATE, so it's not much help for investigating possible\nFK performance problems. EXPLAIN ANALYZE will give you some indirect\nevidence: the difference between the total query time and the total time\nreported for the topmost plan node represents the time spent running\ntriggers and physically updating the tuples. I suspect we are going\nto see a big difference.\n\n> which shows it not hitting account.acct at all. (And why did it take\n> the planner 20-some seconds to come up with that query plan?)\n\nIt took 20 seconds to EXPLAIN? 
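[Note: one way to act on the EXPLAIN ANALYZE point just made, without waiting days, is to run the real UPDATE over a small, arbitrary slice inside a transaction and roll it back -- EXPLAIN ANALYZE executes the statement for real. The gap between the top plan node's actual time and the reported Total runtime then approximates the foreign-key-trigger plus tuple-write cost, per the accounting described above. The slice predicate is hypothetical; any condition selecting a few thousand rows will do.]

    BEGIN;
    EXPLAIN ANALYZE
    UPDATE account.cust
       SET prodid = tempprod.prodid, subprodid = tempprod.subprodid
      FROM tempprod
     WHERE tempprod.debtid = cust.origid
       AND cust.custid < 100000;
    ROLLBACK;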
That's pretty darn odd in itself. I'm\nstarting to think there must be something quite whacked-out about your\ninstallation, but I haven't got any real good ideas about what.\n\n(I'm assuming of course that there weren't a ton of other jobs eating\nCPU while you tried to do the EXPLAIN.)\n\n[ thinks for awhile... ] The only theory that comes to mind for making\nthe planner so slow is oodles of dead tuples in pg_statistic. Could I\ntrouble you to run\n\tvacuum full verbose pg_statistic;\nand send along the output?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 15:14:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> EXPLAIN won't tell you anything about triggers that might get fired\n> during the UPDATE, so it's not much help for investigating possible\n> FK performance problems. EXPLAIN ANALYZE will give you some indirect\n> evidence: the difference between the total query time and the total time\n> reported for the topmost plan node represents the time spent running\n> triggers and physically updating the tuples. I suspect we are going\n> to see a big difference.\n\nIt's still running.\n\n> It took 20 seconds to EXPLAIN? That's pretty darn odd in itself.\n\nIt struck me, too.\n\n> I'm starting to think there must be something quite whacked-out about\n> your installation, but I haven't got any real good ideas about what.\n\nBuilt from source. configure arguments:\n\n./configure --prefix=/var/postgresql --bindir=/usr/bin\n--enable-thread-safety --with-perl --with-python --with-openssl\n--with-krb5=/usr/kerberos\n\nI can answer more specific questions; otherwise, I'm not sure what to\nlook for, either. If we could take the machine out of production (oh,\nhell; I think I just volunteered myself for weekend work) long enough\nto reinstall everything to get a fair comparison...\n\nSo far as I know, though, it's a more or less stock Red Hat. 2.4.20-\nsomething.\n\n> (I'm assuming of course that there weren't a ton of other jobs eating\n> CPU while you tried to do the EXPLAIN.)\n\nCPU's spiked sopradically, which throttled everything else, but it never\nstays high. top shows the current explain analyze running between 50-\nish% and negligible. iostat -k 3 shows an average of 3K/sec written, for\na hundred-odd tps.\n\nI can't get any finer-grained than that, unfortunately; the machine was\nhanded to me with a single, contiguous filesystem, in production use.\n\n> [ thinks for awhile... ] The only theory that comes to mind\n> for making\n> the planner so slow is oodles of dead tuples in pg_statistic. 
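[Note: beyond pg_statistic, a cheap way to gauge whether any of the catalogs the planner touches have bloated is to compare their page counts with their row counts; the figures are only as fresh as the last VACUUM/ANALYZE, so treat them as a rough signal.]

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('pg_statistic', 'pg_attribute', 'pg_class', 'pg_index')
     ORDER BY relpages DESC;

A page count wildly out of proportion to the row count points the same direction as the vacuum output that follows.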
Could I\n> trouble you to run\n> vacuum full verbose pg_statistic;\n> and send along the output?\n\nINFO: vacuuming \"pg_catalog.pg_statistic\"\nINFO: \"pg_statistic\": found 215 removable, 349 nonremovable row versions\nin 7 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 72 to 8132 bytes long.\nThere were 3 unused item pointers.\nTotal free space (including removable row versions) is 91572 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n7 pages containing 91572 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.71 sec.\nINFO: index \"pg_statistic_relid_att_index\" now contains 349 row versions\nin 2 pages\nDETAIL: 215 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_statistic\": moved 120 row versions, truncated 7 to 5 pages\nDETAIL: CPU 0.03s/0.01u sec elapsed 0.17 sec.\nINFO: index \"pg_statistic_relid_att_index\" now contains 349 row versions\nin 2 pages\nDETAIL: 120 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_16408\"\nINFO: \"pg_toast_16408\": found 12 removable, 12 nonremovable row versions\nin 5 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 660 to 8178 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 91576 bytes.\n2 pages are or will become empty, including 0 at the end of the table.\n5 pages containing 91576 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.27 sec.\nINFO: index \"pg_toast_16408_index\" now contains 12 row versions in 2 pages\nDETAIL: 12 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.05 sec.\nINFO: \"pg_toast_16408\": moved 10 row versions, truncated 5 to 3 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: index \"pg_toast_16408_index\" now contains 12 row versions in 2 pages\nDETAIL: 10 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nHaving never more than glanced at the output of \"vacuum verbose\", I\ncan't say whether that makes the cut for oodles. My suspicion is no.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Tue, 16 Mar 2004 16:18:41 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n> Having never more than glanced at the output of \"vacuum verbose\", I\n> can't say whether that makes the cut for oodles. My suspicion is no.\n\nNope, it sure doesn't. We occasionally see people who don't know they\nneed to vacuum regularly and have accumulated hundreds or thousands of\ndead tuples for every live one :-(. That's clearly not the issue here.\n\nI'm fresh out of ideas, and the fact that this is a live server kinda\nlimits what we can do experimentally ... 
but clearly, *something* is\nvery wrong.\n\nWell, when you don't know what to look for, you still have to look.\nOne possibly useful idea is to trace the kernel calls of the backend\nprocess while it does that ridiculously long EXPLAIN --- think you could\ntry that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 17:29:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n[trace]\n\n`strace -p 21882` run behind the below query and plan ... below that.\n\n# explain update account.cust set prodid = tempprod.prodid, subprodid =\ntempprod.subprodid where origid = tempprod.debtid;\n QUERY PLAN\n-------------------------------------------------------------------------\n Merge Join (cost=0.00..232764.69 rows=4731410 width=252)\n Merge Cond: ((\"outer\".origid)::text = (\"inner\".debtid)::text)\n -> Index Scan using ix_origid on cust (cost=0.00..94876.83\n rows=4731410 width=236)\n -> Index Scan using ix_debtid on tempprod (cost=0.00..66916.71\n rows=4731410 width=26)\n(4 rows)\n\n----------\n\nrecv(9, \"Q\\0\\0\\0}explain update account.cust\"..., 8192, 0) = 126\ngettimeofday({1079482151, 106228}, NULL) = 0\nbrk(0) = 0x82d9000\nbrk(0x82db000) = 0x82db000\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 8\n_llseek(8, 212402176, [212402176], SEEK_SET) = 0\nwrite(8, \"\\342\\1\\0\\0\\0\\314\\374\\6\\24\\0\\0\\0\\214\\7pG\\360\\177\\1\\200\\320\"...,\n32768) = 32768\nclose(8) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16635\", O_RDWR|O_LARGEFILE) = 8\nread(8, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 10\n_llseek(10, 60817408, [60817408], SEEK_SET) = 0\nwrite(10, \"\\342\\1\\0\\0`\\334\\5\\7\\24\\0\\0\\0t\\0010x\\360\\177\\1\\200\\330\\377\"...,\n32768) = 32768\nclose(10) = 0\nread(8, \"\\334\\1\\0\\0h\\217\\270n\\24\\0\\0\\0H\\0H|\\360\\177\\1\\200@\\376\\220\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834165\", O_RDWR|O_LARGEFILE) = 10\n_llseek(10, 130777088, [130777088], SEEK_SET) = 0\nwrite(10, \"\\342\\1\\0\\0<\\341\\7\\7\\24\\0\\0\\0004\\t0I\\360\\177\\1\\200\\330\\377\"...,\n32768) = 32768\nclose(10) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16595\", O_RDWR|O_LARGEFILE) = 10\nread(10, \"\\334\\1\\0\\0\\360\\216\\270n\\24\\0\\0\\0X\\0@y\\0\\200\\1\\200\\320\\371\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 11\n_llseek(11, 145915904, [145915904], SEEK_SET) = 0\nwrite(11, \"\\342\\1\\0\\0\\300\\350\\n\\7\\24\\0\\0\\0\\224\\6\\310Z\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(11) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16614\", O_RDWR|O_LARGEFILE) = 11\nread(11, \"\\0\\0\\0\\0\\24\\231P\\306\\16\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 12\n_llseek(12, 148570112, [148570112], SEEK_SET) = 0\nwrite(12, \"\\342\\1\\0\\0\\274\\365\\22\\7\\24\\0\\0\\0X\\3\\234o\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(12) = 0\n_llseek(11, 98304, [98304], SEEK_SET) = 0\nread(11, \"\\0\\0\\0\\0\\24\\231P\\306\\16\\0\\0\\0\\34\\0\\234\\177\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 12\n_llseek(12, 251789312, [251789312], SEEK_SET) = 0\nwrite(12, 
\"\\342\\1\\0\\0l\\366\\23\\7\\24\\0\\0\\0\\364\\10\\260J\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(12) = 0\n_llseek(11, 32768, [32768], SEEK_SET) = 0\nread(11, \"\\340\\1\\0\\0\\324\\231\\273\\241\\24\\0\\0\\0\\234\\5\\330\\26\\360\\177\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834165\", O_RDWR|O_LARGEFILE) = 12\n_llseek(12, 117309440, [117309440], SEEK_SET) = 0\nwrite(12, \"\\342\\1\\0\\0d\\36)\\7\\24\\0\\0\\0000\\tHI\\360\\177\\1\\200\\330\\377\"...,\n32768)\n= 32768\nclose(12) = 0\nopen(\"/var/lib/pgsql/data/base/495616/1259\", O_RDWR|O_LARGEFILE) = 12\n_llseek(12, 32768, [32768], SEEK_SET) = 0\nread(12, \"\\334\\1\\0\\0\\324v-p\\24\\0\\0\\0000\\3\\304\\3\\0\\200\\1\\200<\\377\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 13\n_llseek(13, 247824384, [247824384], SEEK_SET) = 0\nwrite(13, \"\\342\\1\\0\\0h *\\7\\24\\0\\0\\0\\204\\4dm\\360\\177\\1\\200\\340\\377\"...,\n32768)\n= 32768\nclose(13) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16613\", O_RDWR|O_LARGEFILE) = 13\nread(13, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 14\n_llseek(14, 204472320, [204472320], SEEK_SET) = 0\nwrite(14, \"\\342\\1\\0\\0\\314\\272:\\7\\24\\0\\0\\0\\324\\t\\354K\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(14) = 0\nread(13, \"\\340\\1\\0\\0X\\231\\273\\241\\24\\0\\0\\0\\370\\6Dk\\360\\177\\1\\200\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 14\n_llseek(14, 152010752, [152010752], SEEK_SET) = 0\nwrite(14, \"\\342\\1\\0\\0p\\277<\\7\\24\\0\\0\\0\\364\\n\\220I\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(14) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16610\", O_RDWR|O_LARGEFILE) = 14\nread(14, \"\\0\\0\\0\\0\\10\\317\\27\\t\\16\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 15\n_llseek(15, 86441984, [86441984], SEEK_SET) = 0\nwrite(15, \"\\342\\1\\0\\0\\330B?\\7\\24\\0\\0\\0\\370\\6 N\\360\\177\\1\\200\\310\\377\"...,\n32768) = 32768\nclose(15) = 0\n_llseek(14, 98304, [98304], SEEK_SET) = 0\nread(14, \"\\340\\1\\0\\0,l\\257\\241\\24\\0\\0\\0(\\0\\250\\177\\360\\177\\1\\200\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 15\n_llseek(15, 121896960, [121896960], SEEK_SET) = 0\nwrite(15, \"\\342\\1\\0\\0\\264\\303?\\7\\24\\0\\0\\0\\234\\tHP\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(15) = 0\n_llseek(14, 65536, [65536], SEEK_SET) = 0\nread(14, \"\\334\\1\\0\\0\\310u\\252n\\23\\0\\0\\0\\234\\20\\320=\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 15\n_llseek(15, 41549824, [41549824], SEEK_SET) = 0\nwrite(15, \"\\342\\1\\0\\0\\0\\312B\\7\\24\\0\\0\\0\\234\\7\\350T\\360\\177\\1\\200\\330\"...,\n32768) = 32768\nclose(15) = 0\nopen(\"/var/lib/pgsql/data/base/495616/1249\", O_RDWR|O_LARGEFILE) = 15\n_llseek(15, 229376, [229376], SEEK_SET) = 0\nread(15, \"O\\1\\0\\0\\214\\241\\200\\0\\23\\0\\0\\0\\364\\3\\0\\4\\0\\200\\1\\200\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 57147392, [57147392], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0004\\320G\\7\\24\\0\\0\\0\\374\\7\\200P\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(16) = 0\n_llseek(15, 163840, [163840], SEEK_SET) = 
0\nread(15, \"\\21\\1\\0\\0\\214\\3\\224R\\23\\0\\0\\0\\364\\3\\0\\4\\0\\200\\1\\200\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 241893376, [241893376], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0\\220TK\\7\\24\\0\\0\\0,\\t`I\\360\\177\\1\\200\\330\\377\"...,\n32768)\n= 32768\nclose(16) = 0\n_llseek(12, 0, [0], SEEK_SET) = 0\nread(12, \"O\\1\\0\\0\\350\\340\\316,\\23\\0\\0\\0X\\3\\230\\3\\0\\200\\1\\200d\\304\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834171\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 88702976, [88702976], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0\\324\\326K\\7\\24\\0\\0\\0`\\v\\370E\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(16) = 0\n_llseek(14, 32768, [32768], SEEK_SET) = 0\nread(14, \"\\0\\0\\0\\0\\10\\317\\27\\t\\16\\0\\0\\0\\234\\20\\320=\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 152043520, [152043520], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0\\220fU\\7\\24\\0\\0\\0l\\n\\320K\\360\\177\\1\\200\\334\\377\"...,\n32768) = 32768\nclose(16) = 0\n_llseek(15, 0, [0], SEEK_SET) = 0\nread(15, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\364\\3\\0\\4\\0\\200\\1\\200\\200\\377\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 70025216, [70025216], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0\\370\\rk\\7\\24\\0\\0\\0 \\10\\250O\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nclose(16) = 0\nread(15, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\364\\3\\0\\4\\0\\200\\1\\200\\200\\377\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 16\n_llseek(16, 152764416, [152764416], SEEK_SET) = 0\nwrite(16, \"\\342\\1\\0\\0008\\222m\\7\\24\\0\\0\\0\\370\\10\\230J\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(16) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16630\", O_RDWR|O_LARGEFILE) = 16\nread(16, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 17\n_llseek(17, 143753216, [143753216], SEEK_SET) = 0\nwrite(17, \"\\342\\1\\0\\0\\314!w\\7\\24\\0\\0\\0\\20\\t\\10J\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nclose(17) = 0\nread(16, \"\\340\\1\\0\\0\\340\\204\\264\\241\\24\\0\\0\\0H\\2Ty\\360\\177\\1\\200\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 17\n_llseek(17, 192512000, [192512000], SEEK_SET) = 0\nwrite(17, \"\\342\\1\\0\\0`\\253y\\7\\24\\0\\0\\0\\250\\7\\330G\\360\\177\\1\\200\\324\"...,\n32768) = 32768\nclose(17) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16390\", O_RDWR|O_LARGEFILE) = 17\nread(17, \"\\334\\1\\0\\0t\\242\\23p\\24\\0\\0\\0\\0\\2\\210\\2\\0\\200\\1\\200\\24\\377\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/16396\", O_RDWR|O_LARGEFILE) = 18\n_llseek(18, 0, [32768], SEEK_END) = 0\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 19\n_llseek(19, 63471616, [63471616], SEEK_SET) = 0\nwrite(19, \"\\342\\1\\0\\0\\2444\\200\\7\\24\\0\\0\\0$\\10\\240O\\360\\177\\1\\200\\330\"...,\n32768) = 32768\nclose(19) = 0\n_llseek(18, 0, [0], SEEK_SET) = 0\nread(18, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0$\\0\\240}\\0\\200\\1\\200h\\3770\\1\\320\"...,\n32768) = 32768\nbrk(0) = 0x82db000\nbrk(0x82dd000) = 0x82dd000\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 19\n_llseek(19, 
64290816, [64290816], SEEK_SET) = 0\nwrite(19, \"\\342\\1\\0\\0<\\265\\200\\7\\24\\0\\0\\0d\\t`Q\\360\\177\\1\\200\\334\\377\"...,\n32768) = 32768\nclose(19) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16605\", O_RDWR|O_LARGEFILE) = 19\nread(19, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 20\n_llseek(20, 152731648, [152731648], SEEK_SET) = 0\nwrite(20, \"\\342\\1\\0\\0\\264=\\206\\7\\24\\0\\0\\0\\370\\10\\230J\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(20) = 0\nread(19, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\264\\3pq\\360\\177\\1\\200\\300\\363\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 20\n_llseek(20, 150274048, [150274048], SEEK_SET) = 0\nwrite(20, \"\\342\\1\\0\\0\\230\\310\\212\\7\\24\\0\\0\\0\\210\\7lO\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(20) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16398\", O_RDWR|O_LARGEFILE) = 20\nread(20, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\264\\3`_\\0\\200\\1\\200\\334\\377H\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 21\n_llseek(21, 260046848, [260046848], SEEK_SET) = 0\nwrite(21, \"\\342\\1\\0\\0\\4\\322\\220\\7\\24\\0\\0\\0\\264\\2\\320r\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(21) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16639\", O_RDWR|O_LARGEFILE) = 21\nread(21, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 22\n_llseek(22, 174424064, [174424064], SEEK_SET) = 0\nwrite(22, \"\\342\\1\\0\\0\\200\\\\\\225\\7\\24\\0\\0\\0D\\t$H\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nclose(22) = 0\nread(21, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\200\\t\\254c\\360\\177\\1\\200\\344\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 22\n_llseek(22, 109084672, [109084672], SEEK_SET) = 0\nwrite(22, \"\\342\\1\\0\\0\\310\\335\\226\\7\\24\\0\\0\\0 \\10\\250O\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(22) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16392\", O_RDWR|O_LARGEFILE) = 22\nread(22, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0X\\3\\350\\3\\0\\200\\1\\200h\\3770\\1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 23\n_llseek(23, 200900608, [200900608], SEEK_SET) = 0\nwrite(23, \"\\342\\1\\0\\0\\314\\344\\232\\7\\24\\0\\0\\0\\344\\7\\304G\\360\\177\\1\"...,\n32768)\n= 32768\nclose(23) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16606\", O_RDWR|O_LARGEFILE) = 23\nread(23, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 24\n_llseek(24, 85426176, [85426176], SEEK_SET) = 0\nwrite(24, \"\\342\\1\\0\\0\\30\\345\\232\\7\\24\\0\\0\\0\\264\\7\\360V\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(24) = 0\nread(23, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0H\\1 {\\360\\177\\1\\200P\\377 \\0@\\377\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 24\n_llseek(24, 156729344, [156729344], SEEK_SET) = 0\nwrite(24, \"\\342\\1\\0\\0\\260e\\233\\7\\24\\0\\0\\0\\30\\n\\334M\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(24) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16400\", O_RDWR|O_LARGEFILE) = 24\nread(24, 
\"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0H\\1,u\\0\\200\\1\\200\\334\\377H\\0\\270\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 25\n_llseek(25, 92995584, [92995584], SEEK_SET) = 0\nwrite(25, \"\\342\\1\\0\\0\\244i\\235\\7\\24\\0\\0\\0\\360\\ttO\\360\\177\\1\\200\\324\"...,\n32768) = 32768\nclose(25) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16607\", O_RDWR|O_LARGEFILE) = 25\nread(25, \"\\0\\0\\0\\0\\264\\341\\v\\t\\16\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 26\n_llseek(26, 209387520, [209387520], SEEK_SET) = 0\nwrite(26, \"\\342\\1\\0\\0<m\\237\\7\\24\\0\\0\\0\\\\\\7\\214H\\360\\177\\1\\200\\320\"...,\n32768)\n= 32768\nclose(26) = 0\nread(25, \"N\\1\\0\\0X\\227`\\236\\23\\0\\0\\0\\334\\0\\320|\\360\\177\\1\\200\\340\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 26\n_llseek(26, 108363776, [108363776], SEEK_SET) = 0\nwrite(26, \"\\342\\1\\0\\0\\334\\375\\251\\7\\24\\0\\0\\0\\24\\10`K\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(26) = 0\nbrk(0) = 0x82dd000\nbrk(0x82de000) = 0x82de000\nopen(\"/var/lib/pgsql/data/base/495616/16384\", O_RDWR|O_LARGEFILE) = 26\nread(26, \"N\\1\\0\\0\\244\\1H\\332\\23\\0\\0\\0\\360\\0\\244N\\0\\200\\1\\200\\270\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 27\n_llseek(27, 85000192, [85000192], SEEK_SET) = 0\nwrite(27, \"\\342\\1\\0\\0\\364\\0\\254\\7\\24\\0\\0\\0008\\txQ\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(27) = 0\nread(11, \"\\334\\1\\0\\0\\344\\3422.\\23\\0\\0\\0t\\1\\320e\\360\\177\\1\\200\\244\"...,\n32768)\n= 32768\nmmap2(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)\n=\n0x404b8000\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 27\n_llseek(27, 255524864, [255524864], SEEK_SET) = 0\nwrite(27, \"\\342\\1\\0\\0\\34\\3\\256\\7\\24\\0\\0\\0\\374\\10\\200J\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(27) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16640\", O_RDWR|O_LARGEFILE) = 27\nread(27, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 202047488, [202047488], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0\\240\\20\\263\\7\\24\\0\\0\\0\\220\\7\\30H\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(28) = 0\n_llseek(27, 98304, [98304], SEEK_SET) = 0\nread(27, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\34\\0\\224\\177\\360\\177\\1\\200\\350\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 141524992, [141524992], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0$\\36\\274\\7\\24\\0\\0\\0p\\10(L\\360\\177\\1\\200\\324\\377\"...,\n32768) = 32768\nclose(28) = 0\n_llseek(27, 32768, [32768], SEEK_SET) = 0\nread(27, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\314\\5\\330\\7\\360\\177\\1\\200\\234\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 149422080, [149422080], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0\\250\\36\\274\\7\\24\\0\\0\\0\\214\\3\\230n\\360\\177\\1\\200\"...,\n32768) = 32768\nclose(28) = 0\n_llseek(22, 65536, [65536], SEEK_SET) = 0\nread(22, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\370\\2(\\22\\0\\200\\1\\200h\\3770\\1\"...,\n32768)\n= 32768\nbrk(0) = 0x82de000\nbrk(0x82e2000) = 
0x82e2000\nopen(\"/var/lib/pgsql/data/base/495616/6834165\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 125075456, [125075456], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0\\0\\237\\274\\7\\24\\0\\0\\0<\\t\\0I\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nclose(28) = 0\nread(27, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\314\\3\\3301\\360\\177\\1\\200\\234\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 199000064, [199000064], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0\\304&\\301\\7\\24\\0\\0\\0\\310\\nlJ\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(28) = 0\n_llseek(22, 32768, [32768], SEEK_SET) = 0\nread(22, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0X\\3\\350\\3\\0\\200\\1\\200h\\3770\\1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 28\n_llseek(28, 146145280, [146145280], SEEK_SET) = 0\nwrite(28, \"\\342\\1\\0\\0\\224\\252\\303\\7\\24\\0\\0\\0\\200\\2 r\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(28) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16652\", O_RDWR|O_LARGEFILE) = 28\nread(28, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 29\n_llseek(29, 244252672, [244252672], SEEK_SET) = 0\nwrite(29, \"\\342\\1\\0\\0\\0003\\310\\7\\24\\0\\0\\0\\260\\0074H\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(29) = 0\nread(28, \"\\340\\1\\0\\0\\30*\\262\\241\\24\\0\\0\\0\\210\\4\\224r\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 29\n_llseek(29, 2228224, [2228224], SEEK_SET) = 0\nwrite(29, \"\\342\\1\\0\\0h\\264\\310\\7\\24\\0\\0\\0\\34\\10\\300O\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(29) = 0\nopen(\"/var/lib/pgsql/data/base/495616/1247\", O_RDWR|O_LARGEFILE) = 29\nread(29, \"\\0\\0\\0\\0\\244\\5\\201\\0\\v\\0\\0\\0H\\3\\224\\3\\0\\200\\1\\200h\\377\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 30\n_llseek(30, 20316160, [20316160], SEEK_SET) = 0\nwrite(30, \"\\342\\1\\0\\0@\\270\\312\\7\\24\\0\\0\\0P\\10\\210N\\360\\177\\1\\200\\330\"...,\n32768) = 32768\nclose(30) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16612\", O_RDWR|O_LARGEFILE) = 30\nread(30, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 31\n_llseek(31, 12058624, [12058624], SEEK_SET) = 0\nwrite(31, \"\\342\\1\\0\\0\\340\\301\\320\\7\\24\\0\\0\\0l\\7 N\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(31) = 0\nread(30, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\320\\2\\0u\\360\\177\\1\\200p\\372 \"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 31\n_llseek(31, 281968640, [281968640], SEEK_SET) = 0\nwrite(31, \"\\342\\1\\0\\0$\\317\\331\\7\\24\\0\\0\\0\\270\\1,w\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(31) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16418\", O_RDWR|O_LARGEFILE) = 31\nread(31, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\320\\2\\354a\\0\\200\\1\\200\\324\\377\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 32\n_llseek(32, 336003072, [336003072], SEEK_SET) = 0\nwrite(32, \"\\342\\1\\0\\0\\264Z\\340\\7\\24\\0\\0\\0D\\0104H\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nclose(32) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16641\", 
O_RDWR|O_LARGEFILE) = 32\nread(32, \"\\0\\0\\0\\0\\\\\\242I\\0\\10\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 33\n_llseek(33, 138903552, [138903552], SEEK_SET) = 0\nwrite(33, \"\\342\\1\\0\\0\\300u\\355\\7\\24\\0\\0\\0\\334\\4\\10h\\360\\177\\1\\200\"...,\n32768)\n= 32768\nclose(33) = 0\n_llseek(32, 98304, [98304], SEEK_SET) = 0\nread(32, \"\\0\\0\\0\\0\\\\\\242I\\0\\10\\0\\0\\0\\34\\0\\334\\177\\360\\177\\1\\200\\350\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834165\", O_RDWR|O_LARGEFILE) = 33\n_llseek(33, 6062080, [6062080], SEEK_SET) = 0\nwrite(33, \"\\342\\1\\0\\0t~\\360\\7\\24\\0\\0\\0\\4\\10PP\\360\\177\\1\\200\\330\\377\"...,\n32768) = 32768\nclose(33) = 0\n_llseek(32, 32768, [32768], SEEK_SET) = 0\nread(32, \"\\0\\0\\0\\0\\200\\20\\276\\0\\v\\0\\0\\0\\0\\17,S\\360\\177\\1\\200\\344\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 33\n_llseek(33, 17596416, [17596416], SEEK_SET) = 0\nwrite(33, \"\\342\\1\\0\\0\\314\\376\\360\\7\\24\\0\\0\\0\\24\\10\\360O\\360\\177\\1\"...,\n32768)\n= 32768\nclose(33) = 0\nopen(\"/var/lib/pgsql/data/base/495616/1255\", O_RDWR|O_LARGEFILE) = 33\n_llseek(33, 458752, [458752], SEEK_SET) = 0\nread(33, \"\\0\\0\\0\\0\\270\\10\\276\\0\\v\\0\\0\\0\\300\\1\\370\\1\\0\\200\\1\\2000\"..., 32768)\n=\n32768\nbrk(0) = 0x82e2000\nbrk(0x82e4000) = 0x82e4000\nbrk(0) = 0x82e4000\nbrk(0x82e6000) = 0x82e6000\nbrk(0) = 0x82e6000\nbrk(0x82e7000) = 0x82e7000\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 34\n_llseek(34, 242122752, [242122752], SEEK_SET) = 0\nwrite(34, \"\\342\\1\\0\\0\\340\\201\\362\\7\\24\\0\\0\\0\\224\\2\\334r\\360\\177\\1\"...,\n32768)\n= 32768\nclose(34) = 0\nread(15, \"\\0\\0\\0\\0\\320r\\37\\0\\5\\0\\0\\0\\364\\3\\0\\4\\0\\200\\1\\200\\200\\377\"...,\n32768)\n= 32768\nbrk(0) = 0x82e7000\nbrk(0x82e8000) = 0x82e8000\nopen(\"/var/lib/pgsql/data/base/495616/6834166\", O_RDWR|O_LARGEFILE) = 34\n_llseek(34, 242810880, [242810880], SEEK_SET) = 0\nwrite(34, \"\\342\\1\\0\\0d\\202\\362\\7\\24\\0\\0\\0P\\3\\304o\\360\\177\\1\\200\\334\"...,\n32768) = 32768\nclose(34) = 0\nopen(\"/var/lib/pgsql/data/base/495616/16629\", O_RDWR|O_LARGEFILE) = 34\nread(34, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834165\", O_RDWR|O_LARGEFILE) = 35\n_llseek(35, 33292288, [33292288], SEEK_SET) = 0\nwrite(35, \"\\342\\1\\0\\0\\234\\231\\375\\7\\24\\0\\0\\0(\\10xO\\360\\177\\1\\200\\330\"...,\n32768) = 32768\nclose(35) = 0\nread(34, \"\\340\\1\\0\\0\\244\\204\\264\\241\\24\\0\\0\\0H\\2Ty\\360\\177\\1\\200\"..., 32768)\n=\n32768\nbrk(0) = 0x82e8000\nbrk(0x82e9000) = 0x82e9000\nbrk(0) = 0x82e9000\nbrk(0x82eb000) = 0x82eb000\nbrk(0) = 0x82eb000\nbrk(0x82ec000) = 0x82ec000\nbrk(0) = 0x82ec000\nbrk(0x82ed000) = 0x82ed000\nopen(\"/var/lib/pgsql/data/base/495616/6834163\", O_RDWR|O_LARGEFILE) = 35\n_llseek(35, 4456448, [4456448], SEEK_SET) = 0\nwrite(35, \"\\342\\1\\0\\0\\364\\31\\376\\7\\24\\0\\0\\0\\34\\10\\300O\\360\\177\\1\\200\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/16647\", O_RDWR|O_LARGEFILE) = 36\nread(36, \"\\0\\0\\0\\0\\4\\307}\\0\\v\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nbrk(0) = 0x82ed000\nbrk(0x82ee000) = 0x82ee000\nopen(\"/var/lib/pgsql/data/base/495616/6834170\", O_RDWR|O_LARGEFILE) = 37\n_llseek(37, 265158656, [265158656], SEEK_SET) = 
0\nwrite(37, \"\\342\\1\\0\\0@\\34\\377\\7\\24\\0\\0\\0\\224\\7lG\\360\\177\\1\\200\\324\"...,\n32768)\n= 32768\nread(36, \"\\336\\1\\0\\0000\\327V\\272\\24\\0\\0\\0\\210\\5 j\\360\\177\\1\\200@\"..., 32768)\n=\n32768\n_llseek(35, 161415168, [161415168], SEEK_SET) = 0\nwrite(35, \"\\342\\1\\0\\0p\\35\\0\\10\\24\\0\\0\\0D\\t\\320H\\360\\177\\1\\200\\330\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/16408\", O_RDWR|O_LARGEFILE) = 38\nread(38, \"\\336\\1\\0\\0\\224\\273V\\272\\24\\0\\0\\0H\\2h\\2\\0\\200\\1\\2004\\377\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/6834173\", O_RDWR|O_LARGEFILE) = 39\n_llseek(39, 133332992, [133332992], SEEK_SET) = 0\nwrite(39, \"\\342\\1\\0\\0\\34\\340(\\10\\24\\0\\0\\0p\\3\\334l\\360\\177\\1\\200\\330\"...,\n32768) = 32768\nopen(\"/var/lib/pgsql/data/base/495616/16604\", O_RDWR|O_LARGEFILE) = 40\nread(40, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\n_llseek(39, 244875264, [244875264], SEEK_SET) = 0\nwrite(39, \"\\342\\1\\0\\0\\264\\343*\\10\\24\\0\\0\\0L\\2Ht\\360\\177\\1\\200\\334\"...,\n32768)\n= 32768\nread(40, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\264\\3pq\\360\\177\\1\\200\\340\\366\"...,\n32768)\n= 32768\n_llseek(37, 119635968, [119635968], SEEK_SET) = 0\nwrite(37, \"\\342\\1\\0\\0\\350k.\\10\\24\\0\\0\\0\\334\\6(N\\360\\177\\1\\200\\324\"...,\n32768)\n= 32768\n_llseek(38, 65536, [65536], SEEK_SET) = 0\nread(38, \"\\336\\1\\0\\0\\214\\272V\\272\\24\\0\\0\\0\\334\\0\\370\\0\\0\\200\\1\\200\"...,\n32768)\n= 32768\n_llseek(37, 103841792, [103841792], SEEK_SET) = 0\nwrite(37, \"\\342\\1\\0\\0\\24t3\\10\\24\\0\\0\\0\\300\\6@M\\360\\177\\1\\200\\324\\377\"...,\n32768) = 32768\n_llseek(38, 32768, [32768], SEEK_SET) = 0\nread(38, \"\\336\\1\\0\\0\\260\\325V\\272\\24\\0\\0\\0000\\2\\200\\2\\0\\200\\1\\200\"...,\n32768)\n= 32768\nbrk(0) = 0x82ee000\nbrk(0x82f0000) = 0x82f0000\nbrk(0) = 0x82f0000\nbrk(0x82f2000) = 0x82f2000\ngettimeofday({1079482178, 920849}, NULL) = 0\nbrk(0) = 0x82f2000\nbrk(0x82f4000) = 0x82f4000\nbrk(0) = 0x82f4000\nbrk(0x82f6000) = 0x82f6000\nbrk(0) = 0x82f6000\nbrk(0x82fa000) = 0x82fa000\nbrk(0) = 0x82fa000\nbrk(0x8302000) = 0x8302000\n_llseek(37, 79331328, [79331328], SEEK_SET) = 0\nwrite(37, \"\\342\\1\\0\\0\\200\\3747\\10\\24\\0\\0\\0\\300\\0068N\\360\\177\\1\\200\"...,\n32768)\n= 32768\nopen(\"/var/lib/pgsql/data/base/495616/16653\", O_RDWR|O_LARGEFILE) = 41\nread(41, \"\\0\\0\\0\\0\\20\\0\\0\\0\\1\\0\\0\\0\\24\\0\\360\\177\\360\\177\\1\\200b1\"..., 32768)\n=\n32768\nopen(\"/var/lib/pgsql/data/base/495616/6834168\", O_RDWR|O_LARGEFILE) = 42\n_llseek(42, 262144, [262144], SEEK_SET) = 0\nwrite(42, \"\\342\\1\\0\\0000\\3758\\10\\24\\0\\0\\0\\0\\6Xb\\360\\177\\1\\200\\320\"...,\n32768)\n= 32768\nread(41, \"\\340\\1\\0\\0\\224*\\262\\241\\24\\0\\0\\0\\210\\4T+\\360\\177\\1\\200\"..., 32768)\n=\n32768\nbrk(0) = 0x8302000\nbrk(0x8304000) = 0x8304000\ngettimeofday({1079482178, 957454}, NULL) = 0\ngettimeofday({1079482178, 957580}, NULL) = 0\nsend(6, \"\\4\\0\\0\\0\\334\\3\\0\\0\\7\\0\\0\\0zU\\0\\0\\0\\220\\7\\0\\1\\0\\0\\0\\16\\0\"..., 988,\n0)\n= 988\nsend(6, \"\\4\\0\\0\\0\\334\\3\\0\\0\\7\\0\\0\\0zU\\0\\0\\0\\220\\7\\0\\1\\0\\0\\0\\16\\0\"..., 988,\n0)\n= 988\nsend(6, \"\\4\\0\\0\\0\\334\\3\\0\\0\\7\\0\\0\\0zU\\0\\0\\0\\220\\7\\0\\1\\0\\0\\0\\16\\0\"..., 988,\n0)\n= 988\nsend(6, \"\\4\\0\\0\\0\\274\\1\\0\\0\\7\\0\\0\\0zU\\0\\0\\0\\220\\7\\0\\1\\0\\0\\0\\6\\0\"..., 444, 0)\n=\n444\nsend(9, \"T\\0\\0\\0#\\0\\1QUERY 
PLAN\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\31\\377\\377\\377\"..., 394,\n0) = 394\nrecv(9, \"X\\0\\0\\0\\4\", 8192, 0) = 5\nexit_group(0) = ?\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Tue, 16 Mar 2004 18:20:39 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n> `strace -p 21882` run behind the below query and plan ... below that.\n\nHmm ... that took 20 seconds eh?\n\nIt is a fairly interesting trace. It shows that the backend needed to\nread 63 system catalog pages (that weren't already in shared memory),\nwhich is not too unreasonable I think ... though I wonder if more of\nthem shouldn't have been in memory already. The odd thing is that for\n*every single read* it was necessary to first dump out a dirty page\nin order to make a buffer free. That says you are running with the\nentire contents of shared buffer space dirty at all times. That's\nprobably not the regime you want to be operating in. I think we already\nsuggested increasing shared_buffers. You might also want to think about\nnot using such a large checkpoint interval. (The background-writing\nlogic already committed for 7.5 should help this problem, but it's not\nthere in 7.4.)\n\nAnother interesting fact is that the bulk of the writes were \"blind\nwrites\", involving an open()/write()/close() sequence instead of keeping\nthe open file descriptor around for re-use. This is not too surprising\nin a freshly started backend, I guess; it's unlikely to have had reason\nto create a relation descriptor for the relations it may have to dump\npages for. In some Unixen, particularly Solaris, open() is fairly\nexpensive and so blind writes are bad news. I didn't think it was a big\nproblem in Linux though. (This is another area we've improved for 7.5:\nthere are no more blind writes. But that won't help you today.)\n\nWhat's not immediately evident is whether the excess I/O accounted for\nall of the slowdown. Could you retry the strace with -r and -T options\nso we can see how much time is being spent inside and outside the\nsyscalls?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Mar 2004 20:22:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "Quick observations:\n\n1. We have an explanation for what's going on, based on the message being\nexactly 666 lines long :-)\n2. I'm clueless on the output, but perhaps Tom can see something. A quick\nglance shows that the strace seemed to run 27 seconds, during which it did:\n count| call\n -------|---------\n 84 | _llseek\n 40 | brk\n 54 | close\n 88 | open\n 63 | read\nin other words, nothing much (though it did *a lot* of opens and closes of\ndb files to do nothing ).\n\nCan you do another strace for a few minutes against the actual update query\nadding the -c/-t options and control-c out?\n\n----- Original Message ----- \nFrom: \"Rosser Schwarz\" <[email protected]>\nTo: \"'Tom Lane'\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, March 16, 2004 7:20 PM\nSubject: Re: [PERFORM] atrocious update performance\n\n\nwhile you weren't looking, Tom Lane wrote:\n\n[trace]\n\n`strace -p 21882` run behind the below query and plan ... 
below that.\n\n# explain update account.cust set prodid = tempprod.prodid, subprodid =\ntempprod.subprodid where origid = tempprod.debtid;\n", "msg_date": "Tue, 16 Mar 2004 22:25:16 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "On Tuesday 16 March 2004 00:08, Tom Lane wrote:\n>\n> I'm inclined to suspect an issue with foreign-key checking. You didn't\n> give us any details about foreign key relationships your \"cust\" table is\n> involved in --- could we see those? And the schemas of the other tables\n> involved?\n\nTwo questions Tom:\n1. Do the stats tables record FK checks, or just explicit table accesses?\n2. If not, should they?\n\nIf the only real activity is this update then simple before/after views of the \nstats might be revealing.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 17 Mar 2004 09:11:37 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Two questions Tom:\n> 1. Do the stats tables record FK checks, or just explicit table accesses?\n\nThe stats record everything, IIRC.\n\n> If the only real activity is this update then simple before/after\n> views of the stats might be revealing.\n\nThat's quite a good thought, though since Rosser's system is live it\nmight be hard to get a view of just one query's activity.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 2004 10:16:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> What's not immediately evident is whether the excess I/O accounted for\n> all of the slowdown. Could you retry the strace with -r and -T options\n> so we can see how much time is being spent inside and outside the\n> syscalls?\n\nUnlike the previous run (this is a trace of the explain), this one went\nimmediately. No delay.\n\nI also have, per Aaron's request, a trace -ct against the backend running\nthe explain analyze. I killed it well before \"a few minutes\"; it's just\nshy of 900K. 
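[Note: a sketch of the before/after statistics snapshot Richard and Tom describe a little above, assuming the stats collector and row-level stats are enabled in postgresql.conf; take it once before the update, again afterwards, and diff. Foreign-key checks show up as extra scans and tuple reads against the referencing tables (acct, note, origacct) even though the UPDATE itself never names them.]

    SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
           n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     WHERE schemaname = 'account'
     ORDER BY relname;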
I don't think I'll be forwarding that on to the list, though\nI can put it up on a web server somewhere easily enough.\n\nTry <http://www.totalcardinc.com/pg/postmaster.trace>.\n\n# `strace -rT -p 25075`\n 0.000000 read(0, \"\\r\", 1) = 1 <5.514983>\n 5.516215 write(1, \"\\n\", 1) = 1 <0.000034>\n 0.000545 rt_sigprocmask(SIG_BLOCK, [INT], [33], 8) = 0 <0.000013>\n 0.000200 ioctl(0, SNDCTL_TMR_STOP, {B38400 opost isig icanon echo ...})\n= 0\n <0.000032>\n 0.000162 rt_sigprocmask(SIG_SETMASK, [33], NULL, 8) = 0 <0.000013>\n 0.000120 rt_sigaction(SIGINT, {0x804d404, [], SA_RESTORER|SA_RESTART,\n0x420\n276f8}, {0x401ec910, [], SA_RESTORER, 0x420276f8}, 8) = 0 <0.000015>\n 0.000154 rt_sigaction(SIGTERM, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000014>\n 0.000136 rt_sigaction(SIGQUIT, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000134 rt_sigaction(SIGALRM, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000012>\n 0.000164 rt_sigaction(SIGTSTP, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000140 rt_sigaction(SIGTTOU, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000135 rt_sigaction(SIGTTIN, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000135 rt_sigaction(SIGWINCH, {SIG_DFL}, {0x401ec9d0, [],\nSA_RESTORER, 0x\n420276f8}, 8) = 0 <0.000014>\n 0.000250 rt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_DFL}, 8) = 0 <0.000013>\n 0.000138 send(3, \"Q\\0\\0\\0}explain update account.cust\"..., 126, 0) =\n126 <0\n.000032>\n 0.000164 rt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_IGN}, 8) = 0 <0.000013>\n 0.000132 poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN}], 1, -1) =\n1 <\n0.222093>\n 0.222388 recv(3, \"T\\0\\0\\0#\\0\\1QUERY\nPLAN\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\31\\377\\377\\377\n\"..., 16384, 0) = 394 <0.000031>\n 0.000360 ioctl(0, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo\n...})\n = 0 <0.000019>\n 0.000137 ioctl(1, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo\n...})\n = 0 <0.000013>\n 0.000135 ioctl(1, TIOCGWINSZ, {ws_row=64, ws_col=80, ws_xpixel=0,\nws_ypixel\n=0}) = 0 <0.000015>\n 0.000175 write(1, \" \"..., 92) = 92\n<0.000038\n>\n 0.000184 write(1, \"--------------------------------\"..., 92) = 92\n<0.000025\n>\n 0.000154 write(1, \" Merge Join (cost=0.00..232764.\"..., 59) = 59\n<0.000023\n>\n 0.000136 write(1, \" Merge Cond: ((\\\"outer\\\".origid)\"..., 65) = 65\n<0.0000\n23>\n 0.000134 write(1, \" -> Index Scan using ix_origi\"..., 88) = 88\n<0.000025\n>\n 0.000129 write(1, \" -> Index Scan using ix_debti\"..., 91) = 91\n<0.000025\n>\n 0.000136 write(1, \"(4 rows)\\n\", 9) = 9 <0.000022>\n 0.000116 write(1, \"\\n\", 1) = 1 <0.000021>\n 0.000144 rt_sigprocmask(SIG_BLOCK, NULL, [33], 8) = 0 <0.000013>\n 0.000121 rt_sigaction(SIGINT, {0x804d404, [], SA_RESTORER|SA_RESTART,\n0x420\n276f8}, {0x804d404, [], SA_RESTORER|SA_RESTART, 0x420276f8}, 8) = 0\n<0.000015>\n 0.000208 rt_sigprocmask(SIG_BLOCK, [INT], [33], 8) = 0 <0.000013>\n 0.000129 ioctl(0, TIOCGWINSZ, {ws_row=64, ws_col=80, ws_xpixel=0,\nws_ypixel\n=0}) = 0 <0.000013>\n 0.000102 ioctl(0, TIOCSWINSZ, {ws_row=64, ws_col=80, ws_xpixel=0,\nws_ypixel\n=0}) = 0 <0.000014>\n 0.000105 ioctl(0, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo\n...})\n = 0 <0.000013>\n 0.000127 ioctl(0, SNDCTL_TMR_STOP, {B38400 opost isig -icanon -echo\n...}) =\n 0 <0.000028>\n 0.000147 rt_sigprocmask(SIG_SETMASK, [33], NULL, 8) = 0 <0.000012>\n 0.000114 rt_sigaction(SIGINT, 
{0x401ec910, [], SA_RESTORER,\n0x420276f8}, {0\nx804d404, [], SA_RESTORER|SA_RESTART, 0x420276f8}, 8) = 0 <0.000012>\n 0.000149 rt_sigaction(SIGTERM, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000013>\n 0.000136 rt_sigaction(SIGQUIT, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000012>\n 0.000136 rt_sigaction(SIGALRM, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000012>\n 0.000136 rt_sigaction(SIGTSTP, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000013>\n 0.000136 rt_sigaction(SIGTTOU, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000012>\n 0.000136 rt_sigaction(SIGTTIN, {0x401ec910, [], SA_RESTORER,\n0x420276f8}, {\nSIG_DFL}, 8) = 0 <0.000013>\n 0.000212 rt_sigaction(SIGWINCH, {0x401ec9d0, [], SA_RESTORER,\n0x420276f8},\n{SIG_DFL}, 8) = 0 <0.000012>\n 0.000188 write(1, \"\\r\\rtci=# \\rtci=# \", 15) = 15 <0.000019>\n 0.000112 rt_sigprocmask(SIG_BLOCK, NULL, [33], 8) = 0 <0.000012>\n 0.000110 read(0, \"\\\\\", 1) = 1 <18.366895>\n 18.368284 write(1, \"\\rtci=# \\\\\\rtci=# \\\\\", 16) = 16 <0.000029>\n 0.000134 rt_sigprocmask(SIG_BLOCK, NULL, [33], 8) = 0 <0.000013>\n 0.000125 read(0, \"q\", 1) = 1 <0.117572>\n 0.117719 write(1, \"\\rtci=# \\\\q\\rtci=# \\\\q\", 18) = 18 <0.000020>\n 0.000118 rt_sigprocmask(SIG_BLOCK, NULL, [33], 8) = 0 <0.000012>\n 0.000107 read(0, \"\\r\", 1) = 1 <1.767409>\n 1.767604 write(1, \"\\n\", 1) = 1 <0.000032>\n 0.000140 rt_sigprocmask(SIG_BLOCK, [INT], [33], 8) = 0 <0.000013>\n 0.000138 ioctl(0, SNDCTL_TMR_STOP, {B38400 opost isig icanon echo ...})\n= 0\n <0.000030>\n 0.000143 rt_sigprocmask(SIG_SETMASK, [33], NULL, 8) = 0 <0.000013>\n 0.000111 rt_sigaction(SIGINT, {0x804d404, [], SA_RESTORER|SA_RESTART,\n0x420\n276f8}, {0x401ec910, [], SA_RESTORER, 0x420276f8}, 8) = 0 <0.000014>\n 0.000153 rt_sigaction(SIGTERM, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000134 rt_sigaction(SIGQUIT, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000134 rt_sigaction(SIGALRM, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000133 rt_sigaction(SIGTSTP, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000134 rt_sigaction(SIGTTOU, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000013>\n 0.000134 rt_sigaction(SIGTTIN, {SIG_DFL}, {0x401ec910, [], SA_RESTORER,\n0x4\n20276f8}, 8) = 0 <0.000012>\n 0.000134 rt_sigaction(SIGWINCH, {SIG_DFL}, {0x401ec9d0, [],\nSA_RESTORER, 0x\n420276f8}, 8) = 0 <0.000014>\n 0.001271 rt_sigaction(SIGINT, {SIG_DFL}, {0x804d404, [],\nSA_RESTORER|SA_RES\nTART, 0x420276f8}, 8) = 0 <0.000013>\n 0.000532 rt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_DFL}, 8) = 0 <0.000014>\n 0.000145 send(3, \"X\\0\\0\\0\\4\", 5, 0) = 5 <0.000028>\n 0.000126 rt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_IGN}, 8) = 0 <0.000013>\n 0.000140 close(3) = 0 <0.000033>\n 0.000147 rt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_DFL}, 8) = 0 <0.000013>\n 0.000197 open(\"/var/lib/pgsql/.psql_history\", O_WRONLY|O_CREAT|O_TRUNC,\n060\n0) = 3 <0.000168>\n 0.000694 write(3, \"\\\\d payment.batch\\nalter sequence \"..., 16712) =\n16712 <\n0.000209>\n 0.000311 close(3) = 0 <0.000057>\n 0.055587 munmap(0x40030000, 4096) = 0 <0.000032>\n 0.000130 exit_group(0) = ?\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc. 
\n\n", "msg_date": "Wed, 17 Mar 2004 11:33:31 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n>> Could you retry the strace with -r and -T options\n\n> Unlike the previous run (this is a trace of the explain), this one went\n> immediately. No delay.\n\nHm. It looks like you mistakenly traced psql rather than the backend,\nbut since the delay went away we wouldn't have learned anything anyhow.\nHave you got any idea what conditions may have changed between seeing\ndelay and not seeing delay?\n\n> I also have, per Aaron's request, a trace -ct against the backend running\n> the explain analyze. I killed it well before \"a few minutes\"; it's just\n> shy of 900K. I don't think I'll be forwarding that on to the list, though\n> I can put it up on a web server somewhere easily enough.\n> Try <http://www.totalcardinc.com/pg/postmaster.trace>.\n\nThis is pretty odd too. It looks like it's doing checkpoints every so\noften (look for the writes to pg_control), which a backend engaged in\na long-running query surely ought not be doing. Need to think about\nwhy that might be...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 2004 12:58:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> Hm. It looks like you mistakenly traced psql rather than the backend,\n> but since the delay went away we wouldn't have learned \n> anything anyhow.\n> Have you got any idea what conditions may have changed between seeing\n> delay and not seeing delay?\n\nNone, offhand. I have noticed that when a large query is running,\nthe machine can sporadically just freeze--or at least take inordinately\nlong for some other process, be it top or ls, another query, or whatever\nto start. Nothing looks saturated when it happens, and, while you can\ncount on it to happen, it's not consistent enough to reproduce.\n\n> This is pretty odd too. It looks like it's doing checkpoints every so\n> often (look for the writes to pg_control), which a backend engaged in\n> a long-running query surely ought not be doing. Need to think about\n> why that might be...\n\nDoes the fact that all the reads and writes are 32K mean anything out\nof the ordinary? $PGSRC/src/include/pg_config_manual.h has BLCKSZ\n#defined to 16384. I was running previously with a 32K BLCKSZ, but\nthat turned out to be rather sub-optimal for as heavily indexed as our\ntables are. I've dumped and rebuilt several times since then.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Wed, 17 Mar 2004 12:42:22 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n> while you weren't looking, Tom Lane wrote:\n>> Have you got any idea what conditions may have changed between seeing\n>> delay and not seeing delay?\n\n> None, offhand. I have noticed that when a large query is running,\n> the machine can sporadically just freeze--or at least take inordinately\n> long for some other process, be it top or ls, another query, or whatever\n> to start. Nothing looks saturated when it happens, and, while you can\n> count on it to happen, it's not consistent enough to reproduce.\n\nInteresting. 
You should leave \"vmstat 1\" running in the background and\nsee if you can correlate these freezes with bursts of disk I/O or swap.\nI saw a couple of delays in your big strace that seemed odd --- a couple\nof one-second-plus intervals, and a four-second-plus interval, with no\nobvious reason for them. Perhaps the same issue?\n\n> Does the fact that all the reads and writes are 32K mean anything out\n> of the ordinary? $PGSRC/src/include/pg_config_manual.h has BLCKSZ\n> #defined to 16384. I was running previously with a 32K BLCKSZ, but\n> that turned out to be rather sub-optimal for as heavily indexed as our\n> tables are. I've dumped and rebuilt several times since then.\n\nI hate to break it to you, but that most definitely means you are\nrunning with BLCKSZ = 32K. Whatever you thought you were rebuilding\ndidn't take effect.\n\nI agree that the larger blocksize is of dubious value. People used to\ndo that back when the blocksize limited your row width, but these days\nI think you're probably best off with the standard 8K.\n\nAnother thing that's fairly striking is the huge bursts of WAL activity\n--- your trace shows that the thing is writing entire WAL segments (16\nMB) at one go, rather than dribbling it out a page or two at a time as\nthe code is intended to do. I think what is happening is that you have\nwal_buffers = 1024 (correct?) yielding 32MB of WAL buffers, and since\nthere are no other transactions happening, nothing gets written until\nyou hit the \"write when the buffers are half full\" heuristic. I would\nsuggest knocking wal_buffers down to something closer to the default\n(maybe 4 or 8 buffers) to reduce these I/O storms. (Memo to hackers:\nwe need to see to it that the new background writer process takes some\nresponsibility for pushing out filled WAL buffers, not only data\nbuffers.)\n\nIf the big EXPLAIN ANALYZE is still running, would you get a dump of its\nopen files (see \"lsof -p\") and correlate those with the tables being\nused in the query? I'm trying to figure out what the different writes\nand reads represent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 2004 14:48:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "while you weren't looking, Tom Lane wrote:\n\n> I hate to break it to you, but that most definitely means you are\n> running with BLCKSZ = 32K. Whatever you thought you were rebuilding\n> didn't take effect.\n\nI saw that and thought so. The other day, I was rooting around in\n$PGDATA, and saw a lot of 32K files and wondered for a moment, too.\nIf that's the case, though, that's ... weird.\n \n> I agree that the larger blocksize is of dubious value. People used to\n> do that back when the blocksize limited your row width, but these days\n> I think you're probably best off with the standard 8K.\n\nI'd been experimenting with larger blocksizes after we started seeing\na lot of seqscans in query plans. 32K proved quickly that it hurts\nindex scan performance, so I was--I thought--trying 16.\n\n> If the big EXPLAIN ANALYZE is still running, would you get a dump of its\n> open files (see \"lsof -p\") and correlate those with the tables being\n> used in the query? I'm trying to figure out what the different writes\n> and reads represent.\n\nIt looks rather like it's hitting the foreign keys; one of the files\nthat shows is the account.note table, which has an fk to the pk of the\ntable being updated. The file's zero size, but it's open. 
The only\nreason it should be open is if foreign keys are being checked, yes?\n\nYou'd said that the foreign keys were only checked if last-change is\nafter current-query, as of 7.3.4, yes? `rpm -qa postgresql` comes up\nwith 7.3.2-3, which makes no sense, 'cos I know I removed it before\ninstalling current; I remember making sure no-one was using pg on this\nmachine, and remember saying rpm -e.\n\nRegardless, something thinks it's still there. Is there any way that\nit is, and that I've somehow been running 7.3.2 all along? `which\npsql`, &c show the bindir from my configure, but I'm not sure that's\nsufficient.\n\nHow would I tell? I don't remember any of the binaries having a\n--version argument.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Wed, 17 Mar 2004 14:34:07 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "I wrote:\n\n> Regardless, something thinks it's still there. Is there any way that\n> it is, and that I've somehow been running 7.3.2 all along? `which\n> psql`, &c show the bindir from my configure, but I'm not sure that's\n> sufficient.\n\nThe weird thing is that I know I never built 7.3.anything with 32K\nBLCKSZ, never built 7.3.anything at all. If 7.3 were installed, would\nit have any problem reading a 7.4 cluster?\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Wed, 17 Mar 2004 14:43:36 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "\"Rosser Schwarz\" <[email protected]> writes:\n>> Regardless, something thinks it's still there. Is there any way that\n>> it is, and that I've somehow been running 7.3.2 all along? `which\n>> psql`, &c show the bindir from my configure, but I'm not sure that's\n>> sufficient.\n\n\"select version()\" is the definitive test for backend version.\n\n> The weird thing is that I know I never built 7.3.anything with 32K\n> BLCKSZ, never built 7.3.anything at all. If 7.3 were installed, would\n> it have any problem reading a 7.4 cluster?\n\n7.3 would refuse to start on a 7.4 cluster, and vice versa.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Mar 2004 15:52:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "I've been following this thread closely as I have the same problem\nwith an UPDATE. Everything is identical here right down to the\nstrace output.\n\nHas anyone found a workaround or resolved the problem? If not,\nI have test systems here which I can use to help up test and explore.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n", "msg_date": "Mon, 22 Mar 2004 09:08:28 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Greg Spiegelberg wrote:\n\n> I've been following this thread closely as I have the same problem\n> with an UPDATE. Everything is identical here right down to the\n> strace output.\n\n> Has anyone found a workaround or resolved the problem? If not,\n> I have test systems here which I can use to help up test and explore.\n\nI'm still gathering data. 
The explain analyze I'd expected to finish\nThursday afternoon hasn't yet. I'm going to kill it and try a few\nsmaller runs, increasing in size, until the behavior manifests.\n\nWill advise.\n\n/rls\n\n", "msg_date": "Tue, 23 Mar 2004 10:07:57 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Rosser Schwarz wrote:\n> Greg Spiegelberg wrote:\n> \n> \n>>I've been following this thread closely as I have the same problem\n>>with an UPDATE. Everything is identical here right down to the\n>>strace output.\n> \n> \n>>Has anyone found a workaround or resolved the problem? If not,\n>>I have test systems here which I can use to help up test and explore.\n> \n> \n> I'm still gathering data. The explain analyze I'd expected to finish\n> Thursday afternoon hasn't yet. I'm going to kill it and try a few\n> smaller runs, increasing in size, until the behavior manifests.\n> \n> Will advise.\n\nI've replaced my atrocious UPDATE with the following.\n\nbegin;\n-- Drop all contraints\nalter table ORIG drop constraint ...;\n-- Drop all indexes\ndrop index ...;\n-- Update\nupdate ORIG set column=... where...;\ncommit;\n\nProblem is when I recreate the indexes and add the constraints back\non ORIG I end up with the same long running process. The original\nUPDATE runs for about 30 minutes on a table of 400,000 with the\nWHERE matching about 70% of the rows. The above runs for about 2\nminutes without adding the constraints or indexes however adding the\nconstraints and creating the dropped indexes negates any gain.\n\nRedHat 7.3 + Kernel 2.4.24 + ext3 + PostgreSQL 7.3.5\nDual PIII 1.3'ishGHz, 2GB Memory\nU160 OS drives and a 1Gbps test SAN on a Hitachi 9910\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. 
Focus.\n\n\n", "msg_date": "Tue, 23 Mar 2004 17:27:23 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Greg Spiegelberg <[email protected]> writes:\n> RedHat 7.3 + Kernel 2.4.24 + ext3 + PostgreSQL 7.3.5\n ^^^^^^^^^^^^^^^^\n\nPlease try 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 17:52:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance " }, { "msg_contents": "Greg Spiegelberg wrote:\n\n> > Will advise.\n\nAfter creating 100, 1K, 10K, 100K and 1M-row subsets of account.cust and\nthe corresponding rows/tables with foreign key constraints referring to\nthe table, I'm unable to reproduce the behavior at issue.\n\nexplain analyze looks like the following, showing the query run with the\njoin column indexed and not, respectively:\n\n# explain analyze update test.cust100 set prodid = tempprod.prodid,\nsubprodid = tempprod.subprodid where origid = tempprod.debtid;\n-- with index\n QUERY PLAN \n-----------------------------------------------------------------------\n Merge Join (cost=0.00..25.64 rows=500 width=220) (actual\n time=0.241..13.091 rows=100 loops=1)\n Merge Cond: ((\"outer\".origid)::text = (\"inner\".debtid)::text)\n -> Index Scan using ix_origid_cust100 on cust100 (cost=0.00..11.50\n rows=500 width=204) (actual time=0.125..6.465 rows=100 loops=1)\n -> Index Scan using ix_debtid on tempprod (cost=0.00..66916.71\n rows=4731410 width=26) (actual time=0.057..1.497 rows=101 loops=1)\n Total runtime: 34.067 ms\n(5 rows)\n\n-- without index\n QUERY PLAN \n----------------------------------------------------------------------\n Merge Join (cost=7.32..16.71 rows=100 width=220) (actual\n time=4.415..10.918 rows=100 loops=1)\n Merge Cond: ((\"outer\".debtid)::text = \"inner\".\"?column22?\")\n -> Index Scan using ix_debtid on tempprod (cost=0.00..66916.71\n rows=4731410 width=26) (actual time=0.051..1.291 rows=101 loops=1)\n -> Sort (cost=7.32..7.57 rows=100 width=204) (actual\n time=4.311..4.450 rows=100 loops=1)\n Sort Key: (cust100.origid)::text\n -> Seq Scan on cust100 (cost=0.00..4.00 rows=100 width=204)\n (actual time=0.235..2.615 rows=100 loops=1)\n Total runtime: 25.031 ms\n(7 rows)\n\nWith the join column indexed, it takes roughly .32ms/row on the first\nfour tests (100.. 100K), and about .48ms/row on 1M rows. Without the\nindex, it runs 100 rows @ .25/row, 1000 @ .26, 10000 @ .27, 100000 @\n.48 and .5 @ 1M rows.\n\nIn no case does the query plan reflect foreign key validation. Failing\nany other suggestions for diagnosis in the soon, I'm going to nuke the\nPostgreSQL install, scour it from the machine and start from scratch.\nFailing that, I'm going to come in some weekend and re-do the machine.\n\n> Problem is when I recreate the indexes and add the constraints back\n> on ORIG I end up with the same long running process. The original\n> UPDATE runs for about 30 minutes on a table of 400,000 with the\n> WHERE matching about 70% of the rows. The above runs for about 2\n> minutes without adding the constraints or indexes however adding the\n> constraints and creating the dropped indexes negates any gain.\n\nIs this a frequently-run update?\n\nIn my experience, with my seemingly mutant install, dropping indices\nand constraints to shave 14/15 off the update time would be worth the\neffort. Just script dropping, updating and recreating into one large\ntransaction. 
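A rough sketch of what that one big transaction might look like -- every table, column, index and constraint name below is a stand-in rather than the real schema from this thread, and the foreign key shown is one that points at the table being updated, so it has to be dropped and re-added on the referencing table:

    BEGIN;

    -- Drop the foreign keys that reference the table being updated (they
    -- live on the referencing tables) and any indexes on the columns the
    -- UPDATE will modify, so they aren't maintained row by row.
    ALTER TABLE referencing_tbl DROP CONSTRAINT referencing_tbl_custid_fkey;
    DROP INDEX ix_target_prodid;

    UPDATE target_tbl
       SET prodid    = tempprod.prodid,
           subprodid = tempprod.subprodid
      FROM tempprod
     WHERE target_tbl.origid = tempprod.debtid;

    -- Put everything back.  Note that ADD CONSTRAINT ... FOREIGN KEY
    -- re-checks every referencing row, so this step is not free either.
    CREATE INDEX ix_target_prodid ON target_tbl (prodid);
    ALTER TABLE referencing_tbl
      ADD CONSTRAINT referencing_tbl_custid_fkey
          FOREIGN KEY (custid) REFERENCES target_tbl (custid);

    COMMIT;

Keeping it all in one transaction means other sessions never see the table without its constraints, though they will queue behind the ALTER TABLE locks while it runs.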
It's a symptom-level fix, but re-creating the fifteen\nindices on one of our 5M row tables doesn't take 28 minutes, and your\nhardware looks to be rather less IO and CPU bound than ours. I'd also\nsecond Tom's suggestion of moving to 7.4.\n\n/rls\n\n", "msg_date": "Wed, 24 Mar 2004 10:38:35 -0600", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "After deinstalling and scrubbing PostgreSQL from my server and doing\na clean build using a vanilla 7.4.2 tree, I'm rather more confident\nthat foreign key validation is at cause in my performance problems.\n\nI recreated my schemas and ran the original update, with foreign\nkeys referring to the identity column of the target table. The\nupdate took roughly two days, as I'd predicted based on my analysis\nof the previous installation. (I can't say how long with certainty,\nbeyond that it finished some time between when I left work one night\nand came in the next morning, the second day after starting the\nquery.) I'm not sure what was wrong with the previous install, such\nthat the update took several days; two-ish days is long enough.\n\nJust this morning, however, I created a copy of the target table (all\n4.7M rows), with absolutely no foreign keys referring to it, and ran\nthe update against the copy. That update took 2300 seconds. The\njoin columns were indexed in both cases.\n\nI'm in the process of migrating the machine to run kernel 2.6.4,\nfollowing the thread started by Gary, though I suspect that the\nkernel revision is moot with respect to whether or not foreign keys\nare being incorrectly validated. I can keep the 2.4 kernel and\nmodules around to run using the current versions for testing\npurposes, though any such work would necessarily be off-hours.\n\nPlease advise of anything I can do to help narrow down the specific\ncause of the issue; I know just enough C to be mildly dangerous.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Mon, 5 Apr 2004 12:05:37 -0500", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "On 5 Apr 2004 at 12:05, Rosser Schwarz wrote:\n\n> Just this morning, however, I created a copy of the target table (all\n> 4.7M rows), with absolutely no foreign keys referring to it, and ran\n> the update against the copy. That update took 2300 seconds. The\n> join columns were indexed in both cases.\n\nHave you added indexes for the custid column for tables account.acct accunt.orgacct \nand note?\n\nI haven't followed the entire thread but it you have cascading FK on those tables \nwithout an index on the column that could cause your delay.\n\nKevin Barnard\nSpeedFC\n\n\n", "msg_date": "Mon, 05 Apr 2004 13:04:52 -0500", "msg_from": "\"Kevin Barnard\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "while you weren't looking, Kevin Barnard wrote:\n\n> Have you added indexes for the custid column for tables \n> account.acct accunt.orgacct and note?\n\nThey were indexed in the original case, yes. There was no\nneed to index them in today's test case, as that was done\npurely in attempt to rule in or out foreign key validation\nas the cause of the performance hit. 
No foreign keys that\nmight be validated, no need to index the foreign key columns.\n\n> I haven't followed the entire thread but it you have \n> cascading FK on those tables without an index on the\n> column that could cause your delay.\n\nThe issue is that the foreign keys are being validated at\nall, when the column being referenced by those foreign keys\n(account.cust.custid) is never touched.\n\nRegardless of whether or not the referencing columns are\nindexed, validating them at all--in this specific case--is\nbroken. The column they refer to is never touched; they\nshould remain utterly ignorant of whatever happens to other\ncolumns in the same row.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Mon, 5 Apr 2004 14:59:32 -0500", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "On Mon, 5 Apr 2004, Kevin Barnard wrote:\n\n> On 5 Apr 2004 at 12:05, Rosser Schwarz wrote:\n> \n> > Just this morning, however, I created a copy of the target table (all\n> > 4.7M rows), with absolutely no foreign keys referring to it, and ran\n> > the update against the copy. That update took 2300 seconds. The\n> > join columns were indexed in both cases.\n> \n> Have you added indexes for the custid column for tables account.acct accunt.orgacct \n> and note?\n> \n> I haven't followed the entire thread but it you have cascading FK on those tables \n> without an index on the column that could cause your delay.\n\nalso make sure the fk/pk types match, or the index likely won't get used \nanyway.\n\n", "msg_date": "Mon, 5 Apr 2004 16:48:49 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "On Mon, 5 Apr 2004, Rosser Schwarz wrote:\n\n> while you weren't looking, Kevin Barnard wrote:\n>\n> > Have you added indexes for the custid column for tables\n> > account.acct accunt.orgacct and note?\n>\n> They were indexed in the original case, yes. There was no\n> need to index them in today's test case, as that was done\n> purely in attempt to rule in or out foreign key validation\n> as the cause of the performance hit. No foreign keys that\n> might be validated, no need to index the foreign key columns.\n>\n> > I haven't followed the entire thread but it you have\n> > cascading FK on those tables without an index on the\n> > column that could cause your delay.\n>\n> The issue is that the foreign keys are being validated at\n> all, when the column being referenced by those foreign keys\n> (account.cust.custid) is never touched.\n>\n> Regardless of whether or not the referencing columns are\n> indexed, validating them at all--in this specific case--is\n> broken. The column they refer to is never touched; they\n> should remain utterly ignorant of whatever happens to other\n> columns in the same row.\n\nIt shouldn't be checking the other table if the values of the key column\nhadn't changed. The ri_KeysEqual check should be causing it to return just\nbefore actually doing the check on the other table (it still does a few\nthings before then but nothing that should be particularly expensive). In\nsome simple tests on my 7.4.2 machine, this appears to work for me on pk\ncascade updates. 
It would be interesting to know if it's actually doing\nany checks for you, you might be able to poke around the triggers\n(backend/utils/adt/ri_triggers.c).\n", "msg_date": "Mon, 5 Apr 2004 17:20:20 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: atrocious update performance" }, { "msg_contents": "Hi,\n\nWe have got a G5 64-bit processor to replace an old G4 32-bit \nprocessor. Given everything\nelse equal, should we see a big improvement on PG's performance?\n\nThe other question I have is that, when I tried different size for \nshared_buffer ( i used 10,000,\n1,000, 528, 256) and Max connections=32, it gives me error when I tried \nto start PG using\npg_ctl start as postgres. It kept saying this is bigger than the system \nShared Memory. So finally\nI started PG using SystemStarter start PostgreSQL and it seems starting \nOK. Any idea?\n\n\nThanks a lot!\n\nQing Zhao\n\n", "msg_date": "Mon, 5 Apr 2004 17:53:49 -0700", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "possible improvement between G4 and G5" }, { "msg_contents": "Qing Zhao <[email protected]> writes:\n> We have got a G5 64-bit processor to replace an old G4 32-bit\n> processor. Given everything else equal, should we see a big\n> improvement on PG's performance?\n\nNope. Database performance typically depends on disk performance first,\nand RAM size second. A 64-bit processor might help by allowing you to\ninstall more RAM, but you didn't say that you had.\n\n> The other question I have is that, when I tried different size for\n> shared_buffer ( i used 10,000, 1,000, 528, 256) and Max\n> connections=32, it gives me error when I tried to start PG using\n> pg_ctl start as postgres. It kept saying this is bigger than the\n> system Shared Memory.\n\nOut-of-the-box, Mac OS X has a very low SHMMAX limit. See the PG admin\ndocs or the mail list archives about how to increase it. You should do\nthis --- most people find that you want to set shared_buffers to 1000 or\n10000 or so for best performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Apr 2004 01:47:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5 " }, { "msg_contents": "\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Qing Zhao\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, April 06, 2004 1:47 AM\nSubject: Re: [PERFORM] possible improvement between G4 and G5\n\n\n> Qing Zhao <[email protected]> writes:\n> > We have got a G5 64-bit processor to replace an old G4 32-bit\n> > processor. Given everything else equal, should we see a big\n> > improvement on PG's performance?\n>\n> Nope. Database performance typically depends on disk performance first,\n> and RAM size second.\n\nI'm surprised by this thought. I tend to hit CPU bottlenecks more often than\nI/O ones. In most applications, db I/O is a combination of buffer misses and\nlogging, which are both reasonably constrained. RAM size seems to me to be\nthe best way to improve performance, and then CPU which is needed to perform\nthe in-memory searching, locking, versioning, and processing, and finally\nI/O (this is not the case in small I/O subsystems - if you have less than a\ndozen drives, you're easily I/O bound). 
I/O is often the thing I tune first,\nbecause I can do it in place without buying hardware.\n\nConceptually, an RDBMS converts slow random I/O into in memory processing\nand sequential logging writes. If successful, it should reduce the I/O\noverhead.\n\n/Aaron\n", "msg_date": "Tue, 6 Apr 2004 11:45:29 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": "On Tue, Apr 06, 2004 at 01:47:22AM -0400, Tom Lane wrote:\n> Qing Zhao <[email protected]> writes:\n> > We have got a G5 64-bit processor to replace an old G4 32-bit\n> > processor. Given everything else equal, should we see a big\n> > improvement on PG's performance?\n> \n> Nope. Database performance typically depends on disk performance first,\n> and RAM size second. A 64-bit processor might help by allowing you to\n> install more RAM, but you didn't say that you had.\n\nMemory bandwidth is a consideration too, so you might see some\nperformance improvements on a G5. We recently debated between Xeons and\nOpterons in a new PGSQL server and a little poking around on the lists\nindicated that the Opterons did perform better, presumably due to the\nincreased memory bandwidth. Incidentally, this is why you need about 2x\nthe CPUs on Sun hardware vs RS6000 hardware for database stuff (and that\ngets expensive if you're paying per CPU!).\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 6 Apr 2004 10:52:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": "Aaron,\n\n> I'm surprised by this thought. I tend to hit CPU bottlenecks more often than\n> I/O ones. In most applications, db I/O is a combination of buffer misses and\n> logging, which are both reasonably constrained. \n\nNot my experience at all. In fact, the only times I've seen modern platforms \nmax out the CPU was when:\na) I had bad queries with bad plans, or\nb) I had reporting queires that did a lot of calculation for display (think \nOLAP).\n\nOtherwise, on the numerous servers I administrate, RAM spikes, and I/O \nbottlenecks, but the CPU stays almost flat.\n\nOf course, most of my apps are large databases (i.e. too big for RAM) with a \nheavy transaction-processing component.\n\nWhat kind of applications are you running?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 6 Apr 2004 11:52:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": "\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Aaron Werman\" <[email protected]>; \"Qing Zhao\" <[email protected]>;\n\"Tom Lane\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, April 06, 2004 2:52 PM\nSubject: Re: [PERFORM] possible improvement between G4 and G5\n\n\n> Aaron,\n>\n> > I'm surprised by this thought. I tend to hit CPU bottlenecks more often\nthan\n> > I/O ones. In most applications, db I/O is a combination of buffer misses\nand\n> > logging, which are both reasonably constrained.\n>\n> Not my experience at all. 
In fact, the only times I've seen modern\nplatforms\n> max out the CPU was when:\n> a) I had bad queries with bad plans, or\n> b) I had reporting queires that did a lot of calculation for display\n(think\n> OLAP).\n>\n> Otherwise, on the numerous servers I administrate, RAM spikes, and I/O\n> bottlenecks, but the CPU stays almost flat.\n>\n> Of course, most of my apps are large databases (i.e. too big for RAM) with\na\n> heavy transaction-processing component.\n>\n> What kind of applications are you running?\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n\n<hot air>\n\nI do consulting, so they're all over the place and tend to be complex. Very\nfew fit in RAM, but still are very buffered. These are almost all backed\nwith very high end I/O subsystems, with dozens of spindles with battery\nbacked up writethrough cache and gigs of buffers, which may be why I worry\nso much about CPU. I have had this issue with multiple servers.\n\nConsider an analysis db with 10G data. Of that, 98% of the access is read\nand only 2% write (that is normal for almost anything that is not order\nentry, even transaction processing with thorough cross validation). Almost\nall the queries access 10%, or 1G of the data. Of the reads, they average ~3\nlevel b-trees, with the first 2 levels certainly cached, and the last ones\noften cached. Virtually all the I/O activity is logical reads against\nbuffer. A system with a 100 transactions which on average access 200 rows\ndoes 98% of 200 rows x 100 transactions x 3 logical I/Os per read = 58,800\nlogical reads, of which actually maybe a hundred are physical reads. It\nalso does 2% of 200 rows x 100 transactions x (1 table logical I/O and say 2\nindex logical writes) per write = 1,200 logical writes to log, of which\nthere are 100 transaction commit synch writes, and in reality less than that\nbecause of queuing against logs (there are also 1,200 logical writes\ndeferred to checkpoint, of which it is likely to only be 40 physical writes\nbecause of page overlaps).\n\nTransaction processing is a spectrum between activity logging, and database\ncentric design. The former, where actions are stored in the database is\ntotally I/O bound with the engine acting as a thin layer of logical to\nphysical mapping. Database centric processing makes the engine a functional\nserver of discrete actions - and is a big CPU hog.\n\nWhat my CPU tends to be doing is a combination of general processing,\ncomplex SQL processing: nested loops and sorting and hashing and triggers\nand SPs.\n\nI'm curious about you having flat CPU, which is not my experience. Are your\napps mature and stable?\n\n</hot air>\n\n/Aaron\n", "msg_date": "Tue, 6 Apr 2004 16:22:46 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": "Aaron,\n\n> I do consulting, so they're all over the place and tend to be complex. Very\n> few fit in RAM, but still are very buffered. These are almost all backed\n> with very high end I/O subsystems, with dozens of spindles with battery\n> backed up writethrough cache and gigs of buffers, which may be why I worry\n> so much about CPU. I have had this issue with multiple servers.\n\nAha, I think this is the difference. I never seem to be able to get my \nclients to fork out for adequate disk support. 
They are always running off \nsingle or double SCSI RAID in the host server; not the sort of setup you \nhave.\n\n> What my CPU tends to be doing is a combination of general processing,\n> complex SQL processing: nested loops and sorting and hashing and triggers\n> and SPs.\n\nI haven't noticed SPs to be particularly CPU-hoggish, more RAM.\n\n> I'm curious about you having flat CPU, which is not my experience. Are your\n> apps mature and stable?\n\nWell, \"flat\" was a bit of an exaggeration ... there are spikes ... but average \nCPU load is < 30%. I think the difference is that your clients listen to \nyou about disk access. Mine are all too liable to purchase a quad-Xeon \nmachine but with an Adaptec RAID-5 card with 4 drives, and *then* call me and \nask for advice.\n\nAs a result, most intensive operations don't tend to swamp the CPU because \nthey are waiting for disk. \n\nI have noticed the limitiations on RAM for 64 vs. 32, as I find it easier to \nconvince a client to get 8GB RAM than four-channel RAID with 12 drives, \nmostly because the former is cheaper. Linux 2.4 + Bigmem just doesn't cut \nit for making effective use of > 3GB of RAM.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 6 Apr 2004 14:41:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Aaron,\n>> I do consulting, so they're all over the place and tend to be complex. Very\n>> few fit in RAM, but still are very buffered. These are almost all backed\n>> with very high end I/O subsystems, with dozens of spindles with battery\n>> backed up writethrough cache and gigs of buffers, which may be why I worry\n>> so much about CPU. I have had this issue with multiple servers.\n\nJB> Aha, I think this is the difference. I never seem to be able to\nJB> get my clients to fork out for adequate disk support. They are\nJB> always running off single or double SCSI RAID in the host server;\nJB> not the sort of setup you have.\n\nEven when I upgraded my system to a 14-spindle RAID5 with 128M cache\nand 4GB RAM on a dual Xeon system, I still wind up being I/O bound\nquite often.\n\nI think it depends on what your \"working set\" turns out to be. My\nworkload really spans a lot more of the DB than I can end up caching.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 19 Apr 2004 13:39:00 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible improvement between G4 and G5" } ]
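A footnote to the Mac OS X shared-memory point earlier in this thread: the kernel limits have to be raised before shared_buffers can be set to a useful size. The sysctl names below are the OS X ones, but the values are examples only, and where they get set at boot (/etc/rc versus /etc/sysctl.conf) depends on the OS X release, so check the admin docs for the exact recipe:

    # Raise the SysV shared memory limits (example values only).
    sysctl -w kern.sysv.shmmax=134217728    # max segment size in bytes (128 MB)
    sysctl -w kern.sysv.shmall=32768        # total shared memory, in 4 kB pages

    # Then in postgresql.conf something like:
    #   shared_buffers = 10000        # 10000 x 8 kB pages, roughly 80 MB
    #   max_connections = 32

With the stock limits, anything much above the default shared_buffers fails at startup with the "bigger than the system Shared Memory" error quoted above.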
[ { "msg_contents": "This apeared on the Freebsd-perfomace list and though people here could help \nas well.\n\n\n\n---------- Forwarded Message ----------\n\nSubject: Configuring disk cache size on postgress\nDate: March 16, 2004 10:44 am\nFrom: Dror Matalon <[email protected]>\nTo: [email protected]\n\nHi Folks,\n\nWhen configuring postgres, one of the variables to configure is\neffective_cache_size:\n\tSets the optimizer's assumption about the effective size of the disk\n\tcache (that is, the portion of the kernel's disk cache that will be\n\tused for PostgreSQL data files). This is measured in disk pages, which\n\tare normally 8 kB each.\n\t(http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html)\n\nThe conventional wisdom on the postgres list has been that for freebsd\nyou calculate this by doing `sysctl -n vfs.hibufspace` / 8192).\n\nNow I'm running 4.9 with 2 Gig of ram and sysctl -n vfs.hibufspace\nindicates usage of 200MB.\n\nQuestions:\n1. How much RAM is freebsd using for *disk* caching? Is it part of the\ngeneral VM or is it limited to the above 200MB? I read Matt Dillon's\nhttp://www.daemonnews.org/200001/freebsd_vm.html, but most of the\ndiscussion there seems to be focused on caching programs and program\ndata.\n\n2. Can I tell, and if so how, how much memory the OS is using for disk\ncaching?\n\n3. What are the bufspace variables for?\n\nThis subject has been touched on before in\nhttp://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2003-09/0045.html\nwhich point to a patch to increase the bufspace.\n\nRegards,\n\nDror\n\n\n--\nDror Matalon\nZapatec Inc\n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.fastbuzz.com\nhttp://www.zapatec.com\n_______________________________________________\[email protected] mailing list\nhttp://lists.freebsd.org/mailman/listinfo/freebsd-performance\nTo unsubscribe, send any mail to\n \"[email protected]\"\n\n-------------------------------------------------------\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n\n", "msg_date": "Tue, 16 Mar 2004 10:54:35 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Configuring disk cache size on postgress" } ]
[ { "msg_contents": "\nI sent this message to the list and although it shows up in the archives,\nI did not receive a copy of it through the list, so I'm resending as I\nsuspect others did not see it either.\n\n---------- Forwarded message ----------\nDate: Sat, 13 Mar 2004 22:48:01 -0500 (EST)\nFrom: Kris Jurka <[email protected]>\nTo: Tom Lane <[email protected]>\nCc: Eric Brown <[email protected]>, [email protected]\nSubject: Re: [PERFORM] severe performance issue with planner \n\nOn Thu, 11 Mar 2004, Tom Lane wrote:\n\n> \"Eric Brown\" <[email protected]> writes:\n> > [ planning a 9-table query takes too long ]\n> \n> See http://www.postgresql.org/docs/7.4/static/explicit-joins.html\n> for some useful tips.\n> \n\nIs this the best answer we've got? For me with an empty table this query \ntakes 4 seconds to plan, is that the expected planning time? I know I've \ngot nine table queries that don't take that long.\n\nSetting geqo_threshold less than 9, it takes 1 second to plan. Does this \nindicate that geqo_threshold is set too high, or is it a tradeoff between \nplanning time and plan quality? If the planning time is so high because \nthe are a large number of possible join orders, should geqo_threhold be \nbased on the number of possible plans somehow instead of the number of \ntables involved?\n\nKris Jurka\n\n", "msg_date": "Wed, 17 Mar 2004 02:33:44 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: severe performance issue with planner (fwd)" }, { "msg_contents": "Kris Jurka <[email protected]> writes:\n> On Thu, 11 Mar 2004, Tom Lane wrote:\n>> \"Eric Brown\" <[email protected]> writes:\n>>> [ planning a 9-table query takes too long ]\n>> \n>> See http://www.postgresql.org/docs/7.4/static/explicit-joins.html\n>> for some useful tips.\n\n> Is this the best answer we've got? For me with an empty table this query \n> takes 4 seconds to plan, is that the expected planning time? I know I've \n> got nine table queries that don't take that long.\n\nThe problem with this example is that it's a nine-way self-join.\nOrdinarily the planner can eliminate many possible join paths at low\nlevels, because they are more expensive than other available options.\nBut in this situation all the available options have *exactly the same\ncost estimate* because they are all founded on exactly the same statistics.\nThe planner fails to prune any of them and ends up making a random\nchoice after examining way too many alternatives.\n\nMaybe we should think about instituting a hard upper limit on the number\nof alternatives considered. But I'm not sure what the consequences of\nthat would be. In the meantime, the answer for the OP is to arbitrarily\nlimit the number of join orders considered, as described in the\nabove-mentioned web page. With the given query constraints there's\nreally only one join order worth thinking about ...\n\n> Setting geqo_threshold less than 9, it takes 1 second to plan. Does this \n> indicate that geqo_threshold is set too high, or is it a tradeoff between \n> planning time and plan quality?\n\nSelecting the GEQO planner doesn't really matter here, because it has\nno better clue about how to choose among a lot of alternatives with\nidentical cost estimates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 00:03:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner (fwd) " } ]
[ { "msg_contents": "http://www.databasejournal.com/features/postgresql/article.php/3323561\n\n Shridhar\n", "msg_date": "Wed, 17 Mar 2004 13:16:37 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "A good article about application tuning" } ]
[ { "msg_contents": "Hi all,\n\nwe have a question about the pagesize in PostgreSQL:\n\nUsing different pagesizes: 4K, 8K, 16K, 32K, when we store different \nrecord sizes\nsuch as in the following example:\n\nCREATE TABLE TEST_1 (\nF1 VARCHAR(10),\nF2 VARCHAR(5) );\n\nCREATE TABLE TEST_2 (\nF1 VARCHAR(10),\nF2 VARCHAR(10) );\n\nwe're consistently having the following storage behavior:\n\n60 records / 4k_page\n120 records / 8k_page\n240 records / 16k_page\n480 records / 32k_page.\n\nSo it seems that it doesn't matter whether the record size is\n15 bytes or 20 bytes, there's maximum number of records per page\nas shown above.\n\nAny clues if there's any parameter or bug causing that?\n\nGan (for Amgad)\n-- \n+--------------------------------------------------------+\n| Seum-Lim GAN email : [email protected] |\n| Lucent Technologies |\n| 2000 N. Naperville Road, 6B-403F tel : (630)-713-6665 |\n| Naperville, IL 60566, USA. fax : (630)-713-7272 |\n| web : http://inuweb.ih.lucent.com/~slgan |\n+--------------------------------------------------------+\n", "msg_date": "Wed, 17 Mar 2004 17:52:13 -0600", "msg_from": "Seum-Lim Gan <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Disk Usage and Page Size" }, { "msg_contents": "On Thu, 2004-03-18 at 10:52, Seum-Lim Gan wrote:\n> Hi all,\n> \n> we have a question about the pagesize in PostgreSQL:\n> \n> Using different pagesizes: 4K, 8K, 16K, 32K, when we store different \n> record sizes\n> such as in the following example:\n> \n> CREATE TABLE TEST_1 (\n> F1 VARCHAR(10),\n> F2 VARCHAR(5) );\n> \n> CREATE TABLE TEST_2 (\n> F1 VARCHAR(10),\n> F2 VARCHAR(10) );\n> \n> we're consistently having the following storage behavior:\n> \n> 60 records / 4k_page\n> 120 records / 8k_page\n> 240 records / 16k_page\n> 480 records / 32k_page.\n> \n> So it seems that it doesn't matter whether the record size is\n> 15 bytes or 20 bytes, there's maximum number of records per page\n> as shown above.\n> \n> Any clues if there's any parameter or bug causing that?\n> \n> Gan (for Amgad)\n\nWell, you're size counts are completely wrong, for starters.\n\nEach varchar uses 4 bytes + length of the string, so that's 8 more bytes\nper row. Then you may have an OID as well for another 4 bytes. 
I'd also\nnot be surprised if the length of the string is rounded up to the\nnearest word (although I don't know if it is), and I'd be amazed if the\nlength of the record isn't rounded to some boundary too.\n\nThere's a handy page in the documentation that talks about how to know\nhow big rows are, I suggest you start there...\n\n\tStephen", "msg_date": "Thu, 18 Mar 2004 11:07:35 +1100", "msg_from": "Stephen Robert Norris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Disk Usage and Page Size" }, { "msg_contents": "On Wed, 17 Mar 2004, Seum-Lim Gan wrote:\n\n> we have a question about the pagesize in PostgreSQL:\n>\n> Using different pagesizes: 4K, 8K, 16K, 32K, when we store different\n> record sizes\n> such as in the following example:\n>\n> CREATE TABLE TEST_1 (\n> F1 VARCHAR(10),\n> F2 VARCHAR(5) );\n>\n> CREATE TABLE TEST_2 (\n> F1 VARCHAR(10),\n> F2 VARCHAR(10) );\n>\n> we're consistently having the following storage behavior:\n>\n> 60 records / 4k_page\n> 120 records / 8k_page\n> 240 records / 16k_page\n> 480 records / 32k_page.\n>\n> So it seems that it doesn't matter whether the record size is\n> 15 bytes or 20 bytes, there's maximum number of records per page\n> as shown above.\n\nThe rows aren't 15 or 20 bytes, they're something closer to:\n\nrow header (24 bytes?) + f1 length (4 bytes) + actual bytes for f1 +\nf2 length (4 bytes) + actual bytes for f2\n(I'm not sure about additional padding, but there's probably some to word\nboundaries)\n\nAnd since you're using varchar, you won't see an actual row size\ndifference unless you're using different data between the two tables.\n\nIf you're in a one byte encoding and putting in maximum length strings,\nI'd expect something like 52 and 56 bytes for the above two tables.\n", "msg_date": "Wed, 17 Mar 2004 16:18:02 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Disk Usage and Page Size" } ]
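One way to check the records-per-page figures from the original post without counting by hand is to ask the statistics that VACUUM keeps, rather than reasoning from the declared column widths. This assumes the tables have been loaded and are non-empty, and relpages is counted in whatever block size the server was compiled with:

    VACUUM ANALYZE test_1;
    VACUUM ANALYZE test_2;

    SELECT relname,
           relpages,
           reltuples,
           reltuples / relpages AS avg_rows_per_page
      FROM pg_class
     WHERE relname IN ('test_1', 'test_2');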
[ { "msg_contents": "On Thu, 18 Mar 2004, Saleh, Amgad H (Amgad) wrote:\n\n> Stephan / Stephen\n>\n> We know about the overhead and do understand the math you've provided.\n> This is not the question we're asking. We've just provided the table definitions as\n> examples.\n>\n> The real question was, even with the 52 & 56 (assuming right),' I wouldn't get\n> the same number of records per page for all 4k, 8k, 16k, and 32k pages.\n\nOn my system, I don't using your tests, IIRC I got 134 with TEST_1 and\nlike 128 or so on TEST_2 when I used strings of maximum length for the\ncolumns.\n\n>\n> To make it more clear to you here's an example:\n>\n> For an 8k-page: we've got 120 records/page for both tables and other tables such as\n>\n> CREATE TABLE TEST_3 (\n> F1 VARCHAR(10),\n> F2 VARCHAR(12) );\n\nAre you storing the same data in all three tables or different data in all\nthree tables? That's important because there's no difference in length\nbetween varchar(5) and varchar(12) when storing the same 5 character\nstring.\n\n", "msg_date": "Thu, 18 Mar 2004 08:56:03 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Disk Usage and Page Size" } ]
[ { "msg_contents": "On Thu, 18 Mar 2004, Saleh, Amgad H (Amgad) wrote:\n\n>\n> Stephan:\n>\n> In each table we're storing the max. string length.\n>\n> For example:\n>\n> for TEST_1, we're storing 'abcdefghjk' and 'lmnop'\n> for TEST_2, we're storing 'abcdefghjk' and 'lmnopqrstu'\n> for TEST_3, we're storing 'abcdefghjk' and 'lmnopqrstuvw'\n\nHmm, on my machine it seemed like I was getting slightly different row\ncount per page results for the first two cases. The last two aren't going\nto be different due to padding if the machine pads to 4 byte boundaries.\n\n", "msg_date": "Thu, 18 Mar 2004 09:57:50 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Disk Usage and Page Size" } ]
[ { "msg_contents": "\nIn porting an application from v7.2 and v7.3, I noticed that a join on a varchar column and a text column was ignoring indices that were helpful in v7.2. When I explicitly cast the text to a varchar (or set ENABLE_SEQSCAN TO false) the index is scanned and it works as efficiently as in v7.2. \n\nObviously there were many casting improvements made in 7.3, but our application doesn't exactly see it that way. Is explicit casting the only solution (other than schema modification)? I haven't found anything in the documentation on this subject.\n\nThanks,\nMike\n", "msg_date": "Thu, 18 Mar 2004 13:52:37 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": true, "msg_subject": "string casting for index usage" }, { "msg_contents": "Michael Adler <[email protected]> writes:\n> In porting an application from v7.2 and v7.3, I noticed that a join on a varchar column and a text column was ignoring indices that were helpful in v7.2. When I explicitly cast the text to a varchar (or set ENABLE_SEQSCAN TO false) the index is scanned and it works as efficiently as in v7.2. \n\nMaybe you should be moving to 7.4, instead.\n\nA desultory test didn't show any difference between 7.2.4 and 7.3.6\nin this respect, however. Perhaps you forgot to ANALYZE yet in the\nnew database?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Mar 2004 15:39:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: string casting for index usage " }, { "msg_contents": "On Thu, Mar 18, 2004 at 03:39:12PM -0500, Tom Lane wrote:\n> Michael Adler <[email protected]> writes:\n> > In porting an application from v7.2 and v7.3, I noticed that a join on a varchar column and a text column was ignoring indices that were helpful in v7.2. When I explicitly cast the text to a varchar (or set ENABLE_SEQSCAN TO false) the index is scanned and it works as efficiently as in v7.2. \n> \n> Maybe you should be moving to 7.4, instead.\n\nThat's a fair suggestion, but it's not practical for our 75 sites, most without decent network access. If this is in fact addressed in newer releases, then my point is mostly inconsequential.\n\nWe use Debian stable (7.2.1-2woody4) and Debian testing (7.3.4-9). \n\n> A desultory test didn't show any difference between 7.2.4 and 7.3.6\n> in this respect, however. Perhaps you forgot to ANALYZE yet in the\n> new database?\n\nI have a test with sample data and queries to demonstrate what I'm seeing. I hope it is useful. \n\nHaving to do manual casts is not cruel and unusual, but it's not encouraging to see performance go down after an upgrade. If anyone has any clever solutions, let me know. \n\ntables, data, and queries: \nhttp://www.panix.com/~adler/manual-cast-for-index-scan.sql\n\nmy test output:\nhttp://www.panix.com/~adler/manual-cast-for-index-scan_7.3.4-9.out\nhttp://www.panix.com/~adler/manual-cast-for-index-scan_7.2.1-2woody4.out\n\n(the times are not horrific in these specific examples, but the sequential scan makes them unscalable). 
\n\n\nmanual-cast-for-index-scan_7.3.4-9.out:\n\nDROP TABLE t1;\nDROP TABLE\nDROP TABLE t2;\nDROP TABLE\nCREATE TABLE t1 (\n key_col text,\n grp text\n);\nCREATE TABLE\nCOPY t1 FROM stdin;\nCREATE UNIQUE INDEX tempindex1 ON t1 USING btree (key_col);\nCREATE INDEX\nCREATE TABLE t2 (\n item_num character varying(5),\n key_col character varying(14)\n);\nCREATE TABLE\nCOPY t2 FROM stdin;\nCREATE INDEX tempindex2 ON t2 USING btree (key_col);\nCREATE INDEX\nVACUUM ANALYZE;\nVACUUM\nSELECT version();\n PostgreSQL 7.3.4 on i386-pc-linux-gnu, compiled by GCC i386-linux-gcc (GCC) 3.3.2 (Debian)\n\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col = t1.key_col) WHERE grp = '24';\n Nested Loop (cost=0.00..23803.27 rows=194 width=31) (actual time=20.95..1401.46 rows=69 loops=1)\n Join Filter: ((\"inner\".key_col)::text = \"outer\".key_col)\n -> Seq Scan on t1 (cost=0.00..492.94 rows=194 width=18) (actual time=0.32..30.27 rows=69 loops=1)\n Filter: (grp = '24'::text)\n -> Seq Scan on t2 (cost=0.00..66.87 rows=4287 width=13) (actual time=0.01..12.06 rows=4287 loops=69)\n Total runtime: 1401.73 msec\n\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col::text = t1.key_col) WHERE grp = '24';\n Nested Loop (cost=0.00..23803.27 rows=194 width=31) (actual time=20.27..1398.82 rows=69 loops=1)\n Join Filter: ((\"inner\".key_col)::text = \"outer\".key_col)\n -> Seq Scan on t1 (cost=0.00..492.94 rows=194 width=18) (actual time=0.26..25.91 rows=69 loops=1)\n Filter: (grp = '24'::text)\n -> Seq Scan on t2 (cost=0.00..66.87 rows=4287 width=13) (actual time=0.01..12.02 rows=4287 loops=69)\n Total runtime: 1399.08 msec\n\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col = t1.key_col::varchar(24)) WHERE grp = '24';\n Nested Loop (cost=0.00..4819.13 rows=194 width=31) (actual time=0.52..27.46 rows=69 loops=1)\n -> Seq Scan on t1 (cost=0.00..492.94 rows=194 width=18) (actual time=0.27..25.94 rows=69 loops=1)\n Filter: (grp = '24'::text)\n -> Index Scan using tempindex2 on t2 (cost=0.00..22.17 rows=12 width=13) (actual time=0.01..0.01 rows=0 loops=69)\n Index Cond: (t2.key_col = (\"outer\".key_col)::character varying(24))\n Total runtime: 27.70 msec\n\n\n\nmanual-cast-for-index-scan_7.2.1-2woody4.out:\n\nDROP TABLE t1;\nDROP\nDROP TABLE t2;\nDROP\nCREATE TABLE t1 (\n key_col text,\n grp text\n);\nCREATE\nCOPY t1 FROM stdin;\nCREATE UNIQUE INDEX tempindex1 ON t1 USING btree (key_col);\nCREATE\nCREATE TABLE t2 (\n item_num character varying(5),\n key_col character varying(14)\n);\nCREATE\nCOPY t2 FROM stdin;\nCREATE INDEX tempindex2 ON t2 USING btree (key_col);\nCREATE\nVACUUM ANALYZE;\nVACUUM\nSELECT version();\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col = t1.key_col) WHERE grp = '24';\npsql:castedneed.sql:29127: NOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..1405.88 rows=204 width=32) (actual time=0.46..40.60 rows=69 loops=1)\n -> Seq Scan on t1 (cost=0.00..505.94 rows=204 width=18) (actual time=0.35..39.09 rows=69 loops=1)\n -> Index Scan using tempindex2 on t2 (cost=0.00..4.27 rows=11 width=14) (actual time=0.01..0.01 rows=0 loops=69)\nTotal runtime: 40.81 msec\n\nEXPLAIN\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col::text = t1.key_col) WHERE grp = '24';\npsql:castedneed.sql:29128: NOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..1405.88 rows=204 width=32) (actual time=0.40..39.88 rows=69 loops=1)\n 
-> Seq Scan on t1 (cost=0.00..505.94 rows=204 width=18) (actual time=0.35..38.44 rows=69 loops=1)\n -> Index Scan using tempindex2 on t2 (cost=0.00..4.27 rows=11 width=14) (actual time=0.01..0.01 rows=0 loops=69)\nTotal runtime: 40.07 msec\n\nEXPLAIN\nEXPLAIN ANALYZE SELECT item_num, t1.key_col FROM t1 LEFT JOIN t2 ON (t2.key_col = t1.key_col::varchar(24)) WHERE grp = '24';\npsql:castedneed.sql:29129: NOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..1416.66 rows=4383 width=32) (actual time=0.40..41.59 rows=69 loops=1)\n -> Seq Scan on t1 (cost=0.00..505.94 rows=204 width=18) (actual time=0.36..40.05 rows=69 loops=1)\n -> Index Scan using tempindex2 on t2 (cost=0.00..4.30 rows=11 width=14) (actual time=0.01..0.01 rows=0 loops=69)\nTotal runtime: 41.78 msec\n\nEXPLAIN\n\n", "msg_date": "Fri, 19 Mar 2004 17:22:17 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: string casting for index usage" }, { "msg_contents": "Michael Adler <[email protected]> writes:\n> On Thu, Mar 18, 2004 at 03:39:12PM -0500, Tom Lane wrote:\n>> A desultory test didn't show any difference between 7.2.4 and 7.3.6\n>> in this respect, however. Perhaps you forgot to ANALYZE yet in the\n>> new database?\n\n> I have a test with sample data and queries to demonstrate what I'm seeing.\n\nAh. I had been testing the equivalent of this query with an INNER join\ninstead of a LEFT join. Both 7.2 and 7.3 pick a plan with an inner\nindexscan on t1 in that case. The LEFT join prevents use of such a\nplan, and the only way to do it quickly in those releases is to use an\ninner indexscan on t2.\n\n7.2 is really cheating here, because what is happening under the hood is\nthat the parser resolves the query as \"textcol texteq varcharcol::text\",\nthere not being any direct text=varchar operator. (text is chosen as\nthe preferred type over varchar when it would otherwise be a coin flip.)\nBut then the planner would simply assume that it's okay to substitute\nvarchareq for texteq, apparently on the grounds that if the input types\nare binary compatible then the operators must be interchangeable. That\nmade it possible to match the join clause to the varchar-opclass index\non t2. But of course this theory is ridiculous on its face ... it\nhappens to be okay for varchar and text but in general you'd not have\nthe same comparison semantics for two different operators. (As an\nexample, int4 and OID are binary compatible but their index operators\nare definitely not interchangeable, because one is signed comparison and\nthe other unsigned.)\n\n7.3 is an intermediate state in which we'd ripped out the bogus planner\nassumption but not developed fully adequate substitutes.\n\n7.4 is substantially smarter than either, and can generate merge and\nhash joins as well as ye plain olde indexed nestloop for this query.\nIn a quick test, it seemed that all three plan types yielded about the\nsame runtimes for this query with this much data. I didn't have time\nto try scaling up the amount of data to see where things went, but I'd\nexpect the nestloop to be a loser at large scales even with an inner\nindexscan.\n\nAnyway, bottom line is that 7.4 and CVS tip are competitive with 7.2\nagain, only they do it right this time ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 2004 18:28:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: string casting for index usage " } ]
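A practical takeaway from the test script above, for anyone stuck on 7.3.x: besides writing the cast into the join clause as in the third EXPLAIN, the cross-type comparison can be avoided entirely by giving the two join columns the same type. The sketch below is not part of the original test, the table name t2b is invented, and per Tom's follow-up none of this is needed on 7.4, where the planner handles the varchar/text join correctly on its own.

CREATE TABLE t2b (
    item_num character varying(5),
    key_col  text            -- same type as t1.key_col, so the join clause
);                           -- needs no cast at all
CREATE INDEX tempindex2b ON t2b (key_col);
-- ...load the same rows as t2, then:
VACUUM ANALYZE t2b;
EXPLAIN ANALYZE
SELECT item_num, t1.key_col
FROM t1 LEFT JOIN t2b ON (t2b.key_col = t1.key_col)
WHERE grp = '24';            -- the planner can now match tempindex2b to the join
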
[ { "msg_contents": "explain\nSELECT COUNT(u.ukey) FROM u, d WHERE d.ukey = u.ukey AND u.pkey = 260 \nAND (u.status = 3 ) AND NOT u.boolfield ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Aggregate (cost=45707.84..45707.84 rows=1 width=4)\n -> Nested Loop (cost=0.00..45707.16 rows=273 width=4)\n -> Seq Scan on usertable u (cost=0.00..44774.97 rows=272 \nwidth=4)\n Filter: ((pkey = 260) AND (status = 3) AND (NOT boolfield))\n -> Index Scan using d_pkey on d (cost=0.00..3.41 rows=1 width=4)\n Index Cond: (d.ukey = \"outer\".ukey)\n\n\nexplain\nSELECT COUNT(u.ukey) FROM u, d WHERE d.ukey = u.ukey AND u.pkey = 260 \nAND (d.status = 3 ) AND NOT u.boolfield ;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Aggregate (cost=28271.38..28271.38 rows=1 width=4)\n -> Nested Loop (cost=0.00..28271.38 rows=1 width=4)\n -> Seq Scan on d (cost=0.00..28265.47 rows=1 width=4)\n Filter: (status = 3)\n -> Index Scan using u_pkey on u (cost=0.00..5.89 rows=1 width=4)\n Index Cond: ((\"outer\".ukey = u.ukey) AND (u.pkey = 260))\n Filter: (NOT boolfield)\n\n\nexplain\nSELECT COUNT(u.ukey) FROM u, d WHERE d.ukey = u.ukey AND u.pkey = 260 \nAND (u.status = 3 OR d.status = 3 ) AND NOT u.boolfield ;\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Aggregate (cost=128867.45..128867.45 rows=1 width=4)\n -> Hash Join (cost=32301.47..128866.77 rows=272 width=4)\n Hash Cond: (\"outer\".ukey = \"inner\".ukey)\n Join Filter: ((\"inner\".status = 3) OR (\"outer\".status = 3))\n -> Seq Scan on u (cost=0.00..41215.97 rows=407824 width=6)\n Filter: ((pkey = 260) AND (NOT boolfield))\n -> Hash (cost=25682.98..25682.98 rows=1032998 width=6)\n -> Seq Scan on d (cost=0.00..25682.98 rows=1032998 \nwidth=6)\n\n\n... so what do I do? It would be a real pain to rewrite this query to \nrun twice and add the results up, especially since I don't always know \nbeforehand when it will be faster based on different values to the query.\n", "msg_date": "Thu, 18 Mar 2004 16:21:32 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "two seperate queries run faster than queries ORed together" }, { "msg_contents": "On Thursday 18 March 2004 21:21, Joseph Shraibman wrote:\n> explain\n> SELECT COUNT(u.ukey) FROM u, d WHERE d.ukey = u.ukey AND u.pkey = 260\n> AND (u.status = 3 OR d.status = 3 ) AND NOT u.boolfield ;\n>\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>------------ Aggregate (cost=128867.45..128867.45 rows=1 width=4)\n> -> Hash Join (cost=32301.47..128866.77 rows=272 width=4)\n> Hash Cond: (\"outer\".ukey = \"inner\".ukey)\n> Join Filter: ((\"inner\".status = 3) OR (\"outer\".status = 3))\n> -> Seq Scan on u (cost=0.00..41215.97 rows=407824 width=6)\n> Filter: ((pkey = 260) AND (NOT boolfield))\n\nThere's your problem. For some reason it thinks it's getting 407,824 rows back \nfrom that filtered seq-scan. I take it that pkey is a primary-key and is \ndefined as being UNIQUE? 
If you actually did have several hundred thousand \nmatches then a seq-scan might be sensible.\n\nI'd start by analyze-ing the table in question, and if that doesn't have any \neffect look at the column stats and see what spread of values it thinks you \nhave.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 22 Mar 2004 16:55:28 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two seperate queries run faster than queries ORed together" }, { "msg_contents": "Richard Huxton wrote:\n> On Thursday 18 March 2004 21:21, Joseph Shraibman wrote:\n> \n>>explain\n>>SELECT COUNT(u.ukey) FROM u, d WHERE d.ukey = u.ukey AND u.pkey = 260\n>>AND (u.status = 3 OR d.status = 3 ) AND NOT u.boolfield ;\n>>\n>>\n>> QUERY PLAN\n>>---------------------------------------------------------------------------\n>>------------ Aggregate (cost=128867.45..128867.45 rows=1 width=4)\n>> -> Hash Join (cost=32301.47..128866.77 rows=272 width=4)\n>> Hash Cond: (\"outer\".ukey = \"inner\".ukey)\n>> Join Filter: ((\"inner\".status = 3) OR (\"outer\".status = 3))\n>> -> Seq Scan on u (cost=0.00..41215.97 rows=407824 width=6)\n>> Filter: ((pkey = 260) AND (NOT boolfield))\n> \n> \n> There's your problem. For some reason it thinks it's getting 407,824 rows back \n> from that filtered seq-scan. I take it that pkey is a primary-key and is \n> defined as being UNIQUE? If you actually did have several hundred thousand \n> matches then a seq-scan might be sensible.\n> \nNo, pkey is not the primary key in this case. The number of entries in u \nthat have pkey 260 and not boolfield is 344706. The number of those that \nhave status == 3 is 7. To total number of entries in d that have status \n == 3 is 4.\n\n> I'd start by analyze-ing the table in question,\nIs done every night.\n\nThe problem is that it seems the planner doesn't think to do the \ndifferent parts of the OR seperately and then combine the answers.\n", "msg_date": "Mon, 22 Mar 2004 12:55:31 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two seperate queries run faster than queries ORed together" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n> No, pkey is not the primary key in this case. The number of entries in u \n> that have pkey 260 and not boolfield is 344706.\n\n... and every one of those rows *must* be included in the join input,\nregardless of its status value, because it might join to some d row that\nhas status=3. Conversely, every single row of d must be considered in\nthe join because it might join to some u row with status=3. So any way\nyou slice it, this query requires a large and expensive join operation,\nno matter that there are only a few rows with the right status values in\nthe other table.\n\nI'd rewrite the query if I were you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 2004 13:24:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two seperate queries run faster than queries ORed together " }, { "msg_contents": "Tom Lane wrote:\n> Joseph Shraibman <[email protected]> writes:\n> \n>>No, pkey is not the primary key in this case. The number of entries in u \n>>that have pkey 260 and not boolfield is 344706.\n> \n> \n> ... and every one of those rows *must* be included in the join input,\n\n*If* you use one big join in the first place. 
If postgres ran the query \nto first get the values with status == 3 from u, then ran the query to \nget the entries from d, then combined them, the result would be the same \nbut the output faster. Instead it is doing seq scans on both tables and \ndoing an expensive join that returns only a few rows.\n\n", "msg_date": "Mon, 22 Mar 2004 14:00:30 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two seperate queries run faster than queries ORed together" }, { "msg_contents": "\nOn Mon, 22 Mar 2004, Joseph Shraibman wrote:\n\n> Tom Lane wrote:\n> > Joseph Shraibman <[email protected]> writes:\n> >\n> >>No, pkey is not the primary key in this case. The number of entries in u\n> >>that have pkey 260 and not boolfield is 344706.\n> >\n> >\n> > ... and every one of those rows *must* be included in the join input,\n>\n> *If* you use one big join in the first place. If postgres ran the query\n> to first get the values with status == 3 from u, then ran the query to\n> get the entries from d, then combined them, the result would be the same\n> but the output faster. Instead it is doing seq scans on both tables and\n\nWell, you have to be careful on the combination to not give the wrong\nanswers if there's a row with u.status=3 that matches a row d.status=3.\n", "msg_date": "Mon, 22 Mar 2004 11:32:30 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two seperate queries run faster than queries ORed" }, { "msg_contents": "Stephan Szabo wrote:\n> On Mon, 22 Mar 2004, Joseph Shraibman wrote:\n> \n> \n>>Tom Lane wrote:\n>>\n>>>Joseph Shraibman <[email protected]> writes:\n>>>\n>>>\n>>>>No, pkey is not the primary key in this case. The number of entries in u\n>>>>that have pkey 260 and not boolfield is 344706.\n>>>\n>>>\n>>>... and every one of those rows *must* be included in the join input,\n>>\n>>*If* you use one big join in the first place. If postgres ran the query\n>>to first get the values with status == 3 from u, then ran the query to\n>>get the entries from d, then combined them, the result would be the same\n>>but the output faster. Instead it is doing seq scans on both tables and\n> \n> \n> Well, you have to be careful on the combination to not give the wrong\n> answers if there's a row with u.status=3 that matches a row d.status=3.\n\nRight you would have to avoid duplicates. The existing DISTINCT code \nshould be able to handle that.\n", "msg_date": "Mon, 22 Mar 2004 14:40:04 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two seperate queries run faster than queries ORed together" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Well, you have to be careful on the combination to not give the wrong\n> answers if there's a row with u.status=3 that matches a row d.status=3.\n\nWe could in theory handle that using something similar to the method\ncurrently used for \"OR\" indexscans (that is, rather than doing either\nUNION- or UNION-ALL-like processing, we drop tuples from later scans\nthat meet the qual tests of the earlier scans). However I don't see any\nclean way in the current planner to cost out both approaches and pick\nthe cheaper one. It looks to me like we'd have to do over the *entire*\njoin planning process each way, which is ugly as well as unreasonably\nexpensive. 
The problem is that the OR approach only wins when the\ncomponent clauses of the OR can drop down to lower levels of the plan\ntree if they are considered separately. But a plan tree with a\nrestriction at a low level and one without it are two different things,\nand the dynamic-programming approach we use to build up join plans\ndoesn't yield the same solutions. (As indeed it shouldn't, since the\nwhole point of Joseph's example is to get fundamentally different plans\nfor the two parts of the OR.)\n\nWe could possibly approach it heuristically, that is examine the clauses\nand try to guess whether it's better to split them apart or not. But\neven assuming that we punt on that part of the problem, it seems like a\nmess. For instance suppose that there are additional relations in the\nquery that aren't mentioned in the OR clause. The planner may want to\njoin some of those relations in advance of forming the join that the OR\nitself describes. Pushing down different parts of the OR might cause\nthe best join path to change. How could you merge multiple scans if\nsome include extra relations and some don't?\n\nIn short, I see how such a plan could be executed, but I don't see any\neffective approach for generating the plan ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 2004 15:44:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two seperate queries run faster than queries ORed " } ]
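For what it's worth, the hand rewrite Tom suggests usually ends up as a UNION of the two fast single-condition queries. The form below is only a sketch: it assumes ukey is unique in both u and d (so each arm yields a given ukey at most once and plain UNION removes the overlap); if the join can multiply rows, this counts distinct ukeys rather than join rows and is not a drop-in replacement.

SELECT count(*) FROM (
    SELECT u.ukey
    FROM u, d
    WHERE d.ukey = u.ukey
      AND u.pkey = 260
      AND u.status = 3          -- the arm that filters on u
      AND NOT u.boolfield
    UNION
    SELECT u.ukey
    FROM u, d
    WHERE d.ukey = u.ukey
      AND u.pkey = 260
      AND d.status = 3          -- the arm that filters on d
      AND NOT u.boolfield
) AS either_arm;

Each arm can then use the cheap plans shown at the top of the thread, and the UNION step only has a handful of rows left to merge.
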
[ { "msg_contents": "I think i just sent an email out to the qrong person.. so if this ends up in \nthe list 2x i'm very sorry:\n\ni changed the query to:SELECT\nw8.wid,\nw8.variant,\nw8.num_variants,\nsum_text(w8.unicode) as unicodes,\nsum_text(w8.pinyin) as pinyins\nFROM\nwords as w0 JOIN words as w1 ON(w1.wid = w0.wid AND w1.variant = w0.variant \nAND w1.sequence = w0.sequence + 1 AND w1.pinyin LIKE 'fu_')\nJOIN words as w2 ON(w2.wid = w1.wid AND w2.variant = w1.variant AND \nw2.sequence = w1.sequence + 1 AND w2.pinyin LIKE 'ji_')\nJOIN words as w3 ON(w3.wid = w2.wid AND w3.variant = w2.variant AND \nw3.sequence = w2.sequence + 1 AND w3.pinyin LIKE 'guan_')\nJOIN words as w4 ON(w4.wid = w3.wid AND w4.variant = w3.variant AND \nw4.sequence = w3.sequence + 1 AND w4.pinyin LIKE 'kai_')\nJOIN words as w5 ON(w5.wid = w4.wid AND w5.variant = w4.variant AND \nw5.sequence = w4.sequence + 1 AND w5.pinyin LIKE 'fang_')\nJOIN words as w6 ON(w6.wid = w5.wid AND w6.variant = w5.variant AND \nw6.sequence = w5.sequence + 1 AND w6.pinyin LIKE 'xi_')\nJOIN words as w7 ON(w7.wid = w6.wid AND w7.variant = w6.variant AND \nw7.sequence = w6.sequence + 1 AND w7.pinyin LIKE 'tong_')\nJOIN words as w8 ON(w8.wid = w7.wid AND w8.variant = w7.variant)\nWHERE\nw0.wid > 0 AND\nw0.pinyin = 'zheng4' AND\nw0.def_exists = 't' AND\nw0.sequence = 0\nGROUP BY\nw8.wid,\nw8.variant,\nw8.num_variants,\nw8.page_order,\nw0.sequence ,\nw1.sequence ,\nw2.sequence ,\nw3.sequence ,\nw4.sequence ,\nw5.sequence ,\nw6.sequence ,\nw7.sequence\nORDER BY\nw8.page_order;\n\n\nand this cuts the time from 2900ms to about 1200ms. Is there any way to get \nbetter time since the prepared statements for this explicit join query is \nabout 320ms...\n\nthx so far guys\n\n\n\n>\n>Kris Jurka <[email protected]> writes:\n> > On Thu, 11 Mar 2004, Tom Lane wrote:\n> >> \"Eric Brown\" <[email protected]> writes:\n> >>> [ planning a 9-table query takes too long ]\n> >>\n> >> See http://www.postgresql.org/docs/7.4/static/explicit-joins.html\n> >> for some useful tips.\n>\n> > Is this the best answer we've got? For me with an empty table this \n>query\n> > takes 4 seconds to plan, is that the expected planning time? I know \n>I've\n> > got nine table queries that don't take that long.\n>\n>The problem with this example is that it's a nine-way self-join.\n>Ordinarily the planner can eliminate many possible join paths at low\n>levels, because they are more expensive than other available options.\n>But in this situation all the available options have *exactly the same\n>cost estimate* because they are all founded on exactly the same statistics.\n>The planner fails to prune any of them and ends up making a random\n>choice after examining way too many alternatives.\n>\n>Maybe we should think about instituting a hard upper limit on the number\n>of alternatives considered. But I'm not sure what the consequences of\n>that would be. In the meantime, the answer for the OP is to arbitrarily\n>limit the number of join orders considered, as described in the\n>above-mentioned web page. With the given query constraints there's\n>really only one join order worth thinking about ...\n>\n> > Setting geqo_threshold less than 9, it takes 1 second to plan. 
Does \n>this\n> > indicate that geqo_threshold is set too high, or is it a tradeoff \n>between\n> > planning time and plan quality?\n>\n>Selecting the GEQO planner doesn't really matter here, because it has\n>no better clue about how to choose among a lot of alternatives with\n>identical cost estimates.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n\n_________________________________________________________________\nFREE pop-up blocking with the new MSN Toolbar � get it now! \nhttp://clk.atdmt.com/AVE/go/onm00200415ave/direct/01/\n\n", "msg_date": "Thu, 18 Mar 2004 20:23:52 -0500", "msg_from": "\"Eric Brown\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: severe performance issue with planner (fwd)" } ]
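The setting Kris is experimenting with is an ordinary session GUC, so the comparison is easy to reproduce. The snippet below only shows the knob being discussed (the value 8 is simply "less than 9" from the post), not a recommendation; Tom's advice for the original poster is to constrain the join order with explicit JOIN syntax, as in the follow-up messages.

SHOW geqo_threshold;            -- the default is above 9, so the 9-way
                                -- self-join was being planned exhaustively
SET geqo_threshold = 8;         -- hand joins of 8 or more relations to GEQO
-- ...re-run EXPLAIN ANALYZE on the nine-table query here and compare the
-- planning time against the default setting.
RESET geqo_threshold;
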
[ { "msg_contents": "I also tried this (printf1 in irc suggested \"folding\" the joins) :\n\nSELECT\nw8.wid,\nw8.variant,\nw8.num_variants,\nsum_text(w8.unicode) as unicodes,\nsum_text(w8.pinyin) as pinyins\nFROM\n(words as w8 JOIN\n(words as w7 JOIN\n(words as w6 JOIN\n(words as w5 JOIN\n(words as w4 JOIN\n(words as w3 JOIN\n(words as w2 JOIN\n(words as w0 JOIN words as w1\n\nON(w1.wid = w0.wid AND w1.variant = w0.variant AND w1.sequence = w0.sequence \n+ 1 AND w1.pinyin LIKE 'fu_'))\nON(w2.wid = w1.wid AND w2.variant = w1.variant AND w2.sequence = w1.sequence \n+ 1 AND w2.pinyin LIKE 'ji_'))\nON(w3.wid = w2.wid AND w3.variant = w2.variant AND w3.sequence = w2.sequence \n+ 1 AND w3.pinyin LIKE 'guan_'))\nON(w4.wid = w3.wid AND w4.variant = w3.variant AND w4.sequence = w3.sequence \n+ 1 AND w4.pinyin LIKE 'kai_'))\nON(w5.wid = w4.wid AND w5.variant = w4.variant AND w5.sequence = w4.sequence \n+ 1 AND w5.pinyin LIKE 'fang_'))\nON(w6.wid = w5.wid AND w6.variant = w5.variant AND w6.sequence = w5.sequence \n+ 1 AND w6.pinyin LIKE 'xi_'))\nON(w7.wid = w6.wid AND w7.variant = w6.variant AND w7.sequence = w6.sequence \n+ 1 AND w7.pinyin LIKE 'tong_'))\nON(w8.wid = w7.wid AND w8.variant = w7.variant))\n\n\nWHERE\nw0.wid > 0 AND\nw0.pinyin = 'zheng4' AND\nw0.def_exists = 't' AND\nw0.sequence = 0\nGROUP BY\nw8.wid,\nw8.variant,\nw8.num_variants,\nw8.page_order,\nw0.sequence ,\nw1.sequence ,\nw2.sequence ,\nw3.sequence ,\nw4.sequence ,\nw5.sequence ,\nw6.sequence ,\nw7.sequence\nORDER BY\nw8.page_order;\n\n\nthis gets teh time down to 800ms (not too shabby..).. and as a prepared \nstatement, it only takes 15ms!!! i am hopeful there is a way to totally \nbypass most of this overhead.. but i need more help :\\\n\n_________________________________________________________________\nMSN Toolbar provides one-click access to Hotmail from any Web page � FREE \ndownload! http://clk.atdmt.com/AVE/go/onm00200413ave/direct/01/\n\n", "msg_date": "Thu, 18 Mar 2004 20:59:39 -0500", "msg_from": "\"Eric Brown\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: severe performance issue with planner (fwd)" } ]
[ { "msg_contents": "Hello all,\n\nI have a question/observation about vacuum performance. I'm running \nSolaris 9, pg 7.4.1.\nThe process in questions is doing a vacuum:\n\nbash-2.05$ /usr/ucb/ps auxww | grep 4885\nfiasco 4885 19.1 3.7605896592920 ? O 19:29:44 91:38 postgres: \nfiasco fiasco [local] VACUUM\n\nI do a truss on the process and see the output below looping over and \nover. Note the constant opening and closing of the file 42064889.3.\n\nWhy the open/close cycle as opposed to caching the file descriptor \nsomewhere?\n\nIf PG really does need to loop like this, it should be much faster to \nset the cwd and then open without the path in the file name. You're \nforcing the kernel to do a lot of work walking the path, checking for \nnfs mounts, symlinks, etc.\n\nThanks!\n\n-- Alan\n\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x18F6E000, SEEK_SET) = 0x18F6E000\nwrite(47, \"\\0\\0\\0 zA9A3D9E8\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nread(29, \"\\0\\0\\0 }ED WF1B0\\0\\0\\0 $\".., 8192) = 8192\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x18F78000, SEEK_SET) = 0x18F78000\nwrite(47, \"\\0\\0\\0 zA9AC 090\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nllseek(43, 0x26202000, SEEK_SET) = 0x26202000\nread(43, \"\\0\\0\\084 EC9FC P\\0\\0\\0 )\".., 8192) = 8192\nsemop(52, 0xFFBFC5E0, 1) = 0\nsemop(52, 0xFFBFC640, 1) = 0\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x18F62000, SEEK_SET) = 0x18F62000\nwrite(47, \"\\0\\0\\0 zA9C2\\bB8\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nread(29, \"\\0\\0\\0 }ED X1210\\0\\0\\0 $\".., 8192) = 8192\nsemop(52, 0xFFBFC5E0, 1) = 0\nsemop(52, 0xFFBFC640, 1) = 0\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x18018000, SEEK_SET) = 0x18018000\nwrite(47, \"\\0\\0\\0 zA997ADB0\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nllseek(43, 0x26200000, SEEK_SET) = 0x26200000\nread(43, \"\\0\\0\\084 EC4F5E8\\0\\0\\0 )\".., 8192) = 8192\nsemop(52, 0xFFBFC5E0, 1) = 0\nsemop(52, 0xFFBFC640, 1) = 0\nllseek(13, 13918208, SEEK_SET) = 13918208\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x18F52000, SEEK_SET) = 0x18F52000\nwrite(47, \"\\0\\0\\0 zABE7 V10\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nsemop(46, 0xFFBFC5D0, 1) = 0\nread(29, \"\\0\\0\\0 }ED X 2 p\\0\\0\\0 $\".., 8192) = 8192\nsemop(52, 0xFFBFC5E0, 1) = 0\nsemop(52, 0xFFBFC640, 1) = 0\nllseek(43, 0x270DA000, SEEK_SET) = 0x270DA000\nwrite(43, \"\\0\\0\\087A2E8 #B8\\0\\0\\0 )\".., 8192) = 8192\nllseek(43, 0x261FE000, SEEK_SET) = 0x261FE000\nread(43, \"\\0\\0\\084 EC498\\0\\0\\0\\0 )\".., 8192) = 8192\npoll(0xFFBFC100, 0, 10) = 0\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\nllseek(47, 0x1804A000, SEEK_SET) = 0x1804A000\nwrite(47, \"\\0\\0\\0 zAA0F8DE0\\0\\0\\0 \"\".., 8192) = 8192\nclose(47) = 0\nread(29, \"\\0\\0\\0 }ED X RD0\\0\\0\\0 $\".., 8192) = 8192\nsemop(52, 0xFFBFC5E0, 1) = 0\nsemop(52, 0xFFBFC640, 1) = 0\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 
)\\0\\0\\087\".., 8192) = 8192\nwrite(13, \"D0 Z\\001\\0\\0\\0 )\\0\\0\\087\".., 8192) = 8192\nopen64(\"/export/nst1/fi/pg/data1/base/91488/42064889.3\", O_RDWR) = 47\n\n", "msg_date": "Thu, 18 Mar 2004 23:22:14 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum performance" }, { "msg_contents": "Alan Stange <[email protected]> writes:\n> I do a truss on the process and see the output below looping over and \n> over. Note the constant opening and closing of the file 42064889.3.\n> Why the open/close cycle as opposed to caching the file descriptor \n> somewhere?\n\nThis is probably a \"blind write\". We've gotten rid of those for 7.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Mar 2004 01:18:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum performance " } ]
[ { "msg_contents": "Hi All,\n\nWe are evaluating the options for having multiple databases vs. schemas on a\nsingle database cluster for a custom grown app that we developed. Each app\ninstalls same set of tables for each service. And the service could easily\nbe in thousands. so Is it better to have 1000 databases vs 1000 schemas in a\ndatabase cluster. What are the performance overhead of having multiple\ndatabases vs. schemas (if any). I'm leaning towards having schemas rather\nthan databases but i would like to get others opinion on this. Appreciate\nyour reply.\n\nThanks,\nStalin\n\n\n\n", "msg_date": "Mon, 22 Mar 2004 13:00:21 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Databases Vs. Schemas" } ]
[ { "msg_contents": "--sorry to repost, just subscribed to the list. hopefully it gets to the\nlist this time --\n\nHi All,\n\nWe are evaluating the options for having multiple databases vs. schemas on a\nsingle database cluster for a custom grown app that we developed. Each app\ninstalls same set of tables for each service. And the service could easily\nbe in thousands. so Is it better to have 1000 databases vs 1000 schemas in a\ndatabase cluster. What are the performance overhead of having multiple\ndatabases vs. schemas (if any). I'm leaning towards having schemas rather\nthan databases but i would like to get others opinion on this. Appreciate\nyour reply.\n\nThanks,\nStalin\n", "msg_date": "Mon, 22 Mar 2004 13:30:24 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Databases Vs. Schemas" }, { "msg_contents": "\"Subbiah, Stalin\" <[email protected]> writes:\n> Is it better to have 1000 databases vs 1000 schemas in a\n> database cluster.\n\nYou almost certainly want to go for schemas, at least from a performance\npoint of view. The overhead of a schema is small (basically one more\nrow in pg_namespace) whereas the overhead of a database is not trivial.\n\nThe main reason you might not want to use schemas is if you want fairly\nairtight separation between different services. Separate databases\nwould prevent services from looking at each others' catalog entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Mar 2004 17:04:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases Vs. Schemas " }, { "msg_contents": "Stalin,\n\n> We are evaluating the options for having multiple databases vs. schemas on a\n> single database cluster for a custom grown app that we developed. Each app\n> installs same set of tables for each service. And the service could easily\n> be in thousands. so Is it better to have 1000 databases vs 1000 schemas in a\n> database cluster. What are the performance overhead of having multiple\n> databases vs. schemas (if any). I'm leaning towards having schemas rather\n> than databases but i would like to get others opinion on this. Appreciate\n> your reply.\n\nNo performance difference AFAIK. The real question is whether you have to \nhave queries joining several \"databases\". If yes, use Schema; if no, use \ndatabases.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 22 Mar 2004 21:19:50 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Databases Vs. Schemas" }, { "msg_contents": "hi\n\nJosh Berkus wrote:\n\n> Stalin,\n> \n> \n>>We are evaluating the options for having multiple databases vs. schemas on a\n>>single database cluster for a custom grown app that we developed. Each app\n>>installs same set of tables for each service. And the service could easily\n>>be in thousands. so Is it better to have 1000 databases vs 1000 schemas in a\n>>database cluster. What are the performance overhead of having multiple\n>>databases vs. schemas (if any). I'm leaning towards having schemas rather\n>>than databases but i would like to get others opinion on this. Appreciate\n>>your reply.\n> \n> \n> No performance difference AFAIK. The real question is whether you have to \n> have queries joining several \"databases\". If yes, use Schema; if no, use \n> databases.\n\ndon't forget the pg_hba.conf :) You need 1000 declaration. 
Was a thread \nbefore, title: performance problem - 10.000 databases\nCheck this:\nhttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=1068039213.28814.116.camel%40franki-laptop.tpi.pl&rnum=10&prev=/groups%3Fq%3D1000%2Bdatabase%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Dcomp.databases.postgresql.*%26selm%3D1068039213.28814.116.camel%2540franki-laptop.tpi.pl%26rnum%3D10\n\nC.\n", "msg_date": "Tue, 23 Mar 2004 12:06:42 +0100", "msg_from": "CoL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases Vs. Schemas" }, { "msg_contents": "We have a similarly sized database and we went with schemas. We did \nsomething different, though, we created one schema that contained all \nof the tables (we used the public schema) and then created the hundreds \nof schemas with views that access only the related rows for a \nparticular schema. Something like this:\n\ncreate table public.file (siteid int, id int, [fields]);\ncreate schema sc1;\ncreate view sc1.file as select * from public.file where siteid = 1;\ncreate schema sc2;\ncreate view sc2.file as select * from public file where siteid = 2;\n\nAnd we also created rules to allow update, delete, and insert on those \nviews so that they looked like tables. The reason we did this is \nbecause we ran into issues with too many open files during pg_dump when \nwe had thousands of tables instead of about 1 hundred tables and \nthousands of views.\n\nWe, however, did have a need to periodically select data from 2 schemas \nat a time, and it was simpler logic than if we needed 2 database \nconnections.\n\nAdam Ruth\n\nOn Mar 22, 2004, at 2:30 PM, Subbiah, Stalin wrote:\n\n> --sorry to repost, just subscribed to the list. hopefully it gets to \n> the\n> list this time --\n>\n> Hi All,\n>\n> We are evaluating the options for having multiple databases vs. \n> schemas on a\n> single database cluster for a custom grown app that we developed. Each \n> app\n> installs same set of tables for each service. And the service could \n> easily\n> be in thousands. so Is it better to have 1000 databases vs 1000 \n> schemas in a\n> database cluster. What are the performance overhead of having multiple\n> databases vs. schemas (if any). I'm leaning towards having schemas \n> rather\n> than databases but i would like to get others opinion on this. \n> Appreciate\n> your reply.\n>\n> Thanks,\n> Stalin\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 23 Mar 2004 06:54:02 -0700", "msg_from": "Adam Ruth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases Vs. Schemas" } ]
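Adam mentions adding insert, update, and delete rules so the per-schema views "looked like tables" but doesn't show them; a guess at what they might look like for the sc1.file view above is sketched below. Only the two columns from his example are covered, so treat it as a pattern rather than a finished definition.

CREATE RULE file_ins AS ON INSERT TO sc1.file DO INSTEAD
    INSERT INTO public.file (siteid, id) VALUES (1, NEW.id);

CREATE RULE file_upd AS ON UPDATE TO sc1.file DO INSTEAD
    UPDATE public.file SET id = NEW.id
    WHERE siteid = 1 AND id = OLD.id;

CREATE RULE file_del AS ON DELETE TO sc1.file DO INSTEAD
    DELETE FROM public.file
    WHERE siteid = 1 AND id = OLD.id;

Each additional real column is just carried through the INSERT and UPDATE rules in the same way.
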
[ { "msg_contents": "As anyone done benchmarking tests with postgres running on solaris and linux\n(redhat) assuming both environment has similar hardware, memory, processing\nspeed etc. By reading few posts here, i can see linux would outperform\nsolaris cause linux being very good at kernel caching than solaris which is\nbeing the key performance booster for postgres. what is the preferred OS\nfor postgres deployment if given an option between linux and solaris. As\nwell as filesystem to be used (xfs, ufs, ext3...). Any pointer to source of\ninformation is appreciated.\n\nThanks,\nStalin\n", "msg_date": "Mon, 22 Mar 2004 16:05:45 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "Stalin,\n\n> As anyone done benchmarking tests with postgres running on solaris and linux\n> (redhat) assuming both environment has similar hardware, memory, processing\n> speed etc. By reading few posts here, i can see linux would outperform\n> solaris cause linux being very good at kernel caching than solaris which is\n> being the key performance booster for postgres. what is the preferred OS\n> for postgres deployment if given an option between linux and solaris. As\n> well as filesystem to be used (xfs, ufs, ext3...). Any pointer to source of\n> information is appreciated.\n\nMost of that is a matter of opinion. Read the cumulative archives of this \nlist.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 22 Mar 2004 21:20:21 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "The hardware platform to deploy onto may well influence your choice :\n\nIntel is usually the most cost effective , which means using Linux makes \nsense in that case (anybody measured Pg performance on Solaris/Intel....?).\n\nIf however, you are going to run a very \"big in some sense\" database, \nthen 64 bit hardware is desirable and you can look at the Sun offerings. \nIn this case you can run either Linux or Solaris (some informal \nbenchmarks suggest that for small numbers of cpus, Linux is probably \nfaster).\n\nIt might be worth considering Apple if you want a 64-bit chip that has a \nclock speed comparable to Intel's - the Xserv is similarly priced to Sun \nV210 (both dual cpu 1U's).\n\nAre you free to choose any hardware?\n\nbest wishes\n\nMark\n\nSubbiah, Stalin wrote:\n\n>(snipped) what is the preferred OS\n>for postgres deployment if given an option between linux and solaris.\n>\n> \n>\n\n", "msg_date": "Tue, 23 Mar 2004 17:39:38 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "Mark,\n\n> It might be worth considering Apple if you want a 64-bit chip that has a\n> clock speed comparable to Intel's - the Xserv is similarly priced to Sun\n> V210 (both dual cpu 1U's).\n\nPersonally I'd stay *far* away from the XServs until Apple learns to build \nsome real server harware. 
The current XServs have internal parts more \nappropriate to a Dell desktop (promise controller, low-speed commodity IDE \ndrives), than a server.\n\nIf Apple has prices these IU desktop machines similar to Sun, then I sense \ndoom ahead for the Apple Server Division.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 23 Mar 2004 08:15:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "On Mon, Mar 22, 2004 at 04:05:45PM -0800, Subbiah, Stalin wrote:\n> being the key performance booster for postgres. what is the preferred OS\n> for postgres deployment if given an option between linux and solaris. As\n\nOne thing this very much depends on is what you're trying to do. \nSuns have a reputation for greater reliability. While my own\nexperience with Sun hardware has been rather shy of sterling, I _can_\nsay that it stands head and shoulders above a lot of the x86 gear you\ncan get.\n\nIf you're planning to use Solaris on x86, don't bother. Solaris is a\nslow, bloated pig compared to Linux, at least when it comes to\nmanaging the largish number of processes that Postgres requires.\n\nIf pure speed is what you're after, I have found that 2-way, 32 bit\nLinux on P-IIIs compares very favourably to 4 way 64 bit Ultra SPARC\nIIs.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Tue, 23 Mar 2004 12:36:32 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "\n\nJosh Berkus wrote:\n\n>Mark,\n>\n> \n>\n>>It might be worth considering Apple if you want a 64-bit chip that has a\n>>clock speed comparable to Intel's - the Xserv is similarly priced to Sun\n>>V210 (both dual cpu 1U's).\n>> \n>>\n>\n>Personally I'd stay *far* away from the XServs until Apple learns to build \n>some real server harware. The current XServs have internal parts more \n>appropriate to a Dell desktop (promise controller, low-speed commodity IDE \n>drives), than a server.\n>\n>If Apple has prices these IU desktop machines similar to Sun, then I sense \n>doom ahead for the Apple Server Division.\n>\n> \n>\n(thinks...) Point taken - the Xserv is pretty \"entry level\"...\n\nHowever, having recently benchmarked a 280R vs a PIII Dell using a \nPromise ide raid controller - and finding the Dell comparable (with \nwrite cache *disabled*), I suspect that the Xserv has a pretty good \nchance of outperforming a V210 (certainly would be interesting to try \nout....)\n\nWhat I think has happened is that over the last few years then \"cheap / \nslow\" ide stuff has gotten pretty fast - even when you make \"write mean \nwrite\"....\n\ncheers\n\nMark\n\n", "msg_date": "Wed, 24 Mar 2004 20:53:31 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Benchmarking postgres on Solaris/Linux" } ]
[ { "msg_contents": "ok, at the advice of jurka, I will post new results:\n\nHere's the query as I have changed it now:\nSELECT\nw8.wid,\nw8.variant,\nw8.num_variants,\nsum_text(w8.unicode) as unicodes,\nsum_text(w8.pinyin) as pinyins\nFROM\n(words as w8 JOIN\n(words as w7 JOIN\n(words as w6 JOIN\n(words as w5 JOIN\n(words as w4 JOIN\n(words as w3 JOIN\n(words as w2 JOIN\n(words as w0 JOIN words as w1\n\nON(w1.wid = w0.wid AND w1.variant = w0.variant AND w1.sequence = w0.sequence \n+ 1 AND w1.pinyin LIKE 'fu_'))\nON(w2.wid = w1.wid AND w2.variant = w1.variant AND w2.sequence = w1.sequence \n+ 1 AND w2.pinyin LIKE 'ji_'))\nON(w3.wid = w2.wid AND w3.variant = w2.variant AND w3.sequence = w2.sequence \n+ 1 AND w3.pinyin LIKE 'guan_'))\nON(w4.wid = w3.wid AND w4.variant = w3.variant AND w4.sequence = w3.sequence \n+ 1 AND w4.pinyin LIKE 'kai_'))\nON(w5.wid = w4.wid AND w5.variant = w4.variant AND w5.sequence = w4.sequence \n+ 1 AND w5.pinyin LIKE 'fang_'))\nON(w6.wid = w5.wid AND w6.variant = w5.variant AND w6.sequence = w5.sequence \n+ 1 AND w6.pinyin LIKE 'xi_'))\nON(w7.wid = w6.wid AND w7.variant = w6.variant AND w7.sequence = w6.sequence \n+ 1 AND w7.pinyin LIKE 'tong_'))\nON(w8.wid = w7.wid AND w8.variant = w7.variant))\n\n\nWHERE\nw0.wid > 0 AND\nw0.pinyin = 'zheng4' AND\nw0.def_exists = 't' AND\nw0.sequence = 0\nGROUP BY\nw8.wid,\nw8.variant,\nw8.num_variants,\nw8.page_order,\nw0.sequence ,\nw1.sequence ,\nw2.sequence ,\nw3.sequence ,\nw4.sequence ,\nw5.sequence ,\nw6.sequence ,\nw7.sequence\nORDER BY\nw8.page_order;\n\n\nAnd here's the output of explain analyze:\n\n\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=54.26..54.26 rows=1 width=43) (actual time=14.916..14.917 rows=1 \nloops=1)\n Sort Key: w8.page_order\n -> HashAggregate (cost=54.24..54.25 rows=1 width=43) (actual \ntime=14.891..14.892 rows=1 loops=1)\n -> Nested Loop (cost=0.00..54.10 rows=4 width=43) (actual \ntime=3.676..14.446 rows=12 loops=1)\n -> Nested Loop (cost=0.00..48.09 rows=1 width=64) (actual \ntime=3.638..14.269 rows=1 loops=1)\n Join Filter: (\"outer\".\"sequence\" = (\"inner\".\"sequence\" \n+ 1))\n -> Nested Loop (cost=0.00..42.06 rows=1 width=56) \n(actual time=3.581..14.181 rows=1 loops=1)\n Join Filter: ((\"inner\".wid = \"outer\".wid) AND \n(\"inner\".variant = \"outer\".variant) AND (\"inner\".\"sequence\" = \n(\"outer\".\"sequence\" + 1)) AND (\"outer\".\"sequence\" = (\"inner\".\"sequence\" + \n1)))\n -> Nested Loop (cost=0.00..36.05 rows=1 \nwidth=48) (actual time=2.152..12.443 rows=1 loops=1)\n Join Filter: ((\"outer\".\"sequence\" = \n(\"inner\".\"sequence\" + 1)) AND (\"inner\".\"sequence\" = (\"outer\".\"sequence\" + \n1)))\n -> Nested Loop (cost=0.00..30.03 rows=1 \nwidth=40) (actual time=2.104..12.368 rows=1 loops=1)\n Join Filter: ((\"outer\".variant = \n\"inner\".variant) AND (\"outer\".wid = \"inner\".wid) AND (\"inner\".\"sequence\" = \n(\"outer\".\"sequence\" + 1)))\n -> Nested Loop (cost=0.00..24.02 \nrows=1 width=32) (actual time=2.040..11.226 rows=1 loops=1)\n Join Filter: \n(\"outer\".\"sequence\" = (\"inner\".\"sequence\" + 1))\n -> Nested Loop \n(cost=0.00..18.00 rows=1 width=24) (actual time=1.979..11.147 rows=1 \nloops=1)\n Join Filter: \n((\"inner\".variant = \"outer\".variant) AND (\"inner\".wid = \"outer\".wid))\n -> Nested Loop \n(cost=0.00..12.00 rows=1 width=16) (actual 
time=0.258..8.765 rows=1 loops=1)\n -> Index Scan \nusing pinyin_index on words w3 (cost=0.00..5.99 rows=1 width=8) (actual \ntime=0.084..2.399 rows=304 loops=1)\n Index Cond: \n(((pinyin)::text >= 'guan'::character varying) AND ((pinyin)::text < \n'guao'::character varying))\n Filter: \n((pinyin)::text ~~ 'guan_'::text)\n -> Index Scan \nusing words2_pkey on words w1 (cost=0.00..6.00 rows=1 width=8) (actual \ntime=0.018..0.018 rows=0 loops=304)\n Index Cond: \n((\"outer\".wid = w1.wid) AND (\"outer\".variant = w1.variant))\n Filter: \n((pinyin)::text ~~ 'fu_'::text)\n -> Index Scan using \npinyin_index on words w7 (cost=0.00..5.99 rows=1 width=8) (actual \ntime=0.025..1.863 rows=338 loops=1)\n Index Cond: \n(((pinyin)::text >= 'tong'::character varying) AND ((pinyin)::text < \n'tonh'::character varying))\n Filter: \n((pinyin)::text ~~ 'tong_'::text)\n -> Index Scan using \nwords2_pkey on words w6 (cost=0.00..6.00 rows=1 width=8) (actual \ntime=0.037..0.052 rows=1 loops=1)\n Index Cond: ((w6.wid = \n\"outer\".wid) AND (w6.variant = \"outer\".variant))\n Filter: ((pinyin)::text \n~~ 'xi_'::text)\n -> Index Scan using pinyin_index on \nwords w4 (cost=0.00..5.99 rows=1 width=8) (actual time=0.028..0.874 \nrows=165 loops=1)\n Index Cond: (((pinyin)::text >= \n'kai'::character varying) AND ((pinyin)::text < 'kaj'::character varying))\n Filter: ((pinyin)::text ~~ \n'kai_'::text)\n -> Index Scan using words2_pkey on words \nw2 (cost=0.00..6.00 rows=1 width=8) (actual time=0.023..0.047 rows=1 \nloops=1)\n Index Cond: ((\"outer\".wid = w2.wid) \nAND (\"outer\".variant = w2.variant))\n Filter: ((pinyin)::text ~~ \n'ji_'::text)\n -> Index Scan using pinyin_index on words w5 \n(cost=0.00..5.99 rows=1 width=8) (actual time=0.025..1.436 rows=259 loops=1)\n Index Cond: (((pinyin)::text >= \n'fang'::character varying) AND ((pinyin)::text < 'fanh'::character varying))\n Filter: ((pinyin)::text ~~ 'fang_'::text)\n -> Index Scan using words2_pkey on words w0 \n(cost=0.00..6.01 rows=1 width=8) (actual time=0.030..0.058 rows=1 loops=1)\n Index Cond: ((\"outer\".wid = w0.wid) AND (w0.wid > \n0) AND (\"outer\".variant = w0.variant))\n Filter: (((pinyin)::text = 'zheng4'::text) AND \n(def_exists = true) AND (\"sequence\" = 0))\n -> Index Scan using words2_pkey on words w8 \n(cost=0.00..6.00 rows=1 width=27) (actual time=0.019..0.103 rows=12 loops=1)\n Index Cond: ((w8.wid = \"outer\".wid) AND (w8.variant = \n\"outer\".variant))\nTotal runtime: 15.987 ms\n(44 rows)\n\nTime: 838.446 ms\n\n\nAs you can see, there still appears to be some issue with planning taking \n~820ms, while the actual query should only take 16ms (and as you can see i \nused the \"folding\" technique which takes some load off of the planner.\n\n\nHere is the old query/analysis as run right now:\nSELECT\n w8.wid,\n w8.variant,\n w8.num_variants,\n sum_text(w8.unicode) as unicodes,\n sum_text(w8.pinyin) as pinyins\nFROM\n words as w0, words as w1,\n words as w2, words as w3,\n words as w4, words as w5,\n words as w6, words as w7,\n words as w8\nWHERE\n w0.wid > 0 AND\n w0.pinyin = 'zheng4' AND\n w0.def_exists = 't' AND\n w0.sequence = 0 AND\n w1.wid = w0.wid AND\n w1.pinyin LIKE 'fu_' AND\n w1.variant = w0.variant AND\n w1.sequence = (w0.sequence + 1) AND\n w2.wid = w1.wid AND\n w2.pinyin LIKE 'ji_' AND\n w2.variant = w1.variant AND\n w2.sequence = (w1.sequence + 1) AND\n w3.wid = w2.wid AND\n w3.pinyin LIKE 'guan_' AND\n w3.variant = w2.variant AND\n w3.sequence = (w2.sequence + 1) AND\n w4.wid = w3.wid AND\n w4.pinyin LIKE 'kai_' AND\n 
w4.variant = w3.variant AND\n w4.sequence = (w3.sequence + 1) AND\n w5.wid = w4.wid AND\n w5.pinyin LIKE 'fang_' AND\n w5.variant = w4.variant AND\n w5.sequence = (w4.sequence + 1) AND\n w6.wid = w5.wid AND\n w6.pinyin LIKE 'xi_' AND\n w6.variant = w5.variant AND\n w6.sequence = (w5.sequence + 1) AND\n w7.wid = w6.wid AND\n w7.pinyin LIKE 'tong_' AND\n w7.variant = w6.variant AND\n w7.sequence = (w6.sequence + 1) AND\n w8.wid = w7.wid AND\n w8.variant = w7.variant\nGROUP BY\n w8.wid,\n w8.variant,\n w8.num_variants,\n w8.page_order ,\n w0.sequence ,\n w1.sequence ,\n w2.sequence ,\n w3.sequence ,\n w4.sequence ,\n w5.sequence ,\n w6.sequence ,\n w7.sequence\nORDER BY\n w8.page_order;\n\nwepy=> EXPLAIN ANALYZE wepy=> explain ANALYZE \n QUERY \nPLAN\nwepy-> wepy-> \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nwepy-> wepy-> Sort (cost=54.15..54.16 rows=1 width=43) (actual \ntime=1043.148..1043.149 rows=1 loops=1)\nwepy-> wepy-> Sort Key: w8.page_order\nwepy-> wepy-> -> HashAggregate (cost=54.14..54.14 rows=1 width=43) \n(actual time=1043.121..1043.122 rows=1 loops=1)\nwepy-> wepy-> -> Nested Loop (cost=0.00..54.10 rows=1 width=43) \n(actual time=9.627..1042.565 rows=12 loops=1)\nwepy-> wepy-> Join Filter: ((\"inner\".\"sequence\" = \n(\"outer\".\"sequence\" + 1)) AND (\"outer\".\"sequence\" = (\"inner\".\"sequence\" + \n1)))\nwepy-> wepy-> -> Nested Loop (cost=0.00..48.08 rows=1 \nwidth=83) (actual time=9.557..1041.784 rows=12 loops=1)\nwepy-> wepy-> Join Filter: ((\"outer\".wid = \"inner\".wid) \nAND (\"outer\".variant = \"inner\".variant) AND (\"outer\".\"sequence\" = \n(\"inner\".\"sequence\" + 1)) AND (\"inner\".\"sequence\" = (\"outer\".\"sequence\" + \n1)))\nwepy-> wepy-> -> Nested Loop (cost=0.00..11.99 rows=1 \nwidth=16) (actual time=3.152..290.176 rows=6 loops=1)\nwepy-> wepy-> Join Filter: ((\"inner\".wid = \n\"outer\".wid) AND (\"inner\".variant = \"outer\".variant) AND (\"inner\".\"sequence\" \n= (\"outer\".\"sequence\" + 1)))\nwepy-> wepy-> -> Index Scan using pinyin_index \non words w4 (cost=0.00..5.99 rows=1 width=8) (actual time=0.084..1.233 \nrows=165 loops=1)\nwepy-> wepy-> Index Cond: (((pinyin)::text \n >= 'kai'::character varying) AND ((pinyin)::text < 'kaj'::character \nvarying))\nwepy-> wepy-> Filter: ((pinyin)::text ~~ \n'kai_'::text)\nwepy-> wepy-> -> Index Scan using pinyin_index \non words w5 (cost=0.00..5.99 rows=1 width=8) (actual time=0.018..1.411 \nrows=259 loops=165)\nwepy-> wepy-> Index Cond: (((pinyin)::text \n >= 'fang'::character varying) AND ((pinyin)::text < 'fanh'::character \nvarying))\nwepy-> wepy-> Filter: ((pinyin)::text ~~ \n'fang_'::text)\nwepy-> wepy-> -> Nested Loop (cost=0.00..36.06 rows=1 \nwidth=67) (actual time=4.500..125.184 rows=12 loops=6)\nwepy-> wepy-> -> Nested Loop (cost=0.00..30.05 \nrows=1 width=40) (actual time=4.446..125.003 rows=1 loops=6)\nwepy-> wepy-> Join Filter: \n(\"inner\".\"sequence\" = (\"outer\".\"sequence\" + 1))\nwepy-> wepy-> -> Nested Loop \n(cost=0.00..24.03 rows=1 width=32) (actual time=4.391..124.920 rows=1 \nloops=6)\nwepy-> wepy-> -> Nested Loop \n(cost=0.00..18.01 rows=1 width=24) (actual time=4.339..124.781 rows=3 \nloops=6)\nwepy-> wepy-> Join Filter: \n((\"inner\".variant = \"outer\".variant) AND (\"inner\".wid = \"outer\".wid) AND \n(\"inner\".\"sequence\" = (\"outer\".\"sequence\" + 1)))\nwepy-> wepy-> -> Nested Loop \n(cost=0.00..12.00 rows=1 
width=16) (actual time=0.154..10.898 rows=18 \nloops=6)\nwepy-> wepy-> -> Index \nScan using pinyin_index on words w3 (cost=0.00..5.99 rows=1 width=8) \n(actual time=0.027..2.358 rows=304 loops=6)\nwepy-> wepy-> Index \nCond: (((pinyin)::text >= 'guan'::character varying) AND ((pinyin)::text < \n'guao'::character varying))\nwepy-> wepy-> \nFilter: ((pinyin)::text ~~ 'guan_'::text)\nwepy-> wepy-> -> Index \nScan using words2_pkey on words w6 (cost=0.00..6.00 rows=1 width=8) (actual \ntime=0.025..0.025 rows=0 loops=1824)\nwepy-> wepy-> Index \nCond: ((w6.wid = \"outer\".wid) AND (w6.variant = \"outer\".variant))\nwepy-> wepy-> \nFilter: ((pinyin)::text ~~ 'xi_'::text)\nwepy-> wepy-> -> Index Scan \nusing pinyin_index on words w7 (cost=0.00..5.99 rows=1 width=8) (actual \ntime=0.017..5.788 rows=338 loops=108)\nwepy-> wepy-> Index Cond: \n(((pinyin)::text >= 'tong'::character varying) AND ((pinyin)::text < \n'tonh'::character varying))\nwepy-> wepy-> Filter: \n((pinyin)::text ~~ 'tong_'::text)\nwepy-> wepy-> -> Index Scan using \nwords2_pkey on words w0 (cost=0.00..6.01 rows=1 width=8) (actual \ntime=0.026..0.035 rows=0 loops=18)\nwepy-> wepy-> Index Cond: \n((\"outer\".wid = w0.wid) AND (w0.wid > 0) AND (\"outer\".variant = w0.variant))\nwepy-> wepy-> Filter: \n(((pinyin)::text = 'zheng4'::text) AND (def_exists = true) AND (\"sequence\" = \n0))\nwepy-> wepy-> -> Index Scan using \nwords2_pkey on words w1 (cost=0.00..6.00 rows=1 width=8) (actual \ntime=0.021..0.047 rows=1 loops=6)\nwepy-> wepy-> Index Cond: \n((\"outer\".wid = w1.wid) AND (\"outer\".variant = w1.variant))\nwepy-> wepy-> Filter: ((pinyin)::text \n~~ 'fu_'::text)\nwepy-> wepy-> -> Index Scan using words2_pkey on \nwords w8 (cost=0.00..6.00 rows=1 width=27) (actual time=0.021..0.085 \nrows=12 loops=6)\nwepy-> wepy-> Index Cond: ((w8.wid = \n\"outer\".wid) AND (w8.variant = \"outer\".variant))\nwepy-> wepy-> -> Index Scan using words2_pkey on words w2 \n(cost=0.00..6.00 rows=1 width=8) (actual time=0.021..0.046 rows=1 loops=12)\nwepy-> wepy-> Index Cond: ((w2.wid = \"outer\".wid) AND \n(w2.variant = \"outer\".variant))\nwepy-> wepy-> Filter: ((pinyin)::text ~~ 'ji_'::text)\nwepy-> wepy-> Total runtime: 1044.239 ms\nwepy-> wepy-> (43 rows)\nwepy-> wepy->\nwepy-> wepy-> Time: 3939.938 ms\n\n\n\nEven the total runtime has improved dramatically with the use of explicit \njoins and folding. However, I still need to cut the query time down much \nmore... My last resort is to dig through the source code and see if I can \nunderstand how the planner works, but I suspect it's thousands of lines of \ncode :\\\n\nBTW, I have since upgraded to 7.4.2 (from 7.4.1), and you can verify these \nplanner times even with an empty table. 
The table spec is below:\n\n Table \"public.words\"\n Column | Type | Modifiers\n--------------+----------------------+-----------\nwid | integer | not null\nsequence | smallint | not null\nvariant | smallint | not null\nchar_count | smallint | not null\nunicode | character varying(5) | not null\npinyin | character varying(8) | not null\nsimpvar | character varying(5) |\nzvar | character varying(5) |\ncompatvar | character varying(5) |\ndef_exists | boolean | not null\nnum_variants | smallint |\npage_order | integer |\npinyins | character varying |\nunicodes | character varying |\nIndexes:\n \"words2_pkey\" primary key, btree (wid, variant, \"sequence\")\n \"page_index\" btree (page_order)\n \"pinyin_index\" btree (pinyin)\n \"unicode_index\" btree (unicode)\n\n_________________________________________________________________\nGet rid of annoying pop-up ads with the new MSN Toolbar � FREE! \nhttp://clk.atdmt.com/AVE/go/onm00200414ave/direct/01/\n\n", "msg_date": "Mon, 22 Mar 2004 20:30:29 -0500", "msg_from": "\"Eric Brown\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: severe performance issue with planner" }, { "msg_contents": "\"Eric Brown\" <[email protected]> writes:\n> Here's the query as I have changed it now:\n\nNow that you've switched to JOIN syntax, you can cut the planning time\nto nil by setting join_collapse_limit to 1. See\nhttp://www.postgresql.org/docs/7.4/static/explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Mar 2004 12:16:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: severe performance issue with planner " } ]
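Tom's one-line fix is worth spelling out, since join_collapse_limit is easy to miss: with the query already written in explicit JOIN syntax, lowering the limit tells the planner to take the join nesting exactly as written instead of searching orderings. The RESET afterwards is just a tidy-up so the session goes back to normal planning.

SET join_collapse_limit = 1;   -- take the written JOIN nesting as the join order
-- ...re-run the folded JOIN query from the first message; only the one
-- ordering spelled out by the parentheses is planned, so the several hundred
-- milliseconds of planning overhead should largely disappear.
RESET join_collapse_limit;
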
[ { "msg_contents": "Dear PostgresQL Experts,\n\nI am trying to get to the bottom of some efficiency problems and hope that\nyou can help. The difficulty seems to be with INTERSECT expressions.\n\nI have a query of the form\n select A from T where C1 intersect select A from T where C2;\nIt runs in about 100 ms.\n\nBut it is equivalent to this query\n select A from T where C1 and C2;\nwhich runs in less than 10 ms.\n\nLooking at the output of \"explain analyse\" on the first query, it seems\nthat PostgresQL always computes the two sub-expressions and then computes\nan explicit intersection on the results. I had hoped that it would notice\nthat both subexpressions are scanning the same input table T and convert\nthe expression to the second form.\n\nIs there a reason why it can't do this transformation?\n\n(Of course, I could just re-write my code to use the second form, but my\napplication is generating these bits of SQL programmatically, and it is not\ntrivial as in some cases the two tables are not the same so an intersection\nreally is needed; if PostgresQL can do it for me, that would be much\nbetter. I don't want to write an SQL parser!)\n\n\nWhile studying the same code I found another case where my INTERSECT\nexpressions don't seem to be optimised as much as I'd like. In this case,\none of the subexpressions being intersected is empty much of the time. But\neven when it is empty, PostgresQL computes the other (expensive)\nsubexpression and does an intersect. Could PostgresQL do something like this:\n\n- guess which subexpression is likely to produce fewest rows\n- compute this subexpression\n- if empty, return now with an empty result\n- compute other subexpression\n- compute intersection\n- return intersection\n\nAlternatively, it could be defined that the left subexpression is always\ncomputed first and the second not computed if it is empty, like the\nbehaviour of logical AND and OR operators in C.\n\nThanks in advance for any suggestions.\n\n--Phil.\n\n", "msg_date": "Tue, 23 Mar 2004 09:12:01 -0500", "msg_from": "\"Phil Endecott\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimisation of INTERSECT expressions" }, { "msg_contents": "On Tue, 23 Mar 2004, Phil Endecott wrote:\n\n> Dear PostgresQL Experts,\n>\n> I am trying to get to the bottom of some efficiency problems and hope that\n> you can help. The difficulty seems to be with INTERSECT expressions.\n>\n> I have a query of the form\n> select A from T where C1 intersect select A from T where C2;\n> It runs in about 100 ms.\n>\n> But it is equivalent to this query\n> select A from T where C1 and C2;\n> which runs in less than 10 ms.\n>\n> Looking at the output of \"explain analyse\" on the first query, it seems\n> that PostgresQL always computes the two sub-expressions and then computes\n> an explicit intersection on the results. I had hoped that it would notice\n> that both subexpressions are scanning the same input table T and convert\n> the expression to the second form.\n>\n> Is there a reason why it can't do this transformation?\n\nProbably because noone's bothered to try to prove under what conditions\nit's the same.\n\nFor example, given a non-unique A, the two queries can give different\nanswers (if say the same two A values match both C1 and C2 in different\nrows how many output rows does each give? 
*), also given a non-stable A\n(for example random) the two queries are not necessarily equivalent.\n", "msg_date": "Tue, 23 Mar 2004 06:50:53 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions" }, { "msg_contents": "\nOn Tue, 23 Mar 2004, Stephan Szabo wrote:\n\n> On Tue, 23 Mar 2004, Phil Endecott wrote:\n>\n> > Dear PostgresQL Experts,\n> >\n> > I am trying to get to the bottom of some efficiency problems and hope that\n> > you can help. The difficulty seems to be with INTERSECT expressions.\n> >\n> > I have a query of the form\n> > select A from T where C1 intersect select A from T where C2;\n> > It runs in about 100 ms.\n> >\n> > But it is equivalent to this query\n> > select A from T where C1 and C2;\n> > which runs in less than 10 ms.\n> >\n> > Looking at the output of \"explain analyse\" on the first query, it seems\n> > that PostgresQL always computes the two sub-expressions and then computes\n> > an explicit intersection on the results. I had hoped that it would notice\n> > that both subexpressions are scanning the same input table T and convert\n> > the expression to the second form.\n> >\n> > Is there a reason why it can't do this transformation?\n>\n> Probably because noone's bothered to try to prove under what conditions\n> it's the same.\n>\n> For example, given a non-unique A, the two queries can give different\n> answers (if say the same two A values match both C1 and C2 in different\n> rows how many output rows does each give? *), also given a non-stable A\n> (for example random) the two queries are not necessarily equivalent.\n\nUgh, the example got trimmed out for the *\n\nGiven a non-unique A, C1 as B>5, c2 as C>5 and the data:\nA | B | C\n1 | 6 | 1\n1 | 1 | 6\n\nThe intersect gives 1 row, the and query gives 0 AFAICS.\n\n", "msg_date": "Tue, 23 Mar 2004 07:14:46 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Given a non-unique A, C1 as B>5, c2 as C>5 and the data:\n> A | B | C\n> 1 | 6 | 1\n> 1 | 1 | 6\n> The intersect gives 1 row, the and query gives 0 AFAICS.\n\nAnother way that the queries are not equivalent is that INTERSECT is\ndefined to remove duplicate output rows (much like DISTINCT) whereas\nthe AND form of course won't do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 10:47:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions " } ]
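Stephan's counter-example is easy to reproduce in a scratch session; the table and values below are made up purely for illustration:

    CREATE TEMP TABLE t (a int, b int, c int);
    INSERT INTO t VALUES (1, 6, 1);
    INSERT INTO t VALUES (1, 1, 6);

    -- Each branch matches a different row, but both rows share a = 1,
    -- so the intersection of the two result sets contains one row.
    SELECT a FROM t WHERE b > 5
    INTERSECT
    SELECT a FROM t WHERE c > 5;

    -- No single row satisfies both conditions, so this returns zero rows.
    SELECT a FROM t WHERE b > 5 AND c > 5;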
[ { "msg_contents": "I asked:\n> select A from T where C1 intersect select A from T where C2;\n> select A from T where C1 and C2;\n> [why isn't the first optimised into the second?]\n\nStephan Szabo answered:\n> Given a non-unique A, C1 as B>5, c2 as C>5 and the data:\n> A | B | C\n> 1 | 6 | 1\n> 1 | 1 | 6\n> The intersect gives 1 row, the and query gives 0 AFAICS.\n\nTom Lane answered: \n> Another way that the queries are not equivalent is that INTERSECT is\n> defined to remove duplicate output rows (much like DISTINCT) whereas\n> the AND form of course won't do that.\n\nThanks! In my case the attribute A is unique - it is the primary key - and\nI hadn't considered the more general case properly.\n\nSo I suppose I'll have to find a more sophisticated way to generate my\nqueries. Imagine a user interface for a search facility with various\nbuttons and text entry fields. At the moment, for each part of the search\nthat the user has enabled I create a string of SQL. I then compose them\ninto a single statement using INTERSECT. Each sub-query always returns the\nsame attribute, but to make things complicated they may come from different\ntables. It now seems that I'll have to merge the queries more thoroughly.\n Does anyone have any suggestions about how to do this? I'd like a nice\ngeneral technique that works for all possible subqueries, as my current\ncomposition with INTERSECT does.\n\n\nAny thoughts on my other question about empty intersections?\n\nThanks again for the feedback.\n\n--Phil.\n", "msg_date": "Tue, 23 Mar 2004 11:21:39 -0500", "msg_from": "\"Phil Endecott\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation of INTERSECT expressions " }, { "msg_contents": "On Tue, Mar 23, 2004 at 11:21:39 -0500,\n Phil Endecott <[email protected]> wrote:\n> Does anyone have any suggestions about how to do this? I'd like a nice\n> general technique that works for all possible subqueries, as my current\n> composition with INTERSECT does.\n\nOne adjustment you might make is using INTERSECT ALL if you know there\ncan't be duplicates. Then time won't be wasted trying to remove duplicates.\n", "msg_date": "Tue, 23 Mar 2004 11:05:00 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions" }, { "msg_contents": "Phil,\n\n> So I suppose I'll have to find a more sophisticated way to generate my\n> queries. Imagine a user interface for a search facility with various\n> buttons and text entry fields. At the moment, for each part of the search\n> that the user has enabled I create a string of SQL. I then compose them\n> into a single statement using INTERSECT. Each sub-query always returns the\n> same attribute, but to make things complicated they may come from different\n> tables. It now seems that I'll have to merge the queries more thoroughly.\n> Does anyone have any suggestions about how to do this? I'd like a nice\n> general technique that works for all possible subqueries, as my current\n> composition with INTERSECT does.\n\nI've done this but it involves a choice between a lot of infrastrucure for \nfully configurable queries, or limiting user choice. 
The former option \nrequires that you construct reference tables holding what search fields are \navailable, what kind of values they hold, and what operators to use while \nquerying, as well as a table storing the joins used for the various tables \nthat can be queried.\n\nBased on that, you can construct dynamically a query on any field or combo of \nfields listed in your reference tables.\n\nIf search options are more constrained, you can simply take the easier path of \nhard-coding the query building blocks into a set-returning function. I do \nthis all the time for Web search interfaces, where the user only has about 9 \nthings to search on.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 23 Mar 2004 09:17:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> On Tue, Mar 23, 2004 at 11:21:39 -0500,\n> Phil Endecott <[email protected]> wrote:\n>> Does anyone have any suggestions about how to do this? I'd like a nice\n>> general technique that works for all possible subqueries, as my current\n>> composition with INTERSECT does.\n\n> One adjustment you might make is using INTERSECT ALL if you know there\n> can't be duplicates. Then time won't be wasted trying to remove duplicates.\n\nActually, I don't think that will help. UNION ALL is much faster than\nUNION, because it doesn't have to match up duplicates, but INTERSECT\nand EXCEPT still have to match rows from the inputs in order to find\nout if they should emit a row at all. IIRC there will not be any\nnoticeable speed difference with or without ALL.\n\nAFAICS, what Phil will want to do is\n\n\tSELECT a FROM table1 WHERE cond11 AND cond12 AND ...\n\tINTERSECT\n\tSELECT a FROM table2 WHERE cond21 AND cond22 AND ...\n\tINTERSECT\n\t...\n\nwhich is more painful to assemble than his current approach, but it\nshouldn't be *that* bad --- you just need to tag each condition with the\ntable it applies to, and bring together matching tags when you build the\nSQL string.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 12:21:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation of INTERSECT expressions " } ]
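Concretely, the SQL the query builder would assemble under Tom's scheme might look something like this; the tables, columns and conditions are hypothetical and only the shape matters:

    -- Conditions that apply to the same table are ANDed inside one branch;
    -- INTERSECT is used only between branches that hit different tables.
    SELECT person_id FROM addresses   WHERE city = 'Cambridge' AND postcode LIKE 'CB%'
    INTERSECT
    SELECT person_id FROM occupations WHERE trade = 'baker';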
[ { "msg_contents": ">And we also created rules to allow update, delete, and insert on those \n>views so that they looked like tables. The reason we did this is \n>because we ran into issues with too many open files during pg_dump when \n>we had thousands of tables instead of about 1 hundred tables and \n>thousands of views.\n\nIs it because you had smaller value set for max. allowable number of open\nfiles descriptor. what was ulimit -a set to ?\n\n>We, however, did have a need to periodically select data from 2 schemas \n>at a time, and it was simpler logic than if we needed 2 database \n>connections.\n\nAdam Ruth\n\nOn Mar 22, 2004, at 2:30 PM, Subbiah, Stalin wrote:\n\n> --sorry to repost, just subscribed to the list. hopefully it gets to \n> the\n> list this time --\n>\n> Hi All,\n>\n> We are evaluating the options for having multiple databases vs. \n> schemas on a\n> single database cluster for a custom grown app that we developed. Each \n> app\n> installs same set of tables for each service. And the service could \n> easily\n> be in thousands. so Is it better to have 1000 databases vs 1000 \n> schemas in a\n> database cluster. What are the performance overhead of having multiple\n> databases vs. schemas (if any). I'm leaning towards having schemas \n> rather\n> than databases but i would like to get others opinion on this. \n> Appreciate\n> your reply.\n>\n> Thanks,\n> Stalin\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Tue, 23 Mar 2004 10:16:52 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Databases Vs. Schemas" }, { "msg_contents": "\nOn Mar 23, 2004, at 11:16 AM, Subbiah, Stalin wrote:\n\n>> And we also created rules to allow update, delete, and insert on those\n>> views so that they looked like tables. The reason we did this is\n>> because we ran into issues with too many open files during pg_dump \n>> when\n>> we had thousands of tables instead of about 1 hundred tables and\n>> thousands of views.\n>\n> Is it because you had smaller value set for max. allowable number of \n> open\n> files descriptor. what was ulimit -a set to ?\n\nIt was actually running on OS X and it was a shared memory issue. We \nwould have had to recompile the Darwin kernel to get a bigger SHMMAX, \nbut this solution seemed better since we would possibly be installing \non servers where we wouldn't have that much leeway. I think that the \nview idea works better for a number of other reasons. For one, I can \ndo a query on the base table and see all of the rows for all of the \nschemas at once, that has proven quite useful.\n\n>\n>> We, however, did have a need to periodically select data from 2 \n>> schemas\n>> at a time, and it was simpler logic than if we needed 2 database\n>> connections.\n>\n> Adam Ruth\n>\n> On Mar 22, 2004, at 2:30 PM, Subbiah, Stalin wrote:\n>\n>> --sorry to repost, just subscribed to the list. hopefully it gets to\n>> the\n>> list this time --\n>>\n>> Hi All,\n>>\n>> We are evaluating the options for having multiple databases vs.\n>> schemas on a\n>> single database cluster for a custom grown app that we developed. Each\n>> app\n>> installs same set of tables for each service. And the service could\n>> easily\n>> be in thousands. 
so Is it better to have 1000 databases vs 1000\n>> schemas in a\n>> database cluster. What are the performance overhead of having multiple\n>> databases vs. schemas (if any). I'm leaning towards having schemas\n>> rather\n>> than databases but i would like to get others opinion on this.\n>> Appreciate\n>> your reply.\n>>\n>> Thanks,\n>> Stalin\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that \n>> your\n>> message can get through to the mailing list cleanly\n>>\n>\n\n", "msg_date": "Tue, 23 Mar 2004 20:31:46 -0700", "msg_from": "Adam Ruth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases Vs. Schemas" } ]
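A rough sketch of the shared-table-plus-per-schema-views arrangement Adam describes; the base table, columns and rule below are invented just to show the pattern, not taken from his schema:

    CREATE SCHEMA service_0001;

    -- Each service sees only its own slice of the shared base table.
    CREATE VIEW service_0001.boxes AS
        SELECT mac, status FROM public.boxes WHERE service_id = 1;

    -- A rewrite rule lets the view accept INSERTs as if it were a table;
    -- UPDATE and DELETE rules would follow the same pattern.
    CREATE RULE boxes_ins AS ON INSERT TO service_0001.boxes
        DO INSTEAD
        INSERT INTO public.boxes (service_id, mac, status)
        VALUES (1, NEW.mac, NEW.status);

Queries against public.boxes still see every service at once, which is the cross-schema reporting benefit mentioned above.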
[ { "msg_contents": "We are looking into Sun V210 (2 x 1 GHz cpu, 2 gig ram, 5.8Os) vs. Dell 1750\n(2 x 2.4 GHz xeon, 2 gig ram, RH3.0). database will mostly be\nwrite intensive and disks will be on raid 10. Wondering if 64bit 1 GHz to\n32bit 2.4 GHz make a big difference here. \n\nThanks!\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Andrew\nSullivan\nSent: Tuesday, March 23, 2004 9:37 AM\nTo: '[email protected]'\nSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n\n\nOn Mon, Mar 22, 2004 at 04:05:45PM -0800, Subbiah, Stalin wrote:\n> being the key performance booster for postgres. what is the preferred OS\n> for postgres deployment if given an option between linux and solaris. As\n\nOne thing this very much depends on is what you're trying to do. \nSuns have a reputation for greater reliability. While my own\nexperience with Sun hardware has been rather shy of sterling, I _can_\nsay that it stands head and shoulders above a lot of the x86 gear you\ncan get.\n\nIf you're planning to use Solaris on x86, don't bother. Solaris is a\nslow, bloated pig compared to Linux, at least when it comes to\nmanaging the largish number of processes that Postgres requires.\n\nIf pure speed is what you're after, I have found that 2-way, 32 bit\nLinux on P-IIIs compares very favourably to 4 way 64 bit Ultra SPARC\nIIs.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the\nmarketplace.\n\t\t--Philip Greenspun\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Tue, 23 Mar 2004 10:40:32 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "If it's going to be write intensive then the RAID controller will be the most important thing. A dual p3/500 with a write-back\ncache will smoke either of the boxes you mention using software RAID on write performance.\n\nAs for the compute intensive side (complex joins & sorts etc), the Dell will most likely beat the Sun by some distance, although\nwhat the Sun lacks in CPU power it may make up a bit in memory bandwidth/latency.\n\nMatt\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Subbiah,\n> Stalin\n> Sent: 23 March 2004 18:41\n> To: 'Andrew Sullivan'; '[email protected]'\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n>\n>\n> We are looking into Sun V210 (2 x 1 GHz cpu, 2 gig ram, 5.8Os) vs. Dell 1750\n> (2 x 2.4 GHz xeon, 2 gig ram, RH3.0). database will mostly be\n> write intensive and disks will be on raid 10. Wondering if 64bit 1 GHz to\n> 32bit 2.4 GHz make a big difference here.\n>\n> Thanks!\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Andrew\n> Sullivan\n> Sent: Tuesday, March 23, 2004 9:37 AM\n> To: '[email protected]'\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n>\n>\n> On Mon, Mar 22, 2004 at 04:05:45PM -0800, Subbiah, Stalin wrote:\n> > being the key performance booster for postgres. what is the preferred OS\n> > for postgres deployment if given an option between linux and solaris. 
As\n>\n> One thing this very much depends on is what you're trying to do.\n> Suns have a reputation for greater reliability. While my own\n> experience with Sun hardware has been rather shy of sterling, I _can_\n> say that it stands head and shoulders above a lot of the x86 gear you\n> can get.\n>\n> If you're planning to use Solaris on x86, don't bother. Solaris is a\n> slow, bloated pig compared to Linux, at least when it comes to\n> managing the largish number of processes that Postgres requires.\n>\n> If pure speed is what you're after, I have found that 2-way, 32 bit\n> Linux on P-IIIs compares very favourably to 4 way 64 bit Ultra SPARC\n> IIs.\n>\n> A\n>\n> --\n> Andrew Sullivan | [email protected]\n> The fact that technology doesn't work is no bar to success in the\n> marketplace.\n> \t\t--Philip Greenspun\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n\n", "msg_date": "Tue, 23 Mar 2004 18:51:43 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "Matt, Stalin,\n\n> As for the compute intensive side (complex joins & sorts etc), the Dell will \nmost likely beat the Sun by some distance, although\n> what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\nlatency.\n\nPersonally, I've been unimpressed by Dell/Xeon; I think the Sun might do \nbetter than you think, comparitively. On all the Dell servers I've used so \nfar, I've not seen performance that comes even close to the hardware specs.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 23 Mar 2004 12:13:29 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "On Tue, 23 Mar 2004, Josh Berkus wrote:\n\n> Matt, Stalin,\n> \n> > As for the compute intensive side (complex joins & sorts etc), the Dell will \n> most likely beat the Sun by some distance, although\n> > what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\n> latency.\n> \n> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do \n> better than you think, comparitively. On all the Dell servers I've used so \n> far, I've not seen performance that comes even close to the hardware specs.\n\nWe use a 2600 at work (dual 2.8GHz) with the LSI/Megaraid based battery \nbacked caching controller, and it flies. Truly flies.\n\nIt's not Dell that's so slow, it's the default adaptec RAID controller or \nIDE drives that are slow. Ours has 533 MHz memory bus, by the way.\n\n", "msg_date": "Tue, 23 Mar 2004 13:52:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do\n> better than you think, comparitively. 
On all the Dell servers I've used\n> so\n> far, I've not seen performance that comes even close to the hardware\n> specs.\n\nIt's true that any difference will be far less than the GHz ratio, and I\ncan't really speak for Dell servers in general, but a pair of 2.4GHz Xeons\nin a Dell workstation gets about 23 SPECint_rate2000, and a pair of 1GHz\nUltraSparc IIIs in a SunFire V210 gets 10. The ratios are the same for\nother non-FP benchmarks.\n\nNow the Suns do have some architectural advantages, and they used to have\nfar superior memory bandwidth than intel boxes, and they often still do\nfor more than 2 cpus, and definitely do for more than four. But my\npersonal experience is that for 4 cpus or less the entry level UNIX\nofferings from Sun/IBM/HP fell behind in raw performance (FP excepted) two\nor three years ago. The posh hardware's an entirely different matter of\ncourse.\n\nOn the other hand, I can think of innumerable non performance related\nreasons to buy a 'real UNIX box' as a low end DB server. CPU performance\nis way down the priority list compared with IO throughput, stability,\nmanageability, support, etc etc.\n\nGiven that the original question was about a very heavily write-oriented\nenvironment, I'd take the Sun every day of the week, assuming that those\ncompile option changes have sorted out the oddly slow PG performance at\nlast.\n\nM\n", "msg_date": "Tue, 23 Mar 2004 20:53:42 -0000 (GMT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "On Tue, Mar 23, 2004 at 08:53:42PM -0000, [email protected] wrote:\n\n> is way down the priority list compared with IO throughput, stability,\n> manageability, support, etc etc.\n\nIndeed, if our Suns actually diabled the broken hardware when they\ndied, fell over, and rebooted themselves, I'd certainly praise them\nto heaven. But I have to say that the really very good reporting of\nfailing memory has saved me some headaches. \n\n> environment, I'd take the Sun every day of the week, assuming that those\n> compile option changes have sorted out the oddly slow PG performance at\n> last.\n\nI seem to have hit a bad batch of Dell hardware recently, which makes\nme second this opinion.\n\nI should say, also, that my initial experience of AIX has been\nextremely good. I can't comment on the fun it might involve in the\nlong haul, of course.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Tue, 23 Mar 2004 17:03:38 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "> Indeed, if our Suns actually diabled the broken hardware when they\n> died, fell over, and rebooted themselves, I'd certainly praise them\n> to heaven. But I have to say that the really very good reporting of\n> failing memory has saved me some headaches.\n\nHa! Yes, it would seem the obvious thing to do - but as you say, at least\nyou get told what borked and may even be able to remove it without\nstopping the machine. Sometimes. Or at least you get a nice lunch from\nyour Sun reseller.\n\n> I should say, also, that my initial experience of AIX has been\n> extremely good. 
I can't comment on the fun it might involve in the\n> long haul, of course.\n\nThe current crop of power4+ boxen is reputed to even be able to recover\nfrom a failed CPU without a restart. Not *always* one imagines, but\nusefully often enough for the banking mob to get sweaty over the feature. \nMore importantly though, IBM seems committed to supporting all this\ngoodness under Linux too (though not BSD I fear - sorry Bruce)\n\nNow if these vendors could somehow eliminate downtime due to human error\nwe'd be talking *serious* reliablity.\n\nM\n", "msg_date": "Tue, 23 Mar 2004 23:35:47 -0000 (GMT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "On Tue, Mar 23, 2004 at 11:35:47PM -0000, [email protected] wrote:\n> More importantly though, IBM seems committed to supporting all this\n> goodness under Linux too (though not BSD I fear - sorry Bruce)\n\nAlthough so far they don't. And let me tell you, AIX's reputation\nfor being strange is well earned. It has some real nice features,\nthough: topas is awfully nice for spotting bottlenecks, and it works\nin a terminal so you don't have to have X and all the rest of that\nstuff installed. We're just in the preliminary stages with this\nsystem, but my experience so far has been positive. On new machines,\nthough, one _hopes_ that hardware failures are relatively infrequent.\n\n> Now if these vendors could somehow eliminate downtime due to human error\n> we'd be talking *serious* reliablity.\n\nYou mean making the OS smart enough to know when clearing the arp\ncache is a bonehead operation, or just making the hardware smart\nenough to realise that the keyswitch really shouldn't be turned\nwhile 40 people are logged in? (Either way, I agree this'd be an\nimprovement. It'd sure make colocation a lot less painful.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Wed, 24 Mar 2004 07:04:40 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "> > Now if these vendors could somehow eliminate downtime due to human error\n> > we'd be talking *serious* reliablity.\n>\n> You mean making the OS smart enough to know when clearing the arp\n> cache is a bonehead operation, or just making the hardware smart\n> enough to realise that the keyswitch really shouldn't be turned\n> while 40 people are logged in? (Either way, I agree this'd be an\n> improvement. It'd sure make colocation a lot less painful.)\n\nWell I was joking really, but those are two very good examples! Yes, machines should require extra confirmation for operations like\nthose. Hell, even a simple 'init 0' would be well served by a prompt that says \"There are currently 400 network sockets open, 50\nremote users logged in, and 25 disk IOs per second. What's more, there's nobody logged in at the console to boot me up again\nafterwards - are you _sure_ you want to shut the machine down?\". 
It's also crazy that there's no prompt after an 'rm -rf' (we could\nhave 'rm -rf --iacceptfullresponsibility' for an unprompted version).\n\nStuff like that would have saved me from a few embarrassments in the past for sure ;-)\n\nIt drives me absolutely nuts every time I see a $staggeringly_expensive clustered server whose sysadmins are scared to do a failover\ntest in case something goes wrong! Or which has worse uptime than my desktop PC because the cluster software's poorly set up or\nadministered. Or which has both machines on the same circuit breaker. I could go on but it's depressing me.\n\nFavourite anecdote: A project manager friend of mine had a new 'lights out' datacenter to set up. The engineers, admins and\noperators swore blind that everything had been tested in every possible way, and that incredible uptime was guaranteed. 'So if I\njust pull this disk out everything will keep working?' he asked, and then pulled the disk out without waiting for an answer...\n\nEver since he told me that story I've done exactly that with every piece of so-called 'redundant' hardware a vendor tries to flog\nme. Ask them to set it up, then just do nasty things to it without asking for permission. Less than half the gear makes it through\nthat filter, and actually you can almost tell from the look on the technical sales rep's face as you reach for the\ndrive/cable/card/whatever whether it will or won't.\n\nM\n\n\n\n\n\n", "msg_date": "Wed, 24 Mar 2004 12:39:48 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": ">>>>> \"SS\" == Stalin Subbiah <Subbiah> writes:\n\nSS> We are looking into Sun V210 (2 x 1 GHz cpu, 2 gig ram, 5.8Os)\nSS> vs. Dell 1750 (2 x 2.4 GHz xeon, 2 gig ram, RH3.0). database will\nSS> mostly be write intensive and disks will be on raid 10. Wondering\nSS> if 64bit 1 GHz to 32bit 2.4 GHz make a big difference here.\n\nSpend all your money speeding up your disk system. If you're mostly\nwriting (like my main app) then that's your bottleneck. I use a dell\n2650 with external RAID 5 on 14 spindles. I didn't need that much\ndisk space, but went for maxing out the number of spindles. RAID 5\nwas faster than RAID10 or RAID50 with this configuration for me.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 24 Mar 2004 12:05:49 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" } ]
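Since the thread is about putting numbers on candidate hardware, contrib/pgbench is one way to get rough write-throughput figures before buying; the scale factor and client count below are only illustrative and "testdb" is a placeholder name:

    pgbench -i -s 10 testdb          # initialise a scale-10 test database
    pgbench -c 10 -t 1000 testdb     # 10 concurrent clients, 1000 transactions each

The default TPC-B-style transaction is mostly small updates and inserts, which is reasonably close to the write-intensive pattern described in the original question.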
[ { "msg_contents": "Hello,\n\nI am using postgres 7.4.2 as a backend for geocode data for a mapping\napplication. My question is why can't I get a consistent use of my indexes\nduring a query, I tend to get a lot of seq scan results.\n\nI use a standard query:\n\nSELECT lat, long, mac, status FROM (\n SELECT text(mac) as mac, lat long, CASE status WHEN 0 THEN 0 WHEN 1 THEN\n1 ELSE -1 END \n as status FROM cable_billing LEFT OUTER JOIN davic USING(mac) WHERE\nboxtype='d'\n)AS FOO WHERE (long>=X1) AND (long<=X2) AND (lat>=Y1) AND (lat<=Y2)\n\nWhere X1,X2,Y1,Y2 are the coordinates for the rectangle of the map viewing\narea.\n\nQUERY PLAN #1 & #2 are from when I get a view from 10 miles out, sometimes\nit uses the index(#1) and most of the time not(#2). I do run into plans\nthat seq scan both sides of the join.\n\nQUERY PLAN #3 is when I view from 5 miles out, and I have much greater\nchance of getting index scans ( about 90% of the time).\n\nI have listed information about the database below.\n\nCable_billing ~500,000 rows updated once per day\nDavic ~500,000 rows, about 100 rows update per minute\n\nAny info or suggestions would be appreciated.\n\nWoody\n\n\ntwc-ral-overview=# \\d cable_billing;\n Table \"public.cable_billing\"\n Column | Type | Modifiers \n-----------------+------------------------+-----------\n cable_billingid | integer | not null\n mac | macaddr | not null\n account | integer | \n number | character varying(10) | \n address | character varying(200) | \n region | character varying(30) | \n division | integer | \n franchise | integer | \n node | character varying(10) | \n lat | numeric | \n long | numeric | \n trunk | character varying(5) | \n ps | character varying(5) | \n fd | character varying(5) | \n le | character varying(5) | \n update | integer | \n boxtype | character(1) | \nIndexes: cable_billing_pkey primary key btree (mac),\n cable_billing_account_index btree (account),\n cable_billing_lat_long_idx btree (lat, long),\n cable_billing_node_index btree (node),\n cable_billing_region_index btree (region)\n\ntwc-ral-overview=# \\d davic\n Table \"public.davic\"\n Column | Type | Modifiers \n---------+-----------------------+-----------\n davicid | integer | not null\n mac | macaddr | not null\n source | character varying(20) | \n status | smallint | \n updtime | integer | \n type | character varying(10) | \n avail1 | integer | \nIndexes: davic_pkey primary key btree (mac)\n\n\n\ntwc-ral-overview=# vacuum analyze;\nVACUUM\ntwc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n(SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\nELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\nWHERE boxtype='d') AS foo WHERE (long>=-78.70723462816063) AND\n(long<=-78.53096764204116) AND (lat>=35.57411187866667) AND\n(lat<=35.66366331376857);\nQUERY PLAN #1\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-----\n Nested Loop Left Join (cost=0.00..23433.18 rows=1871 width=34) (actual\ntime=0.555..5095.434 rows=3224 loops=1)\n -> Index Scan using cable_billing_lat_long_idx on cable_billing\n(cost=0.00..12145.85 rows=1871 width=32) (actual time=0.431..249.931\nrows=3224 loops=1)\n Index Cond: ((lat >= 35.57411187866667) AND (lat <=\n35.66366331376857) AND (long >= -78.70723462816063) AND (long <=\n-78.53096764204116))\n Filter: (boxtype = 'd'::bpchar)\n -> Index Scan using davic_pkey on davic (cost=0.00..6.01 
rows=1\nwidth=8) (actual time=1.476..1.480 rows=1 loops=3224)\n Index Cond: (\"outer\".mac = davic.mac)\n Total runtime: 5100.028 ms\n(7 rows)\n\n\n\ntwc-ral-overview=# vacuum analyze;\nVACUUM\ntwc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n(SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\nELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\nWHERE boxtype='d') AS foo WHERE (long>=-78.87878592206046) AND\n(long<=-78.70220280717479) AND (lat>=35.71703190638861) AND\n(lat<=35.80658335998006);\nQUERY PLAN #2\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------\n Nested Loop Left Join (cost=0.00..76468.90 rows=9223 width=34) (actual\ntime=0.559..17387.427 rows=19997 loops=1)\n -> Seq Scan on cable_billing (cost=0.00..20837.76 rows=9223 width=32)\n(actual time=0.290..7117.799 rows=19997 loops=1)\n Filter: ((boxtype = 'd'::bpchar) AND (long >= -78.87878592206046)\nAND (long <= -78.70220280717479) AND (lat >= 35.71703190638861) AND (lat <=\n35.80658335998006))\n -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.455..0.461 rows=1 loops=19997)\n Index Cond: (\"outer\".mac = davic.mac)\n Total runtime: 17416.501 ms\n(6 rows)\n\n\n\ntwc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n(SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\nELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\nWHERE boxtype='d') AS foo WHERE (long>=-78.83419423836857) AND\n(long<=-78.7467945148866) AND (lat>=35.73964586635293) AND\n(lat<=35.783969313080604);\nQUERY PLAN #3\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-----\n Nested Loop Left Join (cost=0.00..29160.02 rows=2327 width=34) (actual\ntime=0.279..510.773 rows=5935 loops=1)\n -> Index Scan using cable_billing_lat_long_idx on cable_billing\n(cost=0.00..15130.08 rows=2326 width=32) (actual time=0.197..274.115\nrows=5935 loops=1)\n Index Cond: ((lat >= 35.73964586635293) AND (lat <=\n35.783969313080604) AND (long >= -78.83419423836857) AND (long <=\n-78.7467945148866))\n Filter: (boxtype = 'd'::bpchar)\n -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\nwidth=8) (actual time=0.021..0.024 rows=1 loops=5935)\n Index Cond: (\"outer\".mac = davic.mac)\n Total runtime: 516.782 ms\n(7 rows)\n\n----------------------------------------------------------------------------\n-----------------------------------\n\niglass Networks\n211-A S. Salem St.\n(919) 387-3550 x813\nP.O. Box 651\n(919) 387-3570 fax\nApex, NC 27502\nhttp://www.iglass.net\n\n", "msg_date": "Tue, 23 Mar 2004 13:49:07 -0500", "msg_from": "Woody Woodring <[email protected]>", "msg_from_op": true, "msg_subject": "Help with query plan inconsistencies" }, { "msg_contents": "I'm going to ask because someone else surely will:\n\nDo you regularily vacuum/analyze the database?\n\nWoody Woodring wrote:\n> Hello,\n> \n> I am using postgres 7.4.2 as a backend for geocode data for a mapping\n> application. 
My question is why can't I get a consistent use of my indexes\n> during a query, I tend to get a lot of seq scan results.\n> \n> I use a standard query:\n> \n> SELECT lat, long, mac, status FROM (\n> SELECT text(mac) as mac, lat long, CASE status WHEN 0 THEN 0 WHEN 1 THEN\n> 1 ELSE -1 END \n> as status FROM cable_billing LEFT OUTER JOIN davic USING(mac) WHERE\n> boxtype='d'\n> )AS FOO WHERE (long>=X1) AND (long<=X2) AND (lat>=Y1) AND (lat<=Y2)\n> \n> Where X1,X2,Y1,Y2 are the coordinates for the rectangle of the map viewing\n> area.\n> \n> QUERY PLAN #1 & #2 are from when I get a view from 10 miles out, sometimes\n> it uses the index(#1) and most of the time not(#2). I do run into plans\n> that seq scan both sides of the join.\n> \n> QUERY PLAN #3 is when I view from 5 miles out, and I have much greater\n> chance of getting index scans ( about 90% of the time).\n> \n> I have listed information about the database below.\n> \n> Cable_billing ~500,000 rows updated once per day\n> Davic ~500,000 rows, about 100 rows update per minute\n> \n> Any info or suggestions would be appreciated.\n> \n> Woody\n> \n> \n> twc-ral-overview=# \\d cable_billing;\n> Table \"public.cable_billing\"\n> Column | Type | Modifiers \n> -----------------+------------------------+-----------\n> cable_billingid | integer | not null\n> mac | macaddr | not null\n> account | integer | \n> number | character varying(10) | \n> address | character varying(200) | \n> region | character varying(30) | \n> division | integer | \n> franchise | integer | \n> node | character varying(10) | \n> lat | numeric | \n> long | numeric | \n> trunk | character varying(5) | \n> ps | character varying(5) | \n> fd | character varying(5) | \n> le | character varying(5) | \n> update | integer | \n> boxtype | character(1) | \n> Indexes: cable_billing_pkey primary key btree (mac),\n> cable_billing_account_index btree (account),\n> cable_billing_lat_long_idx btree (lat, long),\n> cable_billing_node_index btree (node),\n> cable_billing_region_index btree (region)\n> \n> twc-ral-overview=# \\d davic\n> Table \"public.davic\"\n> Column | Type | Modifiers \n> ---------+-----------------------+-----------\n> davicid | integer | not null\n> mac | macaddr | not null\n> source | character varying(20) | \n> status | smallint | \n> updtime | integer | \n> type | character varying(10) | \n> avail1 | integer | \n> Indexes: davic_pkey primary key btree (mac)\n> \n> \n> \n> twc-ral-overview=# vacuum analyze;\n> VACUUM\n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\n> ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\n> WHERE boxtype='d') AS foo WHERE (long>=-78.70723462816063) AND\n> (long<=-78.53096764204116) AND (lat>=35.57411187866667) AND\n> (lat<=35.66366331376857);\n> QUERY PLAN #1\n> \n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> -----\n> Nested Loop Left Join (cost=0.00..23433.18 rows=1871 width=34) (actual\n> time=0.555..5095.434 rows=3224 loops=1)\n> -> Index Scan using cable_billing_lat_long_idx on cable_billing\n> (cost=0.00..12145.85 rows=1871 width=32) (actual time=0.431..249.931\n> rows=3224 loops=1)\n> Index Cond: ((lat >= 35.57411187866667) AND (lat <=\n> 35.66366331376857) AND (long >= -78.70723462816063) AND (long <=\n> -78.53096764204116))\n> Filter: (boxtype = 'd'::bpchar)\n> -> Index Scan using 
davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=1.476..1.480 rows=1 loops=3224)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 5100.028 ms\n> (7 rows)\n> \n> \n> \n> twc-ral-overview=# vacuum analyze;\n> VACUUM\n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\n> ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\n> WHERE boxtype='d') AS foo WHERE (long>=-78.87878592206046) AND\n> (long<=-78.70220280717479) AND (lat>=35.71703190638861) AND\n> (lat<=35.80658335998006);\n> QUERY PLAN #2\n> \n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> -------------------\n> Nested Loop Left Join (cost=0.00..76468.90 rows=9223 width=34) (actual\n> time=0.559..17387.427 rows=19997 loops=1)\n> -> Seq Scan on cable_billing (cost=0.00..20837.76 rows=9223 width=32)\n> (actual time=0.290..7117.799 rows=19997 loops=1)\n> Filter: ((boxtype = 'd'::bpchar) AND (long >= -78.87878592206046)\n> AND (long <= -78.70220280717479) AND (lat >= 35.71703190638861) AND (lat <=\n> 35.80658335998006))\n> -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=0.455..0.461 rows=1 loops=19997)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 17416.501 ms\n> (6 rows)\n> \n> \n> \n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM\n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 THEN 1\n> ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic USING(mac)\n> WHERE boxtype='d') AS foo WHERE (long>=-78.83419423836857) AND\n> (long<=-78.7467945148866) AND (lat>=35.73964586635293) AND\n> (lat<=35.783969313080604);\n> QUERY PLAN #3\n> \n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> -----\n> Nested Loop Left Join (cost=0.00..29160.02 rows=2327 width=34) (actual\n> time=0.279..510.773 rows=5935 loops=1)\n> -> Index Scan using cable_billing_lat_long_idx on cable_billing\n> (cost=0.00..15130.08 rows=2326 width=32) (actual time=0.197..274.115\n> rows=5935 loops=1)\n> Index Cond: ((lat >= 35.73964586635293) AND (lat <=\n> 35.783969313080604) AND (long >= -78.83419423836857) AND (long <=\n> -78.7467945148866))\n> Filter: (boxtype = 'd'::bpchar)\n> -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=0.021..0.024 rows=1 loops=5935)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 516.782 ms\n> (7 rows)\n> \n> ----------------------------------------------------------------------------\n> -----------------------------------\n> \n> iglass Networks\n> 211-A S. Salem St.\n> (919) 387-3550 x813\n> P.O. Box 651\n> (919) 387-3570 fax\n> Apex, NC 27502\n> http://www.iglass.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n", "msg_date": "Tue, 23 Mar 2004 14:17:09 -0500", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with query plan inconsistencies" }, { "msg_contents": "I currently have it set up to vacuum/analyze every 2 hours. 
However my\nQUERY PLAN #1 & #2 in my example I ran my explain immediately after a\nvacuum/analyze.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joseph\nShraibman\nSent: Tuesday, March 23, 2004 2:17 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Help with query plan inconsistencies\n\n\nI'm going to ask because someone else surely will:\n\nDo you regularily vacuum/analyze the database?\n\nWoody Woodring wrote:\n> Hello,\n> \n> I am using postgres 7.4.2 as a backend for geocode data for a mapping \n> application. My question is why can't I get a consistent use of my \n> indexes during a query, I tend to get a lot of seq scan results.\n> \n> I use a standard query:\n> \n> SELECT lat, long, mac, status FROM (\n> SELECT text(mac) as mac, lat long, CASE status WHEN 0 THEN 0 WHEN 1 \n> THEN 1 ELSE -1 END\n> as status FROM cable_billing LEFT OUTER JOIN davic USING(mac) \n> WHERE boxtype='d' )AS FOO WHERE (long>=X1) AND (long<=X2) AND \n> (lat>=Y1) AND (lat<=Y2)\n> \n> Where X1,X2,Y1,Y2 are the coordinates for the rectangle of the map \n> viewing area.\n> \n> QUERY PLAN #1 & #2 are from when I get a view from 10 miles out, \n> sometimes it uses the index(#1) and most of the time not(#2). I do \n> run into plans that seq scan both sides of the join.\n> \n> QUERY PLAN #3 is when I view from 5 miles out, and I have much greater \n> chance of getting index scans ( about 90% of the time).\n> \n> I have listed information about the database below.\n> \n> Cable_billing ~500,000 rows updated once per day\n> Davic ~500,000 rows, about 100 rows update per minute\n> \n> Any info or suggestions would be appreciated.\n> \n> Woody\n> \n> \n> twc-ral-overview=# \\d cable_billing;\n> Table \"public.cable_billing\"\n> Column | Type | Modifiers \n> -----------------+------------------------+-----------\n> cable_billingid | integer | not null\n> mac | macaddr | not null\n> account | integer | \n> number | character varying(10) | \n> address | character varying(200) | \n> region | character varying(30) | \n> division | integer | \n> franchise | integer | \n> node | character varying(10) | \n> lat | numeric | \n> long | numeric | \n> trunk | character varying(5) | \n> ps | character varying(5) | \n> fd | character varying(5) | \n> le | character varying(5) | \n> update | integer | \n> boxtype | character(1) | \n> Indexes: cable_billing_pkey primary key btree (mac),\n> cable_billing_account_index btree (account),\n> cable_billing_lat_long_idx btree (lat, long),\n> cable_billing_node_index btree (node),\n> cable_billing_region_index btree (region)\n> \n> twc-ral-overview=# \\d davic\n> Table \"public.davic\"\n> Column | Type | Modifiers \n> ---------+-----------------------+-----------\n> davicid | integer | not null\n> mac | macaddr | not null\n> source | character varying(20) | \n> status | smallint | \n> updtime | integer | \n> type | character varying(10) | \n> avail1 | integer | \n> Indexes: davic_pkey primary key btree (mac)\n> \n> \n> \n> twc-ral-overview=# vacuum analyze;\n> VACUUM\n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM \n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 \n> THEN 1 ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic \n> USING(mac) WHERE boxtype='d') AS foo WHERE (long>=-78.70723462816063) \n> AND\n> (long<=-78.53096764204116) AND (lat>=35.57411187866667) AND\n> (lat<=35.66366331376857);\n> QUERY PLAN #1\n> \n> 
----------------------------------------------------------------------\n> ------\n>\n----------------------------------------------------------------------------\n> -----\n> Nested Loop Left Join (cost=0.00..23433.18 rows=1871 width=34) (actual\n> time=0.555..5095.434 rows=3224 loops=1)\n> -> Index Scan using cable_billing_lat_long_idx on cable_billing\n> (cost=0.00..12145.85 rows=1871 width=32) (actual time=0.431..249.931\n> rows=3224 loops=1)\n> Index Cond: ((lat >= 35.57411187866667) AND (lat <=\n> 35.66366331376857) AND (long >= -78.70723462816063) AND (long <=\n> -78.53096764204116))\n> Filter: (boxtype = 'd'::bpchar)\n> -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=1.476..1.480 rows=1 loops=3224)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 5100.028 ms\n> (7 rows)\n> \n> \n> \n> twc-ral-overview=# vacuum analyze;\n> VACUUM\n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM \n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 \n> THEN 1 ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic \n> USING(mac) WHERE boxtype='d') AS foo WHERE (long>=-78.87878592206046) \n> AND\n> (long<=-78.70220280717479) AND (lat>=35.71703190638861) AND\n> (lat<=35.80658335998006);\n> QUERY PLAN #2\n> \n> ----------------------------------------------------------------------\n> ------\n>\n----------------------------------------------------------------------------\n> -------------------\n> Nested Loop Left Join (cost=0.00..76468.90 rows=9223 width=34) (actual\n> time=0.559..17387.427 rows=19997 loops=1)\n> -> Seq Scan on cable_billing (cost=0.00..20837.76 rows=9223 width=32)\n> (actual time=0.290..7117.799 rows=19997 loops=1)\n> Filter: ((boxtype = 'd'::bpchar) AND (long >= -78.87878592206046)\n> AND (long <= -78.70220280717479) AND (lat >= 35.71703190638861) AND (lat\n<=\n> 35.80658335998006))\n> -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=0.455..0.461 rows=1 loops=19997)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 17416.501 ms\n> (6 rows)\n> \n> \n> \n> twc-ral-overview=# explain analyze SELECT lat, long, mac, status FROM \n> (SELECT text(mac) as mac, lat, long, CASE status WHEN 0 THEN 0 WHEN 1 \n> THEN 1 ELSE -1 END as status FROM cable_billing LEFT OUTER JOIN davic \n> USING(mac) WHERE boxtype='d') AS foo WHERE (long>=-78.83419423836857) \n> AND\n> (long<=-78.7467945148866) AND (lat>=35.73964586635293) AND\n> (lat<=35.783969313080604);\n> QUERY PLAN #3\n> \n> ----------------------------------------------------------------------\n> ------\n>\n----------------------------------------------------------------------------\n> -----\n> Nested Loop Left Join (cost=0.00..29160.02 rows=2327 width=34) (actual\n> time=0.279..510.773 rows=5935 loops=1)\n> -> Index Scan using cable_billing_lat_long_idx on cable_billing\n> (cost=0.00..15130.08 rows=2326 width=32) (actual time=0.197..274.115\n> rows=5935 loops=1)\n> Index Cond: ((lat >= 35.73964586635293) AND (lat <=\n> 35.783969313080604) AND (long >= -78.83419423836857) AND (long <=\n> -78.7467945148866))\n> Filter: (boxtype = 'd'::bpchar)\n> -> Index Scan using davic_pkey on davic (cost=0.00..6.01 rows=1\n> width=8) (actual time=0.021..0.024 rows=1 loops=5935)\n> Index Cond: (\"outer\".mac = davic.mac)\n> Total runtime: 516.782 ms\n> (7 rows)\n> \n> ----------------------------------------------------------------------\n> ------\n> -----------------------------------\n> \n> iglass Networks\n> 
211-A S. Salem St.\n> (919) 387-3550 x813\n> P.O. Box 651\n> (919) 387-3570 fax\n> Apex, NC 27502\n> http://www.iglass.net\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 24 Mar 2004 08:59:07 -0500", "msg_from": "\"George Woodring\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with query plan inconsistencies" }, { "msg_contents": "On Tuesday 23 March 2004 18:49, Woody Woodring wrote:\n> Hello,\n>\n> I am using postgres 7.4.2 as a backend for geocode data for a mapping\n> application. My question is why can't I get a consistent use of my indexes\n> during a query, I tend to get a lot of seq scan results.\n\nI'm not sure it wants to be using the indexes all of the time.\n\n> Nested Loop Left Join (cost=0.00..23433.18 rows=1871 width=34) (actual\n> time=0.555..5095.434 rows=3224 loops=1)\n> Total runtime: 5100.028 ms\n\n> Nested Loop Left Join (cost=0.00..76468.90 rows=9223 width=34) (actual\n> time=0.559..17387.427 rows=19997 loops=1)\n> Total runtime: 17416.501 ms\n\n> Nested Loop Left Join (cost=0.00..29160.02 rows=2327 width=34) (actual\n> time=0.279..510.773 rows=5935 loops=1)\n> Total runtime: 516.782 ms\n\n#1 = 630 rows/sec (with index on cable_billing)\n#2 = 1,148 rows/sec (without index)\n#3 = 11,501 rows/sec (with index)\n\nThe third case is so much faster, I suspect the data wasn't cached at the \nbeginning of this run.\n\nIn any case #2 is faster than #1. If the planner is getting things wrong, \nyou're not showing it here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 24 Mar 2004 16:44:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with query plan inconsistencies" } ]
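If the planner's row estimates are what make it flip between the index scan and the seq scan, two cheap experiments are often worth running first; the statistics target of 100 is just an illustrative starting value, not a recommendation:

    ALTER TABLE cable_billing ALTER COLUMN lat  SET STATISTICS 100;
    ALTER TABLE cable_billing ALTER COLUMN long SET STATISTICS 100;
    ANALYZE cable_billing;

    -- For testing only, never in production: force the index plan and re-run the
    -- EXPLAIN ANALYZE above to compare its true cost against the seq-scan plan.
    SET enable_seqscan = off;

Whether the estimates (1871 and 9223 rows against the actual 3224 and 19997 in the plans above) tighten up after that should show quickly in the EXPLAIN ANALYZE output.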
[ { "msg_contents": "What bus speeds?\r\n \r\n533MHz on the 32-bit Intel will give you about 4.2Gbps of IO throughput...\r\n \r\nI think the Sun will be 150MHz, 64bit is 2.4Gbps of IO. Correct me if i am wrong.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Subbiah, Stalin [mailto:[email protected]] \r\n\tSent: Tue 3/23/2004 1:40 PM \r\n\tTo: 'Andrew Sullivan'; '[email protected]' \r\n\tCc: \r\n\tSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\r\n\t\r\n\t\r\n\r\n\tWe are looking into Sun V210 (2 x 1 GHz cpu, 2 gig ram, 5.8Os) vs. Dell 1750\r\n\t(2 x 2.4 GHz xeon, 2 gig ram, RH3.0). database will mostly be\r\n\twrite intensive and disks will be on raid 10. Wondering if 64bit 1 GHz to\r\n\t32bit 2.4 GHz make a big difference here.\r\n\t\r\n\tThanks!\r\n\t\r\n\t-----Original Message-----\r\n\tFrom: [email protected]\r\n\t[mailto:[email protected]]On Behalf Of Andrew\r\n\tSullivan\r\n\tSent: Tuesday, March 23, 2004 9:37 AM\r\n\tTo: '[email protected]'\r\n\tSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\r\n\t\r\n\t\r\n\tOn Mon, Mar 22, 2004 at 04:05:45PM -0800, Subbiah, Stalin wrote:\r\n\t> being the key performance booster for postgres. what is the preferred OS\r\n\t> for postgres deployment if given an option between linux and solaris. As\r\n\t\r\n\tOne thing this very much depends on is what you're trying to do.\r\n\tSuns have a reputation for greater reliability. While my own\r\n\texperience with Sun hardware has been rather shy of sterling, I _can_\r\n\tsay that it stands head and shoulders above a lot of the x86 gear you\r\n\tcan get.\r\n\t\r\n\tIf you're planning to use Solaris on x86, don't bother. Solaris is a\r\n\tslow, bloated pig compared to Linux, at least when it comes to\r\n\tmanaging the largish number of processes that Postgres requires.\r\n\t\r\n\tIf pure speed is what you're after, I have found that 2-way, 32 bit\r\n\tLinux on P-IIIs compares very favourably to 4 way 64 bit Ultra SPARC\r\n\tIIs.\r\n\t\r\n\tA\r\n\t\r\n\t--\r\n\tAndrew Sullivan | [email protected]\r\n\tThe fact that technology doesn't work is no bar to success in the\r\n\tmarketplace.\r\n\t --Philip Greenspun\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 2: you can get off all lists at once with the unregister command\r\n\t (send \"unregister YourEmailAddressHere\" to [email protected])\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 9: the planner will ignore your desire to choose an index scan if your\r\n\t joining column's datatypes do not match\r\n\t\r\n\r\n", "msg_date": "Tue, 23 Mar 2004 13:53:48 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" } ]
[ { "msg_contents": "\nHello fellow PostgreSQL users.\n\nWe've been working on this interesting issue for some time now, and we're\nhoping that someone can help.\n\nWe've recently integrated postgres into an existing mature app. Its a\ntime sensitive 24x7 system. It runs on HP9000, a K370 Dual Processor\nsystem. Postgres is version 7.3.2. Its spawned as a child from a parent\nsupervisory process, and they communicate to eachother via shared memory.\n\nWe preform 9-12K selects per hour\n 6-8K inserts per hour (a few updates here as well)\n 1-1.5K Deletes per hour.\n\nIt maintains 48hours of data, so its not a large database; roughly\n<600mbs. We do this by running a housekeeping program in a cron job.\nIt deletes all data older then 48hours, then vaccuum analyzes. It will\nalso preform a reindex if the option is set before it vaccuum's.\n\nPostgres initially worked wonderfully, fast and solid. It\npreformed complex joins in 0.01secs, and was able to keep up with our\nmessage queue. It stayed this way for almost a year during our\ndevelopment.\n\nRecently it started eating up the cpu, and cannot keepup with the system\nlike it used to. The interesting thing here is that it still runs great\non an older system with less ram, one slower cpu, and an older disk.\n\nWe tried the following with no success:\n\nrunning VACCUUM FULL\ndropping all tables and staring anew\nreinstalling postgres\ntweaking kernel parameters (various combos)\ntweaking postgres parameters (various combos)\na number of other ideas\n\nA final note, we have our app on two systems ready for hot backup. The\nhot backup system is that older slower system that I mentioned before. The\ntwo communicate with eachother via rpc's.\n\nAny help anyone can give to steer us in the right direction would be much\nappreciated.\n\nThanks again\n\nFabio E.\n\n\n\nJust in case:\n\nvmstat\n procs memory page\nfaults cpu\n r b w avm free re at pi po fr de sr\nin sy cs us sy id\n 1 0 0 7631 124955 30 31 1 0 0 0 1\n566 964 138 25 2 73\n\ntop\n\nSystem: prokyon Tue Mar 23 19:12:54\n2004\nLoad averages: 0.36, 0.33, 0.31\n170 processes: 169 sleeping, 1 running\nCpu states:\nCPU LOAD USER NICE SYS IDLE BLOCK SWAIT INTR SSYS\n 0 0.07 8.9% 0.0% 0.0% 91.1% 0.0% 0.0% 0.0% 0.0%\n 1 0.72 71.3% 0.0% 1.0% 27.7% 0.0% 0.0% 0.0% 0.0%\n 2 0.29 29.7% 1.0% 5.0% 64.4% 0.0% 0.0% 0.0% 0.0%\n--- ---- ----- ----- ----- ----- ----- ----- ----- -----\navg 0.36 36.3% 1.0% 2.0% 60.8% 0.0% 0.0% 0.0% 0.0%\n\nMemory: 33180K (22268K) real, 38868K (28840K) virtual, 499708K free Page#\n1/17\n\nCPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU\nCOMMAND\n 0 pty/ttyp1 18631 am 154 20 6096K 2412K sleep 3:17 93.84 93.68\npostg\n 0 rroot 18622 am 154 20 1888K 1192K sleep 0:01 0.78 0.78\namcodecon\n\n\nipcs\n\nIPC status from /dev/kmem as of Tue Mar 23 19:19:19 2004\nT ID KEY MODE OWNER GROUP\nMessage Queues:\nq 0 0x3c180239 -Rrw--w--w- root root\nq 1 0x3e180239 --rw-r--r-- root root\nShared Memory:\nm 0 0x2f100002 --rw------- root sys\nm 1 0x4118020d --rw-rw-rw- root root\nm 2 0x4e0c0002 --rw-rw-rw- root root\nm 3 0x4114006c --rw-rw-rw- root root\nm 4 0x4118387e --rw-rw-rw- am am\nm 3805 0x0052e2c1 --rw------- postgres postgres\nm 8606 0x0c6629c9 --rw-r----- root sys\nm 407 0x06347849 --rw-rw-rw- root sys\nSemaphores:\ns 0 0x2f100002 --ra-ra-ra- root sys\ns 1 0x4118020d --ra-ra-ra- root root\ns 2 0x4e0c0002 --ra-ra-ra- root root\ns 3 0x4114006c --ra-ra-ra- root root\ns 4 0x00446f6e --ra-r--r-- root root\ns 5 0x00446f6d --ra-r--r-- root root\ns 6 0x01090522 --ra-r--r-- root 
root\ns 7 0x61142e7c --ra-ra-ra- root root\ns 8 0x73142e7c --ra-ra-ra- root root\ns 9 0x70142e7c --ra-ra-ra- root root\ns 10 0x69142e7c --ra-ra-ra- root root\ns 11 0x75142e7c --ra-ra-ra- root root\ns 12 0x63142e7c --ra-ra-ra- root root\ns 13 0x64142e7c --ra-ra-ra- root root\ns 14 0x66142e7c --ra-ra-ra- root root\ns 15 0x6c142e7c --ra-ra-ra- root root\ns 1168 0x0052e2c1 --ra------- postgres postgres\ns 401 0x0052e2c2 --ra------- postgres postgres\ns 402 0x0052e2c3 --ra------- postgres postgres\n\n\n", "msg_date": "Tue, 23 Mar 2004 14:21:34 -0500 (EST)", "msg_from": "Fabio Esposito <[email protected]>", "msg_from_op": true, "msg_subject": "postgres eating CPU on HP9000" }, { "msg_contents": "Fabio Esposito <[email protected]> writes:\n> We've recently integrated postgres into an existing mature app. Its a\n> time sensitive 24x7 system. It runs on HP9000, a K370 Dual Processor\n> system. Postgres is version 7.3.2. Its spawned as a child from a parent\n> supervisory process, and they communicate to eachother via shared memory.\n\nYou would be well advised to update to 7.3.6, though I'm not sure if any\nof the post-7.3.2 fixes have anything to do with your speed problem.\n\n> Recently it started eating up the cpu, and cannot keepup with the system\n> like it used to. The interesting thing here is that it still runs great\n> on an older system with less ram, one slower cpu, and an older disk.\n\n> We tried the following with no success:\n\n> running VACCUUM FULL\n> dropping all tables and staring anew\n\nDid you start from a fresh initdb, or just drop and recreate user\ntables? I'm wondering about index bloat on the system tables ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Mar 2004 16:53:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000 " }, { "msg_contents": "On Tue, 23 Mar 2004, Fabio Esposito wrote:\n\n> \n> Hello fellow PostgreSQL users.\n> \n> We've been working on this interesting issue for some time now, and we're\n> hoping that someone can help.\n> \n> We've recently integrated postgres into an existing mature app. Its a\n> time sensitive 24x7 system. It runs on HP9000, a K370 Dual Processor\n> system. Postgres is version 7.3.2. Its spawned as a child from a parent\n> supervisory process, and they communicate to eachother via shared memory.\n> \n> We preform 9-12K selects per hour\n> 6-8K inserts per hour (a few updates here as well)\n> 1-1.5K Deletes per hour.\n> \n> It maintains 48hours of data, so its not a large database; roughly\n> <600mbs. We do this by running a housekeeping program in a cron job.\n> It deletes all data older then 48hours, then vaccuum analyzes. It will\n> also preform a reindex if the option is set before it vaccuum's.\n> \n> Postgres initially worked wonderfully, fast and solid. It\n> preformed complex joins in 0.01secs, and was able to keep up with our\n> message queue. It stayed this way for almost a year during our\n> development.\n> \n> Recently it started eating up the cpu, and cannot keepup with the system\n> like it used to. 
The interesting thing here is that it still runs great\n> on an older system with less ram, one slower cpu, and an older disk.\n> \n> We tried the following with no success:\n> \n> running VACCUUM FULL\n> dropping all tables and staring anew\n> reinstalling postgres\n> tweaking kernel parameters (various combos)\n> tweaking postgres parameters (various combos)\n> a number of other ideas\n\nThis almost sounds like a problem (fixed in 7.4 I believe) where some \nsystem catalog indexes would get huge over time, and couldn't be vacuumed \nor reindexed while the database was up in multi-user mode.\n\nI'll defer to Tom or Bruce or somebody to say if my guess is even close...\n\n", "msg_date": "Fri, 26 Mar 2004 15:17:51 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "Fabio,\n\n> Postgres initially worked wonderfully, fast and solid. It\n> preformed complex joins in 0.01secs, and was able to keep up with our\n> message queue. It stayed this way for almost a year during our\n> development.\n> \n> Recently it started eating up the cpu, and cannot keepup with the system\n> like it used to. The interesting thing here is that it still runs great\n> on an older system with less ram, one slower cpu, and an older disk.\n\nThis really points to a maintenance problem. How often do you run VACUUM \nANALYZE? You have a very high rate of data turnover, and should need to \nVACUUM frequently.\n\nAlso, what's you max_fsm_pages setting.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 26 Mar 2004 15:11:22 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "<snip>\n\nWe are experiencing exactly the same problem here, and we use 7.4 on\nLinux/i386 SMP (2 processors). Our databases does even more access:\nabout 30k selects per hour, 10k updates and inserts per hour\n\nVacuum analyze is done daily.\n\nWe migrated our database to a new server. Initially, everything was fine,\nand pretty fast. In a week or so, Vacuum performance is pretty slow. What\nwas done in 15 minutes now takes 2 hours. Postgres is consuming a lot of\nCPU power and, when the system is in peak period, it's even worse.\n\nSure, we have a large database. 3 tables have more than 10M records, but\nmore or less suddenly, we're having a heavy performance prejudice.\n\n>\n> This almost sounds like a problem (fixed in 7.4 I believe) where some\n> system catalog indexes would get huge over time, and couldn't be\n> vacuumed or reindexed while the database was up in multi-user mode.\n>\n> I'll defer to Tom or Bruce or somebody to say if my guess is even\n> close...\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Sat, 27 Mar 2004 07:38:37 -0300 (BRT)", "msg_from": "\"Marcus Andree S. Magalhaes\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "Marcus,\n\n> We are experiencing exactly the same problem here, and we use 7.4 on\n> Linux/i386 SMP (2 processors). Our databases does even more access:\n> about 30k selects per hour, 10k updates and inserts per hour\n>\n> Vacuum analyze is done daily.\n\nWhat is your max_fsm_pages setting? 
If you are getting 10,000 updates per \nhour, daily VACUUM ANALYZE may not be enough.\n\nAlso do you run VACUUM ANALYZE as a superuser, or as a regular user?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 28 Mar 2004 11:24:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "> Marcus,\n>\n>> We are experiencing exactly the same problem here, and we use 7.4 on\n>> Linux/i386 SMP (2 processors). Our databases does even more access:\n>> about 30k selects per hour, 10k updates and inserts per hour\n>>\n>> Vacuum analyze is done daily.\n>\n> What is your max_fsm_pages setting? If you are getting 10,000\n> updates per hour, daily VACUUM ANALYZE may not be enough.\n>\n\nmax_fsm_pages is set to 500000\n\n> Also do you run VACUUM ANALYZE as a superuser, or as a regular user?\n>\n\nAs a regular user (database owner). Is thery any difference when vacuuming\nas a super user?\n\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\n> your\n> joining column's datatypes do not match\n\n\n\n", "msg_date": "Mon, 29 Mar 2004 07:57:43 -0300 (BRT)", "msg_from": "\"Marcus Andree S. Magalhaes\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "\n\n\nOn Fri, 26 Mar 2004, Josh Berkus wrote:\n\n> Fabio,\n>\n> > Recently it started eating up the cpu, and cannot keepup with the system\n> > like it used to. The interesting thing here is that it still runs great\n> > on an older system with less ram, one slower cpu, and an older disk.\n>\n> This really points to a maintenance problem. How often do you run VACUUM\n> ANALYZE? You have a very high rate of data turnover, and should need to\n> VACUUM frequently.\n>\n> Also, what's you max_fsm_pages setting.\n>\n\nWe run VACUUM ANALYZE after we remove about 1000 rows every hour on the\nhalh hour. Our max_fsm_pages is set to 10000\n\nThanks again\n\nFabio\n\n", "msg_date": "Mon, 29 Mar 2004 08:09:22 -0500 (EST)", "msg_from": "Fabio Esposito <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "\"Marcus Andree S. Magalhaes\" <[email protected]> writes:\n>> Also do you run VACUUM ANALYZE as a superuser, or as a regular user?\n\n> As a regular user (database owner). Is thery any difference when vacuuming\n> as a super user?\n\nThat's your problem. A regular user won't have permissions to vacuum\nany tables but his own ... in particular, not the system tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Mar 2004 10:36:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000 " }, { "msg_contents": "\nI'm sorry all, when you say regular user as opposed to superuser are you\ntalking about the user that postgres is installed and running as? Should\nthis be done as the os's root?\n\nFabio\n\nOn Mon, 29 Mar 2004, Tom Lane wrote:\n\n> \"Marcus Andree S. Magalhaes\" <[email protected]> writes:\n> >> Also do you run VACUUM ANALYZE as a superuser, or as a regular user?\n>\n> > As a regular user (database owner). Is thery any difference when vacuuming\n> > as a super user?\n>\n> That's your problem. A regular user won't have permissions to vacuum\n> any tables but his own ... 
in particular, not the system tables.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n", "msg_date": "Mon, 29 Mar 2004 12:00:16 -0500 (EST)", "msg_from": "Fabio Esposito <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000 " }, { "msg_contents": "\nOn Mar 29, 2004, at 9:36 AM, Tom Lane wrote:\n\n> \"Marcus Andree S. Magalhaes\" <[email protected]> writes:\n>>> Also do you run VACUUM ANALYZE as a superuser, or as a regular user?\n>\n>> As a regular user (database owner). Is thery any difference when \n>> vacuuming\n>> as a super user?\n>\n> That's your problem. A regular user won't have permissions to vacuum\n> any tables but his own ... in particular, not the system tables.\n>\n> \t\t\tregards, tom lane\n\nIf I vacuum as the superuser, are the system tables automatically \nvacuumed? Or, does using -a from the vacuumdb command accomplish this? \n Or, is there something else I have to specify on the vacuumdb command \nline?\n\nThanks!\nMark\n\n", "msg_date": "Mon, 29 Mar 2004 11:23:49 -0600", "msg_from": "Mark Lubratt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000 " }, { "msg_contents": "On Mon, Mar 29, 2004 at 12:00:16 -0500,\n Fabio Esposito <[email protected]> wrote:\n> \n> I'm sorry all, when you say regular user as opposed to superuser are you\n> talking about the user that postgres is installed and running as? Should\n> this be done as the os's root?\n\nThe os user used for creating the cluster with initdb is a superuser.\nAny accounts created with the permission to create more users are also\nsuperusers.\n", "msg_date": "Mon, 29 Mar 2004 11:36:23 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "Fabio,\n\n> We run VACUUM ANALYZE after we remove about 1000 rows every hour on the\n> halh hour. Our max_fsm_pages is set to 10000\n\nHave you checked how long these vacuums take? If they are starting to \noverlap, that would explain your high CPU usage and poor performance. You \nmight want to consider raising FSM_pages and vacuuming less frequently.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 29 Mar 2004 11:14:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "\n\nThe Vacuum's don't take too long, 10 minutes at most. I can tell from ps\n-ef | grep and top that its the selects/inserts/updates from the postgres\nrelated to our app that take all that time up. If we rerun initdb and\nreload the data, it works great for about two days, then goes bad again.\n\nWe are in the process of trying out 7.4.2 right now, just waiting on the\nreload of pg_dump.\n\nFabio\n\n> Fabio,\n>\n> > We run VACUUM ANALYZE after we remove about 1000 rows every hour on the\n> > halh hour. Our max_fsm_pages is set to 10000\n>\n> Have you checked how long these vacuums take? If they are starting to\n> overlap, that would explain your high CPU usage and poor performance. 
You\n> might want to consider raising FSM_pages and vacuuming less frequently.\n>\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Mon, 29 Mar 2004 14:36:24 -0500 (EST)", "msg_from": "Fabio Esposito <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000" }, { "msg_contents": "Fabio,\n\n> The Vacuum's don't take too long, 10 minutes at most. I can tell from ps\n> -ef | grep and top that its the selects/inserts/updates from the postgres\n> related to our app that take all that time up. If we rerun initdb and\n> reload the data, it works great for about two days, then goes bad again.\n> \n> We are in the process of trying out 7.4.2 right now, just waiting on the\n> reload of pg_dump.\n\nWell, test running VACUUM ANALYZE as the \"postgres\" superuser and see if that \nfixes the issue.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 29 Mar 2004 12:23:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU on HP9000" } ]
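The practical outcome of the thread above is that VACUUM run by a non-superuser skips every table it does not own, including the system catalogs, so catalog bloat keeps accumulating no matter how often the application user vacuums. A minimal sketch of the superuser-side routine being recommended follows; the bloat-check query and the commented REINDEX step are illustrative assumptions, not commands taken from the thread.

    -- Run as the "postgres" superuser so the system catalogs are vacuumed too,
    -- not only the connecting user's own tables.
    VACUUM ANALYZE;

    -- Spot-check catalog bloat that non-superuser vacuums cannot reclaim.
    -- relpages is counted in 8 KB blocks; persistently large values on pg_*
    -- relations right after vacuuming suggest bloat.
    SELECT relname, relkind, relpages, reltuples
      FROM pg_class
     WHERE relname LIKE 'pg_%'
     ORDER BY relpages DESC
     LIMIT 10;

    -- On 7.3.x, badly bloated catalog indexes may additionally need a REINDEX
    -- of the affected catalog, performed outside normal multi-user operation:
    -- REINDEX TABLE pg_class;

From the shell, running vacuumdb -a -z under the postgres account accomplishes much the same thing for every database in the cluster.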
[ { "msg_contents": "I am trying to optimize a query that does a lot of aggregation. I have a\nlarge number of columns that are part of the result, and most are\naggregates. They are acting on two temporary tables, the largest of which\nshould have at most 1 million tuples, and the smaller around 5000; the the\nsmaller table matches the number of rows expecting in the result. I've\nplayed around with some indexes on the temp tables, and analyzing them; even\nusing a vacuum analyze, and the worst part is always a groupAggregate.\n\n This query can be optimized at the expense of other operations; it will\nbe run during low usage hours. I have tried to bump up sort_mem to get the\nquery optimizer to cosider a HashAggregate instread of a groupAggregate;\nsetting it as high as 2 gigs still had the query optimizer using\nGroupAggregate.\n\nThe troublesome query is:\n\nselect\n tempItems.category_id,\n date('2003-11-22'),\n sum(a) as a,\n count(*) as b,\n sum(case when type = 1 then 0 else someNumber end) as successful,\n sum(c) as c,\n ........\n ........\n tempAggregates.mode as mode\n -variations of the above repeated around 30 times, with a few other\naggregates like min and max making an appearance, and some array stuff\n from tempItems join tempAggregates using (category_id)\n group by tempItems.category_id, mode\n\nI've tried just grouping by category_id, and doing a max(mode), but that\ndoesn't have much of an effect on performance; although row estimation for\nthe group aggregate was better. A lot is being done, so maybe I can't get\nit to be much more efficient...\n\nHere's the output of an explain analyze:\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---\n GroupAggregate (cost=0.00..338300.34 rows=884 width=345) (actual\ntime=86943.272..382718.104 rows=3117 loops=1)\n -> Merge Join (cost=0.00..93642.52 rows=1135610 width=345) (actual\ntime=0.148..24006.748 rows=1120974 loops=1)\n Merge Cond: (\"outer\".category_id = \"inner\".category_id)\n -> Index Scan using tempaggregatesindex on tempaggregates\n(cost=0.00..91.31 rows=3119 width=115) (actual time=0.055..6.573 rows=3117\nloops=1)\n -> Index Scan using tempitemsindex on tempitems\n(cost=0.00..79348.45 rows=1135610 width=241) (actual time=0.064..7511.980\nrows=1121164 loops=1)\n Total runtime: 382725.502 ms\n(6 rows)\n\nAny thoughts or suggestions would be appreciated.\n\n-Adam Palmblad\n\n", "msg_date": "Tue, 23 Mar 2004 12:03:48 -0800", "msg_from": "\"A Palmblad\" <[email protected]>", "msg_from_op": true, "msg_subject": "SLOW query with aggregates" }, { "msg_contents": "\"A Palmblad\" <[email protected]> writes:\n> GroupAggregate (cost=0.00..338300.34 rows=884 width=345) (actual\n> time=86943.272..382718.104 rows=3117 loops=1)\n> -> Merge Join (cost=0.00..93642.52 rows=1135610 width=345) (actual\n> time=0.148..24006.748 rows=1120974 loops=1)\n\nYou do not have a planning problem here, and trying to change the plan\nis a waste of time. The slowness is in the actual computation of the\naggregate functions; ergo the only way to speed it up is to change what\nyou're computing. 
What aggregates are you computing exactly, and over\nwhat datatypes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 15:32:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLOW query with aggregates " }, { "msg_contents": "\n\"A Palmblad\" <[email protected]> writes:\n\n> GroupAggregate (cost=0.00..338300.34 rows=884 width=345) (actual\n> time=86943.272..382718.104 rows=3117 loops=1)\n> -> Merge Join (cost=0.00..93642.52 rows=1135610 width=345) (actual\n> time=0.148..24006.748 rows=1120974 loops=1)\n\nI think the reason you're getting a GroupAggregate here instead of a\nHashAggregate is that the MergeJoin is already producing the records in the\ndesired order, so the GroupAggregate doesn't require an extra sort, ie, it's\neffectively free.\n\nYou might be able to verify this by running the query with \n\nenable_indexscan = off and/or enable_mergejoin = off\n\nsome combination of which might get the planner to do a seqscan of the large\ntable with a hash join to the small table and then a HashAggregate.\n\nIf you're reading a lot of the large table the seqscan could be a little\nfaster, not much though. And given the accurate statistics guesses here the\nplanner may well have gotten this one right and the seqscan is slower. Can't\nhurt to be verify it though.\n\n-- \ngreg\n\n", "msg_date": "23 Mar 2004 22:42:20 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLOW query with aggregates" } ]
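Greg's suggestion above can be tried without touching postgresql.conf, because the planner switches are session-local. A hedged sketch of that experiment follows; the cut-down SELECT stands in for the full thirty-aggregate statement from the original post and is not the real query.

    -- Session-local experiment: discourage the merge-join/index-scan plan so
    -- the planner considers a seqscan + hash join, which can feed a
    -- HashAggregate instead of the GroupAggregate.
    SET enable_mergejoin = off;
    SET enable_indexscan = off;

    EXPLAIN ANALYZE
    SELECT t.category_id,
           sum(t.a) AS a,
           count(*) AS b
      FROM tempitems t
      JOIN tempaggregates g USING (category_id)
     GROUP BY t.category_id;

    -- Put the planner back to normal for the rest of the session.
    RESET enable_mergejoin;
    RESET enable_indexscan;

If, as Tom says, the time is going into computing the thirty-odd aggregates themselves, the alternative plan will not be much faster; comparing the two EXPLAIN ANALYZE outputs settles that either way.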
[ { "msg_contents": "As anyone done performance benchmark testing with solaris sparc/intel linux.\nI once read a post here, which had benchmarking test results for using\ndifferent filesystem like xfs, ext3, ext2, ufs etc. i couldn't find that\nlink anymore and google is failing on me, so anyone have the link handy.\n\nThanks!\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Tuesday, March 23, 2004 12:13 PM\nTo: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';\[email protected]\nSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n\n\nMatt, Stalin,\n\n> As for the compute intensive side (complex joins & sorts etc), the Dell\nwill \nmost likely beat the Sun by some distance, although\n> what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\nlatency.\n\nPersonally, I've been unimpressed by Dell/Xeon; I think the Sun might do \nbetter than you think, comparitively. On all the Dell servers I've used\nso \nfar, I've not seen performance that comes even close to the hardware specs.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n", "msg_date": "Tue, 23 Mar 2004 12:42:33 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "On Tue, 2004-03-23 at 12:42, Subbiah, Stalin wrote:\n> As anyone done performance benchmark testing with solaris sparc/intel linux.\n> I once read a post here, which had benchmarking test results for using\n> different filesystem like xfs, ext3, ext2, ufs etc. i couldn't find that\n> link anymore and google is failing on me, so anyone have the link handy.\n> \n> Thanks!\n\nThis link: http://developer.osdl.org/markw/ takes you to Mark Wong's\ndatabase developer page. The top set of links shows performance results\nfor Linux kernels running an OLTP workload (dbt-2) under PostgreSQL. He\nhas numbers for ia32 and ia64 under different file system types.\n\nTo do a \"good enough\" comparison, one would need to port this test kit\nto solaris. So far, this kit is only running on Linux. No one, to my\nknowledge has it running on any other platform. But I suspect there are\nsome working to port the kits.\n\n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Tuesday, March 23, 2004 12:13 PM\n> To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';\n> [email protected]\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n> \n> \n> Matt, Stalin,\n> \n> > As for the compute intensive side (complex joins & sorts etc), the Dell\n> will \n> most likely beat the Sun by some distance, although\n> > what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\n> latency.\n> \n> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do \n> better than you think, comparitively. On all the Dell servers I've used\n> so \n> far, I've not seen performance that comes even close to the hardware specs.\n\n", "msg_date": "Tue, 23 Mar 2004 13:56:35 -0800", "msg_from": "Craig Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "Subbiah, Stalin wrote:\n> As anyone done performance benchmark testing with solaris sparc/intel linux.\n> I once read a post here, which had benchmarking test results for using\n> different filesystem like xfs, ext3, ext2, ufs etc. 
i couldn't find that\n> link anymore and google is failing on me, so anyone have the link handy.\n\nIf you're talking about the work I did, it's here:\nhttp://www.potentialtech.com/wmoran/ (then follow the link)\n\nAnyway, that should be easily portable to any platform that will run Postgres,\nbut I don't know how useful it is in comparing two different platforms. See\nthe information in the document. It was intended only to test disk access speed,\nand attempts to flood the HDD system with database work to do.\n\n> \n> Thanks!\n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Tuesday, March 23, 2004 12:13 PM\n> To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';\n> [email protected]\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n> \n> \n> Matt, Stalin,\n> \n> \n>>As for the compute intensive side (complex joins & sorts etc), the Dell\n> \n> will \n> most likely beat the Sun by some distance, although\n> \n>>what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\n> \n> latency.\n> \n> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do \n> better than you think, comparitively. On all the Dell servers I've used\n> so \n> far, I've not seen performance that comes even close to the hardware specs.\n> \n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 23 Mar 2004 17:10:19 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" }, { "msg_contents": "Are you talking about\nhttp://www.potentialtech.com/wmoran/postgresql.php#conclusion\n----- Original Message ----- \nFrom: \"Subbiah, Stalin\" <[email protected]>\nTo: <[email protected]>; \"Matt Clark\" <[email protected]>; \"Subbiah, Stalin\"\n<[email protected]>; \"'Andrew Sullivan'\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, March 23, 2004 3:42 PM\nSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n\n\n> As anyone done performance benchmark testing with solaris sparc/intel\nlinux.\n> I once read a post here, which had benchmarking test results for using\n> different filesystem like xfs, ext3, ext2, ufs etc. i couldn't find that\n> link anymore and google is failing on me, so anyone have the link handy.\n>\n> Thanks!\n>\n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Tuesday, March 23, 2004 12:13 PM\n> To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';\n> [email protected]\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n>\n>\n> Matt, Stalin,\n>\n> > As for the compute intensive side (complex joins & sorts etc), the Dell\n> will\n> most likely beat the Sun by some distance, although\n> > what the Sun lacks in CPU power it may make up a bit in memory\nbandwidth/\n> latency.\n>\n> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do\n> better than you think, comparitively. On all the Dell servers I've used\n> so\n> far, I've not seen performance that comes even close to the hardware\nspecs.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n", "msg_date": "Tue, 23 Mar 2004 17:48:21 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" } ]
[ { "msg_contents": "\n\n---------- Forwarded Message ----------\n\nSubject: FreeBSD, PostgreSQL, semwait and sbwait!\nDate: March 23, 2004 12:02 pm\nFrom: \"Jason Coene\" <[email protected]>\nTo: <[email protected]>\n\nHello all,\n\nWe're having a substantial problem with our FreeBSD 5.2 database server\nrunning PostgreSQL - it's getting a lot of traffic (figure about 3,000\nqueries per second), but queries are slow, and it's seemingly waiting on\nother things than CPU time.\n\nThe database server is a dual P4-2.8 w/ HT enabled (kernel finds 4\nprocessors), 2GB RAM, 4 disk Serial ATA on 3ware RAID, gigabit Ethernet\nconnection to web servers. It's running FreeBSD 5.2 and PostgreSQL 7.4.1.\n\nThe server is taking a while to respond to both connections, and then\nqueries (between 1-3 seconds, on a query that should execute in 0.05 or\nless).\n\nThe CPU usage for the server never goes above 30% (70% idle), and the CPU\ntime that's in use is nearly always split equal between user and system.\nThe system is using\n\nDoing a \"top\", this is what we see:\n\nlast pid: 51833; load averages: 13.72, 11.74, 10.01 up 0+01:55:45 15:00:03\n116 processes: 1 running, 99 sleeping, 16 lock\nCPU states: 14.6% user, 0.0% nice, 23.7% system, 0.2% interrupt, 61.5% idle\nMem: 91M Active, 1043M Inact, 160M Wired, 52K Cache, 112M Buf, 644M Free\nSwap: 4096M Total, 4096M Free\n\n20354 pgsql 131 0 80728K 5352K select 0 0:24 1.71% 1.71% postgres\n36415 pgsql 4 0 81656K 67468K sbwait 2 0:00 3.23% 0.59% postgres\n36442 pgsql 128 0 82360K 15868K select 2 0:00 1.75% 0.24% postgres\n36447 pgsql -4 0 82544K 10616K semwai 0 0:00 2.05% 0.20% postgres\n36461 pgsql -4 0 81612K 6844K semwai 2 0:00 2.05% 0.20% postgres\n36368 pgsql 4 0 82416K 20780K sbwait 3 0:00 0.50% 0.15% postgres\n36459 pgsql -4 0 81840K 7816K semwai 0 0:00 1.54% 0.15% postgres\n36469 pgsql -4 0 81840K 7964K semwai 2 0:00 1.54% 0.15% postgres\n36466 pgsql 129 0 81840K 7976K *Giant 2 0:00 1.54% 0.15% postgres\n36479 pgsql -4 0 81528K 6648K semwai 0 0:00 3.00% 0.15% postgres\n36457 pgsql -4 0 81840K 8040K semwai 1 0:00 1.03% 0.10% postgres\n36450 pgsql 129 0 82352K 8188K *Giant 2 0:00 1.03% 0.10% postgres\n36472 pgsql -4 0 81824K 7416K semwai 2 0:00 1.03% 0.10% postgres\n36478 pgsql 131 0 81840K 7936K select 0 0:00 2.00% 0.10% postgres\n36454 pgsql 4 0 82416K 16300K sbwait 3 0:00 0.51% 0.05% postgres\n36414 pgsql 4 0 82416K 15872K sbwait 2 0:00 0.27% 0.05% postgres\n\nOur kernel is GENERIC plus:\n\nmaxusers 512\noptions SYSVSHM\noptions SHMMAXPGS=262144\noptions SHMSEG=512\noptions SHMMNI=512\noptions SYSVSEM\noptions SEMMNI=512\noptions SEMMNS=1024\noptions SEMMNU=512\noptions SEMMAP=512\noptions NMBCLUSTERS=32768\n\nInteresting bits from postgresql.conf:\n\nmax_connections = 512\nshared_buffers = 8192\nsort_mem = 16384\nvacuum_mem = 8192\nfsync = false\n\nIt seems that queries are executing fine once they start, but it's taking a\nwhile for them to get going, while the postgres process sits in semwait,\nsbwait or select. This problem doesn't happen when there's little load on\nthe server, it's only when we open it for public consumption that it\nexhibits these problems.\n\nAnyone have this type of problem before? 
Am I missing something?\n\nThanks, Jason\n\n_______________________________________________\[email protected] mailing list\nhttp://lists.freebsd.org/mailman/listinfo/freebsd-performance\nTo unsubscribe, send any mail to\n \"[email protected]\"\n\n-------------------------------------------------------\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n\n", "msg_date": "Tue, 23 Mar 2004 12:48:34 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "Darcy,\n\nI suggest getting this person over here instead. They have a *lot* to learn \nabout tuning PostgreSQL.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 23 Mar 2004 14:16:01 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "Darcy Buskermolen <[email protected]> writes:\n> The database server is a dual P4-2.8 w/ HT enabled (kernel finds 4\n> processors), 2GB RAM, 4 disk Serial ATA on 3ware RAID, gigabit Ethernet\n> connection to web servers. It's running FreeBSD 5.2 and PostgreSQL 7.4.1.\n\nHm. What happens if you turn off the hyperthreading?\n\nWe have seen a number of reports recently that suggest that our\nspinlocking code behaves inefficiently on hyperthreaded machines.\nThis hasn't got to the point where we have any substantiated evidence,\nmind you, but maybe you can help provide some.\n\nAlso it might be interesting to put one of these into the outer loop in\ns_lock():\n\n\t__asm__ __volatile__(\n\t\t\" rep; nop\t\t\t\\n\"\n\t\t: : : \"memory\");\n\n(This suggestion is a quick-and-dirty backport of a change that's\nalready in CVS tip.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 17:17:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait! " }, { "msg_contents": "Tom,\n\n> Hm. What happens if you turn off the hyperthreading?\n\nForget hyperthreading. Look at their postgresql.conf settings. 8mb shared \nmem, 16mb sort mem per connection for 512 connections, default \neffective_cache_size. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 23 Mar 2004 14:55:08 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Forget hyperthreading. Look at their postgresql.conf settings. 8mb shared\n> mem, 16mb sort mem per connection for 512 connections, default \n> effective_cache_size. \n\nThey could well be going into swap hell due to the oversized sort_mem,\nbut that didn't quite seem to explain the reported behavior. I'd want\nto see vmstat or similar output to confirm whether the disks are busy,\nthough. Amazing how many people forget that a database is normally\nI/O-bound rather than CPU-bound.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Mar 2004 18:00:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait! " }, { "msg_contents": "\n\nJosh Berkus wrote:\n\n> Forget hyperthreading. Look at their postgresql.conf settings. 8mb shared\n>\n>mem, 16mb sort mem per connection for 512 connections, default \n>effective_cache_size. 
\n>\n> \n>\nUmm...its 64Mb shared buffers isn't it ?\n\nHowever agree completely with general thrust of message.... particularly \nthe 16Mb of sort mem / connection - a very bad idea unless you are \nrunning a data warehouse box for only a few users (not 512 of them...)\n\nregards\n\nMark\n\n", "msg_date": "Wed, 24 Mar 2004 21:09:02 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "Darcy Buskermolen wrote:\n\n>---------- Forwarded Message ----------\n>\n>Subject: FreeBSD, PostgreSQL, semwait and sbwait!\n>Date: March 23, 2004 12:02 pm\n>From: \"Jason Coene\" <[email protected]>\n>To: <[email protected]>\n>\n>Hello all,\n>\n>We're having a substantial problem with our FreeBSD 5.2 database server\n>running PostgreSQL - it's getting a lot of traffic (figure about 3,000\n>queries per second), but queries are slow, and it's seemingly waiting on\n>other things than CPU time\n> \n>\nCould this be a 5.2 performance issue ?\n\nIn spite of certain areas where the 5.x series performance is known to \nbe much better than 4.x (e.g networking), this may not be manifested in \npractice for a complete application.\n(e.g. I am still running 4.9 as it outperformed 5.1 vastly for a ~100 \ndatabase sessions running queries - note that I have not tried out 5.2, \nso am happy to be corrected on this)\n\nregards\n\nMark\n\n", "msg_date": "Wed, 24 Mar 2004 21:26:27 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "Hello,\n\n>> We're having a substantial problem with our FreeBSD 5.2 database \n>> server\n>> running PostgreSQL - it's getting a lot of traffic (figure about 3,000\n>> queries per second), but queries are slow, and it's seemingly waiting \n>> on\n>> other things than CPU time\n>>\n> Could this be a 5.2 performance issue ?\n>\n> In spite of certain areas where the 5.x series performance is known to \n> be much better than 4.x (e.g networking), this may not be manifested \n> in practice for a complete application.\n> (e.g. 
I am still running 4.9 as it outperformed 5.1 vastly for a ~100 \n> database sessions running queries - note that I have not tried out \n> 5.2, so am happy to be corrected on this)\nI found the same problem.\n\nI use OpenBSD 3.3,\nOn Pentium 2,4 GHz with 1 Gb RAM, RAID 10.\nWith PostgreSQL 7.4.1 with 32 Kb bock's size (to match ffs and raid \nblock's size)\nWith pg_autovacuum daemon from Pg 7.5.\n\nI run a web indexer.\nsd0 raid-1 with system pg-log and indexer-log\nsd1 raid-10 with pg-data and indexer-data\nThe sd1 disk achives between 10 and 40 Mb/s on normal operation.\n\nWhen I get semwait in top, system waits ;-)\nNot much disk activity.\nNot much log in pg or indexer.\nJust wait....\n\nWhat can I do ?\n\n > sudo top -s1 -S -I\nload averages: 4.45, 4.45, 3.86 \n 11:25:52\n97 processes: 1 running, 96 idle\nCPU states: 2.3% user, 0.0% nice, 3.8% system, 0.8% interrupt, \n93.1% idle\nMemory: Real: 473M/803M act/tot Free: 201M Swap: 0K/3953M used/tot\n\n PID USERNAME PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n 2143 postgres -5 0 4008K 37M sleep biowai 1:02 1.81% postgres\n28662 postgres 14 0 4060K 37M sleep semwai 0:59 1.17% postgres\n25794 postgres 14 0 4072K 37M sleep semwai 1:30 0.93% postgres\n23271 postgres -5 0 4060K 37M sleep biowai 1:13 0.29% postgres\n14619 root 28 0 276K 844K run - 0:01 0.00% top\n\n > vmstat -w1 sd0 sd1\n r b w avm fre flt re pi po fr sr sd0 sd1 in sy cs \nus sy id\n 0 4 0 527412 36288 1850 0 0 0 0 0 26 72 368 8190 588 \n0 4 96\n 0 4 0 527420 36288 1856 0 0 0 0 0 0 86 356 8653 620 \n2 2 97\n 0 4 0 527432 36280 1853 0 0 0 0 0 0 54 321 8318 458 \n1 3 96\n 0 4 0 527436 36248 1864 0 0 0 0 0 0 77 358 8417 539 \n1 2 97\n 0 4 0 522828 40932 2133 0 0 0 0 0 7 70 412 15665 724 \n2 3 95\n 0 4 0 522896 40872 1891 0 0 0 0 0 15 72 340 9656 727 \n3 5 92\n 0 4 0 522900 40872 1841 0 0 0 0 0 0 69 322 8308 536 \n1 2 98\n 0 4 0 522920 40860 1846 0 0 0 0 0 1 69 327 8023 520 \n2 2 97\n 0 4 0 522944 40848 1849 0 0 0 0 0 4 76 336 8035 567 \n1 2 97\n 0 4 0 522960 40848 1843 0 0 0 0 0 0 77 331 14669 587 \n3 2 95\n 0 4 0 522976 40836 1848 0 0 0 0 0 4 81 339 8384 581 \n1 2 97\n 0 4 0 522980 40836 1841 0 0 0 0 0 3 65 320 8068 502 \n1 4 95\n 0 4 0 523000 40824 1848 0 0 0 0 0 14 74 341 8226 564 \n3 2 95\n 0 4 0 523020 40812 1844 0 0 0 0 0 0 67 317 7606 530 \n2 1 97\n 1 4 0 523052 40796 1661 0 0 0 0 0 0 68 315 11603 493 \n2 2 97\n 1 4 0 523056 40800 233 0 0 0 0 0 12 87 341 12550 609 \n2 2 96\n 0 4 0 523076 40788 1845 0 0 0 0 0 0 82 334 12457 626 \n2 2 96\n 0 4 0 523100 40776 1851 0 0 0 0 0 0 91 345 10914 623 \n2 3 95\n 0 4 0 523120 40764 1845 0 0 0 0 0 0 92 343 19213 596 \n1 5 95\n 0 4 0 523136 40752 1845 0 0 0 0 0 0 97 349 8659 605 \n2 2 96\n 0 4 0 523144 40748 4501 0 0 0 0 0 32 78 385 15632 934 \n25 12 64\n 0 4 0 523168 40728 1853 0 0 0 0 0 3 74 335 3965 531 \n0 2 98\n\n > ps -Upostgresql -Ostart | grep -v idle\n PID STARTED TT STAT TIME COMMAND\n 8267 10:53AM ?? Is 0:00.28 /usr/local/bin/pg_autovacuum -D -L \n/var/pgsql/autovacuum\n23271 10:54AM ?? I 1:13.56 postmaster: dps dps 127.0.0.1 SELECT \n(postgres)\n28662 10:55AM ?? I 0:59.98 postmaster: dps dps 127.0.0.1 SELECT \n(postgres)\n25794 10:56AM ?? D 1:30.48 postmaster: dps dps 127.0.0.1 SELECT \n(postgres)\n 2143 11:02AM ?? 
D 1:02.06 postmaster: dps dps 127.0.0.1 DELETE \n(postgres)\n25904 10:52AM C0- I 0:00.07 /usr/local/bin/postmaster -D \n/var/pgsql (postgres)\n10908 10:52AM C0- I 0:05.96 postmaster: stats collector process \n (postgres)\n 7045 10:52AM C0- I 0:05.19 postmaster: stats buffer process \n(postgres)\n\n > grep -v -E '^#' /var/pgsql/postgresql.conf\ntcpip_socket = true\nmax_connections = 100\nshared_buffers = 1024 # 32KB\nmax_fsm_pages = 1000000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 200 # min 100, ~50 bytes each\nwal_buffers = 32 # min 4, 8KB each\ncheckpoint_segments = 16 # in logfile segments, min 1, 16MB each\ncommit_delay = 100 # range 0-100000, in microseconds\neffective_cache_size = 4096 # 32KB each\nrandom_page_cost = 3\ndefault_statistics_target = 200 # range 1-1000\nclient_min_messages = notice # Values, in order of decreasing detail:\nlog_min_messages = log # Values, in order of decreasing detail:\nlog_min_duration_statement = 20000 # Log all statements whose\nlog_timestamp = true\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\nlc_messages = 'C' # locale for system error message \nstrings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\nexplain_pretty_print = true\n\n > sysctl -a | grep seminfo\nkern.seminfo.semmni = 256\nkern.seminfo.semmns = 2048\nkern.seminfo.semmnu = 30\nkern.seminfo.semmsl = 60\nkern.seminfo.semopm = 100\nkern.seminfo.semume = 10\nkern.seminfo.semusz = 100\nkern.seminfo.semvmx = 32767\nkern.seminfo.semaem = 16384\n\n > systat\nvmstat\n 7 users Load 3.48 3.64 3.56 Fri Apr 30 \n14:42:18 2004\n\n memory totals (in KB) PAGING SWAPPING \nInterrupts\n real virtual free in out in out 361 \ntotal\nActive 514768 527436 36280 ops 100 \nclock\nAll 992496 1005164 4071736 pages 128 \nrtc\n 45 \nfxp0\nProc:r d s w Csw Trp Sys Int Sof Flt 6 forks 88 \ntwe0\n 4 26 580 1848 8395 361 249 1856 6 fkppw\n fksvm\n 3.0% Sys 1.1% User 0.0% Nice 95.9% Idle pwait\n| | | | | | | | | | | relck\n=> rlkok\n noram\nNamei Sys-cache Proc-cache No-cache 80 ndcpy\n Calls hits % hits % miss % 54 fltcp\n 812 806 99 5 1 1 0 208 zfod\n 95 cow\nDiscs cd0 sd0 sd1 sd2 fd0 128 fmin\nseeks 6 82 170 ftarg\nxfers 6 82 60208 itarg\nKbyte 47 2554 226 wired\n sec 1.0 pdfre\n\n > tail -f /var/pgsql/log\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000CA\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000CB\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000CC\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000BF\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000C0\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000C1\"\n2004-04-30 11:35:03 LOG: recycled transaction log file \n\"000000C8000000C2\"\n2004-04-30 11:36:46 LOG: duration: 28284.360 ms statement: SELECT \nrec_id,url FROM url WHERE status > 300 AND status<>304 AND \n(referrer='28615' OR referrer='0') AND bad_since_time<1083317778\n2004-04-30 11:36:46 LOG: duration: 24918.201 ms statement: SELECT \nrec_id,url FROM url WHERE status > 300 AND status<>304 AND \n(referrer='122879' OR referrer='0') AND bad_since_time<1083317781\n2004-04-30 11:36:46 LOG: duration: 21173.427 ms statement: SELECT \nrec_id,url FROM url WHERE status > 300 AND status<>304 AND \n(referrer='586182' OR referrer='0') AND 
bad_since_time<1083317785\n\n From PhpPgAdmin: Table url: Info\nRow Performance\nSequential Index Rows\nScan Read Scan Fetch Insert Udate Delete\n1 414173 85711 10963854 20431 8707 594\n\nI/O Performance\nHeap Index TOAST TOAST Index\nDisk Cache % Disk Cache % Disk Cache % Disk Cache %\n3298907 7790769 (70%) 200782 1274898 (86%) 0 0 (0%) 0 0 \n(0%)\n\nIndex Row Performance\nIndex Scan Read Fetch\nurl_bad_since_time 0 0 0\nurl_crc 2924 131566 131566\nurl_hops 0 0 0\nurl_last_mod_time 0 0 0\nurl_next_index_time 5 5120 5120\nurl_pkey 9187 8980 8980\nurl_referrer 4431 10753641 10753641\nurl_seed 0 0 0\nurl_serverid 0 0 0\nurl_siteid 0 0 0\nurl_status 0 0 0\nurl_url 69164 64547 64547\n\nIndex I/O Performance\nIndex Disk Cache %\nurl_bad_since_time 7169 80280 (92%)\nurl_crc 9106 19200 (68%)\nurl_hops 9071 109864 (92%)\nurl_last_mod_time 5836 27887 (83%)\nurl_next_index_time 12004 109887 (90%)\nurl_pkey 7501 52825 (88%)\nurl_referrer 58765 97634 (62%)\nurl_seed 30293 88712 (75%)\nurl_serverid 8647 110078 (93%)\nurl_siteid 8888 109864 (93%)\nurl_status 7448 111250 (94%)\nurl_url 36054 357417 (91%)\n\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Fri, 30 Apr 2004 14:45:55 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" }, { "msg_contents": "I am wondering if your wait is caused by contention between \npg_autovacuum and the DELETE that is running. Your large Pg blocksize \n(32K) *may* be contributing to any possible contention as well. Maybe \ntry disabling pg_autovacuum to see if there is any change in behaviour.\n\nAlso going through my head is '32 Kb bock's size (to match ffs and raid \nblock's size)' - does that mean you have raid strip size = 32K? maybe \ntry 128K (I know it sounds like a bad thing, but generally raid stripes \nof 128K->256K are better than 32K->64K)\n\nregards\n\nMark\n\n\nPailloncy Jean-G�rard wrote:\n\n> Hello,\n>\n>>\n> I found the same problem.\n>\n> I use OpenBSD 3.3,\n> On Pentium 2,4 GHz with 1 Gb RAM, RAID 10.\n> With PostgreSQL 7.4.1 with 32 Kb bock's size (to match ffs and raid \n> block's size)\n> With pg_autovacuum daemon from Pg 7.5.\n>\n> I run a web indexer.\n> sd0 raid-1 with system pg-log and indexer-log\n> sd1 raid-10 with pg-data and indexer-data\n> The sd1 disk achives between 10 and 40 Mb/s on normal operation.\n>\n> When I get semwait in top, system waits ;-)\n> Not much disk activity.\n> Not much log in pg or indexer.\n> Just wait....\n>\n", "msg_date": "Tue, 04 May 2004 19:39:22 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: FreeBSD, PostgreSQL, semwait and sbwait!" } ]
[ { "msg_contents": "Yep. Thanks Bill.\n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]]\nSent: Tuesday, March 23, 2004 2:10 PM\nTo: Subbiah, Stalin\nCc: [email protected]\nSubject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n\n\nSubbiah, Stalin wrote:\n> As anyone done performance benchmark testing with solaris sparc/intel\nlinux.\n> I once read a post here, which had benchmarking test results for using\n> different filesystem like xfs, ext3, ext2, ufs etc. i couldn't find that\n> link anymore and google is failing on me, so anyone have the link handy.\n\nIf you're talking about the work I did, it's here:\nhttp://www.potentialtech.com/wmoran/ (then follow the link)\n\nAnyway, that should be easily portable to any platform that will run\nPostgres,\nbut I don't know how useful it is in comparing two different platforms. See\nthe information in the document. It was intended only to test disk access\nspeed,\nand attempts to flood the HDD system with database work to do.\n\n> \n> Thanks!\n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Tuesday, March 23, 2004 12:13 PM\n> To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';\n> [email protected]\n> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux\n> \n> \n> Matt, Stalin,\n> \n> \n>>As for the compute intensive side (complex joins & sorts etc), the Dell\n> \n> will \n> most likely beat the Sun by some distance, although\n> \n>>what the Sun lacks in CPU power it may make up a bit in memory bandwidth/\n> \n> latency.\n> \n> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do \n> better than you think, comparitively. On all the Dell servers I've used\n> so \n> far, I've not seen performance that comes even close to the hardware\nspecs.\n> \n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Tue, 23 Mar 2004 15:18:20 -0800", "msg_from": "\"Subbiah, Stalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Benchmarking postgres on Solaris/Linux" } ]
[ { "msg_contents": "Hi,\n\nI am running pg 7.4.1 on linux box.\nI have a midle size DB with many updates and after it I try to run\nvacuum full analyze.\nIt takes about 2 h.\nIf I try to dump and reload the DB it take 20 min.\n\nHow can I improve the vacuum full analyze time?\n\nMy configuration:\n\nshared_buffers = 15000 # min 16, at least max_connections*2,\n8KB each\nsort_mem = 10000 # min 64, size in KB\nvacuum_mem = 32000 # min 1024, size in KB\neffective_cache_size = 40000 # typically 8KB each\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n\nregards,\nivan.\n\n", "msg_date": "Wed, 24 Mar 2004 11:20:15 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "slow vacuum performance" }, { "msg_contents": "pginfo wrote:\n> Hi,\n> \n> I am running pg 7.4.1 on linux box.\n> I have a midle size DB with many updates and after it I try to run\n> vacuum full analyze.\n> It takes about 2 h.\n> If I try to dump and reload the DB it take 20 min.\n> \n> How can I improve the vacuum full analyze time?\n\nHow often are you vacuuming? If you've gone a LONG time since the last vacuum,\nit can take quite a while, to the point where a dump/restore is faster.\n\nA recent realization that I've had some misconceptions about vacuuming led me\nto re-read section 8.2 of the admin guide (on vacuuming) ... I highly suggest\na review of these 3 pages of the admin manual, as it contains an excellent\ndescription of why databases need vacuumed, that one can use to determine how\noften vacuuming is necessary.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Wed, 24 Mar 2004 09:07:27 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow vacuum performance" }, { "msg_contents": "Hi Bill,\nI am vacuuming every 24 h.\nI have a cron script about i.\nBut if I make massive update (for example it affects 1 M rows) and I start vacuum,\nit take this 2 h.\nAlso I will note, that this massive update is running in one transaction ( I can\nnot update 100K and start vacuum after it).\n\nregards,\nivan.\n\nBill Moran wrote:\n\n> pginfo wrote:\n> > Hi,\n> >\n> > I am running pg 7.4.1 on linux box.\n> > I have a midle size DB with many updates and after it I try to run\n> > vacuum full analyze.\n> > It takes about 2 h.\n> > If I try to dump and reload the DB it take 20 min.\n> >\n> > How can I improve the vacuum full analyze time?\n>\n> How often are you vacuuming? If you've gone a LONG time since the last vacuum,\n> it can take quite a while, to the point where a dump/restore is faster.\n>\n> A recent realization that I've had some misconceptions about vacuuming led me\n> to re-read section 8.2 of the admin guide (on vacuuming) ... 
I highly suggest\n> a review of these 3 pages of the admin manual, as it contains an excellent\n> description of why databases need vacuumed, that one can use to determine how\n> often vacuuming is necessary.\n>\n> --\n> Bill Moran\n> Potential Technologies\n> http://www.potentialtech.com\n\n\n\n", "msg_date": "Wed, 24 Mar 2004 15:21:32 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow vacuum performance" }, { "msg_contents": "On Wed, 24 Mar 2004, pginfo wrote:\n\n> Hi,\n> \n> I am running pg 7.4.1 on linux box.\n> I have a midle size DB with many updates and after it I try to run\n> vacuum full analyze.\n\nIs there a reason to not use just regular vacuum / analyze (i.e. NOT \nfull)? \n\n> It takes about 2 h.\n\nFull vacuums, by their nature, tend to be a bit slow. It's better to let \nthe database achieve a kind of \"steady state\" with regards to number of \ndead tuples, and use regular vacuums to reclaim said space rather than a \nfull vacuum.\n\n> How can I improve the vacuum full analyze time?\n> \n> My configuration:\n> \n> shared_buffers = 15000 # min 16, at least max_connections*2,\n> 8KB each\n> sort_mem = 10000 # min 64, size in KB\n\nYou might want to look at dropping sort_mem. It would appear you've been \ngoing through the postgresql.conf file and bumping up numbers to see what \nworks and what doesn't. While most of the settings aren't too dangerous \nto crank up a little high, sort_mem is quite dangerous to crank up high, \nshould you have a lot of people connected who are all sorting. Note that \nsort_mem is a limit PER SORT, not per backend, or per database, or per \nuser, or even per table, but per sort. IF a query needs to run three or \nfour sorts, it can use 3 or 4x sort_mem. If a hundred users do this at \nonce, they can then use 300 or 400x sort_mem. You can see where I'm \nheading.\n\nNote that for individual sorts in batch files, like import processes, you \ncan bump up sort_mem with the set command, so you don't have to have a \nlarge setting in postgresql.conf to use a lot of sort mem when you need \nto, you can just grab it during that one session.\n\n> vacuum_mem = 32000 # min 1024, size in KB\n\nIf you've got lots of memory, crank up vacuum_mem to the 200 to 500 meg \nrange and see what happens.\n\nFor a good tuning guide, go here:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n", "msg_date": "Wed, 24 Mar 2004 09:26:40 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow vacuum performance" }, { "msg_contents": "Hi,\n\nscott.marlowe wrote:\n\n> On Wed, 24 Mar 2004, pginfo wrote:\n>\n> > Hi,\n> >\n> > I am running pg 7.4.1 on linux box.\n> > I have a midle size DB with many updates and after it I try to run\n> > vacuum full analyze.\n>\n> Is there a reason to not use just regular vacuum / analyze (i.e. NOT\n> full)?\n>\n\nYes, in case I make massive updates (only in my case of cource) for example\n2 M rows, I do not expect to have 2M new rows in next 180 days.That is the\nreaso for running vacuum full.\nMy idea was to free unneedet space and so to have faster system.\nIt is possible that I am wrong.\n\n\n> > It takes about 2 h.\n>\n> Full vacuums, by their nature, tend to be a bit slow. 
It's better to let\n> the database achieve a kind of \"steady state\" with regards to number of\n> dead tuples, and use regular vacuums to reclaim said space rather than a\n> full vacuum.\n>\n> > How can I improve the vacuum full analyze time?\n> >\n> > My configuration:\n> >\n> > shared_buffers = 15000 # min 16, at least max_connections*2,\n> > 8KB each\n> > sort_mem = 10000 # min 64, size in KB\n>\n> You might want to look at dropping sort_mem. It would appear you've been\n> going through the postgresql.conf file and bumping up numbers to see what\n> works and what doesn't. While most of the settings aren't too dangerous\n> to crank up a little high, sort_mem is quite dangerous to crank up high,\n> should you have a lot of people connected who are all sorting. Note that\n> sort_mem is a limit PER SORT, not per backend, or per database, or per\n> user, or even per table, but per sort. IF a query needs to run three or\n> four sorts, it can use 3 or 4x sort_mem. If a hundred users do this at\n> once, they can then use 300 or 400x sort_mem. You can see where I'm\n> heading.\n>\n> Note that for individual sorts in batch files, like import processes, you\n> can bump up sort_mem with the set command, so you don't have to have a\n> large setting in postgresql.conf to use a lot of sort mem when you need\n> to, you can just grab it during that one session.\n>\n\nI know. In my case we are using many ID's declared as varchar/name (I know it\nis bad idea, butwe are migrating this system from oracle) and pg have very\nbad performance with varchar/name indexes.\nThe only solution I found was to increase the sort mem.\nBut, I wll try to decrease this one and to see the result.\n\n> > vacuum_mem = 32000 # min 1024, size in KB\n>\n> If you've got lots of memory, crank up vacuum_mem to the 200 to 500 meg\n> range and see what happens.\n>\n\nI wil try it today. It is good idea and hope it will help.\n\n> For a good tuning guide, go here:\n>\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n I know it. It is the best I found and also the site.\n\nThanks for the help.\nivan.\n\n", "msg_date": "Wed, 24 Mar 2004 17:49:55 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow vacuum performance" }, { "msg_contents": "\n\nscott.marlowe wrote:\n\n> On Wed, 24 Mar 2004, pginfo wrote:\n>\n> > Hi,\n> >\n> > scott.marlowe wrote:\n> >\n> > > On Wed, 24 Mar 2004, pginfo wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > I am running pg 7.4.1 on linux box.\n> > > > I have a midle size DB with many updates and after it I try to run\n> > > > vacuum full analyze.\n> > >\n> > > Is there a reason to not use just regular vacuum / analyze (i.e. NOT\n> > > full)?\n> > >\n> >\n> > Yes, in case I make massive updates (only in my case of cource) for example\n> > 2 M rows, I do not expect to have 2M new rows in next 180 days.That is the\n> > reaso for running vacuum full.\n> > My idea was to free unneedet space and so to have faster system.\n> > It is possible that I am wrong.\n>\n> It's all about percentages. If you've got an average of 5% dead tuples\n> with regular vacuuming, then full vacuums won't gain you much, if\n> anything. If you've got 20 dead tuples for each live one, then a full\n> vacuum is pretty much a necessity. The generally accepted best\n> performance comes with 5 to 50% or so dead tuples. 
Keep in mind, having a\n> few dead tuples is actually a good thing, as your database won't grow then\n> srhink the file all the time, but keep it in a steady state size wise.\n\nthanks for the good analyze,ivan.\n\n\n", "msg_date": "Wed, 24 Mar 2004 18:08:51 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow vacuum performance" }, { "msg_contents": "On Wed, 24 Mar 2004, pginfo wrote:\n\n> Hi,\n> \n> scott.marlowe wrote:\n> \n> > On Wed, 24 Mar 2004, pginfo wrote:\n> >\n> > > Hi,\n> > >\n> > > I am running pg 7.4.1 on linux box.\n> > > I have a midle size DB with many updates and after it I try to run\n> > > vacuum full analyze.\n> >\n> > Is there a reason to not use just regular vacuum / analyze (i.e. NOT\n> > full)?\n> >\n> \n> Yes, in case I make massive updates (only in my case of cource) for example\n> 2 M rows, I do not expect to have 2M new rows in next 180 days.That is the\n> reaso for running vacuum full.\n> My idea was to free unneedet space and so to have faster system.\n> It is possible that I am wrong.\n\nIt's all about percentages. If you've got an average of 5% dead tuples \nwith regular vacuuming, then full vacuums won't gain you much, if \nanything. If you've got 20 dead tuples for each live one, then a full \nvacuum is pretty much a necessity. The generally accepted best \nperformance comes with 5 to 50% or so dead tuples. Keep in mind, having a \nfew dead tuples is actually a good thing, as your database won't grow then \nsrhink the file all the time, but keep it in a steady state size wise.\n\n\n\n", "msg_date": "Wed, 24 Mar 2004 11:13:55 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow vacuum performance" } ]
[ { "msg_contents": "We've got a table containing userdata, such as a bigint column 'icq'. To\neasily check whether a user has an icq number entered, we made the following\nindex:\n\tuserinfo_icq_ne0_id_key btree (id) WHERE (icq <> 0::bigint),\n\nHowever, it doesn't seem to be used:\n\n> EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0';\n Seq Scan on userinfo (cost=0.00..47355.90 rows=849244 width=4) (actual time=0.563..1222.963 rows=48797 loops=1)\n Filter: (icq <> 0::bigint)\n Total runtime: 1258.703 ms\n\n> SET enable_seqscan TO off;\n> EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0';\n Index Scan using userinfo_icq_ne0_id_key on userinfo (cost=0.00..65341.34 rows=48801 width=4) (actual time=0.124..256.478 rows=48797 loops=1)\n Filter: (icq <> 0::bigint)\n Total runtime: 290.804 ms\n\nIt would even rather use much larger indexes, for example the integer pics with\nindex:\n\tuserinfo_pics_gt0_id_key btree (id) WHERE (pics > 0),\n\n> EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0' AND pics > 0;\n Index Scan using userinfo_pics_gt0_id_key on userinfo (cost=0.00..60249.29 rows=323478 width=4) (actual time=0.039..1349.590 rows=23500 loops=1)\n Filter: ((icq <> 0::bigint) AND (pics > 0))\n Total runtime: 1368.227 ms\n\nWe're running PostgreSQL 7.4.1 on a Debian/Linux 2.4 system with 4GB RAM and a\nfast SCSI RAID array, with settings:\n\nshared_buffers = 65536 # min max_connections*2 or 16, 8KB each\nsort_mem = 16384 # min 64, size in KB\neffective_cache_size = 327680 # typically 8KB each\nrandom_page_cost = 1.5 # 4 # units are one sequential page fetch cost\n\n-- \nShiar - http://www.shiar.org\n> Mi devas forfughi antau fluganta nubskrapulo alterighos sur mia kapo\n", "msg_date": "Wed, 24 Mar 2004 13:11:25 +0100", "msg_from": "Shiar <[email protected]>", "msg_from_op": true, "msg_subject": "bigint index not used" }, { "msg_contents": "Shiar <[email protected]> writes:\n>> EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0';\n> Seq Scan on userinfo (cost=0.00..47355.90 rows=849244 width=4) (actual time=0.563..1222.963 rows=48797 loops=1)\n> Filter: (icq <> 0::bigint)\n> Total runtime: 1258.703 ms\n\nThe rows estimate is way off, which might or might not have much to do\nwith the issue, but it's surely suspicious.\n\n> We're running PostgreSQL 7.4.1 on a Debian/Linux 2.4 system with 4GB RAM and a\n> fast SCSI RAID array, with settings:\n\nUpdate to 7.4.2 and follow the procedure in the release notes about\nfixing pg_statistic; that may make things better. int8 columns are\nvulnerable to the statistic misalignment bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Mar 2004 18:15:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bigint index not used " } ]
[ { "msg_contents": "I have a query which get's data from a single table.\nWhen I try to get data from for an RFQ which has around 5000 rows, it \nis breaking off at 18th row.\nIf i reduce some columns , then it returns all the rows and not so slow.\n I have tried with different sets of column and there is no pattern \nbased on columns.\n\n But one thing is sure one size of the rows grows more than some bytes, \nthe records do not get returned. Now the following query returns me all \n5001 rows to me pretty fast\n\n\n select\n _level_ as l,\n nextval('seq_pk_bom_detail') as bom_detail,\n prior nextval('seq_pk_bom_detail') as parent_subassembly,\n parent_part_number,\n customer_part_number,\n /* mfr_name,\n mfr_part,\n description,*/\n commodity,\n needs_date,\n target_price,\n comments,\n case qty_per\n when null then 0.00001\n when 0 then 0.00001\n else qty_per\n end,\n qty_multiplier1,\n qty_multiplier2,\n qty_multiplier3,\n qty_multiplier4,\n qty_multiplier5\n from bom_detail_work_clean\n where (0=0)\n and bom_header=20252\n and file_number = 1\n start with customer_part_number = 'Top Assembly 1'\n connect by parent_part_number = prior customer_part_number;\n\n\nBut if I uncomment the description then it returns me only 18 rows.\n\n select\n _level_ as l,\n nextval('seq_pk_bom_detail') as bom_detail,\n prior nextval('seq_pk_bom_detail') as parent_subassembly,\n parent_part_number,\n customer_part_number,\n /* mfr_name,\n mfr_part,*/\n description,\n commodity,\n needs_date,\n target_price,\n comments,\n case qty_per\n when null then 0.00001\n when 0 then 0.00001\n else qty_per\n end,\n qty_multiplier1,\n qty_multiplier2,\n qty_multiplier3,\n qty_multiplier4,\n qty_multiplier5\n from bom_detail_work_clean\n where (0=0)\n and bom_header=20252\n and file_number = 1\n start with customer_part_number = 'Top Assembly 1'\n connect by parent_part_number = prior customer_part_number;\n\nNow these 18 rows are level 2 records in heirarchical query. I have a \nfeeling the server has some memory paging mechanism\nand if it can not handle beyond certain byets, it just returns what it \nhas.\n During your investigation of optimization of postgreSQL did you come \nacross any setting that might help us ?\n\nThanks!\n\nQing\n\nPS: I just reload the file while reducing the content in the \ndescription column.\nThe file got uploaded. So looks like the problem is size of the record \nbeing inserted.", "msg_date": "Thu, 25 Mar 2004 09:58:33 -0800", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "column size too large, is this a bug?" }, { "msg_contents": "Qing Zhao <[email protected]> writes:\n> I have a query which get's data from a single table.\n> When I try to get data from for an RFQ which has around 5000 rows, it \n> is breaking off at 18th row.\n> If i reduce some columns , then it returns all the rows and not so slow.\n\nWhat client-side software are you using? This is surely a limitation on\nthe client side, because there is no such problem in the server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Mar 2004 13:20:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "Tom,\n\nThanks for your help!\nIt's not through one client. I am using JDBC. 
But the same things \nhappen when I use client like psql.\n\nQing\nOn Mar 25, 2004, at 10:20 AM, Tom Lane wrote:\n\n> Qing Zhao <[email protected]> writes:\n>> I have a query which get's data from a single table.\n>> When I try to get data from for an RFQ which has around 5000 rows, it\n>> is breaking off at 18th row.\n>> If i reduce some columns , then it returns all the rows and not so \n>> slow.\n>\n> What client-side software are you using? This is surely a limitation \n> on\n> the client side, because there is no such problem in the server.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Thu, 25 Mar 2004 14:20:56 -0800", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "Qing Zhao <[email protected]> writes:\n> It's not through one client. I am using JDBC. But the same things \n> happen when I use client like psql.\n\nThat's really hard to believe. Can you provide a reproducible test\ncase?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Mar 2004 17:28:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "On Thu, 25 Mar 2004, Qing Zhao wrote:\n\n> select\n> _level_ as l,\n> nextval('seq_pk_bom_detail') as bom_detail,\n> prior nextval('seq_pk_bom_detail') as parent_subassembly,\n> parent_part_number,\n> customer_part_number,\n> /* mfr_name,\n> mfr_part,\n> description,*/\n> commodity,\n> needs_date,\n> target_price,\n> comments,\n> case qty_per\n> when null then 0.00001\n> when 0 then 0.00001\n> else qty_per\n> end,\n> qty_multiplier1,\n> qty_multiplier2,\n> qty_multiplier3,\n> qty_multiplier4,\n> qty_multiplier5\n> from bom_detail_work_clean\n> where (0=0)\n> and bom_header=20252\n> and file_number = 1\n> start with customer_part_number = 'Top Assembly 1'\n> connect by parent_part_number = prior customer_part_number;\n\nWhat version are you running, and did you apply any patches (for example\none to support the start with/connect by syntax used above?)\n\n", "msg_date": "Thu, 25 Mar 2004 14:57:28 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "It is 7.3.4 on MAC OS X (darwin). 
The patch we applied is \nhier-Pg7.3-0.5, which allows\nto perform hierarchical queries on PgSQL using Oracle's syntax.\n\nThanks!\n\nQing\n\nOn Mar 25, 2004, at 2:57 PM, Stephan Szabo wrote:\n\n> On Thu, 25 Mar 2004, Qing Zhao wrote:\n>\n>> select\n>> _level_ as l,\n>> nextval('seq_pk_bom_detail') as bom_detail,\n>> prior nextval('seq_pk_bom_detail') as parent_subassembly,\n>> parent_part_number,\n>> customer_part_number,\n>> /* mfr_name,\n>> mfr_part,\n>> description,*/\n>> commodity,\n>> needs_date,\n>> target_price,\n>> comments,\n>> case qty_per\n>> when null then 0.00001\n>> when 0 then 0.00001\n>> else qty_per\n>> end,\n>> qty_multiplier1,\n>> qty_multiplier2,\n>> qty_multiplier3,\n>> qty_multiplier4,\n>> qty_multiplier5\n>> from bom_detail_work_clean\n>> where (0=0)\n>> and bom_header=20252\n>> and file_number = 1\n>> start with customer_part_number = 'Top Assembly 1'\n>> connect by parent_part_number = prior customer_part_number;\n>\n> What version are you running, and did you apply any patches (for \n> example\n> one to support the start with/connect by syntax used above?)\n>\n>", "msg_date": "Thu, 25 Mar 2004 15:11:03 -0800", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Thu, 25 Mar 2004, Qing Zhao wrote:\n>> start with customer_part_number = 'Top Assembly 1'\n>> connect by parent_part_number = prior customer_part_number;\n\n> What version are you running, and did you apply any patches (for example\n> one to support the start with/connect by syntax used above?)\n\nOh, good eye ... it's that infamous CONNECT BY patch again, without doubt.\n\nI think we should add \"Have you applied any patches to your copy of\nPostgres?\" to the standard bug report form ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Mar 2004 19:39:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "Tom,\n\n> Oh, good eye ... it's that infamous CONNECT BY patch again, without doubt.\n\nHey, who does this patch? What's wrong wiith it?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 25 Mar 2004 17:21:58 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Oh, good eye ... it's that infamous CONNECT BY patch again, without doubt.\n\n> Hey, who does this patch? What's wrong wiith it?\n\nI'm just venting my annoyance at people expecting us to support\nhacked-up versions, especially without telling us they're hacked-up.\nThis is the third or fourth trouble report I can recall that was\neventually traced to that patch (after considerable effort).\n\nAnyway, my guess for the immediate problem is incorrect installation of\nthe patch, viz not doing a complete \"make clean\" and rebuild after\npatching. The patch changes the Query struct which is referenced in\nmany more files than are actually modified by the patch, and so if you\ndidn't build with --enable-depend then a simple \"make\" will leave you\nwith a patchwork of files that have different ideas about the field\noffsets in Query. 
I'm a bit surprised it doesn't just dump core...\n\n(That's not directly the fault of the patch, though, except to the\nextent that it can be blamed for coming without adequate installation\ninstructions. What is directly the fault of the patch is that it\ndoesn't force an initdb by changing catversion. The prior trouble\nreports had to do with views not working because their stored rules were\nincompatible with the patched backend. We should not have had to deal\nwith that, and neither should those users.)\n\nTheory B, of course, is that this is an actual bug in the patch and not\njust incorrect installation. I'm not interested enough to investigate\nthough. \n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Mar 2004 21:04:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "> Theory B, of course, is that this is an actual bug in the patch and not\n> just incorrect installation. I'm not interested enough to investigate\n> though. \n\nIs there still someone around who's working on getting a similar patch \ninto 7.5? Seems there huge user demand for such a thing...\n\n(And no, I'm not volunteering, it's well beyond my abilities...)\n\nChris\n\n", "msg_date": "Fri, 26 Mar 2004 11:00:46 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n> Is there still someone around who's working on getting a similar patch \n> into 7.5? Seems there huge user demand for such a thing...\n\nAndrew Overholt did some preliminary work toward implementing the\nSQL99-spec WITH functionality (which subsumes what CONNECT BY does,\nand a few other things too). But he's left Red Hat and gone back\nto school. One of the many things on my todo list is to pick up that\npatch and get it finished.\n\nIIRC Andrew had finished the parser work and we had a paper design for\nthe executor support.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Mar 2004 22:11:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "> Andrew Overholt did some preliminary work toward implementing the\n> SQL99-spec WITH functionality (which subsumes what CONNECT BY does,\n> and a few other things too). But he's left Red Hat and gone back\n> to school. One of the many things on my todo list is to pick up that\n> patch and get it finished.\n\nOut of interest, what is your 7.5 todo list?\n\nChris\n\n", "msg_date": "Fri, 26 Mar 2004 11:38:47 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Thanks a lot! We were migrating to Postgres from Oracle and\nevery now and then, we ran into something that we do not\nunderstand completely and it is a learning process for us.\n\nYour responses have made it much clear for us. BTW, do you\nthink that it's better for us just to rewrite everything so we don't\nneed to use the patch at all? Why do others still use it?\n\nThanks!\n\nQing\nOn Mar 25, 2004, at 6:04 PM, Tom Lane wrote:\n\n> Josh Berkus <[email protected]> writes:\n>>> Oh, good eye ... it's that infamous CONNECT BY patch again, without \n>>> doubt.\n>\n>> Hey, who does this patch? 
What's wrong wiith it?\n>\n> I'm just venting my annoyance at people expecting us to support\n> hacked-up versions, especially without telling us they're hacked-up.\n> This is the third or fourth trouble report I can recall that was\n> eventually traced to that patch (after considerable effort).\n>\n> Anyway, my guess for the immediate problem is incorrect installation of\n> the patch, viz not doing a complete \"make clean\" and rebuild after\n> patching. The patch changes the Query struct which is referenced in\n> many more files than are actually modified by the patch, and so if you\n> didn't build with --enable-depend then a simple \"make\" will leave you\n> with a patchwork of files that have different ideas about the field\n> offsets in Query. I'm a bit surprised it doesn't just dump core...\n>\n> (That's not directly the fault of the patch, though, except to the\n> extent that it can be blamed for coming without adequate installation\n> instructions. What is directly the fault of the patch is that it\n> doesn't force an initdb by changing catversion. The prior trouble\n> reports had to do with views not working because their stored rules \n> were\n> incompatible with the patched backend. We should not have had to deal\n> with that, and neither should those users.)\n>\n> Theory B, of course, is that this is an actual bug in the patch and not\n> just incorrect installation. I'm not interested enough to investigate\n> though.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Fri, 26 Mar 2004 09:29:20 -0800", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "Quig,\n\n> Your responses have made it much clear for us. BTW, do you\n> think that it's better for us just to rewrite everything so we don't\n> need to use the patch at all? Why do others still use it?\n\nOthers use it because of the same reason you do. \n\nIf you want to use the patch for seemless porting, I suggest that you contact \nEvgen directly. He's not very active on the main project mailing lists, so \nyou'll need to e-mail him personally. You may also need to sponsor him for \nbug fixes, since he is apparently an independent developer. I don't really \nknow him.\n\nAs an alternative, you may want to take a look at the IS_CONNECTED_BY patch \nin /contrib/tablefunc in the PostgreSQL source. As this was developed by \nJoe Conway, who is a very active major contributor in the community, it is \nmore likely to be bug-free. However, it will force you to change your \nquery syntax somewhat.\n\nOf course, there are other query tree structures you could use if you're \nwilling to modify your database design. But you may not want to go that \nfar.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 26 Mar 2004 10:01:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "\nI used to use the connect-by patch, but have since rewritten everything \nto use a nested set model. I was\nhaving problems that, while not immediately traceable back to the \npatch, showed up when I started\nusing it and went away when I stopped (strange locking behavior, \ncrashing with vacuum full, problems after\ndropping columns) . Plus the annoyance of maintaining a non-stock build \nacross numerous installations\nexceeded its benefits. 
Relying on it for a business critical situation \nbecame too much of a risk.\n\n\n\nOn Mar 26, 2004, at 12:29 PM, Qing Zhao wrote:\n\n> Thanks a lot! We were migrating to Postgres from Oracle and\n> every now and then, we ran into something that we do not\n> understand completely and it is a learning process for us.\n>\n> Your responses have made it much clear for us. BTW, do you\n> think that it's better for us just to rewrite everything so we don't\n> need to use the patch at all? Why do others still use it?\n>\n> Thanks!\n>\n> Qing\n> On Mar 25, 2004, at 6:04 PM, Tom Lane wrote:\n>\n>> Josh Berkus <[email protected]> writes:\n>>>> Oh, good eye ... it's that infamous CONNECT BY patch again, without \n>>>> doubt.\n>>\n>>> Hey, who does this patch? What's wrong wiith it?\n>>\n>> I'm just venting my annoyance at people expecting us to support\n>> hacked-up versions, especially without telling us they're hacked-up.\n>> This is the third or fourth trouble report I can recall that was\n>> eventually traced to that patch (after considerable effort).\n>>\n>> Anyway, my guess for the immediate problem is incorrect installation \n>> of\n>> the patch, viz not doing a complete \"make clean\" and rebuild after\n>> patching. The patch changes the Query struct which is referenced in\n>> many more files than are actually modified by the patch, and so if you\n>> didn't build with --enable-depend then a simple \"make\" will leave you\n>> with a patchwork of files that have different ideas about the field\n>> offsets in Query. I'm a bit surprised it doesn't just dump core...\n>>\n>> (That's not directly the fault of the patch, though, except to the\n>> extent that it can be blamed for coming without adequate installation\n>> instructions. What is directly the fault of the patch is that it\n>> doesn't force an initdb by changing catversion. The prior trouble\n>> reports had to do with views not working because their stored rules \n>> were\n>> incompatible with the patched backend. We should not have had to deal\n>> with that, and neither should those users.)\n>>\n>> Theory B, of course, is that this is an actual bug in the patch and \n>> not\n>> just incorrect installation. I'm not interested enough to investigate\n>> though.\n>>\n>> \t\t\tregards, tom lane\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n", "msg_date": "Fri, 26 Mar 2004 13:15:21 -0500", "msg_from": "Andrew Rawnsley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug? " }, { "msg_contents": "Andrew,\n\n> I used to use the connect-by patch, but have since rewritten everything\n> to use a nested set model. \n\nCool! You're probably the only person I know other than me using nested sets \nin a production environment.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 28 Mar 2004 11:25:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "\nWell, I don't know if I would use it in an insert-heavy environment (at \nleast the way I implemented it), but for select-heavy\nstuff I don't know why you would want to use anything else. 
Hard to \nbeat the performance of a simple BETWEEN.\n\nOn Mar 28, 2004, at 2:25 PM, Josh Berkus wrote:\n\n> Andrew,\n>\n>> I used to use the connect-by patch, but have since rewritten \n>> everything\n>> to use a nested set model.\n>\n> Cool! You're probably the only person I know other than me using \n> nested sets\n> in a production environment.\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n", "msg_date": "Sun, 28 Mar 2004 14:49:54 -0500", "msg_from": "Andrew Rawnsley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "On Sunday 28 March 2004 14:25, Josh Berkus wrote:\n> Andrew,\n>\n> > I used to use the connect-by patch, but have since rewritten everything\n> > to use a nested set model.\n>\n> Cool! You're probably the only person I know other than me using nested\n> sets in a production environment.\n\nYou cut me deep there Josh, real deep. :-)\n\nIf you search the pgsql-sql archives you'll find some helpful threads on using \nnested sets in PostgreSQL, one in particular I was involved with was a \ngeneric \"move_tree\" function that enabled moving a node from one branch to \nanother. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Tue, 30 Mar 2004 09:20:27 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Robert,\n\n> If you search the pgsql-sql archives you'll find some helpful threads on\n> using nested sets in PostgreSQL, one in particular I was involved with was\n> a generic \"move_tree\" function that enabled moving a node from one branch\n> to another.\n\nI have to admit to failing to follow -SQL over the last few months. This \nlist and Hackers are pretty much the only ones I read all of.\n\nMaybe I should get back on -SQL and we can compare move_tree functions :-) \n\nDid yours use a temp table, or some other means?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Mar 2004 08:38:50 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Sets WAS: column size too large, is this a bug?" }, { "msg_contents": "On Tuesday 30 March 2004 11:38, Josh Berkus wrote:\n> Robert,\n>\n> > If you search the pgsql-sql archives you'll find some helpful threads on\n> > using nested sets in PostgreSQL, one in particular I was involved with\n> > was a generic \"move_tree\" function that enabled moving a node from one\n> > branch to another.\n>\n> I have to admit to failing to follow -SQL over the last few months. This\n> list and Hackers are pretty much the only ones I read all of.\n>\n> Maybe I should get back on -SQL and we can compare move_tree functions :-)\n>\n> Did yours use a temp table, or some other means?\n\nNope, Greg Mullane and I worked out the math and came up with an algorithm of \nsorts that we could apply to the tree when moving elements. \n\n<digs a little>\nhttp://archives.postgresql.org/pgsql-sql/2002-11/msg00355.php\n\nSeemed to work though someone else had posted yet another version after \nours... 
and in fact the one posted is not exactly what I use now either :-)\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Tue, 30 Mar 2004 15:06:08 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Sets WAS: column size too large, is this a bug?" }, { "msg_contents": "Robert,\n\n> http://archives.postgresql.org/pgsql-sql/2002-11/msg00355.php\n> \n> Seemed to work though someone else had posted yet another version after \n> ours... and in fact the one posted is not exactly what I use now either :-)\n\nHmmm ... I'd want to do a *lot* of testing before I trusted that approach. \nSeems like it could be very vunerable to order-of-exection issues.\n\nI'll start a GUIDE on it, people can post their various Nested Sets solutions.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 30 Mar 2004 12:13:12 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Sets WAS: column size too large, is this a bug?" } ]
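The contrib/tablefunc alternative mentioned above (Joe Conway's connectby()) is called roughly like this against the table from the earlier query; the argument order and the required column list should be checked against the module's README, and the key columns are assumed to be text here:

  SELECT *
    FROM connectby('bom_detail_work_clean',   -- relation
                   'customer_part_number',    -- key field
                   'parent_part_number',      -- parent key field
                   'Top Assembly 1',          -- start-with value
                   0)                         -- max depth, 0 = unlimited
         AS t(customer_part_number text, parent_part_number text, level int);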
[ { "msg_contents": "I've run into this odd planner choice which I don't quite understand.\n\nI have two tables articles, users and\narticles.article_id and users.user_id are primary keys.\n\nInsides articles there are two optional fields author_id1, author_id2\nwhich all reference users.user_id.\n\nAnd now the plans:\n(by the way this is pg 7.4 and I set enable_seqscan to off).\n\njargol=# explain select user_id, first_names, last_name from articles, users\nwhere article_id = 5027 and (articles.author_id1 = users.user_id);\n QUERY PLAN\n----------------------------------------------------------------------------\n------\n Nested Loop (cost=0.00..4.04 rows=1 width=26)\n -> Index Scan using articles_pk on articles (cost=0.00..2.01 rows=1\nwidth=4)\n Index Cond: (article_id = 5027)\n -> Index Scan using users_pk on users (cost=0.00..2.01 rows=1 width=26)\n Index Cond: (\"outer\".author_id1 = users.user_id)\n(5 rows)\n\njargol=# explain select user_id, first_names, last_name from articles, users\nwhere article_id = 5027 and (articles.author_id1 = users.user_id or\narticles.author_id2 = users.user_id);\n QUERY PLAN\n----------------------------------------------------------------------------\n-----------------------\n Nested Loop (cost=100000000.00..100000003.11 rows=2 width=26)\n Join Filter: ((\"outer\".author_id1 = \"inner\".user_id) OR\n(\"outer\".author_id2 = \"inner\".user_id))\n -> Index Scan using articles_pk on articles (cost=0.00..2.01 rows=1\nwidth=8)\n Index Cond: (article_id = 5027)\n -> Seq Scan on users (cost=100000000.00..100000001.04 rows=4 width=26)\n(5 rows)\n\nWhy does it think it MUST do a seq-scan in the second case? users.user_id is\na primary key,\nso shouldn't it behave exactly as in the first case?\n\nAny enlightenment on this problem will be much appreciated.\n\nthanks,\nAra Anjargolian\n\n", "msg_date": "Thu, 25 Mar 2004 21:52:31 -0800", "msg_from": "\"Ara Anjargolian\" <[email protected]>", "msg_from_op": true, "msg_subject": "odd planner choice" }, { "msg_contents": "On Thu, 25 Mar 2004, Ara Anjargolian wrote:\n\n> I've run into this odd planner choice which I don't quite understand.\n> \n> I have two tables articles, users and\n> articles.article_id and users.user_id are primary keys.\n> \n> Insides articles there are two optional fields author_id1, author_id2\n> which all reference users.user_id.\n> \n> And now the plans:\n> (by the way this is pg 7.4 and I set enable_seqscan to off).\n> \n> jargol=# explain select user_id, first_names, last_name from articles, users\n> where article_id = 5027 and (articles.author_id1 = users.user_id);\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ------\n> Nested Loop (cost=0.00..4.04 rows=1 width=26)\n> -> Index Scan using articles_pk on articles (cost=0.00..2.01 rows=1\n> width=4)\n> Index Cond: (article_id = 5027)\n> -> Index Scan using users_pk on users (cost=0.00..2.01 rows=1 width=26)\n> Index Cond: (\"outer\".author_id1 = users.user_id)\n> (5 rows)\n> \n> jargol=# explain select user_id, first_names, last_name from articles, users\n> where article_id = 5027 and (articles.author_id1 = users.user_id or\n> articles.author_id2 = users.user_id);\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -----------------------\n> Nested Loop (cost=100000000.00..100000003.11 rows=2 width=26)\n> Join Filter: ((\"outer\".author_id1 = \"inner\".user_id) OR\n> (\"outer\".author_id2 = \"inner\".user_id))\n> -> Index Scan using articles_pk on 
articles (cost=0.00..2.01 rows=1\n> width=8)\n> Index Cond: (article_id = 5027)\n> -> Seq Scan on users (cost=100000000.00..100000001.04 rows=4 width=26)\n> (5 rows)\n> \n> Why does it think it MUST do a seq-scan in the second case? users.user_id is\n> a primary key,\n> so shouldn't it behave exactly as in the first case?\n> \n> Any enlightenment on this problem will be much appreciated.\n\nAre articles.author_id1 and users.user_id the same type? Have you tried \ncasting one to the other's type if they're different?\n\n", "msg_date": "Fri, 26 Mar 2004 15:20:00 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd planner choice" }, { "msg_contents": "\"Ara Anjargolian\" <[email protected]> writes:\n> jargol=# explain select user_id, first_names, last_name from articles, users\n> where article_id = 5027 and (articles.author_id1 = users.user_id or\n> articles.author_id2 = users.user_id);\n\n> Why does it think it MUST do a seq-scan in the second case?\n\nThere's no support for generating an OR indexscan in the context of a\njoin.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Mar 2004 18:08:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd planner choice " } ]
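Since the planner cannot use an index scan for an OR'ed join condition, the usual workaround is to split the OR into a UNION so each branch can use users_pk; a sketch with the tables from this thread (UNION rather than UNION ALL, so a user matching both author columns is returned once):

  SELECT u.user_id, u.first_names, u.last_name
    FROM articles a JOIN users u ON a.author_id1 = u.user_id
   WHERE a.article_id = 5027
  UNION
  SELECT u.user_id, u.first_names, u.last_name
    FROM articles a JOIN users u ON a.author_id2 = u.user_id
   WHERE a.article_id = 5027;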
[ { "msg_contents": "On Fri, 26 Mar 2004, Fabio Esposito wrote:\n\n> \n> On Fri, 26 Mar 2004, scott.marlowe wrote:\n> \n> > > It maintains 48hours of data, so its not a large database; roughly\n> > > <600mbs. We do this by running a housekeeping program in a cron job.\n> > > It deletes all data older then 48hours, then vaccuum analyzes. It will\n> > > also preform a reindex if the option is set before it vaccuum's.\n> > >\n> > This almost sounds like a problem (fixed in 7.4 I believe) where some\n> > system catalog indexes would get huge over time, and couldn't be vacuumed\n> > or reindexed while the database was up in multi-user mode.\n> >\n> > I'll defer to Tom or Bruce or somebody to say if my guess is even close...\n> >\n> We haven't tried 7.4, I will experiment with it next week, I hope it\n> will be that simple.\n\nIn the meantime, a simple dump - reload into a test box running your \ncurrent version may provide some insight. If it fixes the problem, then \nyou likely do have some kind of issue with index / table growth that isn't \nbeing addressed by vacuuming.\n\n", "msg_date": "Fri, 26 Mar 2004 17:05:55 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000" } ]
[ { "msg_contents": "Fabio,\n\n> I'll have to get back to you on that, but I'm 90% sure its the default out\n> of the box.\n\nRaise it, a lot. Perhaps to 30,000 or 50,000. VACUUM VERBOSE ANALYZE \nshould show you how many data pages are being reclaimed between vacuums.\n\nBecause of your very high rate of updates and deletes, you need to hold a lot \nof data pages open.\n\nYou would also benefit a great deal by upgrading to 7.4. 7.3 will require \nyou to to REINDEXes several times a day with your current setup; 7.4 will \nnot.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 26 Mar 2004 16:26:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000" } ]
[ { "msg_contents": "Fabio Esposito <[email protected]> writes:\n>> Did you start from a fresh initdb, or just drop and recreate user\n>> tables? I'm wondering about index bloat on the system tables ...\n\n> I don't think I re initdb it, just dropped. We did try a reindex command\n> in the interactive editor, with no success.\n\nReindex of what? I'd suggest looking to see the actual sizes of all the\nindexes on system tables. If my guess is right, some of them may be way\nout of line (like larger than their associated tables).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Mar 2004 19:36:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU on HP9000 " } ]
[ { "msg_contents": "\n>Andrew,\n\n> > I used to use the connect-by patch, but have since rewritten everything\n> > to use a nested set model.\n\n>Cool! You're probably the only person I know other than me using nested \n>sets\n>in a production environment.\n\n\ncan you explain me what is a nested set?\n\n_________________________________________________________________\nAdd photos to your messages with MSN 8. Get 2 months FREE*. \nhttp://join.msn.com/?page=features/featuredemail\n\n", "msg_date": "Mon, 29 Mar 2004 17:05:00 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "\nIts a way of representing a tree with right-left pointers in each \nrecord (basically re-inventing a hierarchical database\nin a relational model...). A good description is in Joe Celko's SQL \nFor Smarties book. Selection is very fast because\nany node's children have node ID's between the right and left nodes of \nsaid node, so there's no mucking about\nwith connect by and what not. There's a synopsis linked at the PG \nCookbook pages (http://www.brasileiro.net/postgres/cookbook),\nbut the cookbook seems to off-line (I think I'll offer to mirror it - \nthis happens frequently). There's another description at\nhttp://www.intelligententerprise.com/001020/celko.jhtml? \n_requestid=65750.\n\nInsertion takes a fair amount of work, as you generally have to \nre-arrange the node IDs when you add a record.\n\nOn Mar 29, 2004, at 12:05 PM, Jaime Casanova wrote:\n\n>\n>> Andrew,\n>\n>> > I used to use the connect-by patch, but have since rewritten \n>> everything\n>> > to use a nested set model.\n>\n>> Cool! You're probably the only person I know other than me using \n>> nested sets\n>> in a production environment.\n>\n>\n> can you explain me what is a nested set?\n>\n> _________________________________________________________________\n> Add photos to your messages with MSN 8. Get 2 months FREE*. \n> http://join.msn.com/?page=features/featuredemail\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n", "msg_date": "Mon, 29 Mar 2004 12:25:35 -0500", "msg_from": "Andrew Rawnsley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" }, { "msg_contents": "Andrew,\n\n> Its a way of representing a tree with right-left pointers in each \n> record (basically re-inventing a hierarchical database\n> in a relational model...). A good description is in Joe Celko's SQL \n> For Smarties book. Selection is very fast because\n> any node's children have node ID's between the right and left nodes of \n> said node, so there's no mucking about\n> with connect by and what not. There's a synopsis linked at the PG \n> Cookbook pages (http://www.brasileiro.net/postgres/cookbook),\n> but the cookbook seems to off-line (I think I'll offer to mirror it - \n> this happens frequently). There's another description at\n> http://www.intelligententerprise.com/001020/celko.jhtml? \n> _requestid=65750.\n\nI have a full implementation of this. I was going to do it as a magazine \narticle, so I've been holding it off line. 
However, publication seems to be \nindefinitely delayed, so I'll probably post it on TechDocs as soon as I have \ntime.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 29 Mar 2004 11:12:52 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column size too large, is this a bug?" } ]
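For reference, the model being described boils down to a left/right pair per node and a single BETWEEN for subtree selection; the table and column names here are invented for illustration, and the cost is that inserts and moves must renumber lft/rgt, as noted above:

  CREATE TABLE category (
      id   integer PRIMARY KEY,
      name text NOT NULL,
      lft  integer NOT NULL,    -- left boundary of this node's interval
      rgt  integer NOT NULL     -- right boundary
  );

  -- every descendant of node 1, with no recursion or connect-by needed
  SELECT child.name
    FROM category parent, category child
   WHERE parent.id = 1
     AND child.lft BETWEEN parent.lft AND parent.rgt
     AND child.id <> parent.id;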
[ { "msg_contents": "thanx a lot\n\n_________________________________________________________________\nSTOP MORE SPAM with the new MSN 8 and get 2 months FREE* \nhttp://join.msn.com/?page=features/junkmail\n\n", "msg_date": "Mon, 29 Mar 2004 17:29:10 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column size too large, is this a bug?" } ]
[ { "msg_contents": "ok. if i don't misunderstand you (english is not my mother tongue, so i can \nbe wrong). your point is that speed is not necesarily performance, that's \nright.\n\nso, the real question is what is the best filesystem for optimal speed in \npostgresql?\n\n_________________________________________________________________\nMSN 8 helps eliminate e-mail viruses. Get 2 months FREE*. \nhttp://join.msn.com/?page=features/virus\n\n", "msg_date": "Mon, 29 Mar 2004 21:56:10 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" }, { "msg_contents": "On Monday 29 March 2004 22:56, Jaime Casanova wrote:\n> ok. if i don't misunderstand you (english is not my mother tongue, so i can\n> be wrong). your point is that speed is not necesarily performance, that's\n> right.\n>\n> so, the real question is what is the best filesystem for optimal speed in\n> postgresql?\n\nThat's going to depend on a number of things:\n\n1. Size of database\n2. Usage patterns (many updates or mostly reads? single user or many?...)\n3. What hardware you've got\n4. What OS you're running.\n5. How you've configured your hardware, OS and PG.\n\nThere are some test results people have provided in the archives, but whether \nthey apply to your setup is open to argument.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 30 Mar 2004 09:22:42 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" }, { "msg_contents": "Jaime, Richard,\n\n> That's going to depend on a number of things:\n> There are some test results people have provided in the archives, but\n> whether they apply to your setup is open to argument.\n\nTrue. On Linux overall, XFS, JFS, and Reiser have all looked good at one time \nor another. Ext3 has never been a leader for performance, though, so that's \nan easy elimination.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Mar 2004 08:43:01 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" }, { "msg_contents": "On Tuesday 30 March 2004 17:43, Josh Berkus wrote:\n> Jaime, Richard,\n>\n> > That's going to depend on a number of things:\n> > There are some test results people have provided in the archives, but\n> > whether they apply to your setup is open to argument.\n>\n> True. On Linux overall, XFS, JFS, and Reiser have all looked good at one\n> time or another. Ext3 has never been a leader for performance, though, so\n> that's an easy elimination.\n\nTrue, but on the sorts of commodity boxes I use, it doesn't make sense for me \nto waste time setting up non-standard filesystems - it's cheaper to spend a \nlittle more for better performance. I think SuSE offer Reiser though, so \nmaybe we'll see a wider selection available by default.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 30 Mar 2004 18:28:59 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" }, { "msg_contents": "> \n> True, but on the sorts of commodity boxes I use, it doesn't make sense for me \n> to waste time setting up non-standard filesystems - it's cheaper to spend a \n> little more for better performance. I think SuSE offer Reiser though, so \n> maybe we'll see a wider selection available by default.\n\nSuSE defaults to Reiser but also allows XFS. 
I would suggest XFS.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Tue, 30 Mar 2004 12:59:13 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" }, { "msg_contents": "Josh,\n\n> SuSE defaults to Reiser but also allows XFS. I would suggest XFS.\n\nI've found Reiser to perform very well for databases with many small tables. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 30 Mar 2004 13:52:35 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Raw vs Filesystem" } ]
[ { "msg_contents": "Hi,\n\nCan someone provide some insight for me as to why this query takes so \nlong on 6655 rows of data. I'm beginning to think something is seriously \nwrong with my config wrt to memory settings. I note vmstats (at the \nbottom) indicates no swapping so I'm not running out system wide but I \ndon't know whether I am within postgres.\n\nsql> explain ANALYZE SELECT MIN(ref), h_message_id, COUNT(h_message_id) \nFROM mail_969 GROUP BY h_message_id HAVING COUNT(h_message_id) > 25;\n\nAggregate (cost=85031.01..87894.10 rows=28631 width=44) (actual \ntime=185449.57..185449.57 rows=0 loops=1)\n -> Group (cost=85031.01..85746.78 rows=286309 width=44) (actual \ntime=185374.92..185413.32 rows=6655 loops=1)\n -> Sort (cost=85031.01..85031.01 rows=286309 width=44) \n(actual time=185374.91..185379.23 rows=6655 loops=1)\n -> Seq Scan on mail_969 (cost=0.00..59081.09 \nrows=286309 width=44) (actual time=179.65..185228.19 rows=6655 loops=1)\nTotal runtime: 185451.08 msec\n\nTo put this into perspective, we see similar results on a table with \nover 300,000 rows :\n\nsql> explain ANALYZE SELECT MIN(ref), h_message_id, COUNT(h_message_id) \nFROM mail_650 GROUP BY h_message_id HAVING COUNT(h_message_id) > 25;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=88503.85..91596.83 rows=30930 width=48) (actual \ntime=142483.52..149102.57 rows=244 loops=1)\n -> Group (cost=88503.85..89277.10 rows=309298 width=48) (actual \ntime=142444.19..148477.99 rows=309245 loops=1)\n -> Sort (cost=88503.85..88503.85 rows=309298 width=48) \n(actual time=142444.17..142652.83 rows=309245 loops=1)\n -> Seq Scan on mail_650 (cost=0.00..60297.98 \nrows=309298 width=48) (actual time=445.75..105818.97 rows=309245 loops=1)\nTotal runtime: 149181.30 msec\n\nThese selects are part of a cleanup operation on a 70GB DB (normal \nconditions are around 25GB). They find dupes, preserve one primary key, \nand delete the rest. Currently I have an issue that the DB requires a \nfull vacuum from prior runs of the above however this is another \nproblem, detailed on the 'admin' list.\n\nPerhaps this is due to the dataset still being so big (70GB) on \neffectively one disk, but I just thought I'd check with you guys.\n\nHardware:\n\n1GB Ram, SMP 1GHz P3, SvrWks OSB4 chipset, Adaptec aic7899 with 2 \nSCSI-160 disks split between DB and pg_xlog. 
(I know disks should be \nbetter laid out for a busy db, but this hasn't been my decision :)\n\nConfig:\n\nmax_fsm_relations = 1000\nmax_fsm_pages = 20000\nvacuum_mem = 65536\neffective_cache_size = 95694\nrandom_page_cost = 2\nsort_mem=65536\nmax_connections = 128\nshared_buffers = 15732\nwal_buffers = 64 # need to determin\nwal_files = 64 # range 0-64\nwal_sync_method = fsync # the default varies across platforms:\nwal_debug = 0 # range 0-16\n\n\n# hopefully this should see less LogFlushes per LogInsert - use more WAL \nthough.\ncommit_delay = 10000 # range 0-100000\ncommit_siblings = 2 # range 1-1000\n\n\ncheckpoint_segments = 32 # in logfile segments (16MB each), min 1\ncheckpoint_timeout = 600 # in seconds, range 30-3600\nfsync = false\n#fsync = true\n\n\nvmstats whilst running (indicating no swaping) :\n\n procs memory swap io system \n cpu\n r b w swpd free buff cache si so bi bo in cs us \n sy id\n 1 2 1 45592 10868 11028 853700 0 0 7 3 1 0 5 \n 5 7\n 2 2 0 45592 10288 11236 849312 8 0 732 888 1980 3516 63 \n 12 26\n 8 2 0 45592 11208 11304 849224 0 0 4438 286 2696 3758 66 \n 15 19\n10 2 3 45592 10284 11332 848872 0 0 6344 664 2888 3614 71 \n 18 11\n 2 8 1 45592 10408 11388 845140 0 0 4622 402 2216 2306 70 \n 11 19\n 3 7 2 45592 10416 11440 845972 0 0 3538 68 2052 2079 66 \n 9 25\n10 5 1 45592 10916 11496 846676 0 0 4428 444 2968 4385 75 \n 17 8\n 2 4 0 45592 10380 11592 848348 0 0 5940 184 2609 3421 69 \n 15 16\n\nCheers,\n\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n", "msg_date": "Tue, 30 Mar 2004 13:54:36 +0100", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": true, "msg_subject": "Cleanup query takes along time" } ]
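The duplicate clean-up itself (keep the lowest ref per h_message_id, delete the rest) can be written as a single statement; this is only a sketch that ignores the COUNT > 25 threshold used above, and on 7.3 a NOT IN over a large subselect can be slow (7.4 handles it much better, or it can be rewritten as a join):

  DELETE FROM mail_969
   WHERE ref NOT IN (SELECT MIN(ref)
                       FROM mail_969
                      GROUP BY h_message_id);

  -- followed by a plain VACUUM ANALYZE to make the dead rows reusable
  VACUUM ANALYZE mail_969;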
[ { "msg_contents": "hi all,\n\n\ni have an amd athlon with 256 ram (i know, this is not a *real* server but \nmy tables are small)\n\ni'm using vb6 (win98) with pgsql-7.3.4 (rh8) trough the psqlodbc.\n\nwhen i do a select in took long to execute, here is an example\n\n\ntable icc_m_banco\n\nCREATE TABLE ICC_M_BANCO (\n CodBanco SMALLINT NOT NULL,\n Descripcion CHARACTER VARYING(60) NOT NULL,\n RefContable NUMERIC,\n Estado CHAR(1) NOT NULL,\n FecRegistro DATE NOT NULL,\n CONSTRAINT EstadoBanco CHECK ((Estado = 'A') or (Estado = 'I')),\n PRIMARY KEY(CodBanco)\n);\n\n\nselect * from icc_m_banco where codbanco = 1;\n\nit tooks 13s from it's send until it's executed.\n\n\n\nexplain analyze give me this result:\n\nexplain analyze\nselect * from icc_m_banco where codbanco = 1;\n\n\nSeq Scan on icc_m_banco (cost=0.00..1.06 rows=6 width=41) (actual \ntime=7.94..7.96 rows=4 loops=1)\nTotal runtime: 63.37 msec\n(2 rows)\n\n\nso i think its not a database problem (at least it's not all the problem),\nthough it seems to me it is taking a lot of time executing this.\n\n\nam i right? any suggestions?\n\n_________________________________________________________________\nHelp STOP SPAM with the new MSN 8 and get 2 months FREE* \nhttp://join.msn.com/?page=features/junkmail\n\n", "msg_date": "Tue, 30 Mar 2004 19:25:40 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "select slow?" }, { "msg_contents": "\nOn 30/03/2004 20:25 Jaime Casanova wrote:\n> hi all,\n> \n> \n> i have an amd athlon with 256 ram (i know, this is not a *real* server \n> but my tables are small)\n> \n> i'm using vb6 (win98) with pgsql-7.3.4 (rh8) trough the psqlodbc.\n> \n> when i do a select in took long to execute, here is an example\n> \n> \n> table icc_m_banco\n> \n> CREATE TABLE ICC_M_BANCO (\n> CodBanco SMALLINT NOT NULL,\n> Descripcion CHARACTER VARYING(60) NOT NULL,\n> RefContable NUMERIC,\n> Estado CHAR(1) NOT NULL,\n> FecRegistro DATE NOT NULL,\n> CONSTRAINT EstadoBanco CHECK ((Estado = 'A') or (Estado = 'I')),\n> PRIMARY KEY(CodBanco)\n> );\n> \n> \n> select * from icc_m_banco where codbanco = 1;\n\nselect * from icc_m_banco where codbanco = 1::int2;\n\n\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Tue, 30 Mar 2004 20:39:21 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select slow?" 
}, { "msg_contents": "On Tuesday 30 March 2004 20:25, Jaime Casanova wrote:\n> hi all,\n>\n>\n> i have an amd athlon with 256 ram (i know, this is not a *real* server but\n> my tables are small)\n\nNothing wrong with it - it's what I still use as my development server.\n\n> i'm using vb6 (win98) with pgsql-7.3.4 (rh8) trough the psqlodbc.\n>\n> when i do a select in took long to execute, here is an example\n\n> CREATE TABLE ICC_M_BANCO (\n> CodBanco SMALLINT NOT NULL,\n\n> select * from icc_m_banco where codbanco = 1;\n>\n> it tooks 13s from it's send until it's executed.\n\nTry:\n SELECT * FROM icc_m_banco WHERE codbanco = 1::smallint;\n\nBy default, PG will treat a numeric constant as integer not smallint, so when \nit looks for an index it can't find one for integer, so scans instead.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 31 Mar 2004 09:33:24 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select slow?" } ]
[ { "msg_contents": "Hi,\n\nShouldn't the optimizer use indices if the like condition does not have any \nwildcards?\n\nAn example:\n\ngirgen=# explain analyze select * from person where userid = 'girgen';\n QUERY PLAN \n\n---------------------------------------------------------------------------\n------------------------------------------\n Index Scan using person_pkey on person (cost=0.00..5.98 rows=1 width=84) \n(actual time=0.034..0.039 rows=1 loops=1)\n Index Cond: (userid = 'girgen'::text)\n Total runtime: 0.091 ms\n(3 rader)\n\ngirgen=# explain analyze select * from person where userid like 'girgen';\n QUERY PLAN \n\n---------------------------------------------------------------------------\n-----------------------\n Seq Scan on person (cost=0.00..77.08 rows=1 width=84) (actual \ntime=1.137..1.143 rows=1 loops=1)\n Filter: (userid ~~ 'girgen'::text)\n Total runtime: 1.193 ms\n(3 rader)\n\nThe result cannot be different between the two cases. The second query does \nnot use the index since database is initiaized with a locale, \nsv_SE.ISO8859-1, and I need it for correct sorting. (Still dreaming about \nindices with like and locale)... But, since there is no wildcard in the \nstring 'girgen', it should easily be able to use the index, if it only \nbothered to note that there is a wildcard around, right?\n\n\nAnother thing on the same subject:\n\nI use an app that builds searches using some standard method, and it wants \nto always search case-insensitive. Hence, it uses ILIKE instead of `=', \neven for joins, and even for integers. This is a bit lazy, indeed, and also \nwrong. While this is wrong, no doubt, the odd thing I realized was that \nthe optimizer didn't make use of the indices. Same thing here, the \noptimizer should ideally know that it is dealing with integers, where ILIKE \nand LIKE has no meaning, and it should use `=' instead implicitally, hence \nusing indices. 
This one might be kind of low priority, but the one above \nreally isn't, IMO.\n\n/Palle\n\n", "msg_date": "Wed, 31 Mar 2004 01:06:48 +0200", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE should use index when condition doesn't include wildcard" }, { "msg_contents": "Palle Girgensohn <[email protected]> writes:\n> Shouldn't the optimizer use indices if the like condition does not have any \n> wildcards?\n\nI can't get excited about this; if you are depending on LIKE to be fast\nthen you should have locale-insensitive indexes in place to support it.\nSwitching the tests around so that this special case is supported even\nwith an index that doesn't otherwise support LIKE would complicate the\ncode unduly IMHO, to support a rather pointless corner case...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Mar 2004 19:16:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE should use index when condition doesn't include wildcard " }, { "msg_contents": "\n\n--On tisdag, mars 30, 2004 19.16.44 -0500 Tom Lane <[email protected]> \nwrote:\n\n> Palle Girgensohn <[email protected]> writes:\n>> Shouldn't the optimizer use indices if the like condition does not have\n>> any wildcards?\n>\n> I can't get excited about this; if you are depending on LIKE to be fast\n> then you should have locale-insensitive indexes in place to support it.\n> Switching the tests around so that this special case is supported even\n> with an index that doesn't otherwise support LIKE would complicate the\n> code unduly IMHO, to support a rather pointless corner case...\n\nOK, I agree. Sad, though, that throw away ability to use order by is the \nonly way to get index scans using LIKE... :(\n\nBut what about ILIKE. It does not take advantage of indices built with \nlower():\n\ngirgen=# create index person_foo on person (lower(last_name));\ngirgen=# vacuum analyze person;\ngirgen=# explain select * from person where lower(last_name) = \n'girgensohn';\n QUERY PLAN \n\n---------------------------------------------------------------------------\n--\n Index Scan using person_foo on person (cost=0.00..137.58 rows=78 width=96)\n Index Cond: (lower(last_name) = 'girgensohn'::text)\n(2 rows)\n\ngirgen=# explain select * from person where last_name = 'Girgensohn';\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on person (cost=0.00..441.35 rows=4 width=96)\n Filter: (last_name = 'Girgensohn'::text)\n(2 rows)\n\ngirgen=# explain select * from person where lower(last_name) like \n'girgen%';\n QUERY PLAN \n\n---------------------------------------------------------------------------\n-------------------\n Index Scan using person_foo on person (cost=0.00..137.58 rows=78 width=96)\n Index Cond: ((lower(last_name) >= 'girgen'::text) AND (lower(last_name) \n< 'girgeo'::text))\n Filter: (lower(last_name) ~~ 'girgen%'::text)\n(3 rows)\n\ngirgen=# explain select * from person where last_name ilike 'girgen%';\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on person (cost=0.00..441.35 rows=5 width=96)\n Filter: (last_name ~~* 'girgen%'::text)\n(2 rows)\n\n\npostgresql 7.4.2, freebsd 4.9 stable.\n\n\n/Palle\n\n", "msg_date": "Wed, 31 Mar 2004 02:28:31 +0200", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE should use index when condition doesn't include" }, { "msg_contents": "Palle,\n\n> But what about ILIKE. 
It does not take advantage of indices built with \n> lower():\n\nNope. If you want to use a functional index, you'll need to use the function \nwhen you call the query. ILIKE is not somehow aware that it is equivalent \nto lower().\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 30 Mar 2004 16:56:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE should use index when condition doesn't include" }, { "msg_contents": "--On tisdag, mars 30, 2004 16.56.09 -0800 Josh Berkus <[email protected]> \nwrote:\n\n> Palle,\n>\n>> But what about ILIKE. It does not take advantage of indices built with\n>> lower():\n>\n> Nope. If you want to use a functional index, you'll need to use the\n> function when you call the query. ILIKE is not somehow aware that it\n> is equivalent to lower().\n\nToo bad... that was my idea, that it would somehow be aware that it is \nequivalent to lower() like. It really is, isn't it? I would have though \nthey where synonymous. If not, makes ILIKE kind of unusable, at least \nunless you're pretty certain the field will never indexed.\n\n/Palle\n\n", "msg_date": "Wed, 31 Mar 2004 03:04:40 +0200", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE should use index when condition doesn't include" }, { "msg_contents": "Palle,\n\n> Too bad... that was my idea, that it would somehow be aware that it is \n> equivalent to lower() like. It really is, isn't it? I would have though \n> they where synonymous. If not, makes ILIKE kind of unusable, at least \n> unless you're pretty certain the field will never indexed.\n\nYup. I use it mostly for lookups in reference lists with < 100 items, where \nan index doesn't matter.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 30 Mar 2004 17:06:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE should use index when condition doesn't include" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> ILIKE is not somehow aware that it is equivalent to lower().\n\nIs it? Given the wild and wonderful behaviors of locales here and\nthere, I wouldn't want to assume that such an equivalence holds.\n\nIn particular I note that iclike() seems to be multibyte-aware while\nlower() definitely is not. Even if that's just a bug, it's a big leap\nto assume that ILIKE is equivalent to LIKE on lower(). Think about\nTurkish i/I, German esstet (did I spell that right?), ch in various\nlanguages, etc etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Mar 2004 00:33:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE should use index when condition doesn't include " } ]
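The usual workarounds under a non-C locale are to query the same expression the index was built on instead of using ILIKE, or (on 7.4) to add a second index with the pattern operator class so the locale-aware index stays available for ORDER BY; names are taken from the thread, and varchar columns would use varchar_pattern_ops instead:

  -- expression index: the query must use lower(...) LIKE, not ILIKE
  CREATE INDEX person_lower_last_name ON person (lower(last_name));
  SELECT * FROM person WHERE lower(last_name) LIKE 'girgen%';

  -- 7.4 only: prefix LIKE works despite the sv_SE locale, while ORDER BY
  -- keeps using the ordinary locale-aware index
  CREATE INDEX person_last_name_like ON person (last_name text_pattern_ops);
  SELECT * FROM person WHERE last_name LIKE 'Girgen%';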
[ { "msg_contents": "Hi everyone,\n\nI am building a query which uses a clause like \"Where doc_description like\n'%keyword%'\". I know a normal index won't be of any use here, but since the\ntable in question will get fairly big, I do want to use an index.\n\nCan anyone give me some advise on what kind of index I can use here? Or\nshouldn't I use one in this case?\n\nKind regards,\nAlexander Priem.\n", "msg_date": "Wed, 31 Mar 2004 11:51:02 +0200", "msg_from": "\"Priem, Alexander\" <[email protected]>", "msg_from_op": true, "msg_subject": "What index for 'like (%keyword%)' ???" }, { "msg_contents": "On Wednesday 31 March 2004 10:51, Priem, Alexander wrote:\n> Hi everyone,\n>\n> I am building a query which uses a clause like \"Where doc_description like\n> '%keyword%'\". I know a normal index won't be of any use here, but since the\n> table in question will get fairly big, I do want to use an index.\n>\n> Can anyone give me some advise on what kind of index I can use here? Or\n> shouldn't I use one in this case?\n\nYou probably want to look at the contrib/tsearch2 full-text indexing module.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 31 Mar 2004 13:07:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What index for 'like (%keyword%)' ???" }, { "msg_contents": "> I am building a query which uses a clause like \"Where doc_description like\n> '%keyword%'\". I know a normal index won't be of any use here, but since the\n> table in question will get fairly big, I do want to use an index.\n> \n> Can anyone give me some advise on what kind of index I can use here? Or\n> shouldn't I use one in this case?\n\nYou have to use a proper full text indexing scheme. Investigate \ncontrib/tsearch2 module in the postgres distribution.\n\nChirs\n\n", "msg_date": "Thu, 01 Apr 2004 09:19:07 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What index for 'like (%keyword%)' ???" } ]
[ { "msg_contents": ">\n>On Tuesday 30 March 2004 20:25, Jaime Casanova wrote:\n>>hi all,\n> >\n> >\n> > i have an amd athlon with 256 ram (i know, this is not a *real* server \n>but\n> > my tables are small)\n\n>Nothing wrong with it - it's what I still use as my development server.\n>\n> > i'm using vb6 (win98) with pgsql-7.3.4 (rh8) trough the psqlodbc.\n> >\n> > when i do a select in took long to execute, here is an example\n>\n> > CREATE TABLE ICC_M_BANCO (\n> > CodBanco SMALLINT NOT NULL,\n\n> > select * from icc_m_banco where codbanco = 1;\n> >\n> > it tooks 13s from it's send until it's executed.\n>\n>Try:\n> SELECT * FROM icc_m_banco WHERE codbanco = 1::smallint;\n>\n>By default, PG will treat a numeric constant as integer not smallint, so \n>when\n>it looks for an index it can't find one for integer, so scans instead.\n>\n>--\n> Richard Huxton\n> Archonet Ltd\n\nThere are no indexes yet, and the table is just 6 rows long so even if \nindexes exists the planner will do a seq scan. that's my whole point 63m for \nseq scan in 6 rows table is too much.\n\n_________________________________________________________________\nThe new MSN 8: smart spam protection and 2 months FREE* \nhttp://join.msn.com/?page=features/junkmail\n\n", "msg_date": "Wed, 31 Mar 2004 14:27:50 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select slow?" }, { "msg_contents": "\"Jaime Casanova\" <[email protected]> writes:\n> There are no indexes yet, and the table is just 6 rows long so even if \n> indexes exists the planner will do a seq scan. that's my whole point 63m for \n> seq scan in 6 rows table is too much.\n\nThat was 63 milliseconds, according to your original post, which seems\nperfectly reasonable to me seeing that it's not a super-duper server.\n\nThe problem sounds to be either on the client side or somewhere in your\nnetwork. I don't know anything about VB, but you might want to look\nthrough the client-side operations to see what could be eating up the 13\nseconds.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Mar 2004 10:40:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select slow? " }, { "msg_contents": "\nOn 31/03/2004 16:40 Tom Lane wrote:\n> \"Jaime Casanova\" <[email protected]> writes:\n> > There are no indexes yet, and the table is just 6 rows long so even if\n> > indexes exists the planner will do a seq scan. that's my whole point\n> 63m for\n> > seq scan in 6 rows table is too much.\n> \n> That was 63 milliseconds, according to your original post, which seems\n> perfectly reasonable to me seeing that it's not a super-duper server.\n> \n> The problem sounds to be either on the client side or somewhere in your\n> network. 
I don't know anything about VB, but you might want to look\n> through the client-side operations to see what could be eating up the 13\n> seconds.\n\n\nGiven that the client and server are on different machines, I'm wondering \nthe bulk of the 13 seconds is due a network mis-configuration or a very \nslow DNS server...\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Wed, 31 Mar 2004 18:27:01 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select slow?" }, { "msg_contents": "As part of my ongoing evaluation of PostgreSQL I have been doing a little stress testing. \nI though I would share an interesting result here..\n\nMachine spec:\n500 MHz PIII\n256MB RAM\n\"old-ish\" IDE HD (5400RPM)\nLinux 2.4.22 kernel (Madrake 9.2)\n\nI have PostgreSQL 7.4.1 installed and have managed to load up a 1.4 GB database \nfrom MS SQLServer. Vaccum analyzed it.\n\nAs a test in PosgreSQL I issued a statement to update a single column of a table \ncontaining 2.8 million rows with the values of a column in a table with similar rowcount. \nUsing the above spec I had to stop the server after 17 hours. The poor thing was \nthrashing the hard disk and doing more swapping than useful work.\n\nHaving obtained a copy of Mandrake 10.0 with the 2.6 kernal I though I would give it a \ngo. Same hardware. Same setup. Same database loaded up. Same postgresql.conf file \nto make sure all the settings were the same. Vaccum analyzed it.\n\nsame update statement COMPLETED in 2 hours 50 minutes. I'm impressed.\n\nI could see from vmstat that the system was achieving much greater IO thoughput than \nthe 2.4 kernel. Although the system was still swapping there seems to be a completely \ndifferent memory management pattern that suits PostgreSQL very well.\n\nJust to see that this wasn't a coincidence I am repeating the test. It is now into the 14th \nhour using the old 2.4 kernel. I'm going to give up.....\n\nHas anyone else done any comparative testing with the 2.6 kernel?\n\nCheers,\nGary.\n\n", "msg_date": "Thu, 01 Apr 2004 20:19:34 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Gary Doades wrote:\n\n> \n> Has anyone else done any comparative testing with the 2.6 kernel?\n> \n\nI know for a fact that certain stuff is recognized differently between \n2.2, 2.4 and 2.6 kernels.\nFor example i have one box that i installed debian stable on that used a \n2.2 kernel which automatically tuned on DMA on the harddrive, didn't do \nit on a 2.4 kernel, but on 2.6 one it saw it as DMA able.\nSuch things can dramatically affect performance, so make sure to compare \nwhat capabilities the kernel thinks your hardware has between the \nkernels first...\n\nBut i'll grant that the 2.6 kernel is a great deal faster on some of our \ntest servers.\n\nRegards\nMagnus\n\n", "msg_date": "Thu, 01 Apr 2004 22:19:14 +0200", "msg_from": "\"Magnus Naeslund(t)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." 
}, { "msg_contents": "\"Gary Doades\" <[email protected]> writes:\n> As a test in PosgreSQL I issued a statement to update a single column\n> of a table containing 2.8 million rows with the values of a column in\n> a table with similar rowcount. Using the above spec I had to stop the\n> server after 17 hours. The poor thing was thrashing the hard disk and\n> doing more swapping than useful work.\n\nThis statement is pretty much content-free, since you did not show us\nthe table schemas, the query, or the EXPLAIN output for the query.\n(I'll forgive you the lack of EXPLAIN ANALYZE, but you could easily\nhave provided all the other hard facts.) There's really no way to tell\nwhere the bottleneck is. Maybe it's a kernel-level issue, but I would\nnot bet on that without more evidence. I'd definitely not bet on it\nwithout direct confirmation that the same query plan was used in both\nsetups.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Apr 2004 01:32:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel. " }, { "msg_contents": "The post was not intended to be content-rich, just my initial feedback \nafter only just switching to 2.6. Since I had largely given up on this \nparticular line of attack using 2.4 I didn't think to do a detailed analysis \nat this time. I was also hoping that others would add to the discussion. \n\nAs this could become important I will be doing more analysis, but due to \nthe nature of the issue and trying to keep as many factors constant as \npossible, this may take some time.\n\nCheers,\nGary.\n\nOn 2 Apr 2004 at 1:32, Tom Lane wrote:\n\n> \"Gary Doades\" <[email protected]> writes:\n> > As a test in PosgreSQL I issued a statement to update a single column\n> > of a table containing 2.8 million rows with the values of a column in\n> > a table with similar rowcount. Using the above spec I had to stop the\n> > server after 17 hours. The poor thing was thrashing the hard disk and\n> > doing more swapping than useful work.\n> \n> This statement is pretty much content-free, since you did not show us\n> the table schemas, the query, or the EXPLAIN output for the query.\n> (I'll forgive you the lack of EXPLAIN ANALYZE, but you could easily\n> have provided all the other hard facts.) There's really no way to tell\n> where the bottleneck is. Maybe it's a kernel-level issue, but I would\n> not bet on that without more evidence. I'd definitely not bet on it\n> without direct confirmation that the same query plan was used in both\n> setups.\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Incoming mail is certified Virus Free.\n> Checked by AVG Anti-Virus (http://www.grisoft.com).\n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n> \n\n\n", "msg_date": "Fri, 02 Apr 2004 08:07:38 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel. " } ]
[ { "msg_contents": "*statistics target = 100\n*INFO: index \"timeseries_tsid\" now contains *16,677,521* row versions \nin 145605 pages\nDETAIL: 76109 index pages have been deleted, 20000 are currently reusable.\nCPU 12.00s/2.83u sec elapsed 171.26 sec.\nINFO: \"timeseries\": found 0 removable, 16677521 nonremovable row \nversions in 1876702 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were *18,894,051* unused item pointers.\n0 pages are entirely empty.\nCPU 138.74s/28.96u sec elapsed 1079.43 sec.\nINFO: vacuuming \"pg_toast.pg_toast_1286079786\"\nINFO: index \"pg_toast_1286079786_index\" now contains 4846282 row \nversions in 29319 pages\nDETAIL: 10590 index pages have been deleted, 10590 are currently reusable.\nCPU 2.23s/0.55u sec elapsed 28.34 sec.\nINFO: \"pg_toast_1286079786\": found 0 removable, 4846282 nonremovable \nrow versions in 1379686 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 2824978 unused item pointers.\n0 pages are entirely empty.\nCPU 112.92s/19.53u sec elapsed 731.99 sec.\nINFO: analyzing \"public.timeseries\"\nINFO: \"timeseries\": 1876702 pages, *30,000* rows sampled, *41,762,188* \nestimated total rows\n\n \nsetting the default statistics target higher makes the estimate worse:\n*statistics target = 500*\nINFO: index \"timeseries_tsid\" now contains *16,953,429 *row versions in \n145605 pages\nINFO: \"timeseries\": 1891940 pages, *150,000* rows sampled, *64,803,483* \nestimated total rows\n\n*statistics target = 1000 *\nINFO: index \"timeseries_tsid\" now contains *17,216,139* row versions in \n145605 pages\nINFO: \"timeseries\": 1937484 pages, *300,000* rows sampled, *68,544,295* \nestimated total rows\n\n\nI'm trying to understand why the estimated row count is so off. I'm \nassuming this is b/c we do very large deletes and we're leaving around a \nlarge number of almost empty pages. 
Is this the reason?\n\nLet me know if you need more info.\n\nThanks\nMichael\n\n\n\n\n\n>\n>\n>> INFO: index \"timeseries_tsid\" now contains *16677521* row versions \n>> in 145605 pages\n>> DETAIL: 76109 index pages have been deleted, 20000 are currently \n>> reusable.\n>> CPU 12.00s/2.83u sec elapsed 171.26 sec.\n>> INFO: \"timeseries\": found 0 removable, 16677521 nonremovable row \n>> versions in 1876702 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 18894051 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 138.74s/28.96u sec elapsed 1079.43 sec.\n>> INFO: vacuuming \"pg_toast.pg_toast_1286079786\"\n>> INFO: index \"pg_toast_1286079786_index\" now contains 4846282 row \n>> versions in 29319 pages\n>> DETAIL: 10590 index pages have been deleted, 10590 are currently \n>> reusable.\n>> CPU 2.23s/0.55u sec elapsed 28.34 sec.\n>> INFO: \"pg_toast_1286079786\": found 0 removable, 4846282 nonremovable \n>> row versions in 1379686 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 2824978 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 112.92s/19.53u sec elapsed 731.99 sec.\n>> INFO: analyzing \"public.timeseries\"\n>> INFO: \"timeseries\": 1876702 pages, *30,000* rows sampled, \n>> *41,762,188* estimated total rows\n>>\n>> \n>>\n>\n> setting the default statistics target higher made the estimate worse: \n> (changed from 100 to 500)\n> *\n> statistics target = 500*\n> INFO: index \"timeseries_tsid\" now contains *16,953,429 *row versions \n> in 145605 pages\n> INFO: \"timeseries\": 1891940 pages, *150,000* rows sampled, \n> *64,803,483* estimated total rows\n>\n> *statistics target = 1000\n> *INFO: index \"timeseries_tsid\" now contains *17,216,139* row versions \n> in 145605 pages\n> INFO: \"timeseries\": 1937484 pages,* 300,000* rows sampled, \n> *68,544,295* estimated total rows\n>\n>\n>\n>\n>\n> This probably has something to do with the large deletes we do. I'm \n> looking around to get some more info on statistics collection.\n>\n> -mike\n>\n>\n>\n", "msg_date": "Wed, 31 Mar 2004 12:17:54 -0500", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Estimated rows way off " } ]
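A sketch of how to cross-check and then clear the suspected bloat, using the table and index names from the VACUUM output above. Note that VACUUM FULL and CLUSTER both take exclusive locks, and CLUSTER needs disk room for a rewritten copy of the table.

-- what the planner currently believes:
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('timeseries', 'timeseries_tsid');

-- ANALYZE scales the row density of the sampled pages up to all pages, so a
-- table carrying a lot of dead space after bulk deletes can report a total
-- well away from what the index really holds. Compacting it usually brings
-- the estimate back toward the true count:
VACUUM FULL ANALYZE timeseries;
-- or, rewriting the table in index order (7.4 syntax):
CLUSTER timeseries_tsid ON timeseries;
ANALYZE timeseries;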
[ { "msg_contents": "Do you know if postgres made assumption on the\naccess time time stamp for the files on his\nown file sistem ? If not I'm wondering if\nmount a partition with the option \"anotime\"\ncan improve the disk i/o performance.\n\n\nRegards\nGaetano Mendola\n", "msg_date": "Thu, 01 Apr 2004 01:26:53 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "linux and anotime mount option" } ]
[ { "msg_contents": "Hi,\n\nI remember reading a post ages ago, maybe from Vadim, about the fact that \npeople creating indices on more than two columns will be the first to be \nput againts the wall when the revolution comes... sort of...\n\nIs it always bad to create index xx on yy (field1, field2, field3);\n\nI guess the problem is that the index might often grow bigger than the \ntable, or at least big enough not to speed up the queries?\n\n/Palle\n\n", "msg_date": "Fri, 02 Apr 2004 01:00:45 +0200", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "single index on more than two coulumns a bad thing?" }, { "msg_contents": "Palle,\n\n> Is it always bad to create index xx on yy (field1, field2, field3);\n\nNo, it seldom bad, in fact. I have some indexes that run up to seven \ncolumns, becuase they are required for unique keys.\n\nIndexes of 3-4 columns are often *required* for many-to-many join tables.\n\nI'm afraid that you've been given some misleading advice.\n\n> I guess the problem is that the index might often grow bigger than the \n> table, or at least big enough not to speed up the queries?\n\nWell, yes ... a 4-column index on a 5-column table could be bigger than the \ntable if allowed to bloat and not re-indexed. But that's just a reason for \nbetter maintainence.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 1 Apr 2004 16:35:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Is it always bad to create index xx on yy (field1, field2, field3);\n\n> I'm afraid that you've been given some misleading advice.\n\nI'd say it's a matter of getting your optimizations straight.\n\nIf you have a query that can make use of that index, and the query is\nexecuted often enough to make it worth maintaining the index during\ntable updates, then by all means make the index.\n\nThe standard advice is meant to warn you against creating a zillion\nindexes without any thought to what you'll be paying in update costs.\nIndexes with more than a couple of columns are usually of only narrow\napplicability, and so you have to be sure that they'll really pay for\nthemselves...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Apr 2004 23:36:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing? " }, { "msg_contents": "\nPalle Girgensohn <[email protected]> writes:\n\n> Is it always bad to create index xx on yy (field1, field2, field3);\n\nAll generalisations are false...\n\nSeriously, it's true that as the length of your index key gets longer the\nharder and harder it is to justify it. That doesn't mean they're always wrong,\nbut you should consider whether a shorter key would perform just as well.\n\nThe other problem with long index keys is that they often show up in the same\nplace as having dozens of indexes on the same table. Usually in shops where\nthe indexes were created after the fact looking at specific queries.\n\n-- \ngreg\n\n", "msg_date": "01 Apr 2004 23:59:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing?" 
}, { "msg_contents": "another thing that I have all over the place is a hierarchy:\nindex on grandfather_table(grandfather)\nindex on father_table(grandfather, father)\nindex on son_table(grandfather, father, son)\n\nalmost all of my indices are composite. Are you thinking about composite\nindices with low cardinality leading columns?\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Palle Girgensohn\" <[email protected]>;\n<[email protected]>\nSent: Thursday, April 01, 2004 7:35 PM\nSubject: Re: [PERFORM] single index on more than two coulumns a bad thing?\n\n\n> Palle,\n>\n> > Is it always bad to create index xx on yy (field1, field2, field3);\n>\n> No, it seldom bad, in fact. I have some indexes that run up to seven\n> columns, becuase they are required for unique keys.\n>\n> Indexes of 3-4 columns are often *required* for many-to-many join tables.\n>\n> I'm afraid that you've been given some misleading advice.\n>\n> > I guess the problem is that the index might often grow bigger than the\n> > table, or at least big enough not to speed up the queries?\n>\n> Well, yes ... a 4-column index on a 5-column table could be bigger than\nthe\n> table if allowed to bloat and not re-indexed. But that's just a reason\nfor\n> better maintainence.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n", "msg_date": "Fri, 2 Apr 2004 06:56:42 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing?" }, { "msg_contents": "On Fri, Apr 02, 2004 at 01:00:45 +0200,\n Palle Girgensohn <[email protected]> wrote:\n> \n> Is it always bad to create index xx on yy (field1, field2, field3);\n> \n> I guess the problem is that the index might often grow bigger than the \n> table, or at least big enough not to speed up the queries?\n\nOne place where you need them in postgres is enforcing unique multicolumn\nkeys. These will get created implicitly from the unique (or primary key)\nconstraint. It isn't all that unusual to have a table that describes\na many to many (to many ...) relationship where the primary key is all\nof the columns.\n", "msg_date": "Fri, 2 Apr 2004 09:56:04 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing?" }, { "msg_contents": "--On fredag, april 02, 2004 09.56.04 -0600 Bruno Wolff III <[email protected]> \nwrote:\n\n> On Fri, Apr 02, 2004 at 01:00:45 +0200,\n> Palle Girgensohn <[email protected]> wrote:\n>>\n>> Is it always bad to create index xx on yy (field1, field2, field3);\n>>\n>> I guess the problem is that the index might often grow bigger than the\n>> table, or at least big enough not to speed up the queries?\n>\n> One place where you need them in postgres is enforcing unique multicolumn\n> keys. These will get created implicitly from the unique (or primary key)\n> constraint. It isn't all that unusual to have a table that describes\n> a many to many (to many ...) relationship where the primary key is all\n> of the columns.\n\nTrue, of course!\n\n/Palle\n\n", "msg_date": "Sat, 03 Apr 2004 15:24:07 +0200", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: single index on more than two coulumns a bad thing?" } ]
[ { "msg_contents": "I am trying to do a spatial join between two tables each of which has a\ncolumn of type BOX called ERRBOX, with R-TREE indices created on both.\n\nThe smaller table, xmm1, has 56,711 rows,\nthe larger one, twomass, has 177,757,299 rows.\n\nThe most efficient way to join these is to do a sequential scan of the\nsmaller table, and an R-tree lookup on the larger. However for a simple\ninner join the optimiser seems to want to do the reverse, for example:\n\nEXPLAIN\nSELECT x.ra AS xra, x.decl AS xdecl, t.ra AS tra, t.decl AS tdecl\nFROM xmm1 AS x INNER JOIN twomass AS t\nON x.errbox && t.errbox;\n\n QUERY PLAN\n----------------------------------------------------------------------------------\n Nested Loop (cost=0.00..196642756520.34 rows=49506496044 width=32)\n -> Seq Scan on twomass t (cost=0.00..9560002.72 rows=177023872 width=48)\n -> Index Scan using xmm1box on xmm1 x (cost=0.00..1107.28 rows=280 width=48)\n Index Cond: (x.errbox && \"outer\".errbox)\n\n\nReversing the join condition (i.e. t.errbox && x.errbox) and similar make\nno difference, nor does using the old implicit join syntax.\n\nIf, however, I specify an outer join such as:\n\nEXPLAIN\nSELECT x.ra AS xra, x.decl AS xdecl, t.ra AS tra, t.decl AS tdecl\nFROM xmm1 AS x LEFT OUTER JOIN twomass AS t\nON x.errbox && t.errbox;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..198945259325.90 rows=49506496044\nwidth=32)\n -> Seq Scan on xmm1 x (cost=0.00..8592.32 rows=55932 width=48)\n -> Index Scan using tbox on twomass t (cost=0.00..3545848.88 rows=885119 width=48)\n Index Cond: (\"outer\".errbox && t.errbox)\n\n\nThis executes, it need hardly be said, a whole lot faster.\n\nI found that I can also force a sequential scan of the smaller table by\ndropping its R-tree index, but I may need this in other operations, so\nthis isn't a very satisfactory solution. It's odd that an outer join\nshould be faster than an inner one, or to put it another way, after\ndropping an index there is more than an order of magnitude speed increase.\n\nI'm using Postgres 7.4.1 on Red Hat Linux. 
Has anyone had similar\nproblems with spatial joins?\n\n\n-- \nClive Page\nDept of Physics & Astronomy,\nUniversity of Leicester,\nLeicester, LE1 7RH, U.K.\n\n\n", "msg_date": "Fri, 2 Apr 2004 15:55:26 +0100 (BST)", "msg_from": "Clive Page <[email protected]>", "msg_from_op": true, "msg_subject": "Spatial join insists on sequential scan of larger table" }, { "msg_contents": "Clive Page <[email protected]> writes:\n> This executes, it need hardly be said, a whole lot faster.\n\nCould we see EXPLAIN ANALYZE output?\n\nThe estimated costs for the two cases are nearly the same, which says to\nme that there's something wrong with the cost model for r-tree lookups,\nbut I don't know what it is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Apr 2004 10:46:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spatial join insists on sequential scan of larger table " }, { "msg_contents": "On Fri, 2 Apr 2004, Tom Lane wrote:\n\n> Could we see EXPLAIN ANALYZE output?\n\nCertainly, but that's going to take a little time (as the ANALYZE causes\nit to run the actual query, which I only just discovered), so may have to\nwait until Monday if I don't get time to finish it this afternoon.\n\n\n-- \nClive Page\nDept of Physics & Astronomy,\nUniversity of Leicester,\nLeicester, LE1 7RH, U.K.\n\n", "msg_date": "Fri, 2 Apr 2004 17:05:01 +0100 (BST)", "msg_from": "Clive Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spatial join insists on sequential scan of larger" }, { "msg_contents": "On Fri, 2 Apr 2004, Tom Lane wrote:\n\n> Could we see EXPLAIN ANALYZE output?\n\nThe original EXPLAIN output was:\n\n QUERY PLAN\n----------------------------------------------------------------------------------\n Nested Loop (cost=0.00..196642756520.34 rows=49506496044 width=32)\n -> Seq Scan on twomass t (cost=0.00..9560002.72 rows=177023872 width=48)\n -> Index Scan using xmm1box on xmm1 x (cost=0.00..1107.28 rows=280 width=48)\n Index Cond: (x.errbox && \"outer\".errbox)\n\nThe EXPLAIN ANALYZE query was:\n\nexplain analyze\nSELECT x.ra AS xra, x.decl AS xdecl, t.ra AS tra, t.decl AS tdecl\nINTO tempjoin\nFROM xmm1 AS x INNER JOIN twomass AS t\nON x.errbox && t.errbox;\n\nAnd this produced:\n\n\\timing\nTiming is on.\ndw=# \\i join1.sql\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..196642756520.34 rows=49506496044 width=32) (actual time=701.919..7796111.624 rows=1513 loops=1)\n -> Seq Scan on twomass t (cost=0.00..9560002.72 rows=177023872 width=48) (actual time=22.064..617462.486 rows=177757299 loops=1)\n -> Index Scan using xmmbox on xmm1 x (cost=0.00..1107.28 rows=280 width=48) (actual time=0.036..0.036 rows=0 loops=177757299)\n Index Cond: (x.errbox && \"outer\".errbox)\n Total runtime: 7796410.533 ms\n(5 rows)\n\nTime: 7796996.093 ms\n\n\n-- \nClive Page\nDept of Physics & Astronomy,\nUniversity of Leicester,\nLeicester, LE1 7RH, U.K.\n\n", "msg_date": "Sat, 3 Apr 2004 23:35:11 +0100 (BST)", "msg_from": "Clive Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spatial join insists on sequential scan of larger" } ]
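Until the cost estimate for the r-tree lookup is sorted out, one way to get the fast plan without dropping xmm1's index is to keep the outer-join form, whose join order the planner leaves alone, and filter out the unmatched rows afterwards. This is only a sketch and assumes errbox is never NULL in twomass, so the filtered result is the same set of rows the inner join returns:

SELECT x.ra AS xra, x.decl AS xdecl, t.ra AS tra, t.decl AS tdecl
FROM xmm1 AS x
     LEFT OUTER JOIN twomass AS t ON x.errbox && t.errbox
WHERE t.errbox IS NOT NULL;   -- discard the null-padded rows an inner join would not produce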
[ { "msg_contents": "On 2 Apr 2004 at 22:36, pgsql-performance@postgresql. wrote:\n\nOK, some more detail: \n\nBefore wiping 2.4 off my test box for the second time: \n\nSQL Statement for update: \nupdate staff_booking set time_from = r.time_from from order_reqt r where r.reqt_id = \nstaff_booking.reqt_id; \n\nExplain: (on 2.4) \nQUERY PLAN \nMerge Join (cost=0.00..185731.30 rows=2845920 width=92) \n Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id) \n -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..53068.20 \nrows=2206291 width=6) \n -> Index Scan using staff_book_idx2 on staff_booking (cost=0.00..99579.21 \nrows=2845920 width=90) \n\nTotal execution time: 18 hours 12 minutes \n\nvacuum full analyze: total time 3 hours 22 minutes \n\nWait 2 hours for re-install 2.6, set params etc. \nrestore database. \n\nSame SQL Statement \nExplain: (on 2.6) \nQUERY PLAN \nMerge Join (cost=0.00..209740.24 rows=2845920 width=92) \n Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id) \n -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..50734.20 \nrows=2206291 width=6) \n -> Index Scan using staff_book_idx2 on staff_booking (cost=0.00..117921.92 \nrows=2845920 width=90) \n\nTotal execution time: 2 hours 53 minutes \n\nvacuum full analyze: total time 1 hours 6 minutes \n\nTable definitions for the two tables involved: \nCREATE TABLE ORDER_REQT \n( \n\tREQT_ID \t\tSERIAL, \n\tORDER_ID \t\tinteger NOT NULL, \n\tDAYOFWEEK \t\tsmallint NOT NULL CHECK (DAYOFWEEK \nBETWEEN 0 AND 6), \n\tTIME_FROM \t\tsmallint NOT NULL CHECK (TIME_FROM \nBETWEEN 0 AND 1439), \n\tDURATION \t\tsmallint NOT NULL CHECK (DURATION \nBETWEEN 0 AND 1439), \n\tPRODUCT_ID \t\tinteger NOT NULL, \n\tNUMBER_REQT \t\tsmallint NOT NULL DEFAULT (1), \n\tWROPTIONS\t\t\tinteger NOT NULL DEFAULT 0, \n\tUID_REF \t\tinteger NOT NULL, \n\tDT_STAMP \t\ttimestamp NOT NULL DEFAULT \ncurrent_timestamp, \n\tSentinel_Priority \tinteger NOT NULL DEFAULT 0, \n\tPERIOD\t\t\tsmallint NOT NULL DEFAULT 1 CHECK \n(PERIOD BETWEEN -2 AND 4), \n\tFREQUENCY\t\t\tsmallint NOT NULL DEFAULT 1, \n\tPRIMARY KEY (REQT_ID) \n); \n\nCREATE TABLE STAFF_BOOKING \n( \n\tBOOKING_ID \t\tSERIAL, \n\tREQT_ID \t\tinteger NOT NULL, \n\tENTITY_TYPE \t\tsmallint NOT NULL DEFAULT 3 \ncheck(ENTITY_TYPE in(3,4)), \n\tSTAFF_ID \t\tinteger NOT NULL, \n\tCONTRACT_ID \t\tinteger NOT NULL, \n\tTIME_FROM \t\tsmallint NOT NULL CHECK (TIME_FROM \nBETWEEN 0 AND 1439), \n\tDURATION \t\tsmallint NOT NULL CHECK (DURATION \nBETWEEN 0 AND 1439), \n\tPERIOD\t\t\tsmallint NOT NULL DEFAULT 1 CHECK \n(PERIOD BETWEEN -2 AND 4), \n\tFREQUENCY\t\t\tsmallint NOT NULL DEFAULT 1, \n\tTRAVEL_TO \t\tsmallint NOT NULL DEFAULT 0, \n\tUID_REF \t\tinteger NOT NULL, \n\tDT_STAMP \t\ttimestamp NOT NULL DEFAULT \ncurrent_timestamp, \n\tSELL_PRICE \t\tnumeric(10,4) NOT NULL DEFAULT 0, \n\tCOST_PRICE \t\tnumeric(10,4) NOT NULL DEFAULT 0, \n\tMIN_SELL_PRICE \tnumeric(10,4) NOT NULL DEFAULT 0, \n\tMIN_COST_PRICE \tnumeric(10,4) NOT NULL DEFAULT 0, \n\tSentinel_Priority \tinteger NOT NULL DEFAULT 0, \n\tCHECK_INTERVAL \tsmallint NOT NULL DEFAULT 0, \n STATUS\t\t\tsmallint NOT NULL DEFAULT 0, \n\tWROPTIONS\t\t\tinteger NOT NULL DEFAULT 0, \n\tPRIMARY KEY (BOOKING_ID) \n); \n\nForeign keys: \n\nALTER TABLE ORDER_REQT ADD \n\t FOREIGN KEY \n\t( \n\t\tORDER_ID \n\t) REFERENCES MAIN_ORDER ( \n\t\tORDER_ID \n\t) ON DELETE CASCADE; \n\nALTER TABLE ORDER_REQT ADD \n\t FOREIGN KEY \n\t( \n\t\tPRODUCT_ID \n\t) REFERENCES PRODUCT ( \n\t\tPRODUCT_ID \n\t); \n\nALTER TABLE STAFF_BOOKING ADD \n\t FOREIGN KEY 
\n\t( \n\t\tCONTRACT_ID \n\t) REFERENCES STAFF_CONTRACT ( \n\t\tCONTRACT_ID \n\t); \n\nALTER TABLE STAFF_BOOKING ADD \n\t FOREIGN KEY \n\t( \n\t\tSTAFF_ID \n\t) REFERENCES STAFF ( \n\t\tSTAFF_ID \n\t); \n\n\nIndexes: \n\nCREATE INDEX FK_IDX_ORDER_REQT \n\t ON ORDER_REQT \n\t( \n\t\tORDER_ID \n\t); \n\nCREATE INDEX FK_IDX_ORDER_REQT_2 \n\t ON ORDER_REQT \n\t( \n\t\tPRODUCT_ID \n\t); \n\nCREATE INDEX ORDER_REQT_IDX ON ORDER_REQT \n( \n\tORDER_ID, \n\tPRODUCT_ID \n); \n\nCREATE INDEX ORDER_REQT_IDX4 ON ORDER_REQT \n( \n\tREQT_ID, \n\tTIME_FROM, \n\tDURATION \n); \n\nCREATE INDEX FK_IDX_STAFF_BOOKING \n\t ON STAFF_BOOKING \n\t( \n\t\tCONTRACT_ID \n\t); \n\nCREATE INDEX FK_IDX_STAFF_BOOKING_2 \n\t ON STAFF_BOOKING \n\t( \n\t\tSTAFF_ID \n\t); \n\nCREATE INDEX STAFF_BOOK_IDX1 ON STAFF_BOOKING \n( \n\tSTAFF_ID, \n\tREQT_ID \n); \n\nCREATE INDEX STAFF_BOOK_IDX2 ON STAFF_BOOKING \n( \n\tREQT_ID \n); \n\nCREATE INDEX STAFF_BOOK_IDX3 ON STAFF_BOOKING \n( \n\tBOOKING_ID, \n\tREQT_ID \n); \n\n\nCREATE INDEX STAFF_BOOK_IDX4 ON STAFF_BOOKING \n( \n\tBOOKING_ID, \n\tCONTRACT_ID \n); \n\nThere are no indexes on the columns involved in the update, they are \nnot required for my usual select statements. This is an attempt to \nslightly denormalise the design to get the performance up comparable \nto SQL Server 2000. We hope to move some of our databases over to \nPostgreSQL later in the year and this is part of the ongoing testing. \nSQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \nso I am hand optimising some of the more frequently used \nSQL and/or tweaking the database design slightly. \n\nLater, after deciphering SQLServers graphical plans I will attempt to \npost comparitive performance/access plans, using the same data of \ncourse, if anyone would be interested.... \n\nCheers, \nGary. \n\n\n\nOn 2 Apr 2004 at 1:32, Tom Lane wrote: \n\n> \"Gary Doades\" <[email protected]> writes: \n> > As a test in PosgreSQL I issued a statement to update a single column \n> > of a table containing 2.8 million rows with the values of a column in \n> > a table with similar rowcount. Using the above spec I had to stop the \n> > server after 17 hours. The poor thing was thrashing the hard disk and \n> > doing more swapping than useful work. \n> \n> This statement is pretty much content-free, since you did not show us \n> the table schemas, the query, or the EXPLAIN output for the query. \n> (I'll forgive you the lack of EXPLAIN ANALYZE, but you could easily \n> have provided all the other hard facts.) There's really no way to tell \n> where the bottleneck is. Maybe it's a kernel-level issue, but I would \n> not bet on that without more evidence. I'd definitely not bet on it \n> without direct confirmation that the same query plan was used in both \n> setups. \n> \n> \t\t\tregards, tom lane \n> \n> ---------------------------(end of broadcast)--------------------------- \n> TIP 3: if posting/reading through Usenet, please send an appropriate \n> subscribe-nomail command to [email protected] so that your \n> message can get through to the mailing list cleanly \n> \n> \n> -- \n> Incoming mail is certified Virus Free. \n> Checked by AVG Anti-Virus (http://www.grisoft.com). \n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004 \n> \n\n\n", "msg_date": "Sat, 03 Apr 2004 11:50:51 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel. 
" }, { "msg_contents": "Gary,\n\n> There are no indexes on the columns involved in the update, they are \n> not required for my usual select statements. This is an attempt to \n> slightly denormalise the design to get the performance up comparable \n> to SQL Server 2000. We hope to move some of our databases over to \n> PostgreSQL later in the year and this is part of the ongoing testing. \n> SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \n> so I am hand optimising some of the more frequently used \n> SQL and/or tweaking the database design slightly. \n\nHmmm ... that hasn't been my general experience on complex queries. However, \nit may be due to a difference in ANALYZE statistics. I'd love to see you \nincrease your default_stats_target, re-analyze, and see if PostgreSQL gets \n\"smarter\".\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 3 Apr 2004 10:59:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Almost any cross dbms migration shows a drop in performance. The engine\neffectively trains developers and administrators in what works and what\ndoesn't. The initial migration thus compares a tuned to an untuned version.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Gary Doades\" <[email protected]>; <[email protected]>\nSent: Saturday, April 03, 2004 1:59 PM\nSubject: Re: [PERFORM] PostgreSQL and Linux 2.6 kernel.\n\n\n> Gary,\n>\n> > There are no indexes on the columns involved in the update, they are\n> > not required for my usual select statements. This is an attempt to\n> > slightly denormalise the design to get the performance up comparable\n> > to SQL Server 2000. We hope to move some of our databases over to\n> > PostgreSQL later in the year and this is part of the ongoing testing.\n> > SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet)\n> > so I am hand optimising some of the more frequently used\n> > SQL and/or tweaking the database design slightly.\n>\n> Hmmm ... that hasn't been my general experience on complex queries.\nHowever,\n> it may be due to a difference in ANALYZE statistics. I'd love to see you\n> increase your default_stats_target, re-analyze, and see if PostgreSQL gets\n> \"smarter\".\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Sat, 3 Apr 2004 17:43:52 -0500", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "On Sat, 2004-04-03 at 03:50, Gary Doades wrote:\n> On 2 Apr 2004 at 22:36, pgsql-performance@postgresql. wrote:\n> \n> OK, some more detail: \n> \n> Before wiping 2.4 off my test box for the second time: \n\nPerhaps I missed it, but which io scheduler are you using under 2.6?\n\n\n", "msg_date": "Sat, 03 Apr 2004 16:52:49 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "> Almost any cross dbms migration shows a drop in performance. 
The engine\n> effectively trains developers and administrators in what works and what\n> doesn't. The initial migration thus compares a tuned to an untuned version.\n\nI think it is also possible that Microsoft has more programmers working\non tuning issues for SQL Server than PostgreSQL has working on the \nwhole project.\n--\nMike Nolan\n", "msg_date": "Sat, 3 Apr 2004 21:23:57 -0600 (CST)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Hi Friends,\n Does anybody know the substitute of the oracle function 'connect by\nprior' in postgre sql.\n The query is basically being used to get a tree structure of records. The\nquery in oracle is :-\n\nselect pkmsgid\nfrom mstmessage\nconnect by prior pkmsgid = msgparentid\nstart with msgparentid = 1\n\nKindly suggest.\n\nregards\nKamal\n\n\n\n*********************************************************************\nNetwork Programs is a SEI CMM Level 5 Certified Company\n********************************************************************\nThe information contained in this communication (including any attachments) is\nintended solely for the use of the individual or entity to whom it is addressed\nand others authorized to receive it. It may contain confidential or legally\nprivileged information. If you are not the intended recipient you are hereby\nnotified that any disclosure, copying, distribution or taking any action in\nreliance on the contents of this information is strictly prohibited and may be\nunlawful. If you have received this communication in error, please notify us\nimmediately by responding to this email and then delete it from your system.\nNetwork Programs (India) Limited is neither liable for the proper and complete\ntransmission of the information contained in this communication nor for any\ndelay in its receipt.\n*********************************************************************\n\n", "msg_date": "Sun, 4 Apr 2004 13:47:24 +0530", "msg_from": "\"Kamalraj Singh Madhan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Substitute for this oracle query in postGre" }, { "msg_contents": "Possibly.\n\nA lot of my queries show comparable performance, some a little slower \nand a few a little faster. There are a few, however, that really grind on \nPostgreSQL. I am leaning patterns from these to try and and target the \nmost likely performance problems to come and hand tune these types \nof SQL.\n\nI'm not complaining about PostgreSQL or saying that SQLServer is \nbetter, in most cases it is not. SQLServer seems to be more predictable \nand forgiving in performance which tends to make for lazy SQL \nprogramming. It also has implications when the SQL is dynamically \ncreated based on user input, there are more chances of PostgreSQL \nhitting a performance problem than SQLServer.\n\nOverall I'm still very impressed with PostgreSQL. Given the $7000 per \nprocessor licence for SQLServer makes the case for PostgreSQL even \nstronger!\n\nCheers,\nGary.\n\nOn 3 Apr 2004 at 17:43, Aaron Werman wrote:\n\nAlmost any cross dbms migration shows a drop in performance. The engine\neffectively trains developers and administrators in what works and what\ndoesn't. 
The initial migration thus compares a tuned to an untuned version.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Gary Doades\" <[email protected]>; <[email protected]>\nSent: Saturday, April 03, 2004 1:59 PM\nSubject: Re: [PERFORM] PostgreSQL and Linux 2.6 kernel.\n\n\n> Gary,\n>\n> > There are no indexes on the columns involved in the update, they are\n> > not required for my usual select statements. This is an attempt to\n> > slightly denormalise the design to get the performance up comparable\n> > to SQL Server 2000. We hope to move some of our databases over to\n> > PostgreSQL later in the year and this is part of the ongoing testing.\n> > SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet)\n> > so I am hand optimising some of the more frequently used\n> > SQL and/or tweaking the database design slightly.\n>\n> Hmmm ... that hasn't been my general experience on complex queries.\nHowever,\n> it may be due to a difference in ANALYZE statistics. I'd love to see you\n> increase your default_stats_target, re-analyze, and see if PostgreSQL gets\n> \"smarter\".\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n-- \nIncoming mail is certified Virus Free.\nChecked by AVG Anti-Virus (http://www.grisoft.com).\nVersion: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n\n", "msg_date": "Sun, 04 Apr 2004 09:49:23 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Unfortunately I don't understand the question!\n\nMy background is the primarily Win32. The last time I used a *nix OS \nwas about 20 years ago apart from occasional dips into the linux OS \nover the past few years. If you can tell be how to find out what you want \nI will gladly give you the information.\n\nRegards,\nGary.\n\nOn 3 Apr 2004 at 16:52, Cott Lang wrote:\n\n> On Sat, 2004-04-03 at 03:50, Gary Doades wrote:\n> > On 2 Apr 2004 at 22:36, pgsql-performance@postgresql. wrote:\n> > \n> > OK, some more detail: \n> > \n> > Before wiping 2.4 off my test box for the second time: \n> \n> Perhaps I missed it, but which io scheduler are you using under 2.6?\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> -- \n> Incoming mail is certified Virus Free.\n> Checked by AVG Anti-Virus (http://www.grisoft.com).\n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n> \n\n\n", "msg_date": "Sun, 04 Apr 2004 09:56:34 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Unfortunately I have to try and keep both SQLServer and PostgreSQL \ncompatibilty. 
Our main web application is currently SQLServer, but we \nwant to migrate customers who don't care what the DB server is over to \nPostgreSQL. Some of our larger customers demand SQLServer, you \nknow how it is!\n\nI don't want to maintain two sets of code or SQL, so I am trying to find \ncommon ground. The code is not a problem, but the SQL sometimes is.\n\nCheers,\nGary.\n\n\nOn 3 Apr 2004 at 17:43, Aaron Werman wrote:\n\n> Almost any cross dbms migration shows a drop in performance. The engine\n> effectively trains developers and administrators in what works and what\n> doesn't. The initial migration thus compares a tuned to an untuned version.\n> \n> /Aaron\n> \n> ----- Original Message ----- \n> From: \"Josh Berkus\" <[email protected]>\n> To: \"Gary Doades\" <[email protected]>; <[email protected]>\n> Sent: Saturday, April 03, 2004 1:59 PM\n> Subject: Re: [PERFORM] PostgreSQL and Linux 2.6 kernel.\n> \n> \n> > Gary,\n> >\n> > > There are no indexes on the columns involved in the update, they are\n> > > not required for my usual select statements. This is an attempt to\n> > > slightly denormalise the design to get the performance up comparable\n> > > to SQL Server 2000. We hope to move some of our databases over to\n> > > PostgreSQL later in the year and this is part of the ongoing testing.\n> > > SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet)\n> > > so I am hand optimising some of the more frequently used\n> > > SQL and/or tweaking the database design slightly.\n> >\n> > Hmmm ... that hasn't been my general experience on complex queries.\n> However,\n> > it may be due to a difference in ANALYZE statistics. I'd love to see you\n> > increase your default_stats_target, re-analyze, and see if PostgreSQL gets\n> > \"smarter\".\n> >\n> > -- \n> > -Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n> -- \n> Incoming mail is certified Virus Free.\n> Checked by AVG Anti-Virus (http://www.grisoft.com).\n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n> \n\n\n", "msg_date": "Sun, 04 Apr 2004 10:00:10 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "\nOn 04/04/2004 09:56 Gary Doades wrote:\n> Unfortunately I don't understand the question!\n> \n> My background is the primarily Win32. The last time I used a *nix OS\n> was about 20 years ago apart from occasional dips into the linux OS\n> over the past few years. 
If you can tell be how to find out what you want\n> \n> I will gladly give you the information.\n\n\nGoogling threw up\n\nhttp://spider.tm/apr2004/cstory2.html\n\nInteresting and possibly relevant quote:\n\n\"Benchmarks have shown that in certain conditions the anticipatory \nalgorithm is almost 10 times faster than what 2.4 kernel supports\".\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Sun, 4 Apr 2004 10:41:33 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Hi,\n\nTry looking at the contrib/tablefunc add-in module.\n\nChris\n\nKamalraj Singh Madhan wrote:\n> Hi Friends,\n> Does anybody know the substitute of the oracle function 'connect by\n> prior' in postgre sql.\n> The query is basically being used to get a tree structure of records. The\n> query in oracle is :-\n> \n> select pkmsgid\n> from mstmessage\n> connect by prior pkmsgid = msgparentid\n> start with msgparentid = 1\n> \n> Kindly suggest.\n> \n> regards\n> Kamal\n> \n> \n> \n> *********************************************************************\n> Network Programs is a SEI CMM Level 5 Certified Company\n> ********************************************************************\n> The information contained in this communication (including any attachments) is\n> intended solely for the use of the individual or entity to whom it is addressed\n> and others authorized to receive it. It may contain confidential or legally\n> privileged information. If you are not the intended recipient you are hereby\n> notified that any disclosure, copying, distribution or taking any action in\n> reliance on the contents of this information is strictly prohibited and may be\n> unlawful. If you have received this communication in error, please notify us\n> immediately by responding to this email and then delete it from your system.\n> Network Programs (India) Limited is neither liable for the proper and complete\n> transmission of the information contained in this communication nor for any\n> delay in its receipt.\n> *********************************************************************\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Sun, 04 Apr 2004 18:14:47 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Substitute for this oracle query in postGre" }, { "msg_contents": "On Sun, 2004-04-04 at 01:56, Gary Doades wrote:\n> Unfortunately I don't understand the question!\n> \n> My background is the primarily Win32. The last time I used a *nix OS \n> was about 20 years ago apart from occasional dips into the linux OS \n> over the past few years. If you can tell be how to find out what you want \n> I will gladly give you the information.\n\nThere are two available io schedulers in 2.6 (new feature), deadline and\nanticipatory. It should show be listed in the boot messages:\n\ndmesg | grep scheduler\n\nI've seen people arguing for each of the two schedulers, saying one is\nbetter than the other for databases. I'm curious which one you're\nusing. 
:)\n\n\n\n", "msg_date": "Sun, 04 Apr 2004 06:04:35 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "It says:\n\nUsing anticipatory io scheduler.\n\nThis then fits with the earlier post on other observations of up to 10 \ntimes better performance, which I what I was seeing in in certain \ncircumstances.\n\nCheers,\nGary.\n\n\nOn 4 Apr 2004 at 6:04, Cott Lang wrote:\n\n> On Sun, 2004-04-04 at 01:56, Gary Doades wrote:\n> > Unfortunately I don't understand the question!\n> > \n> > My background is the primarily Win32. The last time I used a *nix OS \n> > was about 20 years ago apart from occasional dips into the linux OS \n> > over the past few years. If you can tell be how to find out what you want \n> > I will gladly give you the information.\n> \n> There are two available io schedulers in 2.6 (new feature), deadline and\n> anticipatory. It should show be listed in the boot messages:\n> \n> dmesg | grep scheduler\n> \n> I've seen people arguing for each of the two schedulers, saying one is\n> better than the other for databases. I'm curious which one you're\n> using. :)\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> \n> -- \n> Incoming mail is certified Virus Free.\n> Checked by AVG Anti-Virus (http://www.grisoft.com).\n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n> \n\n\n", "msg_date": "Sun, 04 Apr 2004 15:50:22 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Mike,\n\n> I think it is also possible that Microsoft has more programmers working\n> on tuning issues for SQL Server than PostgreSQL has working on the\n> whole project.\n\nAh, but quantity != quality. Or they wouldn't be trolling our mailing lists \ntrying to hire PostgreSQL programmers for the SQL Server project (really!). \nAnd we had nearly 200 contributors between 7.3 and 7.4 ... a respectable \ndevelopment staff for even a large corporation.\n\nPoint taken, though, SQL Server has done a better job in opitimizing for \n\"dumb\" queries. This is something that PostgreSQL needs to work on, as is \nself-referential updates for large tables, which also tend to be really slow. \nMind you, in SQL Server 7 I used to be able to crash the server with a big \nself-referential update, so this is a common database problem.\n\nUnfortunately, these days only Tom and Neil seem to be seriously working on \nthe query planner (beg pardon in advance if I've missed someone) so I think \nthe real answer is that we need another person interested in this kind of \noptimization before it's going to get much better.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 5 Apr 2004 08:36:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "On 4 Apr, Cott Lang wrote:\n> On Sun, 2004-04-04 at 01:56, Gary Doades wrote:\n>> Unfortunately I don't understand the question!\n>> \n>> My background is the primarily Win32. The last time I used a *nix OS \n>> was about 20 years ago apart from occasional dips into the linux OS \n>> over the past few years. 
If you can tell be how to find out what you want \n>> I will gladly give you the information.\n> \n> There are two available io schedulers in 2.6 (new feature), deadline and\n> anticipatory. It should show be listed in the boot messages:\n> \n> dmesg | grep scheduler\n> \n> I've seen people arguing for each of the two schedulers, saying one is\n> better than the other for databases. I'm curious which one you're\n> using. :)\n\nOur database tests (TPC fair use implementations) show that the deadline\nscheduler has an edge on the anticipatory scheduler. Depending on the\ncurrent state of the AS scheduler, it can be within a few percent to 10%\nor so.\n\nI have some data with one of our tests here:\n\thttp://developer.osdl.org/markw/fs/dbt2_project_results.html\n\nMark\n", "msg_date": "Mon, 5 Apr 2004 09:43:36 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "On 5 Apr 2004 at 8:36, Josh Berkus wrote:\n\n> \n> Point taken, though, SQL Server has done a better job in opitimizing for \n> \"dumb\" queries. This is something that PostgreSQL needs to work on, as is \n> self-referential updates for large tables, which also tend to be really slow. \n> Mind you, in SQL Server 7 I used to be able to crash the server with a big \n> self-referential update, so this is a common database problem.\n> \n\nI agree about the \"dumb\" queries (I'm not mine are *that* dumb :) )\n\nWhen you can write SQL that looks right, feels right, gives the right \nanswers during testing and SQLServer runs them really fast, you stop \nthere and tend not to tinker with the SQL further.\n\nYou *can* (I certainly do) achieve comparable performance with \nPostgreSQL, but you just have to work harder for it. Now that I have \nlearned the characteristics of both servers I can write SQL that is pretty \ngood on both. I suspect that there are people who evaluate PostgreSQL \nby executing their favorite SQLSever queries against it, see that it is \nslower and never bother to go further.\n\nCheers,\nGary.\n\n", "msg_date": "Mon, 05 Apr 2004 19:11:56 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "On Mon, 2004-04-05 at 11:36, Josh Berkus wrote:\n> Unfortunately, these days only Tom and Neil seem to be seriously working on \n> the query planner (beg pardon in advance if I've missed someone)\n\nActually, Tom is the only person actively working on the planner --\nwhile I hope to contribute to it in the future, I haven't done so yet.\n\n-Neil\n\n\n", "msg_date": "Wed, 07 Apr 2004 13:07:45 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "> Josh Berkus wrote:\n> Unfortunately, these days only Tom and Neil seem to be\n> seriously working on\n> the query planner (beg pardon in advance if I've missed\n> someone) so I think\n> the real answer is that we need another person interested in\n> this kind of\n> optimization before it's going to get much better.\n>\n\nHmmmm. 
Interesting line of thought.\n\nIs the problem \"a person interested\" or is there another issue there?\n\nI was thinking the other day that maybe removing the ability to control\njoin order through explicitly manipulating the FROM clause might\nactually be counter productive, in terms of longer term improvement of\nthe optimizer.\n\nTreating the optimizer as a black box is something I'm very used to from\nother RDBMS. My question is, how can you explicitly re-write a query now\nto \"improve\" it? If there's no way of manipulating queries without\nactually re-writing the optimizer, we're now in a position where we\naren't able to diagnose when the optimizer isn't working effectively.\n\nFor my mind, all the people on this list are potential \"optimizer\ndevelopers\" in the sense that we can all look at queries and see whether\nthere is a problem with particular join plans. Providing good cases of\npoor optimization is just what's needed to assist those few that do\nunderstand the internals to continue improving things.\n\nI guess what I'm saying is it's not how many people you've got working\non the optimizer, its how many accurate field reports of less-than\nperfect optimization reach them. In that case, PostgreSQL is likely in a\nbetter position than Microsoft, since the accessibility of the pg\ndiscussion lists makes such cases much more likely to get aired.\n\nAny thoughts?\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 14 Apr 2004 21:12:18 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "On Wed, Apr 14, 2004 at 21:12:18 +0100,\n Simon Riggs <[email protected]> wrote:\n> \n> I guess what I'm saying is it's not how many people you've got working\n> on the optimizer, its how many accurate field reports of less-than\n> perfect optimization reach them. In that case, PostgreSQL is likely in a\n> better position than Microsoft, since the accessibility of the pg\n> discussion lists makes such cases much more likely to get aired.\n> \n> Any thoughts?\n\nI have seen exactly this happen a number of times over the last several\nyears. However there is still only one Tom Lane implementing the\nimprovements.\n", "msg_date": "Thu, 15 Apr 2004 12:20:59 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "> Bruno Wolff\n> Simon Riggs <[email protected]> wrote:\n> >\n> > I guess what I'm saying is it's not how many people you've\n> got working\n> > on the optimizer, its how many accurate field reports of less-than\n> > perfect optimization reach them. In that case, PostgreSQL\n> is likely in a\n> > better position than Microsoft, since the accessibility of the pg\n> > discussion lists makes such cases much more likely to get aired.\n> >\n> > Any thoughts?\n>\n> I have seen exactly this happen a number of times over the\n> last several\n> years. However there is still only one Tom Lane implementing the\n> improvements.\n>\n\n...and very few Mr.Microsofts too.\n\n[I'm uncomfortable with, and it was not my intent, to discuss such an\nissue with direct reference to particular individuals. There is no\nintent to critiscise or malign anybody named]\n\nRegards, Simon\n\n", "msg_date": "Thu, 15 Apr 2004 20:35:34 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." 
}, { "msg_contents": "Simon,\n\n> Is the problem \"a person interested\" or is there another issue there?\n\nIMHO, it's \"a person interested\".\n\n> Treating the optimizer as a black box is something I'm very used to from\n> other RDBMS. My question is, how can you explicitly re-write a query now\n> to \"improve\" it? If there's no way of manipulating queries without\n> actually re-writing the optimizer, we're now in a position where we\n> aren't able to diagnose when the optimizer isn't working effectively.\n\nWell, there is ... all of the various query cost parameters.\n\n> For my mind, all the people on this list are potential \"optimizer\n> developers\" in the sense that we can all look at queries and see whether\n> there is a problem with particular join plans. Providing good cases of\n> poor optimization is just what's needed to assist those few that do\n> understand the internals to continue improving things.\n\n... which is what this list is for.\n\nBut, ultimately, improvements on the planner are still bottlenecked by having \nonly one developer actually hacking the changes.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 15 Apr 2004 13:39:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "\nBruno Wolff III <[email protected]> writes:\n\n> I have seen exactly this happen a number of times over the last several\n> years. However there is still only one Tom Lane implementing the\n> improvements.\n\nOb: Well clearly the problem is we need more Tom Lanes.\n\n-- \ngreg\n\n", "msg_date": "15 Apr 2004 17:38:23 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Ob: Well clearly the problem is we need more Tom Lanes.\n\nObHHGReference: \"Haven't you heard? I come in six-packs!\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2004 18:21:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel. " }, { "msg_contents": "Greg Stark wrote:\n> Bruno Wolff III <[email protected]> writes:\n> \n> \n>>I have seen exactly this happen a number of times over the last several\n>>years. However there is still only one Tom Lane implementing the\n>>improvements.\n> \n> \n> Ob: Well clearly the problem is we need more Tom Lanes.\n> \n\nmy $pgGuru = \"Tom Lane\"; my @morepgGurus; my $howmany = 10;\n\nwhile($howmany--) { push @morepgGurus, $pgGuru; }\n\n-- \nUntil later, Geoffrey Registered Linux User #108567\nBuilding secure systems in spite of Microsoft\n", "msg_date": "Thu, 15 Apr 2004 18:23:19 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "\n>\n> my $pgGuru = \"Tom Lane\"; my @morepgGurus; my $howmany = 10;\n>\n> while($howmany--) { push @morepgGurus, $pgGuru; }\n>\nThis is just wrong...\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n", "msg_date": "Thu, 15 Apr 2004 18:07:34 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "\n\"Joshua D. Drake\" <[email protected]> writes:\n\n> > while($howmany--) { push @morepgGurus, $pgGuru; }\n>\n> This is just wrong...\n\nyeah, it would have been much clearer written as:\n push @morepgGurus, ($pgGuru)x$howmany;\n\nOr at least the perlish:\n for (1..$howmany)\ninstead of C style while syntax.\n\nOk. I stop now.\n\n-- \ngreg\n\n", "msg_date": "15 Apr 2004 22:27:20 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." }, { "msg_contents": "\n>Josh Berkus\n> > Treating the optimizer as a black box is something I'm very\n> used to from\n> > other RDBMS. My question is, how can you explicitly\n> re-write a query now\n> > to \"improve\" it? If there's no way of manipulating queries without\n> > actually re-writing the optimizer, we're now in a position where we\n> > aren't able to diagnose when the optimizer isn't working\n> effectively.\n>\n> Well, there is ... all of the various query cost parameters.\n\nThey are very blunt instruments for such a delicate task.\n\nSurely someone of your experience might have benefit from something\nmore?\n\nMy feeling is, I would, though I want those tools as *a developer*\nrather than for tuning specific queries for people, which is always so\nsensitive to upgrades etc.\n\n> But, ultimately, improvements on the planner are still\n> bottlenecked by having\n> only one developer actually hacking the changes.\n>\n\nDo we have a clear list of optimizations we'd like to be working on?\n\nThe TODO items aren't very related to specific optimizations...\n\nThe only ones I was aware of was deferred subselect evaluation for\nDBT-3.\n\n\n\n...sounds like there's more to discuss here, so I'll duck out now and\nget back to my current project...\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 16 Apr 2004 08:24:46 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]
[ { "msg_contents": "Actually it hasn't been my experience either. Most of my queries against \nthe database, large and small are either a little quicker or no real \ndifference. I have only really noticed big differences under stress when \nmemory (RAM) is being squeezed. The main winner on 2.6 seems to be \nwrite performance and memory management.\n\nUnfortunately I only have one test machine and I can't really keep \nswitching between 2.4 and 2.6 to do the comparisons. I had written \ndown 27 timings from a set of SQL of varying complexity using the 2.4 \nkernel. Each SQL statement was executed 10 times and the average of \nthe last 5 was used. I can only really compare those timings against the \nnew installation on 2.6. I know that this is not ideal \"real world\" testing, \nbut it is good enough for me at the moment. Unless anyone has \ncontradictory indications then I will proceed with 2.6.\n\nI did increase the default stats target from 10 to 50 and re-analysed. \nThe explain numbers are slightly different, but the time to run was \nalmost the same. Not surprising since the plan was the same.\n\nQUERY PLAN \nMerge Join (cost=0.00..192636.20 rows=2845920 width=92) \n Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id) \n -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..52662.40 \nrows=2206291 width=6) \n -> Index Scan using staff_book_idx2 on staff_booking (cost=0.00..102529.28 \nrows=2845920 width=90) \n\n\nOn 3 Apr 2004 at 10:59, Josh Berkus wrote:\n\nGary,\n\n> There are no indexes on the columns involved in the update, they are \n> not required for my usual select statements. This is an attempt to \n> slightly denormalise the design to get the performance up comparable \n> to SQL Server 2000. We hope to move some of our databases over to \n> PostgreSQL later in the year and this is part of the ongoing testing. \n> SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \n> so I am hand optimising some of the more frequently used \n> SQL and/or tweaking the database design slightly. \n\nHmmm ... that hasn't been my general experience on complex queries. However, \nit may be due to a difference in ANALYZE statistics. I'd love to see you \nincrease your default_stats_target, re-analyze, and see if PostgreSQL gets \n\"smarter\".\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n-- \nIncoming mail is certified Virus Free.\nChecked by AVG Anti-Virus (http://www.grisoft.com).\nVersion: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n\n", "msg_date": "Sat, 03 Apr 2004 20:32:35 +0100", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]
[ { "msg_contents": "Sorry, I think I misread your post in my last reply. I thought you were still talking about \nthe big update....\n\nThe main thing I have noticed about SQLServer is it seems more willing to do hash or \nmerge joins than PostgreSQL. I have experimented with various postgresql.conf \nparameters and even turned off nested loops to see the difference. When actually \ngetting a merge join out of PostgreSQL when it wanted to do a nested loop it, not \nsurprisingly, took longer to execute.\n\nLooking at the SQLServer plan it seemed to be spending MUCH less time in the sort \noperations than PostgreSQL. This is probably what leads SQLServer to go for \nhash/merge joins more often. The other problem is that the SQLServer timings are \nskewed by its query plan caching.\n\nFor one query SQLserver plan said it spent 2% of its time in a big sort, the same query \nin PostgreSQL when hash join was forced spent 23% of its time on the sort (from explain \nanalyse actual stats). I have played about with the sort_mem, but it doesn't make much \ndiffrence.\n\nI have also noticed that SQLServer tends to fold more complex IN subselects into the \nmain query using merge joins, maybe for the same reason as above.\n\nSQLServer seems to have some more \"exotic\" joins (\"nested loop/left semi join\",\"nested \nloop/left anti semi join\"). These are probably just variants of nested loops, but I don't \nknow enough about it to say if they make a difference. Clustered indexes and clustered \nindex seeks also seem to be a big player in the more complex queries.\n\nI still have quite a lot comparitive testing and tuning to do before I can nail it down \nfurther, but I will let you know when I have some hard stats to go on.\n\n\nOn 3 Apr 2004 at 10:59, Josh Berkus wrote:\n\nGary,\n\n> There are no indexes on the columns involved in the update, they are \n> not required for my usual select statements. This is an attempt to \n> slightly denormalise the design to get the performance up comparable \n> to SQL Server 2000. We hope to move some of our databases over to \n> PostgreSQL later in the year and this is part of the ongoing testing. \n> SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \n> so I am hand optimising some of the more frequently used \n> SQL and/or tweaking the database design slightly. \n\nHmmm ... that hasn't been my general experience on complex queries. However, \nit may be due to a difference in ANALYZE statistics. I'd love to see you \nincrease your default_stats_target, re-analyze, and see if PostgreSQL gets \n\"smarter\".\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n-- \nIncoming mail is certified Virus Free.\nChecked by AVG Anti-Virus (http://www.grisoft.com).\nVersion: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n\n", "msg_date": "Sat, 03 Apr 2004 21:16:10 +0100", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]
[ { "msg_contents": "Thanks,\n\nI know about set showplan_text, but it is only the equivalent of explain, \nnot explain analyze. The graphical plan gives full statistics, runtime, \npercentage cost, loop execution counts etc. which is much more useful. \nI don't know of a way of getting the graphical plan content in text form.\n\nCheers,\nGary.\n\nOn 3 Apr 2004 at 6:50, @g v t c wrote:\n\nUse \"Set Show_Plan\" or something of the sort in Query Analyzer. Then \nrun your SQL. This will change the graphical plan to a text plan \nsimilar to Postgresql or at least something close to readable.\n\nGary Doades wrote:\n\n>On 2 Apr 2004 at 22:36, pgsql-performance@postgresql. wrote:\n>\n>OK, some more detail: \n>\n>Before wiping 2.4 off my test box for the second time: \n>\n>SQL Statement for update: \n>update staff_booking set time_from = r.time_from from order_reqt r where r.reqt_id = \n>staff_booking.reqt_id; \n>\n>Explain: (on 2.4) \n>QUERY PLAN \n>Merge Join (cost=0.00..185731.30 rows=2845920 width=92) \n> Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id) \n> -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..53068.20 \n>rows=2206291 width=6) \n> -> Index Scan using staff_book_idx2 on staff_booking (cost=0.00..99579.21 \n>rows=2845920 width=90) \n>\n>Total execution time: 18 hours 12 minutes \n>\n>vacuum full analyze: total time 3 hours 22 minutes \n>\n>Wait 2 hours for re-install 2.6, set params etc. \n>restore database. \n>\n>Same SQL Statement \n>Explain: (on 2.6) \n>QUERY PLAN \n>Merge Join (cost=0.00..209740.24 rows=2845920 width=92) \n> Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id) \n> -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..50734.20 \n>rows=2206291 width=6) \n> -> Index Scan using staff_book_idx2 on staff_booking (cost=0.00..117921.92 \n>rows=2845920 width=90) \n>\n>Total execution time: 2 hours 53 minutes \n>\n>vacuum full analyze: total time 1 hours 6 minutes \n>\n>Table definitions for the two tables involved: \n>CREATE TABLE ORDER_REQT \n>( \n>\tREQT_ID \t\tSERIAL, \n>\tORDER_ID \t\tinteger NOT NULL, \n>\tDAYOFWEEK \t\tsmallint NOT NULL CHECK (DAYOFWEEK \n>BETWEEN 0 AND 6), \n>\tTIME_FROM \t\tsmallint NOT NULL CHECK (TIME_FROM \n>BETWEEN 0 AND 1439), \n>\tDURATION \t\tsmallint NOT NULL CHECK (DURATION \n>BETWEEN 0 AND 1439), \n>\tPRODUCT_ID \t\tinteger NOT NULL, \n>\tNUMBER_REQT \t\tsmallint NOT NULL DEFAULT (1), \n>\tWROPTIONS\t\t\tinteger NOT NULL DEFAULT 0, \n>\tUID_REF \t\tinteger NOT NULL, \n>\tDT_STAMP \t\ttimestamp NOT NULL DEFAULT \n>current_timestamp, \n>\tSentinel_Priority \tinteger NOT NULL DEFAULT 0, \n>\tPERIOD\t\t\tsmallint NOT NULL DEFAULT 1 CHECK \n>(PERIOD BETWEEN -2 AND 4), \n>\tFREQUENCY\t\t\tsmallint NOT NULL DEFAULT 1, \n>\tPRIMARY KEY (REQT_ID) \n>); \n>\n>CREATE TABLE STAFF_BOOKING \n>( \n>\tBOOKING_ID \t\tSERIAL, \n>\tREQT_ID \t\tinteger NOT NULL, \n>\tENTITY_TYPE \t\tsmallint NOT NULL DEFAULT 3 \n>check(ENTITY_TYPE in(3,4)), \n>\tSTAFF_ID \t\tinteger NOT NULL, \n>\tCONTRACT_ID \t\tinteger NOT NULL, \n>\tTIME_FROM \t\tsmallint NOT NULL CHECK (TIME_FROM \n>BETWEEN 0 AND 1439), \n>\tDURATION \t\tsmallint NOT NULL CHECK (DURATION \n>BETWEEN 0 AND 1439), \n>\tPERIOD\t\t\tsmallint NOT NULL DEFAULT 1 CHECK \n>(PERIOD BETWEEN -2 AND 4), \n>\tFREQUENCY\t\t\tsmallint NOT NULL DEFAULT 1, \n>\tTRAVEL_TO \t\tsmallint NOT NULL DEFAULT 0, \n>\tUID_REF \t\tinteger NOT NULL, \n>\tDT_STAMP \t\ttimestamp NOT NULL DEFAULT \n>current_timestamp, \n>\tSELL_PRICE \t\tnumeric(10,4) NOT NULL DEFAULT 0, \n>\tCOST_PRICE \t\tnumeric(10,4) NOT 
NULL DEFAULT 0, \n>\tMIN_SELL_PRICE \tnumeric(10,4) NOT NULL DEFAULT 0, \n>\tMIN_COST_PRICE \tnumeric(10,4) NOT NULL DEFAULT 0, \n>\tSentinel_Priority \tinteger NOT NULL DEFAULT 0, \n>\tCHECK_INTERVAL \tsmallint NOT NULL DEFAULT 0, \n> STATUS\t\t\tsmallint NOT NULL DEFAULT 0, \n>\tWROPTIONS\t\t\tinteger NOT NULL DEFAULT 0, \n>\tPRIMARY KEY (BOOKING_ID) \n>); \n>\n>Foreign keys: \n>\n>ALTER TABLE ORDER_REQT ADD \n>\t FOREIGN KEY \n>\t( \n>\t\tORDER_ID \n>\t) REFERENCES MAIN_ORDER ( \n>\t\tORDER_ID \n>\t) ON DELETE CASCADE; \n>\n>ALTER TABLE ORDER_REQT ADD \n>\t FOREIGN KEY \n>\t( \n>\t\tPRODUCT_ID \n>\t) REFERENCES PRODUCT ( \n>\t\tPRODUCT_ID \n>\t); \n>\n>ALTER TABLE STAFF_BOOKING ADD \n>\t FOREIGN KEY \n>\t( \n>\t\tCONTRACT_ID \n>\t) REFERENCES STAFF_CONTRACT ( \n>\t\tCONTRACT_ID \n>\t); \n>\n>ALTER TABLE STAFF_BOOKING ADD \n>\t FOREIGN KEY \n>\t( \n>\t\tSTAFF_ID \n>\t) REFERENCES STAFF ( \n>\t\tSTAFF_ID \n>\t); \n>\n>\n>Indexes: \n>\n>CREATE INDEX FK_IDX_ORDER_REQT \n>\t ON ORDER_REQT \n>\t( \n>\t\tORDER_ID \n>\t); \n>\n>CREATE INDEX FK_IDX_ORDER_REQT_2 \n>\t ON ORDER_REQT \n>\t( \n>\t\tPRODUCT_ID \n>\t); \n>\n>CREATE INDEX ORDER_REQT_IDX ON ORDER_REQT \n>( \n>\tORDER_ID, \n>\tPRODUCT_ID \n>); \n>\n>CREATE INDEX ORDER_REQT_IDX4 ON ORDER_REQT \n>( \n>\tREQT_ID, \n>\tTIME_FROM, \n>\tDURATION \n>); \n>\n>CREATE INDEX FK_IDX_STAFF_BOOKING \n>\t ON STAFF_BOOKING \n>\t( \n>\t\tCONTRACT_ID \n>\t); \n>\n>CREATE INDEX FK_IDX_STAFF_BOOKING_2 \n>\t ON STAFF_BOOKING \n>\t( \n>\t\tSTAFF_ID \n>\t); \n>\n>CREATE INDEX STAFF_BOOK_IDX1 ON STAFF_BOOKING \n>( \n>\tSTAFF_ID, \n>\tREQT_ID \n>); \n>\n>CREATE INDEX STAFF_BOOK_IDX2 ON STAFF_BOOKING \n>( \n>\tREQT_ID \n>); \n>\n>CREATE INDEX STAFF_BOOK_IDX3 ON STAFF_BOOKING \n>( \n>\tBOOKING_ID, \n>\tREQT_ID \n>); \n>\n>\n>CREATE INDEX STAFF_BOOK_IDX4 ON STAFF_BOOKING \n>( \n>\tBOOKING_ID, \n>\tCONTRACT_ID \n>); \n>\n>There are no indexes on the columns involved in the update, they are \n>not required for my usual select statements. This is an attempt to \n>slightly denormalise the design to get the performance up comparable \n>to SQL Server 2000. We hope to move some of our databases over to \n>PostgreSQL later in the year and this is part of the ongoing testing. \n>SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \n>so I am hand optimising some of the more frequently used \n>SQL and/or tweaking the database design slightly. \n>\n>Later, after deciphering SQLServers graphical plans I will attempt to \n>post comparitive performance/access plans, using the same data of \n>course, if anyone would be interested.... \n>\n>Cheers, \n>Gary. \n>\n>\n>\n>On 2 Apr 2004 at 1:32, Tom Lane wrote: \n>\n> \n>\n>>\"Gary Doades\" <[email protected]> writes: \n>> \n>>\n>>>As a test in PosgreSQL I issued a statement to update a single column \n>>>of a table containing 2.8 million rows with the values of a column in \n>>>a table with similar rowcount. Using the above spec I had to stop the \n>>>server after 17 hours. The poor thing was thrashing the hard disk and \n>>>doing more swapping than useful work. \n>>> \n>>>\n>> \n>>This statement is pretty much content-free, since you did not show us \n>>the table schemas, the query, or the EXPLAIN output for the query. \n>>(I'll forgive you the lack of EXPLAIN ANALYZE, but you could easily \n>>have provided all the other hard facts.) There's really no way to tell \n>>where the bottleneck is. Maybe it's a kernel-level issue, but I would \n>>not bet on that without more evidence. 
I'd definitely not bet on it \n>>without direct confirmation that the same query plan was used in both \n>>setups. \n>> \n>>\t\t\tregards, tom lane \n>> \n>>---------------------------(end of broadcast)--------------------------- \n>>TIP 3: if posting/reading through Usenet, please send an appropriate \n>> subscribe-nomail command to [email protected] so that your \n>> message can get through to the mailing list cleanly \n>> \n>> \n>>-- \n>>Incoming mail is certified Virus Free. \n>>Checked by AVG Anti-Virus (http://www.grisoft.com). \n>>Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004 \n>> \n>> \n>>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n> \n>\n\n\n-- \nIncoming mail is certified Virus Free.\nChecked by AVG Anti-Virus (http://www.grisoft.com).\nVersion: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n\n", "msg_date": "Sat, 03 Apr 2004 21:20:49 +0100", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]
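Since plain EXPLAIN gives only estimates, one way to get actual per-node row counts and timings for the update itself, without keeping its effects, is to run EXPLAIN ANALYZE inside a transaction and roll it back. Note that the statement still does all of the work (and leaves dead tuples behind for VACUUM), so it takes the full time; it just doesn't keep the result:

BEGIN;
EXPLAIN ANALYZE
UPDATE staff_booking
   SET time_from = r.time_from
  FROM order_reqt r
 WHERE r.reqt_id = staff_booking.reqt_id;
ROLLBACK;

That makes the 2.4 versus 2.6 comparison above closer to the per-node statistics available from SQLServer's graphical plan.
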
[ { "msg_contents": "Following on from Josh's response and my previous reply on SQLServer planning.\n\nThe main problem query is this one:\n\nSELECT VS.*,VL.TEL1,SC.CONTRACT_ID,SC.CONTRACT_REF, SC.MAX_HOURS, \nSC.MIN_HOURS, \n (SELECT COUNT(*) FROM TIMESHEET_DETAIL JOIN MAIN_ORDER ON \n(MAIN_ORDER.ORDER_ID = TIMESHEET_DETAIL.ORDER_ID AND \nMAIN_ORDER.CLIENT_ID = 6) WHERE TIMESHEET_DETAIL.CONTRACT_ID = \nSC.CONTRACT_ID) AS VISITS,\n(SELECT (SUM(R.DURATION+1))/60.0 FROM ORDER_REQT R\n JOIN STAFF_BOOKING B ON (B.REQT_ID = R.REQT_ID)\n JOIN BOOKING_PLAN BP ON (BP.BOOKING_ID = B.BOOKING_ID) WHERE \nB.CONTRACT_ID = SC.CONTRACT_ID \n AND BP.BOOKING_DATE BETWEEN '2004-06-12' AND '2004-06-18') AS RHOURS \nFROM VSTAFF VS\nJOIN STAFF_CONTRACT SC ON (SC.STAFF_ID = VS.STAFF_ID)\nJOIN VLOCATION VL ON (VL.LOCATION_ID = VS.LOCATION_ID)\nJOIN SEARCH_REQT_RESULT SR ON (SR.STAFF_ID = VS.STAFF_ID)\nWHERE SR.SEARCH_ID = 1 AND SC.CONTRACT_ID IN\n(SELECT C.CONTRACT_ID FROM STAFF_PRODUCT P,STAFF_CONTRACT C \nWHERE P.CONTRACT_ID=C.CONTRACT_ID AND C.STAFF_ID = VS.STAFF_ID AND \nP.PRODUCT_ID IN (SELECT PRODUCT_ID FROM SEARCH_ORDER_REQT WHERE \nSEARCH_ID = 1) AND C.AVAIL_DATE_FROM <= '2004-06-12' AND \nC.AVAIL_DATE_TO >= '2004-06-18' GROUP BY C.CONTRACT_ID\n HAVING (COUNT(C.CONTRACT_ID) = (SELECT COUNT(DISTINCT PRODUCT_ID) \nFROM SEARCH_ORDER_REQT WHERE SEARCH_ID = 1)))\n\nThe explain analyze is:\nQUERY PLAN\nNested Loop (cost=101.54..1572059.57 rows=135 width=152) (actual \ntime=13749.100..1304586.501 rows=429 loops=1)\n InitPlan\n -> Index Scan using fk_idx_wruserarea on wruserarea (cost=3.26..6.52 rows=1 \nwidth=4) (actual time=0.944..0.944 rows=1 loops=1)\n Index Cond: (area_id = 1)\n Filter: (uid = $4)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 rows=1 width=4) (actual \ntime=0.686..0.691 rows=1 loops=1)\n Filter: ((username)::name = \"current_user\"())\n -> Hash Join (cost=95.02..3701.21 rows=215 width=138) (actual \ntime=100.476..1337.392 rows=429 loops=1)\n Hash Cond: (\"outer\".staff_id = \"inner\".staff_id)\n Join Filter: (subplan)\n -> Seq Scan on staff_contract sc (cost=0.00..33.24 rows=1024 width=37) (actual \ntime=0.114..245.366 rows=1024 loops=1)\n -> Hash (cost=93.95..93.95 rows=430 width=109) (actual time=38.563..38.563 \nrows=0 loops=1)\n -> Hash Join (cost=47.47..93.95 rows=430 width=109) (actual \ntime=15.502..36.627 rows=429 loops=1)\n Hash Cond: (\"outer\".staff_id = \"inner\".staff_id)\n -> Seq Scan on staff (cost=34.61..66.48 rows=1030 width=105) (actual \ntime=9.655..15.264 rows=1030 loops=1)\n Filter: ((hashed subplan) OR $5)\n SubPlan\n -> Seq Scan on staff_area (cost=10.73..33.38 rows=493 width=4) \n(actual time=8.452..8.452 rows=0 loops=1)\n Filter: ((hashed subplan) OR (area_id = 1))\n SubPlan\n -> Seq Scan on wruserarea (cost=3.26..10.72 rows=5 width=4) \n(actual time=0.977..1.952 rows=1 loops=1)\n Filter: (uid = $1)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 rows=1 width=4) \n(actual time=0.921..0.926 rows=1 loops=1)\n Filter: ((username)::name = \"current_user\"())\n -> Hash (cost=11.79..11.79 rows=430 width=4) (actual time=5.705..5.705 \nrows=0 loops=1)\n -> Index Scan using fk_idx_search_reqt_result on search_reqt_result \nsr (cost=0.00..11.79 rows=430 width=4) (actual time=0.470..4.482 rows=429 loops=1)\n Index Cond: (search_id = 1)\n SubPlan\n -> HashAggregate (cost=8.32..8.32 rows=1 width=4) (actual time=2.157..2.157 \nrows=1 loops=429)\n Filter: (count(contract_id) = $9)\n InitPlan\n -> Aggregate (cost=1.04..1.04 rows=1 width=4) (actual time=0.172..0.173 
\nrows=1 loops=1)\n -> Seq Scan on search_order_reqt (cost=0.00..1.04 rows=1 width=4) \n(actual time=0.022..0.038 rows=1 loops=1)\n Filter: (search_id = 1)\n -> Hash IN Join (cost=1.04..7.27 rows=1 width=4) (actual time=2.064..2.117 \nrows=1 loops=429)\n Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n -> Nested Loop (cost=0.00..6.19 rows=7 width=8) (actual \ntime=1.112..2.081 rows=8 loops=429)\n -> Index Scan using fk_idx_staff_contract_2 on staff_contract c \n(cost=0.00..3.03 rows=1 width=4) (actual time=0.206..0.245 rows=1 loops=429)\n Index Cond: (staff_id = $8)\n Filter: ((avail_date_from <= '2004-06-12'::date) AND (avail_date_to \n>= '2004-06-18'::date))\n -> Index Scan using fk_idx_staff_product on staff_product p \n(cost=0.00..3.08 rows=6 width=8) (actual time=0.873..1.764 rows=8 loops=429)\n Index Cond: (p.contract_id = \"outer\".contract_id)\n -> Hash (cost=1.04..1.04 rows=1 width=4) (actual time=0.086..0.086 \nrows=0 loops=1)\n -> Seq Scan on search_order_reqt (cost=0.00..1.04 rows=1 width=4) \n(actual time=0.037..0.050 rows=1 loops=1)\n Filter: (search_id = 1)\n -> Index Scan using location_pkey on \"location\" (cost=0.00..12.66 rows=1 width=18) \n(actual time=0.876..0.887 rows=1 loops=429)\n Index Cond: (\"location\".location_id = \"outer\".location_id)\n Filter: ((area_id = 1) OR (subplan))\n SubPlan\n -> Index Scan using fk_idx_wruserarea, fk_idx_wruserarea on wruserarea \n(cost=3.26..9.64 rows=1 width=4) (never executed)\n Index Cond: ((area_id = 1) OR (area_id = $7))\n Filter: (uid = $6)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 rows=1 width=4) (never executed)\n Filter: ((username)::name = \"current_user\"())\n SubPlan\n -> Aggregate (cost=11233.28..11233.29 rows=1 width=2) (actual \ntime=3036.814..3036.815 rows=1 loops=429)\n -> Nested Loop (cost=10391.71..11233.21 rows=30 width=2) (actual \ntime=2817.923..3036.516 rows=34 loops=429)\n -> Hash Join (cost=10391.71..11142.43 rows=30 width=4) (actual \ntime=2813.349..3007.936 rows=34 loops=429)\n Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n -> Index Scan using booking_plan_idx2 on booking_plan bp \n(cost=0.00..572.52 rows=23720 width=4) (actual time=0.070..157.028 rows=24613 \nloops=429)\n Index Cond: ((booking_date >= '2004-06-12'::date) AND \n(booking_date <= '2004-06-18'::date))\n -> Hash (cost=10382.78..10382.78 rows=3571 width=8) (actual \ntime=2746.122..2746.122 rows=0 loops=429)\n -> Index Scan using fk_idx_staff_booking on staff_booking b \n(cost=0.00..10382.78 rows=3571 width=8) (actual time=14.168..2733.315 rows=3815 \nloops=429)\n Index Cond: (contract_id = $0)\n -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..3.01 \nrows=1 width=6) (actual time=0.826..0.832 rows=1 loops=14401)\n Index Cond: (\"outer\".reqt_id = r.reqt_id)\n -> Aggregate (cost=363.94..363.94 rows=1 width=0) (actual time=0.057..0.058 \nrows=1 loops=429)\n -> Nested Loop (cost=0.00..363.94 rows=1 width=0) (actual time=0.034..0.034 \nrows=0 loops=429)\n -> Index Scan using fk_idx_main_order on main_order (cost=0.00..4.99 \nrows=1 width=4) (actual time=0.031..0.031 rows=0 loops=429)\n Index Cond: (client_id = 6)\n -> Index Scan using fk_idx_timesheet_detail_3 on timesheet_detail \n(cost=0.00..358.93 rows=1 width=4) (never executed)\n Index Cond: (\"outer\".order_id = timesheet_detail.order_id)\n Filter: (contract_id = $0)\nTotal runtime: 1304591.861 ms\n\nLong Time! The main issue here is that the RHOURS subselect is executed as a nested \njoin 429 times. 
unfortunately this is an expensive subquery.\n\nSQLServer executed this in just over 1 second on comparable hardware. Looking at its \nexecution plan it flattens out the two subselects with a merge join. So I manually rewrote \nthe query using derived tables and joins as:\n\nSELECT VS.*,VL.TEL1,SC.CONTRACT_ID,SC.CONTRACT_REF, SC.MAX_HOURS, \nSC.MIN_HOURS, TBOOK.RHOURS, TVIS.VISITS FROM SEARCH_REQT_RESULT \nSR\nJOIN STAFF_CONTRACT SC ON (SR.STAFF_ID = SC.STAFF_ID) AND \nSC.AVAIL_DATE_FROM <= '2004-06-12' AND SC.AVAIL_DATE_TO >= '2004-06-18'\nJOIN VSTAFF VS ON (VS.STAFF_ID = SC.STAFF_ID)\nJOIN VLOCATION VL ON (VL.LOCATION_ID = VS.LOCATION_ID)\nLEFT OUTER JOIN (SELECT B.CONTRACT_ID, SUM(R.DURATION+1)/60.0 AS \nRHOURS FROM STAFF_BOOKING B\nJOIN BOOKING_PLAN BP ON (BP.BOOKING_ID = B.BOOKING_ID) AND \nBP.BOOKING_DATE BETWEEN '2004-06-12' AND '2004-06-18'\nJOIN ORDER_REQT R ON (R.REQT_ID = B.REQT_ID)\n GROUP BY B.CONTRACT_ID) AS TBOOK\nON (SC.CONTRACT_ID = TBOOK.CONTRACT_ID)\nLEFT OUTER JOIN (SELECT CONTRACT_ID,COUNT(*) AS VISITS FROM \nTIMESHEET_DETAIL\nJOIN MAIN_ORDER ON (MAIN_ORDER.ORDER_ID = \nTIMESHEET_DETAIL.ORDER_ID) WHERE MAIN_ORDER.CLIENT_ID = 6 \nGROUP BY CONTRACT_ID) AS TVIS ON (TVIS.CONTRACT_ID = SC.CONTRACT_ID)\nJOIN (SELECT P.CONTRACT_ID FROM STAFF_PRODUCT P, \nSEARCH_ORDER_REQT SR\nWHERE P.PRODUCT_ID = SR.PRODUCT_ID AND SR.SEARCH_ID = 1\nGROUP BY P.CONTRACT_ID\nHAVING (COUNT(P.CONTRACT_ID) = (SELECT COUNT(DISTINCT PRODUCT_ID) \nFROM SEARCH_ORDER_REQT WHERE SEARCH_ID = 1))) AS TCONT ON \n(TCONT.CONTRACT_ID = SC.CONTRACT_ID)\nWHERE SR.SEARCH_ID = 1\n\nWith the explain analyze as:\nQUERY PLAN\nHash Join (cost=137054.42..137079.74 rows=159 width=192) (actual \ntime=6228.354..6255.058 rows=429 loops=1)\n Hash Cond: (\"outer\".contract_id = \"inner\".contract_id)\n InitPlan\n -> Index Scan using fk_idx_wruserarea on wruserarea (cost=3.26..6.52 rows=1 \nwidth=4) (actual time=0.850..0.850 rows=1 loops=1)\n Index Cond: (area_id = 1)\n Filter: (uid = $3)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 rows=1 width=4) (actual \ntime=0.670..0.675 rows=1 loops=1)\n Filter: ((username)::name = \"current_user\"())\n -> Subquery Scan tcont (cost=152.63..161.81 rows=612 width=4) (actual \ntime=36.312..42.268 rows=612 loops=1)\n -> HashAggregate (cost=152.63..155.69 rows=612 width=4) (actual \ntime=36.301..40.040 rows=612 loops=1)\n Filter: (count(contract_id) = $7)\n InitPlan\n -> Aggregate (cost=1.04..1.04 rows=1 width=4) (actual time=0.107..0.108 \nrows=1 loops=1)\n -> Seq Scan on search_order_reqt (cost=0.00..1.04 rows=1 width=4) \n(actual time=0.025..0.037 rows=1 loops=1)\n Filter: (search_id = 1)\n -> Hash Join (cost=1.04..148.53 rows=612 width=4) (actual \ntime=0.419..32.284 rows=612 loops=1)\n Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n -> Seq Scan on staff_product p (cost=0.00..109.91 rows=6291 width=8) \n(actual time=0.117..17.943 rows=6291 loops=1)\n -> Hash (cost=1.04..1.04 rows=1 width=4) (actual time=0.190..0.190 \nrows=0 loops=1)\n -> Seq Scan on search_order_reqt sr (cost=0.00..1.04 rows=1 \nwidth=4) (actual time=0.165..0.177 rows=1 loops=1)\n Filter: (search_id = 1)\n -> Hash (cost=136894.61..136894.61 rows=266 width=192) (actual \ntime=6191.923..6191.923 rows=0 loops=1)\n -> Merge Left Join (cost=136886.03..136894.61 rows=266 width=192) (actual \ntime=6143.315..6189.685 rows=429 loops=1)\n Merge Cond: (\"outer\".contract_id = \"inner\".contract_id)\n -> Merge Left Join (cost=136517.64..136525.04 rows=266 width=184) (actual \ntime=6142.896..6171.676 rows=429 
loops=1)\n Merge Cond: (\"outer\".contract_id = \"inner\".contract_id)\n -> Sort (cost=5529.68..5530.34 rows=266 width=152) (actual \ntime=129.548..130.027 rows=429 loops=1)\n Sort Key: sc.contract_id\n -> Nested Loop (cost=88.35..5518.96 rows=266 width=152) (actual \ntime=33.213..121.666 rows=429 loops=1)\n -> Hash Join (cost=88.35..143.88 rows=424 width=138) (actual \ntime=32.739..76.357 rows=429 loops=1)\n Hash Cond: (\"outer\".staff_id = \"inner\".staff_id)\n -> Hash Join (cost=47.47..93.95 rows=430 width=109) (actual \ntime=15.232..40.040 rows=429 loops=1)\n Hash Cond: (\"outer\".staff_id = \"inner\".staff_id)\n -> Seq Scan on staff (cost=34.61..66.48 rows=1030 \nwidth=105) (actual time=9.412..16.105 rows=1030 loops=1)\n Filter: ((hashed subplan) OR $4)\n SubPlan\n -> Seq Scan on staff_area (cost=10.73..33.38 \nrows=493 width=4) (actual time=8.380..8.380 rows=0 loops=1)\n Filter: ((hashed subplan) OR (area_id = 1))\n SubPlan\n -> Seq Scan on wruserarea (cost=3.26..10.72 \nrows=5 width=4) (actual time=0.953..1.941 rows=1 loops=1)\n Filter: (uid = $0)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 \nrows=1 width=4) (actual time=0.902..0.908 rows=1 loops=1)\n Filter: ((username)::name = \n\"current_user\"())\n -> Hash (cost=11.79..11.79 rows=430 width=4) (actual \ntime=5.670..5.670 rows=0 loops=1)\n -> Index Scan using fk_idx_search_reqt_result on \nsearch_reqt_result sr (cost=0.00..11.79 rows=430 width=4) (actual time=0.448..4.516 \nrows=429 loops=1)\n Index Cond: (search_id = 1)\n -> Hash (cost=38.36..38.36 rows=1008 width=37) (actual \ntime=17.386..17.386 rows=0 loops=1)\n -> Seq Scan on staff_contract sc (cost=0.00..38.36 \nrows=1008 width=37) (actual time=0.222..14.063 rows=1008 loops=1)\n Filter: ((avail_date_from <= '2004-06-12'::date) AND \n(avail_date_to >= '2004-06-18'::date))\n -> Index Scan using location_pkey on \"location\" (cost=0.00..12.66 \nrows=1 width=18) (actual time=0.043..0.050 rows=1 loops=429)\n Index Cond: (\"location\".location_id = \"outer\".location_id)\n Filter: ((area_id = 1) OR (subplan))\n SubPlan\n -> Index Scan using fk_idx_wruserarea, fk_idx_wruserarea on \nwruserarea (cost=3.26..9.64 rows=1 width=4) (never executed)\n Index Cond: ((area_id = 1) OR (area_id = $6))\n Filter: (uid = $5)\n InitPlan\n -> Seq Scan on wruser (cost=0.00..3.26 rows=1 width=4) \n(never executed)\n Filter: ((username)::name = \"current_user\"())\n -> Sort (cost=130987.97..130989.96 rows=797 width=36) (actual \ntime=6013.254..6014.112 rows=746 loops=1)\n Sort Key: tbook.contract_id\n -> Subquery Scan tbook (cost=130933.62..130949.56 rows=797 \nwidth=36) (actual time=5993.070..6007.677 rows=746 loops=1)\n -> HashAggregate (cost=130933.62..130941.59 rows=797 \nwidth=6) (actual time=5993.055..6004.099 rows=746 loops=1)\n -> Merge Join (cost=74214.90..130815.02 rows=23720 \nwidth=6) (actual time=4950.951..5807.985 rows=24613 loops=1)\n Merge Cond: (\"outer\".reqt_id = \"inner\".reqt_id)\n -> Index Scan using order_reqt_pkey on order_reqt r \n(cost=0.00..50734.20 rows=2206291 width=6) (actual time=0.444..2753.374 \nrows=447439 loops=1)\n -> Sort (cost=74214.90..74274.20 rows=23720 width=8) \n(actual time=1822.405..1856.081 rows=24613 loops=1)\n Sort Key: b.reqt_id\n -> Nested Loop (cost=0.00..72491.19 rows=23720 \nwidth=8) (actual time=1.955..1633.124 rows=24613 loops=1)\n -> Index Scan using booking_plan_idx2 on \nbooking_plan bp (cost=0.00..572.52 rows=23720 width=4) (actual time=1.468..243.827 \nrows=24613 loops=1)\n Index Cond: ((booking_date >= '2004-06-\n12'::date) AND 
(booking_date <= '2004-06-18'::date))\n -> Index Scan using staff_booking_pkey on \nstaff_booking b (cost=0.00..3.02 rows=1 width=12) (actual time=0.037..0.042 rows=1 \nloops=24613)\n Index Cond: (\"outer\".booking_id = b.booking_id)\n -> Sort (cost=368.38..368.55 rows=68 width=12) (actual time=0.338..0.338 \nrows=0 loops=1)\n Sort Key: tvis.contract_id\n -> Subquery Scan tvis (cost=365.46..366.31 rows=68 width=12) (actual \ntime=0.307..0.307 rows=0 loops=1)\n -> HashAggregate (cost=365.46..365.63 rows=68 width=4) (actual \ntime=0.302..0.302 rows=0 loops=1)\n -> Nested Loop (cost=0.00..365.12 rows=68 width=4) (actual \ntime=0.290..0.290 rows=0 loops=1)\n -> Index Scan using fk_idx_main_order on main_order \n(cost=0.00..4.99 rows=1 width=4) (actual time=0.286..0.286 rows=0 loops=1)\n Index Cond: (client_id = 6)\n -> Index Scan using fk_idx_timesheet_detail_3 on \ntimesheet_detail (cost=0.00..358.63 rows=120 width=8) (never executed)\n Index Cond: (\"outer\".order_id = timesheet_detail.order_id)\nTotal runtime: 6266.205 ms\n\nThis now gives me the same results, but with orders of magnitude better execution \ntimes!\n\nOddly enough, SQLServer really struggles with the second query, taking longer then \nPostgreSQL!!!!\n\nRegards,\nGary.\n\n\nOn 3 Apr 2004 at 10:59, Josh Berkus wrote:\n\nGary,\n\n> There are no indexes on the columns involved in the update, they are \n> not required for my usual select statements. This is an attempt to \n> slightly denormalise the design to get the performance up comparable \n> to SQL Server 2000. We hope to move some of our databases over to \n> PostgreSQL later in the year and this is part of the ongoing testing. \n> SQLServer's query optimiser is a bit smarter that PostgreSQL's (yet) \n> so I am hand optimising some of the more frequently used \n> SQL and/or tweaking the database design slightly. \n\nHmmm ... that hasn't been my general experience on complex queries. However, \nit may be due to a difference in ANALYZE statistics. I'd love to see you \nincrease your default_stats_target, re-analyze, and see if PostgreSQL gets \n\"smarter\".\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n-- \nIncoming mail is certified Virus Free.\nChecked by AVG Anti-Virus (http://www.grisoft.com).\nVersion: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n\n", "msg_date": "Sat, 03 Apr 2004 22:29:01 +0100", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]