[ { "msg_contents": "On 3 Apr 2004 at 21:23, Mike Nolan wrote:\n\n> > Almost any cross dbms migration shows a drop in performance. The engine\n> > effectively trains developers and administrators in what works and what\n> > doesn't. The initial migration thus compares a tuned to an untuned version.\n> \n> I think it is also possible that Microsoft has more programmers working\n> on tuning issues for SQL Server than PostgreSQL has working on the \n> whole project.\n> --\n> Mike Nolan\n> \n\nAgreed. Also considering the high price of SQLServer it is in their \ninterests to spend a lot of resources on tuning/performance to give it a \ncommercial edge over it rivals and in silly benchmark scores.\n\nCheers,\nGary.\n \n\n> -- \n> Incoming mail is certified Virus Free.\n> Checked by AVG Anti-Virus (http://www.grisoft.com).\n> Version: 7.0.230 / Virus Database: 262.6.5 - Release Date: 31/03/2004\n> \n\n\n", "msg_date": "Sun, 04 Apr 2004 09:52:40 +0100", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Linux 2.6 kernel." } ]
[ { "msg_contents": "Hi Aaron,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of \n> Aaron Werman\n> Sent: vrijdag 2 april 2004 13:57\n> \n> \n> another thing that I have all over the place is a hierarchy:\n> index on grandfather_table(grandfather)\n> index on father_table(grandfather, father)\n> index on son_table(grandfather, father, son)\n> \n\nIt depends on your data-distribution, but I find that in almost all cases it's beneficial to have your indexes the other way round in such cases:\n\nindex on grandfather_table(grandfather)\nindex on father_table(father, grandfather)\nindex on son_table(son, father, grandfather)\n\nThat usually gives a less common, more selective value at the start of the index, making the initial selection in the index smaller.\n\nAnd AFAIK I don't have to rewrite my queries for that; the planner doesn't care about the order of expressions in the query that are on the same level.\n\nThat said, I tend to use 'surrogate keys'; keys generated from sequences or auto-number columns for my tables. It makes the tables less readable, but the indexes remain smaller.\n\n\nGreetings,\n\n--Tim\n\n\n", "msg_date": "Sun, 4 Apr 2004 22:06:11 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: single index on more than two coulumns a bad thing?" }, { "msg_contents": "You're absolutely correct that the general rule is to lead a composite index\nwith the highest cardinality index columns for fastest selectivity. Indices\nand all physical design are based on usage. In this case of unique indices\nsupporting primary keys in a hierarchy, it depends. For selection of small\nsets of arbitrary rows, your arrangement is best. For hierarchy based\nqueries, such as \"for grandparent of foo, and parent of bar, give average\nage of sons\" - the hierarchy based index is often more efficient.\n\nSurrogate keys have a role, and can improve performance, but also carry an\nenormous penalty of intentionally obfuscating logical keys and data\nsemantics, and almost always lead to data errors not being caught because\nthey obscure irrational relationships. 
I hate them, but use them frequently\nin high transaction rate operational systems where there is much functional\nvalidation outside the dbms (and the apps behave therefore like object\ndatabases and surrogate keys are network database pointers) and in data\nwarehousing (where downstream data cannot be corrected anyway).\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Leeuw van der, Tim\" <[email protected]>\nTo: <[email protected]>\nSent: Sunday, April 04, 2004 5:06 PM\nSubject: Re: [PERFORM] single index on more than two coulumns a bad thing?\n\n\nHi Aaron,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> Aaron Werman\n> Sent: vrijdag 2 april 2004 13:57\n>\n>\n> another thing that I have all over the place is a hierarchy:\n> index on grandfather_table(grandfather)\n> index on father_table(grandfather, father)\n> index on son_table(grandfather, father, son)\n>\n\nIt depends on your data-distribution, but I find that in almost all cases\nit's beneficial to have your indexes the other way round in such cases:\n\nindex on grandfather_table(grandfather)\nindex on father_table(father, grandfather)\nindex on son_table(son, father, grandfather)\n\nThat usually gives a less common, more selective value at the start of the\nindex, making the initial selection in the index smaller.\n\nAnd AFAIK I don't have to rewrite my queries for that; the planner doesn't\ncare about the order of expressions in the query that are on the same level.\n\nThat said, I tend to use 'surrogate keys'; keys generated from sequences or\nauto-number columns for my tables. It makes the tables less readable, but\nthe indexes remain smaller.\n\n\nGreetings,\n\n--Tim\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Sun, 4 Apr 2004 22:09:21 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: single index on more than two coulumns a bad thing?" } ]
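[Editor's note: the trade-off Tim and Aaron describe above is easy to try directly. The sketch below uses a made-up son_table (only the column names come from the thread) and builds both orderings side by side; which one wins depends entirely on whether the workload looks like the point lookup or the hierarchy aggregate.

    -- Hypothetical table; only the column names are taken from the thread.
    CREATE TABLE son_table (
        grandfather integer NOT NULL,
        father      integer NOT NULL,
        son         integer NOT NULL,
        age         integer
    );

    -- Tim's ordering: most selective column first, best for fetching one son.
    CREATE INDEX son_by_son  ON son_table (son, father, grandfather);

    -- Aaron's ordering: hierarchy prefix first, best for subtree-style queries.
    CREATE INDEX son_by_tree ON son_table (grandfather, father, son);

    -- "For grandparent 1 and parent 42, give average age of sons" can use the
    -- leading (grandfather, father) prefix of son_by_tree, but not the leading
    -- column of son_by_son:
    SELECT avg(age) FROM son_table WHERE grandfather = 1 AND father = 42;

Running EXPLAIN ANALYZE on both variants against realistic data is the only reliable way to choose, since, as Tim notes, the planner cares about the column statistics, not the order of expressions in the WHERE clause.]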
[ { "msg_contents": "hi list,\n\ni want to convince people to use postgresql instead of ms-sql server, so i\nset up a kind of comparission insert data / select data from postgresql /\nms-sql server\n\nthe table i use was pretty basic,\n\nid bigserial\ndist float8\nx float8\ny float8\nz float8\n\ni filled the table with a function which filled x,y,z with incremental\nincreasing values (1,2,3,4,5,6...) and computing from that the dist value\nfor every tupel (sqrt((x*x)+(y*y)+(z*z))).\n\nthis works fine for both dbms\n\npostgresql needs 13:37 min for 10.000.000 tupel,\nms-sql needs 1:01:27 h for 10.000.000 tupel.\n\nso far so good.\n\ni attached an index on the dist row and started to query the dbs with\nscripts which select a serial row of 100.000,200.000,500.000 tupels based\non the dist row.\ni randomizly compute the start and the end distance and made a \"select\navg(dist) from table where dist > startdist and dist < enddist\"\n\nDid the same with a table with 50.000.000 tupel in ms-sql and postgres.\n\nthe outcome so far:\n\n100.000 from 50.000.000:\n\npostgres: 0.88 sec\nms-sql: 0.38 sec\n\n200.000 from 50.000.000:\n\npostgres: 1.57 sec\nms-sql: 0.54 sec\n\n500.000 from 50.000.000:\n\npostgres: 3.66 sec\nms-sql: 1.18 sec\n\ni try a lot of changes to the postgresql.conf regarding \"Tuning\nPostgreSQL for performance\"\nby\nShridhar Daithankar, Josh Berkus\n\nwhich did not make a big diffrence to the answering times from postgresql.\n\ni'm pretty fine with the insert time...\n\ndo you have any hints like compiler-flags and so on to get the answering\ntime from postgresql equal to ms-sql?\n\n(btw both dbms were running on exactly the same hardware)\n\ni use suse 8.1\n postgresql 7.2 compiled from the rpms for using postgis, but that is\nanothe story...\n 1.5 gig ram\n 1.8 mhz intel cpu\n\n\nevery help welcome\n\nbest regards heiko\n\n\n\n", "msg_date": "Mon, 5 Apr 2004 17:31:39 +0200 (CEST)", "msg_from": "\"Heiko Kehlenbrink\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance comparission postgresql/ms-sql server" }, { "msg_contents": "Heiko,\n\n> 100.000 from 50.000.000:\n>\n> postgres: 0.88 sec\n> ms-sql: 0.38 sec\n>\n> 200.000 from 50.000.000:\n>\n> postgres: 1.57 sec\n> ms-sql: 0.54 sec\n>\n> 500.000 from 50.000.000:\n>\n> postgres: 3.66 sec\n> ms-sql: 1.18 sec\n\nQuestions:\n\n1. Is this the time to return *all rows* or just the first row? Given the \ndifferent way that PostgreSQL fetches rows to the client from MSSQL, it makes \na difference.\n\n2. What are your sort-mem and shared-mem settings?\n\n3. Have you tried clustering the table?\n\n4. Have you done a comparison of selecting random or scattered, instead of \nserial rows? MSSQL has a tendency to physically store rows in \"order\" which \ngives it a certain advantage in this kind of query.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 5 Apr 2004 08:52:51 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" }, { "msg_contents": "Heiko Kehlenbrink wrote:\n\n> hi list,\n> \n> i want to convince people to use postgresql instead of ms-sql server, so i\n> set up a kind of comparission insert data / select data from postgresql /\n> ms-sql server\n> \n> the table i use was pretty basic,\n> \n> id bigserial\n> dist float8\n> x float8\n> y float8\n> z float8\n> \n> i filled the table with a function which filled x,y,z with incremental\n> increasing values (1,2,3,4,5,6...) 
and computing from that the dist value\n> for every tupel (sqrt((x*x)+(y*y)+(z*z))).\n> \n> this works fine for both dbms\n> \n> postgresql needs 13:37 min for 10.000.000 tupel,\n> ms-sql needs 1:01:27 h for 10.000.000 tupel.\n> \n> so far so good.\n> \n> i attached an index on the dist row and started to query the dbs with\n> scripts which select a serial row of 100.000,200.000,500.000 tupels based\n> on the dist row.\n> i randomizly compute the start and the end distance and made a \"select\n> avg(dist) from table where dist > startdist and dist < enddist\"\n\nSome basics to check quickly.\n\n1. vacuum analyze the table before you start selecting.\n2. for slow running queries, check explain analyze output and find out who takes \nmaximum time.\n3. Check for typecasting. You need to typecast the query correctly e.g.\n\nselect avg(dist) from table where dist >startdist::float8 and dist<enddist::float8..\n\nThis might still end up with sequential scan depending upon the plan. but if \nindex scan is picked up, it might be plenty fast..\n\nPost explain analyze for the queries if things don't improve.\n\n HTH\n\n Shridhar\n\n", "msg_date": "Mon, 05 Apr 2004 21:24:46 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" }, { "msg_contents": "\"Heiko Kehlenbrink\" <[email protected]> writes:\n> i use suse 8.1\n> postgresql 7.2 compiled from the rpms for using postgis, but that is\n> anothe story...\n\n7.4 might be a little quicker; but in any case you should be doing this\nsort of comparison using the current release, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Apr 2004 12:11:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server " }, { "msg_contents": "Heiko Kehlenbrink wrote:\n\n>i use suse 8.1\n> postgresql 7.2 compiled from the rpms for using postgis, but that is\n>\n> \n>\nTry v7.4, there are many performance improvements. It may not make up \nall the differences but it should help.\n", "msg_date": "Mon, 05 Apr 2004 17:28:33 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" }, { "msg_contents": "Heiko Kehlenbrink wrote:\n\n> hkehlenbrink@lin0493l:~> psql -d test -c 'explain analyse select avg(dist)\n> from massive2 where dist > (1000000*sqrt(3.0))::float8 and dist <\n> (1500000*sqrt(3.0))::float8;'\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=14884.61..14884.61 rows=1 width=8) (actual\n> time=3133.24..3133.24 rows=1 loops=1)\n> -> Index Scan using massive2_dist on massive2 (cost=0.00..13648.17\n> rows=494573 width=8) (actual time=0.11..2061.38 rows=499999 loops=1)\n> Total runtime: 3133.79 msec\n> \n> EXPLAIN\n> \n> seems to me that most time was needed for the index scanning...\n\nHmm... I would suggest if you are testing, you should try 7.4.2. 7.4 has some \ngood optimisation for hash agregates though I am not sure if it apply to averaging.\n\nAlso try forcing a seq. scan by turning off index scan. I guess index scan for \nso many rows is not exactly good thing even if tuple size if pretty small.\n\n Shridhar\n", "msg_date": "Tue, 06 Apr 2004 12:57:13 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" }, { "msg_contents": "Heiko Kehlenbrink wrote:\n>>Hmm... 
I would suggest if you are testing, you should try 7.4.2. 7.4 has\n>>some\n>>good optimisation for hash agregates though I am not sure if it apply to\n>>averaging.\n> would be the last option till we are runing other applications on that 7.2\n> system\n\nI can understand..\n\n>>Also try forcing a seq. scan by turning off index scan. I guess index scan\n>>for\n>>so many rows is not exactly good thing even if tuple size if pretty small.\n> a sequential scann gives me the following result:\n> \n> HKehlenbrink@lin0493l:~> time psql -d test -c 'explain analyse select\n> avg(dist) from massive2 where dist > 1000000*sqrt(3.0)::float8 and dist <\n> 1500000*sqrt(3.0)::float8 ;'\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=1193714.43..1193714.43 rows=1 width=8) (actual\n> time=166718.54..166718.54 rows=1 loops=1)\n> -> Seq Scan on massive2 (cost=0.00..1192478.00 rows=494573 width=8)\n> (actual time=3233.22..165576.40 rows=499999 loops=1)\n> Total runtime: 166733.73 msec\n\nCertainly bad and not an option.. I can't think of anything offhand to speed \nthis up..\n\n Shridhar\n", "msg_date": "Tue, 06 Apr 2004 20:11:15 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" }, { "msg_contents": "Heiko Kehlenbrink wrote:\n\n>i want to convince people to use postgresql instead of ms-sql server, so i\n>set up a kind of comparission insert data / select data from postgresql /\n>ms-sql server\n> \n>\n[...]\n\n>do you have any hints like compiler-flags and so on to get the answering\n>time from postgresql equal to ms-sql?\n>\n>(btw both dbms were running on exactly the same hardware)\n>\n>i use suse 8.1\n> postgresql 7.2 compiled from the rpms for using postgis, but that is\n>anothe story...\n> 1.5 gig ram\n> 1.8 mhz intel cpu\n>\n>\n>every help welcome\n> \n>\nSuse 8.1 comes with 2.4 series kernel I suppose. Many have witnessed a \nspeed increase when using 2.6 series kernel. Might consider this too \nbesides the newer PostgreSQL version already suggested. 2.6 has some \nscheduling options that are not enabled by default but may enhance \ndatabase performance \n(http://story.news.yahoo.com/news?tmpl=story&cid=75&e=2&u=/nf/20040405/tc_nf/23603).\n\nKaarel\n", "msg_date": "Tue, 06 Apr 2004 17:43:21 +0300", "msg_from": "Kaarel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance comparission postgresql/ms-sql server" } ]
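[Editor's note: one suggestion in this thread that is easy to overlook is Josh's question about clustering. Since the benchmark selects contiguous ranges of dist, physically ordering the table on the dist index is the closest PostgreSQL equivalent to the on-disk ordering MS-SQL benefits from. A minimal sketch, reusing the table and index names that appear in the thread and the CLUSTER syntax of the 7.x releases:

    -- Rewrite the table in dist order so a range scan touches adjacent heap
    -- pages; CLUSTER takes an exclusive lock and must be re-run as data grows.
    CLUSTER massive2_dist ON massive2;
    ANALYZE massive2;

    -- Re-check the range query with the explicit float8 casts suggested above:
    EXPLAIN ANALYZE
    SELECT avg(dist)
    FROM massive2
    WHERE dist > (1000000 * sqrt(3.0))::float8
      AND dist < (1500000 * sqrt(3.0))::float8;

Treat this as a sketch rather than a guaranteed fix: the oldest releases had notable CLUSTER caveats (it rewrites the table and did not preserve everything attached to it), so it is best paired with the upgrade to 7.4 that several replies recommend.]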
[ { "msg_contents": "I think it was on this list - someone posted a message about SETOF \nbeing slower. Tom replied saying it was because it needed to create an \non-disk tuplestore.\n\nI was just looking for some clarification - a SETOF function will \nalways write the reslting tuples to disk (Not buffering in say a \nsort_mem sized buffer)?\n\nI think if that is the case I may need to go back and change some stuff \naround.\nI have a procedure that I broke out a bit to make life easier.\n\nBasically it goes\n\nfor v_row in\n\tselect blah from function_that_gets_data_from_some_cache(....)\n\trowcount := rowcount + 1;\n\treturn next v_row;\nend for;\n\nif rowcount = 0 then\n\t[same thing, but we call some_function_that_creates_data_for_cache]\nend if;\n\nDoing it this way means I avoid having to deal with it in the client \nand I also avoid having a giant stored procedure. (I like short & sweet \nthings)\n\nWhat I've found for timings is this:\n\nselect * from function_that_gets_data_from_some_cache() runs around 1.8 \nms\nbut select * from the_top_level_function() runs around 4.2ms\n(Yes, I know 4.2 ms is fast, but that is not the point).\n\ncould this overhead be related to the SETOF tuplestores?\n\nMight it be better to use refcursor or something or bite the bullet and \nlive with a giant procedure?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Mon, 5 Apr 2004 12:28:36 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "SETOF performance" }, { "msg_contents": "Jeff wrote:\n> I think it was on this list - someone posted a message about SETOF \n> being slower. Tom replied saying it was because it needed to create an \n> on-disk tuplestore.\n> \n> I was just looking for some clarification - a SETOF function will always \n> write the reslting tuples to disk (Not buffering in say a sort_mem sized \n> buffer)?\n\nI think at least part of what you're seeing is normal function call \noverhead. As far as tuplestores writing to disk, here's what the source \nsays:\n\nIn src/backend/utils/sort/tuplestore.c\n8<---------------------------------------\n * maxKBytes: how much data to store in memory (any data beyond this\n * amount is paged to disk). When in doubt, use work_mem.\n */\nTuplestorestate *\ntuplestore_begin_heap(bool randomAccess, bool interXact, int maxKBytes)\n8<---------------------------------------\n\nIn src/backend/executor/execQual.c:ExecMakeTableFunctionResult():\n8<---------------------------------------\ntupstore = tuplestore_begin_heap(true, false, work_mem);\n8<---------------------------------------\n\nSo up to work_mem (sort_mem in 7.4 and earlier) should be stored in memory.\n\nJoe\n\n", "msg_date": "Mon, 05 Apr 2004 10:49:37 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SETOF performance" } ]
[ { "msg_contents": "\nNo point to beating a dead horse (other than the sheer joy of the thing) since postgres does not have raw device support, but ...\n\nraw devices, at least on solaris, are about 10 times as fast as cooked file systems for Informix. This might still be a gain for postgres' performance, but the portability issues remain.\n\nraw device use in Informix is safer in terms of data because Informix does not ever have to use the regular file system and so issues of buffering and so on go away. My understanding -- fortunately not ever tried in the real world -- is that postgres' WAL log system is as reliable as Informix writing to raw devices.\n\nraw devices can't be copied or tampered with with regular file tools (mv, cp etc.); this changes how backups get done but also adds a layer of insulation between valuable data and users.\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n-----Original Message-----\nFrom:\tChristopher Browne [mailto:[email protected]]\nSent:\tMon 3/29/2004 10:28 AM\nTo:\[email protected]\nCc:\t\nSubject:\tRe: [ADMIN] Raw devices vs. Filesystems\nAfter takin a swig o' Arrakan spice grog, [email protected] (\"Jaime Casanova\") belched out:\n> Can you tell me (or at least guide me to a palce where i can find the\n> answer) what are the benefits of filesystems over raw devices?\n\nFor PostgreSQL, filesystems have the merit that you can actually use\nthem. PostgreSQL doesn't support use of \"raw devices.\"\n\nTwo major benefits of using filesystems as opposed to raw devices are\nthat:\n\na) The use of raw devices is dramatically non-portable; you have to\n reimplement data access on every platform you are trying to\n support; \n\nb) The use of raw devices essentially mandates that you implement\n some form of generic filesystem on top of them, which adds\n considerable complexity to your code.\n\nTwo benefits to raw devices are claimed...\n\nc) It's faster. But that assumes that the \"cooked\" filesystems are\n implemented fairly badly. That was typically true, a dozen\n years ago, but it isn't so typical now, particularly with a\n fancy cacheing controller.\n\nd) It guarantees application control of update ordering. Of course,\n with a cacheing controller, or disk drives that lie to one degree\n or another, those guarantees might be gone anyways.\n\nThere are other filesystem advantages, such as\n\ne) Shifting \"cooked\" data around may be as simple as a \"mv,\" whereas\n reorganizing on raw disk requires DB-specific tools...\n\n> And what filesystem is the best for postgresql performance?\n\nThat would depend, assortedly, on what OS you are using, what kind of\nhardware you are running on, what kind of usage patterns you have, as\nwell as on how you define the notion of \"best.\"\n\nAbsent of any indication of any of those things, the best that can be\nsaid is \"that depends...\"\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://cbbrowne.com/info/languages.html\nTTY Message from The-XGP at MIT-AI:\nThe-XGP@AI 02/59/69 02:59:69\nYour XGP output is startling.\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n\n", "msg_date": "Mon, 5 Apr 2004 12:43:21 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raw devices vs. Filesystems" }, { "msg_contents": "[email protected] (\"Gregory S. 
Williamson\") writes:\n> No point to beating a dead horse (other than the sheer joy of the\n> thing) since postgres does not have raw device support, but ... raw\n> devices, at least on solaris, are about 10 times as fast as cooked\n> file systems for Informix. This might still be a gain for postgres'\n> performance, but the portability issues remain.\n\nThat claim seems really rather remarkable.\n\nIt implies an entirely stunning degree of inefficiency in the\nimplementation of filesystems on Solaris.\n\nThe amount of indirection involved in walking through i-nodes and such\nis something I would expect to introduce some percentage of\nperformance loss, but for it to introduce overhead of over 900%\npresumably implies that Sun (and/or Veritas) got something really\nhorribly wrong.\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\nRules of the Evil Overlord #1. \"My Legions of Terror will have helmets\nwith clear plexiglass visors, not face-concealing ones.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 06 Apr 2004 16:57:02 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices vs. Filesystems" }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> That claim seems really rather remarkable.\n> It implies an entirely stunning degree of inefficiency in the\n> implementation of filesystems on Solaris.\n\nSolaris has a reputation for having stunning degrees of inefficiency\nin a number of places :-(. On the other hand I've also heard it praised\nfor its ability to survive partial hardware failures (eg, N out of M\nCPUs down), so maybe that's the price you gotta pay.\n\nBut to get back to the point of this discussion: to allow PG to use raw\ndevices instead of filesystems, we'd first have to do a ton of\nportability work (since raw disk access is nowhere standard), and\nabandon our principle that Postgres does not run as root (since raw disk\naccess is not permitted to non-root processes by any sane sysadmin).\nBut that last is a mighty comforting principle to have, anytime someone\ncomplains that their el cheapo whitebox PC locks up as soon as they\nstart to stress the database. I know I'd have wasted a lot more time\nchasing random hardware breakages if I couldn't say \"system freezes and\nfilesystem corruption are Clearly Not Our Fault\".\n\nAfter that, we get to implement our own filesystem-equivalent management\nof disk space allocation, disk I/O scheduling, etc. Are we really\nsmarter than all those kernel hackers doing this for a living? I doubt it.\n\nAfter that, we get to re-optimize all the existing Postgres behaviors\nthat are designed to sit on top of a standard Unix buffering filesystem\nlayer.\n\nAfter that, we might reap some performance benefits. Or maybe not.\nThere's not a heck of a lot of hard evidence that we would --- and\nwhat there is traces to twenty-year-old assumptions about disk drive\nand OS behavior, which are quite unlikely to still apply today.\n\nPersonally, I have a lot of more-promising projects to pursue...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Apr 2004 01:26:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices vs. Filesystems " }, { "msg_contents": "...and on Wed, Apr 07, 2004 at 01:26:02AM -0400, Tom Lane used the keyboard:\n> \n> After that, we get to implement our own filesystem-equivalent management\n> of disk space allocation, disk I/O scheduling, etc. 
Are we really\n> smarter than all those kernel hackers doing this for a living? I doubt it.\n> \n> After that, we get to re-optimize all the existing Postgres behaviors\n> that are designed to sit on top of a standard Unix buffering filesystem\n> layer.\n> \n> After that, we might reap some performance benefits. Or maybe not.\n> There's not a heck of a lot of hard evidence that we would --- and\n> what there is traces to twenty-year-old assumptions about disk drive\n> and OS behavior, which are quite unlikely to still apply today.\n> \n> Personally, I have a lot of more-promising projects to pursue...\n> \n\nHas anyone tried PostgreSQL on top of OCFS? Personally, I'm not sure it\nwould even work, as Oracle clearly state that OCFS was _never_ meant to\nbe a fully fledged UNIX filesystem with POSIX features such as correct\ntimestamp updates, inode changes, etc., but OCFSv2 brings some features\nthat might lead one into thinking they're about to make it suitable for\nuses beyond that of just having Oracle databases sitting on top of it.\n\nFurthermore, this filesystem would be a blazing one stop solution for\nall replication issues PostgreSQL currently suffers from, as its main\ndesign goal was to present \"a consistent file system image across the\nservers in a cluster\".\n\nNow, if both goals can be achieved in one go, hell, I'm willing to try\nit out myself in an attempt to extract off of it, some performance\nindicators that could be compared to other database performance tests\nsent to both this and the PERFORM mailing list.\n\nSo, anyone? :)\n\nCheers,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/", "msg_date": "Wed, 7 Apr 2004 09:18:58 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices vs. Filesystems" }, { "msg_contents": "In article <[email protected]>,\nTom Lane <[email protected]> writes:\n\n> But to get back to the point of this discussion: to allow PG to use raw\n> devices instead of filesystems, we'd first have to do a ton of\n> portability work (since raw disk access is nowhere standard), and\n> abandon our principle that Postgres does not run as root (since raw disk\n> access is not permitted to non-root processes by any sane sysadmin).\n\nWhy not? In MySQL/InnoDB, you do a \"chown mysql.daemon /dev/raw/raw1\"\n(or whatever raw disk you want to access), and that's all.\n\n> After that, we get to implement our own filesystem-equivalent management\n> of disk space allocation, disk I/O scheduling, etc. Are we really\n> smarter than all those kernel hackers doing this for a living? I doubt it.\n\nDitto. I don't have hard numbers for MySQL, but I didn't see any\nnoticeable improvement when messing with raw disks (at least under\nLinux).\n\n", "msg_date": "07 Apr 2004 15:05:55 +0200", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices vs. Filesystems" }, { "msg_contents": "Grega,\n\n> Furthermore, this filesystem would be a blazing one stop solution for\n> all replication issues PostgreSQL currently suffers from, as its main\n> design goal was to present \"a consistent file system image across the\n> servers in a cluster\".\n\nDoes it work, though? 
Without Oracle admin tools?\n\n> Now, if both goals can be achieved in one go, hell, I'm willing to try\n> it out myself in an attempt to extract off of it, some performance\n> indicators that could be compared to other database performance tests\n> sent to both this and the PERFORM mailing list.\n\nHey, any test you wanna run is fine with us. I'm pretty sure that OCFS \nbelongs to Oracle, though, patent & copyright, so we couldn't actually use it \nin practice.\n\nIf your intention in this test is to show the superiority of raw devices, let \nme give you a reality check: barring some major corporate backing getting \ninvolved, we can't possibly implement our own PG-FS for database support. We \nalready have a TODO list which is far too long for our developer pool, and \nimplementing a custom FS either takes a large team (OCFS) or several years of \ndevelopment (Reiser). \n\nNow, if you know somebody who might pay for one, then great ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 7 Apr 2004 09:09:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Raw devices vs. Filesystems" }, { "msg_contents": "On Wed, Apr 07, 2004 at 09:09:16AM -0700, Josh Berkus wrote:\n\n> If your intention in this test is to show the superiority of raw devices, let \n> me give you a reality check: barring some major corporate backing getting \n> involved, we can't possibly implement our own PG-FS for database support. We \n> already have a TODO list which is far too long for our developer pool, and \n> implementing a custom FS either takes a large team (OCFS) or several years of \n> development (Reiser). \n\nIs there any documentation as to what guarantees PostgreSQL requires\nfrom the filesystem, or what posix semantics can be relaxed?\n\nCheers,\n Steve\n", "msg_date": "Wed, 7 Apr 2004 09:29:47 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Raw devices vs. Filesystems" }, { "msg_contents": "...and on Wed, Apr 07, 2004 at 09:09:16AM -0700, Josh Berkus used the keyboard:\n> \n> Does it work, though? Without Oracle admin tools?\n\nHello, Josh. :)\n\nWell, as I said, that's why I was asking - I'm willing to give it a go\nif nobody can prove me wrong. :)\n\n> > Now, if both goals can be achieved in one go, hell, I'm willing to try\n> > it out myself in an attempt to extract off of it, some performance\n> > indicators that could be compared to other database performance tests\n> > sent to both this and the PERFORM mailing list.\n> \n> Hey, any test you wanna run is fine with us. I'm pretty sure that OCFS \n> belongs to Oracle, though, patent & copyright, so we couldn't actually use it \n> in practice.\n\nI thought you knew - OCFS, OCFS-Tools and OCFSv2 have not only been open-\nsource for quite a while now - they're released under the GPL.\n\n http://oss.oracle.com/projects/ocfs/\n http://oss.oracle.com/projects/ocfs-tools/\n http://oss.oracle.com/projects/ocfs2/\n\nI don't know what that means to you (probably nothing good, as PostgreSQL\nis released under the BSD license), but it most definitely can be considered\na good thing for the end user, as she can download it, compile, and set it\nup on her disks, without the need to pay Oracle royalties. 
:)\n\n> If your intention in this test is to show the superiority of raw devices, let \n> me give you a reality check: barring some major corporate backing getting \n> involved, we can't possibly implement our own PG-FS for database support. We \n> already have a TODO list which is far too long for our developer pool, and \n> implementing a custom FS either takes a large team (OCFS) or several years of \n> development (Reiser). \n\nNot really - I was just thinking about something not-entirely-a-filesystem\nand POK!, OCFS sprang to mind. It omits many POSIX features that slow down\na traditional filesystem, yet it does know the concept of inodes and most\nof all, it's _really_ heavy on caching. As such, it sounded quite promising\nto me, but trial, I think, is the best test.\n\nThe question does spring up though, that Steve raised in another post - just\nfor the record, what POSIX semantics can a postmaster live without in a\nfilesystem?\n\nCheers,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/", "msg_date": "Thu, 8 Apr 2004 06:33:04 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Raw devices vs. Filesystems" }, { "msg_contents": "Grega,\n\n> Well, as I said, that's why I was asking - I'm willing to give it a go\n> if nobody can prove me wrong. :)\n\nWhy not? If you have time?\n\n> I thought you knew - OCFS, OCFS-Tools and OCFSv2 have not only been open-\n> source for quite a while now - they're released under the GPL.\n\nKeen! Wonder if we can make them regret it.\n\nSeriously, if Oracle opened this stuff, it's probably becuase they used some \nGPL components in it. It also probably means that it won't work for \nanything but Oracle ...\n\n> I don't know what that means to you (probably nothing good, as PostgreSQL\n> is released under the BSD license), \n\nWell, it just means that we can't ship OCFS with PostgreSQL. \n\n> The question does spring up though, that Steve raised in another post -\n> just for the record, what POSIX semantics can a postmaster live without in\n> a filesystem?\n\nYou might want to ask that question again on Hackers. I don't know the \nanswer, myself.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 9 Apr 2004 09:02:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Raw devices vs. Filesystems" }, { "msg_contents": "[email protected] (Josh Berkus) wrote:\n>> Well, as I said, that's why I was asking - I'm willing to give it a go\n>> if nobody can prove me wrong. :)\n>\n> Why not? If you have time?\n\nTrue enough.\n\n>> I thought you knew - OCFS, OCFS-Tools and OCFSv2 have not only been\n>> open- source for quite a while now - they're released under the\n>> GPL.\n>\n> Keen! Wonder if we can make them regret it.\n>\n> Seriously, if Oracle opened this stuff, it's probably becuase they\n> used some GPL components in it. It also probably means that it\n> won't work for anything but Oracle ...\n\nIt could be that the experiment shows that OCFS isn't all that\nhelpful. 
Or that it helps cover inadequacies in certain aspects of\nhow Oracle accesses filesystems.\n\nIf it _does_ show that it is helpful, then that may suggest a\nfilesystem implementation strategy useful for the BSD folks.\n\nThe main \"failure case\" would be if the exercise shows that using OCFS\nis pretty futile.\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www3.sympatico.ca/cbbrowne/linux.html\nDo you know where your towel is?\n", "msg_date": "Fri, 09 Apr 2004 15:34:44 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Raw devices vs. Filesystems" } ]
[ { "msg_contents": "This is what I got...\n\n \n\nTwo servers, one debian, one fedora\n\n \n\nDebain dual 3ghz, 1 gig ram, ide, PostgreSQL 7.2.1 on i686-pc-linux-gnu,\ncompiled by GCC 2.95.4\n\n \n\n \n\nFedora: Dual 3ghz, 1 gig ram, scsi, PostgreSQL 7.3.4-RH on\ni386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.3.2\n20031022 (Red Hat Linux 3.3.2-1)\n\n \n\n \n\nBoth have same databases, Both have had vacume full ran on them. Both\ndoing the same query\n\n \n\nSelect * from vpopmail; The vpopmail is a view, this is the view\n\n \n\n \n\n View \"vpopmail\"\n\n Column | Type | Modifiers \n\n-----------+------------------------+-----------\n\n pw_name | character varying(32) | \n\n pw_domain | character varying(64) | \n\n pw_passwd | character varying | \n\n pw_uid | integer | \n\n pw_gid | integer | \n\n pw_gecos | character varying | \n\n pw_dir | character varying(160) | \n\n pw_shell | character varying(20) | \n\nView definition: SELECT ea.email_name AS pw_name, ea.domain AS\npw_domain, get_pwd(u.username, '127.0.0.1'::\"varchar\", '101'::\"varchar\",\n'MD5'::\"varchar\") AS pw_passwd, 0 AS pw_uid, 0 AS pw_gid, ''::\"varchar\"\nAS pw_gecos, ei.directory AS pw_dir, ei.quota AS pw_shell FROM\nemail_addresses ea, email_info ei, users u, user_resources ur WHERE\n(((((ea.user_resource_id = ei.user_resource_id) AND (get_pwd(u.username,\n'127.0.0.1'::\"varchar\", '101'::\"varchar\", 'MD5'::\"varchar\") IS NOT\nNULL)) AND (ur.id = ei.user_resource_id)) AND (u.id = ur.user_id)) AND\n(NOT (EXISTS (SELECT forwarding.email_id FROM forwarding WHERE\n(forwarding.email_id = ea.id)))));\n\n \n\n \n\n \n\nBoth are set to the same buffers and everything... this is the execution\ntime:\n\n \n\nDebian: Total runtime: 35594.81 msec\n\n \n\nFedora: Total runtime: 2279869.08 msec\n\n \n\nHuge difference as you can see... 
here are the pastes of the stuff\n\n \n\nDebain:\n\n \n\nuser_acl=# explain analyze SELECT count(*) from vpopmail;\n\nNOTICE: QUERY PLAN:\n\n \n\nAggregate (cost=438231.94..438231.94 rows=1 width=20) (actual\ntime=35594.67..35594.67 rows=1 loops=1)\n\n -> Hash Join (cost=434592.51..438142.51 rows=35774 width=20) (actual\ntime=34319.24..35537.11 rows=70613 loops=1)\n\n -> Seq Scan on email_info ei (cost=0.00..1721.40 rows=71640\nwidth=4) (actual time=0.04..95.13 rows=71689 loops=1)\n\n -> Hash (cost=434328.07..434328.07 rows=35776 width=16)\n(actual time=34319.00..34319.00 rows=0 loops=1)\n\n -> Hash Join (cost=430582.53..434328.07 rows=35776\nwidth=16) (actual time=2372.45..34207.21 rows=70613 loops=1)\n\n -> Seq Scan on users u (cost=0.00..1938.51\nrows=71283 width=4) (actual time=0.81..30119.58 rows=70809 loops=1)\n\n -> Hash (cost=430333.64..430333.64 rows=35956\nwidth=12) (actual time=2371.51..2371.51 rows=0 loops=1)\n\n -> Hash Join (cost=2425.62..430333.64\nrows=35956 width=12) (actual time=176.73..2271.14 rows=71470 loops=1)\n\n -> Seq Scan on email_addresses ea\n(cost=0.00..426393.25 rows=35956 width=4) (actual time=0.06..627.49\nrows=71473 loops=1)\n\n SubPlan\n\n -> Index Scan using\nforwarding_idx on forwarding (cost=0.00..5.88 rows=1 width=4) (actual\ntime=0.00..0.00 rows=0 loops=71960)\n\n -> Hash (cost=1148.37..1148.37\nrows=71637 width=8) (actual time=176.38..176.38 rows=0 loops=1)\n\n -> Seq Scan on user_resources ur\n(cost=0.00..1148.37 rows=71637 width=8) (actual time=0.03..82.21\nrows=71686 loops=1)\n\nTotal runtime: 35594.81 msec\n\n \n\nEXPLAIN\n\n \n\n \n\n \n\nAnd for fedora it's\n\n \n\n \n\nAggregate (cost=416775.52..416775.52 rows=1 width=20) (actual\ntime=2279868.57..2279868.58 rows=1 loops=1)\n -> Hash Join (cost=413853.79..416686.09 rows=35772 width=20)\n(actual time=2279271.26..2279803.91 rows=70841 loops=1)\n Hash Cond: (\"outer\".user_resource_id = \"inner\".id)\n -> Seq Scan on email_info ei (cost=0.00..1666.07 rows=71907\nwidth=4) (actual time=8.12..171.10 rows=71907 loops=1)\n -> Hash (cost=413764.36..413764.36 rows=35772 width=16)\n(actual time=2279263.03..2279263.03 rows=0 loops=1)\n -> Hash Join (cost=410712.87..413764.36 rows=35772\nwidth=16) (actual time=993.90..2279008.72 rows=70841 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".user_id)\n -> Seq Scan on users u (cost=0.00..1888.85\nrows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)\n Filter: (get_pwd(username,\n'127.0.0.1'::character varying, '101'::character varying,\n'MD5'::character varying) IS NOT NULL)\n -> Hash (cost=410622.99..410622.99 rows=35952\nwidth=12) (actual time=975.40..975.40 rows=0 loops=1)\n -> Hash Join (cost=408346.51..410622.99\nrows=35952 width=12) (actual time=507.52..905.91 rows=71697 loops=1)\n Hash Cond: (\"outer\".id =\n\"inner\".user_resource_id)\n -> Seq Scan on user_resources ur\n(cost=0.00..1108.04 rows=71904 width=8) (actual time=0.05..95.65\nrows=71904 loops=1)\n -> Hash (cost=408256.29..408256.29\nrows=36091 width=4) (actual time=507.33..507.33 rows=0 loops=1)\n -> Seq Scan on email_addresses\nea (cost=0.00..408256.29 rows=36091 width=4) (actual time=0.15..432.83\nrows=71700 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using\nforwarding_idx on forwarding (cost=0.00..5.63 rows=1 width=4) (actual\ntime=0.00..0.00 rows=0 loops=72182)\n Index Cond:\n(email_id = $0)\n Total runtime: 2279869.08 msec\n\n(20 rows)\n\n \n\n \n\n \n\nAny suggestions?\n\n \n\nI can't figure this out. 
There is no reason it should be that much of a\ndifference, It's all the same value's, Thanks in advanced.\n\n \n\nAndrew\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis is what I got…\n \nTwo servers, one debian, one fedora\n \nDebain dual 3ghz, 1 gig ram, ide, PostgreSQL 7.2.1 on\ni686-pc-linux-gnu, compiled by GCC 2.95.4\n \n \nFedora: Dual 3ghz, 1 gig ram, scsi, PostgreSQL 7.3.4-RH on\ni386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.3.2\n20031022 (Red Hat Linux 3.3.2-1)\n \n \nBoth have same databases, Both have had vacume full ran on\nthem. Both doing the same query\n \nSelect * from vpopmail; The vpopmail is a view, this is the\nview\n \n \n                View \"vpopmail\"\n  Column   |          Type          | Modifiers \n-----------+------------------------+-----------\n pw_name   | character varying(32)  | \n pw_domain | character varying(64)  | \n pw_passwd | character varying      | \n pw_uid    | integer                | \n pw_gid    | integer                | \n pw_gecos  | character varying      | \n pw_dir    | character varying(160) | \n pw_shell  | character varying(20)  | \nView definition: SELECT ea.email_name AS pw_name, ea.domain\nAS pw_domain, get_pwd(u.username, '127.0.0.1'::\"varchar\",\n'101'::\"varchar\", 'MD5'::\"varchar\") AS pw_passwd, 0 AS\npw_uid, 0 AS pw_gid, ''::\"varchar\" AS pw_gecos, ei.directory AS\npw_dir, ei.quota AS pw_shell FROM email_addresses ea, email_info ei, users u,\nuser_resources ur WHERE (((((ea.user_resource_id = ei.user_resource_id) AND\n(get_pwd(u.username, '127.0.0.1'::\"varchar\",\n'101'::\"varchar\", 'MD5'::\"varchar\") IS NOT NULL)) AND\n(ur.id = ei.user_resource_id)) AND (u.id = ur.user_id)) AND (NOT (EXISTS\n(SELECT forwarding.email_id FROM forwarding WHERE (forwarding.email_id =\nea.id)))));\n \n \n \nBoth are set to the same buffers and everything… this\nis the execution time:\n \nDebian: Total runtime: 35594.81 msec\n \nFedora: Total runtime: 2279869.08 msec\n \nHuge difference as you can see… here are the pastes of\nthe stuff\n \nDebain:\n \nuser_acl=# explain analyze SELECT count(*) from vpopmail;\nNOTICE:  QUERY PLAN:\n \nAggregate  (cost=438231.94..438231.94 rows=1 width=20)\n(actual time=35594.67..35594.67 rows=1 loops=1)\n  ->  Hash Join  (cost=434592.51..438142.51 rows=35774\nwidth=20) (actual time=34319.24..35537.11 rows=70613 loops=1)\n        ->  Seq Scan on email_info ei \n(cost=0.00..1721.40 rows=71640 width=4) (actual time=0.04..95.13 rows=71689\nloops=1)\n        ->  Hash  (cost=434328.07..434328.07 rows=35776\nwidth=16) (actual time=34319.00..34319.00 rows=0 loops=1)\n              ->  Hash Join  (cost=430582.53..434328.07\nrows=35776 width=16) (actual time=2372.45..34207.21 rows=70613 loops=1)\n                    ->  Seq Scan on users u \n(cost=0.00..1938.51 rows=71283 width=4) (actual time=0.81..30119.58 rows=70809\nloops=1)\n                    ->  Hash  (cost=430333.64..430333.64\nrows=35956 width=12) (actual time=2371.51..2371.51 rows=0 loops=1)\n                          ->  Hash Join \n(cost=2425.62..430333.64 rows=35956 width=12) (actual time=176.73..2271.14\nrows=71470 loops=1)\n                                ->  Seq Scan on\nemail_addresses ea  (cost=0.00..426393.25 rows=35956 width=4) (actual time=0.06..627.49\nrows=71473 loops=1)\n                                      SubPlan\n                                        ->  Index Scan\nusing forwarding_idx on forwarding  (cost=0.00..5.88 rows=1 width=4) (actual\ntime=0.00..0.00 rows=0 loops=71960)\n                                ->  Hash 
\n(cost=1148.37..1148.37 rows=71637 width=8) (actual time=176.38..176.38 rows=0\nloops=1)\n                                      ->  Seq Scan on\nuser_resources ur \n(cost=0.00..1148.37 rows=71637 width=8) (actual time=0.03..82.21 rows=71686\nloops=1)\nTotal runtime: 35594.81 msec\n \nEXPLAIN\n \n \n \nAnd for fedora it’s\n \n \nAggregate  (cost=416775.52..416775.52 rows=1 width=20) (actual time=2279868.57..2279868.58 rows=1 loops=1)   ->  Hash Join  (cost=413853.79..416686.09 rows=35772 width=20) (actual time=2279271.26..2279803.91 rows=70841 loops=1)         Hash Cond: (\"outer\".user_resource_id = \"inner\".id)         ->  Seq Scan on email_info ei  (cost=0.00..1666.07 rows=71907 width=4) (actual time=8.12..171.10 rows=71907 loops=1)         ->  Hash  (cost=413764.36..413764.36 rows=35772 width=16) (actual time=2279263.03..2279263.03 rows=0 loops=1)               ->  Hash Join  (cost=410712.87..413764.36 rows=35772 width=16) (actual time=993.90..2279008.72 rows=70841 loops=1)                     Hash Cond: (\"outer\".id = \"inner\".user_id)                     ->  Seq Scan on users u  (cost=0.00..1888.85 rows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)                           Filter: (get_pwd(username, '127.0.0.1'::character varying, '101'::character varying, 'MD5'::character varying) IS NOT NULL)                     ->  Hash  (cost=410622.99..410622.99 rows=35952 width=12) (actual time=975.40..975.40 rows=0 loops=1)                           ->  Hash Join  (cost=408346.51..410622.99 rows=35952 width=12) (actual time=507.52..905.91 rows=71697 loops=1)                                 Hash Cond: (\"outer\".id = \"inner\".user_resource_id)                                 ->  Seq Scan on user_resources ur  (cost=0.00..1108.04 rows=71904 width=8) (actual time=0.05..95.65 rows=71904 loops=1)                                 ->  Hash  (cost=408256.29..408256.29 rows=36091 width=4) (actual time=507.33..507.33 rows=0 loops=1)                                       ->  Seq Scan on email_addresses ea  (cost=0.00..408256.29 rows=36091 width=4) (actual time=0.15..432.83 rows=71700 loops=1)                                             Filter: (NOT (subplan))                                             SubPlan                                               ->  Index Scan using forwarding_idx on forwarding  (cost=0.00..5.63 rows=1 width=4) (actual time=0.00..0.00 rows=0 loops=72182)                                                     Index Cond: (email_id = $0) Total runtime: 2279869.08 msec\n(20 rows)\n \n \n \nAny suggestions?\n \nI can’t figure this out. 
There is no reason it should be that\nmuch of a difference, It’s all the same value’s, Thanks in\nadvanced.\n \nAndrew", "msg_date": "Mon, 5 Apr 2004 18:41:08 -0700", "msg_from": "\"Andrew Matthews\" <[email protected]>", "msg_from_op": true, "msg_subject": "Wierd issues" }, { "msg_contents": "\"Andrew Matthews\" <[email protected]> writes:\n> [ PG 7.3.4 much slower than 7.2.1 ]\n>\n> Both have same databases, Both have had vacume full ran on them.\n\nYou did ANALYZE too, right?\n\nThe bulk of the time is evidently going into the seqscan on users in\neach case:\n\n> -> Seq Scan on users u (cost=0.00..1938.51 rows=71283 width=4) (actual time=0.81..30119.58 rows=70809 loops=1)\n\n> -> Seq Scan on users u (cost=0.00..1888.85 rows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)\n> Filter: (get_pwd(username, '127.0.0.1'::character varying, '101'::character varying, 'MD5'::character varying) IS NOT NULL)\n\nI have to suspect that the inefficiency is inside this get_pwd()\nfunction, but you didn't tell us anything about that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Apr 2004 11:02:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd issues " }, { "msg_contents": "Yes I did do analyze.... the here is the get_pwd function\n\n-- Function: public.get_pwd(varchar, varchar, varchar, varchar)\n\n-- DROP FUNCTION public.get_pwd(varchar, varchar, varchar, varchar);\n\nCREATE OR REPLACE FUNCTION public.get_pwd(varchar, varchar, varchar,\nvarchar)\n RETURNS varchar AS\n'\nDECLARE\n p_username ALIAS for $1;\n p_server ALIAS for $2;\n p_service ALIAS for $3;\n p_pwd_type ALIAS for $4;\n\n l_resource_id integer;\n l_server_id integer;\n l_service_id integer;\n l_allow_deny char(1);\n l_user_id integer;\n l_account_id integer;\n l_passwd varchar(40);\nbegin\n\n -- get server identifier\n select id\n into l_server_id\n from servers s\n where address = p_server;\n\n if NOT FOUND then\n -- try to get default server\n select id \n into l_server_id\n from servers s\n where address = \\'default\\';\n end if;\n\n if l_server_id isnull then\n return NULL;\n end if;\n\n -- get service identifier\n select id\n into l_service_id\n from services s\n where radius_service = p_service;\n \n if l_service_id isnull then\n return NULL;\n end if;\n\n -- get resource identifier (server/service combination)\n select id\n into l_resource_id\n from resources r\n where service_id = l_service_id\n and server_id = l_server_id;\n\n -- could not find resource via server_id, now look via server\\'s group if\nany\n if l_resource_id isnull then\n select id\n into l_resource_id\n from resources r\n where service_id = l_service_id\n and server_group_id = (select server_group_id from servers where id =\nl_server_id);\n end if;\n\n -- could not determine resource user wants to access, so deny by returning\nNULL passwd\n if l_resource_id isnull then\n return NULL;\n end if;\n\n -- at this point we have a valid resource_id\n -- determine if valid username\n select u.id, u.account_id\n into l_user_id, l_account_id\n from users u, accounts a\n where u.username = upper(p_username) -- always uppercase in DB\n and u.del_id = 0\n and u.status = \\'A\\'\n and a.status = \\'A\\'\n and u.account_id = a.id;\n\n -- if active user not found then return NULL for passwd\n if l_user_id isnull then\n return null;\n end if;\n\n -- user specific control\n select allow_deny\n into l_allow_deny\n from users_acl\n where resource_id = l_resource_id\n and user_id = l_user_id;\n \n if 
l_allow_deny = \\'D\\' then\n return NULL;\n elsif l_allow_deny isnull then -- no user-specific control\n select max(allow_deny) -- \\'D\\' is > \\'A\\' hence deny takes precedence\nif conflict across groups\n into l_allow_deny\n from users_acl\n where resource_id = l_resource_id\n and user_group_id in (select user_group_id from\nuser_group_assignments\n where user_id = l_user_id);\n elsif l_allow_deny = \\'A\\' then\n -- do nothing; -- get and return passwd below\n end if;\n\n if l_allow_deny isnull or l_allow_deny = \\'D\\' then\n return NULL;\n elsif l_allow_deny = \\'A\\' then\n select password\n into l_passwd\n from user_pwds\n where password_type = upper(p_pwd_type)\n and user_id = l_user_id;\n\n return l_passwd;\n else\n return null;\n end if;\n\nend;\n\n'\n LANGUAGE 'plpgsql' VOLATILE;\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, April 09, 2004 8:02 AM\nTo: Andrew Matthews\nCc: [email protected]\nSubject: Re: [PERFORM] Wierd issues \n\n\"Andrew Matthews\" <[email protected]> writes:\n> [ PG 7.3.4 much slower than 7.2.1 ]\n>\n> Both have same databases, Both have had vacume full ran on them.\n\nYou did ANALYZE too, right?\n\nThe bulk of the time is evidently going into the seqscan on users in\neach case:\n\n> -> Seq Scan on users u (cost=0.00..1938.51\nrows=71283 width=4) (actual time=0.81..30119.58 rows=70809 loops=1)\n\n> -> Seq Scan on users u (cost=0.00..1888.85\nrows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)\n> Filter: (get_pwd(username,\n'127.0.0.1'::character varying, '101'::character varying, 'MD5'::character\nvarying) IS NOT NULL)\n\nI have to suspect that the inefficiency is inside this get_pwd()\nfunction, but you didn't tell us anything about that...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 9 Apr 2004 09:12:38 -0700", "msg_from": "\"Andrew Matthews\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd issues " } ]
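[Editor's note: a quick way to confirm Tom's suspicion is to take the view out of the picture and time, on both machines, just the filter that the slow plan applies. The query below is only a diagnostic sketch using the same call the view makes:

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM users u
    WHERE get_pwd(u.username, '127.0.0.1'::varchar, '101'::varchar, 'MD5'::varchar) IS NOT NULL;

If the 7.3 box is dramatically slower here as well, the regression is inside get_pwd() itself (most plausibly one of its per-row lookups on servers, users_acl, or user_pwds falling back to a sequential scan because an index or up-to-date statistics are missing on that database), rather than anything about the view's join plan.]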
[ { "msg_contents": "hello,\n\n\tI have some question when I use postgresql 7.4.1 on redhat adv server 2.1 .\nI use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad performance.\nI can only do about 50 inserts per sencond. Event worse than my pc(PIII 800,256M RAM), can anyone give me some advice?\t\n\n        huang yaqin\n        [email protected]\n          2004-04-06\n\n\n", "msg_date": "Tue, 06 Apr 2004 16:01:34 +0800", "msg_from": "huang yaqin <[email protected]>", "msg_from_op": true, "msg_subject": "good pc but bad performance,why?" }, { "msg_contents": "On Tuesday 06 April 2004 09:01, huang yaqin wrote:\n> hello,\n>\n> \tI have some question when I use postgresql 7.4.1 on redhat adv server 2.1\n> . I use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad\n> performance. I can only do about 50 inserts per sencond. Event worse than\n> my pc(PIII 800,256M RAM), can anyone give me some advice?\n\nHow have you tuned your postgresql.conf file?\nWhat disk systems do you have?\nWhat does vmstat/iostat show as the bottleneck in the system?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 6 Apr 2004 12:16:22 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "huang yaqin wrote:\n> hello,\n> \n> \tI have some question when I use postgresql 7.4.1 on redhat adv server 2.1 .\n> I use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad performance.\n> I can only do about 50 inserts per sencond. Event worse than my pc(PIII 800,256M RAM), can anyone give me some advice?\t\n\nHave you referenced this document?:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 06 Apr 2004 08:46:49 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "huang yaqin <[email protected]> writes:\n> \tI have some question when I use postgresql 7.4.1 on redhat adv server 2.1 .\n> I use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad performance.\n> I can only do about 50 inserts per sencond. Event worse than my pc(PIII 800,256M RAM), can anyone give me some advice?\t\n\nIf the cheap machine appears to be able to commit more transactions\nper second than the better one, it's very likely because the cheap\nmachine has a disk that lies about write completion. Is the server\nusing SCSI disks by any chance? To a first approximation, IDE drives\nlie by default, SCSI drives don't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Apr 2004 11:42:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why? " }, { "msg_contents": "huang yaqin wrote:\n> hello,\n> \n> \tI have some question when I use postgresql 7.4.1 on redhat adv server 2.1 .\n> I use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad performance.\n\nThis is most likely a dual processor Xeon machine with HT, because the \nx335 is limited to two physical cpus.\n\n> I can only do about 50 inserts per sencond. 
Event worse than my pc(PIII 800,256M RAM), can anyone give me some advice?\n\nany chance you are using the onboard MPT-Fusion \"Raid\"controller with a \nRAID1 - we have seen absolutely horrible performance from these \ncontrollers here.\nUsing them as a normal SCSI-Controller with Softwareraid on top fixed \nthis for us ...\n\n\nstefan\n", "msg_date": "Tue, 06 Apr 2004 18:22:49 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" } ]
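[Editor's note: Tom's point about write completion explains the raw number. Roughly 50 commits per second is what you see when every commit waits for a real disk-head pass, and each single-row INSERT issued outside an explicit transaction is its own commit. A hedged sketch of the usual first fix, batching rows so many inserts share one WAL flush (the table here is made up):

    -- One transaction, one fsync'd commit for the whole batch:
    BEGIN;
    INSERT INTO test_table (id, val) VALUES (1, 'a');
    INSERT INTO test_table (id, val) VALUES (2, 'b');
    -- ... thousands more rows, or use COPY for bulk loads ...
    COMMIT;

Setting fsync = false in postgresql.conf would also push the number up, but it buys speed the same way a lying IDE write cache does, by giving up the guarantee that a committed row survives a crash.]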
[ { "msg_contents": "I seem to remember discussion of anticipatory vs deadline scheduler in 2.6. \nHere is what Andrew Morton (I think) says:\n\n\"The deadline scheduler has two additional scheduling queues that were not \navailable to the 2.4 IO scheduler. The two new queues are a FIFO read queue \nand a FIFO write queue. This new multi-queue method allows for greater \ninteractivity by giving the read requests a better deadline than write \nrequests, thus ensuring that applications rarely will be delayed by read \nrequests.\n\nDeadline scheduling is best suited for database servers and high disk \nperformance systems. Morton has experienced up to 15 percent increases on \ndatabase loads while using deadline scheduling.\"\n\nhttp://story.news.yahoo.com/news?tmpl=story&cid=75&e=2&u=/nf/20040405/tc_nf/23603\n\nNothing very in-depth in the story.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 6 Apr 2004 14:01:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Back to Linux 2.6 kernel thoughts..." } ]
[ { "msg_contents": "I am trying to find an efficient way to draw a random sample from a \ncomplex query. I also want it to be easy to use within my application.\n\nSo I've defined a view that encapsulates the query. The id in the \n\"driving\" table is exposed, and I run a query like:\n\nselect * from stats_record_view\n where id in (select id from driver_stats\n order by random()\n limit 30000);\n\ndriver_stats.id is unique, the primary key. The problem I'm having is \nthat neither the ORDER BY nor the LIMIT change the uniqueness of that \ncolumn, but the planner doesn't know that. It does a HashAggregate to \nmake sure the results are unique. It thinks that 200 rows will come out \nof that operation, and then 200 rows is small enough that it thinks a \nNested Loop is the best way to proceed from there.\n\nI can post more query plan, but I don't think it would be that very \nhelpful. I'm considering just making a sample table and creating an \nanalogous view around that. I'd like to be able to keep this as simple \nas possible though.\n\n\nKen\n\n\n", "msg_date": "Tue, 06 Apr 2004 13:25:54 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "plan problem" }, { "msg_contents": "On Tuesday 06 April 2004 21:25, Ken Geis wrote:\n> I am trying to find an efficient way to draw a random sample from a\n> complex query. I also want it to be easy to use within my application.\n>\n> So I've defined a view that encapsulates the query. The id in the\n> \"driving\" table is exposed, and I run a query like:\n>\n> select * from stats_record_view\n> where id in (select id from driver_stats\n> order by random()\n> limit 30000);\n\nHow about a join?\n\nSELECT s.*\nFROM\nstats_record_view s\nJOIN\n(SELECT id FROM driver_stats ORDER BY random() LIMIT 30000) AS r\nON s.id = r.id;\n\nOr, what about a cursor and fetch forward (or back?) a random number of rows \nbefore each fetch. That's probably not going to be so random though.\n\nAlso worth checking the various list archives - this has come up in the past, \nbut some time ago.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 7 Apr 2004 09:38:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan problem" }, { "msg_contents": "Richard Huxton wrote:\n> On Tuesday 06 April 2004 21:25, Ken Geis wrote:\n> \n>>I am trying to find an efficient way to draw a random sample from a\n>>complex query. I also want it to be easy to use within my application.\n>>\n>>So I've defined a view that encapsulates the query. The id in the\n>>\"driving\" table is exposed, and I run a query like:\n>>\n>>select * from stats_record_view\n>> where id in (select id from driver_stats\n>> order by random()\n>> limit 30000);\n> \n> \n> How about a join?\n> \n> SELECT s.*\n> FROM\n> stats_record_view s\n> JOIN\n> (SELECT id FROM driver_stats ORDER BY random() LIMIT 30000) AS r\n> ON s.id = r.id;\n\nYes, I tried this too after I sent the first mail, and this was somewhat \nbetter. I ended up adding a random column to the driving table, putting \nan index on it, and exposing that column in the view. Now I can say\n\nSELECT * FROM stats_record_view WHERE random < 0.093;\n\nFor my application, it's OK if the same sample is picked time after time \nand it may change if data is added.\n\n...\n> Also worth checking the various list archives - this has come up in the past, \n> but some time ago.\n\nThere are some messages in the archives about how to get a random \nsample. 
I know how to do that, and that's not why I posted my message. \n Are you saying that the planner behavior I spoke of is in the \narchives? I wouldn't know what to search on to find that thread. Does \nanyone think that the planner issue has merit to address? Can someone \nhelp me figure out what code I would look at?\n\n\nKen Geis\n\n\n", "msg_date": "Wed, 07 Apr 2004 02:03:27 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan problem" }, { "msg_contents": "On Wednesday 07 April 2004 10:03, Ken Geis wrote:\n> Richard Huxton wrote:\n> > On Tuesday 06 April 2004 21:25, Ken Geis wrote:\n> >>I am trying to find an efficient way to draw a random sample from a\n> >>complex query. I also want it to be easy to use within my application.\n> >>\n> >>So I've defined a view that encapsulates the query. The id in the\n> >>\"driving\" table is exposed, and I run a query like:\n> >>\n> >>select * from stats_record_view\n> >> where id in (select id from driver_stats\n> >> order by random()\n> >> limit 30000);\n> >\n> > How about a join?\n> >\n> > SELECT s.*\n> > FROM\n> > stats_record_view s\n> > JOIN\n> > (SELECT id FROM driver_stats ORDER BY random() LIMIT 30000) AS r\n> > ON s.id = r.id;\n>\n> Yes, I tried this too after I sent the first mail, and this was somewhat\n> better. I ended up adding a random column to the driving table, putting\n> an index on it, and exposing that column in the view. Now I can say\n>\n> SELECT * FROM stats_record_view WHERE random < 0.093;\n>\n> For my application, it's OK if the same sample is picked time after time\n> and it may change if data is added.\n\nFair enough - that'll certainly do it.\n\n> > Also worth checking the various list archives - this has come up in the\n> > past, but some time ago.\n>\n> There are some messages in the archives about how to get a random\n> sample. I know how to do that, and that's not why I posted my message.\n> Are you saying that the planner behavior I spoke of is in the\n> archives? I wouldn't know what to search on to find that thread. Does\n> anyone think that the planner issue has merit to address? Can someone\n> help me figure out what code I would look at?\n\nI was assuming after getting a random subset they'd see the same problem you \nare. If not, probably worth looking at. In which case, an EXPLAIN ANALYZE of \nyour original query would be good.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 7 Apr 2004 13:31:30 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan problem" }, { "msg_contents": "Ken Geis <[email protected]> writes:\n> Does anyone think that the planner issue has merit to address? Can\n> someone help me figure out what code I would look at?\n\nThe planner doesn't currently attempt to \"drill down\" into a sub-select-\nin-FROM to find statistics about the variables emitted by the sub-select.\nSo it's just falling back to a default estimate of the number of\ndistinct values coming out of the sub-select.\n\nThe \"drilling down\" part is not hard; the difficulty comes from trying\nto figure out whether and how the stats from the underlying column would\nneed to be adjusted for the behavior of the sub-select itself. As an\nexample, the result of (SELECT DISTINCT foo FROM bar) would usually have\nmuch different stats from the raw bar.foo column. 
In your example, the\nLIMIT clause potentially affects the stats by reducing the number of\ndistinct values.\n\nNow in most situations where the sub-select wouldn't change the stats,\nthere's no issue anyway because the planner will flatten the sub-select\ninto the main query. So we really have to figure out the adjustment\npart before we can think about doing much here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Apr 2004 12:50:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan problem " } ]
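For reference, a rough sketch of the random-column approach Ken describes above, written out with invented names (sample_key, driver_stats_sample_key_idx); it assumes the view is changed to expose the new column, and it is only one way to do it:

    -- add a persistent random key, index it, and sample on a range of it
    ALTER TABLE driver_stats ADD COLUMN sample_key float8;
    ALTER TABLE driver_stats ALTER COLUMN sample_key SET DEFAULT random();
    UPDATE driver_stats SET sample_key = random();   -- populate existing rows
    CREATE INDEX driver_stats_sample_key_idx ON driver_stats (sample_key);
    ANALYZE driver_stats;

    -- roughly a 9.3% sample, stable from run to run until new rows arrive
    SELECT * FROM stats_record_view WHERE sample_key < 0.093;

The trade-off is the one Ken mentions: the sample is repeatable rather than freshly random each time, though newly inserted rows do join the candidate pool because they pick up a key from the column default.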
[ { "msg_contents": "hello!\n\n\tThanks, you are right.\n\t I use \"postmaster -o \"-F\" \" to start my PG, and performance improved greatly.\n\n Best regards,\n\t\t\t huang yaqin\n\n>huang yaqin <[email protected]> writes:\n>> \tI have some questions when I use postgresql 7.4.1 on redhat adv server 2.1 .\n>> I use IBM335 as server, it has 4 cpus, 1G RAM. but I got very bad performance.\n>> I can only do about 50 inserts per second. Even worse than my pc(PIII 800,256M RAM), can anyone give me some advice?\t\n>\n>If the cheap machine appears to be able to commit more transactions\n>per second than the better one, it's very likely because the cheap\n>machine has a disk that lies about write completion. Is the server\n>using SCSI disks by any chance? To a first approximation, IDE drives\n>lie by default, SCSI drives don't.\n>\n>\t\t\tregards, tom lane\n\n= = = = = = = = = = = = = = = = = = = =\n\n        Regards!\n\n        huang yaqin\n        [email protected]\n          2004-04-07\n\n\n\n", "msg_date": "Wed, 07 Apr 2004 12:00:37 +0800", "msg_from": "huang yaqin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Wednesday 07 April 2004 05:00, huang yaqin wrote:\n> hello!\n>\n> \tThanks, you are right.\n> \t I use \"postmaster -o \"-F\" \" to start my PG, and performance improved\n> greatly.\n\nI don't think Tom was recommending turning fsync off. If you have a system \ncrash/power glitch then the database can become corrupted.\n\nIf you are happy with the possibility of losing your data, write performance will \nimprove noticeably.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 7 Apr 2004 09:33:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "About fsync: I'm thinking 50 inserts, if autocommitting, is 50 TPS = ~100 IO per\nsecond (50 WAL + checkpoint IO) = roughly the I/O rate of a single drive.\n\nHuang - Are you using a single drive for pg? If so, there is a safety\nproblem of both the data and logs used for recovery on the same drive. If\nthe drive crashes, there is nothing left for recovery.\n\nAlso, there is a big contention issue, since the log is a fast sequential\nwrite, and checkpointing is random. If the log is on a separate drive,\nyou'll probably see insert speed at disk sequential write speed, since the\nother drive(s) should hopefully be able to keep up when checkpointing. If\nthey share the same drive, you'll see an initial burst of inserts, then an\norder of magnitude performance drop-off as soon as you checkpoint - because\nthe disk is interleaving the log and data writes.\n\nfsync off is only appropriate for externally recoverable processes, such as\nloading an empty server from a file.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Richard Huxton\" <[email protected]>\nTo: \"huang yaqin\" <[email protected]>; \"Tom Lane\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, April 07, 2004 4:33 AM\nSubject: Re: [PERFORM] good pc but bad performance,why?\n\n\nOn Wednesday 07 April 2004 05:00, huang yaqin wrote:\n> hello!\n>\n> Thanks, you are right.\n> I use \"postmaster -o \"-F\" \" to start my PG, and performance improved\n> greatly.\n\nI don't think Tom was recommending turning fsync off. 
If you have a system\ncrash/power glitch then the database can become corrupted.\n\nIf you are happy with the possibility of losing your data, write performance will\nimprove noticeably.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 7 Apr 2004 10:36:48 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" } ]
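Rather than running the postmaster with -F, the usual first step is to stop paying the commit flush once per row, as suggested elsewhere in the thread. A hedged sketch with an invented table name and file path:

    -- one flush per batch instead of one per INSERT; fsync stays on
    BEGIN;
    INSERT INTO measurements (x, y, z) VALUES (1, 2, 3);
    INSERT INTO measurements (x, y, z) VALUES (4, 5, 6);
    -- ... many more rows ...
    COMMIT;

    -- for pure bulk loading, a single COPY is usually faster still
    COPY measurements (x, y, z) FROM '/tmp/measurements.dat';

This keeps crash safety while removing most of the per-row commit overhead that fsync off was compensating for.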
[ { "msg_contents": "hi shridhar,\n\n\n> Heiko Kehlenbrink wrote:\n>\n>> hi list,\n>>\n>> i want to convince people to use postgresql instead of ms-sql server, so i\n>> set up a kind of comparission insert data / select data from postgresql /\n>> ms-sql server\n>>\n>> the table i use was pretty basic,\n>>\n>> id bigserial\n>> dist float8\n>> x float8\n>> y float8\n>> z float8\n>>\n>> i filled the table with a function which filled x,y,z with incremental\nincreasing values (1,2,3,4,5,6...) and computing from that the dist\nvalue\n>> for every tupel (sqrt((x*x)+(y*y)+(z*z))).\n>>\n>> this works fine for both dbms\n>>\n>> postgresql needs 13:37 min for 10.000.000 tupel,\n>> ms-sql needs 1:01:27 h for 10.000.000 tupel.\n>>\n>> so far so good.\n>>\n>> i attached an index on the dist row and started to query the dbs with\nscripts which select a serial row of 100.000,200.000,500.000 tupels\nbased\n>> on the dist row.\n>> i randomizly compute the start and the end distance and made a \"select\navg(dist) from table where dist > startdist and dist < enddist\"\n>\n> Some basics to check quickly.\n>\n> 1. vacuum analyze the table before you start selecting.\n\nwas done,\n\n> 2. for slow running queries, check explain analyze output and find out\nwho takes\n> maximum time.\n\nhkehlenbrink@lin0493l:~> psql -d test -c 'explain analyse select avg(dist)\nfrom massive2 where dist > (1000000*sqrt(3.0))::float8 and dist <\n(1500000*sqrt(3.0))::float8;'\nNOTICE: QUERY PLAN:\n\nAggregate (cost=14884.61..14884.61 rows=1 width=8) (actual\ntime=3133.24..3133.24 rows=1 loops=1)\n -> Index Scan using massive2_dist on massive2 (cost=0.00..13648.17\nrows=494573 width=8) (actual time=0.11..2061.38 rows=499999 loops=1) Total\nruntime: 3133.79 msec\n\nEXPLAIN\n\nseems to me that most time was needed for the index scanning...\n\n> 3. Check for typecasting. You need to typecast the query correctly e.g.\n>\n> select avg(dist) from table where dist >startdist::float8 and\n> dist<enddist::float8..\n>\n> This might still end up with sequential scan depending upon the plan.\nbut if\n> index scan is picked up, it might be plenty fast..\n>\nnope, the dist row is float8 and the query-borders are float8 too, also\nthe explain says that an index scann was done.\n\n> Post explain analyze for the queries if things don't improve.\n>\nsee above..\n\n> HTH\n>\n> Shridhar\n>\nbest regards\nheiko\n\n\n>\n>\n\n\n\n", "msg_date": "Wed, 7 Apr 2004 09:06:41 +0200 (CEST)", "msg_from": "\"Heiko Kehlenbrink\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance comparission postgresql/ms-sql server" } ]
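Since the slow part above is an index scan that touches roughly half a million heap rows, one thing that might help (a suggestion, not something Heiko reported testing) is to physically order the table by that index so the range scan reads mostly sequential pages. Using the index and table names from the EXPLAIN output and the old-style CLUSTER syntax of that era; check the CLUSTER caveats for the exact version in use before running it:

    -- rewrites and locks the table while it runs; re-run ANALYZE afterwards
    CLUSTER massive2_dist ON massive2;
    ANALYZE massive2;

    EXPLAIN ANALYZE
    SELECT avg(dist) FROM massive2
     WHERE dist > (1000000*sqrt(3.0))::float8
       AND dist < (1500000*sqrt(3.0))::float8;

The physical ordering is not maintained as new rows are added, so it would need repeating after large loads.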
[ { "msg_contents": "Hello, Richard Huxton,\n\t\n \t\tYou said turning fsync off may cause losing data, that's terrible.\n\tI use SCSI disk, and file system is ext3. I tune postgresql.conf and can't get any improvement. So what can I do?\n\tDoes SCSI disk and IDE disk have difference?\n\n\t Regards,\n\t\t\tHuang yaqin\n\n======= 2004-04-07 09:33:00 =======\n\n>On Wednesday 07 April 2004 05:00, huang yaqin wrote:\n>> hello!\n>>\n>> \tThanks, you are right.\n>> \t I use \"postmaster -o \"-F\" \" to start my PG,and performance improved\n>> greatly.\n>\n>I don't think Tom was recommending turning fsync off. If you have a system\n>crash/power glitch then the database can become corrupted.\n>\n>If you are happy the possibility if losing your data, write performance will\n>improve noticably.\n>\n>--\n> Richard Huxton\n> Archonet Ltd\n>\n>\n>\n>\n>Powered by MessageSoft SMG\n>SPAM, virus-free and secure email\n>http://www.messagesoft.com\n>\n>.\n\n= = = = = = = = = = = = = = = = = = = =\n\t\t\t\n\n        致\n礼!\n\n\t\t\t\t\n        huang yaqin\n        [email protected]\n          2004-04-07\n\n\n\n", "msg_date": "Wed, 07 Apr 2004 16:56:56 +0800", "msg_from": "huang yaqin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Wed, 7 Apr 2004, huang yaqin wrote:\n\n> You said turning fsync off may cause losing data, that's terrible. I use\n> SCSI disk, and file system is ext3. I tune postgresql.conf and can't get\n> any improvement. So what can I do?\n\nMake sure you do as much as possible inside one transaction. If you want \nto do 1000 inserts, then do BEGIN; insert ....; insert; ... ; COMMIT;\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Wed, 7 Apr 2004 11:53:59 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "It sounds almost like you're doing one insert per transaction. Try wrapping\nmultiple inserts into a single transaction and see if that helps. This may\nnot be appropriate for your application, but it does guarantee that\ncommitted transactions will not be lost.\n\nMy apologies if you are already doing this. :)\n\nBEGIN;\ninsert ...\ninsert ...\ninsert ...\nCOMMIT;\n\nRegards,\nSteve Butler\n\n----- Original Message ----- \nFrom: \"huang yaqin\" <[email protected]>\nTo: \"Richard Huxton\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, April 07, 2004 6:56 PM\nSubject: Re: [PERFORM] good pc but bad performance,why?\n\n\nHello, Richard Huxton,\n\n You said turning fsync off may cause losing data, that's terrible.\nI use SCSI disk, and file system is ext3. I tune postgresql.conf and can't\nget any improvement. So what can I do?\nDoes SCSI disk and IDE disk have difference?\n\n Regards,\nHuang yaqin\n\n\n", "msg_date": "Wed, 7 Apr 2004 20:39:25 +1000", "msg_from": "\"Steven Butler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Wed, 2004-04-07 at 20:56, huang yaqin wrote:\n> Hello, Richard Huxton,\n> \t\n> \t\tYou said turning fsync off may cause losing data, that's terrible.\n> \tI use SCSI disk, and file system is ext3. I tune postgresql.conf and can't get any improvement. So what can I do?\n> \tDoes SCSI disk and IDE disk have difference?\n\nYes, turning off fsync means that the database is not guaranteeing\nconsistency of writes to disk any longer. 
On the other hand your IDE\nsystem probably never was, because IDE drives just typically turn on\nwrite caching in hardware without telling anyone.\n\nSCSI typically doesn't turn on write caching in the physical drive by\ndefault, as Tom Lane pointed out earlier. Good SCSI has a battery\nbacked up cache, and then it is OK to turn on write caching, because the\ncontroller has enough battery to complete all writes in the event of a\npower failure.\n\nOne thing I recommend is to use ext2 (or almost anything but ext3). \nThere is no real need (or benefit) from having the database on a\njournalled filesystem - the journalling is only trying to give similar\nsorts of guarantees to what the fsync in PostgreSQL is doing.\n\nThe suggestion someone else made regarding use of software raid is\nprobably also a good one if you are trying to use the on-board RAID at\nthe moment.\n\nFinally, I would say that because you are seeing poor performance on one\nbox and great performance on another, you should look at the hardware,\nor at the hardware drivers, for the problem - not so much at PostgreSQL.\n\nOf course if it is application performance you want to achieve, we _can_\nhelp here, but you will need to provide more details of what you are\ntrying to do in your application, including;\n - confirmation that you have done a VACUUM and ANALYZE of all tables\nbefore you start\n - output from EXPLAIN ANALYZE for slow queries\n - anything else you think is useful.\n\nwithout that sort of detail we can only give vague suggestions, like\n\"wrap everything in a transaction\" - excellent advice, certainly, but\nyou can read that in the manual.\n\nThere are no magic bullets, but I am sure most of the people on this\nlist have systems that regularly do way more than 50 inserts / second on\nserver hardware.\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n http://survey.net.nz/ - any questions?\n-------------------------------------------------------------------------\n\n", "msg_date": "Wed, 07 Apr 2004 23:29:42 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:\n> One thing I recommend is to use ext2 (or almost anything but ext3).\n> There is no real need (or benefit) from having the database on a\n> journalled filesystem - the journalling is only trying to give similar\n> sorts of guarantees to what the fsync in PostgreSQL is doing.\n\nThat is not correct assumption. A journalling file system ensures file system \nconsistency even at a cost of loss of some data. And postgresql can not \nguarantee recovery if WAL logs are corrupt. Some months back, there was a \ncase reported where ext2 corrupted WAL and database. BAckup is only solution \nthen..\n\nJournalling file systems are usually very close to ext2 in performance, many a \ntimes lot better. With ext2, you are buying a huge risk.\n\nUnless there are good reason, I would not put a database on ext2. Performance \nisn't one ofthem..\n\n Shridhar\n", "msg_date": "Wed, 7 Apr 2004 17:21:43 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" 
}, { "msg_contents": "Sending again bacuse of MUA error.. Chose a wrong address in From..:-(\n\n Shridhar\n\nOn Wednesday 07 April 2004 17:21, Shridhar Daithankar wrote:\n> On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:\n> > One thing I recommend is to use ext2 (or almost anything but ext3).\n> > There is no real need (or benefit) from having the database on a\n> > journalled filesystem - the journalling is only trying to give similar\n> > sorts of guarantees to what the fsync in PostgreSQL is doing.\n>\n> That is not correct assumption. A journalling file system ensures file\n> system consistency even at a cost of loss of some data. And postgresql can\n> not guarantee recovery if WAL logs are corrupt. Some months back, there was\n> a case reported where ext2 corrupted WAL and database. BAckup is only\n> solution then..\n>\n> Journalling file systems are usually very close to ext2 in performance,\n> many a times lot better. With ext2, you are buying a huge risk.\n>\n> Unless there are good reason, I would not put a database on ext2.\n> Performance isn't one ofthem..\n>\n> Shridhar\n", "msg_date": "Wed, 7 Apr 2004 17:24:41 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Wed, 7 Apr 2004, Andrew McMillan wrote:\n\n> On Wed, 2004-04-07 at 20:56, huang yaqin wrote:\n> > Hello, Richard Huxton,\n> > \t\n> > \t\tYou said turning fsync off may cause losing data, that's terrible.\n> > \tI use SCSI disk, and file system is ext3. I tune postgresql.conf and can't get any improvement. So what can I do?\n> > \tDoes SCSI disk and IDE disk have difference?\n> \n> Yes, turning off fsync means that the database is not guaranteeing\n> consistency of writes to disk any longer. On the other hand your IDE\n> system probably never was, because IDE drives just typically turn on\n> write caching in hardware without telling anyone.\n> \n> SCSI typically doesn't turn on write caching in the physical drive by\n> default, as Tom Lane pointed out earlier. Good SCSI has a battery\n> backed up cache, and then it is OK to turn on write caching, because the\n> controller has enough battery to complete all writes in the event of a\n> power failure.\n\nActually, almost all SCSI drives turn on write caching by default, they \njust don't lie about fsync, so you still have a one update per revolution \nlimit, but other things can be happening while that write is being \ncommited due to the multi-threaded nature of both the SCSI interface and \nthe kernel drivers associated with them\n\nIt would appear the linux kernel hackers are trying to implement the \nmulti-threaded features of the latest ATA spec, so that, at some future \ndate, you could have IDE drives that cache AND tell the truth of their \nsync AND can do more than one thing at a time.\n\n> One thing I recommend is to use ext2 (or almost anything but ext3). \n> There is no real need (or benefit) from having the database on a\n> journalled filesystem - the journalling is only trying to give similar\n> sorts of guarantees to what the fsync in PostgreSQL is doing.\n\nIs this true? 
I was under the impression that without at least meta-data \njournaling postgresql could still be corrupted by power failure.\n\n> The suggestion someone else made regarding use of software raid is\n> probably also a good one if you are trying to use the on-board RAID at\n> the moment.\n\nSome onboard RAID controllers are fairly good (dell's 2600 series have an \nadaptec on board that can have battery backed cache that is ok, the lsi\nmegaraid based one is faster under linux though.) But some of them are \npretty poor performers.\n\n> Finally, I would say that because you are seeing poor performance on one\n> box and great performance on another, you should look at the hardware,\n> or at the hardware drivers, for the problem - not so much at PostgreSQL.\n\nMore than likely, the biggest issue is that the SCSI drives are performing \nproper fsync, while the IDE drives are lying. Definitely a time to look \nat a good caching RAID controller.\n\n", "msg_date": "Wed, 7 Apr 2004 13:52:35 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "scott.marlowe wrote:\n> > One thing I recommend is to use ext2 (or almost anything but ext3). \n> > There is no real need (or benefit) from having the database on a\n> > journalled filesystem - the journalling is only trying to give similar\n> > sorts of guarantees to what the fsync in PostgreSQL is doing.\n> \n> Is this true? I was under the impression that without at least meta-data \n> journaling postgresql could still be corrupted by power failure.\n\nIt is false. ext2 isn't crash-safe, and PostgreSQL needs an intact file\nsystem for WAL recovery.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 7 Apr 2004 18:12:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> scott.marlowe wrote:\n>>> There is no real need (or benefit) from having the database on a\n>>> journalled filesystem - the journalling is only trying to give similar\n>>> sorts of guarantees to what the fsync in PostgreSQL is doing.\n>> \n>> Is this true? I was under the impression that without at least meta-data \n>> journaling postgresql could still be corrupted by power failure.\n\n> It is false. ext2 isn't crash-safe, and PostgreSQL needs an intact file\n> system for WAL recovery.\n\nBut it should be okay to set the filesystem to journal only its own\nmetadata. There's no need for it to journal file contents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Apr 2004 21:31:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > scott.marlowe wrote:\n> >>> There is no real need (or benefit) from having the database on a\n> >>> journalled filesystem - the journalling is only trying to give similar\n> >>> sorts of guarantees to what the fsync in PostgreSQL is doing.\n> >> \n> >> Is this true? I was under the impression that without at least meta-data \n> >> journaling postgresql could still be corrupted by power failure.\n> \n> > It is false. 
ext2 isn't crash-safe, and PostgreSQL needs an intact file\n> > system for WAL recovery.\n> \n> But it should be okay to set the filesystem to journal only its own\n> metadata. There's no need for it to journal file contents.\n\nCan you set ext2 to journal metadata? I didn't know it could do that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 7 Apr 2004 21:33:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> But it should be okay to set the filesystem to journal only its own\n>> metadata. There's no need for it to journal file contents.\n\n> Can you set ext2 to journal metadata? I didn't know it could do that.\n\nNo, ext2 has no journal at all AFAIK. But I believe ext3 has an option\nto journal or not journal file contents, and at least on a Postgres-only\nvolume you'd want to turn that off.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Apr 2004 22:13:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> But it should be okay to set the filesystem to journal only its own\n> >> metadata. There's no need for it to journal file contents.\n> \n> > Can you set ext2 to journal metadata? I didn't know it could do that.\n> \n> No, ext2 has no journal at all AFAIK. But I believe ext3 has an option\n> to journal or not journal file contents, and at least on a Postgres-only\n> volume you'd want to turn that off.\n\nRight, ext3 has that option. I don't think XFS needs it (it does\nmeta-data only by default).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 7 Apr 2004 23:12:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "On Thu, 2004-04-08 at 14:13, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> But it should be okay to set the filesystem to journal only its own\n> >> metadata. There's no need for it to journal file contents.\n> \n> > Can you set ext2 to journal metadata? I didn't know it could do that.\n> \n> No, ext2 has no journal at all AFAIK. But I believe ext3 has an option\n> to journal or not journal file contents, and at least on a Postgres-only\n> volume you'd want to turn that off.\n\nNo, it certainly doesn't.\n\nTo be honest I was not aware that PostgreSQL was susceptible to failure\non non[metadata] journalled filesystems - I was [somewhat vaguely] of\nthe understanding that it would work fine on any filesystem.\n\nAnd obviously, from my original post, we can see that I believed\nmetadata journalling was wasted on it.\n\nIs the 'noatime' option worthwhile? 
Are you saying that PostgreSQL\nshould always be run on a metadata journalled filesystem then, and that\nVFAT, ext2, etc are ++ungood?\n\nThanks,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n A foolish consistency is the hobgoblin of little minds - Shaw\n-------------------------------------------------------------------------\n\n", "msg_date": "Thu, 08 Apr 2004 20:54:39 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "Andrew McMillan wrote:\n> On Thu, 2004-04-08 at 14:13, Tom Lane wrote:\n> \n>> Bruce Momjian <[email protected]> writes:\n>> \n>>> Tom Lane wrote:\n>>> \n>>>> But it should be okay to set the filesystem to journal only its\n>>>> own metadata. There's no need for it to journal file contents.\n>>>> \n>> \n>>> Can you set ext2 to journal metadata? I didn't know it could do\n>>> that.\n>> \n>> No, ext2 has no journal at all AFAIK. But I believe ext3 has an\n>> option to journal or not journal file contents, and at least on a\n>> Postgres-only volume you'd want to turn that off.\n> \n> \n> No, it certainly doesn't.\n\nYou can mount ext3 filesystems as ext2 and they will function just as ext2.\n\n-- \nUntil later, Geoffrey Registered Linux User #108567\nBuilding secure systems in spite of Microsoft\n", "msg_date": "Thu, 08 Apr 2004 07:56:53 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" }, { "msg_contents": "Andrew McMillan wrote:\n> On Thu, 2004-04-08 at 14:13, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > Tom Lane wrote:\n> > >> But it should be okay to set the filesystem to journal only its own\n> > >> metadata. There's no need for it to journal file contents.\n> > \n> > > Can you set ext2 to journal metadata? I didn't know it could do that.\n> > \n> > No, ext2 has no journal at all AFAIK. But I believe ext3 has an option\n> > to journal or not journal file contents, and at least on a Postgres-only\n> > volume you'd want to turn that off.\n> \n> No, it certainly doesn't.\n> \n> To be honest I was not aware that PostgreSQL was susceptible to failure\n> on non[metadata] journalled filesystems - I was [somewhat vaguely] of\n> the understanding that it would work fine on any filesystem.\n\nWe expect the filesystem to come back intact. If it doesn't from an\next2 crash, we can't WAL recover in all cases.\n\n> And obviously, from my original post, we can see that I believed\n> metadata journalling was wasted on it.\n\nNo. UFS file systems don't do journaling, but do metadata fsync, which\nis all we need.\n\n> Is the 'noatime' option worthwhile? Are you saying that PostgreSQL\n\nnoatime might help, not sure, but my guess is that most inode fsync's\nare modifications of mtime, which can't be turned off with amount\noption.\n\n> should always be run on a metadata journalled filesystem then, and that\n> VFAT, ext2, etc are ++ungood?\n\nYep. 
Not sure about VFAT but we do need the filesystem to return after\na crash, obviously, or we can't even get to the xlog directory or the\n/data files to do WAL recovery.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 8 Apr 2004 10:56:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good pc but bad performance,why?" } ]
[ { "msg_contents": "Rather than ask some general, unanswerable question on how to tune my database...I thought I ask where I might find an introduction to...or manual/tutorial for the depths of managing a postgres db. Books? Websites? Assume a basic to intermediate knowledge of DBs in general with a desire to learn about postgres from the ground up. If it makes a difference I'm using a postgres db in a Red Hat Linux OS environment. Thanks!\n\nnid\n\n\n\n\n\n\nRather than ask some general, unanswerable question \non how to tune my database...I thought I ask where I might find an introduction \nto...or manual/tutorial for the depths of managing a postgres db.  \nBooks?  Websites?  Assume a basic to intermediate knowledge of DBs in \ngeneral with a desire to learn about postgres from the ground up.  If it \nmakes a difference I'm using a postgres db in a Red Hat Linux OS \nenvironment.  Thanks!\n \nnid", "msg_date": "Wed, 7 Apr 2004 14:10:01 -0500", "msg_from": "\"Nid\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql educational sources" }, { "msg_contents": "On Wed, 7 Apr 2004, Nid wrote:\n\n> Rather than ask some general, unanswerable question on how to tune my \n> database...I thought I ask where I might find an introduction to...or \n> manual/tutorial for the depths of managing a postgres db. Books? \n> Websites? Assume a basic to intermediate knowledge of DBs in general \n> with a desire to learn about postgres from the ground up. If it makes a \n> difference I'm using a postgres db in a Red Hat Linux OS environment. \n> Thanks!\n\nThe online (adminstration) docs are quite good, and for tuning, look at \nthe excellent tuning document on varlena:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n\n\n", "msg_date": "Wed, 7 Apr 2004 14:27:29 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql educational sources" }, { "msg_contents": "On Wed, 2004-04-07 at 16:27, scott.marlowe wrote:\n> On Wed, 7 Apr 2004, Nid wrote:\n> \n> > Rather than ask some general, unanswerable question on how to tune my \n> > database...I thought I ask where I might find an introduction to...or \n> > manual/tutorial for the depths of managing a postgres db. Books? \n> > Websites? Assume a basic to intermediate knowledge of DBs in general \n> > with a desire to learn about postgres from the ground up. If it makes a \n> > difference I'm using a postgres db in a Red Hat Linux OS environment. \n> > Thanks!\n> \n> The online (adminstration) docs are quite good, and for tuning, look at \n> the excellent tuning document on varlena:\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> \n\nActually I might rather suggest looking at\nhttp://techdocs.postgresql.org/ which has a slew of\nlinks/articles/tutorials regarding development and administration of\npostgresql databases (including a link to Scott's aforementioned doc)\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "08 Apr 2004 11:06:11 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql educational sources" } ]
[ { "msg_contents": "What the statistics are? Where can i view it? where can i find info about \nits field and why are they valuable information to performance?\n\nthanx in advance,\n\nJaime Casanova\n\n_________________________________________________________________\nProtect your PC - get McAfee.com VirusScan Online \nhttp://clinic.mcafee.com/clinic/ibuy/campaign.asp?cid=3963\n\n", "msg_date": "Wed, 07 Apr 2004 21:05:15 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "statistics" }, { "msg_contents": "\nOn 07/04/2004 22:05 Jaime Casanova wrote:\n> What the statistics are? Where can i view it? where can i find info \n> about its field and why are they valuable information to performance?\n> \n> thanx in advance,\n> \n> Jaime Casanova\n\n\nOK. An idiot's guide to statistics by a full-time idiot...\n\nLet's start with a simple premise. I'm a RDBMS (forget that I'm actually \nan idiot for a moment...) and I've been asked for\n\nselect * from foo where bar = 7;\n\nHow do I go about fulfilling the reequest in the most efficient manner? \n(i.e., ASAP!)\n\nOne way might be to read through the whole table and return only those \nrows which match the where criteron - a sequential scan on the table.\n\nBut wait a minute, there is an index on column bar. Could I use this \ninstead? Well, of course, I could use it but I have to keep sight of the \ngoal of returning the data ASAP and I know that the act of reading \nindex/reading table/... will have a performance penalty due to a lot more \nhead movement on the disk. So how do I make chose between a sequential \nscan and an index scan? Let's lokk at a couple of extreme scenarios:\n\n1) let's look at the condition where all or virtually all of the bar \ncolumns are populated wityh the value 7. In this case it would be more \nefficient to read sequentially through the table.\n\n2) the opposite of (1) - very few of the bar columns have the value 7. In \nthis case using the index could be a winner.\n\nSo generalising, I need to be able to estimate whether doing a sequential \nscan is more efficient that an index scan and this comes down to 2 factors:\n\na) the cost of moving the disk heads all over the place (random page cost)\nb) the spread of values in the selecting column(s)\n\n(a) is specfified in postgresql.conf (see archives for much discusion \nabout what the value should be..)\n(b) is determined by the dastardly trick of actually sampling the data in \nthe table!!! That's what analyze does. It samples your table(s) and uses \nthe result to feeede into it's descision about when to flip between \nsequential and index scans.\n\nHope this makes some kind of sense...\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Thu, 8 Apr 2004 00:48:35 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statistics" } ]
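To answer the "where can I view it?" part directly: what ANALYZE gathers is exposed through the pg_stats system view, one row per column. A small sketch using the foo/bar names from Paul's example; any real table and column name goes in their place:

    -- per-column statistics collected by ANALYZE
    SELECT attname, null_frac, n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'foo' AND attname = 'bar';

    -- and the cost knob Paul mentions
    SHOW random_page_cost;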
[ { "msg_contents": "Hi everyone,\n\nI have done some reading on filesystems and I thought to optimize the\nsettings for my PostgreSQL system. I use the ext3 filesystem and have the\nPostgreSQL data and WAL on different physical drives. I made some\nadjustments to my /etc/fstabd file, so it looks like this :\n\n\nLABEL=/ / ext3\nnoatime,data=ordered 1 1\nLABEL=/boot /boot ext3\nnoatime,data=ordered 1 2\nnone /dev/pts devpts gid=5,mode=620\n0 0\nnone /proc proc defaults\n0 0\nnone /dev/shm tmpfs defaults\n0 0\nLABEL=/usr/local/pgsql /usr/local/pgsql ext3\nnoatime,data=writeback 1 2\nLABEL=/usr/local/pgsql /usr/local/pgsql/wal ext3\nnoatime,data=ordered 1 2\n/dev/sda5 swap swap defaults\n0 0\n/dev/cdrom /mnt/cdrom udf,iso9660\nnoauto,owner,kudzu,ro 0 0\n/dev/fd0 /mnt/floppy auto\nnoauto,owner,kudzu 0 0\n\n\nDoes this look OK? My knowledge of filesystems and their (journalling)\noptions is not very broad...\n\nThanks in advance,\nAlexander Priem.\n\n\n\n", "msg_date": "Thu, 8 Apr 2004 09:59:18 +0200 ", "msg_from": "\"Priem, Alexander\" <[email protected]>", "msg_from_op": true, "msg_subject": "data=writeback" }, { "msg_contents": "> LABEL=/usr/local/pgsql /usr/local/pgsql ext3\n> noatime,data=writeback 1 2\n> LABEL=/usr/local/pgsql /usr/local/pgsql/wal ext3\n> noatime,data=ordered 1 2\n\nThe same label mounted on two different mount points is probably I typo?\n\nI'm not sure if data=writeback is ok. I was wondering about the same\nthing after reading the \"good pc but bad performance,why?\" thread.\n\nThis is from man mount:\n\n writeback\n Data ordering is not preserved - data may be written into\n the main file system after its metadata has been commit-\n ted to the journal. This is rumoured to be the highest-\n throughput option. It guarantees internal file system\n integrity, however it can allow old data to appear in\n files after a crash and journal recovery.\n\nHow does this relate to fflush()? Does fflush still garantee \nall data has ben written?\n\nBye, Chris.\n\n\n", "msg_date": "Thu, 08 Apr 2004 11:01:29 +0200", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: data=writeback" } ]
[ { "msg_contents": "> > LABEL=/usr/local/pgsql /usr/local/pgsql ext3\n> > noatime,data=writeback 1 2\n> > LABEL=/usr/local/pgsql /usr/local/pgsql/wal ext3\n> > noatime,data=ordered 1 2\n>\n> The same label mounted on two different mount points is probably I typo?\n\n\nNo, the same label mounted on two different mount points is not a typo. This\nis the way it is in my /etc/fstab.\n\nNote that I did not create this file myself, it was created by the RedHat\nEnterprise Linux 3 ES installer. I created different partitions for the data\ndirectory (/usr/local/pgsql) and the wal directory (/usr/local/pgsql/wal)\nusing the installer and this is how the /etc/fstab file ended up.\n\nWhy, is this bad? They use the same label, but use different mount points?\nCan this cause problems?\n\n", "msg_date": "Thu, 8 Apr 2004 11:26:17 +0200 ", "msg_from": "\"Priem, Alexander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: data=writeback" }, { "msg_contents": "\n> > > LABEL=/usr/local/pgsql /usr/local/pgsql ext3\n> > > noatime,data=writeback 1 2\n> > > LABEL=/usr/local/pgsql /usr/local/pgsql/wal ext3\n> > > noatime,data=ordered 1 2\n> >\n> > The same label mounted on two different mount points is probably I typo?\n> \n> \n> No, the same label mounted on two different mount points is not a typo. This\n> is the way it is in my /etc/fstab.\n> \n> Note that I did not create this file myself, it was created by the RedHat\n> Enterprise Linux 3 ES installer. I created different partitions for the data\n> directory (/usr/local/pgsql) and the wal directory (/usr/local/pgsql/wal)\n> using the installer and this is how the /etc/fstab file ended up.\n> \n> Why, is this bad? They use the same label, but use different mount points?\n> Can this cause problems?\n\nMmm... how can the mounter distinguish the two partitions?\n\nMaybe I'm missing a concept here, but I thought labels must uniquely\nidentify partitions?\n\nSeems suspicious to me...\n\nDoes it work? When you give just \"mount\" at the command line what output\ndo you get?\n\nBye, Chris.\n\n\n", "msg_date": "Thu, 08 Apr 2004 11:44:29 +0200", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: data=writeback" } ]
[ { "msg_contents": "Hello,\n\nI've followed the last discussion about the particular case of\n\"select count(*)\"s on large tables being somewhat slow.\n\nI've seen also this issue already on the todo list, so I know\nit is not a simple question.\nThis problem arises for me on very large tables, which I mean\nstarting from 1 million rows and above.\n\nThe alternative solution I tried, that has an optimal\nspeed up, unfortunately is not a way out, and it is based\non \"EXPLAIN SELECT count(*)\" output parsing, which\nis obviously *not* reliable.\n\nThe times always get better doing a vacuum (and eventually\nreindex) of the table, and they slowly lower again.\n\nIs there an estimate time for this issue to be resolved?\nCan I help in some way (code, test cases, ...)?\n\n-- \nCosimo\n\n", "msg_date": "Thu, 08 Apr 2004 11:43:49 +0200", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": true, "msg_subject": "select count(*) on large tables" }, { "msg_contents": "On Thu, 8 Apr 2004, Cosimo Streppone wrote:\n\n> The alternative solution I tried, that has an optimal\n> speed up, unfortunately is not a way out, and it is based\n> on \"EXPLAIN SELECT count(*)\" output parsing, which\n> is obviously *not* reliable.\n\nTry this to get the estimate:\n\n SELECT relname, reltuples from pg_class order by relname;\n\n> The times always get better doing a vacuum (and eventually\n> reindex) of the table, and they slowly lower again.\n\nYes, the estimate is updated by the analyze.\n\n> Is there an estimate time for this issue to be resolved?\n\nIt's not so easy to \"fix\". The naive fixes makes other operations slower,\nmost notably makes things less concurrent which is bad since it wont scale \nas good for many users then.\n\nYou can always keep the count yourself and have some triggers that update \nthe count on each insert and delete on the table. It will of course make \nall inserts and deletes slower, but if you count all rows often maybe it's \nworth it. Most people do not need to count all rows in a table anyway. You \nusually count all rows such as this and that (some condition).\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Thu, 8 Apr 2004 12:54:29 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) on large tables" }, { "msg_contents": "Cosimo Streppone <[email protected]> writes:\n> Is there an estimate time for this issue to be resolved?\n\nApproximately never. It's a fundamental feature of Postgres' design.\n\nAs noted by Dennis, you can look at the pg_class statistics if a recent\nestimate is good enough, or you can build user-level tracking tools if\nyou'd rather have fast count(*) than concurrent update capability. But\ndon't sit around waiting for the developers to \"fix this bug\", because\nit isn't a bug and it isn't going to be fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Apr 2004 10:09:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) on large tables " } ]
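A rough sketch of the trigger-kept count Dennis mentions, assuming plpgsql is already installed in the database (createlang plpgsql) and using invented names (bigtable, row_counts). Every insert and delete gets a little slower and concurrent writers will contend on the single counter row, which is exactly the trade-off described above:

    CREATE TABLE row_counts (tablename text PRIMARY KEY, n bigint);
    INSERT INTO row_counts SELECT 'bigtable', count(*) FROM bigtable;

    CREATE FUNCTION bigtable_count_trig() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE row_counts SET n = n + 1 WHERE tablename = ''bigtable'';
        ELSE
            UPDATE row_counts SET n = n - 1 WHERE tablename = ''bigtable'';
        END IF;
        RETURN NULL;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER bigtable_count
        AFTER INSERT OR DELETE ON bigtable
        FOR EACH ROW EXECUTE PROCEDURE bigtable_count_trig();

    -- then, instead of count(*) over the whole table:
    SELECT n FROM row_counts WHERE tablename = 'bigtable';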
[ { "msg_contents": "> > > > LABEL=/usr/local/pgsql /usr/local/pgsql ext3\n> > > > noatime,data=writeback 1 2\n> > > > LABEL=/usr/local/pgsql /usr/local/pgsql/wal ext3\n> > > > noatime,data=ordered 1 2\n> > >\n> > > The same label mounted on two different mount points is probably I \n> > > typo?\n> > \n> > \n> > No, the same label mounted on two different mount points is not a \n> > typo. This is the way it is in my /etc/fstab.\n> > \n> > Note that I did not create this file myself, it was created by the \n> > RedHat Enterprise Linux 3 ES installer. I created different partitions \n> > for the data directory (/usr/local/pgsql) and the wal directory \n> > (/usr/local/pgsql/wal) using the installer and this is how the \n> > /etc/fstab file ended up.\n> > \n> > Why, is this bad? They use the same label, but use different mount \n> > points? Can this cause problems?\n>\n> Mmm... how can the mounter distinguish the two partitions?\n>\n> Maybe I'm missing a concept here, but I thought labels must uniquely\nidentify partitions?\n>\n> Seems suspicious to me...\n>\n> Does it work? When you give just \"mount\" at the command line what output\ndo you get?\n>\n> Bye, Chris.\n\nWhen I give \"mount\" at the command line, everything looks just fine :\n\n/dev/sda2 on / type ext3 (rw,noatime,data=ordered)\nnone on /proc type proc (rw)\nusbdevfs on /proc/bus/usb type usbdevfs (rw)\n/dev/sda1 on /boot type ext3 (rw,noatime,data=ordered)\nnone on /dev/pts type devpts (rw,gid=5,mode=620)\nnone on /dev/shm type tmpfs (rw)\n/dev/sdb1 on /usr/local/pgsql type ext3 (rw,noatime,data=writeback)\n/dev/sda3 on /usr/local/pgsql/wal type ext3 (rw,noatime,data=ordered)\n\nIt looks like the labels are not really used, just the mount-points. Or\ncould this cause other problems I am not aware of? Everything seems to be\nworking just fine, for several months now...\n\n\n", "msg_date": "Thu, 8 Apr 2004 12:10:10 +0200 ", "msg_from": "\"Priem, Alexander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: data=writeback" }, { "msg_contents": "> When I give \"mount\" at the command line, everything looks just fine :\n>\n> /dev/sda2 on / type ext3 (rw,noatime,data=ordered)\n> none on /proc type proc (rw)\n> usbdevfs on /proc/bus/usb type usbdevfs (rw)\n> /dev/sda1 on /boot type ext3 (rw,noatime,data=ordered)\n> none on /dev/pts type devpts (rw,gid=5,mode=620)\n> none on /dev/shm type tmpfs (rw)\n> /dev/sdb1 on /usr/local/pgsql type ext3 (rw,noatime,data=writeback)\n> /dev/sda3 on /usr/local/pgsql/wal type ext3 (rw,noatime,data=ordered)\n>\n> It looks like the labels are not really used, just the mount-points. Or\n> could this cause other problems I am not aware of? Everything seems to\n> be working just fine, for several months now...\n\nProbably /dev/sdb1 and /dev/sda3 have the same labels and mount\nsimply mounts them in a consistent way according to some logic\nwe're not aware of.\n\nI'd say: if it works don't touch it ;)\n\nWhat remains unresolved is the question whether data=writeback is ok\nor not. 
We'll see if somebody has more information on that one...\n\nBye, Chris.\n\n\n\n\n\n\n", "msg_date": "Thu, 8 Apr 2004 15:01:06 +0200 (CEST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: data=writeback" }, { "msg_contents": "[email protected] wrote:\n> > When I give \"mount\" at the command line, everything looks just fine :\n> >\n> > /dev/sda2 on / type ext3 (rw,noatime,data=ordered)\n> > none on /proc type proc (rw)\n> > usbdevfs on /proc/bus/usb type usbdevfs (rw)\n> > /dev/sda1 on /boot type ext3 (rw,noatime,data=ordered)\n> > none on /dev/pts type devpts (rw,gid=5,mode=620)\n> > none on /dev/shm type tmpfs (rw)\n> > /dev/sdb1 on /usr/local/pgsql type ext3 (rw,noatime,data=writeback)\n> > /dev/sda3 on /usr/local/pgsql/wal type ext3 (rw,noatime,data=ordered)\n> >\n> > It looks like the labels are not really used, just the mount-points. Or\n> > could this cause other problems I am not aware of? Everything seems to\n> > be working just fine, for several months now...\n> \n> Probably /dev/sdb1 and /dev/sda3 have the same labels and mount\n> simply mounts them in a consistent way according to some logic\n> we're not aware of.\n> \n> I'd say: if it works don't touch it ;)\n> \n> What remains unresolved is the question whether data=writeback is ok\n> or not. We'll see if somebody has more information on that one...\n\nShould be fine. We don't continue until fsync() writes all the data. \nWe don't care what order it is written in, just that is all written\nbefore we continue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 8 Apr 2004 10:52:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: data=writeback" } ]
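One way to remove the ambiguity of two identical LABEL= entries, given that the mount output above shows which devices really ended up where, is to mount by device name (or to give each partition its own distinct label with e2label first). A sketch of just the two fstab lines, based on that mount output:

    /dev/sdb1    /usr/local/pgsql        ext3    noatime,data=writeback  1 2
    /dev/sda3    /usr/local/pgsql/wal    ext3    noatime,data=ordered    1 2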
[ { "msg_contents": "Hello,\n\nI've just started using the tsearch2 system. I'm VERY impressed by the \nspeed.\n\nI've got one question about query planning. Is it understandable to \nhave the query plan estimation be off by a couple of orders of \nmagnitude? Or, is it the fact that the cost estimation is small to \nbegin with that the error between the actual and the estimated is \n\"normal\"?\n\nHere is my explain analyze run immediately after a vacuum full analyze:\n\nkjv=# vacuum full analyze;\nVACUUM\nkjv=# explain analyze select * from kjv where idxFTI @@ \n'corinth'::tsquery;\n QUERY PLAN\n------------------------------------------------------------------------ \n---------------------------------------------\n Index Scan using idxfti_idx on kjv (cost=0.00..125.44 rows=32 \nwidth=193) (actual time=0.796..1.510 rows=6 loops=1)\n Index Cond: (idxfti @@ '\\'corinth\\''::tsquery)\n Filter: (idxfti @@ '\\'corinth\\''::tsquery)\n Total runtime: 1.679 ms\n(4 rows)\n\nThanks!\nMark\n\n", "msg_date": "Thu, 8 Apr 2004 12:33:28 -0500", "msg_from": "Mark Lubratt <[email protected]>", "msg_from_op": true, "msg_subject": "tsearch query plan" }, { "msg_contents": "Mark,\n\n> I've got one question about query planning. Is it understandable to \n> have the query plan estimation be off by a couple of orders of \n> magnitude? Or, is it the fact that the cost estimation is small to \n> begin with that the error between the actual and the estimated is \n> \"normal\"?\n\nWell, your example is not \"a couple orders of magnitude\". 6 vs. 32 is \nactually pretty good accuracy. \n\nNow, 6 vs 192 would potentially be a problem, let alone 32 vs 13,471.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 8 Apr 2004 16:33:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch query plan" }, { "msg_contents": "\nOn Apr 8, 2004, at 6:33 PM, Josh Berkus wrote:\n\n> Mark,\n>\n>> I've got one question about query planning. Is it understandable to\n>> have the query plan estimation be off by a couple of orders of\n>> magnitude? Or, is it the fact that the cost estimation is small to\n>> begin with that the error between the actual and the estimated is\n>> \"normal\"?\n>\n> Well, your example is not \"a couple orders of magnitude\". 6 vs. 32 is\n> actually pretty good accuracy.\n>\n> Now, 6 vs 192 would potentially be a problem, let alone 32 vs 13,471.\n>\n\nI guess I was looking more at the cost estimate and not so much at the \nrows estimate. I agree that the row estimate wasn't too bad. But the \ncost estimate seems way out of line.\n\nI'm somewhat new to examining explain analyze output and I'm looking at \nthis as more of an education, since the speed is certainly good anyway. \n I just expected the cost estimate to be more in line especially \nimmediately after an analyze.\n\n-Mark\n\n", "msg_date": "Thu, 8 Apr 2004 22:28:37 -0500", "msg_from": "Mark Lubratt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsearch query plan" }, { "msg_contents": "Mark Lubratt <[email protected]> writes:\n> I guess I was looking more at the cost estimate and not so much at the \n> rows estimate. I agree that the row estimate wasn't too bad. But the \n> cost estimate seems way out of line.\n\nThe cost estimates are not even in the same units as the actual runtime.\nCost is in an arbitrary scale in which 1 unit = 1 sequential disk block\nfetch. It is unknown what this might equate to on your machine ... 
but\nit's quite unlikely that it's 1.0 millisecond. The thing to look at\nwhen considering EXPLAIN results is whether the ratios of different cost\nestimates are proportional to the actual runtimes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Apr 2004 01:26:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch query plan " } ]
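To see Tom's point about cost units concretely, one rough calibration is to compare the planner's estimate for a plain sequential scan (dominated by relpages, i.e. sequential block fetches) against the measured runtime on the same table. This is only a sketch using the kjv table from the thread; the resulting ms-per-cost-unit figure is machine-specific and approximate:

    SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'kjv';
    EXPLAIN ANALYZE SELECT count(*) FROM kjv;
    -- dividing the actual total runtime (ms) by the estimated cost of the seq scan
    -- gives a very rough ms-per-cost-unit figure for this particular machine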
[ { "msg_contents": "Thanks for the answer.\n\nI know the question was to primitive (it claims: i have no idea about \ndatabases).\nBut i simply didn't find the answer and if a don't ask i won't learn.\n\nSomeday i will talk with Tom Lane about how to improve the planner but until \nthat day comes i have a lot of technical things to learn.\n\n_________________________________________________________________\nAdd photos to your messages with MSN 8. Get 2 months FREE*. \nhttp://join.msn.com/?page=features/featuredemail\n\n", "msg_date": "Thu, 08 Apr 2004 22:42:25 +0000", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: statistics" } ]
[ { "msg_contents": "Doing an upgrade from 7.3.6 to 7.4.2 and I keep seeing the recycled\ntransaction log about every 2 mins. For future upgrades, is there\nsomething that can be set so that I don't have as many recycles? It seems\nto slow down the importing of data.\nHere's my current settings:\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 32 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 30 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 600 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n\nTIA\nPatrick Hatcher\n\n\n", "msg_date": "Fri, 9 Apr 2004 10:18:12 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Upgrading question (recycled transaction log)" }, { "msg_contents": "\"Patrick Hatcher\" <[email protected]> writes:\n> Doing an upgrade from 7.3.6 to 7.4.2 and I keep seeing the recycled\n> transaction log about every 2 mins. For future upgrades, is there\n> something that can be set so that I don't have as many recycles?\n\nIncreasing checkpoint_segments ... but you seem to have that pretty high\nalready.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Apr 2004 14:03:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrading question (recycled transaction log) " } ]
[ { "msg_contents": "Hey,\n\nHas anyone done performance tests for OpenFTS on a really large database? I \nwas speaking at PerlMongers and somebody asked.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 9 Apr 2004 12:46:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Performance data for OpenFTS?" } ]
[ { "msg_contents": "Hi,\n\nI test many times the foolowing query.\n\ndps=# explain analyze select next_index_time from url order by \nnext_index_time desc limit 1;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----\n Limit (cost=0.00..2.62 rows=1 width=4) (actual time=56.615..56.616 \nrows=1 loops=1)\n -> Index Scan Backward using url_next_index_time on url \n(cost=0.00..768529.55 rows=293588 width=4) (actual time=56.610..56.610 \nrows=1 loops=1)\n Total runtime: 56.669 ms\n(3 rows)\n\ndps=# explain analyze select next_index_time from url order by \nnext_index_time asc limit 1;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-\n Limit (cost=0.00..2.62 rows=1 width=4) (actual \ntime=94879.636..94879.637 rows=1 loops=1)\n -> Index Scan using url_next_index_time on url \n(cost=0.00..768529.55 rows=293588 width=4) (actual \ntime=94879.631..94879.631 rows=1 loops=1)\n Total runtime: 94879.688 ms\n(3 rows)\n\nHow to optimize the last query ? (~ 2000 times slower than the first \none)\nI suppose there is some odd distribution of data in the index ?\nIs the solution to reindex data ?\n\nCordialement,\nJean-G�rard Pailloncy\n", "msg_date": "Mon, 12 Apr 2004 11:26:55 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Index Backward Scan fast / Index Scan slow !" }, { "msg_contents": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n> How to optimize the last query ? (~ 2000 times slower than the first \n> one)\n> I suppose there is some odd distribution of data in the index ?\n\nLooks to me like a whole lot of dead rows at the left end of the index.\nHave you VACUUMed this table lately? It would be interesting to see\nwhat VACUUM VERBOSE has to say about it.\n\n> Is the solution to reindex data ?\n\nIn 7.4 a VACUUM should be sufficient ... or at least, if it isn't\nI'd like to know why not before you destroy the evidence by reindexing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Apr 2004 08:28:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Backward Scan fast / Index Scan slow ! " }, { "msg_contents": "Hi,\n\n>> How to optimize the last query ? 
(~ 2000 times slower than the first\n>> one)\n>> I suppose there is some odd distribution of data in the index ?\n>\n> Looks to me like a whole lot of dead rows at the left end of the index.\n> Have you VACUUMed this table lately?\n From pg_autovacuum:\n[2004-04-10 05:45:39 AM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-10 11:13:25 AM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-10 03:12:14 PM] Performing: VACUUM ANALYZE \"public\".\"url\"\n[2004-04-11 04:58:29 AM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-11 03:48:25 PM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-11 09:21:31 PM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-12 03:24:06 AM] Performing: ANALYZE \"public\".\"url\"\n[2004-04-12 07:20:08 AM] Performing: VACUUM ANALYZE \"public\".\"url\"\n\n> It would be interesting to see\n> what VACUUM VERBOSE has to say about it.\ndps=# VACUUM VERBOSE url;\nINFO: vacuuming \"public.url\"\nINFO: index \"url_pkey\" now contains 348972 row versions in 2344 pages\nDETAIL: 229515 index row versions were removed.\n41 index pages have been deleted, 41 are currently reusable.\nCPU 0.32s/1.40u sec elapsed 70.66 sec.\nINFO: index \"url_crc\" now contains 215141 row versions in 497 pages\nDETAIL: 108343 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.06s/0.96u sec elapsed 9.13 sec.\nINFO: index \"url_seed\" now contains 348458 row versions in 2987 pages\nDETAIL: 229515 index row versions were removed.\n345 index pages have been deleted, 345 are currently reusable.\nCPU 0.40s/2.38u sec elapsed 74.26 sec.\nINFO: index \"url_referrer\" now contains 349509 row versions in 1964 \npages\nDETAIL: 229515 index row versions were removed.\n65 index pages have been deleted, 65 are currently reusable.\nCPU 0.34s/1.53u sec elapsed 127.37 sec.\nINFO: index \"url_next_index_time\" now contains 349519 row versions in \n3534 pages\nDETAIL: 229515 index row versions were removed.\n3071 index pages have been deleted, 2864 are currently reusable.\nCPU 0.32s/0.67u sec elapsed 76.25 sec.\nINFO: index \"url_status\" now contains 349520 row versions in 3465 pages\nDETAIL: 229515 index row versions were removed.\n2383 index pages have been deleted, 2256 are currently reusable.\nCPU 0.35s/0.85u sec elapsed 89.25 sec.\nINFO: index \"url_bad_since_time\" now contains 349521 row versions in \n2017 pages\nDETAIL: 229515 index row versions were removed.\n38 index pages have been deleted, 38 are currently reusable.\nCPU 0.54s/1.46u sec elapsed 83.77 sec.\nINFO: index \"url_hops\" now contains 349620 row versions in 3558 pages\nDETAIL: 229515 index row versions were removed.\n1366 index pages have been deleted, 1356 are currently reusable.\nCPU 0.43s/0.91u sec elapsed 132.14 sec.\nINFO: index \"url_siteid\" now contains 350551 row versions in 3409 pages\nDETAIL: 229515 index row versions were removed.\n2310 index pages have been deleted, 2185 are currently reusable.\nCPU 0.35s/1.01u sec elapsed 85.08 sec.\nINFO: index \"url_serverid\" now contains 350552 row versions in 3469 \npages\nDETAIL: 229515 index row versions were removed.\n1014 index pages have been deleted, 1009 are currently reusable.\nCPU 0.54s/1.01u sec elapsed 120.40 sec.\nINFO: index \"url_url\" now contains 346563 row versions in 6494 pages\nDETAIL: 213608 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.35s/2.07u sec elapsed 285.05 sec.\nINFO: index \"url_last_mod_time\" now contains 346734 row versions in \n1106 pages\nDETAIL: 213608 index 
row versions were removed.\n27 index pages have been deleted, 17 are currently reusable.\nCPU 0.17s/0.95u sec elapsed 17.92 sec.\nINFO: \"url\": removed 229515 row versions in 4844 pages\nDETAIL: CPU 0.53s/1.26u sec elapsed 375.64 sec.\nINFO: \"url\": found 229515 removable, 310913 nonremovable row versions \nin 26488 pages\nDETAIL: 29063 dead row versions cannot be removed yet.\nThere were 3907007 unused item pointers.\n192 pages are entirely empty.\nCPU 7.78s/17.09u sec elapsed 3672.29 sec.\nINFO: vacuuming \"pg_toast.pg_toast_127397204\"\nINFO: index \"pg_toast_127397204_index\" now contains 0 row versions in \n1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.06 sec.\nINFO: \"pg_toast_127397204\": found 0 removable, 0 nonremovable row \nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\nVACUUM\n\n>> Is the solution to reindex data ?\n>\n> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't\n> I'd like to know why not before you destroy the evidence by reindexing.\nYes, of course.\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Mon, 12 Apr 2004 21:02:02 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Backward Scan fast / Index Scan slow ! " }, { "msg_contents": "[ Ah, I just got to your message with the VACUUM VERBOSE results ... ]\n\n=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n> dps=# VACUUM VERBOSE url;\n> INFO: index \"url_next_index_time\" now contains 349519 row versions in \n> 3534 pages\n> DETAIL: 229515 index row versions were removed.\n> 3071 index pages have been deleted, 2864 are currently reusable.\n> CPU 0.32s/0.67u sec elapsed 76.25 sec.\n\nHm, this is odd. That says you've got 349519 live index entries in only\n463 actively-used index pages, or an average of 754 per page, which\nAFAICS could not fit in an 8K page. Are you using a nondefault value of\nBLCKSZ? If so what?\n\nIf you *are* using default BLCKSZ then this index must be corrupt, and\nwhat you probably need to do is REINDEX it. But before you do that,\ncould you send me a copy of the index file?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Apr 2004 17:23:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Backward Scan fast / Index Scan slow ! " }, { "msg_contents": "> Hm, this is odd. That says you've got 349519 live index entries in \n> only\n> 463 actively-used index pages, or an average of 754 per page, which\n> AFAICS could not fit in an 8K page. Are you using a nondefault value \n> of\n> BLCKSZ? If so what?\nSorry, I forgot to specify I use BLCKSZ of 32768, the same blokck's \nsize for newfs, the same for RAID slice's size.\nI test the drive sometimes ago, and found a speed win if the slice size \nthe disk block size and the read block size was the same.\n\nI do not think that a different BLCKSZ should exhibit a slowdown as the \none I found.\n\n> If you *are* using default BLCKSZ then this index must be corrupt, and\n> what you probably need to do is REINDEX it. 
But before you do that,\n> could you send me a copy of the index file?\nDo you want the index file now, or may I try something before?\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Tue, 13 Apr 2004 13:12:36 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Backward Scan fast / Index Scan slow ! " }, { "msg_contents": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n>> Are you using a nondefault value of\n>> BLCKSZ? If so what?\n\n> Sorry, I forgot to specify I use BLCKSZ of 32768,\n\nOkay, the numbers are sensible then. The index density seems a bit low\n(754 entries/page where the theoretical ideal would be about 1365) but\nnot really out-of-line.\n\n>> could you send me a copy of the index file?\n\n> Do you want the index file now, or may I try something before?\n\nIf you're going to reindex, please do send me a copy of the file first.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Apr 2004 09:47:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Backward Scan fast / Index Scan slow ! " } ]
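The index-density arithmetic Tom does above can be approximated without a full VACUUM VERBOSE by looking at pg_class; the index name is the one from this thread, and the figures are only as fresh as the last VACUUM/ANALYZE, so treat this as a rough check rather than a diagnosis:

    SELECT relname, reltuples, relpages,
           reltuples / relpages AS entries_per_page
      FROM pg_class
     WHERE relname = 'url_next_index_time' AND relpages > 0;
    -- if live entries crowd into a few pages while thousands of pages sit deleted,
    -- a 7.4 VACUUM should reclaim them; REINDEX INDEX url_next_index_time; is the
    -- fallback once the evidence has been preserved, as requested above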
[ { "msg_contents": "\nGreetings,\n\nIs there any performance penalty of having too many columns in\na table in terms of read and write speeds.\n\nTo order to keep operational queries simple (avoid joins) we plan to\nadd columns in the main customer dimension table.\n\nAdding more columns also means increase in concurrency in the table\nas more and more applications will access the same table.\n\nAny ideas if its better to split the table application wise or is it ok?\n\n\n\nRegds\nmallah.\n", "msg_date": "Mon, 12 Apr 2004 17:24:17 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Effect of too many columns" }, { "msg_contents": "On Mon, Apr 12, 2004 at 17:24:17 +0530,\n Rajesh Kumar Mallah <[email protected]> wrote:\n> \n> Greetings,\n> \n> Is there any performance penalty of having too many columns in\n> a table in terms of read and write speeds.\n> \n> To order to keep operational queries simple (avoid joins) we plan to\n> add columns in the main customer dimension table.\n> \n> Adding more columns also means increase in concurrency in the table\n> as more and more applications will access the same table.\n> \n> Any ideas if its better to split the table application wise or is it ok?\n\nThis is normally a bad idea. If you properly implement constraints in\nwhat is effectively a materialized view, you might end up with a slower\nsystem, depending on your mix of queries. (Generally updating will take\nmore resources.) So you probably want to test your new design under a\nsimulated normal load to see if it actually speeds things up in your\ncase before making the change.\n", "msg_date": "Mon, 12 Apr 2004 07:17:40 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effect of too many columns" } ]
[ { "msg_contents": "We have a large database which recently increased dramatically due to a\nchange in our insert program allowing all entries.\nPWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where relname\n= 'forecastelement';\n relname | relfilenode | reltuples\n-----------------+-------------+-------------\n forecastelement | 361747866 | 4.70567e+08\n\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n version | character varying(99) |\n origin | character varying(10) |\n timezone | character varying(99) |\n region_id | character varying(20) |\n wx_element | character varying(99) |\n value | character varying(99) |\n flag | character(3) |\n units | character varying(99) |\n valid_time | timestamp without time zone |\n issue_time | timestamp without time zone |\n next_forecast | timestamp without time zone |\n reception_time | timestamp without time zone |\n\nThe program is supposed to check to ensure that all fields but the\nreception_time are unique using a select statement, and if so, insert it.\nDue an error in a change, reception time was included in the select to check\nfor duplicates. The reception_time is created by a program creating the dat\nfile to insert. \nEssentially letting all duplicate files to be inserted.\n\nI tried the delete query below.\nPWFPM_DEV=# delete from forecastelement where oid not in (select min(oid)\nfrom forecastelement group by\nversion,origin,timezone,region_id,wx_element,value,flag,units,valid_time,iss\nue_time,next_forecast);\nIt ran for 3 days creating what I assume is an index in pgsql_tmp of the\ngroup by statement. \nThe query ended up failing with \"dateERROR:write failed\".\nWell the long weekend is over and we do not have the luxury of trying this\nagain. \nSo I was thinking maybe of doing the deletion in chunks, perhaps based on\nreception time.\nAre there any suggestions for a better way to do this, or using multiple\nqueries to delete selectively a week at a time based on the reception_time.\nI would say there are a lot of duplicate entries between mid march to the\nfirst week of April.\n\n\n", "msg_date": "Mon, 12 Apr 2004 10:39:22 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Deleting certain duplicates" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n \n \n> So I was thinking maybe of doing the deletion in chunks, perhaps based on\n> reception time.\n> Are there any suggestions for a better way to do this, or using multiple\n> queries to delete selectively a week at a time based on the reception_time.\n> I would say there are a lot of duplicate entries between mid march to the\n> first week of April.\n \nYou are on the right track, in that dividing up the table will help. However,\nyou cannot divide on the reception_time as that is the unique column. Analyze\nyour data and divide on a row with a fairly uniform distribution over the\ntime period in question. Then copy a segment out, clean it up, and put it\nback in. 
Make sure there is an index on the column in question, of course.\n \nFor example, if 1/10 of the table has a \"units\" of 12, you could do something\nlike this:\n \nCREATE INDEX units_dev ON forecastelement (units);\n \nCREATE TEMPORARY TABLE units_temp AS SELECT * FROM forecastelement WHERE units='12';\n \nCREATE INDEX units_oid_index ON units_temp(oid);\n \n(Delete out duplicate rows from units_temp using your previous query or something else)\n \nDELETE FROM forecastelement WHERE units='12';\n \nINSERT INTO forecastelement SELECT * FROM units_temp;\n \nDELETE FROM units_temp;\n \nRepeat as needed until all rows are done. Subsequent runs can be done by doing a\n \nINSERT INTO units_temp SELECT * FROM forecastelement WHERE units='...'\n \nand skipping the CREATE INDEX steps.\n \nOn the other hand, your original deletion query may work as is, with the addition\nof an oid index. Perhaps try an EXPLAIN on it.\n \n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200404200706\n \n-----BEGIN PGP SIGNATURE-----\n \niD8DBQFAhQVWvJuQZxSWSsgRAvLEAKDCVcX3Llm8JgszI/BBC1SobtjVawCfVGKu\nERcV5J2JolwgZRhMbXnNM90=\n=JqET\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 20 Apr 2004 11:15:58 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deleting certain duplicates" } ]
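Greg's closing suggestion can be sketched out as follows; the index name is invented, everything else is taken from the thread, and the point is only to let EXPLAIN show whether the single-pass delete becomes tractable before committing another multi-day run to it:

    CREATE INDEX forecastelement_oid_idx ON forecastelement (oid);
    EXPLAIN
    DELETE FROM forecastelement
     WHERE oid NOT IN (SELECT min(oid) FROM forecastelement
                        GROUP BY version, origin, timezone, region_id, wx_element, value,
                                 flag, units, valid_time, issue_time, next_forecast);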
[ { "msg_contents": "The index is\nIndexes:\n \"forecastelement_rwv_idx\" btree (region_id, wx_element, valid_time)\n\n-----Original Message-----\nFrom: Shea,Dan [CIS] [mailto:[email protected]]\nSent: Monday, April 12, 2004 10:39 AM\nTo: Postgres Performance\nSubject: [PERFORM] Deleting certain duplicates\n\n\nWe have a large database which recently increased dramatically due to a\nchange in our insert program allowing all entries.\nPWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where relname\n= 'forecastelement';\n relname | relfilenode | reltuples\n-----------------+-------------+-------------\n forecastelement | 361747866 | 4.70567e+08\n\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n version | character varying(99) |\n origin | character varying(10) |\n timezone | character varying(99) |\n region_id | character varying(20) |\n wx_element | character varying(99) |\n value | character varying(99) |\n flag | character(3) |\n units | character varying(99) |\n valid_time | timestamp without time zone |\n issue_time | timestamp without time zone |\n next_forecast | timestamp without time zone |\n reception_time | timestamp without time zone |\n\nThe program is supposed to check to ensure that all fields but the\nreception_time are unique using a select statement, and if so, insert it.\nDue an error in a change, reception time was included in the select to check\nfor duplicates. The reception_time is created by a program creating the dat\nfile to insert. \nEssentially letting all duplicate files to be inserted.\n\nI tried the delete query below.\nPWFPM_DEV=# delete from forecastelement where oid not in (select min(oid)\nfrom forecastelement group by\nversion,origin,timezone,region_id,wx_element,value,flag,units,valid_time,iss\nue_time,next_forecast);\nIt ran for 3 days creating what I assume is an index in pgsql_tmp of the\ngroup by statement. \nThe query ended up failing with \"dateERROR:write failed\".\nWell the long weekend is over and we do not have the luxury of trying this\nagain. 
\nSo I was thinking maybe of doing the deletion in chunks, perhaps based on\nreception time.\nAre there any suggestions for a better way to do this, or using multiple\nqueries to delete selectively a week at a time based on the reception_time.\nI would say there are a lot of duplicate entries between mid march to the\nfirst week of April.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n", "msg_date": "Mon, 12 Apr 2004 11:18:31 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deleting certain duplicates" }, { "msg_contents": "Shea,Dan [CIS] wrote:\n\n>The index is\n>Indexes:\n> \"forecastelement_rwv_idx\" btree (region_id, wx_element, valid_time)\n>\n>-----Original Message-----\n>From: Shea,Dan [CIS] [mailto:[email protected]]\n>Sent: Monday, April 12, 2004 10:39 AM\n>To: Postgres Performance\n>Subject: [PERFORM] Deleting certain duplicates\n>\n>\n>We have a large database which recently increased dramatically due to a\n>change in our insert program allowing all entries.\n>PWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where relname\n>= 'forecastelement';\n> relname | relfilenode | reltuples\n>-----------------+-------------+-------------\n> forecastelement | 361747866 | 4.70567e+08\n>\n> Column | Type | Modifiers\n>----------------+-----------------------------+-----------\n> version | character varying(99) |\n> origin | character varying(10) |\n> timezone | character varying(99) |\n> region_id | character varying(20) |\n> wx_element | character varying(99) |\n> value | character varying(99) |\n> flag | character(3) |\n> units | character varying(99) |\n> valid_time | timestamp without time zone |\n> issue_time | timestamp without time zone |\n> next_forecast | timestamp without time zone |\n> reception_time | timestamp without time zone |\n>\n>The program is supposed to check to ensure that all fields but the\n>reception_time are unique using a select statement, and if so, insert it.\n>Due an error in a change, reception time was included in the select to check\n>for duplicates. The reception_time is created by a program creating the dat\n>file to insert. \n>Essentially letting all duplicate files to be inserted.\n>\n>I tried the delete query below.\n>PWFPM_DEV=# delete from forecastelement where oid not in (select min(oid)\n>from forecastelement group by\n>version,origin,timezone,region_id,wx_element,value,flag,units,valid_time,iss\n>ue_time,next_forecast);\n>It ran for 3 days creating what I assume is an index in pgsql_tmp of the\n>group by statement. \n>The query ended up failing with \"dateERROR:write failed\".\n>Well the long weekend is over and we do not have the luxury of trying this\n>again. 
\n>So I was thinking maybe of doing the deletion in chunks, perhaps based on\n>reception time.\n> \n>\n\nits more of an sql question though.\n\nto deduplicate on basis of\n\nversion,origin,timezone,region_id,wx_element,value,flag,units,valid_time,\nissue_time,next_forecast\n\nYou could do this.\n\nbegin work;\ncreate temp_table as select distinct on \n(version,origin,timezone,region_id,wx_element,value,flag,units,valid_time,\nissue_time,next_forecast) * from forecastelement ;\ntruncate table forecastelement ;\ndrop index <index on forecastelement > ;\ninsert into forecastelement select * from temp_table ;\ncommit;\ncreate indexes\nAnalyze forecastelement ;\n\nnote that distinct on will keep only one row out of all rows having \ndistinct values\nof the specified columns. kindly go thru the distinct on manual before \ntrying\nthe queries.\n\nregds\nmallah.\n\n>Are there any suggestions for a better way to do this, or using multiple\n>queries to delete selectively a week at a time based on the reception_time.\n>I would say there are a lot of duplicate entries between mid march to the\n>first week of April.\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n\n", "msg_date": "Tue, 13 Apr 2004 19:57:01 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deleting certain duplicates" } ]
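If the chunked approach is taken instead, the chunking column has to be one that every copy of a duplicate shares; reception_time is exactly the column that differs between copies, so (as noted in the previous thread) it cannot be used, but valid_time or region_id can. A hedged sketch of one month-sized pass, with an illustrative date range:

    DELETE FROM forecastelement
     WHERE valid_time >= '2004-03-01' AND valid_time < '2004-04-01'
       AND oid NOT IN (SELECT min(oid)
                         FROM forecastelement
                        WHERE valid_time >= '2004-03-01' AND valid_time < '2004-04-01'
                        GROUP BY version, origin, timezone, region_id, wx_element, value,
                                 flag, units, valid_time, issue_time, next_forecast);
    VACUUM ANALYZE forecastelement;   -- reclaim space between passes

Chunking on region_id instead would let the existing forecastelement_rwv_idx (which leads with region_id) restrict each pass.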
[ { "msg_contents": "I've searched the archives and can't find an answer to this seemingly\nsimple question. Apologies if it's too common.\n \nThe table in question has ~1.3M rows. It has 85 columns, 5 of which\nhave single-column indexes.\n \nThe column in question (CID) has 183 distinct values. For these values,\nthe largest has ~38,000 rows, and the smallest has 1 row. About 30\nvalues have < 100 rows, and about 10 values have > 20,000 rows.\n \nThe database is 7.2.3 running on RedHat 7.1. (we are in process of\nupgrading to PG 7.4.2) All of the query plan options are enabled, and\nthe cpu costs are set to the default values. ( cpu_tuple_cost is 0.01,\ncpu_index_tuple_cost is 0.001). The database is VACUUM'd every night.\n \nThe problem:\nA simply query:\n select count(*) from xxx where CID=<smalval>\nwhere <smalval> is a CID value which has relatively few rows, returns a\nplan using the index on that column.\n \n explain analyze select count(*) from xxx where cid=869366;\n Aggregate (cost=19136.33..19136.33 rows=1 width=0) (actual\ntime=78.49..78.49 rows=1 loops=1)\n -> Index Scan using xxx_cid on emailrcpts (cost=0.00..19122.21\nrows=5648 width=0) (actual time=63.40..78.46 rows=1 loops=1)\n Total runtime: 78.69 msec\n \nThe same plan is true for values which have up to about 20,000 rows:\n \n explain analyze select count(*) from xxx where cid=6223341;\n Aggregate (cost=74384.19..74384.19 rows=1 width=0) (actual\ntime=11614.89..11614.89 rows=1 loops=1)\n -> Index Scan using xxx_cid on emailrcpts (cost=0.00..74329.26\nrows=21974 width=0) (actual time=35.75..11582.10 rows=20114 loops=1)\n Total runtime: 11615.05 msec\n\nHowever for the values that have > 20,000 rows, the plan changes to a\nsequential scan, which is proportionately much slower.\n \n explain analyze select count(*) from xxx where cid=7191032;\n Aggregate (cost=97357.61..97357.61 rows=1 width=0) (actual\ntime=46427.81..46427.82 rows=1 loops=1)\n -> Seq Scan on xxx (cost=0.00..97230.62 rows=50792 width=0)\n(actual time=9104.45..46370.27 rows=37765 loops=1)\n Total runtime: 46428.00 msec\n \n \nThe question: why does the planner consider a sequential scan to be\nbetter for these top 10 values? In terms of elapsed time it is more\nthan twice as slow, proportionate to an index scan for the same number\nof rows.\n \nWhat I tried:\n \nA) alter table xxx alter column cid set statistics 500; \n analyze xxx;\nThis does not affect the results.\n \nB) dropped/rebuilt the index, with no improvement.\n \nC) decreasing cpu_index_tuple_cost by a factor of up to 1000, with no\nsuccess\n \nD) force an index scan for the larger values by using a very high value\nfor cpu_tuple_cost (e.g. .5) but this doesn't seem like a wise thing to\ndo.\n \nYour thoughts appreciated in advance!\n \n- Jeremy \n \n7+ years experience in Oracle performance-tuning\nrelatively new to postgresql\n\n\n\nMessage\n\n\nI've searched the \narchives and can't find an answer to this seemingly simple question.  \nApologies if it's too common.\n \nThe table in \nquestion has ~1.3M rows.  It has 85 columns, 5 of which have single-column \nindexes.\n \nThe column in \nquestion (CID) has 183 distinct values.  For these values, the largest has \n~38,000 rows, and the smallest has 1 row.  About 30 values have < 100 \nrows, and about 10 values have > 20,000 rows.\n \nThe database is \n7.2.3 running on RedHat 7.1. (we are in process of upgrading to \nPG 7.4.2)    All of the query plan options are enabled, and \nthe cpu costs are set to the default values. 
( cpu_tuple_cost is 0.01, \ncpu_index_tuple_cost is 0.001).  The database is VACUUM'd every \nnight.\n \nThe \nproblem:\nA simply \nquery:\n    select count(*) from xxx where CID=<smalval>\nwhere \n<smalval> is a CID value which has relatively few rows, returns a plan \nusing the index on that column.\n \n   explain \nanalyze select count(*) from xxx where cid=869366;   Aggregate  \n(cost=19136.33..19136.33 rows=1 width=0) (actual time=78.49..78.49 rows=1 \nloops=1)     ->  Index Scan using xxx_cid on \nemailrcpts  (cost=0.00..19122.21 rows=5648 width=0) (actual \ntime=63.40..78.46 rows=1 loops=1)   Total runtime: 78.69 \nmsec\n \nThe same plan is \ntrue for values which have up to about 20,000 rows:\n \n   explain \nanalyze select count(*) from xxx where cid=6223341;   Aggregate  \n(cost=74384.19..74384.19 rows=1 width=0) (actual time=11614.89..11614.89 rows=1 \nloops=1)     ->  Index Scan using xxx_cid on \nemailrcpts  (cost=0.00..74329.26 rows=21974 width=0) (actual \ntime=35.75..11582.10 rows=20114 loops=1)   Total runtime: 11615.05 \nmsec\nHowever for the \nvalues that have > 20,000 rows, the plan changes to a sequential scan, which \nis proportionately much slower.\n \n   \nexplain analyze select count(*) from xxx where cid=7191032;   \nAggregate  (cost=97357.61..97357.61 rows=1 width=0) (actual \ntime=46427.81..46427.82 rows=1 loops=1)    ->\n  Seq Scan on xxx \n(cost=0.00..97230.62 rows=50792 width=0) (actual time=9104.45..46370.27 \nrows=37765 loops=1)    \nTotal runtime: 46428.00 msec\n \n \nThe question: why \ndoes the planner consider a sequential scan to be better for these top 10 \nvalues?  In terms of elapsed time it is more than twice as slow, \nproportionate to an index scan for the same number of rows.\n \nWhat I \ntried:\n \nA) alter table xxx alter column cid set statistics 500;    \n\n    analyze xxx;\nThis does not affect \nthe results.\n \nB) \n dropped/rebuilt the index, with no improvement.\n \nC) decreasing \ncpu_index_tuple_cost by a factor of up to 1000, with no \nsuccess\n \nD) force an index \nscan for the larger values by using a very high value for cpu_tuple_cost (e.g. \n.5) but this doesn't seem like a wise thing to do.\n \nYour thoughts \nappreciated in advance!\n \n- \nJeremy \n \n7+ years \nexperience in Oracle performance-tuning\nrelatively new to postgresql", "msg_date": "Mon, 12 Apr 2004 11:40:28 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": true, "msg_subject": "index v. seqscan for certain values" }, { "msg_contents": "Quick bit of input, since you didn't mention it.\n\nHow often do you run ANALYZE? I found it interesting that a database I\nwas doing tests on sped up by a factor of 20 after ANALYZE. If your\ndata changes a lot, you should probably schedule ANALYZE to run with\nVACUUM.\n\nJeremy Dunn wrote:\n> I've searched the archives and can't find an answer to this seemingly \n> simple question. Apologies if it's too common.\n> \n> The table in question has ~1.3M rows. It has 85 columns, 5 of which \n> have single-column indexes.\n> \n> The column in question (CID) has 183 distinct values. For these values, \n> the largest has ~38,000 rows, and the smallest has 1 row. About 30 \n> values have < 100 rows, and about 10 values have > 20,000 rows.\n> \n> The database is 7.2.3 running on RedHat 7.1. (we are in process of \n> upgrading to PG 7.4.2) All of the query plan options are enabled, and \n> the cpu costs are set to the default values. ( cpu_tuple_cost is 0.01, \n> cpu_index_tuple_cost is 0.001). 
The database is VACUUM'd every night.\n> \n> The problem:\n> A simply query:\n> select count(*) from xxx where CID=<smalval>\n> where <smalval> is a CID value which has relatively few rows, returns a \n> plan using the index on that column.\n> \n> explain analyze select count(*) from xxx where cid=869366;\n> Aggregate (cost=19136.33..19136.33 rows=1 width=0) (actual \n> time=78.49..78.49 rows=1 loops=1)\n> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..19122.21 \n> rows=5648 width=0) (actual time=63.40..78.46 rows=1 loops=1)\n> Total runtime: 78.69 msec\n> \n> The same plan is true for values which have up to about 20,000 rows:\n> \n> explain analyze select count(*) from xxx where cid=6223341;\n> Aggregate (cost=74384.19..74384.19 rows=1 width=0) (actual \n> time=11614.89..11614.89 rows=1 loops=1)\n> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..74329.26 \n> rows=21974 width=0) (actual time=35.75..11582.10 rows=20114 loops=1)\n> Total runtime: 11615.05 msec\n> However for the values that have > 20,000 rows, the plan changes to a \n> sequential scan, which is proportionately much slower.\n> \n> explain analyze select count(*) from xxx where cid=7191032;\n> Aggregate (cost=97357.61..97357.61 rows=1 width=0) (actual \n> time=46427.81..46427.82 rows=1 loops=1)\n> -> Seq Scan on xxx (cost=0.00..97230.62 rows=50792 width=0) \n> (actual time=9104.45..46370.27 rows=37765 loops=1)\n> Total runtime: 46428.00 msec\n> \n> \n> The question: why does the planner consider a sequential scan to be \n> better for these top 10 values? In terms of elapsed time it is more \n> than twice as slow, proportionate to an index scan for the same number \n> of rows.\n> \n> What I tried:\n> \n> A) alter table xxx alter column cid set statistics 500; \n> analyze xxx;\n> This does not affect the results.\n> \n> B) dropped/rebuilt the index, with no improvement.\n> \n> C) decreasing cpu_index_tuple_cost by a factor of up to 1000, with no \n> success\n> \n> D) force an index scan for the larger values by using a very high value \n> for cpu_tuple_cost (e.g. .5) but this doesn't seem like a wise thing to do.\n> \n> Your thoughts appreciated in advance!\n> \n> - Jeremy \n> \n> 7+ years experience in Oracle performance-tuning\n> relatively new to postgresql\n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Mon, 12 Apr 2004 12:09:15 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values" }, { "msg_contents": "Sorry I should have written that we do VACUUM VERBOSE ANALYZE every\nnight.\n\n- Jeremy\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bill Moran\nSent: Monday, April 12, 2004 12:09 PM\nTo: [email protected]\nCc: Postgresql Performance\nSubject: Re: [PERFORM] index v. seqscan for certain values\n\n\nQuick bit of input, since you didn't mention it.\n\nHow often do you run ANALYZE? I found it interesting that a database I\nwas doing tests on sped up by a factor of 20 after ANALYZE. If your\ndata changes a lot, you should probably schedule ANALYZE to run with\nVACUUM.\n\nJeremy Dunn wrote:\n> I've searched the archives and can't find an answer to this seemingly\n> simple question. Apologies if it's too common.\n> \n> The table in question has ~1.3M rows. It has 85 columns, 5 of which\n> have single-column indexes.\n> \n> The column in question (CID) has 183 distinct values. 
For these \n> values,\n> the largest has ~38,000 rows, and the smallest has 1 row. About 30 \n> values have < 100 rows, and about 10 values have > 20,000 rows.\n> \n> The database is 7.2.3 running on RedHat 7.1. (we are in process of \n> upgrading to PG 7.4.2) All of the query plan options are enabled,\nand \n> the cpu costs are set to the default values. ( cpu_tuple_cost is 0.01,\n> cpu_index_tuple_cost is 0.001). The database is VACUUM'd every night.\n> \n> The problem:\n> A simply query:\n> select count(*) from xxx where CID=<smalval>\n> where <smalval> is a CID value which has relatively few rows, returns \n> a\n> plan using the index on that column.\n> \n> explain analyze select count(*) from xxx where cid=869366;\n> Aggregate (cost=19136.33..19136.33 rows=1 width=0) (actual\n> time=78.49..78.49 rows=1 loops=1)\n> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..19122.21 \n> rows=5648 width=0) (actual time=63.40..78.46 rows=1 loops=1)\n> Total runtime: 78.69 msec\n> \n> The same plan is true for values which have up to about 20,000 rows:\n> \n> explain analyze select count(*) from xxx where cid=6223341;\n> Aggregate (cost=74384.19..74384.19 rows=1 width=0) (actual\n> time=11614.89..11614.89 rows=1 loops=1)\n> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..74329.26 \n> rows=21974 width=0) (actual time=35.75..11582.10 rows=20114 loops=1)\n> Total runtime: 11615.05 msec\n> However for the values that have > 20,000 rows, the plan changes to a \n> sequential scan, which is proportionately much slower.\n> \n> explain analyze select count(*) from xxx where cid=7191032;\n> Aggregate (cost=97357.61..97357.61 rows=1 width=0) (actual\n> time=46427.81..46427.82 rows=1 loops=1)\n> -> Seq Scan on xxx (cost=0.00..97230.62 rows=50792 width=0) \n> (actual time=9104.45..46370.27 rows=37765 loops=1)\n> Total runtime: 46428.00 msec\n> \n> \n> The question: why does the planner consider a sequential scan to be\n> better for these top 10 values? In terms of elapsed time it is more \n> than twice as slow, proportionate to an index scan for the same number\n\n> of rows.\n> \n> What I tried:\n> \n> A) alter table xxx alter column cid set statistics 500; \n> analyze xxx;\n> This does not affect the results.\n> \n> B) dropped/rebuilt the index, with no improvement.\n> \n> C) decreasing cpu_index_tuple_cost by a factor of up to 1000, with no\n> success\n> \n> D) force an index scan for the larger values by using a very high \n> value\n> for cpu_tuple_cost (e.g. .5) but this doesn't seem like a wise thing\nto do.\n> \n> Your thoughts appreciated in advance!\n> \n> - Jeremy\n> \n> 7+ years experience in Oracle performance-tuning\n> relatively new to postgresql\n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Mon, 12 Apr 2004 13:08:05 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index v. 
seqscan for certain values" }, { "msg_contents": "\nOn Mon, 12 Apr 2004, Jeremy Dunn wrote:\n\n> explain analyze select count(*) from xxx where cid=6223341;\n> Aggregate (cost=74384.19..74384.19 rows=1 width=0) (actual\n> time=11614.89..11614.89 rows=1 loops=1)\n> -> Index Scan using xxx_cid on emailrcpts (cost=0.00..74329.26\n> rows=21974 width=0) (actual time=35.75..11582.10 rows=20114 loops=1)\n> Total runtime: 11615.05 msec\n>\n> However for the values that have > 20,000 rows, the plan changes to a\n> sequential scan, which is proportionately much slower.\n>\n> explain analyze select count(*) from xxx where cid=7191032;\n> Aggregate (cost=97357.61..97357.61 rows=1 width=0) (actual\n> time=46427.81..46427.82 rows=1 loops=1)\n> -> Seq Scan on xxx (cost=0.00..97230.62 rows=50792 width=0)\n> (actual time=9104.45..46370.27 rows=37765 loops=1)\n> Total runtime: 46428.00 msec\n>\n> The question: why does the planner consider a sequential scan to be\n> better for these top 10 values? In terms of elapsed time it is more\n> than twice as slow, proportionate to an index scan for the same number\n> of rows.\n\nOne thing to do is to set enable_seqscan=off and run the above and compare\nthe estimated and real costs. It may be possible to lower\nrandom_page_cost to a still reasonable number in order to move the point\nof the switchover to seqscan.\n", "msg_date": "Mon, 12 Apr 2004 10:39:51 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values" }, { "msg_contents": "\"Jeremy Dunn\" <[email protected]> writes:\n> The question: why does the planner consider a sequential scan to be\n> better for these top 10 values?\n\nAt some point a seqscan *will* be better. In the limit, if the key\nbeing sought is common enough to occur on every page of the table,\nit's certain that a seqscan will require less I/O than an indexscan\n(because reading the index isn't actually saving you any heap fetches).\nIn practice the breakeven point is less than that because Unix kernels\nare better at handling sequential than random access.\n\nYour gripe appears to be basically that the planner's idea of the\nbreakeven point is off a bit. It looks to me like it's within about\na factor of 2 of being right, though, which is not all that bad when\nit's using generic cost parameters.\n\n> A) alter table xxx alter column cid set statistics 500; \n> analyze xxx;\n> This does not affect the results.\n\nIt probably improved the accuracy of the row count estimates, no?\nThe estimate you show for cid=7191032 is off by more than 25% (37765 vs\n50792), which seems like a lot of error for one of the most common\nvalues in the table. (I hope that was with default stats target and\nnot 500.) That leads directly to a 25% overestimate of the cost of\nan indexscan, while having IIRC no impact on the cost of a seqscan.\nSince the cost ratio was more than 25%, this didn't change the selected\nplan, but you want to fix that error as best you can before you move\non to tweaking cost parameters.\n\n> C) decreasing cpu_index_tuple_cost by a factor of up to 1000, with no\n> success\n\nWrong thing. You should be tweaking random_page_cost. Looks to me like\na value near 2 might be appropriate for your setup. Also it is likely\nappropriate to increase effective_cache_size, which is awfully small in\nthe default configuration. 
I'd set that to something related to your\navailable RAM before trying to home in on a suitable random_page_cost.\n\nAFAIK hardly anyone bothers with changing the cpu_xxx costs ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Apr 2004 13:51:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "> \"Jeremy Dunn\" <[email protected]> writes:\n> > The question: why does the planner consider a sequential scan to be \n> > better for these top 10 values?\n> \n> At some point a seqscan *will* be better. In the limit, if \n> the key being sought is common enough to occur on every page \n> of the table, it's certain that a seqscan will require less \n> I/O than an indexscan (because reading the index isn't \n> actually saving you any heap fetches). In practice the \n> breakeven point is less than that because Unix kernels are \n> better at handling sequential than random access.\n> \n> Your gripe appears to be basically that the planner's idea of \n> the breakeven point is off a bit. It looks to me like it's \n> within about a factor of 2 of being right, though, which is \n> not all that bad when it's using generic cost parameters.\n\nAgreed. However, given that count(*) is a question that can be answered\n_solely_ using the index (without reference to the actual data blocks),\nI'd expect that the break-even point would be considerably higher than\nthe < 3% (~38,000 / ~1.3M) I'm currently getting. Does PG not use\nsolely the index in this situation??\n\n> > A) alter table xxx alter column cid set statistics 500; \n> > analyze xxx;\n> > This does not affect the results.\n> \n> It probably improved the accuracy of the row count estimates, \n> no? The estimate you show for cid=7191032 is off by more than \n> 25% (37765 vs 50792), which seems like a lot of error for one \n> of the most common values in the table. (I hope that was \n> with default stats target and not 500.) That leads directly \n> to a 25% overestimate of the cost of an indexscan, while \n> having IIRC no impact on the cost of a seqscan. Since the \n> cost ratio was more than 25%, this didn't change the selected \n> plan, but you want to fix that error as best you can before \n> you move on to tweaking cost parameters.\n\nActually it made them worse! Yes, this was the default statistics (10).\nWhen I just tried it again with a value of 300, analyze, then run the\nquery, I get a *worse* result for an estimate. I don't understand this.\n\n\n alter table xxx alter column cid set statistics 300;\n analyze emailrcpts;\n set random_page_cost to 2;\n explain analyze select count(*) from xxx where cid=7191032;\n\n Aggregate (cost=20563.28..20563.28 rows=1 width=0) (actual\ntime=7653.90..7653.90 rows=1 loops=1)\n -> Index Scan using xxx_cid on xxx (cost=0.00..20535.82 rows=10983\nwidth=0) (actual time=72.24..7602.38 rows=37765 loops=1)\n Total runtime: 7654.14 msec\n\nNow it estimates I have only 10,983 rows (~3x too low) instead of the\nold estimate 50,792 (1.3x too high). Why is that ??\n\nAnyway, a workable solution seems to be using a lower value for\nRandom_Page_Cost. Thanks to everyone who replied with this answer. \n\n> Also it is likely appropriate to increase \n> effective_cache_size, which is awfully small in the default \n> configuration. 
I'd set that to something related to your \n> available RAM before trying to home in on a suitable random_page_cost.\n\nWe have ours set to the default value of 1000, which does seem low for a\nsystem with 1GB of RAM. We'll up this once we figure out what's\navailable. Then tweak the Random_Page_Cost appropriately at that point.\n\nI'd still like to understand the strangeness above, if anyone can shed\nlight.\n\n- Jeremy\n\n", "msg_date": "Mon, 12 Apr 2004 15:05:02 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "On Mon, Apr 12, 2004 at 15:05:02 -0400,\n Jeremy Dunn <[email protected]> wrote:\n> \n> Agreed. However, given that count(*) is a question that can be answered\n> _solely_ using the index (without reference to the actual data blocks),\n> I'd expect that the break-even point would be considerably higher than\n> the < 3% (~38,000 / ~1.3M) I'm currently getting. Does PG not use\n> solely the index in this situation??\n\nThat isn't true. In order to check visibility you need to look at the\ndata blocks.\n", "msg_date": "Mon, 12 Apr 2004 14:55:52 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values" }, { "msg_contents": "\"Jeremy Dunn\" <[email protected]> writes:\n> Agreed. However, given that count(*) is a question that can be answered\n> _solely_ using the index (without reference to the actual data blocks),\n\nAs Bruno noted, that is not the case in Postgres; we must visit the\ntable rows anyway.\n\n> When I just tried it again with a value of 300, analyze, then run the\n> query, I get a *worse* result for an estimate. I don't understand this.\n\nThat's annoying. How repeatable are these results --- if you do ANALYZE\nover again several times, how much does the row count estimate change\neach time? (It should change somewhat, since ANALYZE is taking a random\nsample, but one would like to think not a whole lot.) Is the variance\nmore or less at the higher stats target? Take a look at a few different\nCID values to get a sense of the accuracy, don't look at just one ...\n\n(Actually, you might find it more profitable to look at the pg_stats\nentry for the CID column rather than reverse-engineering the stats via\nANALYZE. Look at how well the most-common-values list and associated\nfrequency numbers track reality.)\n\nAlso, can you think of any reason for the distribution of CID values\nto be nonuniform within the table? For instance, do rows get inserted\nin order of increasing CID, or is there any clustering of rows with the\nsame CID?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Apr 2004 17:02:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "> > When I just tried it again with a value of 300, analyze, \n> then run the query, I get a *worse* result for an estimate. I don't\nunderstand \n> > this.\n> \n> That's annoying. How repeatable are these results --- if you \n> do ANALYZE over again several times, how much does the row \n> count estimate change each time? (It should change somewhat, \n> since ANALYZE is taking a random sample, but one would like \n> to think not a whole lot.) Is the variance more or less at \n> the higher stats target? Take a look at a few different CID \n> values to get a sense of the accuracy, don't look at just one ...\n\nYes, it's repeatable. 
I tried a bunch of times, and there are only\nsmall variations in the stats for the higher stat targets.\n\n> (Actually, you might find it more profitable to look at the \n> pg_stats entry for the CID column rather than \n> reverse-engineering the stats via ANALYZE. Look at how well \n> the most-common-values list and associated frequency numbers \n> track reality.)\n\nI checked the accuracy of the stats for various values, and there is a\nwide variation. I see some values where the estimate is 1.75x the\nactual; and others where the estimate is .44x the actual.\n\n> Also, can you think of any reason for the distribution of CID \n> values to be nonuniform within the table? For instance, do \n> rows get inserted in order of increasing CID, or is there any \n> clustering of rows with the same CID?\n\nThis is almost certainly the answer. The data is initially inserted in\nchunks for each CID, and later on there is a more normal distribution of\ninsert/update/deletes across all CIDs; and then again a new CID will\ncome with a large chunk of rows, etc.\n\nInterestingly, I tried increasing the stat size for the CID column to\n2000, analyzing, and checking the accuracy of the stats again. Even\nwith this relatively high value, the accuracy of the stats is not that\nclose. The value giving .44x previously nows gives an estimate .77x of\nactual. Another value which was at 1.38x of actual is now at .71x of\nactual! \n\nThen just for kicks I set the statistics size to 100,000 (!), analyzed,\nand ran the query again. For the same CID I still got an estimated row\ncount that is .71x the actual rows returned. Why is this not better? I\nwonder how high I'd have to set the statistics collector to get really\ngood data, given the uneven data distribution of this table. Is there\nany other technique that works better to get good estimates, given\nuneven distribution of values?\n\nSo I think this explains the inaccurate stats; and the solution as far\nas I'm concerned is to increase the two params mentioned yesterday\n(effective_cache_size & random_page_cost).\n\nThanks again for the help!\n- Jeremy\n\n", "msg_date": "Tue, 13 Apr 2004 10:41:30 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "\"Jeremy Dunn\" <[email protected]> writes:\n> Interestingly, I tried increasing the stat size for the CID column to\n> 2000, analyzing, and checking the accuracy of the stats again.\n\nThere's a hard limit of 1000, I believe. Didn't it give you a warning\nsaying so?\n\nAt 1000 the ANALYZE sample size would be 300000 rows, or about a quarter\nof your table. I would have thought this would give frequency estimates\nwith much better precision than you seem to be seeing --- but my\nstatistics are rusty enough that I'm not sure about it. Possibly the\nnonuniform clumping of CID has something to do with the poor results.\n\nAny stats majors on the list?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Apr 2004 13:55:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "\n> There's a hard limit of 1000, I believe. Didn't it give you\n> a warning saying so?\n\nNo warning at 2000, and no warning at 100,000 either!\n\nRemember we are still on 7.2.x. The docs here\nhttp://www.postgresql.org/docs/7.2/static/sql-altertable.html don't say\nanything about a limit. \n\nThis is good to know, if it's true. 
Can anyone confirm?\n\n- Jeremy\n\n", "msg_date": "Tue, 13 Apr 2004 14:04:19 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index v. seqscan for certain values " }, { "msg_contents": "On Tue, 2004-04-13 at 14:04, Jeremy Dunn wrote:\n> \n> > There's a hard limit of 1000, I believe. Didn't it give you\n> > a warning saying so?\n> \n> No warning at 2000, and no warning at 100,000 either!\n> \n> Remember we are still on 7.2.x. The docs here\n> http://www.postgresql.org/docs/7.2/static/sql-altertable.html don't say\n> anything about a limit. \n> \n> This is good to know, if it's true. Can anyone confirm?\n> \n\ntransform=# alter table data_pull alter column msg set statistics\n100000;\nWARNING: lowering statistics target to 1000\nERROR: column \"msg\" of relation \"data_pull\" does not exist\ntransform=# select version();\n version \n----------------------------------------------------------------\n PostgreSQL 7.4beta4 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "13 Apr 2004 14:35:16 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values" }, { "msg_contents": "\nHi, all,\n\nI have got a new MaC OS G5 with 8GB RAM. So i tried to increase\nthe shmmax in Kernel so that I can take advantage of the RAM.\n\nI searched the web and read the manual for PG7.4 chapter 16.5.1.\nAfter that, I edited /etc/rc file:\n\nsysctl -w kern.sysv.shmmax=4294967296 // byte\nsysctl -w kern.sysv.shmmin=1\nsysctl -w kern.sysv.shmmni=32\nsysctl -w kern.sysv.shmseg=8\nsysctl -w kern.sysv.shmall=1048576 //4kpage\n\nfor 4G shared RAM.\n\nThen I changed postgresql.conf:\nshared_buffer=100000 //could be bigger?\n\nand restart the machine and postgres server. To my surprise, postgres \nserver wouldn't\nstart, saying that the requested shared memory exceeds kernel's shmmax.\n\nMy suspision is that the change i made in /etc/rc does not take \neffect.Is there a way\nto check it? Is there an\nup limit for how much RAM can be allocated for shared buffer in MAC OS \nX? Or\nis there something wrong with my calculation in numbers?\n\nThanks a lot!\n\nQing\n\n", "msg_date": "Tue, 13 Apr 2004 11:49:43 -0700", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "configure shmmax on MAC OS X" }, { "msg_contents": "\nOn OS X, I've always made these changes in:\n\n/System/Library/StartupItems/SystemTuning/SystemTuning\n\nand manually checked it with sysctl after reboot. Works for me.\n\n100k buffers is probably overkill. There can be a performance penalty with too many buffers. See this lists' archives for more. 10k would probably be a better start.\n\n- Jeff\n\n\n>Hi, all,\n>\n>I have got a new MaC OS G5 with 8GB RAM. So i tried to increase\n>the shmmax in Kernel so that I can take advantage of the RAM.\n>\n>I searched the web and read the manual for PG7.4 chapter 16.5.1.\n>After that, I edited /etc/rc file:\n>\n>sysctl -w kern.sysv.shmmax=4294967296 // byte\n>sysctl -w kern.sysv.shmmin=1\n>sysctl -w kern.sysv.shmmni=32\n>sysctl -w kern.sysv.shmseg=8\n>sysctl -w kern.sysv.shmall=1048576 //4kpage\n>\n>for 4G shared RAM.\n>\n>Then I changed postgresql.conf:\n>shared_buffer=100000 //could be bigger?\n>\n>and restart the machine and postgres server. 
To my surprise, postgres server wouldn't\n>start, saying that the requested shared memory exceeds kernel's shmmax.\n>\n>My suspision is that the change i made in /etc/rc does not take effect.Is there a way\n>to check it? Is there an\n>up limit for how much RAM can be allocated for shared buffer in MAC OS X? Or\n>is there something wrong with my calculation in numbers?\n>\n>Thanks a lot!\n>\n>Qing\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n-- \n\nJeff Bohmer\nVisionLink, Inc.\n_________________________________\n303.402.0170\nwww.visionlink.org\n_________________________________\nPeople. Tools. Change. Community.\n", "msg_date": "Tue, 13 Apr 2004 13:25:25 -0600", "msg_from": "Jeff Bohmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure shmmax on MAC OS X" }, { "msg_contents": "Qing Zhao <[email protected]> writes:\n> My suspision is that the change i made in /etc/rc does not take \n> effect.Is there a way to check it?\n\nsysctl has an option to show the values currently in effect.\n\nI believe that /etc/rc is the correct place to set shmmax on OSX 10.3 or\nlater ... but we have seen prior reports of people having trouble\ngetting the setting to \"take\". There may be some other constraint\ninvolved.\n\n> sysctl -w kern.sysv.shmmax=4294967296 // byte\n\nHmm, does sysctl work for values that exceed the range of int?\n\nThere's no particularly good reason to try to set shmmax as high as you\nare trying anyhow; you really don't need more than a couple hundred meg\nin Postgres shared memory. It's better to leave the kernel to manage\nthe bulk of your RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Apr 2004 15:55:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure shmmax on MAC OS X " }, { "msg_contents": "Tom:\n\nI used sysctl -A to see the kernel state, I got:\nkern.sysv.shmmax: -1\n\nIt looks the value is too big!\n\nThanks!\n\nQing\nOn Apr 13, 2004, at 12:55 PM, Tom Lane wrote:\n\n> Qing Zhao <[email protected]> writes:\n>> My suspision is that the change i made in /etc/rc does not take\n>> effect.Is there a way to check it?\n>\n> sysctl has an option to show the values currently in effect.\n>\n> I believe that /etc/rc is the correct place to set shmmax on OSX 10.3 \n> or\n> later ... but we have seen prior reports of people having trouble\n> getting the setting to \"take\". There may be some other constraint\n> involved.\n>\n>> sysctl -w kern.sysv.shmmax=4294967296 // byte\n>\n> Hmm, does sysctl work for values that exceed the range of int?\n>\n> There's no particularly good reason to try to set shmmax as high as you\n> are trying anyhow; you really don't need more than a couple hundred meg\n> in Postgres shared memory. 
It's better to leave the kernel to manage\n> the bulk of your RAM.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 13 Apr 2004 13:10:24 -0700", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure shmmax on MAC OS X " }, { "msg_contents": "Hello,\n\nI found that if you SHMALL value was less than your SHMMAX value,\nthe value wouldn't take.\n\nJ\n\n\nTom Lane wrote:\n\n> Qing Zhao <[email protected]> writes:\n> \n>>My suspision is that the change i made in /etc/rc does not take \n>>effect.Is there a way to check it?\n> \n> \n> sysctl has an option to show the values currently in effect.\n> \n> I believe that /etc/rc is the correct place to set shmmax on OSX 10.3 or\n> later ... but we have seen prior reports of people having trouble\n> getting the setting to \"take\". There may be some other constraint\n> involved.\n> \n> \n>>sysctl -w kern.sysv.shmmax=4294967296 // byte\n> \n> \n> Hmm, does sysctl work for values that exceed the range of int?\n> \n> There's no particularly good reason to try to set shmmax as high as you\n> are trying anyhow; you really don't need more than a couple hundred meg\n> in Postgres shared memory. It's better to leave the kernel to manage\n> the bulk of your RAM.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 13 Apr 2004 13:59:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure shmmax on MAC OS X" }, { "msg_contents": "On Tue, 13 Apr 2004 13:55:49 -0400, Tom Lane <[email protected]> wrote:\n>Possibly the\n>nonuniform clumping of CID has something to do with the poor results.\n\nIt shouldn't. The sampling algorithm is designed to give each tuple the\nsame chance of ending up in the sample, and tuples are selected\nindependently. (IOW each one of the {N \\chooose n} possible samples has\nthe same probability.) There are known problems with nonuniform\ndistribution of dead vs. live and large vs. small tuples, but AFAICS the\norder of values does not matter.\n\nServus\n Manfred\n", "msg_date": "Fri, 16 Apr 2004 01:04:24 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index v. seqscan for certain values " } ]
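A minimal sketch of the statistics-target experiment discussed in the thread above, for anyone reproducing it; the table and column names here (events, cid) are placeholders rather than the poster's real schema, and 1000 is the ceiling confirmed above:

    -- Raise the per-column target (requests above 1000 are lowered to 1000, as Robert's 7.4 output shows):
    ALTER TABLE events ALTER COLUMN cid SET STATISTICS 1000;
    ANALYZE events;

    -- Compare the planner's stored frequencies against reality for one value:
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'events' AND attname = 'cid';

    SELECT count(*) FROM events WHERE cid = 42;   -- actual row count for the same value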
[ { "msg_contents": "Hi,\n\n> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't\nAtfer VACUUM:\ndps=# explain analyze select next_index_time from url order by \nnext_index_time desc limit 1;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--\n Limit (cost=0.00..2.62 rows=1 width=4) (actual time=0.098..0.099 \nrows=1 loops=1)\n -> Index Scan Backward using url_next_index_time on url \n(cost=0.00..814591.03 rows=310913 width=4) (actual time=0.096..0.096 \nrows=1 loops=1)\n Total runtime: 0.195 ms\n(3 rows)\n\ndps=# explain analyze select next_index_time from url order by \nnext_index_time asc limit 1;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-\n Limit (cost=0.00..2.62 rows=1 width=4) (actual \ntime=13504.105..13504.106 rows=1 loops=1)\n -> Index Scan using url_next_index_time on url \n(cost=0.00..814591.03 rows=310913 width=4) (actual \ntime=13504.099..13504.099 rows=1 loops=1)\n Total runtime: 13504.158 ms\n(3 rows)\n\nBetter, but......\n\nCordialement,\nJean-G�rard Pailloncy\n\n", "msg_date": "Mon, 12 Apr 2004 21:02:02 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Re:__Index_Backward_Scan_fast_/_Index_Sc?=\n\t=?ISO-8859-1?Q?an_slow_!__=28Modifi=E9_par_Pailloncy_Jean-G=E9ra?=\n\t=?ISO-8859-1?Q?rd=29?=" }, { "msg_contents": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n>> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't\n> Atfer VACUUM:\n> Better, but......\n\n... but not much :-(. Okay, could we see VACUUM VERBOSE results for\nthis table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Apr 2004 16:54:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?ISO-8859-1?Q?Re:__Index_Backward_Scan_fast_/_Index_Sc?=\n\t=?ISO-8859-1?Q?an_slow_!__=28Modifi=E9_par_Pailloncy_Jean-G=E9ra?=\n\t=?ISO-8859-1?Q?rd=29?=" } ]
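The exchange above breaks off at Tom's request for VACUUM VERBOSE output, so no fix is confirmed here; the usual suspect when a backward scan of an index is fast but the forward scan is slow is a mass of dead entries at the low end of that index. A sketch of the maintenance one might try before re-testing, using the table and index names that appear in the plans above:

    VACUUM VERBOSE url;                   -- reports how many dead row versions were found and reclaimed
    REINDEX INDEX url_next_index_time;    -- rebuilds the index in case it has accumulated bloat

    EXPLAIN ANALYZE
    SELECT next_index_time FROM url ORDER BY next_index_time ASC LIMIT 1;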
[ { "msg_contents": "I'm running 7.4.2 on an older Linux box (450MHzAMD K-6-III, 450M RAM) \nrunning kernel 2.6.5. My client is a Java/JDBC program on Windows.\n\nI'm having trouble seeing where the bottleneck in my performance is. \nThe client uses about 30% CPU. The server uses 70% CPU plus 1.5% I/O \nwait. The I/O wait is very low because I'm doing a PK index scan where \nthe index and data are on different disks and the table is clustered on \nthe PK index. The network is 100Mb, and it's at 7% of capacity.\n\nI tried making the client operate on two threads on two database \nconnections. That bumped the server utilization to 80% and barely \nchanged the I/O wait. The throughput decreased by a third.\n\nThe only thing I can think of is memory bandwidth. Does anyone have \ntips on how I can investigate more?\n\n", "msg_date": "Mon, 12 Apr 2004 17:17:31 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Tracking down performance issue" } ]
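The question above receives no reply in this archive. One rough first step before suspecting memory bandwidth is to separate backend execution time from driver and network overhead; the table and column below are stand-ins for whatever the JDBC client actually scans:

    -- Executes the full plan server-side but ships no rows to the client,
    -- so the reported runtime excludes JDBC and network cost:
    EXPLAIN ANALYZE SELECT * FROM big_table WHERE id BETWEEN 1 AND 1000000;
    -- Compare that runtime with the wall-clock time the Java client measures for the
    -- same statement; the gap is roughly driver + wire + client-side processing.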
[ { "msg_contents": "In the process of optimizing some queries, I have found the following\nquery seems to degrade in performance the more accurate I make the\nstatistics on the table... whether by using increased alter table ...\nset statistics or by using vacuum..\n\nSELECT \n\tcount( cl.caller_id ), \n\tnpanxx.city, \n\tnpanxx.state \nFROM \n\tcl \n\tLEFT OUTER JOIN npanxx \n\t on substr( cl.caller_id, 1, 3 ) = npanxx.npa \n\t and substr( cl.caller_id, 4, 3 ) = npanxx.nxx \n\tLEFT OUTER JOIN cp \n\t ON cl.caller_id = cp.caller_id \nWHERE \n\tcl.ivr_system_id = 130 \n\tAND \n\tcl.call_time > '2004-03-01 06:00:00.0 CST' \n\tAND \n\tcl.call_time < '2004-04-01 06:00:00.0 CST' \n\tAND \n\tcp.age >= 18 \n\tAND \n\tcp.age <= 24 \n\tAND \n\tcp.gender = 'm' \nGROUP BY \n\tnpanxx.city, \n\tnpanxx.state\n\n\nlive=# analyze cl;\nANALYZE\nlive=# select reltuples from pg_class where relname = 'cl';\n reltuples \n-----------\n 53580\n(1 row)\n\nlive=# select count(*) from cl;\n count \n---------\n 1140166\n(1 row)\n\nThe plan i get under these conditions is actually pretty good...\n\n HashAggregate (cost=28367.22..28367.66 rows=174 width=32) (actual time=1722.060..1722.176 rows=29 loops=1)\n -> Nested Loop (cost=0.00..28365.92 rows=174 width=32) (actual time=518.592..1716.254 rows=558 loops=1)\n -> Nested Loop Left Join (cost=0.00..20837.33 rows=1248 width=32) (actual time=509.991..1286.755 rows=13739 loops=1)\n -> Index Scan using cl_ivr_system_id on cl (cost=0.00..13301.15 rows=1248 width=14) (actual time=509.644..767.421 rows=13739 loops=1)\n Index Cond: (ivr_system_id = 130)\n Filter: ((call_time > '2004-03-01 07:00:00-05'::timestamp with time zone) AND (call_time < '2004-04-01 07:00:00-05'::timestamp with time zone))\n -> Index Scan using npanxx_pkey on npanxx (cost=0.00..6.02 rows=1 width=32) (actual time=0.025..0.027 rows=1 loops=13739)\n Index Cond: ((substr((\"outer\".caller_id)::text, 1, 3) = (npanxx.npa)::text) AND (substr((\"outer\".caller_id)::text, 4, 3) = (npanxx.nxx)::text))\n -> Index Scan using cp_pkey on cp (cost=0.00..6.02 rows=1 width=14) (actual time=0.027..0.027 rows=0 loops=13739)\n Index Cond: ((\"outer\".caller_id)::text = (cp.caller_id)::text)\n Filter: ((age >= 18) AND (age <= 24) AND (gender = 'm'::bpchar))\n Total runtime: 1722.489 ms\n(12 rows)\n\n\nbut when i do \n\nlive=# vacuum cl;\nVACUUM\nlive=# select reltuples from pg_class where relname = 'cl';\n reltuples \n-------------\n 1.14017e+06\n(1 row)\n\n(or alternatively increase the stats target on the table)\n\nI now get the following plan:\n\n HashAggregate (cost=80478.74..80482.41 rows=1471 width=32) (actual time=8132.261..8132.422 rows=29 loops=1)\n -> Merge Join (cost=79951.95..80467.70 rows=1471 width=32) (actual time=7794.338..8130.041 rows=558 loops=1)\n Merge Cond: (\"outer\".\"?column4?\" = \"inner\".\"?column2?\")\n -> Sort (cost=55719.06..55745.42 rows=10546 width=32) (actual time=4031.827..4052.526 rows=13739 loops=1)\n Sort Key: (cl.caller_id)::text\n -> Merge Right Join (cost=45458.30..55014.35 rows=10546 width=32) (actual time=2944.441..3796.787 rows=13739 loops=1)\n Merge Cond: (((\"outer\".npa)::text = \"inner\".\"?column2?\") AND ((\"outer\".nxx)::text = \"inner\".\"?column3?\"))\n -> Index Scan using npanxx_pkey on npanxx (cost=0.00..8032.99 rows=132866 width=32) (actual time=0.200..461.767 rows=130262 loops=1)\n -> Sort (cost=45458.30..45484.67 rows=10546 width=14) (actual time=2942.994..2967.935 rows=13739 loops=1)\n Sort Key: substr((cl.caller_id)::text, 1, 3), substr((cl.caller_id)::text, 4, 
3)\n -> Seq Scan on cl (cost=0.00..44753.60 rows=10546 width=14) (actual time=1162.423..2619.662 rows=13739 loops=1)\n Filter: ((ivr_system_id = 130) AND (call_time > '2004-03-01 07:00:00-05'::timestamp with time zone) AND (call_time < '2004-04-01 07:00:00-05'::timestamp with time zone))\n -> Sort (cost=24232.89..24457.06 rows=89667 width=14) (actual time=3761.703..3900.340 rows=98010 loops=1)\n Sort Key: (cp.caller_id)::text\n -> Seq Scan on cp (cost=0.00..15979.91 rows=89667 width=14) (actual time=0.128..1772.215 rows=100302 loops=1)\n Filter: ((age >= 18) AND (age <= 24) AND (gender = 'm'::bpchar))\n Total runtime: 8138.607 ms\n(17 rows)\n\n\nso i guess i am wondering if there is something I should be doing to\nhelp get the better plan at the more accurate stats levels and/or why it\ndoesn't stick with the original plan (I noticed disabling merge joins\ndoes seem to push it back to the original plan). \n\nalternatively if anyone has any general suggestions on speeding up the\nquery I'd be open to that too :-) \n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "13 Apr 2004 14:02:39 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "query slows down with more accurate stats" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> live=# analyze cl;\n> ANALYZE\n> live=# select reltuples from pg_class where relname = 'cl';\n> reltuples \n> -----------\n> 53580\n> (1 row)\n> live=# vacuum cl;\n> VACUUM\n> live=# select reltuples from pg_class where relname = 'cl';\n> reltuples \n> -------------\n> 1.14017e+06\n> (1 row)\n\nWell, the first problem is why is ANALYZE's estimate of the total row\ncount so bad :-( ? I suspect you are running into the situation where\nthe initial pages of the table are thinly populated and ANALYZE\nmistakenly assumes the rest are too. Manfred is working on a revised\nsampling method for ANALYZE that should fix this problem in 7.5 and\nbeyond, but for now it seems like a VACUUM FULL might be in order.\n\n> so i guess i am wondering if there is something I should be doing to\n> help get the better plan at the more accurate stats levels and/or why it\n> doesn't stick with the original plan (I noticed disabling merge joins\n> does seem to push it back to the original plan). \n\nWith the larger number of estimated rows it's figuring the nestloop will\nbe too expensive. The row estimate for the cl scan went up from 1248\nto 10546, so the estimated cost for the nestloop plan would go to about\n240000 units vs 80000 for the mergejoin plan. This is obviously off\nrather badly when the true runtimes are 1.7 vs 8.1 seconds :-(.\n\nI think this is an example of a case where we really need better\nestimation of nestloop costs --- it's drastically overestimating the\nrelative cost of the nestloop because it's not accounting for the cache\nbenefits of the repeated index searches. You could probably force the\nnestloop to be chosen by lowering random_page_cost, but that's just a\nkluge solution ... 
the real problem is the model is wrong.\n\nI have a to-do item to work on this, and will try to bump up its\npriority a bit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Apr 2004 15:18:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "[Just a quick note here; a more thorough discussion of my test results\nwill be posted to -hackers]\n\nOn Tue, 13 Apr 2004 15:18:42 -0400, Tom Lane <[email protected]> wrote:\n>Well, the first problem is why is ANALYZE's estimate of the total row\n>count so bad :-( ? I suspect you are running into the situation where\n>the initial pages of the table are thinly populated and ANALYZE\n>mistakenly assumes the rest are too. Manfred is working on a revised\n>sampling method for ANALYZE that should fix this problem\n\nThe new method looks very promising with respect to row count\nestimation: I got estimation errors of +/- 1% where the old method was\noff by up to 60%. (My test methods might be a bit biased though :-))\n\nMy biggest concern at the moment is that the new sampling method\nviolates the contract of returning each possible sample with he same\nprobability: getting several tuples from the same page is more likely\nthan with the old method.\n\nServus\n Manfred\n", "msg_date": "Fri, 16 Apr 2004 01:32:53 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> My biggest concern at the moment is that the new sampling method\n> violates the contract of returning each possible sample with he same\n> probability: getting several tuples from the same page is more likely\n> than with the old method.\n\nHm, are you sure? I recall objecting to your original proposal because\nI thought that would happen, but after further thought it seemed not.\n\nAlso, I'm not at all sure that the old method satisfies that constraint\ncompletely in the presence of nonuniform numbers of tuples per page,\nso we'd not necessarily be going backwards anyhow ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2004 20:18:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "On Thu, 15 Apr 2004 20:18:49 -0400, Tom Lane <[email protected]> wrote:\n>> getting several tuples from the same page is more likely\n>> than with the old method.\n>\n>Hm, are you sure?\n\nAlmost sure. Let's look at a corner case: What is the probability of\ngetting a sample with no two tuples from the same page? 
To simplify the\nproblem assume that each page contains the same number of tuples c.\n\nIf the number of pages is B and the sample size is n, a perfect sampling\nmethod collects a sample where all tuples come from different pages with\nprobability (in OpenOffice.org syntax):\n\n\tp = prod from{i = 0} to{n - 1} {{c(B - i)} over {cB - i}}\n\nor in C:\n\n\tp = 1.0;\n\tfor (i = 0; i < n; ++i)\n\t\tp *= c*(B - i) / (c*B - i)\n\nThis probability grows with increasing B.\n\n>Also, I'm not at all sure that the old method satisfies that constraint\n>completely in the presence of nonuniform numbers of tuples per page,\n>so we'd not necessarily be going backwards anyhow ...\n\nYes, it boils down to a decision whether we want to replace one not\nquite perfect sampling method with another not quite perfect method.\nI'm still working on putting together the pros and cons ...\n\nServus\n Manfred\n", "msg_date": "Fri, 16 Apr 2004 12:16:11 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> If the number of pages is B and the sample size is n, a perfect sampling\n> method collects a sample where all tuples come from different pages with\n> probability (in OpenOffice.org syntax):\n> \tp = prod from{i = 0} to{n - 1} {{c(B - i)} over {cB - i}}\n\nSo? You haven't proven that either sampling method fails to do the\nsame.\n\nThe desired property can also be phrased as \"every tuple should be\nequally likely to be included in the final sample\". What we actually\nhave in the case of your revised algorithm is \"every page is equally\nlikely to be sampled, and of the pages included in the sample, every\ntuple is equally likely to be chosen\". Given that there are B total\npages of which we sample b pages that happen to contain T tuples (in any\ndistribution), the probability that a particular tuple gets chosen is\n\t(b/B) * (n/T)\nassuming that the two selection steps are independent and unbiased.\n\nNow b, B, and n are not dependent on which tuple we are talking about.\nYou could argue that a tuple on a heavily populated page is\nstatistically likely to see a higher T when it's part of the page sample\npool than a tuple on a near-empty page is likely to see, and therefore\nthere is some bias against selection of the former tuple. But given a\nsample over a reasonably large number of pages, the contribution of any\none page to T should be fairly small and so this effect ought to be\nsmall. In fact, because T directly determines our estimate of the total\nnumber of tuples in the relation, your experiments showing that the new\nmethod gives a reliable tuple count estimate directly prove that T is\npretty stable regardless of exactly which pages get included in the\nsample. So I think this method is effectively unbiased at the tuple\nlevel. The variation in probability of selection of individual tuples\ncan be no worse than the variation in the overall tuple count estimate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 10:34:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "On Tue, 2004-04-13 at 15:18, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> Well, the first problem is why is ANALYZE's estimate of the total row\n> count so bad :-( ? 
I suspect you are running into the situation where\n> the initial pages of the table are thinly populated and ANALYZE\n> mistakenly assumes the rest are too. \n\nThat was my thinking, which is somewhat confirmed after a vacuum full on\nthe table; now analyze gives pretty accurate states. Of course the\ndownside is that now the query is consistently slower. \n\n> > so i guess i am wondering if there is something I should be doing to\n> > help get the better plan at the more accurate stats levels and/or why it\n> > doesn't stick with the original plan (I noticed disabling merge joins\n> > does seem to push it back to the original plan). \n> \n> With the larger number of estimated rows it's figuring the nestloop will\n> be too expensive. The row estimate for the cl scan went up from 1248\n> to 10546, so the estimated cost for the nestloop plan would go to about\n> 240000 units vs 80000 for the mergejoin plan. This is obviously off\n> rather badly when the true runtimes are 1.7 vs 8.1 seconds :-(.\n> \n> I think this is an example of a case where we really need better\n> estimation of nestloop costs --- it's drastically overestimating the\n> relative cost of the nestloop because it's not accounting for the cache\n> benefits of the repeated index searches. You could probably force the\n> nestloop to be chosen by lowering random_page_cost, but that's just a\n> kluge solution ... the real problem is the model is wrong.\n> \n\nUnfortunately playing with random_page_cost doesn't seem to be enough to\nget it to favor the nested loop... though setting it down to 2 does help\noverall. played with index_cpu_tuple_cost a bit but that seemed even\nless useful. aggravating when you know there is a better plan it could\npick but no (clean) way to get it to do so... \n\n> I have a to-do item to work on this, and will try to bump up its\n> priority a bit.\n> \n\nI'll keep an eye out, thanks Tom.\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "16 Apr 2004 11:37:15 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query slows down with more accurate stats" }, { "msg_contents": "On Fri, 16 Apr 2004 10:34:49 -0400, Tom Lane <[email protected]> wrote:\n>> \tp = prod from{i = 0} to{n - 1} {{c(B - i)} over {cB - i}}\n>\n>So? You haven't proven that either sampling method fails to do the\n>same.\n\nOn the contrary, I believe that above formula is more or less valid for\nboth methods. The point is in what I said next:\n| This probability grows with increasing B.\n\nFor the one-stage sampling method B is the number of pages of the whole\ntable. With two-stage sampling we have to use n instead of B and get a\nsmaller probability (for n < B, of course). So this merely shows that\nthe two sampling methods are not equivalent.\n\n>The desired property can also be phrased as \"every tuple should be\n>equally likely to be included in the final sample\".\n\nOnly at first sight. You really expect more from random sampling.\nOtherwise I'd just put one random tuple and its n - 1 successors (modulo\nN) into the sample. This satisfies your condition but you wouldn't call\nit a random sample.\n\nRandom sampling is more like \"every possible sample is equally likely to\nbe collected\", and two-stage sampling doesn't satisfy this condition.\n\nBut if in your opinion the difference is not significant, I'll stop\ncomplaining against my own idea. 
Is there anybody else who cares?\n\n>You could argue that a tuple on a heavily populated page is\n>statistically likely to see a higher T when it's part of the page sample\n>pool than a tuple on a near-empty page is likely to see, and therefore\n>there is some bias against selection of the former tuple. But given a\n>sample over a reasonably large number of pages, the contribution of any\n>one page to T should be fairly small and so this effect ought to be\n>small.\n\nIt is even better: Storing a certain number of tuples on heavily\npopulated pages takes less pages than to store them on sparsely\npopulated pages (due to tuple size or to dead tuples). So heavily\npopulated pages are less likely to be selected in stage one, and this\nexactly offsets the effect of increasing T.\n\n>So I think this method is effectively unbiased at the tuple level.\n\nServus\n Manfred\n", "msg_date": "Sat, 17 Apr 2004 00:26:22 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> Random sampling is more like \"every possible sample is equally likely to\n> be collected\", and two-stage sampling doesn't satisfy this condition.\n\nOkay, I finally see the point here: in the limit as the number of pages\nB goes to infinity, you'd expect the probability that each tuple in your\nsample came from a different page to go to 1. But this doesn't happen\nin the two-stage sampling method: the probability doesn't increase\nbeyond the value it would have for B=n. On the average each sample page\nwould supply one tuple, but the odds that this holds *exactly* would be\npretty low.\n\nHowever the existing sampling method has glaring flaws of its own,\nin particular having to do with the fact that a tuple whose slot is\npreceded by N empty slots is N times more likely to be picked than one\nthat has no empty-slot predecessors. The fact that the two-stage\nmethod artificially constrains the sample to come from only n pages\nseems like a minor problem by comparison; I'd happily accept it to get\nrid of the empty-slot bias.\n\nA possible compromise is to limit the number of pages sampled to\nsomething a bit larger than n, perhaps 2n or 3n. I don't have a feeling\nfor the shape of the different-pages probability function; would this\nmake a significant difference, or would it just waste cycles?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Apr 2004 12:00:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slows down with more accurate stats " }, { "msg_contents": "On Mon, 19 Apr 2004 12:00:10 -0400, Tom Lane <[email protected]> wrote:\n>A possible compromise is to limit the number of pages sampled to\n>something a bit larger than n, perhaps 2n or 3n. I don't have a feeling\n>for the shape of the different-pages probability function; would this\n>make a significant difference, or would it just waste cycles?\n\nI would have replied earlier, if I had a good answer. What I have so\nfar contains at least one, probably two flaws. Knowing not much more\nthan the four basic arithmetic operations I was not able to improve my\nmodel. So I post what I have:\n\nAs usual we assume a constant number c of tuples per page. 
If we have a\ntable of size B pages and want to collect a sample of n tuples, the\nnumber of possible samples is (again in OOo syntax)\n\n\tleft( binom{cB}{n} right)\n\nIf we select an arbitrary page, the number of possible samples that do\nNOT contain any tuple from this page is\n\n\tleft( binom {c (B-1)} {n} right)\n\nLet's forget about our actual implementations of sampling methods and\npretend we have a perfect random sampling method. So the probability\nPnot(c, B, n) that a certain page is not represented in a random sample\nis\n\n\tleft( binom {c (B-1)} {n} right) over left( binom{cB}{n} right)\n\nwhich can be transformed into the more computing-friendly form\n\n\tprod from{i=0} to{n-1} {{cB-c - i} over {cB - i}}\n\nClearly the probability that a certain page *is* represented in a sample\nis\n\n\tPyes(c, B, n) = 1 - Pnot(c, B, n)\n\nThe next step assumes that these probabilities are independent for\ndifferent pages, which in reality they are not. We simply estimate the\nnumber of pages represented in a random sample as \n\n\tnumPag(c, B, n) = B * Pyes(c, B, n)\n\nHere are some results for n = 3000:\n\n\tB \\ c-> 10 | 100 | 200\n\t-------+-------+-------+-------\n\t 100 | --- | 100 | 100\n\t 1000 | 972 | 953 | 951\n\t 2000 | 1606 | 1559 | 1556\n\t 3000 | 1954 | 1902 | 1899\n\t 6000 | 2408 | 2366 | 2363\n\t 9000 | 2588 | 2555 | 2553\n\t 20000 | 2805 | 2788 | 2787\n\t 30000 | 2869 | 2856 | 2856\n\t100000 | 2960 | 2956 | 2956\n\nThis doesn't look to depend heavily on the number of tuples per page,\nwhich sort of justifies the assumption that c is constant.\n\nIn the next step I tried to estimate the number of pages that contain\nexactly 1, 2, ... tuples of the sample. My naive procedure works as\nfollows (I'm not sure whether it is even valid as a rough approximation,\nconstructive criticism is very welcome):\n\nFor c=100, B=3000, n=3000 we expect 1902 pages to contain at least 1\ntuple of the sample. There are 1098 more tuples than pages, these\ntuples lie somewhere in those 1902 pages from the first step.\nnumPag(99, 1902, 1098) = 836 pages contain at least a second tuple.\nSo the number of pages containing exactly 1 tuple is 1902 - 836 = 1066.\nRepeating these steps we get 611 pages with 2 tuples, 192 with 3, 30\nwith 4, and 3 pages with 5 tuples.\n\nHere are some more numbers for c = 100 and n = 3000:\n\t\n\t B | pages with 1, 2, ... tuples\n\t-------+--------------------------------------------------------\n\t 100 | 1 to 24 tuples: 0, then 1, 2, 4, 10, 18, 26, 24, 11, 4\n\t 1000 | 108, 201, 268, 229, 113, 29, 5\n\t 2000 | 616, 555, 292, 83, 12, 1\n\t 3000 | 1066, 611, 192, 30, 3\n\t 6000 | 1809, 484, 68, 5\n\t 9000 | 2146, 374, 32, 2\n\t 20000 | 2584, 196, 8\n\t 30000 | 2716, 138, 3\n\t100000 | 2912, 44\n\nA small C program to experimentally confirm or refute these calculations\nis attached. Its results are fairly compatible with above numbers,\nIMHO.\n\nServus\n Manfred", "msg_date": "Sun, 25 Apr 2004 22:26:56 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Number of pages in a random sample (was: query slows down with more\n\taccurate stats)" }, { "msg_contents": "\nI have not been following this thread carefully. 
Just in case you are\ninterested in further reading, you could check this paper:\n\n \"A Bi-Level Bernoulli Scheme for Database Sampling\"\n Peter Haas, Christian Koenig (SIGMOD 2004) \n\n-- \nPip-pip\nSailesh\nhttp://www.cs.berkeley.edu/~sailesh\n\n\n", "msg_date": "Mon, 26 Apr 2004 08:08:16 -0700", "msg_from": "Sailesh Krishnamurthy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of pages in a random sample" }, { "msg_contents": "On Mon, 26 Apr 2004 08:08:16 -0700, Sailesh Krishnamurthy\n<[email protected]> wrote:\n> \"A Bi-Level Bernoulli Scheme for Database Sampling\"\n> Peter Haas, Christian Koenig (SIGMOD 2004) \n\nDoes this apply to our problem? AFAIK with Bernoulli sampling you don't\nknow the sample size in advance.\n\nAnyway, thanks for the hint. Unfortunately I couldn't find the\ndocument. Do you have a link?\n\nServus\n Manfred\n", "msg_date": "Thu, 29 Apr 2004 18:56:36 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Number of pages in a random sample" } ]
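Back on the practical end of this thread: Robert notes that disabling merge joins restores the fast plan, and Tom calls lowering random_page_cost a kluge until the nestloop cost model improves. If either knob has to be used in the meantime, it can at least be confined to one transaction instead of the whole server; a sketch using SET LOCAL (present in the 7.3/7.4 series discussed here, as far as I recall), where the final SELECT is only a placeholder for the real query from the start of the thread:

    BEGIN;
    SET LOCAL enable_mergejoin = off;   -- per-transaction override, reverts at COMMIT/ROLLBACK
    SET LOCAL random_page_cost = 2;     -- the softer alternative mentioned above
    SELECT 1;                           -- placeholder: run the problem query here instead
    COMMIT;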
[ { "msg_contents": "Folks,\n\tI have a question about views: I want to have a fairly wide view (lots of\ncolumns) where most of the columns have some heavyish calculations in them,\nbut I'm concerned that it will have to calculate every column even when I'm\nnot selecting them. So, the question is, if I have 5 columns in a view but\nonly select 1 column, is the system smart enough to not calculate the unused\ncolumns, or am I taking a performance hit over a smaller view that doesn't\nhave the extra 4 columns?\nThanks,\nPeter Darley\n\n", "msg_date": "Tue, 13 Apr 2004 13:49:53 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": true, "msg_subject": "View columns calculated" }, { "msg_contents": "\"Peter Darley\" <[email protected]> writes:\n> \tI have a question about views: I want to have a fairly wide view (lots of\n> columns) where most of the columns have some heavyish calculations in them,\n> but I'm concerned that it will have to calculate every column even when I'm\n> not selecting them. So, the question is, if I have 5 columns in a view but\n> only select 1 column, is the system smart enough to not calculate the unused\n> columns,\n\nIt depends on what the rest of your view looks like. If the view is\nsimple enough to be \"flattened\" into the parent query then the unused\ncolumns will disappear into the ether. If it's not flattenable then\nthey will get evaluated. You can check by seeing whether an EXPLAIN\nshows a separate \"subquery scan\" node corresponding to the view.\n(Without bothering to look at the code, an unflattenable view is one\nthat uses GROUP BY, DISTINCT, aggregates, ORDER BY, LIMIT, UNION,\nINTERSECT, EXCEPT, probably a couple other things.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Apr 2004 17:23:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View columns calculated " }, { "msg_contents": "* Tom Lane <[email protected]> wrote:\n> \"Peter Darley\" <[email protected]> writes:\n> > \tI have a question about views: I want to have a fairly wide view (lots of\n> > columns) where most of the columns have some heavyish calculations in them,\n> > but I'm concerned that it will have to calculate every column even when I'm\n> > not selecting them. So, the question is, if I have 5 columns in a view but\n> > only select 1 column, is the system smart enough to not calculate the unused\n> > columns,\n> \n> It depends on what the rest of your view looks like. If the view is\n> simple enough to be \"flattened\" into the parent query then the unused\n> columns will disappear into the ether. If it's not flattenable then\n> they will get evaluated. You can check by seeing whether an EXPLAIN\n> shows a separate \"subquery scan\" node corresponding to the view.\n> (Without bothering to look at the code, an unflattenable view is one\n> that uses GROUP BY, DISTINCT, aggregates, ORDER BY, LIMIT, UNION,\n> INTERSECT, EXCEPT, probably a couple other things.)\n\nWhat about functions ? \nI'm using several (immutable) functions for mapping IDs to names, etc.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. 
-- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n", "msg_date": "Thu, 24 Mar 2005 14:48:05 +0100", "msg_from": "Enrico Weigelt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View columns calculated" } ]
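A self-contained way to watch the flattening behaviour Tom describes; every name below is invented for the demonstration, the squared column merely stands in for a genuinely heavy calculation, and exact plan node names vary between releases:

    CREATE TABLE demo (a int, b int);

    CREATE VIEW v_flat AS                -- flattenable: no DISTINCT, GROUP BY, aggregates, ...
        SELECT a, b, b * b AS heavy FROM demo;

    CREATE VIEW v_opaque AS              -- DISTINCT blocks flattening
        SELECT DISTINCT a, b, b * b AS heavy FROM demo;

    EXPLAIN SELECT a FROM v_flat;        -- plain scan on demo; the unused heavy column drops out
    EXPLAIN SELECT a FROM v_opaque;      -- subquery scan over the view, so heavy is still evaluated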
[ { "msg_contents": "I run the following command three times to prevent cache/disk results.\n\n[...]\ndps=> explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=764518963 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------\n Limit (cost=173.14..173.14 rows=1 width=4) (actual time=0.357..0.358 \nrows=1 loops=1)\n -> Sort (cost=173.14..173.22 rows=32 width=4) (actual \ntime=0.354..0.354 rows=1 loops=1)\n Sort Key: rec_id\n -> Index Scan using url_crc on url (cost=0.00..172.34 \nrows=32 width=4) (actual time=0.039..0.271 rows=50 loops=1)\n Index Cond: (crc32 = 764518963)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 0.410 ms\n(7 rows)\n\ndps=> explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=764518963 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT \n1;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------\n Limit (cost=173.14..173.14 rows=1 width=8) (actual time=0.378..0.378 \nrows=1 loops=1)\n -> Sort (cost=173.14..173.22 rows=32 width=8) (actual \ntime=0.375..0.375 rows=1 loops=1)\n Sort Key: crc32, rec_id\n -> Index Scan using url_crc on url (cost=0.00..172.34 \nrows=32 width=8) (actual time=0.038..0.278 rows=50 loops=1)\n Index Cond: (crc32 = 764518963)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 0.432 ms\n(7 rows)\ndps=> explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=419903683 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------\n Limit (cost=0.00..37.03 rows=1 width=4) (actual time=156.712..156.713 \nrows=1 loops=1)\n -> Index Scan using url_pkey on url (cost=0.00..14996.82 rows=405 \nwidth=4) (actual time=156.707..156.707 rows=1 loops=1)\n Filter: ((crc32 <> 0) AND (crc32 = 419903683) AND ((status = \n200) OR (status = 304) OR (status = 206)))\n Total runtime: 156.769 ms\n(4 rows)\n\ndps=> explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=419903683 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT \n1;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------------------------\n Limit (cost=1910.14..1910.14 rows=1 width=8) (actual \ntime=4.558..4.559 rows=1 loops=1)\n -> Sort (cost=1910.14..1911.15 rows=405 width=8) (actual \ntime=4.555..4.555 rows=1 loops=1)\n Sort Key: crc32, rec_id\n -> Index Scan using url_crc on url (cost=0.00..1892.60 \nrows=405 width=8) (actual time=0.042..2.935 rows=719 loops=1)\n Index Cond: (crc32 = 419903683)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 4.636 ms\n(7 rows)\n\nThe value 764518963 is not common, it appears 50 times in the table.\nThe value 419903683 is the third most common value of the table url.\n\ndps=> select u.crc32, count(*) from url u group by u.crc32 order by \ncount(*) desc;\n crc32 | count\n-------------+------\n 0 | 82202\n -946427862 | 10545\n 419903683 | 719\n 945866756 | 670\n[...]\n\nHow to setup pgsql to correctly select the good index for index scan ?\n\nI run Pgsql 7.4.x\nThe database runs under pg_autovacuum daemon.\nAnd a VACUUM FULL VERBOSE 
ANALYZE was done 10 hours before.\n\nCordialement,\nJean-G�rard Pailloncy\n", "msg_date": "Wed, 14 Apr 2004 14:39:21 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n> I run the following command three times to prevent cache/disk results.\n\nDo you think that's actually representative of how your database will\nbehave under load?\n\nIf the DB is small enough to be completely cached in RAM, and you\nexpect it to remain so, then it's sensible to optimize on the basis\nof fully-cached test cases. Otherwise I think you are optimizing\nthe wrong thing.\n\nIf you do want to plan on this basis, you want to set random_page_cost\nto 1, make sure effective_cache_size is large, and perhaps increase\nthe cpu_xxx cost numbers. (What you're essentially doing here is\nreducing the estimated cost of a page fetch relative to CPU effort.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Apr 2004 09:22:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "Hi,\n\nI apologize for the mistake.\nSo, I dump the database, I reload it then VACUUM ANALYZE.\nFor each statement: I then quit postgres, start it, execute one \ncommand, then quit.\n\nLe 14 avr. 04, � 14:39, Pailloncy Jean-G�rard a �crit :\n\ndps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=764518963 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------------------------\n Limit (cost=169.79..169.79 rows=1 width=4) (actual \ntime=502.397..502.398 rows=1 loops=1)\n -> Sort (cost=169.79..169.86 rows=30 width=4) (actual \ntime=502.393..502.393 rows=1 loops=1)\n Sort Key: rec_id\n -> Index Scan using url_crc on url (cost=0.00..169.05 \nrows=30 width=4) (actual time=43.545..490.895 rows=56 loops=1)\n Index Cond: (crc32 = 764518963)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 502.520 ms\n(7 rows)\ndps=# \\q\n\ndps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=764518963 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT \n1;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------\n Limit (cost=169.79..169.79 rows=1 width=8) (actual time=5.893..5.894 \nrows=1 loops=1)\n -> Sort (cost=169.79..169.86 rows=30 width=8) (actual \ntime=5.889..5.889 rows=1 loops=1)\n Sort Key: crc32, rec_id\n -> Index Scan using url_crc on url (cost=0.00..169.05 \nrows=30 width=8) (actual time=0.445..5.430 rows=56 loops=1)\n Index Cond: (crc32 = 764518963)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 6.020 ms\n(7 rows)\ndps=# \\q\n\ndps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=419903683 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------ \n----------------------------------------------------------\n Limit (cost=0.00..27.95 rows=1 width=4) (actual \ntime=11021.875..11021.876 rows=1 loops=1)\n -> Index Scan using url_pkey on url (cost=0.00..11625.49 rows=416 \nwidth=4) (actual time=11021.868..11021.868 rows=1 loops=1)\n Filter: ((crc32 <> 0) AND (crc32 = 419903683) AND 
((status = \n200) OR (status = 304) OR (status = 206)))\n Total runtime: 11021.986 ms\n(4 rows)\ndps=# \\q\n\ndps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \ncrc32=419903683 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT \n1;\n QUERY PLAN\n------------------------------------------------------------------------ \n---------------------------------------------------------\n Limit (cost=2000.41..2000.41 rows=1 width=8) (actual \ntime=48.503..48.504 rows=1 loops=1)\n -> Sort (cost=2000.41..2001.45 rows=416 width=8) (actual \ntime=48.499..48.499 rows=1 loops=1)\n Sort Key: crc32, rec_id\n -> Index Scan using url_crc on url (cost=0.00..1982.31 \nrows=416 width=8) (actual time=4.848..45.452 rows=796 loops=1)\n Index Cond: (crc32 = 419903683)\n Filter: ((crc32 <> 0) AND ((status = 200) OR (status = \n304) OR (status = 206)))\n Total runtime: 48.656 ms\n(7 rows)\ndps=# \\q\n\nSo, with all fresh data, everything rebuild from scratch, on a backend \nthat will done one and only one query, the results is strange.\nWhy adding an ORDER BY clause on a column with one value speed up the \nstuff 502ms to 6ms ?\nWhy when crc32=419903683, which is one of the most often used value in \nthe table, the query planner chose a plan so bad (225 times slower) ?\n\nCordialement,\nJean-G�rard Pailloncy\n\n", "msg_date": "Tue, 20 Apr 2004 19:10:50 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 225 times slower" }, { "msg_contents": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]> writes:\n> dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND \n> crc32=419903683 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ----------------------------------------------------------\n> Limit (cost=0.00..27.95 rows=1 width=4) (actual \n> time=11021.875..11021.876 rows=1 loops=1)\n> -> Index Scan using url_pkey on url (cost=0.00..11625.49 rows=416 \n> width=4) (actual time=11021.868..11021.868 rows=1 loops=1)\n> Filter: ((crc32 <> 0) AND (crc32 = 419903683) AND ((status = \n> 200) OR (status = 304) OR (status = 206)))\n> Total runtime: 11021.986 ms\n> (4 rows)\n> dps=# \\q\n\nThe planner is guessing that scanning in rec_id order will produce a\nmatching row fairly quickly (sooner than selecting all the matching rows\nand sorting them would do). It's wrong in this case, but I'm not sure\nit could do better without very detailed cross-column statistics. Am I\nright to guess that the rows that match the WHERE clause are not evenly\ndistributed in the rec_id order, but rather there are no such rows till\nyou get well up in the ordering?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 00:15:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 225 times slower " } ]
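One option not spelled out in the exchange above is to give the planner an index that matches both the equality filter and the requested ordering, so the LIMIT 1 can be satisfied from the crc32 entries directly, with neither the url_pkey walk nor a sort. The index name is made up; the table, columns and test value are the ones from the thread:

    CREATE INDEX url_crc32_rec_id ON url (crc32, rec_id);

    EXPLAIN ANALYZE
    SELECT rec_id FROM url
    WHERE crc32 != 0 AND crc32 = 419903683 AND status IN (200, 304, 206)
    ORDER BY crc32, rec_id
    LIMIT 1;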
[ { "msg_contents": "Hi\nI have .5 million rows in a table. My problem is select count(*) takes ages.\nVACUUM FULL does not help. can anyone please tell me\nhow to i enhance the performance of the setup.\n\nRegds\nmallah.\n\npostgresql.conf\n----------------------\nmax_fsm_pages = 55099264 # min max_fsm_relations*16, 6 \nbytes each\nmax_fsm_relations = 5000\n\n\ntradein_clients=# explain analyze SELECT count(*) from eyp_rfi;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=78311.37..78311.37 rows=1 width=0) (actual \ntime=42306.902..42306.903 rows=1 loops=1)\n -> Seq Scan on eyp_rfi (cost=0.00..77046.49 rows=505949 width=0) \n(actual time=0.032..41525.007 rows=505960 loops=1)\n Total runtime: 42306.995 ms\n(3 rows)\n\ntradein_clients=# SELECT count(*) from eyp_rfi;\n count\n--------\n 505960\n(1 row)\n\ntradein_clients=# VACUUM full verbose eyp_rfi;\nINFO: vacuuming \"public.eyp_rfi\"\nINFO: \"eyp_rfi\": found 0 removable, 505960 nonremovable row versions in \n71987 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 186 to 2036 bytes long.\nThere were 42587 unused item pointers.\nTotal free space (including removable row versions) is 21413836 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n38693 pages containing 19146684 free bytes are potential move destinations.\nCPU 2.62s/0.40u sec elapsed 38.45 sec.\nINFO: index \"eyp_rfi_date\" now contains 505960 row versions in 1197 pages\nDETAIL: 0 index row versions were removed.\n4 index pages have been deleted, 4 are currently reusable.\nCPU 0.05s/0.29u sec elapsed 0.87 sec.\nINFO: index \"eyp_rfi_receiver_uid\" now contains 505960 row versions in \n1163 pages\nDETAIL: 0 index row versions were removed.\n1 index pages have been deleted, 1 are currently reusable.\nCPU 0.03s/0.42u sec elapsed 1.33 sec.\nINFO: index \"eyp_rfi_inhouse\" now contains 505960 row versions in 1208 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.21u sec elapsed 1.20 sec.\nINFO: index \"eyp_rfi_rfi_id_key\" now contains 505960 row versions in \n1201 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.33u sec elapsed 0.81 sec.\nINFO: index \"eyp_rfi_list_id_idx\" now contains 505960 row versions in \n1133 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.43u sec elapsed 1.12 sec.\nINFO: index \"eyp_rfi_status\" now contains 505960 row versions in 1448 pages\nDETAIL: 0 index row versions were removed.\n4 index pages have been deleted, 4 are currently reusable.\nCPU 0.05s/0.22u sec elapsed 1.08 sec.\nINFO: index \"eyp_rfi_list_id\" now contains 505960 row versions in 1133 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.43u sec elapsed 1.00 sec.\nINFO: index \"eyp_rfi_receiver_email\" now contains 505960 row versions \nin 2801 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.16s/0.52u sec elapsed 10.38 sec.\nINFO: index \"eyp_rfi_subj\" now contains 80663 row versions in 463 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/0.14u sec elapsed 3.20 sec.\nINFO: index 
\"eyp_rfi_sender\" now contains 505960 row versions in 3025 pages\nDETAIL: 0 index row versions were removed.\n6 index pages have been deleted, 6 are currently reusable.\nCPU 0.10s/0.39u sec elapsed 4.99 sec.\nINFO: index \"eyp_sender_uid_idx\" now contains 505960 row versions in \n1216 pages\nDETAIL: 0 index row versions were removed.\n5 index pages have been deleted, 5 are currently reusable.\nCPU 0.04s/0.36u sec elapsed 2.61 sec.\nINFO: index \"eyp_rfi_rec_uid_idx\" now contains 505960 row versions in \n1166 pages\nDETAIL: 0 index row versions were removed.\n1 index pages have been deleted, 1 are currently reusable.\nCPU 0.05s/0.41u sec elapsed 2.04 sec.\nINFO: index \"eyp_rfi_index\" now contains 505960 row versions in 2051 pages\nDETAIL: 0 index row versions were removed.\n7 index pages have been deleted, 7 are currently reusable.\nCPU 0.10s/0.28u sec elapsed 8.16 sec.\nINFO: \"eyp_rfi\": moved 0 row versions, truncated 71987 to 71987 pages\nDETAIL: CPU 2.03s/2.09u sec elapsed 95.24 sec.\nINFO: vacuuming \"pg_toast.pg_toast_19609\"\nINFO: \"pg_toast_19609\": found 0 removable, 105342 nonremovable row \nversions in 21038 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 37 to 2034 bytes long.\nThere were 145 unused item pointers.\nTotal free space (including removable row versions) is 16551072 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n18789 pages containing 16512800 free bytes are potential move destinations.\nCPU 0.70s/0.09u sec elapsed 41.64 sec.\nINFO: index \"pg_toast_19609_index\" now contains 105342 row versions in \n296 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.02u sec elapsed 0.63 sec.\nINFO: \"pg_toast_19609\": moved 0 row versions, truncated 21038 to 21038 \npages\nDETAIL: CPU 0.01s/0.01u sec elapsed 10.03 sec.\nVACUUM\ntradein_clients=# explain analyze SELECT count(*) from eyp_rfi;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=78311.50..78311.50 rows=1 width=0) (actual \ntime=50631.488..50631.489 rows=1 loops=1)\n -> Seq Scan on eyp_rfi (cost=0.00..77046.60 rows=505960 width=0) \n(actual time=0.030..49906.198 rows=505964 loops=1)\n Total runtime: 50631.658 ms\n(3 rows)\n", "msg_date": "Wed, 14 Apr 2004 23:23:13 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "select count(*) very slow on an already vacuumed table." }, { "msg_contents": "On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:\n> Hi\n> I have .5 million rows in a table. My problem is select count(*) takes\n> ages. VACUUM FULL does not help. can anyone please tell me\n> how to i enhance the performance of the setup.\n\n> SELECT count(*) from eyp_rfi;\n\nIf this is the actual query you're running, and you need a guaranteed accurate \nresult, then you only have one option: write a trigger function to update a \ntable_count table with every insert/delete to eyp_rfi.\n\nThere is loads of info on this (and why it isn't as simple as you might think) \nin the archives. First though:\n1. Is this the actual query, or just a representation?\n2. 
Do you need an accurate figure or just something \"near enough\"?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 14 Apr 2004 20:08:58 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." }, { "msg_contents": "Richard Huxton wrote:\n\n>On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:\n> \n>\n>>Hi\n>>I have .5 million rows in a table. My problem is select count(*) takes\n>>ages. VACUUM FULL does not help. can anyone please tell me\n>>how to i enhance the performance of the setup.\n>> \n>>\n>\n> \n>\n>>SELECT count(*) from eyp_rfi;\n>> \n>>\n>\n>If this is the actual query you're running, and you need a guaranteed accurate \n>result, then you only have one option: write a trigger function to update a \n>table_count table with every insert/delete to eyp_rfi.\n> \n>\n\nit is just an example. in general all the queries that involves eyp_rfi\nbecome slow. reloading the table makes the query faster.\n\nmallah.\n\n>There is loads of info on this (and why it isn't as simple as you might think) \n>in the archives. First though:\n>1. Is this the actual query, or just a representation?\n>2. Do you need an accurate figure or just something \"near enough\"?\n>\n> \n>\n\n", "msg_date": "Thu, 15 Apr 2004 02:13:03 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." }, { "msg_contents": "\n\nThe problem is that i want to know if i need a Hardware upgrade\nat the moment.\n\nEg i have another table rfis which contains ~ .6 million records.\n\n\nSELECT count(*) from rfis where sender_uid > 0;\n+--------+\n| count |\n+--------+\n| 564870 |\n+--------+\nTime: 117560.635 ms\n\nWhich is approximate 4804 records per second. Is it an acceptable\nperformance on the hardware below:\n\nRAM: 2 GB\nDISKS: ultra160 , 10 K , 18 GB\nProcessor: 2* 2.0 Ghz Xeon\n\nWhat kind of upgrades shoud be put on the server for it to become\nreasonable fast.\n\n\nRegds\nmallah.\n\n\n\n\nRichard Huxton wrote:\n\n>On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:\n> \n>\n>>Hi\n>>I have .5 million rows in a table. My problem is select count(*) takes\n>>ages. VACUUM FULL does not help. can anyone please tell me\n>>how to i enhance the performance of the setup.\n>> \n>>\n>\n> \n>\n>>SELECT count(*) from eyp_rfi;\n>> \n>>\n>\n>If this is the actual query you're running, and you need a guaranteed accurate \n>result, then you only have one option: write a trigger function to update a \n>table_count table with every insert/delete to eyp_rfi.\n>\n>There is loads of info on this (and why it isn't as simple as you might think) \n>in the archives. First though:\n>1. Is this the actual query, or just a representation?\n>2. Do you need an accurate figure or just something \"near enough\"?\n>\n> \n>\n\n", "msg_date": "Thu, 15 Apr 2004 12:40:27 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." 
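To make the trigger-maintained count Richard suggests above concrete, here is a minimal sketch in 7.4-style plpgsql. The table and function names are illustrative, and it assumes plpgsql has already been added to the database with createlang:

-- One row per counted table; seeded once from the real count.
CREATE TABLE table_counts (
    table_name  text PRIMARY KEY,
    n_rows      bigint NOT NULL
);
INSERT INTO table_counts SELECT 'eyp_rfi', count(*) FROM eyp_rfi;

CREATE OR REPLACE FUNCTION eyp_rfi_maintain_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE table_counts SET n_rows = n_rows + 1 WHERE table_name = ''eyp_rfi'';
    ELSE  -- DELETE
        UPDATE table_counts SET n_rows = n_rows - 1 WHERE table_name = ''eyp_rfi'';
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER eyp_rfi_count_trig
    AFTER INSERT OR DELETE ON eyp_rfi
    FOR EACH ROW EXECUTE PROCEDURE eyp_rfi_maintain_count();

-- The count is now a one-row lookup instead of a sequential scan:
SELECT n_rows FROM table_counts WHERE table_name = 'eyp_rfi';

The catch hinted at above is that every insert and delete now also updates, and serializes on, the single counter row, so write-heavy periods pay for the instant count.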
}, { "msg_contents": "\nThe relation size for this table is 1.7 GB\n\ntradein_clients=# SELECT public.relation_size ('general.rfis');\n+------------------+\n| relation_size |\n+------------------+\n| 1,762,639,872 |\n+------------------+\n(1 row)\n\nRegds\nmallah.\n\nRajesh Kumar Mallah wrote:\n\n>\n>\n> The problem is that i want to know if i need a Hardware upgrade\n> at the moment.\n>\n> Eg i have another table rfis which contains ~ .6 million records.\n>\n>\n> SELECT count(*) from rfis where sender_uid > 0;\n> +--------+\n> | count |\n> +--------+\n> | 564870 |\n> +--------+\n> Time: 117560.635 ms\n>\n> Which is approximate 4804 records per second. Is it an acceptable\n> performance on the hardware below:\n>\n> RAM: 2 GB\n> DISKS: ultra160 , 10 K , 18 GB\n> Processor: 2* 2.0 Ghz Xeon\n>\n> What kind of upgrades shoud be put on the server for it to become\n> reasonable fast.\n>\n>\n> Regds\n> mallah.\n>\n>\n>\n>\n> Richard Huxton wrote:\n>\n>> On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:\n>> \n>>\n>>> Hi\n>>> I have .5 million rows in a table. My problem is select count(*) takes\n>>> ages. VACUUM FULL does not help. can anyone please tell me\n>>> how to i enhance the performance of the setup.\n>>> \n>>\n>>\n>> \n>>\n>>> SELECT count(*) from eyp_rfi;\n>>> \n>>\n>>\n>> If this is the actual query you're running, and you need a guaranteed \n>> accurate result, then you only have one option: write a trigger \n>> function to update a table_count table with every insert/delete to \n>> eyp_rfi.\n>>\n>> There is loads of info on this (and why it isn't as simple as you \n>> might think) in the archives. First though:\n>> 1. Is this the actual query, or just a representation?\n>> 2. Do you need an accurate figure or just something \"near enough\"?\n>>\n>> \n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Thu, 15 Apr 2004 13:04:40 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." }, { "msg_contents": "On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n> The problem is that i want to know if i need a Hardware upgrade\n> at the moment.\n>\n> Eg i have another table rfis which contains ~ .6 million records.\n\n> SELECT count(*) from rfis where sender_uid > 0;\n\n> Time: 117560.635 ms\n>\n> Which is approximate 4804 records per second. Is it an acceptable\n> performance on the hardware below:\n>\n> RAM: 2 GB\n> DISKS: ultra160 , 10 K , 18 GB\n> Processor: 2* 2.0 Ghz Xeon\n\nHmm - doesn't seem good, does it? If you run it again, is it much faster \n(since the data should be cached then)? What does \"vmstat 10\" show while \nyou're running the query?\n\nOne thing you should have done is read the performance tuning guide at:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nThe default values are very conservative, and you will need to change them.\n\n> What kind of upgrades shoud be put on the server for it to become\n> reasonable fast.\n\nIf you've only got one disk, then a second disk for OS/logging. 
Difficult to \nsay more without knowing numbers of users/activity etc.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 15 Apr 2004 08:53:32 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." }, { "msg_contents": "Richard Huxton wrote:\n\n>On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n> \n>\n>>The problem is that i want to know if i need a Hardware upgrade\n>>at the moment.\n>>\n>>Eg i have another table rfis which contains ~ .6 million records.\n>> \n>>\n>\n> \n>\n>>SELECT count(*) from rfis where sender_uid > 0;\n>> \n>>\n>\n> \n>\n>>Time: 117560.635 ms\n>>\n>>Which is approximate 4804 records per second. Is it an acceptable\n>>performance on the hardware below:\n>>\n>>RAM: 2 GB\n>>DISKS: ultra160 , 10 K , 18 GB\n>>Processor: 2* 2.0 Ghz Xeon\n>> \n>>\n>\n>Hmm - doesn't seem good, does it? If you run it again, is it much faster \n>(since the data should be cached then)? What does \"vmstat 10\" show while \n>you're running the query?\n>\n>One thing you should have done is read the performance tuning guide at:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n>The default values are very conservative, and you will need to change them.\n> \n>\nHi,\n\nThanks for the interest . my config are not the default ones.\ni was running iostat while running the query. Looks like one\nof the disks doesnt' go past a read performance of 20 ,000 KBytes/sec\n\nwhile the other disk it goes as high as 40,000 . What i am ding \ncurrently is\nloading the table in both the disks and compare the table scan speeds.\n\nThe performance is definitely better in the newly loaded table in the other\ndisk . the load in server is 13 because i am simultaneously re-loading \nthe data\nin other table.\n\n\nrt2=# SELECT count(*) from rfis where sender_uid > 0;\n+--------+\n| count |\n+--------+\n| 564870 |\n+--------+\n(1 row)\n\nTime: 10288.359 ms\n\nrt2=#\n\nshall post the comparitive details under normal load soon\n\n\nregds\nmallah.\n\n\n\n\n\n\n\n> \n>\n>>What kind of upgrades shoud be put on the server for it to become\n>>reasonable fast.\n>> \n>>\n>\n>If you've only got one disk, then a second disk for OS/logging. Difficult to \n>say more without knowing numbers of users/activity etc.\n>\n> \n>\n\n", "msg_date": "Thu, 15 Apr 2004 16:01:26 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) very slow on an already vacuumed table." }, { "msg_contents": "Hi ,\nI am not sure, but I remember the same problem.\nIt was ot 7.3.x version and and I needet to reindex the table.\n\nI think after 7.4 vacuum also work correct with reindex.\n\nBut I am not sure.\n\nregards,\nivan.\n\nRajesh Kumar Mallah wrote:\n\n> Hi,\n>\n> The problem was solved by reloading the Table.\n> the query now takes only 3 seconds. 
But that is\n> not a solution.\n>\n> The problem is that such phenomenon obscures our\n> judgement used in optimising queries and database.\n>\n> If a query runs slow we really cant tell if its a problem\n> with query itself , hardware or dead rows.\n>\n> I already did vacumm full on the table but it still did not\n> have that effect on performance.\n> In fact the last figures were after doing a vacuum full.\n>\n> Can there be any more elegent solution to this problem.\n>\n> Regds\n> Mallah.\n>\n> Richard Huxton wrote:\n>\n> >On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n> >\n> >\n> >>The problem is that i want to know if i need a Hardware upgrade\n> >>at the moment.\n> >>\n> >>Eg i have another table rfis which contains ~ .6 million records.\n> >>\n> >>\n> >\n> >\n> >\n> >>SELECT count(*) from rfis where sender_uid > 0;\n> >>\n> >>\n> >\n> >\n> >\n> >>Time: 117560.635 ms\n> >>\n> >>Which is approximate 4804 records per second. Is it an acceptable\n> >>performance on the hardware below:\n> >>\n> >>RAM: 2 GB\n> >>DISKS: ultra160 , 10 K , 18 GB\n> >>Processor: 2* 2.0 Ghz Xeon\n> >>\n> >>\n> >\n> >Hmm - doesn't seem good, does it? If you run it again, is it much faster\n> >(since the data should be cached then)? What does \"vmstat 10\" show while\n> >you're running the query?\n> >\n> >One thing you should have done is read the performance tuning guide at:\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> >The default values are very conservative, and you will need to change them.\n> >\n> >\n> >\n> >>What kind of upgrades shoud be put on the server for it to become\n> >>reasonable fast.\n> >>\n> >>\n> >\n> >If you've only got one disk, then a second disk for OS/logging. Difficult to\n> >say more without knowing numbers of users/activity etc.\n> >\n> >\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n", "msg_date": "Thu, 15 Apr 2004 12:59:04 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "\nHi,\n\nThe problem was solved by reloading the Table.\nthe query now takes only 3 seconds. But that is\nnot a solution.\n\nThe problem is that such phenomenon obscures our\njudgement used in optimising queries and database.\n\n\nIf a query runs slow we really cant tell if its a problem\nwith query itself , hardware or dead rows.\n\n\nI already did vacumm full on the table but it still did not\nhave that effect on performance.\nIn fact the last figures were after doing a vacuum full.\n\nCan there be any more elegent solution to this problem.\n\nRegds\nMallah.\n\n\n\n\n\nRichard Huxton wrote:\n\n>On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n> \n>\n>>The problem is that i want to know if i need a Hardware upgrade\n>>at the moment.\n>>\n>>Eg i have another table rfis which contains ~ .6 million records.\n>> \n>>\n>\n> \n>\n>>SELECT count(*) from rfis where sender_uid > 0;\n>> \n>>\n>\n> \n>\n>>Time: 117560.635 ms\n>>\n>>Which is approximate 4804 records per second. Is it an acceptable\n>>performance on the hardware below:\n>>\n>>RAM: 2 GB\n>>DISKS: ultra160 , 10 K , 18 GB\n>>Processor: 2* 2.0 Ghz Xeon\n>> \n>>\n>\n>Hmm - doesn't seem good, does it? If you run it again, is it much faster \n>(since the data should be cached then)? 
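The cache check being suggested here is easy to run from psql with timing turned on; if the second run is dramatically faster, the first one was dominated by disk reads rather than CPU:

\timing
SELECT count(*) FROM rfis WHERE sender_uid > 0;  -- first run, mostly physical reads
SELECT count(*) FROM rfis WHERE sender_uid > 0;  -- repeat, served largely from cache if it fits

With 2 GB of RAM and a 1.7 GB table the second run may still hit disk for part of the data, so the difference is indicative rather than exact.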
What does \"vmstat 10\" show while \n>you're running the query?\n>\n>One thing you should have done is read the performance tuning guide at:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n>The default values are very conservative, and you will need to change them.\n>\n> \n>\n>>What kind of upgrades shoud be put on the server for it to become\n>>reasonable fast.\n>> \n>>\n>\n>If you've only got one disk, then a second disk for OS/logging. Difficult to \n>say more without knowing numbers of users/activity etc.\n>\n> \n>\n\n", "msg_date": "Thu, 15 Apr 2004 17:05:45 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> \n> Hi,\n> \n> The problem was solved by reloading the Table.\n> the query now takes only 3 seconds. But that is\n> not a solution.\n\nIf dropping/recreating the table improves things, then we can reasonably\nassume that the table is pretty active with updates/inserts. Correct?\n\n> The problem is that such phenomenon obscures our\n> judgement used in optimising queries and database.\n\nLots of phenomenon obscure that ...\n\n> If a query runs slow we really cant tell if its a problem\n> with query itself , hardware or dead rows.\n> \n> I already did vacumm full on the table but it still did not\n> have that effect on performance.\n> In fact the last figures were after doing a vacuum full.\n\nIf the data gets too fragmented, a vacuum may not be enough. Also, read\nup on the recommendations _against_ vacuum full (recommending only using\nvacuum on databases) With full, vacuum condenses the database, which may\nactually hurt performance. A regular vacuum just fixes things up, and\nmay leave unused space lying around. However, this should apparently\nachieve a balance between usage and vacuum. See the docs, they are much\nbetter at describing this than I am.\n\n> Can there be any more elegent solution to this problem.\n\nAs a guess, look into CLUSTER (a Postgres SQL command). CLUSTER will\nbasically recreate the table while ordering rows based on an index.\n(this might benefit you in other ways as well) Don't forget to analyze\nafter cluster. If the problem is caused by frequent updates/inserts,\nyou may find that re-clustering the table on a certain schedule is\nworthwhile.\n\nBe warned, this suggestion is based on an educated guess, I make no\nguarantees that it will help your problem. Read the docs on cluster\nand come to your own conclusions.\n\n> \n> Regds\n> Mallah.\n> \n> \n> \n> \n> \n> Richard Huxton wrote:\n> \n>> On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n>> \n>>\n>>> The problem is that i want to know if i need a Hardware upgrade\n>>> at the moment.\n>>>\n>>> Eg i have another table rfis which contains ~ .6 million records.\n>>> \n>>\n>>\n>> \n>>\n>>> SELECT count(*) from rfis where sender_uid > 0;\n>>> \n>>\n>>\n>> \n>>\n>>> Time: 117560.635 ms\n>>>\n>>> Which is approximate 4804 records per second. Is it an acceptable\n>>> performance on the hardware below:\n>>>\n>>> RAM: 2 GB\n>>> DISKS: ultra160 , 10 K , 18 GB\n>>> Processor: 2* 2.0 Ghz Xeon\n>>> \n>>\n>> Hmm - doesn't seem good, does it? If you run it again, is it much \n>> faster (since the data should be cached then)? 
What does \"vmstat 10\" \n>> show while you're running the query?\n>>\n>> One thing you should have done is read the performance tuning guide at:\n>> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n>> The default values are very conservative, and you will need to change \n>> them.\n>>\n>>> What kind of upgrades shoud be put on the server for it to become\n>>> reasonable fast.\n>>> \n>> If you've only got one disk, then a second disk for OS/logging. \n>> Difficult to say more without knowing numbers of users/activity etc.\n\n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 15 Apr 2004 09:36:54 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "Bill Moran wrote:\n\n> Rajesh Kumar Mallah wrote:\n>\n>>\n>> Hi,\n>>\n>> The problem was solved by reloading the Table.\n>> the query now takes only 3 seconds. But that is\n>> not a solution.\n>\n>\n> If dropping/recreating the table improves things, then we can reasonably\n> assume that the table is pretty active with updates/inserts. Correct?\n\n\n\nYes the table results from an import process and under goes lots\nof inserts and updates , but thats before the vacuum full operation.\nthe table is not accessed during vacuum. What i want to know is\nis there any wat to automate the dumping and reload of a table\nindividually. will the below be safe and effective:\n\n\nbegin work;\ncreate table new_tab AS select * from tab;\ntruncate table tab;\ninsert into tab select * from new_tab;\ndrop table new_tab;\ncommit;\nanalyze tab;\n\ni havenot tried it but plan to do so.\nbut i feel insert would take ages to update\nthe indexes if any.\n\nBTW\n\nis there any way to disable checks and triggers on\na table temporarily while loading data (is updating\nreltriggers in pg_class safe?)\n\n\n\n\n\n\n\n\n>\n>> The problem is that such phenomenon obscures our\n>> judgement used in optimising queries and database.\n>\n>\n> Lots of phenomenon obscure that ...\n>\ntrue. but there should not be too many.\n\n>> If a query runs slow we really cant tell if its a problem\n>> with query itself , hardware or dead rows.\n>>\n>> I already did vacumm full on the table but it still did not\n>> have that effect on performance.\n>> In fact the last figures were after doing a vacuum full.\n>\n>\n> If the data gets too fragmented, a vacuum may not be enough. Also, read\n> up on the recommendations _against_ vacuum full (recommending only using\n> vacuum on databases) With full, vacuum condenses the database, which may\n> actually hurt performance. A regular vacuum just fixes things up, and\n> may leave unused space lying around. However, this should apparently\n> achieve a balance between usage and vacuum. See the docs, they are much\n> better at describing this than I am.\n>\ni understand simultaneous vacuum and usage detoriates performance mostly.\nbut this case is different.\n\n\n>> Can there be any more elegent solution to this problem.\n>\n>\n> As a guess, look into CLUSTER (a Postgres SQL command). CLUSTER will\n> basically recreate the table while ordering rows based on an index.\n> (this might benefit you in other ways as well) Don't forget to analyze\n> after cluster. 
If the problem is caused by frequent updates/inserts,\n> you may find that re-clustering the table on a certain schedule is\n> worthwhile.\n\ni could consider that option also.\n\n>\n> Be warned, this suggestion is based on an educated guess, I make no\n> guarantees that it will help your problem. Read the docs on cluster\n> and come to your own conclusions.\n\nThanks .\n\nRegds\nmallah.\n\n\n\n>\n>>\n>> Regds\n>> Mallah.\n>>\n>>\n>>\n>>\n>>\n>> Richard Huxton wrote:\n>>\n>>> On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:\n>>> \n>>>\n>>>> The problem is that i want to know if i need a Hardware upgrade\n>>>> at the moment.\n>>>>\n>>>> Eg i have another table rfis which contains ~ .6 million records.\n>>>> \n>>>\n>>>\n>>>\n>>> \n>>>\n>>>> SELECT count(*) from rfis where sender_uid > 0;\n>>>> \n>>>\n>>>\n>>>\n>>> \n>>>\n>>>> Time: 117560.635 ms\n>>>>\n>>>> Which is approximate 4804 records per second. Is it an acceptable\n>>>> performance on the hardware below:\n>>>>\n>>>> RAM: 2 GB\n>>>> DISKS: ultra160 , 10 K , 18 GB\n>>>> Processor: 2* 2.0 Ghz Xeon\n>>>> \n>>>\n>>>\n>>> Hmm - doesn't seem good, does it? If you run it again, is it much \n>>> faster (since the data should be cached then)? What does \"vmstat 10\" \n>>> show while you're running the query?\n>>>\n>>> One thing you should have done is read the performance tuning guide at:\n>>> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n>>> The default values are very conservative, and you will need to \n>>> change them.\n>>>\n>>>> What kind of upgrades shoud be put on the server for it to become\n>>>> reasonable fast.\n>>>> \n>>>\n>>> If you've only got one disk, then a second disk for OS/logging. \n>>> Difficult to say more without knowing numbers of users/activity etc.\n>>\n>\n>\n>\n\n", "msg_date": "Thu, 15 Apr 2004 21:49:17 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:\n> Bill Moran wrote:\n> > Rajesh Kumar Mallah wrote:\n> >> Hi,\n> >>\n> >> The problem was solved by reloading the Table.\n> >> the query now takes only 3 seconds. But that is\n> >> not a solution.\n> >\n> > If dropping/recreating the table improves things, then we can reasonably\n> > assume that the table is pretty active with updates/inserts. Correct?\n>\n> Yes the table results from an import process and under goes lots\n> of inserts and updates , but thats before the vacuum full operation.\n> the table is not accessed during vacuum. What i want to know is\n> is there any wat to automate the dumping and reload of a table\n> individually. will the below be safe and effective:\n\nShouldn't be necessary assuming you vacuum (not full) regularly. However, \nlooking back at your original posting, the vacuum output doesn't seem to show \nany rows that need removing.\n\n# VACUUM full verbose eyp_rfi;\nINFO: vacuuming \"public.eyp_rfi\"\nINFO: \"eyp_rfi\": found 0 removable, 505960 nonremovable row versions in \n71987 pages\nDETAIL: 0 dead row versions cannot be removed yet.\n\nSince your select count(*) showed 505960 rows, I can't see how \ndropping/replacing could make a difference on a sequential scan. 
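A quick way to put numbers on that is to compare pages against rows straight from the catalog, before and after the reload; if relpages barely shrinks, the reload is helping for some other reason (caching, on-disk layout) rather than reclaimed space:

-- relpages/reltuples are refreshed by VACUUM or ANALYZE; pages are 8192 bytes by default.
SELECT relname, relpages, reltuples,
       relpages::float8 * 8192 / reltuples AS avg_bytes_per_row
FROM pg_class
WHERE relname IN ('eyp_rfi', 'rfis');

Toasted columns live in a separate pg_toast table (as the VACUUM output above shows), so the per-row figure here undercounts wide rows.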
Since we're \nnot using any indexes I don't see how it could be related to that.\n\n> begin work;\n> create table new_tab AS select * from tab;\n> truncate table tab;\n> insert into tab select * from new_tab;\n> drop table new_tab;\n> commit;\n> analyze tab;\n>\n> i havenot tried it but plan to do so.\n> but i feel insert would take ages to update\n> the indexes if any.\n\nIt will have to update them, which will take time.\n\n> BTW\n>\n> is there any way to disable checks and triggers on\n> a table temporarily while loading data (is updating\n> reltriggers in pg_class safe?)\n\nYou can take a look at pg_restore and copy how it does it.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 15 Apr 2004 18:44:33 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "\nOn Apr 15, 2004, at 12:44 PM, Richard Huxton wrote:\n\n> On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:\n>> Bill Moran wrote:\n>\n>> BTW\n>>\n>> is there any way to disable checks and triggers on\n>> a table temporarily while loading data (is updating\n>> reltriggers in pg_class safe?)\n>\n> You can take a look at pg_restore and copy how it does it.\n>\n>\n\nDoes SET CONSTRAINT take care of checks within the transaction? \nTriggers would be a different matter...\n\nMark\n\n", "msg_date": "Thu, 15 Apr 2004 13:29:57 -0500", "msg_from": "Mark Lubratt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "Rajesh Kumar Mallah wrote:\n> Bill Moran wrote:\n> \n>> Rajesh Kumar Mallah wrote:\n>>\n>>> Hi,\n>>>\n>>> The problem was solved by reloading the Table.\n>>> the query now takes only 3 seconds. But that is\n>>> not a solution.\n>>\n>> If dropping/recreating the table improves things, then we can reasonably\n>> assume that the table is pretty active with updates/inserts. Correct?\n> \n> Yes the table results from an import process and under goes lots\n> of inserts and updates , but thats before the vacuum full operation.\n> the table is not accessed during vacuum. What i want to know is\n> is there any wat to automate the dumping and reload of a table\n> individually. will the below be safe and effective:\n\nThe CLUSTER command I described is one way of doing this. It\nessentially automates the task of copying the table, dropping\nthe old one, and recreating it.\n\n>> If the data gets too fragmented, a vacuum may not be enough. Also, read\n>> up on the recommendations _against_ vacuum full (recommending only using\n>> vacuum on databases) With full, vacuum condenses the database, which may\n>> actually hurt performance. A regular vacuum just fixes things up, and\n>> may leave unused space lying around. However, this should apparently\n>> achieve a balance between usage and vacuum. See the docs, they are much\n>> better at describing this than I am.\n>>\n> i understand simultaneous vacuum and usage detoriates performance mostly.\n> but this case is different.\n\nJust want to make sure we're on the same page here. I'm not talking about\nvacuuming simultaneous with anything. I'm simply saying that \"vacuum full\"\nisn't always the best choice. 
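Picking up the pg_restore pointer above: for 7.4 the trick it uses around a data load is roughly the following catalog update (superuser only, and only safe while nothing else is touching the table):

BEGIN;
-- Hide all triggers on the table from the executor during the load.
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'eyp_rfi';

-- ... bulk COPY / INSERT here ...

-- Put the real trigger count back so the triggers fire again.
UPDATE pg_class SET reltriggers = (
    SELECT count(*) FROM pg_trigger WHERE tgrelid = pg_class.oid
) WHERE relname = 'eyp_rfi';
COMMIT;

Foreign key checks are implemented as triggers in 7.4, so this skips them too; anything loaded this way is not validated until it is checked by hand.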
You should probably only be doing \"vacuum\".\nThe reason and details for this are in the admin docs.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 15 Apr 2004 14:48:41 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "I am running an update on the same table\n\nupdate rfis set inquiry_status='APPROVED' where inquiry_status='a';\n\nIts running for past 20 mins. and top output is below.\nThe PID which is executing the query above is 6712. Can anyone\ntell me why it is in an uninterruptable sleep and does it relate\nto the apparent poor performance? Is it problem with the disk\nhardware. I know at nite this query will run reasonably fast.\n\nI am running on a decent hardware .\n\n\n\nRegds\nmallah.\n\n\n\n 1:41pm up 348 days, 21:10, 1 user, load average: 11.59, 13.69, 11.49\n85 processes: 83 sleeping, 1 running, 0 zombie, 1 stopped\nCPU0 states: 8.1% user, 2.3% system, 0.0% nice, 89.0% idle\nCPU1 states: 3.3% user, 2.3% system, 0.0% nice, 93.2% idle\nCPU2 states: 7.4% user, 1.4% system, 0.0% nice, 90.0% idle\nCPU3 states: 9.3% user, 7.4% system, 0.0% nice, 82.2% idle\nMem: 2064796K av, 2053964K used, 10832K free, 0K shrd, 22288K \nbuff\nSwap: 2048244K av, 88660K used, 1959584K free 1801532K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n* 6712 postgres 16 0 86592 84M 83920 D 11.1 4.1 1:36 postmaster*\n13103 postgres 15 0 54584 53M 52556 S 3.5 2.6 0:01 postmaster\n13034 root 16 0 1072 1072 848 R 2.1 0.0 0:02 top\n13064 postgres 15 0 67256 65M 64516 D 2.1 3.2 0:01 postmaster\n13088 postgres 16 0 43324 42M 40812 D 2.1 2.0 0:00 postmaster\n13076 postgres 15 0 49016 47M 46628 S 1.9 2.3 0:00 postmaster\n26931 postgres 15 0 84880 82M 83888 S 1.7 4.1 3:52 postmaster\n13107 postgres 15 0 18400 17M 16488 S 1.5 0.8 0:00 postmaster\n13068 postgres 15 0 44632 43M 42324 D 1.3 2.1 0:00 postmaster\n13074 postgres 15 0 68852 67M 66508 D 1.3 3.3 0:00 postmaster\n13108 postgres 15 0 11692 11M 10496 S 1.3 0.5 0:00 postmaster\n13075 postgres 15 0 50860 49M 47680 S 1.1 2.4 0:04 postmaster\n13066 postgres 15 0 56112 54M 53724 S 0.9 2.7 0:01 postmaster\n13109 postgres 15 0 14528 14M 13272 S 0.9 0.7 0:00 postmaster\n24454 postgres 15 0 2532 2380 1372 S 0.7 0.1 11:58 postmaster\n 12 root 15 0 0 0 0 SW 0.5 0.0 816:30 bdflush\n24455 postgres 15 0 1600 1476 1380 S 0.5 0.0 9:11 postmaster\n12528 postgres 15 0 84676 82M 79920 S 0.3 4.0 0:02 postmaster\n12575 postgres 15 0 76660 74M 75796 D 0.3 3.7 0:09 postmaster\n13038 postgres 15 0 48952 47M 46436 D 0.3 2.3 0:00 postmaster\n13069 postgres 15 0 57464 56M 54852 S 0.3 2.7 0:00 postmaster\n13102 postgres 15 0 17864 17M 16504 D 0.3 0.8 0:00 postmaster\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRichard Huxton wrote:\n\n>On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:\n> \n>\n>>Bill Moran wrote:\n>> \n>>\n>>>Rajesh Kumar Mallah wrote:\n>>> \n>>>\n>>>>Hi,\n>>>>\n>>>>The problem was solved by reloading the Table.\n>>>>the query now takes only 3 seconds. But that is\n>>>>not a solution.\n>>>> \n>>>>\n>>>If dropping/recreating the table improves things, then we can reasonably\n>>>assume that the table is pretty active with updates/inserts. Correct?\n>>> \n>>>\n>>Yes the table results from an import process and under goes lots\n>>of inserts and updates , but thats before the vacuum full operation.\n>>the table is not accessed during vacuum. 
What i want to know is\n>>is there any wat to automate the dumping and reload of a table\n>>individually. will the below be safe and effective:\n>> \n>>\n>\n>Shouldn't be necessary assuming you vacuum (not full) regularly. However, \n>looking back at your original posting, the vacuum output doesn't seem to show \n>any rows that need removing.\n>\n># VACUUM full verbose eyp_rfi;\n>INFO: vacuuming \"public.eyp_rfi\"\n>INFO: \"eyp_rfi\": found 0 removable, 505960 nonremovable row versions in \n>71987 pages\n>DETAIL: 0 dead row versions cannot be removed yet.\n>\n>Since your select count(*) showed 505960 rows, I can't see how \n>dropping/replacing could make a difference on a sequential scan. Since we're \n>not using any indexes I don't see how it could be related to that.\n>\n> \n>\n>>begin work;\n>>create table new_tab AS select * from tab;\n>>truncate table tab;\n>>insert into tab select * from new_tab;\n>>drop table new_tab;\n>>commit;\n>>analyze tab;\n>>\n>>i havenot tried it but plan to do so.\n>>but i feel insert would take ages to update\n>>the indexes if any.\n>> \n>>\n>\n>It will have to update them, which will take time.\n>\n> \n>\n>>BTW\n>>\n>>is there any way to disable checks and triggers on\n>>a table temporarily while loading data (is updating\n>>reltriggers in pg_class safe?)\n>> \n>>\n>\n>You can take a look at pg_restore and copy how it does it.\n>\n> \n>\n\n\n\n\n\n\n\n\n\n\nI am running an update on the same table\n\nupdate rfis set inquiry_status='APPROVED' where inquiry_status='a';\n\nIts running for past 20 mins. and top output is below.\nThe PID which is executing the query above is 6712. Can anyone \ntell me why it is in an uninterruptable sleep and does it relate \nto the apparent poor performance? Is it problem with the disk \nhardware. 
I know at nite this query will run reasonably fast.\n\nI am running on a decent hardware .\n\n\n\nRegds\nmallah.\n\n\n\n 1:41pm  up 348 days, 21:10,  1 user,  load average: 11.59, 13.69, 11.49\n85 processes: 83 sleeping, 1 running, 0 zombie, 1 stopped\nCPU0 states:  8.1% user,  2.3% system,  0.0% nice, 89.0% idle\nCPU1 states:  3.3% user,  2.3% system,  0.0% nice, 93.2% idle\nCPU2 states:  7.4% user,  1.4% system,  0.0% nice, 90.0% idle\nCPU3 states:  9.3% user,  7.4% system,  0.0% nice, 82.2% idle\nMem:  2064796K av, 2053964K used,   10832K free,       0K shrd,  \n22288K buff\nSwap: 2048244K av,   88660K used, 1959584K free                \n1801532K cached\n\n  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND\n 6712 postgres  16   0 86592  84M 83920 D    11.1  4.1   1:36\npostmaster\n13103 postgres  15   0 54584  53M 52556 S     3.5  2.6   0:01 postmaster\n13034 root      16   0  1072 1072   848 R     2.1  0.0   0:02 top\n13064 postgres  15   0 67256  65M 64516 D     2.1  3.2   0:01 postmaster\n13088 postgres  16   0 43324  42M 40812 D     2.1  2.0   0:00 postmaster\n13076 postgres  15   0 49016  47M 46628 S     1.9  2.3   0:00 postmaster\n26931 postgres  15   0 84880  82M 83888 S     1.7  4.1   3:52 postmaster\n13107 postgres  15   0 18400  17M 16488 S     1.5  0.8   0:00 postmaster\n13068 postgres  15   0 44632  43M 42324 D     1.3  2.1   0:00 postmaster\n13074 postgres  15   0 68852  67M 66508 D     1.3  3.3   0:00 postmaster\n13108 postgres  15   0 11692  11M 10496 S     1.3  0.5   0:00 postmaster\n13075 postgres  15   0 50860  49M 47680 S     1.1  2.4   0:04 postmaster\n13066 postgres  15   0 56112  54M 53724 S     0.9  2.7   0:01 postmaster\n13109 postgres  15   0 14528  14M 13272 S     0.9  0.7   0:00 postmaster\n24454 postgres  15   0  2532 2380  1372 S     0.7  0.1  11:58 postmaster\n   12 root      15   0     0    0     0 SW    0.5  0.0 816:30 bdflush\n24455 postgres  15   0  1600 1476  1380 S     0.5  0.0   9:11 postmaster\n12528 postgres  15   0 84676  82M 79920 S     0.3  4.0   0:02 postmaster\n12575 postgres  15   0 76660  74M 75796 D     0.3  3.7   0:09 postmaster\n13038 postgres  15   0 48952  47M 46436 D     0.3  2.3   0:00 postmaster\n13069 postgres  15   0 57464  56M 54852 S     0.3  2.7   0:00 postmaster\n13102 postgres  15   0 17864  17M 16504 D     0.3  0.8   0:00 postmaster\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRichard Huxton wrote:\n\nOn Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:\n \n\nBill Moran wrote:\n \n\nRajesh Kumar Mallah wrote:\n \n\nHi,\n\nThe problem was solved by reloading the Table.\nthe query now takes only 3 seconds. But that is\nnot a solution.\n \n\nIf dropping/recreating the table improves things, then we can reasonably\nassume that the table is pretty active with updates/inserts. Correct?\n \n\nYes the table results from an import process and under goes lots\nof inserts and updates , but thats before the vacuum full operation.\nthe table is not accessed during vacuum. What i want to know is\nis there any wat to automate the dumping and reload of a table\nindividually. will the below be safe and effective:\n \n\n\nShouldn't be necessary assuming you vacuum (not full) regularly. 
However, \nlooking back at your original posting, the vacuum output doesn't seem to show \nany rows that need removing.\n\n# VACUUM full verbose eyp_rfi;\nINFO: vacuuming \"public.eyp_rfi\"\nINFO: \"eyp_rfi\": found 0 removable, 505960 nonremovable row versions in \n71987 pages\nDETAIL: 0 dead row versions cannot be removed yet.\n\nSince your select count(*) showed 505960 rows, I can't see how \ndropping/replacing could make a difference on a sequential scan. Since we're \nnot using any indexes I don't see how it could be related to that.\n\n \n\nbegin work;\ncreate table new_tab AS select * from tab;\ntruncate table tab;\ninsert into tab select * from new_tab;\ndrop table new_tab;\ncommit;\nanalyze tab;\n\ni havenot tried it but plan to do so.\nbut i feel insert would take ages to update\nthe indexes if any.\n \n\n\nIt will have to update them, which will take time.\n\n \n\nBTW\n\nis there any way to disable checks and triggers on\na table temporarily while loading data (is updating\nreltriggers in pg_class safe?)\n \n\n\nYou can take a look at pg_restore and copy how it does it.", "msg_date": "Fri, 16 Apr 2004 13:53:50 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "\nOn Apr 16, 2004, at 4:23 AM, Rajesh Kumar Mallah wrote:\n\n>\n>\n> I am running an update on the same table\n>\n> update rfis set inquiry_status='APPROVED' where inquiry_status='a';\n>\n> Its running for past 20 mins. and top output is below.\n> The PID which is executing the query above is 6712. Can anyone\n> tell me why it is in an uninterruptable sleep and does it relate\n> to the apparent poor performance? Is it problem with the disk\n> hardware. I know at nite this query will run reasonably fast.\n>\n\nI've had this problem recently. The problem is simply that the disk \ncannot keep up. Most likely you don't see it at night because traffic \nis lower. There are only 2 solutions: 1. get more disks 2. write to \nthe db less\n\nThe machine I was running on had a single(!) disk. It was a quad xeon \nso there was plenty of cpu. I'd see 8-9 processes stuck in the \"D\" \nstate. Doing a simple ls -l somefile would take 10-15 seconds and of \ncourse, db performance was abysmal.\n\nI had a lowly P2 with a few disks in it that was able to run circles \naround it for the simple fact the machine was not waiting for disk. \nAgain, proof that disk is far more important than CPU in a db.\n\ngood luck.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Sat, 17 Apr 2004 10:29:06 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" } ]
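The earlier question about what PID 6712 was doing can also be answered from inside the database, provided stats_command_string = true is set in postgresql.conf; the PID shown by top maps directly onto a row in pg_stat_activity:

SELECT datname, procpid, usename, current_query
FROM pg_stat_activity
WHERE procpid = 6712;

If the command-string option is off, current_query only reports that it is not enabled, so it has to be switched on before the slow statement starts.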
[ { "msg_contents": "Hi,\n\nI am using pg from 3 y. and generaly I do not have big problems with it.\n\nI am searching for best pg distro to run pg (7.4.1).\nAt the moment I am using RedHat AS 3.0, but I think it have some\nperformance problems (I am not sure).\nMy configuration:\nP4 2.8 GHz\n1 GB RAM\n120 GB IDE 7200 disk.\nKernel version 2.4.21-4.EL (it is the installation vesrion for rh 3.0) .\n\nMy problems:\n\nIf I run some query with many reads, I see a massive disk transfer :\nprocs memory swap io\nsystem cpu\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 0 0 261724 3252 670748 0 0 0 4 105 19 0 0\n100 0\n 0 0 0 261724 3252 670748 0 0 0 0 101 11 0 0\n100 0\n 0 0 0 261724 3260 670748 0 0 0 4 104 19 0 0\n100 0\n 0 1 0 259684 3268 674112 0 0 964 7 131 57 0 0\n95 4\n 1 0 0 119408 3288 808540 0 0 27960 0 572 630 13 14\n24 49\n 1 1 0 15896 3292 914436 0 0 7984 44744 531 275 11 18\n24 47\n 0 2 0 16292 3296 924996 0 0 4145 6413 384 176 2\n5 0 92\n 0 1 0 19928 3316 928844 0 0 11805 13335 497 388 5\n9 5 81\n 0 3 0 19124 3296 924452 0 0 3153 19164 287 295 5 11\n16 68\n 0 1 0 15956 3304 932984 0 0 536 6812 366 123 4\n6 3 87\n 0 2 0 24956 3300 921416 0 0 1931 22936\n\nAnd if I run top, I see a big iowait % (some times 70-80) and very low\nuser % (10-15).\n\nI readet many docs about this problem, but do not find any solution.\n\nMy question:\n\nIf some one is using RH 3.0, pls post some result or suggestions for it\nperformance with pg .\n\nWhat is the best linux distro for pg?\n\nCan I get better performance by using 15K SCSI disk ?\nOr it will be better to have more RAM (2 or 3 GB) ?\n\nregards,\nivan.\n\n", "msg_date": "Thu, 15 Apr 2004 08:03:18 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "linux distro for better pg performance" }, { "msg_contents": "\n>I am searching for best pg distro to run pg (7.4.1).\n> \n>\nThis is generally based upon opinion. Honestly though, your kernel \nversion is more important for performance than the distro. Personally I \nuse gentoo, love gentoo, and would recommend very few other distros \n(Slackware) for servers. RedHat and others seem to include \nkitchensinkd, when it's not needed.\n\n>At the moment I am using RedHat AS 3.0, but I think it have some\n>performance problems (I am not sure).\n>My configuration:\n>P4 2.8 GHz\n>1 GB RAM\n>120 GB IDE 7200 disk.\n> \n>\nYour IDE drive is the biggest hardward bottleneck here. 
RPM's and bus \ntransfers are slower than SCSI or SATA.\n\n>Kernel version 2.4.21-4.EL (it is the installation vesrion for rh 3.0) .\n> \n>\nJump to 2.6, it's much better for performance related issues, in my \nexperience.\n\n>My problems:\n>\n>If I run some query with many reads, I see a massive disk transfer :\n>procs memory swap io\n>system cpu\n> r b swpd free buff cache si so bi bo in cs us sy\n>id wa\n> 0 0 0 261724 3252 670748 0 0 0 4 105 19 0 0\n>100 0\n> 0 0 0 261724 3252 670748 0 0 0 0 101 11 0 0\n>100 0\n> 0 0 0 261724 3260 670748 0 0 0 4 104 19 0 0\n>100 0\n> 0 1 0 259684 3268 674112 0 0 964 7 131 57 0 0\n>95 4\n> 1 0 0 119408 3288 808540 0 0 27960 0 572 630 13 14\n>24 49\n> 1 1 0 15896 3292 914436 0 0 7984 44744 531 275 11 18\n>24 47\n> 0 2 0 16292 3296 924996 0 0 4145 6413 384 176 2\n>5 0 92\n> 0 1 0 19928 3316 928844 0 0 11805 13335 497 388 5\n>9 5 81\n> 0 3 0 19124 3296 924452 0 0 3153 19164 287 295 5 11\n>16 68\n> 0 1 0 15956 3304 932984 0 0 536 6812 366 123 4\n>6 3 87\n> 0 2 0 24956 3300 921416 0 0 1931 22936\n>\n>And if I run top, I see a big iowait % (some times 70-80) and very low\n>user % (10-15).\n> \n>\nagain, this is your harddrive, and the kernel can play into that.\n\n>I readet many docs about this problem, but do not find any solution.\n>\n>My question:\n>\n>If some one is using RH 3.0, pls post some result or suggestions for it\n>performance with pg .\n>\n>What is the best linux distro for pg?\n> \n>\nThere's no best, just personal preference.\n\n>Can I get better performance by using 15K SCSI disk ?\n> \n>\nAbsolutely\n\n>Or it will be better to have more RAM (2 or 3 GB) ?\n> \n>\nBetter to have a fast drive, but more ram can be helpful.\n\n>regards,\n>ivan.\n>\n> \n>\nHTH,\n\nGavin\n", "msg_date": "Thu, 15 Apr 2004 06:39:43 -0700", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "On Thu, 2004-04-15 at 06:39, Gavin M. Roy wrote:\n> Your IDE drive is the biggest hardward bottleneck here. RPM's and bus \n> transfers are slower than SCSI or SATA.\n\n\nIndividual disk throughput generally has very little bearing on database\nperformance compared to other factors. In fact, IDE bandwidth\nperformance is perfectly adequate for databases, and for database\npurposes indistinguishable from SATA. I would say that average access\nand read/write completion times, especially under load, are by far the\nmost limiting factors, and disk RPM is only one component of this. In\nfact, disk RPM is a very expensive way to get marginally better\nthroughput in this regard, and I would suggest 10k rather than 15k\ndrives for the money.\n\nThere are really only two features that are worth buying in your disk\nsubsystem which many people ignore: TCQ and independently managed I/O\nwith a large battery-backed write-back cache. Currently, the only place\nto really get this is with SCSI RAID. You can get 10k SATA drives, so\nwhen you are buying SCSI you are really buying these features.\n\nDo these features make a difference? Far more than you would imagine. \nOn one postgres server I just upgraded, we went from a 3Ware 8x7200-RPM\nRAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M\ncache, and got a 3-5x performance improvement in the disk subsystem\nunder full database load. SCSI RAID can service a lot of I/O requests\nfar more efficiently than current IDE/SATA RAID controllers, and it\nshows in the stats. 
Under these types of loads, the actually bandwidth\nutilized by the disks doesn't come anywhere close to even their rated\nperformance, never mind the theoretical performance of the bus. Service\ntimes for IDE/SATA RAID increases dramatically under load, whereas SCSI\ntends not to under the same load.\n\nConsidering that very good SCSI RAID controllers (e.g. the LSI 320-2\nthat I mention above) are only marginally more expensive than nominally\nequivalent IDE/SATA controller solutions, using SCSI RAID with 10k\ndrives is pretty much the price-performance sweet spot if you use your\ndisk system hard (like we do). For databases with low disk I/O\nintensity, stay with IDE/SATA and save a little money. For databases\nthat have high disk I/O intensity, use SCSI. The price premium for SCSI\nis about 50%, but the performance difference is an integer factor under\nload.\n\n\nj. andrew rogers\n\n\n\n\n\n", "msg_date": "15 Apr 2004 11:28:26 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "J. Andrew Rogers wrote:\n\n> Do these features make a difference? Far more than you would imagine. \n> On one postgres server I just upgraded, we went from a 3Ware 8x7200-RPM\n> RAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M\n\nIs raid 5 much faster than raid 10? On a 4 disk array with 3 data disks \nand 1 parity disk, you have to write 4/3rds the original data, while on \nraid 10 you have to write 2 times the original data, so logically raid 5 \nshould be faster.\n", "msg_date": "Mon, 03 May 2004 22:30:48 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "Joseph Shraibman wrote:\n\n> J. Andrew Rogers wrote:\n>\n>> Do these features make a difference? Far more than you would \n>> imagine. On one postgres server I just upgraded, we went from a 3Ware \n>> 8x7200-RPM\n>> RAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M\n>\n> Is raid 5 much faster than raid 10? On a 4 disk array with 3 data \n> disks and 1 parity disk, you have to write 4/3rds the original data, \n> while on raid 10 you have to write 2 times the original data, so \n> logically raid 5 should be faster. \n\nI think this comparison is a bit simplistic. For example, most raid5 \nsetups have full stripes that are more than 8K (the typical IO size in \npostgresql), so one might have to read in portions of the stripe in \norder to compute the parity. The needed bits might be in some disk or \ncontroller cache; if it's not then you lose. If one is able to \nperform full stripe writes then the raid5 config should be faster for \nwrites.\n\nNote also that the mirror has 2 copies of the data, so that the read IOs \nwould be divided across 2 (or more) spindles using round robin or a more \nadvanced algorithm to reduce seek times. \n\nOf course, I might be completely wrong...\n\n-- Alan\n", "msg_date": "Mon, 03 May 2004 23:03:55 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "Joseph Shraibman wrote:\n\n> Is raid 5 much faster than raid 10? 
On a 4 disk array with 3 data disks \n> and 1 parity disk, you have to write 4/3rds the original data, while on \n> raid 10 you have to write 2 times the original data, so logically raid 5 \n> should be faster.\n\nRAID 5 will give you more capacity, but is usually not recommended for \nwrite intensive applications since RAID 5 writes require four I/O \noperations: parity and data disks must be read, new data is compared to \ndata already on the drive and changes are noted, new parity is \ncalculated, both the parity and data disks are written to. Furthermore, \nif a disk fails, performance is severely affected since all remaining \ndrives must be read for each I/O in order to recalculate the missing \ndisk drives data.\n\nRAID 0+1 has the same performance and capacity as RAID 1+0 (10), but \nless reliability since \"a single drive failure will cause the whole \narray to become, in essence, a RAID Level 0 array\" so I don't know why \nanyone would choose it over RAID 10 where multiple disks can fail.\n\nRAID 1 has the same capacity as RAID 10 (n/2), but RAID 10 has better \nperformance so if you're going to have more than one drive pair, why not \ngo for RAID 10 and get the extra performance from striping?\n\nI have been researching how to configure Postgres for a RAID 10 SAME \nconfiguration as described in the Oracle paper \"Optimal Storage \nConfiguration Made Easy\" \n(http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \nanyone delved into this before?\n\nThe filesystem choice is also a key element in database performance \ntuning. In another Oracle paper entitled Tuning an \"Oracle8i Database \nRunning Linux\" \n(http://otn.oracle.com/oramag/webcolumns/2002/techarticles/scalzo_linux02.html), \nDr. Bert Scalzo says, \"The trouble with these tests-for example, Bonnie, \nBonnie++, Dbench, Iobench, Iozone, Mongo, and Postmark-is that they are \nbasic file system throughput tests, so their results generally do not \npertain in any meaningful fashion to the way relational database systems \naccess data files.\" Instead he suggests users benchmarking filesystems \nfor database applications should use these two well-known and widely \naccepted database benchmarks:\n\nAS3AP (http://www.benchmarkresources.com/handbook/5.html): a scalable, \nportable ANSI SQL relational database benchmark that provides a \ncomprehensive set of tests of database-processing power; has built-in \nscalability and portability for testing a broad range of systems; \nminimizes human effort in implementing and running benchmark tests; and \nprovides a uniform, metric, straightforward interpretation of the results.\n\nTPC-C (http://www.tpc.org/): an online transaction processing (OLTP) \nbenchmark that involves a mix of five concurrent transactions of various \ntypes and either executes completely online or queries for deferred \nexecution. The database comprises nine types of tables, having a wide \nrange of record and population sizes. This benchmark measures the number \nof transactions per second.\n\nI encourage you to read the paper -- Dr. Scalzo's results will surprise \nyou; however, while he benchmarked ext2, ext3, ReiserFS, JFS, and RAW, \nhe did not include XFS.\n\nSGI and IBM did a more detailed study on Linux filesystem performance, \nwhich included XFS, ext2, ext3 (various modes), ReiserFS, and JRS, and \nthe results are presented in a paper entitled \"Filesystem Performance \nand Scalability in Linux 2.4.17\" \n(http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf). 
This \npaper goes over the details on how to properly conduct a filesystem \nbenchmark and addresses scaling and load more so than Dr. Scalzo's tests.\n\nFor further study, I have compiled a list of Linux filesystem resources \nat: http://jamesthornton.com/hotlist/linux-filesystems/.\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n", "msg_date": "Mon, 03 May 2004 22:41:44 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n\n> J. Andrew Rogers wrote:\n> \n> > Do these features make a difference? Far more than you would imagine. On one\n> > postgres server I just upgraded, we went from a 3Ware 8x7200-RPM\n> > RAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M\n> \n> Is raid 5 much faster than raid 10? On a 4 disk array with 3 data disks and 1\n> parity disk, you have to write 4/3rds the original data, while on raid 10 you\n> have to write 2 times the original data, so logically raid 5 should be faster.\n\nIn RAID5 every write needs to update the parity disk as well. In order to do\nthat for a small random access write you often need read in the rest of the\ndata block being modified to calculate the parity bits. This means writes\noften have higher latency on RAID5 because they first have to do an extra\nread. This is where RAID5 got its bad reputation.\n\nGood modern RAID5 controllers can minimize this problem but I think it's still\nan issue for a lot of lower end hardware. I wonder if postgres's architecture\nmight minimize it already just because of the pattern of writes it generates.\n\n-- \ngreg\n\n", "msg_date": "04 May 2004 02:03:16 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" }, { "msg_contents": "The comparison is actually dead on. If you have lots of write through / read\nbehind cache, RAID 5 can run very quickly, until the write rate overwhelms\nthe cache - at which point the 4 I/O per write / 2 per read stops it. This\nmeans that RAID 5 works, except when stressed, which is a bad paradigm.\n\nIf you do streaming sequential writes on RAID5 on a 4 drive RAID5, 4 writes\nbecome:\n\n- read drive 1 for data\n- read drive 3 for parity\n- write changes to drive 1\n- write changes to drive 3\n\n- read drive 2 for data\n- read drive 4 for parity\n- write changes to drive 2\n- write changes to drive 4\n\n- read drive 3 for data\n- read drive 1 for parity\n- write changes to drive 3\n- write changes to drive 1\n\n- read drive 4 for data\n- read drive 2 for parity\n- write changes to drive 4\n- write changes to drive 2\n\nor\n\ndrive 1: 2 reads, 2 writes\ndrive 2: 2 reads, 2 writes\ndrive 3: 2 reads, 2 writes\ndrive 4: 2 reads, 2 writes\n\nin other words, evenly distributed 16 I/Os. These have to be ordered to be\nrecoverable (otherwise the parity scheme is broken and you can't recover),\nand thus are quasi synchronous.\n\nThe same on RAID 10 is\n\n- write changes to drive 1\n- write copy of changes to drive 2\n- write changes to drive 1\n- write copy of changes to drive 2\n- write changes to drive 1\n- write copy of changes to drive 2\n- write changes to drive 1\n- write copy of changes to drive 2\n\nor\n\ndrive 1: 4 I/Os\ndrive 2: 4 I/Os\n\nin other words 4 I/Os in parallel. 
There is no wait on streaming I/O on RAID\n10, and this fact is the other main reason RAID 10 gives an order of\nmagnitude better performance.\n\nIf you are writing full blocks in a streaming mode, RAID 3 will be the\nfastest - it is RAID 0 with a parity drive. In every situation I've seen it,\nRAID 5 was either generally slow or got applications into trouble during\nstress: bulk loads, etc. Most DBAs end up on RAID 10 for it's predictability\nand performance.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Alan Stange\" <[email protected]>\nTo: \"Joseph Shraibman\" <[email protected]>\nCc: \"J. Andrew Rogers\" <[email protected]>;\n<[email protected]>\nSent: Monday, May 03, 2004 11:03 PM\nSubject: Re: [PERFORM] linux distro for better pg performance\n\n\n> Joseph Shraibman wrote:\n>\n> > J. Andrew Rogers wrote:\n> >\n> >> Do these features make a difference? Far more than you would\n> >> imagine. On one postgres server I just upgraded, we went from a 3Ware\n> >> 8x7200-RPM\n> >> RAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M\n> >\n> > Is raid 5 much faster than raid 10? On a 4 disk array with 3 data\n> > disks and 1 parity disk, you have to write 4/3rds the original data,\n> > while on raid 10 you have to write 2 times the original data, so\n> > logically raid 5 should be faster.\n>\n> I think this comparison is a bit simplistic. For example, most raid5\n> setups have full stripes that are more than 8K (the typical IO size in\n> postgresql), so one might have to read in portions of the stripe in\n> order to compute the parity. The needed bits might be in some disk or\n> controller cache; if it's not then you lose. If one is able to\n> perform full stripe writes then the raid5 config should be faster for\n> writes.\n>\n> Note also that the mirror has 2 copies of the data, so that the read IOs\n> would be divided across 2 (or more) spindles using round robin or a more\n> advanced algorithm to reduce seek times.\n>\n> Of course, I might be completely wrong...\n>\n> -- Alan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Tue, 4 May 2004 08:06:22 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux distro for better pg performance" } ]
[ { "msg_contents": "Hi,\n\nwe have a complex modperl database application using postgresql 7.4.1 on \na new Dual Xeon MP Machine with SLES8 which seems to generate too much \ncontext switches (way more than 100.000) on higher load (meaning system \nload > 2). System response times significantly slow down then. We have \ntuned parameters for weeks now but could not come up with better \nresults. It seems that we have had better performance on an older Dual \nXEON DP Machine running on RedHat 7.3.\n\nHere is the config:\n\ndatabase machine on SuSE SLES 8:\n\n F-S Primergy RX600\n 2x XEON MP 2.5GHz\n 8GB RAM\n Hardware Raid 1+0 140GB\n Kernel 2.4.21-169-smp\n\n Postgresql 7.4.1 (self compiled) with\n max_connections = 170\n shared_buffers = 40000\n effective_cache_size = 800000\n sort_mem = 30000\n vacuum_mem = 420000\n max_fsm_relations = 2000\n max_fsm_pages = 200000\n random_page_cost = 4\n checkpoint_segments = 24\n wal_buffers = 32\n\nmodperl application machine on RH 7.3:\n\n F-S Primergy RX200\n 2x XEON DP 2.4 GHz\n 4 GB RAM\n Kernel 2.4.18-10smp, RedHat 7.3\n Apache 1.3.27 setup:\n MinSpareServers 15\n MaxSpareServers 30\n StartServers 15\n MaxClients 80\n MaxRequestsPerChild 100\n\nvmstat 1 excerpt:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 1 0 4868 242372 179488 6942316 0 0 12 8 18 9 6 \n2 92 0\n 2 1 4868 242204 179488 6942500 0 0 64 500 701 117921 35 \n18 48 0\n 0 1 4868 242032 179392 6941560 0 0 16 316 412 132295 28 \n25 47 0\n 1 0 4872 242396 179164 6933776 0 0 128 276 474 69708 21 \n24 56 0\n 3 0 4872 242536 179164 6933808 0 0 0 240 412 113643 27 \n27 46 0\n 2 0 4872 242872 179092 6931708 0 0 48 1132 521 127916 24 \n24 53 0\n 0 0 4876 242876 179092 6927512 0 0 48 532 504 117868 32 \n21 47 0\n 0 0 4876 242504 179096 6927560 0 0 0 188 412 127147 34 \n20 47 0\n 1 0 4876 242152 179096 6927856 0 0 96 276 529 117684 28 \n23 49 0\n 2 0 4876 242864 179096 6928384 0 0 88 560 507 135717 38 \n19 43 0\n 1 0 4876 242848 179096 6928520 0 0 64 232 433 151380 32 \n20 48 0\n 4 0 4876 242832 179144 6928916 0 0 16 10380 2913 112583 28 \n20 52 0\n 4 0 4876 242720 179144 6929240 0 0 196 0 329 154821 32 \n18 50 0\n 3 2 4876 243576 179144 6929408 0 0 0 460 451 160287 29 \n18 52 0\n 3 0 4876 243292 179180 6929468 0 0 16 436 614 51894 15 \n5 80 0\n 0 0 4876 243884 179180 6929580 0 0 0 236 619 154168 29 \n21 49 0\n 2 1 4876 243864 179180 6929860 0 0 128 380 493 155903 31 \n19 50 0\n 2 0 4876 244720 179180 6930276 0 0 16 1208 561 129336 27 \n16 56 0\n 2 0 4876 247204 179180 6930300 0 0 0 0 361 146268 33 \n20 47 0\n 3 0 4876 248620 179180 6930372 0 0 0 168 346 155915 32 \n12 56 0\n 2 0 4876 250476 179180 6930436 0 0 0 184 328 163842 35 \n20 46 0\n 0 0 4876 250496 179180 6930652 0 0 48 260 450 144930 31 \n15 53 0\n 1 0 4876 252236 179180 6930732 0 0 16 244 577 167259 35 \n15 50 0\n 0 0 4876 252236 179180 6930780 0 0 0 464 622 165488 31 \n15 54 0\n 1 0 4876 252268 179180 6930812 0 0 0 132 460 153381 34 \n15 52 0\n 2 0 4876 252268 179180 6930964 0 0 0 216 312 141009 31 \n19 50 0\n 1 0 4876 252264 179180 6930980 0 0 0 56 275 153143 33 \n20 47 0\n 2 0 4876 252212 179180 6931212 0 0 96 296 400 133982 32 \n18 50 0\n 1 0 4876 252264 179180 6931332 0 0 0 300 416 136034 32 \n18 50 0\n 1 1 4876 252264 179180 6931332 0 0 0 236 377 143300 34 \n22 44 0\n 4 0 4876 254876 179180 6931372 0 0 0 124 446 118117 34 \n20 45 0\n 1 0 4876 254876 179180 6931492 0 0 16 144 462 140499 38 \n16 46 0\n 2 0 4876 255860 179180 6931572 
0 0 16 144 674 126250 33 \n20 47 0\n 1 0 4876 255860 179180 6931788 0 0 48 264 964 115679 36 \n13 51 0\n 3 0 4876 255864 179180 6931804 0 0 0 100 597 127619 36 \n19 46 0\n 5 1 4876 255864 179180 6931924 0 0 72 352 559 151620 34 \n18 48 0\n 2 0 4876 255860 179184 6932100 0 0 96 120 339 137821 34 \n20 47 0\n 0 0 4876 255860 179184 6932156 0 0 8 168 469 125281 36 \n21 43 0\n 2 0 4876 256092 179184 6932444 0 0 112 328 446 137939 34 \n19 48 0\n 2 0 4876 256092 179184 6932484 0 0 16 184 382 141800 35 \n16 49 0\n 3 0 4876 256464 179184 6932716 0 0 16 356 448 134238 30 \n18 51 0\n 5 0 4876 256464 179184 6932892 0 0 96 600 476 142838 34 \n20 46 0\n 1 0 4876 256464 179184 6933012 0 0 16 176 589 138546 35 \n22 43 0\n 2 0 4876 256436 179184 6933096 0 0 60 76 396 93110 42 \n17 41 0\n 1 0 4876 256464 179184 6933484 0 0 212 276 442 83060 45 \n11 44 0\n 5 0 4876 257612 179184 6933604 0 0 0 472 548 94158 39 \n17 45 0\n 0 0 4876 257560 179184 6933708 0 0 96 96 518 116764 38 \n19 43 0\n 1 0 4876 257612 179184 6933796 0 0 0 1768 729 139013 29 \n19 53 0\n 4 0 4876 257612 179184 6934188 0 0 296 108 332 134703 31 \n21 48 0\n 0 1 4876 258584 179184 6934380 0 0 0 492 405 141198 34 \n18 48 0\n 1 0 4876 258584 179184 6934492 0 0 0 176 575 134771 37 \n16 48 0\n 4 1 4876 257796 179184 6935724 0 0 1176 176 438 151240 33 \n20 48 0\n 1 0 4876 261448 179184 6935836 0 0 0 252 489 134348 29 \n19 51 0\n 2 0 4876 261448 179184 6935852 0 0 0 512 639 130875 34 \n16 49 0\n 2 1 4876 261724 179184 6935924 0 0 0 80 238 144970 33 \n20 47 0\n\n\n\n\n", "msg_date": "Thu, 15 Apr 2004 14:10:01 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Dirk Lutzeb�ck wrote:\n> postgresql 7.4.1\n\n> a new Dual Xeon MP\n\n> too much context switches (way more than 100.000) on higher load (meaning system \n> load > 2).\n\nI believe this was fixed in 7.4.2, although I can't seem to find it in \nthe release notes.\n\nJoe\n", "msg_date": "Thu, 15 Apr 2004 09:01:41 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Joe, do you know where I should look in the 7.4.2 code to find this out?\n\nDirk\n\n\nJoe Conway wrote:\n\n> Dirk Lutzeb�ck wrote:\n>\n>> postgresql 7.4.1\n>\n>> a new Dual Xeon MP\n>\n>> too much context switches (way more than 100.000) on higher load \n>> (meaning system load > 2).\n>\n>\n> I believe this was fixed in 7.4.2, although I can't seem to find it in \n> the release notes.\n>\n> Joe\n>\n>\n\n\n", "msg_date": "Thu, 15 Apr 2004 18:29:31 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Dirk Lutzeb�ck wrote:\n> Joe, do you know where I should look in the 7.4.2 code to find this out?\n\nI think I was wrong. 
I just looked in CVS and found the commit I was \nthinking about:\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/storage/lmgr/s_lock.c.diff?r1=1.22&r2=1.23\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/include/storage/s_lock.h.diff?r1=1.123&r2=1.124\n\n=========================\nRevision 1.23 / (download) - [select for diffs] , Sat Dec 27 20:58:58 \n2003 UTC (3 months, 2 weeks ago) by tgl\nChanges since 1.22: +5 -1 lines\nDiff to previous 1.22\n\nImprove spinlock code for recent x86 processors: insert a PAUSE\ninstruction in the s_lock() wait loop, and use test before test-and-set\nin TAS() macro to avoid unnecessary bus traffic. Patch from Manfred\nSpraul, reworked a bit by Tom.\n=========================\n\nI thought this had been committed to the 7.4 stable branch as well, but \nit appears not.\n\nJoe\n\n", "msg_date": "Thu, 15 Apr 2004 09:38:33 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Joe,\n\n> I believe this was fixed in 7.4.2, although I can't seem to find it in \n> the release notes.\n\nDepends on the cause of the issue. If it's the same issue that I'm currently \nstruggling with, it's not fixed.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 15 Apr 2004 10:40:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n>> Improve spinlock code for recent x86 processors: insert a PAUSE\n>> instruction in the s_lock() wait loop, and use test before test-and-set\n>> in TAS() macro to avoid unnecessary bus traffic. Patch from Manfred\n>> Spraul, reworked a bit by Tom.\n\n> I thought this had been committed to the 7.4 stable branch as well, but \n> it appears not.\n\nI am currently chasing what seems to be the same issue: massive context\nswapping on a dual Xeon system. I tried back-patching the above-mentioned\npatch ... it helps a little but by no means solves the problem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2004 15:37:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?) " }, { "msg_contents": "Could this be related to the O(1) scheduler backpatches from 2.6 to 2.4 \nkernel on newer 2.4er distros (RedHat, SuSE)?\n\n\nTom Lane wrote:\n\n>Joe Conway <[email protected]> writes:\n> \n>\n>>>Improve spinlock code for recent x86 processors: insert a PAUSE\n>>>instruction in the s_lock() wait loop, and use test before test-and-set\n>>>in TAS() macro to avoid unnecessary bus traffic. Patch from Manfred\n>>>Spraul, reworked a bit by Tom.\n>>> \n>>>\n>\n> \n>\n>>I thought this had been committed to the 7.4 stable branch as well, but \n>>it appears not.\n>> \n>>\n>\n>I am currently chasing what seems to be the same issue: massive context\n>swapping on a dual Xeon system. I tried back-patching the above-mentioned\n>patch ... it helps a little but by no means solves the problem ...\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Thu, 15 Apr 2004 22:03:39 +0200", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Folks,\n\n> I am currently chasing what seems to be the same issue: massive context\n> swapping on a dual Xeon system. 
I tried back-patching the above-mentioned\n> patch ... it helps a little but by no means solves the problem ...\n\nBTW, I'm currently pursuing the possibility that this has something to do with \nthe ServerWorks chipset on those motherboards. If anyone knows a high-end \nhardware+linux kernel geek I can corner, I'd appreciate it.\n\nMaybe I should contact OSDL ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 15 Apr 2004 13:37:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Isn't this a linux kernel issue ?\n\nMy understanding is that the scheduler doesn't know that 2 of the CPU's\nare actually the same underlying hardware and sometimes two contexts end\nup fighting for the same underlying chip?\n\n--dc--\n\nOn Thu, 2004-04-15 at 16:37, Josh Berkus wrote:\n> Folks,\n> \n> > I am currently chasing what seems to be the same issue: massive context\n> > swapping on a dual Xeon system. I tried back-patching the above-mentioned\n> > patch ... it helps a little but by no means solves the problem ...\n> \n> BTW, I'm currently pursuing the possibility that this has something to do with \n> the ServerWorks chipset on those motherboards. If anyone knows a high-end \n> hardware+linux kernel geek I can corner, I'd appreciate it.\n> \n> Maybe I should contact OSDL ...\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Sun, 18 Apr 2004 14:49:53 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" }, { "msg_contents": "Don't think so, mine is a vanilla kernel from kernel.org\n\nDave\nOn Thu, 2004-04-15 at 16:03, Dirk Lutzebaeck wrote:\n> Could this be related to the O(1) scheduler backpatches from 2.6 to 2.4 \n> kernel on newer 2.4er distros (RedHat, SuSE)?\n> \n> \n> Tom Lane wrote:\n> \n> >Joe Conway <[email protected]> writes:\n> > \n> >\n> >>>Improve spinlock code for recent x86 processors: insert a PAUSE\n> >>>instruction in the s_lock() wait loop, and use test before test-and-set\n> >>>in TAS() macro to avoid unnecessary bus traffic. Patch from Manfred\n> >>>Spraul, reworked a bit by Tom.\n> >>> \n> >>>\n> >\n> > \n> >\n> >>I thought this had been committed to the 7.4 stable branch as well, but \n> >>it appears not.\n> >> \n> >>\n> >\n> >I am currently chasing what seems to be the same issue: massive context\n> >swapping on a dual Xeon system. I tried back-patching the above-mentioned\n> >patch ... it helps a little but by no means solves the problem ...\n> >\n> >\t\t\tregards, tom lane\n> >\n> > \n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n> !DSPAM:408535ce93801252113544!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 20 Apr 2004 10:50:52 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toooo many context switches (maybe SLES8?)" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nIs there a way to analyze indexes to provide updated sizes? Is a vacuum the \nonly way to determine the size of an index? Analyze updates the stats so I \ncan see table space sizes but I cannot find an alternative to vacuum for \nindexes.\n\n- -- \n\n- --------------------------------------------------\nJeremy M. Guthrie [email protected]\nNetwork Engineer Phone: 608-298-1061\nBerbee Fax: 608-288-3007\n5520 Research Park Drive NOC: 608-298-1102\nMadison, WI 53711\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (GNU/Linux)\n\niD8DBQFAfsreqtjaBHGZBeURAm3+AJ9F34SESTf8i/oEuKvKfXoh+NcOxwCcDcM9\nHP5LHM3Qidb4wa2/rW5H0cI=\n=mJCz\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 15 Apr 2004 12:48:14 -0500", "msg_from": "\"Jeremy M. Guthrie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Any way to 'analyze' indexes to get updated sizes?" }, { "msg_contents": "I need some help. I have a query that refuses to use the provided index and \nis always sequentially scanning causing me large performance headaches. Here \nis the basic situation:\n\nTable A:\ninv_num int\ntype\t\tchar\n.\n.\n.\npkey (inv_num, type)\nindx(inv_num)\n\nTable B (has the same primary key)\n\nSelect *\nfrom table a\nwhere inv_num in (select inv_num from table b where ....)\n\nDoing this causes sequential scans of both tables. If I do a set \nenable_seqscan to false before the query, I get an index scan of table b but \nstill seq scan table a. \n\nIs there anyway to force table a to use this index (or another) and not \nsequentially scan the table?\n\nI'm running 7.3.4 on RedHat EL 2.1.\n\nThanks,\n\nChris\n\n", "msg_date": "Tue, 20 Apr 2004 10:20:05 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": false, "msg_subject": "Use of subquery causes seq scan???" }, { "msg_contents": "Please don't reply to messages to start new threads.\n\nOn Tue, Apr 20, 2004 at 10:20:05 -0400,\n Chris Hoover <[email protected]> wrote:\n> I need some help. I have a query that refuses to use the provided index and \n> is always sequentially scanning causing me large performance headaches. Here \n> is the basic situation:\n> \n> Table A:\n> inv_num int\n> type\t\tchar\n> .\n> .\n> .\n> pkey (inv_num, type)\n> indx(inv_num)\n> \n> Table B (has the same primary key)\n> \n> Select *\n> from table a\n> where inv_num in (select inv_num from table b where ....)\n> \n> Doing this causes sequential scans of both tables. If I do a set \n> enable_seqscan to false before the query, I get an index scan of table b but \n> still seq scan table a. \n> \n> Is there anyway to force table a to use this index (or another) and not \n> sequentially scan the table?\n> \n> I'm running 7.3.4 on RedHat EL 2.1.\n\nIN was slow in 7.3.x and before. The query will probably run much better\nas is in 7.4 and above. In 7.3 you want to rewrite it as a join or using\nEXISTS.\n", "msg_date": "Tue, 20 Apr 2004 12:56:32 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of subquery causes seq scan???" 
}, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> Select *\n> from table a\n> where inv_num in (select inv_num from table b where ....)\n\n> I'm running 7.3.4 on RedHat EL 2.1.\n\nIN (SELECT) constructs pretty well suck in PG releases before 7.4.\nUpdate, or consult the FAQ about rewriting into an EXISTS form.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Apr 2004 17:58:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of subquery causes seq scan??? " } ]
[ { "msg_contents": "Bill, if you had alot of updates and deletions and wanted to optimize your\ntable, can you just issue the cluster command.\nWill the cluster command rewrite the table without the obsolete data that a\nvacuum flags or do you need to issue a vacuum first?\nDan.\n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]]\nSent: Thursday, April 15, 2004 2:49 PM\nTo: Rajesh Kumar Mallah\nCc: Postgres Performance\nSubject: Re: [PERFORM] [ SOLVED ] select count(*) very slow on an\nalready\n\n\nRajesh Kumar Mallah wrote:\n> Bill Moran wrote:\n> \n>> Rajesh Kumar Mallah wrote:\n>>\n>>> Hi,\n>>>\n>>> The problem was solved by reloading the Table.\n>>> the query now takes only 3 seconds. But that is\n>>> not a solution.\n>>\n>> If dropping/recreating the table improves things, then we can reasonably\n>> assume that the table is pretty active with updates/inserts. Correct?\n> \n> Yes the table results from an import process and under goes lots\n> of inserts and updates , but thats before the vacuum full operation.\n> the table is not accessed during vacuum. What i want to know is\n> is there any wat to automate the dumping and reload of a table\n> individually. will the below be safe and effective:\n\nThe CLUSTER command I described is one way of doing this. It\nessentially automates the task of copying the table, dropping\nthe old one, and recreating it.\n\n>> If the data gets too fragmented, a vacuum may not be enough. Also, read\n>> up on the recommendations _against_ vacuum full (recommending only using\n>> vacuum on databases) With full, vacuum condenses the database, which may\n>> actually hurt performance. A regular vacuum just fixes things up, and\n>> may leave unused space lying around. However, this should apparently\n>> achieve a balance between usage and vacuum. See the docs, they are much\n>> better at describing this than I am.\n>>\n> i understand simultaneous vacuum and usage detoriates performance mostly.\n> but this case is different.\n\nJust want to make sure we're on the same page here. I'm not talking about\nvacuuming simultaneous with anything. I'm simply saying that \"vacuum full\"\nisn't always the best choice. You should probably only be doing \"vacuum\".\nThe reason and details for this are in the admin docs.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n", "msg_date": "Thu, 15 Apr 2004 15:24:32 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" }, { "msg_contents": "Shea,Dan [CIS] wrote:\n> Bill, if you had alot of updates and deletions and wanted to optimize your\n> table, can you just issue the cluster command.\n> Will the cluster command rewrite the table without the obsolete data that a\n> vacuum flags or do you need to issue a vacuum first?\n\n From the reference docs:\n\n\"During the cluster operation, a temporary copy of the table is created that\ncontains the table data in the index order. Temporary copies of each index\non the table are created as well. 
Therefore, you need free space on disk at\nleast equal to the sum of the table size and the index sizes.\n\n\"CLUSTER preserves GRANT, inheritance, index, foreign key, and other ancillary\ninformation about the table.\n\n\"Because the optimizer records statistics about the ordering of tables, it is\nadvisable to run ANALYZE on the newly clustered table. Otherwise, the optimizer\nmay make poor choices of query plans.\"\n\nThe primary reason CLUSTER exists is to allow you to physically reorder a table\nbased on a key. This should provide a performance improvement if data with\nthe same key is accessed all at once. (i.e. if you do \"SELECT * FROM table WHERE\nkey=5\" and it returns 100 rows, those 100 rows are guaranteed to be all on the\nsame part of the disk after CLUSTER, thus a performance improvement should result.)\n\nUpdates and inserts will add data in the next available space in a table with no\nregard for any keys, and _may_ require running all over the disk to retrieve\nthe data in the previous example query.\n\nI doubt if CLUSTER is an end-all optimization tool. The specific reason I\nsuggested it was because the original poster was asking for an easier way to\ndrop/recreate a table (as prior experimentation had shown this to improve\nperformance) I can't think of anything easier than \"CLUSTER <tablename> ON\n<keyname>\"\n\nSince CLUSTER recreates the table, it implicitly removes the dead tuples.\nHowever, it's going to be a LOT slower than vacuum, so if dead tuples are the\nmain problem, vacuum is still the way to go.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 15 Apr 2004 16:13:30 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" } ]
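As a concrete illustration of the sequence Bill describes, here is a minimal sketch with invented table and index names, using the CLUSTER spelling of the 7.3/7.4 releases under discussion (index name first; later releases write CLUSTER tablename USING indexname):

-- Assumes an index transactions_cust_idx already exists on the key the
-- queries fetch ranges of rows by.
CLUSTER transactions_cust_idx ON transactions;  -- rewrite the table in index order

-- Re-analyze afterwards, as the quoted documentation advises, so the
-- planner knows about the new physical ordering.
ANALYZE transactions;

-- For routine removal of dead tuples, a plain vacuum stays the cheaper option.
VACUUM ANALYZE transactions;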
[ { "msg_contents": "Just a note, I was trying the cluster command and was short on space. I\nfigured I had enough space for the new table and index. It failed on me\ntwice.\nThe reason is that I noticed for the command to complete, it needed the\nspace of the new table and 2x the space of the new index. \nIt looks like it creates the new table, then a new index. Afterwards it\nlooked like it creates another index in the DB pgsql_tmp. So for me this is\nan important consideration, since the new index size was about 7GB.\nI had not anticipated the second index size so that is why it failed. I\nended up creating a link of pgsql_tmp to another parttion to successfully\ncomplete.\n\nDan.\n\n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]]\nSent: Thursday, April 15, 2004 4:14 PM\nTo: Shea,Dan [CIS]\nCc: Postgres Performance\nSubject: Re: [PERFORM] [ SOLVED ] select count(*) very slow on an\nalready\n\n\nShea,Dan [CIS] wrote:\n> Bill, if you had alot of updates and deletions and wanted to optimize your\n> table, can you just issue the cluster command.\n> Will the cluster command rewrite the table without the obsolete data that\na\n> vacuum flags or do you need to issue a vacuum first?\n\n From the reference docs:\n\n\"During the cluster operation, a temporary copy of the table is created that\ncontains the table data in the index order. Temporary copies of each index\non the table are created as well. Therefore, you need free space on disk at\nleast equal to the sum of the table size and the index sizes.\n\n\"CLUSTER preserves GRANT, inheritance, index, foreign key, and other\nancillary\ninformation about the table.\n\n\"Because the optimizer records statistics about the ordering of tables, it\nis\nadvisable to run ANALYZE on the newly clustered table. Otherwise, the\noptimizer\nmay make poor choices of query plans.\"\n\nThe primary reason CLUSTER exists is to allow you to physically reorder a\ntable\nbased on a key. This should provide a performance improvement if data with\nthe same key is accessed all at once. (i.e. if you do \"SELECT * FROM table\nWHERE\nkey=5\" and it returns 100 rows, those 100 rows are guaranteed to be all on\nthe\nsame part of the disk after CLUSTER, thus a performance improvement should\nresult.)\n\nUpdates and inserts will add data in the next available space in a table\nwith no\nregard for any keys, and _may_ require running all over the disk to retrieve\nthe data in the previous example query.\n\nI doubt if CLUSTER is an end-all optimization tool. The specific reason I\nsuggested it was because the original poster was asking for an easier way to\ndrop/recreate a table (as prior experimentation had shown this to improve\nperformance) I can't think of anything easier than \"CLUSTER <tablename> ON\n<keyname>\"\n\nSince CLUSTER recreates the table, it implicitly removes the dead tuples.\nHowever, it's going to be a LOT slower than vacuum, so if dead tuples are\nthe\nmain problem, vacuum is still the way to go.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Fri, 16 Apr 2004 09:40:06 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ SOLVED ] select count(*) very slow on an already" } ]
[ { "msg_contents": "Anyone have any ideas why this query would be so slow?\n\nstats=# explain analyze SELECT work_units, min(raw_rank) AS rank FROM Trank_work_overall GROUP BY work_units;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1050.12..1085.98 rows=14347 width=16) (actual time=163149.981..163227.758 rows=17849 loops=1)\n -> Seq Scan on trank_work_overall (cost=0.00..804.41 rows=49141 width=16) (actual time=0.071..328.682 rows=49091 loops=1)\n Total runtime: 163296.212 ms\n\n(3 rows)\n\nstats=# \\d Trank_work_overall\nTable \"pg_temp_1.trank_work_overall\"\n Column | Type | Modifiers \n------------+--------+-----------\n raw_rank | bigint | \n work_units | bigint | \n\nstats=# \n\nFreeBSD fritz.distributed.net 5.2.1-RELEASE FreeBSD 5.2.1-RELEASE #1:\nWed Apr 7 18:42:52 CDT 2004\[email protected]:/usr/obj/usr/src/sys/FRITZ amd64\n\nThe machine is a dual opteron with 4G of memory. The query in question\nwas not hitting the disk at all. PostgreSQL 7.4.2 compiled with -O3.\n\nAlso, if I set enable_hashagg = false, it runs in less than a second.\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 16 Apr 2004 10:17:06 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance of group by query" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Anyone have any ideas why this query would be so slow?\n\nThat seems very bizarre. Would you be willing to send me a dump of the\ntable off-list?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 12:36:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of group by query " }, { "msg_contents": "\n\n> stats=# explain analyze SELECT work_units, min(raw_rank) AS rank FROM Trank_work_overall GROUP BY work_units;\n>\n> ...\n>\n> raw_rank | bigint | \n> work_units | bigint | \n\n\nIf you create a copy of the same table using regular integers does that run\nfast? And a copy of the table using bigints is still slow like the original?\n\nI know bigints are less efficient than integers because they're handled using\ndynamically allocated memory. This especially bites aggregate functions. But I\ndon't see why it would be any slower for a hash aggregate than a regular\naggregate. It's a pretty gross amount of time for 18k records.\n\nThere was a thought a while back about making 64-bit machines handle 64-bit\ndatatypes like bigints without pointers. That would help on your Opteron.\n\n\n-- \ngreg\n\n", "msg_date": "16 Apr 2004 18:57:51 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of group by query" }, { "msg_contents": "It might be worth trying out a build with -O2, just to rule out any -O3 \noddness.\n\nregards\n\nMark\n\nJim C. Nasby wrote:\n\n> PostgreSQL 7.4.2 compiled with -O3.\n>\n>\n> \n>\n", "msg_date": "Sun, 18 Apr 2004 15:35:58 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of group by query" } ]
[ { "msg_contents": "Hello all,\n\nMy apologies if this is not the right mailing list to ask this question, but \nwe are wondering about general performance tuning principles for our main db \nserver.\n\nWe have a web app with a postgres backend. Most queries have subsecond \nresponse times through the web even with high usage. Every once in awhile \nsomeone will run either an ad-hoc query or some other long running db \nprocess. For some reason, it seems that a small number 3-4 of these jobs \nrunning in parallel absolutely floors our server. In monitoring the jobs, \nlinux (Kernel 2.4) drops the long running jobs priority, but even so they \nseem to hog the system resources making subsequent requests for everyone else \nvery slow. Our database at this point is almost entirely processor and \nmemory bound because it isn't too large to fit most of the working data into \nmemory yet. There is generally little disk activity when this occurs. \n\nThese long running processes are almost always complex select statements, not \ngenerally inserts or updates. We continue to monitor and rework the \nbottlenecks, but what is a little scary to us is how easily the database \nbecomes almost completely unresponsive with several large jobs running, \nespecially since we have a large number of users. And it only takes one user \ntrying to view a page with one of these selects clicking multiple times \nbecause it doesn't come back quickly to bring our system to it's knees for \nhours.\n\nWe are looking to move to Kernel 2.6 and possibly a dedicated multiprocessor \nmachine for postgres towards the end of this year. But, I am wondering if \nthere is anything we can do now to increase the interactive performance while \nthere are long running selects running as well. Are there ways to adjust the \npriority of backend processes, or things to tweak to maximize interactive \nthroughput for the quick jobs while the long running ones run in the \nbackground? Or if worse comes to worse to actually kill long running \nprocesses without taking down the whole db as we have had to do on occasion.\n\nOur server is a modest 2.4Ghz P4 with mirrored UW SCSI drives and 1G of \nmemory. The db on disk is around 800M and this machine also hosts our web \napp, so there is some contention for the processor.\n\nDoes anyone have any suggestions or thoughts on things we could look at? Is a \nmultiprocessor box the only answer, or are there other things we should be \nlooking at hardware wise. Thank you for your time.\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n", "msg_date": "Fri, 16 Apr 2004 11:28:00 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Long running queries degrade performance" }, { "msg_contents": "> We have a web app with a postgres backend. Most queries have subsecond \n> response times through the web even with high usage. Every once in awhile \n> someone will run either an ad-hoc query or some other long running db \n> process. \n\nAre you sure it is postgres where the delay is occurring? I ask this\nbecause I also have a web-based front end to postgres, and while most of\nthe time the queries respond in about a second every now and then I see\none that takes much longer, sometimes 10-15 seconds.\n\nI've seen this behavior on both my development system and on the\nproduction server. 
\n\nThe same query a while later might respond quickly again.\n\nI'm not sure where to look for the delay, either, and it is intermittent\nenough that I'm not even sure what monitoring techniques to use.\n--\nMike Nolan\n", "msg_date": "Fri, 16 Apr 2004 10:46:02 -0500 (CDT)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running queries degrade performance" }, { "msg_contents": "Fairly sure, when it is happening, postgres usually is taking up the top slots \nfor cpu usage as reported by top. Perhaps there is a better way to monitor \nthis?\n\nThe other thing for us is that others talk about disks being the bottleneck \nwhereas for us it is almost always the processor. I expected the drives to \nkill us early on (we have two uw scsi mirrored drives) but there is very \nlittle disk activity. The disks rarely do much during load for us (at this \npoint). Most likely this is related more to data volume at this point.\n\nAs far as in your case, is there a lot of disk activity happening? More \nlikely you have a situation where something else is happening which blocks \nthe current thread. We ran into two situations recently which exhibited this \nbehavior. One was adding and dropping tables in a transaction which blocks \nany other transaction trying to do the same. And two threads inserting \nrecords with the same primary key value blocks the second till the first \nfinishes. Both of these were triggered by users double clicking links in our \nweb app and were fixed by a better implementation. Perhaps something like \nthat is causing what you are seeing.\n\n-Chris\n\nOn Friday 16 April 2004 11:46 am, Mike Nolan wrote:\n> > We have a web app with a postgres backend. Most queries have subsecond\n> > response times through the web even with high usage. Every once in\n> > awhile someone will run either an ad-hoc query or some other long running\n> > db process.\n>\n> Are you sure it is postgres where the delay is occurring? I ask this\n> because I also have a web-based front end to postgres, and while most of\n> the time the queries respond in about a second every now and then I see\n> one that takes much longer, sometimes 10-15 seconds.\n>\n> I've seen this behavior on both my development system and on the\n> production server.\n>\n> The same query a while later might respond quickly again.\n>\n> I'm not sure where to look for the delay, either, and it is intermittent\n> enough that I'm not even sure what monitoring techniques to use.\n> --\n> Mike Nolan\n\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n\n", "msg_date": "Fri, 16 Apr 2004 13:56:20 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long running queries degrade performance" }, { "msg_contents": "> Fairly sure, when it is happening, postgres usually is taking up the top slots \n> for cpu usage as reported by top. Perhaps there is a better way to monitor \n> this?\n\nGiven the intermittent nature of the problem and its relative brevity \n(5-10 seconds), I don't know whether top offers the granularity needed to\nlocate the bottleneck.\n\n> likely you have a situation where something else is happening which blocks \n> the current thread. \n\nIt happens on my development system, and I'm the only one on it. I know\nI've seen it on the production server, but I think it is a bit more\ncommon on the development server, though that may be a case of which system\nI spend the most time on. 
(Also, the production server is 1300 miles away\nwith a DSL connection, so I may just be seeing network delays some of\nthe time there.)\n\n> Both of these were triggered by users double clicking links in our \n> web app and were fixed by a better implementation. Perhaps something like \n> that is causing what you are seeing.\n\nMy web app traps double-clicks in javascript and ignores all but the first one.\nThat's because some of the users have mice that give double-clicks even\nwhen they only want one click.\n--\nMike Nolan\n", "msg_date": "Fri, 16 Apr 2004 15:25:12 -0500 (CDT)", "msg_from": "Mike Nolan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running queries degrade performance" }, { "msg_contents": "On Friday 16 April 2004 4:25 pm, Mike Nolan wrote:\n> Given the intermittent nature of the problem and its relative brevity\n> (5-10 seconds), I don't know whether top offers the granularity needed to\n> locate the bottleneck.\n\nOur long running processes run on the order of multiple minutes (sometimes for \nover an hour) and it's expected because the sql can be quite complex over \nsomewhat large datasets. But it's the bringing the server to it's knees, \nthat I'm trying to figure out how to address if we can. In other words, let \nthose long running processes run, but somehow still get decent performance \nfor \"quick\" requests.\n\nYours reminds me of what used to happen in our apps back when I worked in java \nand the garbage collector kicked in. Suddenly everything would stop for \n10-15s and then continue on. Sort of makes you think the app froze for some \nreason.\n\n> It happens on my development system, and I'm the only one on it. I know\n> I've seen it on the production server, but I think it is a bit more\n> common on the development server, though that may be a case of which system\n> I spend the most time on. (Also, the production server is 1300 miles away\n> with a DSL connection, so I may just be seeing network delays some of\n> the time there.)\n\nInteresting. Have you tried running a processor monitor and seeing if you are \ngetting a cpu or disk spike when you get the blips? Postgres has been pretty \nconstant for us in it's average runtime for any particular query. We do get \nsome fluctuation, but I've always attributed that to other things happening \nin the background. I sometimes run gkrellm off the server just to \"see\" \nwhat's happening on a macro scale. It's a great early indicator when we are \ngetting slammed one way or another (network, memory, processor, disk, etc). \nPlus it shows a couple of seconds of history so you can see blips pretty \neasily.\n\n> My web app traps double-clicks in javascript and ignores all but the first\n> one. That's because some of the users have mice that give double-clicks\n> even when they only want one click.\n\nHmmm, never thought of doing that. Might be interesting to do something like \nthat in a few key places where we have problems.\n\n> --\n> Mike Nolan\n\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n", "msg_date": "Fri, 16 Apr 2004 16:51:29 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long running queries degrade performance" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> ... 
Or if worse comes to worse to actually kill long running \n> processes without taking down the whole db as we have had to do on occasion.\n\nA quick \"kill -INT\" suffices to issue a query cancel, which I think is\nwhat you want here. You could also consider putting an upper limit on\nhow long things can run by means of statement_timeout.\n\nThose are just band-aids though. Not sure about the underlying problem.\nOrdinarily I'd guess that the big-hog queries are causing trouble by\nevicting everything the other queries need from cache. But since your\ndatabase fits in RAM, that doesn't seem to hold water.\n\nWhat PG version are you running?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 17:12:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running queries degrade performance " }, { "msg_contents": "On Friday 16 April 2004 5:12 pm, Tom Lane wrote:\n> Chris Kratz <[email protected]> writes:\n> > ... Or if worse comes to worse to actually kill long running\n> > processes without taking down the whole db as we have had to do on\n> > occasion.\n>\n> A quick \"kill -INT\" suffices to issue a query cancel, which I think is\n> what you want here. You could also consider putting an upper limit on\n> how long things can run by means of statement_timeout.\n\nWow, that's exactly what I've been looking for. I thought I had scoured the \nmanuals, but must have missed that one. I need to think about the \nstatement_timeout, the might be a good idea to use as well.\n\n> Those are just band-aids though. Not sure about the underlying problem.\n> Ordinarily I'd guess that the big-hog queries are causing trouble by\n> evicting everything the other queries need from cache. But since your\n> database fits in RAM, that doesn't seem to hold water.\n\nThat makes some sense, perhaps there is some other cache somewhere that is \ncausing the problems. I am doing some tuning and have set the following \nitems in our postgresql.conf:\n\nshared_buffers = 4096\nmax_fsm_relations = 1000\nmax_fsm_pages = 20000\nsort_mem = 2048\neffective_cache_size = 64000\n\nI believe these are the only performance related items we've modified. One \nthing I did today, since we seem to run about 600M of memory available for \nfile caches. The effective cache size used to be much lower, so perhaps that \nwas causing some of the problems.\n\n> What PG version are you running?\n\n7.3.4 with grand hopes to move to 7.4 this summer.\n\n> \t\t\tregards, tom lane\n\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n", "msg_date": "Fri, 16 Apr 2004 17:26:32 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long running queries degrade performance" }, { "msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (Mike Nolan) wrote:\n>> We have a web app with a postgres backend. Most queries have subsecond \n>> response times through the web even with high usage. Every once in awhile \n>> someone will run either an ad-hoc query or some other long running db \n>> process. \n>\n> Are you sure it is postgres where the delay is occurring? I ask this\n> because I also have a web-based front end to postgres, and while most of\n> the time the queries respond in about a second every now and then I see\n> one that takes much longer, sometimes 10-15 seconds.\n>\n> I've seen this behavior on both my development system and on the\n> production server. 
\n>\n> The same query a while later might respond quickly again.\n>\n> I'm not sure where to look for the delay, either, and it is\n> intermittent enough that I'm not even sure what monitoring\n> techniques to use.\n\nWell, a first thing to do is to see what query plans get set up for\nthe queries. If the plans are varying over time, that suggests\nsomething's up with ANALYZEs.\n\nIf the plans look a bit questionable, then you may be encountering the\nsituation where cache is helping you on the _second_ query but not the\nfirst. I did some tuning yesterday involving the same sort of\n\"symptoms,\" and that turned out to be what was happening.\n\nI'll describe (in vague detail ;-)) what I was seeing.\n\n- The table being queried was a \"transaction\" table, containing tens of\n thousands of records per day. \n\n- The query was pulling summary information about one or another\n customer's activity on that day.\n\n- The best index we had was on transaction date.\n\nThus, the query would walk through the \"txn date\" index, pulling\nrecords into memory, and filtering them against the other selection\ncriteria.\n\nThe table is big, so that data is pretty widely scattered across many\npages.\n\nThe _first_ time the query is run, the data is all out on disk, and\nthere are hundreds-to-thousands of page reads to collect it all. That\ntook 10-15 seconds.\n\nThe _second_ time it was run (as well as subsequent occasions), those\npages were all in cache, so the query runs in under a second.\n\nWhat I wound up doing was to add an index on transaction date and\ncustomer ID, so that a query that specifies both criteria will look\njust for the few hundred (at most) records relevant to a particular\ncustomer. That's fast even the first time around.\n\nWe had a really useful \"hook\" on this one because the developer\nnoticed that the first time he queried for a particular day, it was\nslow. We could \"repeat\" the test easily by just changing to a day\nthat we hadn't pulled into cache yet.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://cbbrowne.com/info/lisp.html\nReferring to undocumented private communications allows one to claim\nvirtually anything: \"we discussed this idea in our working group last\nyear, and concluded that it was totally brain-damaged\".\n-- from the Symbolics Guidelines for Sending Mail\n", "msg_date": "Sat, 17 Apr 2004 07:59:23 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running queries degrade performance" } ]
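The two concrete remedies that come out of this thread, a statement_timeout for runaway ad-hoc queries and a more selective composite index like the one Christopher added, look roughly as follows. The relation and column names, the five-minute limit, and the report_user role are invented for the sketch; Tom's kill -INT cancel stays at the operating-system level and is not repeated here.

-- Cap how long any one statement may run in this session (milliseconds);
-- setting it back to 0 removes the limit.
SET statement_timeout = 300000;          -- 5 minutes

-- Or make it the default for a dedicated reporting role, so interactive
-- sessions keep running without a limit (per-user defaults are available
-- at least as of the 7.4 releases discussed here):
ALTER USER report_user SET statement_timeout = 300000;

-- Composite index so a query constrained by both date and customer reads
-- only that customer's rows for the day rather than the whole day's data:
CREATE INDEX txn_date_customer_idx ON transactions (txn_date, customer_id);
ANALYZE transactions;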
[ { "msg_contents": "Note the time for the hash join step:\n\n \n------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=357.62..26677.99 rows=93668 width=62) (actual time=741.159..443381.011 rows=49091 loops=1)\n Hash Cond: (\"outer\".work_today = \"inner\".work_units)\n -> Hash Join (cost=337.11..24784.11 rows=93668 width=54) (actual time=731.374..417188.519 rows=49091 loops=1)\n Hash Cond: (\"outer\".work_total = \"inner\".work_units)\n -> Seq Scan on email_rank (cost=0.00..22240.04 rows=254056 width=46) (actual time=582.145..1627.759 rows=49091 loops=1)\n Filter: (project_id = 8)\n -> Hash (cost=292.49..292.49 rows=17849 width=16) (actual time=148.944..148.944 rows=0 loops=1)\n -> Seq Scan on rank_tie_overall o (cost=0.00..292.49 rows=17849 width=16) (actual time=0.059..75.984 rows=17849 loops=1)\n -> Hash (cost=17.81..17.81 rows=1081 width=16) (actual time=8.996..8.996 rows=0 loops=1)\n -> Seq Scan on rank_tie_today d (cost=0.00..17.81 rows=1081 width=16) (actual time=0.080..4.635 rows=1081 loops=1)\n Total runtime: 619047.032 ms\n\nBy comparison:\nstats=# set enable_hashjoin=false;\nSET\nstats=# explain analyze select * from email_rank, rank_tie_overall o, rank_tie_today d WHERE email_rank.work_today = d.work_units AND email_rank.work_total = o.work_units AND email_rank.project_id = :ProjectID;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=55391.69..56823.23 rows=93668 width=80) (actual time=2705.344..3349.318 rows=49091 loops=1)\n Merge Cond: (\"outer\".work_units = \"inner\".work_today)\n -> Index Scan using work_units_today on rank_tie_today d (cost=0.00..23.89 rows=1081 width=16) (actual time=0.150..4.874 rows=1081 loops=1)\n -> Sort (cost=55391.69..55625.86 rows=93668 width=64) (actual time=2705.153..2888.039 rows=49091 loops=1)\n Sort Key: email_rank.work_today\n -> Merge Join (cost=45047.64..47656.93 rows=93668 width=64) (actual time=1685.414..2494.342 rows=49091 loops=1)\n Merge Cond: (\"outer\".work_units = \"inner\".work_total)\n -> Index Scan using work_units_overall on rank_tie_overall o (cost=0.00..361.34 rows=17849 width=16) (actual time=0.122..79.383 rows=17849 loops=1)\n -> Sort (cost=45047.64..45682.78 rows=254056 width=48) (actual time=1685.228..1866.215 rows=49091 loops=1)\n Sort Key: email_rank.work_total\n -> Seq Scan on email_rank (cost=0.00..22240.04 rows=254056 width=48) (actual time=786.515..1289.101 rows=49091 loops=1)\n Filter: (project_id = 8)\n Total runtime: 3548.087 ms\n\nEven though the second case is only a select, it seems clear that\nsomething's wrong...\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 16 Apr 2004 10:45:02 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Horribly slow hash join" }, { "msg_contents": "\"Jim C. 
Nasby\" <[email protected]> writes:\n> Note the time for the hash join step:\n\nHave you ANALYZEd these tables lately?\n\nIt looks to me like it's hashing on some column that has only a small\nnumber of distinct values, so that the hash doesn't actually help to\navoid comparisons. The planner should know better than to choose such\na plan, but if it's working with obsolete stats ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 12:34:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "Yes, stats are up to date, and the values should be fairly unique.\n\nCombined with the hash aggregate problem I saw (see my other email to\nthe list), do you think there could be some issue with the performance\nof the hash function on FreeBSD 5.2 on AMD64?\n\nI'll post the table you requested someplace you can grab it.\n\nOn Fri, Apr 16, 2004 at 12:34:11PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Note the time for the hash join step:\n> \n> Have you ANALYZEd these tables lately?\n> \n> It looks to me like it's hashing on some column that has only a small\n> number of distinct values, so that the hash doesn't actually help to\n> avoid comparisons. The planner should know better than to choose such\n> a plan, but if it's working with obsolete stats ...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 16 Apr 2004 11:46:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Combined with the hash aggregate problem I saw (see my other email to\n> the list), do you think there could be some issue with the performance\n> of the hash function on FreeBSD 5.2 on AMD64?\n\nYeah, I was wondering about that too. Hard to imagine what though.\nThe hash function should be pretty platform-independent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 13:04:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "[ resending because I fat-fingered the cc: to the list ]\n\nI see the problem: all the entries in your work_units column have the\nlow 32 bits equal to zero.\n\nregression=# select distinct work_units % (2^32)::bigint from Trank_work_overall;\n ?column?\n----------\n 0\n(1 row)\n \nThe hash function for int8 only takes the low word into account, so all\nof the entries end up on the same hash chain, resulting in worst-case\nbehavior. This applies to both your hash join and hash aggregate cases.\n\nWe could change the hash function, perhaps, but then we'd just have\ndifferent cases where there's a problem ... hashing will always fail on\n*some* set of inputs. 
(Also, I have been harboring some notions of\nsupporting cross-type hash joins for integer types, which will not work\nunless small int8 values hash the same as int4 etc.)\n\nI guess the real issue is why are you encoding work_units like that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Apr 2004 12:08:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "I didn't follow the conversation from the begining, bu I imagine that you\ncould improve\nperformance using the value (work_units % (2^32) ) instead of work_units.\nYou could even make an index on this value. Like that, the HASH function\nwill work well. This is not a good solution, but ...\n\nFor example.\n\ncreate index ind1 on table1 ( work_units % (2^32) );\n\ncreate index ind1 on table2 ( work_units % (2^32) );\n\nSelect * from table1 join table2 on (table1.work_units % (2^32) ) =\n(table2.work_units % (2^32) )\n\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Jim C. Nasby\" <[email protected]>\nCc: <[email protected]>\nSent: Saturday, April 17, 2004 6:08 PM\nSubject: Re: [PERFORM] Horribly slow hash join\n\n\n> [ resending because I fat-fingered the cc: to the list ]\n>\n> I see the problem: all the entries in your work_units column have the\n> low 32 bits equal to zero.\n>\n> regression=# select distinct work_units % (2^32)::bigint from\nTrank_work_overall;\n> ?column?\n> ----------\n> 0\n> (1 row)\n>\n> The hash function for int8 only takes the low word into account, so all\n> of the entries end up on the same hash chain, resulting in worst-case\n> behavior. This applies to both your hash join and hash aggregate cases.\n>\n> We could change the hash function, perhaps, but then we'd just have\n> different cases where there's a problem ... hashing will always fail on\n> *some* set of inputs. (Also, I have been harboring some notions of\n> supporting cross-type hash joins for integer types, which will not work\n> unless small int8 values hash the same as int4 etc.)\n>\n> I guess the real issue is why are you encoding work_units like that?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n", "msg_date": "Sat, 17 Apr 2004 22:35:09 +0200", "msg_from": "=?iso-8859-1?Q?Marcos_Mart=EDnez=28R=29?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> We could change the hash function, perhaps, but then we'd just have\n> different cases where there's a problem ... hashing will always fail on\n> *some* set of inputs.\n\nSure, but completely ignoring part of the input seems like an unfortunate\nchoice of hash function.\n\n> (Also, I have been harboring some notions of supporting cross-type hash\n> joins for integer types, which will not work unless small int8 values hash\n> the same as int4 etc.)\n\nThe obvious way to modify the hash function is to xor the high 32 bits with\nthe low 32 bits. 
That maintains the property you need and at least ensures\nthat all the bits are taken into account.\n\n-- \ngreg\n\n", "msg_date": "17 Apr 2004 19:04:39 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> (Also, I have been harboring some notions of supporting cross-type hash\n>> joins for integer types, which will not work unless small int8 values hash\n>> the same as int4 etc.)\n\n> The obvious way to modify the hash function is to xor the high 32 bits with\n> the low 32 bits. That maintains the property you need\n\nNo it doesn't ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Apr 2004 23:45:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "On Sat, 17 Apr 2004, Tom Lane wrote:\n\n> *some* set of inputs. (Also, I have been harboring some notions of\n> supporting cross-type hash joins for integer types, which will not work\n> unless small int8 values hash the same as int4 etc.)\n\nThe simple solution would be to always extend integers to 64 bits (or\nwhatever the biggest integer is) before calculating the hash. It makes the\nhash function a little slower for smaller types, but it's mostly an\noperation in the cpu and no memory involved, so it's probably not\nnoticable.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Sun, 18 Apr 2004 08:18:47 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > Tom Lane <[email protected]> writes:\n> >> (Also, I have been harboring some notions of supporting cross-type hash\n> >> joins for integer types, which will not work unless small int8 values hash\n> >> the same as int4 etc.)\n> \n> > The obvious way to modify the hash function is to xor the high 32 bits with\n> > the low 32 bits. That maintains the property you need\n> \n> No it doesn't ...\n\nEh? Oh, negative numbers? So low^high^sign.\n\n\nI wonder if it makes sense to have check the hash distribution after\ngenerating the table and if it's bad then throw it away and try again with a\ndifferent hash function. The \"different hash function\" would probably just be\na seed value changing. Probably way overkill though.\n\n-- \ngreg\n\n", "msg_date": "18 Apr 2004 02:43:09 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> On Sat, 17 Apr 2004, Tom Lane wrote:\n>> *some* set of inputs. (Also, I have been harboring some notions of\n>> supporting cross-type hash joins for integer types, which will not work\n>> unless small int8 values hash the same as int4 etc.)\n\n> The simple solution would be to always extend integers to 64 bits (or\n> whatever the biggest integer is) before calculating the hash.\n\nThat creates portability issues though. We do not depend on there being\na 64-bit-int type for anything except int8 itself, and I don't want to\nstart doing so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 11:39:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Eh? 
Oh, negative numbers? So low^high^sign.\n\n[ thinks about it... ] Yeah, that would work. We can't backpatch it\nwithout breaking existing hash indexes on int8, but it'd be reasonable\nto change for 7.5 (since at the rate things are going, we won't have\npg_upgrade for 7.5 anyway...)\n\n> I wonder if it makes sense to have check the hash distribution after\n> generating the table and if it's bad then throw it away and try again with a\n> different hash function. The \"different hash function\" would probably just be\n> a seed value changing. Probably way overkill though.\n\nYeah, it'd be a pain trying to get all the type-specific hash functions\ndoing that. I'm also unconvinced that a simple change of seed value\nwould necessarily make the distribution better. In the worst case, if\nthe real problem is that all the input values are identical, you can\nreseed all day long and it won't fix it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 11:46:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "On Sun, 18 Apr 2004, Tom Lane wrote:\n\n> That creates portability issues though. We do not depend on there being\n> a 64-bit-int type for anything except int8 itself, and I don't want to\n> start doing so.\n\nWhat do you mean? int8 is supported on all platformas and if the \nhasfunction would convert all numbers to int8 before making the hash it \nwould work.\n\nI don't see any portability problems.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Sun, 18 Apr 2004 17:58:38 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> What do you mean? int8 is supported on all platformas\n\nNo it isn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 12:23:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "On Sun, 18 Apr 2004, Tom Lane wrote:\n\n> > What do you mean? int8 is supported on all platformas\n> \n> No it isn't.\n\nSo on platforms where it isn't you would use int4 as the biggest int then. \nI don't really see that as a problem. As long as you calculate the hash on \nthe biggest int on that platform it should work.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Sun, 18 Apr 2004 18:27:09 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "On Sun, Apr 18, 2004 at 18:27:09 +0200,\n Dennis Bjorklund <[email protected]> wrote:\n> On Sun, 18 Apr 2004, Tom Lane wrote:\n> \n> > > What do you mean? int8 is supported on all platformas\n> > \n> > No it isn't.\n> \n> So on platforms where it isn't you would use int4 as the biggest int then. \n> I don't really see that as a problem. As long as you calculate the hash on \n> the biggest int on that platform it should work.\n\nAnother option would be to put the numbers into two int4s. For int4 or\nsmaller types one of these would be zero. int8s would be split between\nthe two. 
The hash function would then be defined on the two int4s.\n", "msg_date": "Sun, 18 Apr 2004 22:15:54 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "On Sun, 18 Apr 2004, Bruno Wolff III wrote:\n\n> Another option would be to put the numbers into two int4s. For int4 or\n> smaller types one of these would be zero. int8s would be split between\n> the two. The hash function would then be defined on the two int4s.\n\nSure, this is an internal calculation in the hash function. The only \nimportant thing is that the number 7 (for example) gives the same hash \nvalue no matter if it is an int2 or an int8 and that the hash function \nworks well also for int8 numbers (which is does not today).\n\nAt least that was the properties I understood that we wanted.\n\nWe got side tracked into talking about what datatype exists in all \nplatforms, that's not an issue at all.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Mon, 19 Apr 2004 06:43:16 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "\nDennis Bjorklund <[email protected]> writes:\n\n> On Sun, 18 Apr 2004, Bruno Wolff III wrote:\n> \n> > Another option would be to put the numbers into two int4s. For int4 or\n> > smaller types one of these would be zero. int8s would be split between\n> > the two. The hash function would then be defined on the two int4s.\n> \n> Sure, this is an internal calculation in the hash function. The only \n> important thing is that the number 7 (for example) gives the same hash \n> value no matter if it is an int2 or an int8 and that the hash function \n> works well also for int8 numbers (which is does not today).\n\nWhat's missing here is that the actual API for hash functions is that the data\ntype provides a function that hashes to 32 bit integers. Then the hash code\nuses the 32 bit integer to crunch down to the actual number of buckets (using\nmod).\n\nThe choice of 32 bit integers is purely arbitrary. As long as it's larger than\nthan the number of buckets in any sane hash table it's fine. 32 bits is\nplenty.\n\nI question the use of mod to crunch the hash value down though. In the case of\nint4 the mapping to 32 bits is simply the identity. So the entire hash\nfunction ends up being simply \"input mod #buckets\". It seems way too easy to\nfind real world data sets where many numbers will all be multiples of some\nnumber. If that common divisor shares any factors with the number of buckets,\nthen the distribution will be very far from even with many empty buckets.\n\nIf the hash tables were made a power of two then it would be possible to mix\nthe bits of the 32 bit value and just mask off the unneeded bits. I've found\none page via google that mentions mixing bits in a hash function, but I would\nlook for a more serious treatment somewhere.\n\n http://burtleburtle.net/bob/hash/doobs.html\n\nIncidentally, this text claims mod is extremely slow compared to bit\nmanipulations. 
I don't know that that kind of cycle counting is really is a\nfactor for postgres though.\n\n\nAlso, incidentally, this text is interesting:\n\n http://www.isthe.com/chongo/tech/comp/fnv/\n\n\n\n-- \ngreg\n\n", "msg_date": "19 Apr 2004 01:29:19 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> If the hash tables were made a power of two then it would be possible to mix\n> the bits of the 32 bit value and just mask off the unneeded bits. I've found\n> one page via google that mentions mixing bits in a hash function, but I would\n> look for a more serious treatment somewhere.\n> http://burtleburtle.net/bob/hash/doobs.html\n> Incidentally, this text claims mod is extremely slow compared to bit\n> manipulations.\n\nModding by a *non* power of 2 (esp. a prime) mixes the bits quite well,\nand is likely faster than any multiple-instruction way to do the same.\n\nThe quoted article seems to be by someone who has spent a lot of time\ncounting assembly cycles and none at all reading the last thirty years\nworth of CS literature. Knuth's treatment of hashing has some actual\nmath to it... \n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Apr 2004 02:09:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join " }, { "msg_contents": "Here's an interesting link that suggests that hyperthreading would be\nmuch worse.\n\nhttp://groups.google.com/groups?q=hyperthreading+dual+xeon+idle&start=10&hl=en&lr=&ie=UTF-8&c2coff=1&selm=aukkonen-FE5275.21093624062003%40shawnews.gv.shawcable.net&rnum=16\n\nFWIW, I have anecdotal evidence that suggests that this is the case, on\nof my clients was seeing very large context switches with HTT turned on,\nand without it was much better.\n\nDave\nOn Mon, 2004-04-19 at 02:09, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > If the hash tables were made a power of two then it would be possible to mix\n> > the bits of the 32 bit value and just mask off the unneeded bits. I've found\n> > one page via google that mentions mixing bits in a hash function, but I would\n> > look for a more serious treatment somewhere.\n> > http://burtleburtle.net/bob/hash/doobs.html\n> > Incidentally, this text claims mod is extremely slow compared to bit\n> > manipulations.\n> \n> Modding by a *non* power of 2 (esp. a prime) mixes the bits quite well,\n> and is likely faster than any multiple-instruction way to do the same.\n> \n> The quoted article seems to be by someone who has spent a lot of time\n> counting assembly cycles and none at all reading the last thirty years\n> worth of CS literature. Knuth's treatment of hashing has some actual\n> math to it... 
\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n> !DSPAM:40837183123741526418863!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Mon, 19 Apr 2004 08:07:44 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > If the hash tables were made a power of two then it would be possible to mix\n> > the bits of the 32 bit value and just mask off the unneeded bits. I've found\n> > one page via google that mentions mixing bits in a hash function, but I would\n> > look for a more serious treatment somewhere.\n\n\n> Modding by a *non* power of 2 (esp. a prime) mixes the bits quite well,\n> and is likely faster than any multiple-instruction way to do the same.\n\nWell a) any number that has any factors of two fails to mix in some bits.\nThat's a lot more common than non powers of two. b) The postgres code makes no\nattempt to make the number of buckets a prime and c) Even if the number of\nbuckets were prime then it seems it would still be too easy to find real-world\ndata where all the data have that prime as a factor. As it is they only need\nto have common factors to lose.\n\n> The quoted article seems to be by someone who has spent a lot of time\n> counting assembly cycles and none at all reading the last thirty years\n> worth of CS literature. \n\nYes, well I did note that.\n\n-- \ngreg\n\n", "msg_date": "19 Apr 2004 08:12:54 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "\nDave Cramer <[email protected]> writes:\n\n> Here's an interesting link that suggests that hyperthreading would be\n> much worse.\n\nUh, this is the wrong thread.\n\n-- \ngreg\n\n", "msg_date": "19 Apr 2004 08:16:36 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "Dammit, I somehow deleted a bunch of replies to this.\n\nDid a TODO ever come out of this?\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 20 Apr 2004 13:46:10 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Horribly slow hash join" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Modding by a *non* power of 2 (esp. a prime) mixes the bits quite well,\n> and is likely faster than any multiple-instruction way to do the same.\n> \n> The quoted article seems to be by someone who has spent a lot of time\n> counting assembly cycles and none at all reading the last thirty years\n> worth of CS literature. Knuth's treatment of hashing has some actual\n> math to it... 
\n\n[incidentally, I just found that the quoted article was independently found by\nBruce Momjian who found it convincing enough to convert the hash_any table\nover to it two years ago]\n\nI just reviewed Knuth as well as C.L.R. and several papers from CACM and\nSIGMOD.\n\nIt seems we have three options:\n\nmod():\n\n Pro: empirical research shows it the best algorithm for avoiding collisions\n\n Con: requires the hash table be a prime size and far from a power of two.\n This is inconvenient to arrange for dynamic tables as used in postgres.\n\nmultiplication method (floor(tablesize * remainder(x * golden ratio)))\n\n Pro: works with any table size\n\n Con: I'm not clear if it can be done without floating point arithmetic. \n It seems like it ought to be possible though.\n\nUniversal hashing:\n\n Pro: won't create gotcha situations where the distribution of data suddenly\n and unexpectedly creates unexpected performance problems. \n\n Con: more complex. \n\nIt would be trivial to switch the implementation from mod() to the\nmultiplicative method which is more suited to postgres's needs. However\nuniversal hashing has some appeal. I hate the idea that a hash join could\nperform perfectly well one day and suddenly become pessimal when new data is\nloaded.\n\nIn a sense universal hashing is less predictable. For a DSS system that could\nbe bad. A query that runs fine every day might fail one day in an\nunpredictable way even though the data is unchanged.\n\nBut in another sense it would be more predictable in that if you run the query\na thousand times the average performance would be very close to the expected\nregardless of what the data is. Whereas more traditional algorithms have some\npatterns of data that will consistently perform badly.\n\nIt's probably not worth it but postgres could maybe even be tricky and pretest\nthe parameters against the common values in the statistics table, generating\nnew ones if they fail to generate a nice distribution. That doesn't really\nguarantee anything though, except that those common values would at least be\nwell distributed to start with.\n\n-- \ngreg\n\n", "msg_date": "04 May 2004 18:15:25 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horribly slow hash join" } ]
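To make the folding idea above concrete: the real change belongs in the C-level int8 hash function, but PostgreSQL's bit operators (# is bitwise XOR) can illustrate the mapping. The query below is only a sketch of the concept, not the actual hashint8() code: a small int8 folds to the same 32-bit value as the corresponding int4 input, while a value with high-order bits still has them mixed in. The sign handling for negative values, per the low^high^sign suggestion, is left out for brevity.

    -- Illustration only (not the real implementation); 4294967295 is the low 32-bit mask
    SELECT (7::int8 >> 32) # (7::int8 & 4294967295)            AS small_value,  -- 7, same as the int4 input
           (4294967303::int8 >> 32)
             # (4294967303::int8 & 4294967295)                 AS large_value;  -- 2^32 + 7 folds to 6
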
[ { "msg_contents": "I am using postgres 7.4.1 and have a problem with a plpgsql function. \nWhen I run the function on the production server it takes approx 33 \nminutes to run. I dumped the DB and copied it to a similarly configured \nbox and ran the function and it ran in about 10 minutes. Can anyone \noffer advice on tuning the function or my database? Here are the \nlengthy, gory details.\n\nF u n c t i o n\nIt updates seven columns of a table 1 to 4 times daily. Current data = \n42,000 rows, new data = 30,000 rows.\n\n CREATE TYPE employeeType AS (empID INTEGER, updateDate DATE, bDate \nINTEGER, val1 NUMERIC, val2 NUMERIC, val3 NUMERIC, val4 NUMERIC, favNum \nNUMERIC);\n\n CREATE OR REPLACE FUNCTION updateEmployeeData() RETURNS SETOF \nemployeeType AS '\n DECLARE\n rec RECORD;\n BEGIN\n FOR rec IN SELECT empID, updateDate, bDate, val1, val2 , \nval3, val4, favNum FROM newData LOOP\n RETURN NEXT rec;\n UPDATE currentData SET val1=rec.val1, val2=rec.val2, \nval3=rec.val2, val4=rec.val4, favNum=rec.favNum, updateDate=rec.updateDate\n WHERE empID=rec.empID;\n END LOOP;\n RETURN;\n END;\n ' LANGUAGE 'plpgsql';\n\nThe emp table has 60 columns, all indexed, about two-thirds are numeric, \nbut they are not affected by this update. The other 50+ columns are \nupdated in the middle of the night and the amount of time that update \ntakes isn't a concern.\n\nLate last night I dumped the table, dropped it and re-created it from \nthe dump (on the production server - when no one was looking). When I \nre-ran the function it took almost 11 minutes, which was pretty much in \nline with my results from the dev server.\n\nD e t a i l s\nv 7.4.1\nDebian stable\n1 GB ram\nshared_buffers = 2048\nsort_mem = 1024\nSHMMAX 360000000 (360,000,000)\nVACUUM FULL ANALYZE is run every night, and I ran it yesterday between \nrunning the function and it made no difference in running time.\ntop shows the postmaster using minimal cpu (0-40%) and miniscule memory. \nvmstat shows a fair amount of IO (bo=1000->4000).\n\nYesterday on the dev server we upgraded to the 2.6 kernel and \nunfortunately only noticed a small increase in update time (about one \nminute).\nSo does anyone have any suggestions for me on speeding this up? Is it \nthe index? The function is run daily during the mid afternoon to early \nevening and really drags the performance of the server down (it also \nhosts a web site).\n\nThanks\nRon\n\n\n", "msg_date": "Fri, 16 Apr 2004 09:41:38 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Index Problem?" }, { "msg_contents": "Ron,\n\n> The emp table has 60 columns, all indexed, about two-thirds are numeric, \n> but they are not affected by this update. The other 50+ columns are \n> updated in the middle of the night and the amount of time that update \n> takes isn't a concern.\n\nWell, I'd say that you have an application design problem, but that's not what \nyou asked for help with ;-)\n\n> Late last night I dumped the table, dropped it and re-created it from \n> the dump (on the production server - when no one was looking). 
When I \n> re-ran the function it took almost 11 minutes, which was pretty much in \n> line with my results from the dev server.\n\nSounds like you need to run a REINDEX on the table -- and after that, \ndramatically increase your max_fsm_pages, and run lazy VACUUM immediately \nafter the batch update to clean up.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 16 Apr 2004 10:01:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Problem?" }, { "msg_contents": "Josh Berkus wrote:\n\n>Ron,\n>\n> \n>\n>>The emp table has 60 columns, all indexed, about two-thirds are numeric, \n>>but they are not affected by this update. The other 50+ columns are \n>>updated in the middle of the night and the amount of time that update \n>>takes isn't a concern.\n>> \n>>\n>\n>Well, I'd say that you have an application design problem, but that's not what \n>you asked for help with ;-)\n> \n>\nYeah I agree but I'm not allowed to remove those indexes.\n\n> \n>\n>>Late last night I dumped the table, dropped it and re-created it from \n>>the dump (on the production server - when no one was looking). When I \n>>re-ran the function it took almost 11 minutes, which was pretty much in \n>>line with my results from the dev server.\n>> \n>>\n>\n>Sounds like you need to run a REINDEX on the table -- and after that, \n>dramatically increase your max_fsm_pages, and run lazy VACUUM immediately \n>after the batch update to clean up.\n>\n> \n>\nOn my dev server I increased max_fsm_pages from the default of 20000 to \n40000, increased checkpoint_segments from 3 to 5, and the function ran \nin about 6-7 minutes which is a nice increase. According to the docs \n\"Annotated postgresql.conf and Global User Configuration (GUC) Guide\" on \nvarlena I'll have to re-start postgres for the changes to take effect \nthere (correct?). Also the docs on Varlena show the max_fsm_pages \ndefault to be 10,000 but my default was 20,000, looks like that needs \nupdating.\n\nThanks for your help Josh, I'll see after the weekend what the impact \nthe changes will have on the production server.\n\nRon\n\n", "msg_date": "Fri, 16 Apr 2004 10:55:39 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Problem?" }, { "msg_contents": "Ron,\n\n> Yeah I agree but I'm not allowed to remove those indexes.\n\nIt's not the indexes I'm talking about, it's the table.\n\n> On my dev server I increased max_fsm_pages from the default of 20000 to \n> 40000, \n\nA better way to set this would be to run VACUUM VERBOSE ANALYZE right after \ndoing one of your update batches, and see how many dead pages are being \nreclaimed, and then set max_fsm_pages to that # + 50% (or more).\n\nincreased checkpoint_segments from 3 to 5, and the function ran \n> in about 6-7 minutes which is a nice increase. According to the docs \n> \"Annotated postgresql.conf and Global User Configuration (GUC) Guide\" on \n> varlena I'll have to re-start postgres for the changes to take effect \n> there (correct?).\n\nCorrect.\n\n> Also the docs on Varlena show the max_fsm_pages \n> default to be 10,000 but my default was 20,000, looks like that needs \n> updating.\n\nI don't think the default has been changed. Anyone?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 16 Apr 2004 12:10:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Problem?" 
}, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> A better way to set this would be to run VACUUM VERBOSE ANALYZE right after \n> doing one of your update batches, and see how many dead pages are being \n> reclaimed, and then set max_fsm_pages to that # + 50% (or more).\n\nActually, since he's running 7.4, there's an even better way. Do a\n\"VACUUM VERBOSE\" (full-database vacuum --- doesn't matter whether you\nANALYZE or not). At the end of the very voluminous output, you'll see\nsomething like\n\nINFO: free space map: 240 relations, 490 pages stored; 4080 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.\n\nHere, I would need max_fsm_relations = 240 and max_fsm_pages = 4080 to\nexactly cover the present freespace needs of my system. I concur with\nthe suggestion to bump that up a good deal, of course, but that gives\nyou a real number to start from.\n\nThe DETAIL part of the message shows my current settings (which are the\ndefaults) and what the FSM is costing me in shared memory space.\n\nIf you have multiple active databases, the best approach to getting\nthese numbers is to VACUUM in each one, adding VERBOSE when you do the\nlast one. The FSM report is cluster-wide but you want to be sure the\nunderlying info is up to date for all databases.\n\n>> Also the docs on Varlena show the max_fsm_pages \n>> default to be 10,000 but my default was 20,000, looks like that needs \n>> updating.\n\n> I don't think the default has been changed. Anyone?\n\nYes, I kicked it up for 7.4 because FSM covers indexes too now.\nBoth the 7.3 and 7.4 defaults are pretty arbitrary of course...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 16:05:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Problem? " }, { "msg_contents": "Tom Lane wrote:\n\n>Josh Berkus <[email protected]> writes:\n> \n>\n>>A better way to set this would be to run VACUUM VERBOSE ANALYZE right after \n>>doing one of your update batches, and see how many dead pages are being \n>>reclaimed, and then set max_fsm_pages to that # + 50% (or more).\n>> \n>>\n>\n>Actually, since he's running 7.4, there's an even better way. Do a\n>\"VACUUM VERBOSE\" (full-database vacuum --- doesn't matter whether you\n>ANALYZE or not). At the end of the very voluminous output, you'll see\n>something like\n>\n>INFO: free space map: 240 relations, 490 pages stored; 4080 total pages needed\n>DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.\n>\n>Here, I would need max_fsm_relations = 240 and max_fsm_pages = 4080 to\n>exactly cover the present freespace needs of my system. I concur with\n>the suggestion to bump that up a good deal, of course, but that gives\n>you a real number to start from.\n>\n>The DETAIL part of the message shows my current settings (which are the\n>defaults) and what the FSM is costing me in shared memory space.\n>\n> \n>\nOkay, after running the function VACUUM VERBOSE is telling me:\n INFO: free space map: 136 relations, 25014 pages stored; 22608 total \npages needed\n DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB \nshared memory.\n\nMy max_fsm_pages was set to 20,000 and I reset it to 40,000 on the dev \nserver and the function ran about 20-30% faster, so I'll try the same on \nthe production server. 
Thanks for the analysis of the VACUUM info.\n\nRon\n\n", "msg_date": "Fri, 16 Apr 2004 14:54:55 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Problem?" }, { "msg_contents": "Ron St-Pierre wrote:\n> I am using postgres 7.4.1 and have a problem with a plpgsql function. \n> When I run the function on the production server it takes approx 33 \n> minutes to run. I dumped the DB and copied it to a similarly configured \n> box and ran the function and it ran in about 10 minutes. Can anyone \n> offer advice on tuning the function or my database? Here are the \n> lengthy, gory details.\n> \n> F u n c t i o n\n> It updates seven columns of a table 1 to 4 times daily. Current data = \n> 42,000 rows, new data = 30,000 rows.\n> \n> CREATE TYPE employeeType AS (empID INTEGER, updateDate DATE, bDate \n> INTEGER, val1 NUMERIC, val2 NUMERIC, val3 NUMERIC, val4 NUMERIC, favNum \n> NUMERIC);\n> \n> CREATE OR REPLACE FUNCTION updateEmployeeData() RETURNS SETOF \n> employeeType AS '\n> DECLARE\n> rec RECORD;\n> BEGIN\n> FOR rec IN SELECT empID, updateDate, bDate, val1, val2, val3, val4, favNum FROM newData LOOP\n> RETURN NEXT rec;\n> UPDATE currentData SET val1=rec.val1, val2=rec.val2, val3=rec.val2, val4=rec.val4, favNum=rec.favNum, updateDate=rec.updateDate\n> WHERE empID=rec.empID;\n> END LOOP;\n> RETURN;\n> END;\n> ' LANGUAGE 'plpgsql';\n\nCan't you handle this with a simple update query?\n\nUPDATE\n\tcurrentData\nSET\n\tval1 = newData.val1,\n\tval2 = newData.val2,\n\tval3 = newData.val3,\n\tval4 = newData.val4,\n\tfavNum = newData.favNum,\n\tupdateDate = newData.updateDate\nFROM\n\tnewData\nWHERE\n\tnewDate.empID = currentData.empID\n\nJochem\n\n-- \nI don't get it\nimmigrants don't work\nand steal our jobs\n - Loesje\n\n\n", "msg_date": "Sat, 17 Apr 2004 14:25:40 +0200", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Problem?" } ]
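Putting the advice from this thread together, the maintenance around the batch update could look roughly like the sketch below. The figures are illustrative, taken from the numbers quoted above plus headroom, not a recommendation for other systems.

    -- One-off cleanup, as suggested above:
    REINDEX TABLE currentData;
    -- After each batch update, a plain (lazy) VACUUM keeps the space reusable:
    VACUUM VERBOSE ANALYZE currentData;    -- note the "NNNN total pages needed" line
    -- And in postgresql.conf (a postmaster restart is needed on 7.3/7.4):
    --   max_fsm_relations = 1000
    --   max_fsm_pages     = 40000          -- "total pages needed" plus ~50% headroom
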
[ { "msg_contents": "Thanks Mallah,\nI will keep this example in case I need it again sometime in the future.\nUnfortunately, I do not have enough free space at the moment to create a\ntemp table. \n\nDan\n\n-----Original Message-----\nFrom: Rajesh Kumar Mallah [mailto:[email protected]]\nSent: Tuesday, April 13, 2004 10:27 AM\nTo: Shea,Dan [CIS]\nCc: Postgres Performance\nSubject: Re: [PERFORM] Deleting certain duplicates\n\n\nShea,Dan [CIS] wrote:\n\n>The index is\n>Indexes:\n> \"forecastelement_rwv_idx\" btree (region_id, wx_element, valid_time)\n>\n>-----Original Message-----\n>From: Shea,Dan [CIS] [mailto:[email protected]]\n>Sent: Monday, April 12, 2004 10:39 AM\n>To: Postgres Performance\n>Subject: [PERFORM] Deleting certain duplicates\n>\n>\n>We have a large database which recently increased dramatically due to a\n>change in our insert program allowing all entries.\n>PWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where\nrelname\n>= 'forecastelement';\n> relname | relfilenode | reltuples\n>-----------------+-------------+-------------\n> forecastelement | 361747866 | 4.70567e+08\n>\n> Column | Type | Modifiers\n>----------------+-----------------------------+-----------\n> version | character varying(99) |\n> origin | character varying(10) |\n> timezone | character varying(99) |\n> region_id | character varying(20) |\n> wx_element | character varying(99) |\n> value | character varying(99) |\n> flag | character(3) |\n> units | character varying(99) |\n> valid_time | timestamp without time zone |\n> issue_time | timestamp without time zone |\n> next_forecast | timestamp without time zone |\n> reception_time | timestamp without time zone |\n>\n>The program is supposed to check to ensure that all fields but the\n>reception_time are unique using a select statement, and if so, insert it.\n>Due an error in a change, reception time was included in the select to\ncheck\n>for duplicates. The reception_time is created by a program creating the\ndat\n>file to insert. \n>Essentially letting all duplicate files to be inserted.\n>\n>I tried the delete query below.\n>PWFPM_DEV=# delete from forecastelement where oid not in (select min(oid)\n>from forecastelement group by\n>version,origin,timezone,region_id,wx_element,value,flag,units,valid_time,is\ns\n>ue_time,next_forecast);\n>It ran for 3 days creating what I assume is an index in pgsql_tmp of the\n>group by statement. \n>The query ended up failing with \"dateERROR:write failed\".\n>Well the long weekend is over and we do not have the luxury of trying this\n>again. \n>So I was thinking maybe of doing the deletion in chunks, perhaps based on\n>reception time.\n> \n>\n\nits more of an sql question though.\n\nto deduplicate on basis of\n\nversion,origin,timezone,region_id,wx_element,value,flag,units,valid_time,\nissue_time,next_forecast\n\nYou could do this.\n\nbegin work;\ncreate temp_table as select distinct on \n(version,origin,timezone,region_id,wx_element,value,flag,units,valid_time,\nissue_time,next_forecast) * from forecastelement ;\ntruncate table forecastelement ;\ndrop index <index on forecastelement > ;\ninsert into forecastelement select * from temp_table ;\ncommit;\ncreate indexes\nAnalyze forecastelement ;\n\nnote that distinct on will keep only one row out of all rows having \ndistinct values\nof the specified columns. 
kindly go thru the distinct on manual before \ntrying\nthe queries.\n\nregds\nmallah.\n\n>Are there any suggestions for a better way to do this, or using multiple\n>queries to delete selectively a week at a time based on the reception_time.\n>I would say there are a lot of duplicate entries between mid march to the\n>first week of April.\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n", "msg_date": "Fri, 16 Apr 2004 13:52:15 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deleting certain duplicates" } ]
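One detail worth noting for the chunked-deletion idea: windowing on reception_time can leave duplicates behind, because the duplicates of a row differ precisely in reception_time and may fall into different windows. Windowing on a column that is part of the duplicate key avoids that, and region_id is also the leading column of the existing index. A rough sketch, where 'XYZ' is only a placeholder region value:

    DELETE FROM forecastelement
     WHERE region_id = 'XYZ'
       AND oid NOT IN (
             SELECT min(oid)
               FROM forecastelement
              WHERE region_id = 'XYZ'
              GROUP BY version, origin, timezone, region_id, wx_element,
                       value, flag, units, valid_time, issue_time, next_forecast);
    VACUUM forecastelement;   -- between passes, so the freed space can be reused
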
[ { "msg_contents": "Hi,\n\nWhen I included a subquery, the estimated rows (1240)\nis way too high as shown in the following example. \nCan someone explain why? Because of this behavior,\nsome of our queries use hash join instead of nested\nloop.\n\nThanks,\n\nselect version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by\nGCC 2.96\n(1 row)\n\\d test\n Table \"public.test\"\n Column | Type | Modifiers\n---------+--------------------------+-----------\n id | integer |\n name | character varying(255) |\n d_id | integer |\n c_id | integer |\n r_id | integer |\n u_id | integer |\n scope | integer |\n active | integer |\n created | timestamp with time zone |\n typ | integer |\nIndexes: test_scope_idx btree (scope)\n\nreindex table test;\nvacuum full analyze test;\n\nselect count(*) from test;\n count\n-------\n 4959\n(1 row)\nselect count(*) from test where scope=10;\n count\n-------\n 10\n(1 row)\n\nexplain analyze\nselect * from test\nwhere scope=10; -- so far so good, estimate 12 rows,\nactual 10 rows\n \nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Index Scan using test_scope_idx on test \n(cost=0.00..4.35 rows=12 width=59) (actual\ntime=0.04..0.11 rows=10 loops=1)\n Index Cond: (scope = 10)\n Total runtime: 0.23 msec\n(3 rows)\n\nexplain analyze\nselect * from test\nwhere scope=(select 10); -- estimate rows is way too\nhigh, do not why????\n \nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_scope_idx on test \n(cost=0.00..40.74 rows=1240 width=59) (actual\ntime=0.06..0.13 rows=10 loops=1)\n Index Cond: (scope = $0)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0)\n(actual time=0.01..0.01 rows=1 loops=1)\n Total runtime: 0.22 msec\n(5 rows)\n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Tax Center - File online by April 15th\nhttp://taxes.yahoo.com/filing.html\n", "msg_date": "Fri, 16 Apr 2004 14:45:29 -0700 (PDT)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "sunquery and estimated rows" }, { "msg_contents": "Litao Wu <[email protected]> writes:\n> When I included a subquery, the estimated rows (1240)\n> is way too high as shown in the following example. \n\n> select * from test\n> where scope=(select 10);\n\nThe planner sees that as \"where scope = <some complicated expression>\"\nand falls back to a default estimate. It won't simplify a sub-select\nto a constant. (Some people consider that a feature ;-).)\n\nThe estimate should still be derived from the statistics for the\nscope column, but it will just depend on the number of distinct\nvalues for the column and not on the specific comparison constant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2004 19:45:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sunquery and estimated rows " }, { "msg_contents": "В Сбт, 17.04.2004, в 01:45, Tom Lane пишет:\n\n> The planner sees that as \"where scope = <some complicated expression>\"\n> and falls back to a default estimate. It won't simplify a sub-select\n> to a constant. 
(Some people consider that a feature ;-).)\n\nWhy?\n\nThanks\n\n-- \nMarkus Bertheau <[email protected]>\n\n", "msg_date": "Sun, 18 Apr 2004 21:22:26 +0200", "msg_from": "Markus Bertheau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sunquery and estimated rows" }, { "msg_contents": "Markus Bertheau <[email protected]> writes:\n> В Сбт, 17.04.2004, в 01:45, Tom Lane пишет:\n>> The planner sees that as \"where scope = <some complicated expression>\"\n>> and falls back to a default estimate. It won't simplify a sub-select\n>> to a constant. (Some people consider that a feature ;-).)\n\n> Why?\n\nIt's the only way to prevent it from simplifying when you don't want it\nto.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 19:09:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sunquery and estimated rows " }, { "msg_contents": "On Sun, 2004-04-18 at 19:09, Tom Lane wrote:\n> Markus Bertheau <[email protected]> writes:\n> > , 17.04.2004, 01:45, Tom Lane :\n> >> The planner sees that as \"where scope = <some complicated expression>\"\n> >> and falls back to a default estimate. It won't simplify a sub-select\n> >> to a constant. (Some people consider that a feature ;-).)\n> \n> > Why?\n> \n> It's the only way to prevent it from simplifying when you don't want it\n> to.\n\nI'm having a difficult time coming up with a circumstance where that is\nbeneficial except when stats are out of whack.\n\nDoesn't a prepared statement also falls back to the default estimate for\nvariables.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc", "msg_date": "Sun, 18 Apr 2004 19:42:55 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sunquery and estimated rows" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n>> It's the only way to prevent it from simplifying when you don't want it\n>> to.\n\n> I'm having a difficult time coming up with a circumstance where that is\n> beneficial except when stats are out of whack.\n\nTry trawling the archives --- I recall several cases in which people\nwere using sub-selects for this purpose.\n\nIn any case, I don't see the value of having the planner check to see if\na sub-select is just a trivial arithmetic expression. The cases where\npeople write that and expect it to be simplified are so few and far\nbetween that I can't believe it'd be a good use of planner cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Apr 2004 22:16:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sunquery and estimated rows " }, { "msg_contents": "Well, the example shown is simplified version.\nNow, let's see a little 'real' example (still\nsimplified version):\n\nTable test is same as before:\n\\d test\n Table \"public.test\"\n Column | Type | Modifiers\n---------+--------------------------+-----------\n id | integer |\n ...\n scope | integer |\n ... 
\nIndexes: test_scope_idx btree (scope)\n\nselect count(*) from test;\n count\n-------\n 4959\n(1 row)\nselect count(*) from test where scope=10;\n count\n-------\n 10\n(1 row)\n\ncreate table scope_def (scope int primary key, name\nvarchar(30) unique);\ninsert into scope_def values (10, 'TEST_SCOPE');\n\n-- This is not a trivial arithmetic expression\nexplain analyze\nselect * from test\nwhere scope=(select scope from scope_def where name =\n'TEST_SCOPE');\n\n-- estimated row is 1653, returned rows is 10\n \n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_scope_idx on test \n(cost=0.00..49.91 rows=1653 width=59) (actual\ntime=0.08..0.15 rows=10 loops=1)\n Index Cond: (scope = $0)\n InitPlan\n -> Index Scan using scope_def_name_key on\nscope_def (cost=0.00..4.82 rows=1 width=4) (actual\ntime=0.04..0.04 rows=1 loops=1)\n Index Cond: (name = 'TEST_SCOPE'::character\nvarying)\n Total runtime: 0.22 msec\n(6 rows)\n\n\n-- trivial arithmetic expression\n-- estimated row is 1653, returned rows is 10\nexplain analyze\nselect * from test\nwhere scope=(select 10);\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_scope_idx on test \n(cost=0.00..49.91 rows=1653 width=59) (actual\ntime=0.06..0.14 rows=10 loops=1)\n Index Cond: (scope = $0)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0)\n(actual time=0.01..0.01 rows=1 loops=1)\n Total runtime: 0.20 msec\n(5 rows)\n\n-- This is the plan I expect to see: estimated rows is\n-- close the actual returned rows.\n-- Do I have to devide the sub-select into two \n-- queries? \n\nexplain analyze\nselect * from test\nwhere scope=10;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Index Scan using test_scope_idx on test \n(cost=0.00..3.77 rows=10 width=59) (actual\ntime=0.05..0.12 rows=10 loops=1)\n Index Cond: (scope = 10)\n Total runtime: 0.18 msec\n(3 rows)\n\n-- Rewritten query using join in this case\nexplain analyze\nselect test.* from test JOIN scope_def using (scope)\nwhere scope_def.name = 'TEST_SCOPE';\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..75.39 rows=5 width=63)\n(actual time=0.07..0.19 rows=10 loops=1)\n -> Index Scan using scope_def_name_key on\nscope_def (cost=0.00..4.82 rows=1 width=4) (actual\ntime=0.04..0.04 rows=1 loops=1)\n Index Cond: (name = 'TEST_SCOPE'::character\nvarying)\n -> Index Scan using test_scope_idx on test \n(cost=0.00..49.91 rows=1653 width=59) (actual\ntime=0.02..0.09 rows=10 loops=1)\n Index Cond: (test.scope = \"outer\".scope)\n Total runtime: 0.28 msec\n(6 rows)\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Tax Center - File online by April 15th\nhttp://taxes.yahoo.com/filing.html\n", "msg_date": "Mon, 19 Apr 2004 09:26:03 -0700 (PDT)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sunquery and estimated rows" } ]
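A further workaround, consistent with the explanation above, is to split the statement in two in the application: fetch the constant first, then run the main query with the literal value, so the planner can use the statistics for that specific value rather than the generic estimate.

    -- step 1: one fast indexed lookup
    SELECT scope FROM scope_def WHERE name = 'TEST_SCOPE';   -- returns 10
    -- step 2: the real query with the literal, which gets the accurate
    -- ~10-row estimate shown earlier instead of the generic 1653
    SELECT * FROM test WHERE scope = 10;
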
[ { "msg_contents": "Hi Everyone \n\nI am new to this group and relatively new to Postgres, having used MSSQL 7 \nup until now.\n\nOne table in my database in returning even the simplest of queries extremely \nslowly. The table is user table, and even the select userid from users takes \nover 20 seconds to run. There are about 2000 records in the table. \n\n\nThe EXPLAIN ANALYZE on this table produces this output:\nSeq Scan on users (cost=0.00..89482.63 rows=1463 width=4) (actual \ntime=68.836..40233.463 rows=1465 loops=1)\nTotal runtime: 40234.965 ms\n\n\nSELECT USERID FROM USERS produces this:\n1465 rows fetched (25.28 sec)\n\nThe userid field is the primary key and has an index on it with this ddl: \nALTER TABLE \"public\".\"users\" ADD CONSTRAINT \"users_pkey\" PRIMARY KEY \n(\"userid\");\n\nThere are other tables, such as the messages table, that have 10s of \nthousands of rows and they return records much more quickly.\n\n\nThere must be something seriously wrong for simple queries like this to take \nso long. \n\nI should say that we are using the OpenFTS text search on the users table.\n\nIn many cases to make the queries run at reasonable speeds I do an outer \njoin on another table, and surprisingly these results come back very quickly\n\nCan anybody help me in diagnosing this problem.\n\n\nGerard Isdell\n\n\n*************************************************************************\nThis e-mail and any attachments may contain confidential or privileged\ninformation. If you are not the intended recipient, please contact the\nsender immediately and do not use, store or disclose their contents.\nAny views expressed are those of the individual sender and not of Kinetic \nInformation System Services Limited unless otherwise stated.\n\n www.kinetic.co.uk\n\n", "msg_date": "Mon, 19 Apr 2004 11:59:48 +0100", "msg_from": "\"Gerard Isdell\" <[email protected]>", "msg_from_op": true, "msg_subject": "very slow simple query - outer join makes it quicker" }, { "msg_contents": "> There are other tables, such as the messages table, that have 10s of \n> thousands of rows and they return records much more quickly.\n\n> There must be something seriously wrong for simple queries like this to take \n> so long. \n\nHave you run VACUUM recently?\n\nIf not, run VACUUM FULL against the users table and see if that makes a\ndifference.\n\n", "msg_date": "Mon, 19 Apr 2004 08:01:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow simple query - outer join makes it quicker" }, { "msg_contents": "On Mon, 2004-04-19 at 08:26, Gerard Isdell wrote:\n> Thank, that has worked.\n> \n> I've been running VACUUM regularly and thought that would have done it. \n> \n> Obviously the FULL makes a big difference\n\nIt shouldn't. That FULL makes a significant difference says that you're\nnot running regular VACUUM frequently enough and/or your fsm_* settings\nare too low.\n\n> -----Original Message-----\n> From: Rod Taylor <[email protected]>\n> To: [email protected]\n> Cc: Postgresql Performance <[email protected]>\n> Date: Mon, 19 Apr 2004 08:01:15 -0400\n> Subject: Re: [PERFORM] very slow simple query - outer join makes it quicker\n> \n> > > There are other tables, such as the messages table, that have 10s of \n> > > thousands of rows and they return records much more quickly.\n> > \n> > > There must be something seriously wrong for simple queries like this\n> > to take \n> > > so long. 
\n> > \n> > Have you run VACUUM recently?\n> > \n> > If not, run VACUUM FULL against the users table and see if that makes a\n> > difference.\n> > \n> \n> \n> *************************************************************************\n> This e-mail and any attachments may contain confidential or privileged\n> information. If you are not the intended recipient, please contact the\n> sender immediately and do not use, store or disclose their contents.\n> Any views expressed are those of the individual sender and not of Kinetic \n> Information System Services Limited unless otherwise stated.\n> \n> www.kinetic.co.uk\n\n", "msg_date": "Mon, 19 Apr 2004 08:36:34 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow simple query - outer join makes it quicker" } ]
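A quick check along the lines Rod suggests: the original plan already prices a sequential scan of the users table at roughly 89,000 pages for about 1,500 rows, which is the signature of heavy dead-row bloat. Comparing the page count with the row count makes that visible directly:

    SELECT relname, relpages, reltuples
      FROM pg_class WHERE relname = 'users';   -- ~2000 rows should need a few dozen pages
    VACUUM VERBOSE users;                      -- shows how many dead row versions were removed
    -- If relpages stays huge after a plain VACUUM, a one-off VACUUM FULL (or
    -- CLUSTER) is needed to shrink the file; more frequent plain VACUUMs and
    -- large enough max_fsm_* settings keep it from growing back.
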
[ { "msg_contents": "\nI have a query which performs not so well:\n\nSELECT * FROM mm_mediasources ORDER BY number DESC LIMIT 20;\n\ncosts nearly a minute. The table contains over 300 000 records.\n\nThe table has two extensions, which are (a the moment) nearly empty, but\nhave something to do with this, because:\n\nSELECT * FROM only mm_mediasources ORDER BY number DESC LIMIT 20;\n\nperforms ok (8ms). The query plan is then as I would expect:\n\nmedia=# explain SELECT * FROM only mm_mediasources ORDER BY number DESC\nLIMIT 20;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..8.36 rows=20 width=105)\n -> Index Scan Backward using mediasource_object on mm_mediasources\n(cost=0.00..114641.05 rows=274318 width=105)\n\n\n\nThe query plan of the original query, without 'only' does table scans:\n\nmedia=# explain SELECT * FROM mm_mediasources ORDER BY number DESC LIMIT 20;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------\n Limit (cost=47248.70..47248.75 rows=20 width=105)\n -> Sort (cost=47248.70..47934.52 rows=274328 width=105)\n Sort Key: public.mm_mediasources.number\n -> Result (cost=0.00..8364.28 rows=274328 width=105)\n -> Append (cost=0.00..8364.28 rows=274328 width=105)\n -> Seq Scan on mm_mediasources (cost=0.00..8362.18 rows=274318 width=105)\n -> Seq Scan on mm_audiosources mm_mediasources (cost=0.00..1.01 rows=1 width=84)\n -> Seq Scan on mm_videosources mm_mediasources (cost=0.00..1.09 rows=9 width=89)\n\nand presumably because if that performs so lousy.\n\nSimply selecting on a number does work fast:\nmedia=# explain SELECT * FROM mm_mediasources where number = 606973 ;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------\n Result (cost=0.00..6.13 rows=4 width=105)\n -> Append (cost=0.00..6.13 rows=4 width=105)\n -> Index Scan using mediasource_object on mm_mediasources (cost=0.00..4.00 rows=2 width=105)\n Index Cond: (number = 606973)\n -> Seq Scan on mm_audiosources mm_mediasources (cost=0.00..1.01 rows=1 width=84)\n Filter: (number = 606973)\n -> Seq Scan on mm_videosources mm_mediasources (cost=0.00..1.11 rows=1 width=89)\n Filter: (number = 606973)\n\n(3ms)\n\nI suppose seq scans are used on the extensions because they contain so few\nrecords.\n\n\nAll tables have index on number. How do I force it to use them also when I\nuse order by?\n\nI use psql 7.3.2\n\nMichiel\n\n-- \nMichiel Meeuwissen |\nMediapark C101 Hilversum | \n+31 (0)35 6772979 | I hate computers\nnl_NL eo_XX en_US |\nmihxil' |\n [] () |\n", "msg_date": "Mon, 19 Apr 2004 13:30:13 +0200", "msg_from": "Michiel Meeuwissen <[email protected]>", "msg_from_op": true, "msg_subject": "order by index, and inheritance" }, { "msg_contents": "Rod Taylor <[email protected]> wrote:\n> The scan is picking the best method for grabbing everything within the\n> table, since it is not aware that we do not require everything.\n\nHmm. That is a bit silly. 
Why does it use the index if select only from\nmm_mediasources?\n\n> You can explicitly tell it what you want to do via:\n> \n> SELECT *\n> FROM (SELECT * FROM mm_mediasources ORDER BY number DESC LIMIT 20 \n> UNION SELECT * FROM <subtable> ORDER BY number DESC LIMIT 20) AS tab\n> ORDER BY number DESC LIMIT 20\n\nI think you meant 'only mm_mediasources', and btw order by and limit are not\naccepted before union, so the above query does not compile.\n\nI can't figure out any acceptable work-around. Even if something as the\nabove would work, it still would be hardly elegant, and you can as well have\nno support for inheritance (actually, you can _better_ have no inheritance,\nbecause at least it is clear what works then).\n\nMichiel\n\nbtw. Why are these messages not appearing on the list itself?\n\n-- \nMichiel Meeuwissen\nMediapark C101 Hilversum\n+31 (0)35 6772979\nnl_NL eo_XX en_US\nmihxil'\n [] ()\n", "msg_date": "Thu, 22 Apr 2004 13:02:14 +0200", "msg_from": "Michiel Meeuwissen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: order by index, and inheritance" }, { "msg_contents": "On Thu, 2004-04-22 at 07:02, Michiel Meeuwissen wrote:\n> Rod Taylor <[email protected]> wrote:\n> > The scan is picking the best method for grabbing everything within the\n> > table, since it is not aware that we do not require everything.\n> \n> Hmm. That is a bit silly. Why does it use the index if select only from\n> mm_mediasources?\n> \n> > You can explicitly tell it what you want to do via:\n> > \n> > SELECT *\n> > FROM (SELECT * FROM mm_mediasources ORDER BY number DESC LIMIT 20 \n> > UNION SELECT * FROM <subtable> ORDER BY number DESC LIMIT 20) AS tab\n> > ORDER BY number DESC LIMIT 20\n> \n> I think you meant 'only mm_mediasources', and btw order by and limit are not\n> accepted before union, so the above query does not compile.\n\nYes, I did mean only. Try putting another set of brackets around the\nselects to get ORDER BY, etc. accepted. You can add another layer of\nsubselects in the from if that doesn't work.\n\n", "msg_date": "Thu, 22 Apr 2004 09:46:48 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by index, and inheritance" }, { "msg_contents": "Rod Taylor <[email protected]> wrote:\n> On Thu, 2004-04-22 at 07:02, Michiel Meeuwissen wrote:\n> > Rod Taylor <[email protected]> wrote:\n> > > The scan is picking the best method for grabbing everything within the\n> > > table, since it is not aware that we do not require everything.\n> > \n> > Hmm. That is a bit silly. Why does it use the index if select only from\n> > mm_mediasources?\n> > \n> > > You can explicitly tell it what you want to do via:\n> > > \n> > > SELECT *\n> > > FROM (SELECT * FROM mm_mediasources ORDER BY number DESC LIMIT 20 \n> > > UNION SELECT * FROM <subtable> ORDER BY number DESC LIMIT 20) AS tab\n> > > ORDER BY number DESC LIMIT 20\n> > \n> > I think you meant 'only mm_mediasources', and btw order by and limit are not\n> > accepted before union, so the above query does not compile.\n> \n> Yes, I did mean only. Try putting another set of brackets around the\n> selects to get ORDER BY, etc. accepted. 
You can add another layer of\n> subselects in the from if that doesn't work.\n\n\nOk, I can get it working:\n\nselect number,url \n from ( select number,url from (select number,url from only mm_mediasources order by number desc limit 20) as A \n union select number,url from (select number,url from mm_audiosources order by number desc limit 20) as B\n union select number,url from (select number,url from mm_videosources order by number desc limit 20) as C\n ) as TAB order by number desc limit 20;\n\nThis indeeds performs good (about 10000 times faster then select number,url\nfrom mm_mediasources order by number desc limit 20) . But hardly beautiful,\nand quite useless too because of course I am now going to want to use an\noffset (limit 20 offset 20, you see..), which seems more or less impossible\nin this way, isn't it.\n\nselect number,url \n from ( select number,url from (select number,url from only mm_mediasources order by number desc limit 100020) as A \n union select number,url from (select number,url from mm_audiosources order by number desc limit 100020) as B\n union select number,url from (select number,url from mm_videosources order by number desc limit 100020) as C\n ) as TAB order by number desc limit 20 offset 100000;\n\nThis would be it, I think, but this performs, expectedly, quit bad again,\nthough still 5 times faster then select url,number from mm_mediasources order by number desc limit 20 offset 100000;\n\n\nI'm thinking of dropping inheritance all together and using foreign keys or\nso for the extra fields, to simulate inheritance. That might perhaps work a whole lot better?\n\nThanks anyway,\n\n-- \nMichiel Meeuwissen\nMediapark C101 Hilversum\n+31 (0)35 6772979\nnl_NL eo_XX en_US\nmihxil'\n [] ()\n", "msg_date": "Thu, 22 Apr 2004 16:40:23 +0200", "msg_from": "Michiel Meeuwissen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: order by index, and inheritance" }, { "msg_contents": "\n> This indeeds performs good (about 10000 times faster then select number,url\n> from mm_mediasources order by number desc limit 20) . But hardly beautiful,\n> and quite useless too because of course I am now going to want to use an\n> offset (limit 20 offset 20, you see..), which seems more or less impossible\n> in this way, isn't it.\n\nYes, and the offset is a good reason why PostgreSQL will not be able to\ndo it by itself either.\n\nIs \"number\" unique across the board?\n\nIf so, instead of the offset you could use WHERE number > $lastValue.\n\n", "msg_date": "Thu, 22 Apr 2004 10:46:40 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: order by index, and inheritance" } ]
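For paging without OFFSET, Rod's last suggestion can be combined with the same union-of-subtables trick: remember the last number shown and restrict each branch with it (the sort is descending, so the comparison flips to <). Below, 606973 merely stands in for the last number of the previous page, and UNION ALL replaces UNION on the assumption that a row lives in only one of the three tables:

    SELECT number, url
      FROM (
        SELECT number, url FROM (SELECT number, url FROM ONLY mm_mediasources
                                  WHERE number < 606973
                                  ORDER BY number DESC LIMIT 20) AS a
        UNION ALL
        SELECT number, url FROM (SELECT number, url FROM mm_audiosources
                                  WHERE number < 606973
                                  ORDER BY number DESC LIMIT 20) AS b
        UNION ALL
        SELECT number, url FROM (SELECT number, url FROM mm_videosources
                                  WHERE number < 606973
                                  ORDER BY number DESC LIMIT 20) AS c
      ) AS tab
     ORDER BY number DESC LIMIT 20;

Each inner query can walk its index on number directly, and no rows have to be skipped past, however deep the paging goes.
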
[ { "msg_contents": "What about quad-XEON setups? Could that be worse? (have dual, and quad setups both) Shall we re-consider XEON-MP CPU machines with high cache (4MB+)?\r\n \r\nVery generally, what number would be considered high, especially, if it coincides with expected heavy load?\r\n \r\nNot sure a specific chipset was mentioned...\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Greg Stark [mailto:[email protected]] \r\n\tSent: Sun 4/18/2004 8:40 PM \r\n\tTo: Tom Lane \r\n\tCc: [email protected]; Josh Berkus; [email protected]; Neil Conway \r\n\tSubject: Re: [PERFORM] Wierd context-switching issue on Xeon\r\n\t\r\n\t\r\n\r\n\r\n\tTom Lane <[email protected]> writes:\r\n\t\r\n\t> So in the short term I think we have to tell people that Xeon MP is not\r\n\t> the most desirable SMP platform to run Postgres on. (Josh thinks that\r\n\t> the specific motherboard chipset being used in these machines might\r\n\t> share some of the blame too. I don't have any evidence for or against\r\n\t> that idea, but it's certainly possible.)\r\n\t>\r\n\t> In the long run, however, CPUs continue to get faster than main memory\r\n\t> and the price of cache contention will continue to rise. So it seems\r\n\t> that we need to give up the assumption that SpinLockAcquire is a cheap\r\n\t> operation. In the presence of heavy contention it won't be.\r\n\t\r\n\tThere's nothing about the way Postgres spinlocks are coded that affects this?\r\n\t\r\n\tIs it something the kernel could help with? I've been wondering whether\r\n\tthere's any benefits postgres is missing out on by using its own hand-rolled\r\n\tlocking instead of using the pthreads infrastructure that the kernel is often\r\n\tinvolved in.\r\n\t\r\n\t--\r\n\tgreg\r\n\t\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 2: you can get off all lists at once with the unregister command\r\n\t (send \"unregister YourEmailAddressHere\" to [email protected])\r\n\t\r\n\r\n", "msg_date": "Mon, 19 Apr 2004 09:52:39 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "\nI decided to check the context-switching behavior here for baseline\nsince we have a rather diverse set of postgres server hardware, though\nnothing using Xeon MP that is also running a postgres instance, and\neverything looks normal under load. Some platforms are better than\nothers, but nothing is outside of what I would consider normal bounds.\n\nOur biggest database servers are Opteron SMP systems, and these servers\nare particularly well-behaved under load with Postgres 7.4.2. If there\nis a problem with the locking code and context-switching, it sure isn't\nmanifesting on our Opteron SMP systems. Under rare confluences of\nprocess interaction, we occasionally see short spikes in the 2-3,000\ncs/sec range. It typically peaks at a couple hundred cs/sec under load.\nObviously this is going to be a function of our load profile a certain\nextent.\n\nThe Opterons have proven to be very good database hardware in general\nfor us.\n\n\nj. andrew rogers\n\n\n\n\n\n\n\n", "msg_date": "19 Apr 2004 10:55:50 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" } ]
[ { "msg_contents": "Hello,\n\nI work for Summersault, LLC. We've been using Postgres since the days of\nPostgres 6.5. We're focused on building database-driven websites using Perl and\nPostgres. We are currently seeking help developing a search system that needs\nto perform complex queries with high performance. Although we have strong\nskills in Perl and Postgres, we are new to the arena of complex,\nhigh-performance search systems.\n\nWe are seeking to hire a consultant to help this as part of the re-vamp\nof the 1-800-Save-A-Pet.com website. \n\n1-800-Save-A-Pet.com is a not-for-profit organization whose website\nfinds homes for homeless pets, promoting pet adoption and saving\nthousands of animal lives. Summersault, LLC is a website development\nfirm focused on creating highly customized database driven websites.\n\nThe ideal consultant has expert experience with the PostgreSQL RDBMS and\nthe Perl programming language, and is intimately familiar with the\narchitecture and implementation of complex database queries for\nhigh-traffic web applications. The consultant should also have a strong\nbackground in creating solutions complementary to this work, e.g.\nassessing hardware requirements, designing a hosting and network\ninfrastructure, and optimizing the algorithm based on real-world\nfeedback. The consultant will work with Summersault developers as a\npart of a larger application development process.\n\nInterested persons or organizations should contact Chris Hardie of\nSummersault, LLC at [email protected] for more information.\n\nThanks!\n\n\tMark\n\n-- \n . . . . . . . . . . . . . . . . . . . . . . . . . . . \n Mark Stosberg Principal Developer \n [email protected] Summersault, LLC \n 765-939-9301 ext 202 database driven websites\n . . . . . http://www.summersault.com/ . . . . . . . .\n\n", "msg_date": "Mon, 19 Apr 2004 20:42:06 +0000 (UTC)", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "seeking consultant for high performance,\n\tcomplex searching with Postgres web app" }, { "msg_contents": "\nHave you checked Tsearch2\n\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/\n\nis the most feature rich Full text Search system available\nfor postgresql. We are also using the same system in\nthe revamped version of our website.\n\nRegds\nMallah.\n\n\nMark Stosberg wrote:\n\n>Hello,\n>\n>I work for Summersault, LLC. We've been using Postgres since the days of\n>Postgres 6.5. We're focused on building database-driven websites using Perl and\n>Postgres. We are currently seeking help developing a search system that needs\n>to perform complex queries with high performance. Although we have strong\n>skills in Perl and Postgres, we are new to the arena of complex,\n>high-performance search systems.\n>\n>We are seeking to hire a consultant to help this as part of the re-vamp\n>of the 1-800-Save-A-Pet.com website. \n>\n>1-800-Save-A-Pet.com is a not-for-profit organization whose website\n>finds homes for homeless pets, promoting pet adoption and saving\n>thousands of animal lives. Summersault, LLC is a website development\n>firm focused on creating highly customized database driven websites.\n>\n>The ideal consultant has expert experience with the PostgreSQL RDBMS and\n>the Perl programming language, and is intimately familiar with the\n>architecture and implementation of complex database queries for\n>high-traffic web applications. 
The consultant should also have a strong\n>background in creating solutions complementary to this work, e.g.\n>assessing hardware requirements, designing a hosting and network\n>infrastructure, and optimizing the algorithm based on real-world\n>feedback. The consultant will work with Summersault developers as a\n>part of a larger application development process.\n>\n>Interested persons or organizations should contact Chris Hardie of\n>Summersault, LLC at [email protected] for more information.\n>\n>Thanks!\n>\n>\tMark\n>\n> \n>\n\n", "msg_date": "Tue, 20 Apr 2004 21:48:00 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seeking consultant for high performance, complex searching" } ]
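As a rough illustration of the tsearch-style approach recommended above: the sketch below uses the full-text search syntax that later became part of core PostgreSQL (8.3+); the tsearch2 contrib module mentioned here exposes similar functionality under slightly different function names, and the table and column names are invented for the example.

    -- precompute a search vector and index it (hypothetical table and columns)
    ALTER TABLE pets ADD COLUMN search_tsv tsvector;
    UPDATE pets SET search_tsv = to_tsvector('english', coalesce(description, ''));
    CREATE INDEX pets_search_idx ON pets USING gin (search_tsv);

    -- query it
    SELECT id, name
      FROM pets
     WHERE search_tsv @@ to_tsquery('english', 'friendly & labrador');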
[ { "msg_contents": "This vacuum is running a marathon. Why will it not end and show me free\nspace map INFO? We have deleted a lot of data and I would like to be\nconfident that these deletions will be used as free space, rather than\ncreating more table files.\n\n\n\nPWFPM_DEV=# select now();vacuum verbose forecastelement;select now();\n now\n-------------------------------\n 2004-04-14 18:36:13.725285+00\n(1 row)\n\nINFO: vacuuming \"public.forecastelement\"\nINFO: index \"forecastelement_rwv_idx\" now contains 473380072 row versions\nin 4986653 pages\nDETAIL: 5592106 index row versions were removed.\n44688 index pages have been deleted, 1 are currently reusable.\nCPU 4942.30s/336.27u sec elapsed 74710.07 sec.\nINFO: \"forecastelement\": removed 5592106 row versions in 126370 pages\nDETAIL: CPU 58.43s/16.99u sec elapsed 366.24 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 472296119 row versions\nin 5027529 pages\nDETAIL: 5592097 index row versions were removed.\n89120 index pages have been deleted, 0 are currently reusable.\nCPU 4260.08s/327.29u sec elapsed 59592.38 sec.\nINFO: \"forecastelement\": removed 5592097 row versions in 124726 pages\nDETAIL: CPU 33.38s/14.21u sec elapsed 210.36 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 467784889 row versions\nin 5037914 pages\nDETAIL: 5592089 index row versions were removed.\n134286 index pages have been deleted, 0 are currently reusable.\nCPU 4185.86s/318.19u sec elapsed 57048.65 sec.\nINFO: \"forecastelement\": removed 5592089 row versions in 121657 pages\nDETAIL: CPU 51.19s/14.19u sec elapsed 238.31 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 462960132 row versions\nin 5039886 pages\nDETAIL: 5592067 index row versions were removed.\n179295 index pages have been deleted, 0 are currently reusable.\nCPU 4002.76s/313.63u sec elapsed 54806.09 sec.\nINFO: \"forecastelement\": removed 5592067 row versions in 122510 pages\nDETAIL: CPU 25.32s/14.47u sec elapsed 187.73 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 457555142 row versions\nin 5041631 pages\nDETAIL: 5592085 index row versions were removed.\n224480 index pages have been deleted, 0 are currently reusable.\nCPU 4149.42s/310.94u sec elapsed 55880.65 sec.\nINFO: \"forecastelement\": removed 5592085 row versions in 122500 pages\nDETAIL: CPU 16.70s/14.47u sec elapsed 180.27 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 452191660 row versions\nin 5044414 pages\nDETAIL: 5592089 index row versions were removed.\n269665 index pages have been deleted, 0 are currently reusable.\nCPU 4213.10s/304.61u sec elapsed 55159.36 sec.\nINFO: \"forecastelement\": removed 5592089 row versions in 122663 pages\nDETAIL: CPU 37.28s/14.63u sec elapsed 206.96 sec.\nINFO: index \"forecastelement_rwv_idx\" now contains 446807778 row versions\nin 5046541 pages\nDETAIL: 5592077 index row versions were removed.\n314747 index pages have been deleted, 0 are currently reusable.\nCPU 4039.49s/297.15u sec elapsed 55086.56 sec.\nINFO: \"forecastelement\": removed 5592077 row versions in 122558 pages\nDETAIL: CPU 20.21s/14.74u sec elapsed 227.53 sec.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2004 23:37:47 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why will vacuum not end?" }, { "msg_contents": "> This vacuum is running a marathon. Why will it not end and show me free\n> space map INFO? 
We have deleted a lot of data and I would like to be\n> confident that these deletions will be used as free space, rather than\n> creating more table files.\n\nDoes another postgres query running have a lock on that table?\n\nChris\n\n", "msg_date": "Tue, 20 Apr 2004 12:02:24 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
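Whether the space reclaimed by a VACUUM like the one above can actually be reused depends on the free space map being large enough to remember it. A sketch of the relevant 7.3/7.4 postgresql.conf settings follows; the values are illustrative only, and the totals printed at the end of a database-wide VACUUM VERBOSE indicate how large they really need to be.

    # postgresql.conf -- illustrative values, size them from VACUUM VERBOSE output
    max_fsm_relations = 1000        # number of tables/indexes with tracked free space
    max_fsm_pages = 1000000         # total pages with free space that can be remembered

These settings size shared memory, so changing them requires a postmaster restart.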
[ { "msg_contents": "No, but data is constantly being inserted by userid scores. It is postgres\nrunnimg the vacuum.\nDan.\n\n-----Original Message-----\nFrom: Christopher Kings-Lynne [mailto:[email protected]]\nSent: Tuesday, April 20, 2004 12:02 AM\nTo: Shea,Dan [CIS]\nCc: [email protected]\nSubject: Re: [PERFORM] Why will vacuum not end?\n\n\n> This vacuum is running a marathon. Why will it not end and show me free\n> space map INFO? We have deleted a lot of data and I would like to be\n> confident that these deletions will be used as free space, rather than\n> creating more table files.\n\nDoes another postgres query running have a lock on that table?\n\nChris\n", "msg_date": "Tue, 20 Apr 2004 08:58:09 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "Shea,Dan [CIS] wrote:\n> No, but data is constantly being inserted by userid scores. It is postgres\n> runnimg the vacuum.\n> Dan.\n> \n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:[email protected]]\n> Sent: Tuesday, April 20, 2004 12:02 AM\n> To: Shea,Dan [CIS]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Why will vacuum not end?\n> \n>>This vacuum is running a marathon. Why will it not end and show me free\n>>space map INFO? We have deleted a lot of data and I would like to be\n>>confident that these deletions will be used as free space, rather than\n>>creating more table files.\n> \n> Does another postgres query running have a lock on that table?\n\nThis may be a dumb question (but only because I don't know the answer)\n\nDoesn't/shouldn't vacuum have some kind of timeout so if a table is locked\nit will give up eventually (loudly complaining when it does so)?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 20 Apr 2004 09:14:00 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "> No, but data is constantly being inserted by userid scores. It is postgres\n> runnimg the vacuum.\n> Dan.\n\nWell, inserts create some locks - perhaps that's the problem...\n\nOtherwise, check the pg_locks view to see if you can figure it out.\n\nChris\n\n", "msg_date": "Wed, 21 Apr 2004 09:26:27 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
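A concrete form of the pg_locks check suggested above, using the table name from the earlier posts and assuming the pg_locks layout of 7.3 and later:

    SELECT c.relname, l.pid, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname = 'forecastelement';

Rows with granted = false show sessions that are waiting; the pid column can be matched against ps output to identify the blocking backend.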
[ { "msg_contents": "I received a copy of pgbench rewritten in Pro*C, which is similar to\nembedded C. I think it was done so the same program could be tested on\nOracle and PostgreSQL.\n\nAre folks interested in this code? Should it be put on gborg or in our\n/contrib/pgbench?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 20 Apr 2004 11:48:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "pgbench written in Pro*C" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I received a copy of pgbench rewritten in Pro*C, which is similar to\n> embedded C. I think it was done so the same program could be tested on\n> Oracle and PostgreSQL.\n\n> Are folks interested in this code? Should it be put on gborg or in our\n> /contrib/pgbench?\n\nIf it requires non-free tools even to build, it is of no value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Apr 2004 22:51:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench written in Pro*C " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I received a copy of pgbench rewritten in Pro*C, which is similar to\n> > embedded C. I think it was done so the same program could be tested on\n> > Oracle and PostgreSQL.\n> \n> > Are folks interested in this code? Should it be put on gborg or in our\n> > /contrib/pgbench?\n> \n> If it requires non-free tools even to build, it is of no value.\n\nOK, it's only value would be if we could modify it so it compiled using\nour ecpg and Pro*C and the comparison program could be run on both\ndatabases.\n\nI will tell the submitter to put it on gborg if they wish.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 20 Apr 2004 23:31:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgbench written in Pro*C" } ]
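For anyone picking this up from gborg: the kind of embedded-SQL C that both ecpg and Pro*C could share looks roughly like the sketch below. This is a minimal illustration only, not the benchmark itself; "accounts" is the old pgbench table name and the connection target is an assumption.

    /* build sketch:  ecpg bench.pgc  &&  cc bench.c -lecpg -o bench */
    #include <stdio.h>

    EXEC SQL INCLUDE sqlca;

    int main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        int naccounts;
        EXEC SQL END DECLARE SECTION;

        EXEC SQL CONNECT TO pgbench;
        EXEC SQL SELECT count(*) INTO :naccounts FROM accounts;
        printf("accounts rows: %d\n", naccounts);
        EXEC SQL DISCONNECT;
        return 0;
    }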
[ { "msg_contents": "If this helps - \n\nQuad 2.0GHz XEON with highest load we have seen on the applications, DB performing great - \n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 1616 351820 66144 10813704 0 0 2 0 1 1 0 2 7\n 3 0 0 1616 349712 66144 10813736 0 0 8 1634 1362 4650 4 2 95\n 0 0 0 1616 347768 66144 10814120 0 0 188 1218 1158 4203 5 1 93\n 0 0 1 1616 346596 66164 10814184 0 0 8 1972 1394 4773 4 1 94\n 2 0 1 1616 345424 66164 10814272 0 0 20 1392 1184 4197 4 2 94\n\nAround 4k CS/sec\nChipset is Intel ServerWorks GC-HE.\nLinux Kernel 2.4.20-28.9bigmem #1 SMP\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Dirk Lutzebäck [mailto:[email protected]] \nSent: Tuesday, April 20, 2004 10:29 AM\nTo: Tom Lane; Josh Berkus\nCc: [email protected]; Neil Conway\nSubject: Re: [PERFORM] Wierd context-switching issue on Xeon\n\nDirk Lutzebaeck wrote:\n\n> c) Dual XEON DP, non-bigmem, HT on, E7500 Intel chipset (Supermicro)\n>\n> performs well and I could not observe context switch peaks here (one \n> user active), almost no extra semop calls\n\nDid Tom's test here: with 2 processes I'll reach 200k+ CS with peaks to \n300k CS. Bummer.. Josh, I don't think you can bash the ServerWorks \nchipset here nor bigmem.\n\nDirk\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n", "msg_date": "Tue, 20 Apr 2004 12:48:58 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Anjan,\n\n> Quad 2.0GHz XEON with highest load we have seen on the applications, DB\n> performing great -\n\nCan you run Tom's test? It takes a particular pattern of data access to \nreproduce the issue.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 20 Apr 2004 09:59:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "I modified the code in s_lock.c to remove the spins\n\n#define SPINS_PER_DELAY 1\n\nand it doesn't exhibit the behaviour\n\nThis effectively changes the code to \n\n\nwhile(TAS(lock))\n\tselect(10000); // 10ms\n\nCan anyone explain why executing TAS 100 times would increase context\nswitches ?\n\nDave\n\n\nOn Tue, 2004-04-20 at 12:59, Josh Berkus wrote:\n> Anjan,\n> \n> > Quad 2.0GHz XEON with highest load we have seen on the applications, DB\n> > performing great -\n> \n> Can you run Tom's test? It takes a particular pattern of data access to \n> reproduce the issue.\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 20 Apr 2004 22:41:03 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Dave:\n\nWhy would test and set increase context swtches:\nNote that it *does not increase* context swtiches when the two threads \nare on the two cores of a single Xeon processor. (use taskset to force \naffinity on linux)\n\nScenario:\nIf the two test and set processes are testing and setting the same bit \nas each other, then they'll see worst case cache coherency misses. \nThey'll ping a cache line back and forth between CPUs. 
Another case, \nmight be that they're tesing and setting different bits or words, but \nthose bits or words are always in the same cache line, again causing \nworst case cache coherency and misses. The fact that tis doesn't \nhappen when the threads are bound to the 2 cores of a single Xeon \nsuggests it's because they're now sharing L1 cache. No pings/bounces.\n\n\nI wonder do the threads stall so badly when pinging cache lines back \nand forth, that the kernel sees it as an opportunity to put the \nprocess to sleep? or do these worst case misses cause an interrupt?\n\nMy question is: What is it that the two threads waiting for when they \nspin? Is it exactly the same resource, or two resources that happen to \nhave test-and-set flags in the same cache line?\n\nOn Apr 20, 2004, at 7:41 PM, Dave Cramer wrote:\n\n> I modified the code in s_lock.c to remove the spins\n>\n> #define SPINS_PER_DELAY 1\n>\n> and it doesn't exhibit the behaviour\n>\n> This effectively changes the code to\n>\n>\n> while(TAS(lock))\n> \tselect(10000); // 10ms\n>\n> Can anyone explain why executing TAS 100 times would increase context\n> switches ?\n>\n> Dave\n>\n>\n> On Tue, 2004-04-20 at 12:59, Josh Berkus wrote:\n>> Anjan,\n>>\n>>> Quad 2.0GHz XEON with highest load we have seen on the applications, \n>>> DB\n>>> performing great -\n>>\n>> Can you run Tom's test? It takes a particular pattern of data \n>> access to\n>> reproduce the issue.\n> -- \n> Dave Cramer\n> 519 939 0336\n> ICQ # 14675561\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n", "msg_date": "Wed, 21 Apr 2004 11:19:46 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Paul Tuckfield <[email protected]> writes:\n> I wonder do the threads stall so badly when pinging cache lines back \n> and forth, that the kernel sees it as an opportunity to put the \n> process to sleep? or do these worst case misses cause an interrupt?\n\nNo; AFAICS the kernel could not even be aware of that behavior.\n\nThe context swap storm is happening because of contention at the next\nlevel up (LWLocks rather than spinlocks). It could be an independent\nissue that just happens to be triggered by the same sort of access\npattern. I put forward a hypothesis that the cache miss storm caused by\nthe test-and-set ops induces the context swap storm by making the code\nmore likely to be executing in certain places at certain times ... but\nit's only a hypothesis.\n\nYesterday evening I had pretty well convinced myself that they were\nindeed independent issues: profiling on a single-CPU machine was telling\nme that the test case I proposed spends over 10% of its time inside\nReadBuffer, which certainly seems like enough to explain a high rate of\ncontention on the BufMgrLock, without any assumptions about funny\nbehavior at the hardware level. However, your report and Dave's suggest\nthat there really is some linkage. So I'm still confused.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 14:51:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "FYI,\n\nI am doing my testing on non hyperthreading dual athlons. \n\nAlso, the test and set is attempting to set the same resource, and not\nsimply a bit. 
It's really an lock;xchg in assemblelr.\n\nAlso we are using the PAUSE mnemonic, so we should not be seeing any\ncache coherency issues, as the cache is being taken out of the picture\nAFAICS ?\n\nDave\n\nOn Wed, 2004-04-21 at 14:19, Paul Tuckfield wrote:\n> Dave:\n> \n> Why would test and set increase context swtches:\n> Note that it *does not increase* context swtiches when the two threads \n> are on the two cores of a single Xeon processor. (use taskset to force \n> affinity on linux)\n> \n> Scenario:\n> If the two test and set processes are testing and setting the same bit \n> as each other, then they'll see worst case cache coherency misses. \n> They'll ping a cache line back and forth between CPUs. Another case, \n> might be that they're tesing and setting different bits or words, but \n> those bits or words are always in the same cache line, again causing \n> worst case cache coherency and misses. The fact that tis doesn't \n> happen when the threads are bound to the 2 cores of a single Xeon \n> suggests it's because they're now sharing L1 cache. No pings/bounces.\n> \n> \n> I wonder do the threads stall so badly when pinging cache lines back \n> and forth, that the kernel sees it as an opportunity to put the \n> process to sleep? or do these worst case misses cause an interrupt?\n> \n> My question is: What is it that the two threads waiting for when they \n> spin? Is it exactly the same resource, or two resources that happen to \n> have test-and-set flags in the same cache line?\n> \n> On Apr 20, 2004, at 7:41 PM, Dave Cramer wrote:\n> \n> > I modified the code in s_lock.c to remove the spins\n> >\n> > #define SPINS_PER_DELAY 1\n> >\n> > and it doesn't exhibit the behaviour\n> >\n> > This effectively changes the code to\n> >\n> >\n> > while(TAS(lock))\n> > \tselect(10000); // 10ms\n> >\n> > Can anyone explain why executing TAS 100 times would increase context\n> > switches ?\n> >\n> > Dave\n> >\n> >\n> > On Tue, 2004-04-20 at 12:59, Josh Berkus wrote:\n> >> Anjan,\n> >>\n> >>> Quad 2.0GHz XEON with highest load we have seen on the applications, \n> >>> DB\n> >>> performing great -\n> >>\n> >> Can you run Tom's test? It takes a particular pattern of data \n> >> access to\n> >> reproduce the issue.\n> > -- \n> > Dave Cramer\n> > 519 939 0336\n> > ICQ # 14675561\n> >\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> \n> \n> !DSPAM:4086c4d0263544680737483!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Wed, 21 Apr 2004 15:13:28 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "attached.\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561", "msg_date": "Wed, 21 Apr 2004 16:49:48 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "On Wed, Apr 21, 2004 at 02:51:31PM -0400, Tom Lane wrote:\n> The context swap storm is happening because of contention at the next\n> level up (LWLocks rather than spinlocks). It could be an independent\n> issue that just happens to be triggered by the same sort of access\n> pattern. 
I put forward a hypothesis that the cache miss storm caused by\n> the test-and-set ops induces the context swap storm by making the code\n> more likely to be executing in certain places at certain times ... but\n> it's only a hypothesis.\n> \nIf the context swap storm derives from LWLock contention, maybe using\na random order to assign buffer locks in buf_init.c would prevent\nsimple adjacency of buffer allocation to cause the storm. Just offsetting\nthe assignment by the cacheline size should work. I notice that when\ninitializing the buffers in shared memory, both the buf->meta_data_lock\nand the buf->cntx_lock are immediately adjacent in memory. I am not\nfamiliar enough with the flow through postgres to see if there could\nbe \"fighting\" for those two locks. If so, offsetting those by the cache\nline size would also stop the context swap storm.\n\n--Ken\n", "msg_date": "Wed, 21 Apr 2004 17:02:31 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> If the context swap storm derives from LWLock contention, maybe using\n> a random order to assign buffer locks in buf_init.c would prevent\n> simple adjacency of buffer allocation to cause the storm.\n\nGood try, but no cigar ;-). The test cases I've been looking at take\nonly shared locks on the per-buffer locks, so that's not where the\ncontext swaps are coming from. The swaps have to be caused by the\nBufMgrLock, because that's the only exclusive lock being taken.\n\nI did try increasing the allocated size of the spinlocks to 128 bytes\nto see if it would do anything. It didn't ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 21:45:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon " }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> diff -c -r1.16 s_lock.c\n> *** backend/storage/lmgr/s_lock.c\t8 Aug 2003 21:42:00 -0000\t1.16\n> --- backend/storage/lmgr/s_lock.c\t21 Apr 2004 20:27:34 -0000\n> ***************\n> *** 76,82 ****\n> \t * The select() delays are measured in centiseconds (0.01 sec) because 10\n> \t * msec is a common resolution limit at the OS level.\n> \t */\n> ! #define SPINS_PER_DELAY\t\t100\n> #define NUM_DELAYS\t\t\t1000\n> #define MIN_DELAY_CSEC\t\t1\n> #define MAX_DELAY_CSEC\t\t100\n> --- 76,82 ----\n> \t * The select() delays are measured in centiseconds (0.01 sec) because 10\n> \t * msec is a common resolution limit at the OS level.\n> \t */\n> ! #define SPINS_PER_DELAY\t\t10\n> #define NUM_DELAYS\t\t\t1000\n> #define MIN_DELAY_CSEC\t\t1\n> #define MAX_DELAY_CSEC\t\t100\n\n\nAs far as I can tell, this does reduce the rate of semop's\nsignificantly, but it does so by bringing the overall processing rate\nto a crawl :-(. 
I see 97% CPU idle time when using this patch.\nI believe what is happening is that the select() delay in s_lock.c is\nbeing hit frequently because the spin loop isn't allowed to run long\nenough to let the other processor get out of the spinlock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 22:35:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1 " }, { "msg_contents": "Tom,\n\n> As far as I can tell, this does reduce the rate of semop's\n> significantly, but it does so by bringing the overall processing rate\n> to a crawl :-(. I see 97% CPU idle time when using this patch.\n> I believe what is happening is that the select() delay in s_lock.c is\n> being hit frequently because the spin loop isn't allowed to run long\n> enough to let the other processor get out of the spinlock.\n\nAlso, I tested it on production data, and it reduces the CSes by about 40%. \nAn improvement, but not a magic bullet.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 21 Apr 2004 19:53:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Yeah, I did some more testing myself, and actually get better numbers\nwith increasing spins per delay to 1000, but my suspicion is that it is\nhighly dependent on finding the right delay for the processor you are\non.\n\nMy hypothesis is that if you spin approximately the same or more time\nthan the average time it takes to get finished with the shared resource\nthen this should reduce cs.\n\nCertainly more ideas are required here.\n\nDave \nOn Wed, 2004-04-21 at 22:35, Tom Lane wrote:\n> Dave Cramer <[email protected]> writes:\n> > diff -c -r1.16 s_lock.c\n> > *** backend/storage/lmgr/s_lock.c\t8 Aug 2003 21:42:00 -0000\t1.16\n> > --- backend/storage/lmgr/s_lock.c\t21 Apr 2004 20:27:34 -0000\n> > ***************\n> > *** 76,82 ****\n> > \t * The select() delays are measured in centiseconds (0.01 sec) because 10\n> > \t * msec is a common resolution limit at the OS level.\n> > \t */\n> > ! #define SPINS_PER_DELAY\t\t100\n> > #define NUM_DELAYS\t\t\t1000\n> > #define MIN_DELAY_CSEC\t\t1\n> > #define MAX_DELAY_CSEC\t\t100\n> > --- 76,82 ----\n> > \t * The select() delays are measured in centiseconds (0.01 sec) because 10\n> > \t * msec is a common resolution limit at the OS level.\n> > \t */\n> > ! #define SPINS_PER_DELAY\t\t10\n> > #define NUM_DELAYS\t\t\t1000\n> > #define MIN_DELAY_CSEC\t\t1\n> > #define MAX_DELAY_CSEC\t\t100\n> \n> \n> As far as I can tell, this does reduce the rate of semop's\n> significantly, but it does so by bringing the overall processing rate\n> to a crawl :-(. 
I see 97% CPU idle time when using this patch.\n> I believe what is happening is that the select() delay in s_lock.c is\n> being hit frequently because the spin loop isn't allowed to run long\n> enough to let the other processor get out of the spinlock.\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> !DSPAM:40872f7e21492906114513!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Wed, 21 Apr 2004 23:06:41 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "More data....\n\nOn a dual xeon with HTT enabled:\n\nI tried increasing the NUM_SPINS to 1000 and it works better.\n\nNUM_SPINLOCKS\tCS\tID\tpgbench\n\n100\t\t250K\t59%\t230 TPS\n1000\t\t125K\t55%\t228 TPS\n\nThis is certainly heading in the right direction ? Although it looks\nlike it is highly dependent on the system you are running on.\n\n--dc--\t \n\n\n\nOn Wed, 2004-04-21 at 22:53, Josh Berkus wrote:\n> Tom,\n> \n> > As far as I can tell, this does reduce the rate of semop's\n> > significantly, but it does so by bringing the overall processing rate\n> > to a crawl :-(. I see 97% CPU idle time when using this patch.\n> > I believe what is happening is that the select() delay in s_lock.c is\n> > being hit frequently because the spin loop isn't allowed to run long\n> > enough to let the other processor get out of the spinlock.\n> \n> Also, I tested it on production data, and it reduces the CSes by about 40%. \n> An improvement, but not a magic bullet.\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Wed, 21 Apr 2004 23:18:47 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> I tried increasing the NUM_SPINS to 1000 and it works better.\n\nDoesn't surprise me. The value of 100 is about right on the assumption\nthat the spinlock instruction per se is not too much more expensive than\nany other instruction. What I was seeing from oprofile suggested that\nthe spinlock instruction cost about 100x more than an ordinary\ninstruction :-( ... so maybe 200 or so would be good on a Xeon.\n\n> This is certainly heading in the right direction ? Although it looks\n> like it is highly dependent on the system you are running on.\n\nYeah. I don't know a reasonable way to tune this number automatically\nfor particular systems ... but at the very least we'd need to find a way\nto distinguish uniprocessor from multiprocessor, because on a\nuniprocessor the optimal value is surely 1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 00:23:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1 " }, { "msg_contents": "> Yeah. I don't know a reasonable way to tune this number automatically\n> for particular systems ... 
but at the very least we'd need to find a way\n> to distinguish uniprocessor from multiprocessor, because on a\n> uniprocessor the optimal value is surely 1.\n\n From TODO:\n\n* Add code to detect an SMP machine and handle spinlocks accordingly \nfrom distributted.net, http://www1.distributed.net/source, in \nclient/common/cpucheck.cpp\n\nChris\n\n", "msg_date": "Thu, 22 Apr 2004 12:44:07 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Tom Lane wrote:\n> Dave Cramer <[email protected]> writes:\n> > I tried increasing the NUM_SPINS to 1000 and it works better.\n> \n> Doesn't surprise me. The value of 100 is about right on the assumption\n> that the spinlock instruction per se is not too much more expensive than\n> any other instruction. What I was seeing from oprofile suggested that\n> the spinlock instruction cost about 100x more than an ordinary\n> instruction :-( ... so maybe 200 or so would be good on a Xeon.\n> \n> > This is certainly heading in the right direction ? Although it looks\n> > like it is highly dependent on the system you are running on.\n> \n> Yeah. I don't know a reasonable way to tune this number automatically\n> for particular systems ... but at the very least we'd need to find a way\n> to distinguish uniprocessor from multiprocessor, because on a\n> uniprocessor the optimal value is surely 1.\n\nHave you looked at the code pointed to by our TODO item:\n\t\n\t* Add code to detect an SMP machine and handle spinlocks accordingly\n\t from distributted.net, http://www1.distributed.net/source,\n\t in client/common/cpucheck.cpp\n\nFor BSDOS it has:\n\n #if (CLIENT_OS == OS_FREEBSD) || (CLIENT_OS == OS_BSDOS) || \\\n (CLIENT_OS == OS_OPENBSD) || (CLIENT_OS == OS_NETBSD)\n { /* comment out if inappropriate for your *bsd - cyp (25/may/1999) */\n int ncpus; size_t len = sizeof(ncpus);\n int mib[2]; mib[0] = CTL_HW; mib[1] = HW_NCPU;\n if (sysctl( &mib[0], 2, &ncpus, &len, NULL, 0 ) == 0)\n //if (sysctlbyname(\"hw.ncpu\", &ncpus, &len, NULL, 0 ) == 0)\n cpucount = ncpus;\n }\n\nand I can confirm that on my computer it works:\n\n\thw.ncpu = 2\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 22 Apr 2004 00:56:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> For BSDOS it has:\n\n> #if (CLIENT_OS == OS_FREEBSD) || (CLIENT_OS == OS_BSDOS) || \\\n> (CLIENT_OS == OS_OPENBSD) || (CLIENT_OS == OS_NETBSD)\n> { /* comment out if inappropriate for your *bsd - cyp (25/may/1999) */\n> int ncpus; size_t len = sizeof(ncpus);\n> int mib[2]; mib[0] = CTL_HW; mib[1] = HW_NCPU;\n> if (sysctl( &mib[0], 2, &ncpus, &len, NULL, 0 ) == 0)\n> //if (sysctlbyname(\"hw.ncpu\", &ncpus, &len, NULL, 0 ) == 0)\n> cpucount = ncpus;\n> }\n\nMultiplied by how many platforms? 
Ewww...\n\nI was wondering about some sort of dynamic adaptation, roughly along the\nlines of \"whenever a spin loop successfully gets the lock after\nspinning, decrease the allowed loop count by one; whenever we fail to\nget the lock after spinning, increase by 100; if the loop count reaches,\nsay, 10000, decide we are on a uniprocessor and irreversibly set it to\n1.\" As written this would tend to incur a select() delay once per\nhundred spinlock acquisitions, which is way too much, but I think we\ncould make it work with a sufficiently slow adaptation rate. The tricky\npart is that a slow adaptation rate means we can't have every backend\nfiguring this out for itself --- the right value would have to be\nmaintained globally, and I'm not sure how to do that without adding a\nlot of overhead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 01:13:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1 " }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> My hypothesis is that if you spin approximately the same or more time\n> than the average time it takes to get finished with the shared resource\n> then this should reduce cs.\n\nThe only thing we use spinlocks for nowadays is to protect LWLocks, so\nthe \"average time\" involved is fairly small and stable --- or at least\nthat was the design intention. What we seem to be seeing is that on SMP\nmachines, cache coherency issues cause the TAS step itself to be\nexpensive and variable. However, in the experiments I did, strace'ing\nshowed that actual spin timeouts (manifested by the execution of a\ndelaying select()) weren't actually that common; the big source of\ncontext switches is semop(), which indicates contention at the LWLock\nlevel rather than the spinlock level. So while tuning the spinlock\nlimit count might be a useful thing to do in general, I think it will\nhave only negligible impact on the particular problems we're discussing\nin this thread.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 08:36:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1 " }, { "msg_contents": "Tom,\n\n> The tricky\n> part is that a slow adaptation rate means we can't have every backend\n> figuring this out for itself --- the right value would have to be\n> maintained globally, and I'm not sure how to do that without adding a\n> lot of overhead.\n\nThis may be a moot point, since you've stated that changing the loop timing \nwon't solve the problem, but what about making the test part of make? I \ndon't think too many systems are going to change processor architectures once \nin production, and those that do can be told to re-compile.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 Apr 2004 10:37:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> This may be a moot point, since you've stated that changing the loop timing \n> won't solve the problem, but what about making the test part of make? 
I \n> don't think too many systems are going to change processor architectures once\n> in production, and those that do can be told to re-compile.\n\nHaving to recompile to run on single- vs dual-processor machines doesn't\nseem like it would fly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 13:55:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1 " }, { "msg_contents": "Tom,\n\n> Having to recompile to run on single- vs dual-processor machines doesn't\n> seem like it would fly.\n\nOh, I don't know. Many applications require compiling for a target \narchitecture; SQL Server, for example, won't use a 2nd processor without \nre-installation. I'm not sure about Oracle.\n\nIt certainly wasn't too long ago that Linux gurus were esposing re-compiling \nthe kernel for the machine.\n\nAnd it's not like they would *have* to re-compile to use PostgreSQL after \nadding an additional processor. Just if they wanted to maximize peformance \nbenefit.\n\nAlso, this is a fairly rare circumstance, I think; to judge by my clients, \nonce a database server is in production nobody touches the hardware.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 22 Apr 2004 11:11:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Josh Berkus wrote:\n> Tom,\n> \n> > Having to recompile to run on single- vs dual-processor machines doesn't\n> > seem like it would fly.\n> \n> Oh, I don't know. Many applications require compiling for a target \n> architecture; SQL Server, for example, won't use a 2nd processor without \n> re-installation. I'm not sure about Oracle.\n> \n> It certainly wasn't too long ago that Linux gurus were esposing re-compiling \n> the kernel for the machine.\n> \n> And it's not like they would *have* to re-compile to use PostgreSQL after \n> adding an additional processor. Just if they wanted to maximize peformance \n> benefit.\n> \n> Also, this is a fairly rare circumstance, I think; to judge by my clients, \n> once a database server is in production nobody touches the hardware.\n\nA much simpler solution would be for the postmaster to run a test during\nstartup.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 22 Apr 2004 14:31:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "On Thu, 2004-04-22 at 13:55, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > This may be a moot point, since you've stated that changing the loop timing \n> > won't solve the problem, but what about making the test part of make? 
I \n> > don't think too many systems are going to change processor architectures once\n> > in production, and those that do can be told to re-compile.\n> \n> Having to recompile to run on single- vs dual-processor machines doesn't\n> seem like it would fly.\n\nIs it something the postmaster could quickly determine and set a global\nduring the startup cycle?\n\n\n", "msg_date": "Thu, 22 Apr 2004 17:22:11 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "On Thu, 2004-04-22 at 10:37 -0700, Josh Berkus wrote:\n> Tom,\n> \n> > The tricky\n> > part is that a slow adaptation rate means we can't have every backend\n> > figuring this out for itself --- the right value would have to be\n> > maintained globally, and I'm not sure how to do that without adding a\n> > lot of overhead.\n> \n> This may be a moot point, since you've stated that changing the loop timing \n> won't solve the problem, but what about making the test part of make? I \n> don't think too many systems are going to change processor architectures once \n> in production, and those that do can be told to re-compile.\n\nSure they do - PostgreSQL is regularly provided as a pre-compiled\ndistribution. I haven't compiled PostgreSQL for years, and we have it\nrunning on dozens of machines, some SMP, some not, but most running\nDebian Linux.\n\nEven having a compiler _installed_ on one of our client's database\nservers would usually be considered against security procedures, and\nwould get a black mark when the auditors came through.\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Planning an election? 
Call us!\n-------------------------------------------------------------------------\n\n", "msg_date": "Sun, 25 Apr 2004 19:13:35 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Dave,\n\n> Yeah, I did some more testing myself, and actually get better numbers\n> with increasing spins per delay to 1000, but my suspicion is that it is\n> highly dependent on finding the right delay for the processor you are\n> on.\n\nWell, it certainly didn't help here:\n\nprocs memory swap io system cpu\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 2 0 0 14870744 123872 1129912 0 0 0 0 1027 187341 48 27 \n26 0\n 2 0 0 14869912 123872 1129912 0 0 0 48 1030 126490 65 18 \n16 0\n 2 0 0 14867032 123872 1129912 0 0 0 0 1021 106046 72 16 \n12 0\n 2 0 0 14869912 123872 1129912 0 0 0 0 1025 90256 76 14 10 \n0\n 2 0 0 14870424 123872 1129912 0 0 0 0 1022 135249 63 22 \n16 0\n 2 0 0 14872664 123872 1129912 0 0 0 0 1023 131111 63 20 \n17 0\n 1 0 0 14871128 123872 1129912 0 0 0 48 1024 155728 57 22 \n20 0\n 2 0 0 14871128 123872 1129912 0 0 0 0 1028 189655 49 29 \n22 0\n 2 0 0 14871064 123872 1129912 0 0 0 0 1018 190744 48 29 \n23 0\n 2 0 0 14871064 123872 1129912 0 0 0 0 1027 186812 51 26 \n23 0\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 26 Apr 2004 17:03:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Are you testing this with Tom's code, you need to do a baseline\nmeasurement with 10 and then increase it, you will still get lots of cs,\nbut it will be less.\n\nDave\nOn Mon, 2004-04-26 at 20:03, Josh Berkus wrote:\n> Dave,\n> \n> > Yeah, I did some more testing myself, and actually get better numbers\n> > with increasing spins per delay to 1000, but my suspicion is that it is\n> > highly dependent on finding the right delay for the processor you are\n> > on.\n> \n> Well, it certainly didn't help here:\n> \n> procs memory swap io system cpu\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 2 0 0 14870744 123872 1129912 0 0 0 0 1027 187341 48 27 \n> 26 0\n> 2 0 0 14869912 123872 1129912 0 0 0 48 1030 126490 65 18 \n> 16 0\n> 2 0 0 14867032 123872 1129912 0 0 0 0 1021 106046 72 16 \n> 12 0\n> 2 0 0 14869912 123872 1129912 0 0 0 0 1025 90256 76 14 10 \n> 0\n> 2 0 0 14870424 123872 1129912 0 0 0 0 1022 135249 63 22 \n> 16 0\n> 2 0 0 14872664 123872 1129912 0 0 0 0 1023 131111 63 20 \n> 17 0\n> 1 0 0 14871128 123872 1129912 0 0 0 48 1024 155728 57 22 \n> 20 0\n> 2 0 0 14871128 123872 1129912 0 0 0 0 1028 189655 49 29 \n> 22 0\n> 2 0 0 14871064 123872 1129912 0 0 0 0 1018 190744 48 29 \n> 23 0\n> 2 0 0 14871064 123872 1129912 0 0 0 0 1027 186812 51 26 \n> 23 0\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Mon, 26 Apr 2004 20:16:00 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Dave,\n\n> Are you testing this with Tom's code, you need to do a baseline\n> measurement with 10 and then increase it, you will still get lots of cs,\n> but it will be less.\n\nNo, that was just a test of 1000 straight up. Tom outlined a method, but I \ndidn't see any code that would help me find a better level, other than just \ntrying each +100 increase one at a time. 
This would take days of testing \n...\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 27 Apr 2004 11:05:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Josh,\n\nI think you can safely increase by orders of magnitude here, instead of\nby +100, my wild ass guess is that the sweet spot is the spin time\nshould be approximately the time it takes to consume the resource. So if\nyou have a really fast machine then the spin count should be higher. \n\nAlso you have to take into consideration your memory bus speed, with the\npause instruction inserted in the loop the timing is now dependent on\nmemory speed.\n\nBut... you need a baseline first.\n\nDave\nOn Tue, 2004-04-27 at 14:05, Josh Berkus wrote:\n> Dave,\n> \n> > Are you testing this with Tom's code, you need to do a baseline\n> > measurement with 10 and then increase it, you will still get lots of cs,\n> > but it will be less.\n> \n> No, that was just a test of 1000 straight up. Tom outlined a method, but I \n> didn't see any code that would help me find a better level, other than just \n> trying each +100 increase one at a time. This would take days of testing \n> ...\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 27 Apr 2004 14:27:33 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" }, { "msg_contents": "Dave,\n\n> But... you need a baseline first.\n\nA baseline on CS? I have that ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 27 Apr 2004 14:03:13 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" } ]
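For readers without the source handy, the loop whose SPINS_PER_DELAY constant is being tuned in this thread has roughly the shape sketched below. This is a simplified stand-in, not the real s_lock.c: the platform TAS() macro is approximated with a gcc builtin, and the actual code adds a randomized, growing delay and gives up after NUM_DELAYS failed rounds.

    #include <sys/time.h>
    #include <sys/select.h>

    #define SPINS_PER_DELAY 100          /* the constant being experimented with */

    static void
    spin_lock(volatile int *lock)
    {
        int spins = 0;

        /* __sync_lock_test_and_set stands in for the platform TAS() macro */
        while (__sync_lock_test_and_set(lock, 1))
        {
            if (++spins >= SPINS_PER_DELAY)
            {
                struct timeval delay = {0, 10000};   /* 10 ms, as in the patch above */
                select(0, NULL, NULL, NULL, &delay);
                spins = 0;
            }
        }
    }

    static void
    spin_unlock(volatile int *lock)
    {
        __sync_lock_release(lock);
    }

Lowering SPINS_PER_DELAY makes contending backends fall into the 10 ms sleep sooner (fewer context switches, more idle time); raising it keeps them spinning longer, which helps only if the lock holder usually releases within that window.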
[ { "msg_contents": "I am planning to move the pg databases from the internal RAID to\nexternal Fiber Channel over SAN.\n\n \n\nQuestion is - \n\n \n\n-With the db size being as big as, say, 30+GB, how do I move it on the\nnew logical drive? (stop postgresql, and simply move it over somehow and\nmake a link?)\n\n-Currently, the internal RAID volume is ext3 filesystem. Any\nrecommendations for the filesystem on the new FC volume? Rieserfs?\n\n \n\nDBs are 7.4.1(RH9), and 7.2.3 (RH8).\n\n \n\n \n\nAppreciate any pointers.\n\n \n\nThanks,\nAnjan\n\n\n\n\n\n\n\n\n\n\nI am planning to move the pg databases from the internal\nRAID to external Fiber Channel over SAN.\n \nQuestion is – \n \n-With the db size being as big as, say, 30+GB, how do I move\nit on the new logical drive? (stop postgresql, and simply move it over somehow\nand make a link?)\n-Currently, the internal RAID volume is ext3 filesystem. Any\nrecommendations for the filesystem on the new FC volume? Rieserfs?\n \nDBs are 7.4.1(RH9), and 7.2.3 (RH8).\n \n \nAppreciate any pointers.\n \nThanks,\nAnjan", "msg_date": "Tue, 20 Apr 2004 16:20:36 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Moving postgres to FC disks" }, { "msg_contents": "> -With the db size being as big as, say, 30+GB, how do I move it on the \n> new logical drive? (stop postgresql, and simply move it over somehow \n> and make a link?)\n>\nI would stop the database, move the data directory to the new volume \nusing rsync then start up postgresql pointed at the new data directory.\nProviding everything is working correctly you can then remove the old \ndata directory.\n\n> -Currently, the internal RAID volume is ext3 filesystem. Any \n> recommendations for the filesystem on the new FC volume? Rieserfs?\n>\n> \n>\nXFS\n\n> DBs are 7.4.1(RH9), and 7.2.3 (RH8).\n>\n> \n>\n> \n>\n> Appreciate any pointers.\n>\n> \n>\n> Thanks,\n> Anjan\n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\n\n\n\n-With the db size\nbeing as big as, say, 30+GB, how do I move\nit on the new logical drive? (stop postgresql, and simply move it over\nsomehow\nand make a link?)\n\n\nI would stop the database, move the data directory to the new volume\nusing rsync then start up postgresql pointed at the new data directory.\nProviding everything is working correctly you can then remove the old\ndata directory.\n\n\n\n\n-Currently, the\ninternal RAID volume is ext3 filesystem. Any\nrecommendations for the filesystem on the new FC volume? Rieserfs?\n \n\n\nXFS\n\n\n\nDBs are\n7.4.1(RH9), and 7.2.3 (RH8).\n \n \nAppreciate any\npointers.\n \nThanks,\nAnjan\n\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 20 Apr 2004 17:27:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Moving postgres to FC disks" }, { "msg_contents": "I agree on not linking and adding non-SAN disk dependancy to your DB. I'm trying to understand your FS reasoning. 
I have never seen XFS run faster than ReiserFS in any situation (or for that matter beat any FS in performance except JFS). XFS has some nifty very large file features, but we're talking about 30G and all modern FSs support >2G files. \n\nMy tendancy would be to stay on ext3, since it is the default RH FS. I would review site preference and the SAN recommended FS and see if they add any compelling points.\n\n/Aaron\n ----- Original Message ----- \n From: Joshua D. Drake \n To: Anjan Dave \n Cc: [email protected] \n Sent: Tuesday, April 20, 2004 8:27 PM\n Subject: Re: [PERFORM] Moving postgres to FC disks\n\n\n\n\n -With the db size being as big as, say, 30+GB, how do I move it on the new logical drive? (stop postgresql, and simply move it over somehow and make a link?)\n\n I would stop the database, move the data directory to the new volume using rsync then start up postgresql pointed at the new data directory.\n Providing everything is working correctly you can then remove the old data directory.\n\n\n\n -Currently, the internal RAID volume is ext3 filesystem. Any recommendations for the filesystem on the new FC volume? Rieserfs?\n\n\n\n XFS\n\n\n DBs are 7.4.1(RH9), and 7.2.3 (RH8).\n\n\n\n\n\n Appreciate any pointers.\n\n\n\n Thanks,\n Anjan\n\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\nI agree on not linking and adding non-SAN disk \ndependancy to your DB. I'm trying to understand your FS reasoning. I have never seen XFS run faster than ReiserFS in any \nsituation (or for that matter beat any FS in performance except JFS). XFS has \nsome nifty very large file features, but we're talking about 30G and all modern \nFSs support >2G files. \n \nMy tendancy would be to stay on ext3, since it is \nthe default RH FS. I would review site preference and the SAN recommended FS and \nsee if they add any compelling points.\n \n/Aaron\n\n----- Original Message ----- \nFrom:\nJoshua D. \n Drake \nTo: Anjan Dave \nCc: [email protected]\n\nSent: Tuesday, April 20, 2004 8:27 \n PM\nSubject: Re: [PERFORM] Moving postgres to \n FC disks\n\n\n\n-With the db \n size being as big as, say, 30+GB, how do I move it on the new logical drive? \n (stop postgresql, and simply move it over somehow and make a \n link?)I would stop the database, move the \n data directory to the new volume using rsync then start up postgresql pointed \n at the new data directory.Providing everything is working correctly you \n can then remove the old data directory.\n\n\n\n-Currently, \n the internal RAID volume is ext3 filesystem. Any recommendations for the \n filesystem on the new FC volume? Rieserfs?\nXFS\n\n\nDBs are \n 7.4.1(RH9), and 7.2.3 (RH8).\n\n\nAppreciate any \n pointers.\n\nThanks,Anjan-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 20 Apr 2004 22:14:51 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Moving postgres to FC disks" }, { "msg_contents": "On Tue, 2004-04-20 at 17:27, Joshua D. 
Drake wrote:\n\n> > -Currently, the internal RAID volume is ext3 filesystem. Any\n> > recommendations for the filesystem on the new FC volume? Rieserfs?\n> > \n> > \n> XFS\n\nWhat Linux distributions are popular in here for PG+XFS? \n\nI'm very disappointed that Redhat Enterprise 3 doesn't appear to support\nXFS/JFS, or anything else. Suse Server 8 seems very dated, at least from\nthe eval I downloaded. I'm curious as to where other people have gone\nwith the death of RH9. I'd have gone on to Redhat 3 if I wasn't\ninterested in getting some of the benefits of XFS at the same time ...\n\n\n\n", "msg_date": "Wed, 21 Apr 2004 14:50:12 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Moving postgres to FC disks" } ]
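A concrete sketch of the stop/rsync/restart sequence described above, with made-up paths -- adjust PGDATA, the SAN mount point and the service user to the local layout (an init script or a symlink can stand in for the explicit -D switch):

    su - postgres -c 'pg_ctl stop -D /var/lib/pgsql/data -m fast'
    rsync -a /var/lib/pgsql/data/ /mnt/san01/pgdata/
    su - postgres -c 'pg_ctl start -D /mnt/san01/pgdata -l /tmp/pgsql.log'
    # only after verifying the cluster runs cleanly from the new volume:
    # rm -rf /var/lib/pgsql/data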
[ { "msg_contents": "Hi,\n\nDual Xeon P4 2.8\nlinux RedHat AS 3\nkernel 2.4.21-4-EL-smp\n2 GB ram\n\nI can see the same problem:\n\nprocs memory swap io\nsystem cpu\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n1 0 0 96212 61056 1720240 0 0 0 0 101 11 25 0\n75 0\n 1 0 0 96212 61056 1720240 0 0 0 0 108 139 25\n0 75 0\n 1 0 0 96212 61056 1720240 0 0 0 0 104 173 25\n0 75 0\n 1 0 0 96212 61056 1720240 0 0 0 0 102 11 25\n0 75 0\n 1 0 0 96212 61056 1720240 0 0 0 0 101 11 25\n0 75 0\n 2 0 0 96204 61056 1720240 0 0 0 0 110 53866 31\n4 65 0\n 2 0 0 96204 61056 1720240 0 0 0 0 101 83176 41\n5 54 0\n 2 0 0 96204 61056 1720240 0 0 0 0 102 86050 39\n6 55 0\n 2 0 0 96204 61056 1720240 0 0 0 49 113 73642 41\n5 54 0\n 2 0 0 96204 61056 1720240 0 0 0 0 102 84211 40\n5 55 0\n 2 0 0 96204 61056 1720240 0 0 0 0 101 105165 39\n7 54 0\n 2 0 0 96204 61056 1720240 0 0 0 0 103 97754 38\n6 56 0\n 2 0 0 96204 61056 1720240 0 0 0 0 103 113668 36\n7 57 0\n 2 0 0 96204 61056 1720240 0 0 0 0 103 112003 37\n7 56 0\n\nregards,\nivan.\n\n", "msg_date": "Wed, 21 Apr 2004 07:22:17 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" } ]
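To run the CPU-affinity experiment mentioned earlier in this thread (pin the two busy backends and see whether the storm persists), something along these lines works on Linux. The PIDs are placeholders, and older util-linux versions take a hex mask instead of the -c cpu list:

    taskset -pc 0 <pid_of_backend_1>    # pin both backends to the same CPU ...
    taskset -pc 0 <pid_of_backend_2>    # ... or give them different CPU numbers to compare
    vmstat 1                            # watch the "cs" column in each configuration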
[ { "msg_contents": "My first post to this list :)\n\nScenario:\nI have a database used only with search queries with only one table that\nholds about 450.000/500.000 records.\nThe table is well indexed so that most of the queries are executed with\nindex scan but since there is a big text field in the table (360chars)\nsome search operation (with certain filters) ends up with seq scans.\nThis table is not written during normal operation: twice per week there\nis a batch program that insert about 35.000 records and updates another\n40.000.\n\nlast friday morning, after that batch has been executed, the database \nstarted responding really slowly to queries (expecially seq scans), \nafter a \"vacuum full analize\" things did get something better.\nYesterday the same: before the batch everything was perfect, after every \nquery was really slow, I've vacuum it again and now is ok.\nSince now the db was working fine, it's 4 month's old with two updates \nper week and I vacuum about once per month.\n\nI am using version 7.3 do I need to upgrade to 7.4? also, I was thinking\nabout setting this table in a kind of \"read-only\" mode to improve\nperformance, is this possible?\n\nThank you for your help\nEdoardo Ceccarelli\n", "msg_date": "Wed, 21 Apr 2004 09:17:23 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "slow seqscan" }, { "msg_contents": "Edoardo Ceccarelli wrote:\n\n> My first post to this list :)\n>\n> Scenario:\n> I have a database used only with search queries with only one table that\n> holds about 450.000/500.000 records.\n> The table is well indexed so that most of the queries are executed with\n> index scan but since there is a big text field in the table (360chars)\n> some search operation (with certain filters) ends up with seq scans.\n> This table is not written during normal operation: twice per week there\n> is a batch program that insert about 35.000 records and updates another\n> 40.000.\n>\n> last friday morning, after that batch has been executed, the database \n> started responding really slowly to queries (expecially seq scans), \n> after a \"vacuum full analize\" things did get something better.\n> Yesterday the same: before the batch everything was perfect, after \n> every query was really slow, I've vacuum it again and now is ok.\n> Since now the db was working fine, it's 4 month's old with two updates \n> per week and I vacuum about once per month.\n>\n> I am using version 7.3 do I need to upgrade to 7.4? also, I was thinking\n> about setting this table in a kind of \"read-only\" mode to improve\n> performance, is this possible?\n>\n> Thank you for your help\n> Edoardo Ceccarelli\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\nIn general we are going to need more information, like what kind of \nsearch filters you are using on the text field and an EXPLAIN ANALYZE. \nBut can you try and run the following, bearing in mind it will take a \nwhile to complete.\n\nREINDEX TABLE <table_name>\n\n From what I remember there were issues with index space not being \nreclaimed in a vacuum. I believe this was fixed in 7.4. 
By not \nreclaiming the space the indexes grow larger and larger over time, \ncausing PG to prefer a sequential scan over an index scan (I think).\n\n\nHope that helps\n\nNick\n\n\n", "msg_date": "Wed, 21 Apr 2004 08:47:58 +0100", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "Hi Edoardo,\n\n> The table is well indexed so that most of the queries are executed with\n> index scan but since there is a big text field in the table (360chars)\n> some search operation (with certain filters) ends up with seq scans.\n\nPlease paste the exact SELECT query that uses a seqscan, plus the \nEXPLAIN ANALYZE of the SELECT, and the psql output of \\d <table>.\n\n> This table is not written during normal operation: twice per week there\n> is a batch program that insert about 35.000 records and updates another\n> 40.000.\n\nAfter such an update, you need to run VACUUM ANALYZE <table>; Run it \nbefore the update as well, if it doesn't take that long.\n\n> last friday morning, after that batch has been executed, the database \n> started responding really slowly to queries (expecially seq scans), \n> after a \"vacuum full analize\" things did get something better.\n> Yesterday the same: before the batch everything was perfect, after every \n> query was really slow, I've vacuum it again and now is ok.\n> Since now the db was working fine, it's 4 month's old with two updates \n> per week and I vacuum about once per month.\n\nYou need to vacuum analyze (NOT full) once and HOUR, not once a month. \nAdd this command to your crontab to run once an hour and verify that \nit's working:\n\nvacuumdb -a -z -q\n\nOtherwise, install the auto vacuum utility found in \ncontrib/pg_autovacuum in the postgres source. Set this up. It will \nmonitor postgres and run vacuums and analyzes when necessary. You can \nthen remove your cron job.\n\n> I am using version 7.3 do I need to upgrade to 7.4? also, I was thinking\n> about setting this table in a kind of \"read-only\" mode to improve\n> performance, is this possible?\n\nThere's no read only mode to improve performance.\n\nUpgrading to 7.4 will more than likely improve the performance of your \ndatabase in general. Be careful to read the upgrade notes because there \nwere a few incompatibilities.\n\nChris\n\n", "msg_date": "Wed, 21 Apr 2004 15:52:55 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "\n>\n> In general we are going to need more information, like what kind of \n> search filters you are using on the text field and an EXPLAIN ANALYZE. \n> But can you try and run the following, bearing in mind it will take a \n> while to complete.\n>\n> REINDEX TABLE <table_name>\n>\n> From what I remember there were issues with index space not being \n> reclaimed in a vacuum. I believe this was fixed in 7.4. 
By not \n> reclaiming the space the indexes grow larger and larger over time, \n> causing PG to prefer a sequential scan over an index scan (I think).\n>\n>\n\nThe query is this:\nSELECT *, oid FROM annuncio400\nWHERE rubric = 'DD' AND LOWER(testo) Like LOWER('cbr%')\nOFFSET 0 LIMIT 11\n\ndba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------- \n\nLimit (cost=0.00..3116.00 rows=11 width=546) (actual time=51.47..56.42 \nrows=11 loops=1)\n -> Seq Scan on annuncio400 (cost=0.00..35490.60 rows=125 width=546) \n(actual time=51.47..56.40 rows=12 loops=1)\n Filter: ((rubric = 'DD'::bpchar) AND (lower((testo)::text) ~~ \n'cbr%'::text))\nTotal runtime: 56.53 msec\n(4 rows)\n\n\nBut the strangest thing ever is that if I change the filter with another \none that represent a smaller amount of data it uses the index scan!!!\ncheck this (same table, same query, different rubric=MA index):\n\ndba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'MA' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11; \n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------- \n\nLimit (cost=0.00..6630.72 rows=9 width=546) (actual time=42.74..42.74 \nrows=0 loops=1)\n -> Index Scan using rubric on annuncio400 (cost=0.00..6968.48 rows=9 \nwidth=546) (actual time=42.73..42.73 rows=0 loops=1)\n Index Cond: (rubric = 'MA'::bpchar)\n Filter: (lower((testo)::text) ~~ 'cbr%'::text)\nTotal runtime: 42.81 msec\n(5 rows)\n\n\nThanks for your help\nEdoardo\n\n>\n>\n", "msg_date": "Wed, 21 Apr 2004 10:34:48 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "> dba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n> 'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n> QUERY \n> PLAN \n> -------------------------------------------------------------------------------------------------------------------- \n> \n> Limit (cost=0.00..3116.00 rows=11 width=546) (actual time=51.47..56.42 \n> rows=11 loops=1)\n> -> Seq Scan on annuncio400 (cost=0.00..35490.60 rows=125 width=546) \n> (actual time=51.47..56.40 rows=12 loops=1)\n> Filter: ((rubric = 'DD'::bpchar) AND (lower((testo)::text) ~~ \n> 'cbr%'::text))\n> Total runtime: 56.53 msec\n> (4 rows)\n\nWhat happens if you go:\n\nCREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric, \nLOWER(testo));\n\nor even just:\n\nCREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));\n\n> But the strangest thing ever is that if I change the filter with another \n> one that represent a smaller amount of data it uses the index scan!!!\n\nWhat's strange about that? The less data is going to be retrieved, the \nmore likely postgres is to use the index.\n\nI suggest maybe increasing the amount of stats recorded for your rubrik \ncolumn:\n\nALTER TABLE annuncio400 ALTER rubrik SET STATISTICS 100;\nANALYZE annuncio400;\n\nYou could also try reducing the random_page_cost value in your \npostgresql.conf a little, say to 3 (if it's currently 4). 
That will \nmake postgres more likely to use index scans over seq scans.\n\nChris\n\n", "msg_date": "Wed, 21 Apr 2004 16:53:57 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "\n> What happens if you go:\n>\n> CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric, \n> LOWER(testo));\n>\n> or even just:\n>\n> CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));\n>\nI wasn't able to make this 2 field index with lower:\n\ndba400=# CREATE INDEX annuncio400_rubric_testo_idx ON \nannuncio400(rubric, LOWER(testo));\nERROR: parser: parse error at or near \"(\" at character 71\n\nseems impossible to creat 2 field indexes with lower function.\n\nThe other one does not make it use the index.\n\n\n>> But the strangest thing ever is that if I change the filter with \n>> another one that represent a smaller amount of data it uses the \n>> index scan!!!\n>\n>\n> What's strange about that? The less data is going to be retrieved, \n> the more likely postgres is to use the index.\n>\ncan't understand this policy:\n\ndba400=# SELECT count(*) from annuncio400 where rubric='DD';\n count\n-------\n 6753\n(1 row)\n\ndba400=# SELECT count(*) from annuncio400 where rubric='MA';\n count\n-------\n 2165\n(1 row)\n\nso it's using the index on 2000 rows and not for 6000? it's not that \nbig difference, isn't it?\n\n\n> I suggest maybe increasing the amount of stats recorded for your \n> rubrik column:\n>\n> ALTER TABLE annuncio400 ALTER rubrik SET STATISTICS 100;\n> ANALYZE annuncio400;\n>\ndone, almost the same, still not using index\n\n> You could also try reducing the random_page_cost value in your \n> postgresql.conf a little, say to 3 (if it's currently 4). That will \n> make postgres more likely to use index scans over seq scans.\n>\n\nchanged the setting on postgresql.conf, restarted the server,\nnothing has changed.\n\nwhat about setting this to false?\n#enable_seqscan = true\n\nthanks again\nEdoardo\n", "msg_date": "Wed, 21 Apr 2004 11:41:11 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "tried the\n\nenable_seqscan = false\n\nand I'm having all index scans, timing has improved from 600ms to 18ms\n\nwondering what other implications I might expect.\n\n\n\n\nEdoardo Ceccarelli ha scritto:\n\n>\n>> What happens if you go:\n>>\n>> CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric, \n>> LOWER(testo));\n>>\n>> or even just:\n>>\n>> CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));\n>>\n> I wasn't able to make this 2 field index with lower:\n>\n> dba400=# CREATE INDEX annuncio400_rubric_testo_idx ON \n> annuncio400(rubric, LOWER(testo));\n> ERROR: parser: parse error at or near \"(\" at character 71\n>\n> seems impossible to creat 2 field indexes with lower function.\n>\n> The other one does not make it use the index.\n>\n>\n>>> But the strangest thing ever is that if I change the filter with \n>>> another one that represent a smaller amount of data it uses the \n>>> index scan!!!\n>>\n>>\n>>\n>> What's strange about that? 
The less data is going to be retrieved, \n>> the more likely postgres is to use the index.\n>>\n> can't understand this policy:\n>\n> dba400=# SELECT count(*) from annuncio400 where rubric='DD';\n> count\n> -------\n> 6753\n> (1 row)\n>\n> dba400=# SELECT count(*) from annuncio400 where rubric='MA';\n> count\n> -------\n> 2165\n> (1 row)\n>\n> so it's using the index on 2000 rows and not for 6000? it's not that \n> big difference, isn't it?\n>\n>\n>> I suggest maybe increasing the amount of stats recorded for your \n>> rubrik column:\n>>\n>> ALTER TABLE annuncio400 ALTER rubrik SET STATISTICS 100;\n>> ANALYZE annuncio400;\n>>\n> done, almost the same, still not using index\n>\n>> You could also try reducing the random_page_cost value in your \n>> postgresql.conf a little, say to 3 (if it's currently 4). That will \n>> make postgres more likely to use index scans over seq scans.\n>>\n>\n> changed the setting on postgresql.conf, restarted the server,\n> nothing has changed.\n>\n> what about setting this to false?\n> #enable_seqscan = true\n>\n> thanks again\n> Edoardo\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n>\n", "msg_date": "Wed, 21 Apr 2004 12:10:02 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "\n> enable_seqscan = false\n> \n> and I'm having all index scans, timing has improved from 600ms to 18ms\n> \n> wondering what other implications I might expect.\n\nLots of really bad implications...it's really not a good idea.\n\nChris\n\n", "msg_date": "Wed, 21 Apr 2004 18:15:29 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "just created a copy of the same database and it shows that is the \nanalyze that's messing things:\n\nSlow seqscan query executed on dba400\n\ndba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3116.00 rows=11 width=546) (actual time=46.66..51.40 \nrows=11 loops=1)\n -> Seq Scan on annuncio400 (cost=0.00..35490.60 rows=125 width=546) \n(actual time=46.66..51.38 rows=12 loops=1)\n Filter: ((rubric = 'DD'::bpchar) AND (lower((testo)::text) ~~ \n'cbr%'::text))\n Total runtime: 51.46 msec\n(4 rows)\n\n\nfastest index scan query on dba400b (exact copy of dba400)\n\n\ndba400b=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..7058.40 rows=9 width=546) (actual time=1.36..8.18 \nrows=11 loops=1)\n -> Index Scan using rubric on annuncio400 (cost=0.00..7369.42 \nrows=9 width=546) (actual time=1.35..8.15 rows=12 loops=1)\n Index Cond: (rubric = 'DD'::bpchar)\n Filter: (lower((testo)::text) ~~ 'cbr%'::text)\n Total runtime: 8.28 msec\n(5 rows)\n\n\n\nwhat about this index you suggested? 
it gives me sintax error while \ntrying to create it:\n\nCREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric, \nLOWER(testo));\n\n\nThanks\nEdoardo\n\nChristopher Kings-Lynne ha scritto:\n\n>\n>> enable_seqscan = false\n>>\n>> and I'm having all index scans, timing has improved from 600ms to 18ms\n>>\n>> wondering what other implications I might expect.\n>\n>\n> Lots of really bad implications...it's really not a good idea.\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n>\n>\n", "msg_date": "Wed, 21 Apr 2004 13:50:04 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "\nOn Wed, 21 Apr 2004, Edoardo Ceccarelli wrote:\n\n>\n> > What happens if you go:\n> >\n> > CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric,\n> > LOWER(testo));\n> >\n> > or even just:\n> >\n> > CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));\n> >\n> I wasn't able to make this 2 field index with lower:\n>\n> dba400=# CREATE INDEX annuncio400_rubric_testo_idx ON\n> annuncio400(rubric, LOWER(testo));\n> ERROR: parser: parse error at or near \"(\" at character 71\n\nThat's a 7.4 feature I think (and I think the version with two columns\nmay need extra parens around the lower()). I think the only way to do\nsomething equivalent in 7.3 is to make a function that concatenates the\ntwo in some fashion after having applied the lower to the one part and\nthen using that in the queries as well. Plus, if you're not in \"C\"\nlocale, I'm not sure that it'd help in 7.3 anyway.\n\n> >> But the strangest thing ever is that if I change the filter with\n> >> another one that represent a smaller amount of data it uses the\n> >> index scan!!!\n> >\n> >\n> > What's strange about that? The less data is going to be retrieved,\n> > the more likely postgres is to use the index.\n> >\n> can't understand this policy:\n>\n> dba400=# SELECT count(*) from annuncio400 where rubric='DD';\n> count\n> -------\n> 6753\n> (1 row)\n>\n> dba400=# SELECT count(*) from annuncio400 where rubric='MA';\n> count\n> -------\n> 2165\n> (1 row)\n>\n> so it's using the index on 2000 rows and not for 6000? it's not that\n> big difference, isn't it?\n\nIt's a question of how many pages it thinks it's going to have to retrieve\nin order to handle the request. If it say needs (or think it needs) to\nretrieve 50% of the pages, then given a random_page_cost of 4, it's going\nto expect the index scan to be about twice the cost.\n\nGenerally speaking one good way to compare is to try the query with\nexplain analyze and then change parameters like enable_seqscan and try the\nquery with explain analyze again and compare the estimated rows and costs.\nThat'll give an idea of how it expects the two versions of the query to\ncompare speed wise.\n", "msg_date": "Wed, 21 Apr 2004 07:31:27 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "\n>>can't understand this policy:\n>>\n>>dba400=# SELECT count(*) from annuncio400 where rubric='DD';\n>> count\n>>-------\n>> 6753\n>>(1 row)\n>>\n>>dba400=# SELECT count(*) from annuncio400 where rubric='MA';\n>> count\n>>-------\n>> 2165\n>>(1 row)\n>>\n>>so it's using the index on 2000 rows and not for 6000? 
it's not that\n>>big difference, isn't it?\n>> \n>>\n>\n>It's a question of how many pages it thinks it's going to have to retrieve\n>in order to handle the request. If it say needs (or think it needs) to\n>retrieve 50% of the pages, then given a random_page_cost of 4, it's going\n>to expect the index scan to be about twice the cost.\n>\n>Generally speaking one good way to compare is to try the query with\n>explain analyze and then change parameters like enable_seqscan and try the\n>query with explain analyze again and compare the estimated rows and costs.\n>That'll give an idea of how it expects the two versions of the query to\n>compare speed wise.\n>\n>\n> \n>\nOk then how do you explain this?\njust created a copy of the same database\n\nSlow seqscan query executed on dba400\n\ndba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------- \n\nLimit (cost=0.00..3116.00 rows=11 width=546) (actual time=46.66..51.40 \nrows=11 loops=1)\n -> Seq Scan on annuncio400 (cost=0.00..35490.60 rows=125 width=546) \n(actual time=46.66..51.38 rows=12 loops=1)\n Filter: ((rubric = 'DD'::bpchar) AND (lower((testo)::text) ~~ \n'cbr%'::text))\nTotal runtime: 51.46 msec\n(4 rows)\n\n\nfastest index scan query on dba400b (exact copy of dba400)\n\n\ndba400b=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric = \n'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------ \n\nLimit (cost=0.00..7058.40 rows=9 width=546) (actual time=1.36..8.18 \nrows=11 loops=1)\n -> Index Scan using rubric on annuncio400 (cost=0.00..7369.42 rows=9 \nwidth=546) (actual time=1.35..8.15 rows=12 loops=1)\n Index Cond: (rubric = 'DD'::bpchar)\n Filter: (lower((testo)::text) ~~ 'cbr%'::text)\nTotal runtime: 8.28 msec\n(5 rows)\n\n\nanyway, shall I try to lower the random_page value since I get an index \nscan? I mean that in my case I've already noted that with index scan \nthat query get executed in 1/10 of the seqscan speed.\n\nThank you\nEdoardo\n", "msg_date": "Wed, 21 Apr 2004 18:23:12 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow seqscan" }, { "msg_contents": "Edoardo Ceccarelli <[email protected]> writes:\n> I wasn't able to make this 2 field index with lower:\n\n> dba400=# CREATE INDEX annuncio400_rubric_testo_idx ON \n> annuncio400(rubric, LOWER(testo));\n> ERROR: parser: parse error at or near \"(\" at character 71\n\n> seems impossible to creat 2 field indexes with lower function.\n\nYou need 7.4 to do that; previous releases don't support multi-column\nfunctional indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 14:20:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow seqscan " } ]
[ { "msg_contents": "Hi,\n\nHas anyone had a look at:\n\nhttp://people.ac.upc.es/zgomez/\n\nI realize that MySQL & PG cannot really be compared (especially when you \nconsider the issues that MySQL has with things like data integrity) but \nstill surely PG would perform better than the stats show (i.e. #7 31.28 \nseconds versus 42 minutes!!!).\n\nOn a side note it certainly looks like linux kernel 2.6 is quite a bit \nfaster in comparision to 2.4.\n\nNick\n\n\n\n", "msg_date": "Wed, 21 Apr 2004 09:31:39 +0100", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "> I realize that MySQL & PG cannot really be compared (especially when you \n> consider the issues that MySQL has with things like data integrity) but \n> still surely PG would perform better than the stats show (i.e. #7 31.28 \n> seconds versus 42 minutes!!!).\n\nWe know that PostgreSQL 7.5 will perform much better than 7.4 did due to\nthe efforts of OSDN and Tom.\n\nI've enquired as to whether they ran ANALYZE after the data load. They\ndon't explicitly mention it, and given the mention it took 2.5days to\nload 1GB of data, they're not regular PostgreSQL users.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc", "msg_date": "Wed, 21 Apr 2004 08:19:21 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "On Wed, 2004-04-21 at 08:19, Rod Taylor wrote:\n> > I realize that MySQL & PG cannot really be compared (especially when you \n> > consider the issues that MySQL has with things like data integrity) but \n> > still surely PG would perform better than the stats show (i.e. #7 31.28 \n> > seconds versus 42 minutes!!!).\n> \n> We know that PostgreSQL 7.5 will perform much better than 7.4 did due to\n> the efforts of OSDN and Tom.\n\nOSDL not OSDN.\n\n> I've enquired as to whether they ran ANALYZE after the data load. They\n> don't explicitly mention it, and given the mention it took 2.5days to\n> load 1GB of data, they're not regular PostgreSQL users.\n\n", "msg_date": "Wed, 21 Apr 2004 08:22:29 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "\nOn 21/04/2004 09:31 Nick Barr wrote:\n> Hi,\n> \n> Has anyone had a look at:\n> \n> http://people.ac.upc.es/zgomez/\n> \n> I realize that MySQL & PG cannot really be compared (especially when you \n> consider the issues that MySQL has with things like data integrity) but \n> still surely PG would perform better than the stats show (i.e. #7 31.28 \n> seconds versus 42 minutes!!!).\n\nLooks like he's using the default postgresql.conf settings in which case \nI'm not suprised at pg looking so slow. His stated use of foreign keys \ninvalidates the tests anyway as MyISAM tables don't support FKs so we're \nprobably seeing FK check overheads in pg that are simply ignore by MySQL. 
\nIn an honest test, MySQL should be reported as failing those tests.\n\nPerhaps one of the advocay team will pick up the batton?\n> \n> On a side note it certainly looks like linux kernel 2.6 is quite a bit \n> faster in comparision to 2.4.\n\nYes, I've seen other benchmarks which also show that.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Wed, 21 Apr 2004 13:55:21 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "> Looks like he's using the default postgresql.conf settings in which case\n> I'm not suprised at pg looking so slow.\n\nThe question also is, IMHO, why the hell, postgreSQL still comes out of the\nbox with so stupid configuration defaults, totally underestimated for todays\naverage hardware configuration (1+GHz, 0.5+GB RAM, fast FSB, fast HDD).\n\nIt seems to me better strategy to force that 1% of users to \"downgrade\" cfg.\nthan vice-versa.\n\nregards\nch\n\n", "msg_date": "Wed, 21 Apr 2004 15:31:02 +0200", "msg_from": "\"Cestmir Hybl\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "On 21/04/2004 14:31 Cestmir Hybl wrote:\n> > Looks like he's using the default postgresql.conf settings in which\n> case\n> > I'm not suprised at pg looking so slow.\n> \n> The question also is, IMHO, why the hell, postgreSQL still comes out of\n> the\n> box with so stupid configuration defaults, totally underestimated for\n> todays\n> average hardware configuration (1+GHz, 0.5+GB RAM, fast FSB, fast HDD).\n> \n> It seems to me better strategy to force that 1% of users to \"downgrade\"\n> cfg.\n> than vice-versa.\n> \n> regards\n> ch\n> \n\nThis has been discussed many times before. Check the archives.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Wed, 21 Apr 2004 15:08:09 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Paul Thomas wrote:\n\n> Looks like he's using the default postgresql.conf settings in which \n> case I'm not suprised at pg looking so slow. His stated use of foreign \n> keys invalidates the tests anyway as MyISAM tables don't support FKs \n> so we're probably seeing FK check overheads in pg that are simply \n> ignore by MySQL. In an honest test, MySQL should be reported as \n> failing those tests.\n\n\nEither failures, or they should not have been using MyISAM, they should \nhave used the table format that supports FK's. This is just not apples \nto apples.\n\n\n", "msg_date": "Wed, 21 Apr 2004 14:57:16 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PG TPC-H benchmarks" } ]
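For reference, the post-load step Rod is asking about amounts to nothing more than the following, run once against the freshly loaded database (a generic illustration, not taken from the benchmark scripts):

-- give the planner row-count and distribution statistics to work with
VACUUM ANALYZE;

-- quick sanity check that statistics exist: reltuples should no longer
-- be zero for the loaded tables
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY relpages DESC
 LIMIT 10;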
[ { "msg_contents": "I just want to make sure that I am interpreting this data correctly.\n\nFrom pg_statio_user_tables, I have pulled relname, heap_blks_read, \nheap_blks_hit. I get several rows like this:\nrelname\t\t\t\theap_bkls_read\theap_blks_hit\n clmhdr \t8607161\t\t\t196547165\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\n\nSo this means that I am getting over a 100% cache hit ratio for this table, \nright? If not, please help me understand what these numbers mean.\n\nThanks,\n\nChris\n\n", "msg_date": "Wed, 21 Apr 2004 11:34:16 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help understanding stat tables" }, { "msg_contents": "I think I have figured my problem out.\n\nI was taking heap_blks_hit / heap_blks_read for my hit pct.\n\nIt should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?\n\nThanks\nOn Wednesday 21 April 2004 11:34, Chris Hoover wrote:\n> I just want to make sure that I am interpreting this data correctly.\n>\n> From pg_statio_user_tables, I have pulled relname, heap_blks_read,\n> heap_blks_hit. I get several rows like this:\n> relname\t\t\t\theap_bkls_read\theap_blks_hit\n> clmhdr \t8607161\t\t\t196547165\n>\n>\n> So this means that I am getting over a 100% cache hit ratio for this table,\n> right? If not, please help me understand what these numbers mean.\n>\n> Thanks,\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 21 Apr 2004 12:24:35 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help understanding stat tables" }, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> I was taking heap_blks_hit / heap_blks_read for my hit pct.\n> It should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?\n\nRight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2004 15:04:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help understanding stat tables " } ]
[ { "msg_contents": "Hello,\n\nI have a bi-PIII server with 2Gb of RAM with Debian and a PostgreSQL 7.4\nrunning on. What are the bests settings for shared buffers, sort memory and\neffective cache size?\n\nMy main database have a small/mid range size: some tables may have 1 or 2\nmillions of records.\n\nThanks\n\nFr�d�ric Robinet\[email protected]\n\n", "msg_date": "Wed, 21 Apr 2004 18:29:06 +0200", "msg_from": "=?iso-8859-1?Q?Fr=E9d=E9ric_Robinet?= <[email protected]>", "msg_from_op": true, "msg_subject": "Shared buffers, Sort memory, Effective Cache Size" }, { "msg_contents": "Hello,\n\nI have recently configured my PG7.3 on a G5 (8GB RAM) with\nshmmax set to 512MB and shared_buffer=50000, sort_mem=4096\nand effective cache size = 10000. It seems working great so far but\nI am wondering if I should make effctive cache size larger myself.\n\nTnaks!\n\nQing\nOn Apr 21, 2004, at 9:29 AM, Frédéric Robinet wrote:\n\n> Hello,\n>\n> I have a bi-PIII server with 2Gb of RAM with Debian and a PostgreSQL \n> 7.4\n> running on. What are the bests settings for shared buffers, sort \n> memory and\n> effective cache size?\n>\n> My main database have a small/mid range size: some tables may have 1 \n> or 2\n> millions of records.\n>\n> Thanks\n>\n> Frédéric Robinet\n> [email protected]\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Wed, 21 Apr 2004 10:01:30 -0700", "msg_from": "Qing Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, Sort memory, Effective Cache Size" }, { "msg_contents": "On Wed, 21 Apr 2004 10:01:30 -0700, Qing Zhao <[email protected]>\nwrote:\n>I have recently configured my PG7.3 on a G5 (8GB RAM) with\n>shmmax set to 512MB and shared_buffer=50000, sort_mem=4096\n>and effective cache size = 10000. It seems working great so far but\n>I am wondering if I should make effctive cache size larger myself.\n\nYes, much larger! And while you are at it make shared_buffers smaller.\n\nServus\n Manfred\n", "msg_date": "Tue, 27 Apr 2004 14:12:20 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, Sort memory, Effective Cache Size" } ]
[ { "msg_contents": "\n\n---------- Forwarded Message ----------\n\nSubject: Re: [PERFORM] MySQL vs PG TPC-H benchmarks\nDate: Wed, 21 Apr 2004 13:55:21 +0100\nFrom: Paul Thomas <[email protected]>\nTo: Nick Barr <[email protected]>\nCc: \"pgsql-performance @ postgresql . org\" <[email protected]>\n\nOn 21/04/2004 09:31 Nick Barr wrote:\n> Hi,\n>\n> Has anyone had a look at:\n>\n> http://people.ac.upc.es/zgomez/\n>\n> I realize that MySQL & PG cannot really be compared (especially when you\n> consider the issues that MySQL has with things like data integrity) but\n> still surely PG would perform better than the stats show (i.e. #7 31.28\n> seconds versus 42 minutes!!!).\n\nLooks like he's using the default postgresql.conf settings in which case\nI'm not suprised at pg looking so slow. His stated use of foreign keys\ninvalidates the tests anyway as MyISAM tables don't support FKs so we're\nprobably seeing FK check overheads in pg that are simply ignore by MySQL.\nIn an honest test, MySQL should be reported as failing those tests.\n\nPerhaps one of the advocay team will pick up the batton?\n\n> On a side note it certainly looks like linux kernel 2.6 is quite a bit\n> faster in comparision to 2.4.\n\nYes, I've seen other benchmarks which also show that.\n\n--\nPaul Thomas\n+------------------------------+---------------------------------------------\n+\n\n| Thomas Micro Systems Limited | Software Solutions for\n\nBusiness |\n\n| Computer Consultants |\n\nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------\n+\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-------------------------------------------------------\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 21 Apr 2004 10:31:31 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Folks,\n\nI've sent a polite e-mail to Mr. Gomez offering our help. Please, nobody \nflame him!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 21 Apr 2004 10:47:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Josh Berkus wrote:\n\n> Folks,\n> \n> I've sent a polite e-mail to Mr. Gomez offering our help. Please, nobody \n> flame him!\n> \n\nPlease keep in mind that the entire test has, other than a similar \ndatabase schema and query types maybe, nothing to do with a TPC-H. I \ndon't see any kind of SUT. Foreign key support on the DB level is not \nrequired by any of the TPC benchmarks. But the System Under Test, which \nis the combination of middleware application and database together with \nall computers and network components these parts are running on, must \nimplement all the required semantics, like ACID properties, referential \nintegrity &c. One could implement a TPC-H with flat files, it's just a \nmajor pain in the middleware.\n\nA proper TPC benchmark implementation would for example be a complete \nPHP+DB application, where the user interaction is done by an emulated \n\"browser\" and what is measured is the http response times, not anything \ngoing on between PHP and the DB. 
Assuming that all requirements of the \nTPC specification are implemented by either using available DB features, \nor including appropriate workarounds in the PHP code, that would very \nwell lead to something that can compare PHP+MySQL vs. PHP+PostgreSQL.\n\nAll TPC benchmarks I have seen are performed by timing such a system \nafter a considerable rampup time, giving the DB system a chance to \nproperly populate caches and so forth. Rebooting the machine just before \nthe test is the wrong thing here and will especially kill any advanced \ncache algorithms like ARC.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Wed, 21 Apr 2004 15:38:47 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Folks,\n\nI���m doing the 100GB TPC-H and I���ll show the previous\nresults to our community (Postgres) in 3 weeks before\nfinishing the study.\n\nMy intention is to carry through a test with a VLDB in\na low cost platform (PostgreSQL, Linux and cheap HW)\nand not to compare with another DBMS.\n\nSo far I can tell you that the load time on PG 7.4.2\nwith kernel 2.6.5 on Opteron 64 model 240 in RAID 0\nwith 8 disks (960 GB) loaded the database in less than\n24 hours. \nAbout 7hs:30min to load the data and 16:09:25 to\ncreate the indexes\n\nThe Power test still running and that���s why I���ll not\npresent anything so far. Now I���ll just send to the\nlist my environment configuration.\n\n- The configuration of the machine is:\nDual opteron 64 bits model 240\n4GB RAM\n960 GB on RAID 0\nMandrake Linux 64 with Kernel 2.6.5 (I compiled a\nkernel for this test)\nJava SDK java version \"1.4.2_04\"\nPostgreSQL JDBC pg74.1jdbc3.jar\n\n- The TPC-H configuration is:\nTPC-H 2.0.0\n100GB\nload using flat files\nRefresh functions using java\n\n- The PostgreSQL 7.4.2 configuration is:\n\nadd_missing_from | on\n australian_timezones | off\n authentication_timeout | 60\n check_function_bodies | on\n checkpoint_segments | 128\n checkpoint_timeout | 300\n checkpoint_warning | 30\n client_encoding | SQL_ASCII\n client_min_messages | notice\n commit_delay | 0\n commit_siblings | 5\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n DateStyle | ISO, MDY\n db_user_namespace | off\n deadlock_timeout | 1000\n debug_pretty_print | off\n debug_print_parse | off\n debug_print_plan | off\n debug_print_rewritten | off\n default_statistics_target | 10\n default_transaction_isolation | read committed\n default_transaction_read_only | off\n dynamic_library_path | $libdir\n effective_cache_size | 150000\n enable_hashagg | on\n enable_hashjoin | on\n enable_indexscan | on\n enable_mergejoin | on\n enable_nestloop | on\n enable_seqscan | on\n enable_sort | on\n enable_tidscan | on\n explain_pretty_print | on\n extra_float_digits | 0\n from_collapse_limit | 8\n fsync | off\n geqo | on\n geqo_effort | 1\n geqo_generations | 0\ngeqo_pool_size | 0\n geqo_selection_bias | 2\n geqo_threshold | 11\n join_collapse_limit | 8\n krb_server_keyfile | unset\n lc_collate | en_US\n lc_ctype | en_US\n lc_messages | C\n lc_monetary | C\n lc_numeric | C\n lc_time | C\n log_connections | off\n log_duration | off\n log_error_verbosity | default\n log_executor_stats | off\n 
log_hostname | off\n log_min_duration_statement | -1\n log_min_error_statement | panic\n log_min_messages | notice\n log_parser_stats | off\n log_pid | off\n log_planner_stats | off\n log_source_port | off\n log_statement | off\n log_statement_stats | off\n log_timestamp | off\n max_connections | 10\n max_expr_depth | 10000\n max_files_per_process | 1000\n max_fsm_pages | 20000\n max_fsm_relations | 1000\n max_locks_per_transaction | 64\n password_encryption | on\n port | 5432\n pre_auth_delay | 0\n preload_libraries | unset\n random_page_cost | 1.25\n regex_flavor | advanced\n rendezvous_name | unset\n search_path | $user,public\n server_encoding | SQL_ASCII\n server_version | 7.4.2\n shared_buffers | 40000\n silent_mode | off\nsort_mem | 65536\n sql_inheritance | on\n ssl | off\n statement_timeout | 10000000\n stats_block_level | off\n stats_command_string | off\n stats_reset_on_server_start | on\n stats_row_level | off\n stats_start_collector | on\n superuser_reserved_connections | 2\n syslog | 0\n syslog_facility | LOCAL0\n syslog_ident | postgres\n tcpip_socket | on\n TimeZone | unknown\n trace_notify | off\n transaction_isolation | read committed\n transaction_read_only | off\n transform_null_equals | off\n unix_socket_directory | unset\n unix_socket_group | unset\n unix_socket_permissions | 511\n vacuum_mem | 65536\n virtual_host | unset\n wal_buffers | 32\n wal_debug | 0\n wal_sync_method | fdatasync\n zero_damaged_pages | off\n(113 rows)\n\n\nsuggestions, doubts and commentaries are very welcome\n\nregards \n______________________________\nEduardo Cunha de Almeida\nAdministra������o de Banco de Dados\nUFPR - CCE \n+55-41-361-3321\[email protected]\[email protected]\n\n--- Jan Wieck <[email protected]> wrote:\n> Josh Berkus wrote:\n> \n> > Folks,\n> > \n> > I've sent a polite e-mail to Mr. Gomez offering\n> our help. Please, nobody \n> > flame him!\n> > \n> \n> Please keep in mind that the entire test has, other\n> than a similar \n> database schema and query types maybe, nothing to do\n> with a TPC-H. I \n> don't see any kind of SUT. Foreign key support on\n> the DB level is not \n> required by any of the TPC benchmarks. But the\n> System Under Test, which \n> is the combination of middleware application and\n> database together with \n> all computers and network components these parts are\n> running on, must \n> implement all the required semantics, like ACID\n> properties, referential \n> integrity &c. One could implement a TPC-H with flat\n> files, it's just a \n> major pain in the middleware.\n> \n> A proper TPC benchmark implementation would for\n> example be a complete \n> PHP+DB application, where the user interaction is\n> done by an emulated \n> \"browser\" and what is measured is the http response\n> times, not anything \n> going on between PHP and the DB. Assuming that all\n> requirements of the \n> TPC specification are implemented by either using\n> available DB features, \n> or including appropriate workarounds in the PHP\n> code, that would very \n> well lead to something that can compare PHP+MySQL\n> vs. PHP+PostgreSQL.\n> \n> All TPC benchmarks I have seen are performed by\n> timing such a system \n> after a considerable rampup time, giving the DB\n> system a chance to \n> properly populate caches and so forth. 
Rebooting the\n> machine just before \n> the test is the wrong thing here and will especially\n> kill any advanced \n> cache algorithms like ARC.\n> \n> \n> Jan\n> \n> -- \n>\n#======================================================================#\n> # It's easier to get forgiveness for being wrong\n> than for being right. #\n> # Let's break this rule - forgive me. \n> #\n> #==================================================\n> [email protected] #\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> \n> http://www.postgresql.org/docs/faqs/FAQ.htmlIP 5:\n> Have you checked our extensive FAQ?\n> \n> \nhttp://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Photos: High-quality 4x6 digital prints for 25���\nhttp://photos.yahoo.com/ph/print_splash\n", "msg_date": "Thu, 22 Apr 2004 05:53:18 -0700 (PDT)", "msg_from": "Eduardo Almeida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "...and on Thu, Apr 22, 2004 at 05:53:18AM -0700, Eduardo Almeida used the keyboard:\n> \n> - The configuration of the machine is:\n> Dual opteron 64 bits model 240\n> 4GB RAM\n> 960 GB on RAID 0\n> Mandrake Linux 64 with Kernel 2.6.5 (I compiled a\n> kernel for this test)\n> Java SDK java version \"1.4.2_04\"\n> PostgreSQL JDBC pg74.1jdbc3.jar\n> \n> - The TPC-H configuration is:\n> TPC-H 2.0.0\n> 100GB\n> load using flat files\n> Refresh functions using java\n> \n\nI'll just add for the reference, to those that aren't aware of it, the Java\nvirtual machine for x86_64 only exists in the 1.5 branch so far, and it's so\nutterly unstable that most every notable shuffling around in the memory\ncrashes it. :)\n\nHence the 1.4.2_04 is a 32-bit application running in 32-bit mode.\n\nI won't be getting into how much this affects the benchmarks as I didn't\nreally get into how CPU- and memory-intensive the refresh functions are in\nthese, so as I said - let's keep it a reference.\n\nCheers,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/", "msg_date": "Thu, 22 Apr 2004 15:42:49 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Grega,\n\nThat���s why I used java 32bits and needed to compile\nthe kernel 2.6.5 with the 32bits modules.\nTo reference, Sun has java 64bits just to IA64 and\nSolaris Sparc 64 not to Opteron.\n\nregards,\nEduardo\n--- Grega Bremec <[email protected]> wrote:\n> ...and on Thu, Apr 22, 2004 at 05:53:18AM -0700,\n> Eduardo Almeida used the keyboard:\n> > \n> > - The configuration of the machine is:\n> > Dual opteron 64 bits model 240\n> > 4GB RAM\n> > 960 GB on RAID 0\n> > Mandrake Linux 64 with Kernel 2.6.5 (I compiled a\n> > kernel for this test)\n> > Java SDK java version \"1.4.2_04\"\n> > PostgreSQL JDBC pg74.1jdbc3.jar\n> > \n> > - The TPC-H configuration is:\n> > TPC-H 2.0.0\n> > 100GB\n> > load using flat files\n> > Refresh functions using java\n> > \n> \n> I'll just add for the reference, to those that\n> aren't aware of it, the Java\n> virtual machine for x86_64 only exists in the 1.5\n> branch so far, and it's so\n> utterly unstable that most every notable shuffling\n> around in the memory\n> crashes it. 
:)\n> \n> Hence the 1.4.2_04 is a 32-bit application running\n> in 32-bit mode.\n> \n> I won't be getting into how much this affects the\n> benchmarks as I didn't\n> really get into how CPU- and memory-intensive the\n> refresh functions are in\n> these, so as I said - let's keep it a reference.\n> \n> Cheers,\n> -- \n> Grega Bremec\n> Senior Administrator\n> Noviforum Ltd., Software & Media\n> http://www.noviforum.si/\n> \n\n> ATTACHMENT part 2 application/pgp-signature \n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Photos: High-quality 4x6 digital prints for 25���\nhttp://photos.yahoo.com/ph/print_splash\n", "msg_date": "Thu, 22 Apr 2004 06:59:10 -0700 (PDT)", "msg_from": "Eduardo Almeida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Eduardo Almeida <[email protected]> writes:\n> About 7hs:30min to load the data and 16:09:25 to\n> create the indexes\n\nYou could probably improve the index-create time by temporarily\nincreasing sort_mem. It wouldn't be unreasonable to give CREATE INDEX\nseveral hundred meg to work in. (You don't want sort_mem that big\nnormally, because there may be many sorts happening in parallel,\nbut in a data-loading context there'll just be one active sort.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 11:54:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks " }, { "msg_contents": "Eduardo Almeida wrote:\n\n> Folks,\n> \n> I�m doing the 100GB TPC-H and I�ll show the previous\n> results to our community (Postgres) in 3 weeks before\n> finishing the study.\n> \n> My intention is to carry through a test with a VLDB in\n> a low cost platform (PostgreSQL, Linux and cheap HW)\n> and not to compare with another DBMS.\n\nQphH and Price/QphH will be enought for us to see where in the list we \nare. Unfortunately there are only Sybase and MS SQL results published in \nthe 100 GB category. The 300 GB has DB2 as well. Oracle starts at 1 TB \nand in the 10 TB category Oracle and DB2 are the only players left.\n\n\nJan\n\n> \n> So far I can tell you that the load time on PG 7.4.2\n> with kernel 2.6.5 on Opteron 64 model 240 in RAID 0\n> with 8 disks (960 GB) loaded the database in less than\n> 24 hours. \n> About 7hs:30min to load the data and 16:09:25 to\n> create the indexes\n> \n> The Power test still running and that�s why I�ll not\n> present anything so far. 
Now I�ll just send to the\n> list my environment configuration.\n> \n> - The configuration of the machine is:\n> Dual opteron 64 bits model 240\n> 4GB RAM\n> 960 GB on RAID 0\n> Mandrake Linux 64 with Kernel 2.6.5 (I compiled a\n> kernel for this test)\n> Java SDK java version \"1.4.2_04\"\n> PostgreSQL JDBC pg74.1jdbc3.jar\n> \n> - The TPC-H configuration is:\n> TPC-H 2.0.0\n> 100GB\n> load using flat files\n> Refresh functions using java\n> \n> - The PostgreSQL 7.4.2 configuration is:\n> \n> add_missing_from | on\n> australian_timezones | off\n> authentication_timeout | 60\n> check_function_bodies | on\n> checkpoint_segments | 128\n> checkpoint_timeout | 300\n> checkpoint_warning | 30\n> client_encoding | SQL_ASCII\n> client_min_messages | notice\n> commit_delay | 0\n> commit_siblings | 5\n> cpu_index_tuple_cost | 0.001\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> DateStyle | ISO, MDY\n> db_user_namespace | off\n> deadlock_timeout | 1000\n> debug_pretty_print | off\n> debug_print_parse | off\n> debug_print_plan | off\n> debug_print_rewritten | off\n> default_statistics_target | 10\n> default_transaction_isolation | read committed\n> default_transaction_read_only | off\n> dynamic_library_path | $libdir\n> effective_cache_size | 150000\n> enable_hashagg | on\n> enable_hashjoin | on\n> enable_indexscan | on\n> enable_mergejoin | on\n> enable_nestloop | on\n> enable_seqscan | on\n> enable_sort | on\n> enable_tidscan | on\n> explain_pretty_print | on\n> extra_float_digits | 0\n> from_collapse_limit | 8\n> fsync | off\n> geqo | on\n> geqo_effort | 1\n> geqo_generations | 0\n> geqo_pool_size | 0\n> geqo_selection_bias | 2\n> geqo_threshold | 11\n> join_collapse_limit | 8\n> krb_server_keyfile | unset\n> lc_collate | en_US\n> lc_ctype | en_US\n> lc_messages | C\n> lc_monetary | C\n> lc_numeric | C\n> lc_time | C\n> log_connections | off\n> log_duration | off\n> log_error_verbosity | default\n> log_executor_stats | off\n> log_hostname | off\n> log_min_duration_statement | -1\n> log_min_error_statement | panic\n> log_min_messages | notice\n> log_parser_stats | off\n> log_pid | off\n> log_planner_stats | off\n> log_source_port | off\n> log_statement | off\n> log_statement_stats | off\n> log_timestamp | off\n> max_connections | 10\n> max_expr_depth | 10000\n> max_files_per_process | 1000\n> max_fsm_pages | 20000\n> max_fsm_relations | 1000\n> max_locks_per_transaction | 64\n> password_encryption | on\n> port | 5432\n> pre_auth_delay | 0\n> preload_libraries | unset\n> random_page_cost | 1.25\n> regex_flavor | advanced\n> rendezvous_name | unset\n> search_path | $user,public\n> server_encoding | SQL_ASCII\n> server_version | 7.4.2\n> shared_buffers | 40000\n> silent_mode | off\n> sort_mem | 65536\n> sql_inheritance | on\n> ssl | off\n> statement_timeout | 10000000\n> stats_block_level | off\n> stats_command_string | off\n> stats_reset_on_server_start | on\n> stats_row_level | off\n> stats_start_collector | on\n> superuser_reserved_connections | 2\n> syslog | 0\n> syslog_facility | LOCAL0\n> syslog_ident | postgres\n> tcpip_socket | on\n> TimeZone | unknown\n> trace_notify | off\n> transaction_isolation | read committed\n> transaction_read_only | off\n> transform_null_equals | off\n> unix_socket_directory | unset\n> unix_socket_group | unset\n> unix_socket_permissions | 511\n> vacuum_mem | 65536\n> virtual_host | unset\n> wal_buffers | 32\n> wal_debug | 0\n> wal_sync_method | fdatasync\n> zero_damaged_pages | off\n> (113 rows)\n> \n> \n> suggestions, doubts and commentaries are very 
welcome\n> \n> regards \n> ______________________________\n> Eduardo Cunha de Almeida\n> Administra��o de Banco de Dados\n> UFPR - CCE \n> +55-41-361-3321\n> [email protected]\n> [email protected]\n> \n> --- Jan Wieck <[email protected]> wrote:\n>> Josh Berkus wrote:\n>> \n>> > Folks,\n>> > \n>> > I've sent a polite e-mail to Mr. Gomez offering\n>> our help. Please, nobody \n>> > flame him!\n>> > \n>> \n>> Please keep in mind that the entire test has, other\n>> than a similar \n>> database schema and query types maybe, nothing to do\n>> with a TPC-H. I \n>> don't see any kind of SUT. Foreign key support on\n>> the DB level is not \n>> required by any of the TPC benchmarks. But the\n>> System Under Test, which \n>> is the combination of middleware application and\n>> database together with \n>> all computers and network components these parts are\n>> running on, must \n>> implement all the required semantics, like ACID\n>> properties, referential \n>> integrity &c. One could implement a TPC-H with flat\n>> files, it's just a \n>> major pain in the middleware.\n>> \n>> A proper TPC benchmark implementation would for\n>> example be a complete \n>> PHP+DB application, where the user interaction is\n>> done by an emulated \n>> \"browser\" and what is measured is the http response\n>> times, not anything \n>> going on between PHP and the DB. Assuming that all\n>> requirements of the \n>> TPC specification are implemented by either using\n>> available DB features, \n>> or including appropriate workarounds in the PHP\n>> code, that would very \n>> well lead to something that can compare PHP+MySQL\n>> vs. PHP+PostgreSQL.\n>> \n>> All TPC benchmarks I have seen are performed by\n>> timing such a system \n>> after a considerable rampup time, giving the DB\n>> system a chance to \n>> properly populate caches and so forth. Rebooting the\n>> machine just before \n>> the test is the wrong thing here and will especially\n>> kill any advanced \n>> cache algorithms like ARC.\n>> \n>> \n>> Jan\n>> \n>> -- \n>>\n> #======================================================================#\n>> # It's easier to get forgiveness for being wrong\n>> than for being right. #\n>> # Let's break this rule - forgive me. \n>> #\n>> #==================================================\n>> [email protected] #\n>> \n>> \n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>> \n>> \n>> http://www.postgresql.org/docs/faqs/FAQ.htmlIP 5:\n>> Have you checked our extensive FAQ?\n>> \n>> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n> \n> \t\n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Photos: High-quality 4x6 digital prints for 25�\n> http://photos.yahoo.com/ph/print_splash\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 22 Apr 2004 12:19:47 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Folks,\n\nI forgot to mention that I used Shell scripts to load\nthe data and use Java just to run the refresh\nfunctions.\n\nTalking about sort_mem config, I used 65000 but in the\nTPCH specification they said that you are not able to\nchange the configs when you start the benchmark, is\nthat a big problem to use 65000? In the TPCH 100GB we\nrun 5 streams in parallel for the throughput test! To\npower test I think is not a problem because it runs\none query after another.\n\nAnother thing is that I put statement_timeout =\n10000000 \n\nSome queries may exceed this timeout and I���ll send the\nEXPLAIN for this ones.\n \nThe last thing is that Jan forgets to mention that\nTeradata doesn���t show up now but in older lists shows\n3TB and 10TB results.\n\nregards\nEduardo\n--- Jan Wieck <[email protected]> wrote:\n> Eduardo Almeida wrote:\n> \n> > Folks,\n> > \n> > I���m doing the 100GB TPC-H and I���ll show the\n> previous\n> > results to our community (Postgres) in 3 weeks\n> before\n> > finishing the study.\n> > \n> > My intention is to carry through a test with a\n> VLDB in\n> > a low cost platform (PostgreSQL, Linux and cheap\n> HW)\n> > and not to compare with another DBMS.\n> \n> QphH and Price/QphH will be enought for us to see\n> where in the list we \n> are. Unfortunately there are only Sybase and MS SQL\n> results published in \n> the 100 GB category. The 300 GB has DB2 as well.\n> Oracle starts at 1 TB \n> and in the 10 TB category Oracle and DB2 are the\n> only players left.\n> \n> \n> Jan\n> \n> > \n> > So far I can tell you that the load time on PG\n> 7.4.2\n> > with kernel 2.6.5 on Opteron 64 model 240 in RAID\n> 0\n> > with 8 disks (960 GB) loaded the database in less\n> than\n> > 24 hours. \n> > About 7hs:30min to load the data and 16:09:25 to\n> > create the indexes\n> > \n> > The Power test still running and that���s why I���ll\n> not\n> > present anything so far. 
Now I���ll just send to the\n> > list my environment configuration.\n> > \n> > - The configuration of the machine is:\n> > Dual opteron 64 bits model 240\n> > 4GB RAM\n> > 960 GB on RAID 0\n> > Mandrake Linux 64 with Kernel 2.6.5 (I compiled a\n> > kernel for this test)\n> > Java SDK java version \"1.4.2_04\"\n> > PostgreSQL JDBC pg74.1jdbc3.jar\n> > \n> > - The TPC-H configuration is:\n> > TPC-H 2.0.0\n> > 100GB\n> > load using flat files\n> > Refresh functions using java\n> > \n> > - The PostgreSQL 7.4.2 configuration is:\n> > \n> > add_missing_from | on\n> > australian_timezones | off\n> > authentication_timeout | 60\n> > check_function_bodies | on\n> > checkpoint_segments | 128\n> > checkpoint_timeout | 300\n> > checkpoint_warning | 30\n> > client_encoding | SQL_ASCII\n> > client_min_messages | notice\n> > commit_delay | 0\n> > commit_siblings | 5\n> > cpu_index_tuple_cost | 0.001\n> > cpu_operator_cost | 0.0025\n> > cpu_tuple_cost | 0.01\n> > DateStyle | ISO, MDY\n> > db_user_namespace | off\n> > deadlock_timeout | 1000\n> > debug_pretty_print | off\n> > debug_print_parse | off\n> > debug_print_plan | off\n> > debug_print_rewritten | off\n> > default_statistics_target | 10\n> > default_transaction_isolation | read committed\n> > default_transaction_read_only | off\n> > dynamic_library_path | $libdir\n> > effective_cache_size | 150000\n> > enable_hashagg | on\n> > enable_hashjoin | on\n> > enable_indexscan | on\n> > enable_mergejoin | on\n> > enable_nestloop | on\n> > enable_seqscan | on\n> > enable_sort | on\n> > enable_tidscan | on\n> > explain_pretty_print | on\n> > extra_float_digits | 0\n> > from_collapse_limit | 8\n> > fsync | off\n> > geqo | on\n> > geqo_effort | 1\n> > geqo_generations | 0\n> > geqo_pool_size | 0\n> > geqo_selection_bias | 2\n> > geqo_threshold | 11\n> > join_collapse_limit | 8\n> > krb_server_keyfile | unset\n> > lc_collate | en_US\n> > lc_ctype | en_US\n> > lc_messages | C\n> > lc_monetary | C\n> > lc_numeric | C\n> > lc_time | C\n> > log_connections | off\n> > log_duration | off\n> > log_error_verbosity | default\n> > log_executor_stats | off\n> > log_hostname | off\n> > log_min_duration_statement | -1\n> > log_min_error_statement | panic\n> > log_min_messages | notice\n> > log_parser_stats | off\n> > log_pid | off\n> > log_planner_stats | off\n> > log_source_port | off\n> > log_statement | off\n> > log_statement_stats | off\n> > log_timestamp | off\n> > max_connections | 10\n> > max_expr_depth | 10000\n> > max_files_per_process | 1000\n> > max_fsm_pages | 20000\n> > max_fsm_relations | 1000\n> > max_locks_per_transaction | 64\n> > password_encryption | on\n> > port | 5432\n> > pre_auth_delay | 0\n> > preload_libraries | unset\n> > random_page_cost | 1.25\n> > regex_flavor | advanced\n> > rendezvous_name | unset\n> > search_path | $user,public\n> > server_encoding | SQL_ASCII\n> > server_version | 7.4.2\n> > shared_buffers | 40000\n> > silent_mode | off\n> > sort_mem | 65536\n> > sql_inheritance | on\n> > ssl | off\n> > statement_timeout | 10000000\n> > stats_block_level | off\n> > stats_command_string | off\n> > stats_reset_on_server_start | on\n> > stats_row_level | off\n> > stats_start_collector | on\n> > superuser_reserved_connections | 2\n> > syslog | 0\n> > syslog_facility | LOCAL0\n> > syslog_ident | postgres\n> > tcpip_socket | on\n> > TimeZone | unknown\n> > trace_notify | off\n> > transaction_isolation | read committed\n> > transaction_read_only | off\n> > transform_null_equals | off\n> > unix_socket_directory | unset\n> > unix_socket_group | 
unset\n> > unix_socket_permissions | 511\n> > vacuum_mem | 65536\n> > virtual_host | unset\n> > wal_buffers | 32\n> > wal_debug | 0\n> > wal_sync_method | fdatasync\n> > zero_damaged_pages | off\n> > (113 rows)\n> > \n> > \n> > suggestions, doubts and commentaries are very\n> welcome\n> > \n> > regards \n> > ______________________________\n> > Eduardo Cunha de Almeida\n> > Administra������o de Banco de Dados\n> > UFPR - CCE \n> > +55-41-361-3321\n> > [email protected]\n> > [email protected]\n> > \n> > --- Jan Wieck <[email protected]> wrote:\n> >> Josh Berkus wrote:\n> >> \n> >> > Folks,\n> >> > \n> >> > I've sent a polite e-mail to Mr. Gomez offering\n> >> our help. Please, nobody \n> >> > flame him!\n> >> > \n> >> \n> >> Please keep in mind that the entire test has,\n> other\n> \n=== message truncated ===\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Photos: High-quality 4x6 digital prints for 25���\nhttp://photos.yahoo.com/ph/print_splash\n", "msg_date": "Thu, 22 Apr 2004 10:10:34 -0700 (PDT)", "msg_from": "Eduardo Almeida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "В Чтв, 22.04.2004, в 17:54, Tom Lane пишет:\n> Eduardo Almeida <[email protected]> writes:\n> > About 7hs:30min to load the data and 16:09:25 to\n> > create the indexes\n> \n> You could probably improve the index-create time by temporarily\n> increasing sort_mem. It wouldn't be unreasonable to give CREATE INDEX\n> several hundred meg to work in. (You don't want sort_mem that big\n> normally, because there may be many sorts happening in parallel,\n> but in a data-loading context there'll just be one active sort.)\n\nDoesn't this provide a reason for CREATE INDEX not to honour sort_mem?\n\n-- \nMarkus Bertheau <[email protected]>\n\n", "msg_date": "Thu, 22 Apr 2004 20:20:47 +0200", "msg_from": "Markus Bertheau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] MySQL vs PG TPC-H benchmarks" }, { "msg_contents": "Markus Bertheau <[email protected]> writes:\n>> You could probably improve the index-create time by temporarily\n>> increasing sort_mem. It wouldn't be unreasonable to give CREATE INDEX\n>> several hundred meg to work in. (You don't want sort_mem that big\n>> normally, because there may be many sorts happening in parallel,\n>> but in a data-loading context there'll just be one active sort.)\n\n> Doesn't this provide a reason for CREATE INDEX not to honour sort_mem?\n\nAlready done for 7.5.\n\nhttp://archives.postgresql.org/pgsql-committers/2004-02/msg00025.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 15:22:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-advocacy] MySQL vs PG TPC-H benchmarks " }, { "msg_contents": "...and on Thu, Apr 22, 2004 at 06:59:10AM -0700, Eduardo Almeida used the keyboard:\n\n<snip>\n>\n> To reference, Sun has java 64bits just to IA64 and\n> Solaris Sparc 64 not to Opteron.\n> \n\nAs I mentioned, that is true for the 1.4.x release of the JVMs. We have been\ntesting some JCA builds of 1.5.0 on x86_64 so far, but it is too unstable for\nany kind of serious work.\n\nCheers,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/", "msg_date": "Thu, 6 May 2004 10:59:13 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] MySQL vs PG TPC-H benchmarks" } ]
[ { "msg_contents": "Hi\n\n We are in the process of building a new machine for our production \ndatabase. Below you will see some of the harware specs for the machine. \nI need some help with setting these parameters (shared buffers, \neffective cache, sort mem) in the pg_conf file. Also can anyone explain \nthe difference between shared buffers and effective cache , how these \nare allocated in the main memory (the docs are not clear on this).\n\nHere are the Hardware details:\nOperating System: Red Hat 9\nDatabase Ver: Postgres 7.4\nCPU'S : 4\nRAM : 4 gig\nDatafile layout : RAID 1+0\nTransaction log : on different RAID1 Array\nRAID Stripe Size: 8k\n\n\nThanks!\nPallav\n\n", "msg_date": "Thu, 22 Apr 2004 13:51:42 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Setting Shared Buffers , Effective Cache, Sort Mem Parameters" }, { "msg_contents": "On Thu, 22 Apr 2004, Pallav Kalva wrote:\n\n> Hi\n> \n> We are in the process of building a new machine for our production \n> database. Below you will see some of the harware specs for the machine. \n> I need some help with setting these parameters (shared buffers, \n> effective cache, sort mem) in the pg_conf file. Also can anyone explain \n> the difference between shared buffers and effective cache , how these \n> are allocated in the main memory (the docs are not clear on this).\n> \n> Here are the Hardware details:\n> Operating System: Red Hat 9\n> Database Ver: Postgres 7.4\n> CPU'S : 4\n> RAM : 4 gig\n> Datafile layout : RAID 1+0\n> Transaction log : on different RAID1 Array\n> RAID Stripe Size: 8k\n\nRead this first:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nBasically shared buffers are the \"play area\" for the database backends to \ntoss data in the air and munge it together. The effective cache size \nreflects the approximate amount of space your operating system is using to \nbuffer Postgresql data. On a dedicated database machine this is about the \nsame as the size of the kernel buffer shown in top. On a mixed machine, \nyou'll have to see how much of what data is getting buffered to get a \nguesstimate of how much kernel cache is being used for pgsql and how much \nfor other processes. Then divide that number in bytes by 8192, the \ndefault block size. On a machine with 1.2 gigs of kernel cache, that'd be \nabout 150,000 blocks.\n\nBuffer sizes from 1000 to 10000 blocks are common. 
Block sizes from 10000 \nto 50000 can somtimes increase performance, but those sizes only really \nmake sense for machines with lots of ram, and very large datasets being \noperated on.\n\n", "msg_date": "Thu, 22 Apr 2004 15:16:41 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem" }, { "msg_contents": "On Thu, 22 Apr 2004 13:51:42 -0400, Pallav Kalva <[email protected]> wrote:\n>I need some help with setting these parameters (shared buffers, \n>effective cache, sort mem) in the pg_conf file.\n\nIt really depends on the kind of queries you intend to run, the number\nof concurrent active connections, the size of the working set (active\npart of the database), what else is running on the machine, and and and\n...\n\nSetting shared_buffers to 10000, effective_cache_size to 400000 (80% of\ninstalled RAM), and sort_mem to a few thousand might be a good start.\n\n> Also can anyone explain \n>the difference between shared buffers and effective cache , how these \n>are allocated in the main memory (the docs are not clear on this).\n\nShared_buffers directly controls how many pages are allocated as\ninternal cache. Effective_cache_size doesn't allocate anything, it is\njust a hint to the planner how much cache is available on the system\nlevel.\n\nServus\n Manfred\n", "msg_date": "Fri, 23 Apr 2004 00:50:37 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem Parameters" }, { "msg_contents": "On Fri, 23 Apr 2004 10:20:10 -0400, Pallav Kalva <[email protected]> wrote:\n> the database sizes is around 2- 4 gig and \n>there are 5 of them. this machine is\n> mainly for the databases and nothing is running on them.\n\nDid I understand correctly that you run (or plan to run) five\npostmasters? Is there a special reason that you cannot put all your\ntables into one database?\n\n> setting shared buffers to 10000 allocates (81Mb) and effective \n>cache to 400000 would be around (3gig)\n> does this means that if all of the 81mb of the shared memory gets \n>allocated it will use rest from the effective\n> cache of (3g-81mb) ?\n\nSimply said, if Postgres wants to access a block, it first looks whether\nthis block is already in shared buffers which should be the case, if the\nblock is one of the last 10000 blocks accessed. Otherwise the block has\nto be read in. If the OS has the block in its cache, reading it is just\na (fast) memory operation, else it involves a (slow) physical disk read.\n\nThe number of database pages residing in the OS cache is totally out of\ncontrol of Postgres. Effective_cache_size tells the query planner how\nmany database pages can be *expected* to be present in the OS cache.\n\n>increasing the shared buffers space to 2g\n\nSetting shared_buffers to half your available memory is the worst thing\nyou can do. 
You would end up caching exactly the same set of blocks in\nthe internal buffers and in the OS cache, thus effectively making one of\nthe caches useless.\n\nBetter keep shared_buffers low and let the OS do its job.\n\nServus\n Manfred\n", "msg_date": "Fri, 23 Apr 2004 23:44:51 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem Parameters" }, { "msg_contents": "\nOn Fri, 23 Apr 2004, Manfred Koizar wrote:\n>\n> Setting shared_buffers to half your available memory is the worst thing\n> you can do. You would end up caching exactly the same set of blocks in\n> the internal buffers and in the OS cache, thus effectively making one of\n> the caches useless.\n\nOne minor detail... You wouldn't really cache the _exact_ same blocks\nbecause cache-hits in shared-buffers (on the most frequently accessed\npages) would let the OS cache some other pages in it's cache.\n\nBut in my experience Manfred's right that there's no benefit and\nsome penalty to making shared_buffers so large it takes a significant\npiece away from the OS's caching.\n", "msg_date": "Fri, 23 Apr 2004 15:10:20 -0700 (PDT)", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> [ on setting shared_buffers = half of RAM ]\n\n> One minor detail... You wouldn't really cache the _exact_ same blocks\n> because cache-hits in shared-buffers (on the most frequently accessed\n> pages) would let the OS cache some other pages in it's cache.\n\n> But in my experience Manfred's right that there's no benefit and\n> some penalty to making shared_buffers so large it takes a significant\n> piece away from the OS's caching.\n\nTrue, it'd probably not be the *exact* worst case. But it'd be a good\napproximation. In practice you should either bet on the kernel doing\nmost of the caching (in which case you set shared_buffers pretty low)\nor bet on Postgres doing most of the caching (in which case you set\nshared_buffers to eat most of RAM).\n\nThe conventional wisdom at this point is to bet the first way; no one\nhas shown performance benefits from setting shared_buffers higher than\nthe low tens of thousands. (Most of the mail list traffic on this\npredates the existence of pgsql-performance, so check the other list\narchives too if you go looking for discussion.)\n\nIt's possible that Jan's recent buffer-management improvements will\nchange the story as of 7.5. I kinda doubt it myself, but it'd be worth\nre-running any experiments you've done when you start working with 7.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Apr 2004 22:50:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem " }, { "msg_contents": "Tom,\n\n> It's possible that Jan's recent buffer-management improvements will\n> change the story as of 7.5. I kinda doubt it myself, but it'd be worth\n> re-running any experiments you've done when you start working with 7.5.\n\nYes, Jan has indicated to me that he expects to make much heavier use of \nshared buffers under ARC. 
But 7.5 still seems to be too unstable for me to \ntest this assertion on a large database.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 24 Apr 2004 08:37:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting Shared Buffers , Effective Cache, Sort Mem" } ]
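To put rough numbers on the sizing advice in the thread above, here is a small psql sketch. The 3 GB kernel-cache figure is only an illustrative assumption (roughly what top might show on a dedicated 4 GB box, not a measurement from Pallav's machine), and shared_buffers itself can only be changed in postgresql.conf followed by a restart, so only sort_mem is actually changed per session here.

  -- Current values; with the default 8 kB block size both of these are
  -- counts of 8 kB pages, not byte sizes.
  SHOW shared_buffers;
  SHOW effective_cache_size;

  -- Convert an observed kernel cache size into 8 kB pages for
  -- effective_cache_size: 3 GB expressed in kB, divided by 8 kB.
  SELECT 3 * 1024 * 1024 / 8 AS effective_cache_pages;   -- 393216

  -- sort_mem is in kB and can be raised for a single session,
  -- e.g. ahead of a big sort or index build, then put back.
  SET sort_mem = 65536;   -- 64 MB for this session only
  RESET sort_mem;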
[ { "msg_contents": "> The planner is guessing that scanning in rec_id order will produce a\n> matching row fairly quickly (sooner than selecting all the matching \n> rows\n> and sorting them would do). It's wrong in this case, but I'm not sure\n> it could do better without very detailed cross-column statistics.\n\n> Am I\n> right to guess that the rows that match the WHERE clause are not evenly\n> distributed in the rec_id order, but rather there are no such rows till\n> you get well up in the ordering?\n\nI must agree that the data are not evenly distributed....\n\nFor table url:\ncount 271.395\nmin rec_id 1\nmax rec_id 3.386.962\n\ndps=> select * from url where crc32=419903683;\ncount 852\nmin rec_id 264.374\nmax rec_id 2.392.046\n\nI do\ndps=> select ctid, rec_id from url where crc32=419903683 order by \ncrc32,rec_id;\nAnd then in a text edit extract the \"page_id\" from ctid\nand there is 409 distinct pages for the 852 rows.\nThere is 4592 pages for the tables url.\n\ndps=> select (rec_id/25), count(*) from url where crc32=419903683 group \nby rec_id/25 having count(*)>4 order by count(*) desc;\n ?column? | count\n----------+-------\n 30289 | 25\n 11875 | 24\n 11874 | 24\n 11876 | 24\n 28154 | 23\n 26164 | 21\n 26163 | 21\n 55736 | 21\n 40410 | 20\n 47459 | 20\n 30290 | 20\n 28152 | 20\n 26162 | 19\n 30291 | 19\n 37226 | 19\n 60357 | 18\n 28150 | 18\n 12723 | 17\n 40413 | 17\n 40412 | 16\n 33167 | 15\n 40415 | 15\n 12961 | 15\n 40414 | 15\n 28151 | 14\n 63961 | 14\n 26165 | 13\n 11873 | 13\n 63960 | 12\n 37225 | 12\n 37224 | 12\n 20088 | 11\n 30288 | 11\n 91450 | 11\n 20087 | 11\n 26892 | 10\n 47458 | 10\n 40411 | 10\n 91451 | 10\n 12722 | 10\n 28153 | 9\n 43488 | 9\n 60358 | 7\n 60356 | 7\n 11877 | 7\n 33168 | 6\n 91448 | 6\n 26161 | 6\n 40409 | 5\n 28155 | 5\n 28318 | 5\n 30292 | 5\n 26891 | 5\n 95666 | 5\n(54 rows)\n\n\n\nAn other question, with VACUUM VERBOSE ANALYZE, I see:\n> INFO: \"url\": removed 568107 row versions in 4592 pages\n> DETAIL: CPU 0.51s/1.17u sec elapsed 174.74 sec.\nAnd I run pg_autovacuum.\nDoes the big number (568107) of removed row indicates I should set a \nhigher max_fsm_pages ?\n\n > grep fsm /var/pgsql/postgresql.conf\nmax_fsm_pages = 60000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 200 # min 100, ~50 bytes each\n\ndps=> VACUUM VERBOSE ANALYSE url;\nINFO: vacuuming \"public.url\"\nINFO: index \"url_crc\" now contains 211851 row versions in 218 pages\nDETAIL: 129292 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/1.38u sec elapsed 5.71 sec.\nINFO: index \"url_seed\" now contains 272286 row versions in 644 pages\nDETAIL: 568107 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.10s/2.96u sec elapsed 13.06 sec.\nINFO: index \"url_referrer\" now contains 272292 row versions in 603 \npages\nDETAIL: 568107 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.10s/2.98u sec elapsed 22.30 sec.\nINFO: index \"url_next_index_time\" now contains 272292 row versions in \n684 pages\nDETAIL: 568107 index row versions were removed.\n42 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/1.80u sec elapsed 9.50 sec.\nINFO: index \"url_status\" now contains 272298 row versions in 638 pages\nDETAIL: 568107 index row versions were removed.\n12 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/2.18u sec elapsed 13.66 sec.\nINFO: index \"url_bad_since_time\" now contains 
272317 row versions in \n611 pages\nDETAIL: 568107 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.07s/2.40u sec elapsed 10.99 sec.\nINFO: index \"url_hops\" now contains 272317 row versions in 637 pages\nDETAIL: 568107 index row versions were removed.\n5 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/2.24u sec elapsed 12.46 sec.\nINFO: index \"url_siteid\" now contains 272321 row versions in 653 pages\nDETAIL: 568107 index row versions were removed.\n13 index pages have been deleted, 0 are currently reusable.\nCPU 0.14s/2.05u sec elapsed 11.63 sec.\nINFO: index \"url_serverid\" now contains 272321 row versions in 654 \npages\nDETAIL: 568107 index row versions were removed.\n8 index pages have been deleted, 0 are currently reusable.\nCPU 0.10s/2.27u sec elapsed 11.45 sec.\nINFO: index \"url_url\" now contains 272065 row versions in 1892 pages\nDETAIL: 193884 index row versions were removed.\n5 index pages have been deleted, 0 are currently reusable.\nCPU 0.39s/1.50u sec elapsed 36.99 sec.\nINFO: index \"url_last_mod_time\" now contains 272071 row versions in \n317 pages\nDETAIL: 193884 index row versions were removed.\n7 index pages have been deleted, 0 are currently reusable.\nCPU 0.03s/1.38u sec elapsed 5.61 sec.\nINFO: index \"url_pkey\" now contains 272086 row versions in 328 pages\nDETAIL: 193884 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/1.60u sec elapsed 60.64 sec.\nINFO: \"url\": removed 568107 row versions in 4592 pages\nDETAIL: CPU 0.51s/1.17u sec elapsed 174.74 sec.\nINFO: \"url\": found 568107 removable, 272027 nonremovable row versions \nin 4614 pages\nDETAIL: 402 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 1.98s/26.08u sec elapsed 466.27 sec.\nINFO: vacuuming \"pg_toast.pg_toast_137628026\"\nINFO: index \"pg_toast_137628026_index\" now contains 0 row versions in \n1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.06 sec.\nINFO: \"pg_toast_137628026\": found 0 removable, 0 nonremovable row \nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\nINFO: analyzing \"public.url\"\nINFO: \"url\": 4624 pages, 150000 rows sampled, 577419 estimated total \nrows\nVACUUM\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Thu, 22 Apr 2004 20:46:51 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 225 times slower " } ]
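If the slow case in this thread has the usual shape, WHERE crc32 = ? ORDER BY rec_id LIMIT n (the full query text is not shown in this part of the thread, so that shape is an assumption), one way to attack the misestimate is to give the planner an index that already returns the matching rows in rec_id order, plus finer statistics on the skewed crc32 column. The index name below is made up.

  -- Composite index: filter column first, sort column second, so the
  -- scan can stop after the first n rows matching crc32 = ?.
  CREATE INDEX url_crc_rec ON url (crc32, rec_id);

  -- Larger statistics target for the unevenly distributed column,
  -- then refresh the planner's statistics.
  ALTER TABLE url ALTER COLUMN crc32 SET STATISTICS 100;
  ANALYZE url;

With 852 matching rows spread over only 409 of the table's 4592 pages, an index scan in (crc32, rec_id) order touches far fewer pages than walking the whole table in rec_id order waiting for a match.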
[ { "msg_contents": ">> Having to recompile to run on single- vs dual-processor \n>machines doesn't\n>> seem like it would fly.\n>\n>Oh, I don't know. Many applications require compiling for a target \n>architecture; SQL Server, for example, won't use a 2nd \n>processor without \n>re-installation. I'm not sure about Oracle.\n\nUh, that is not quite true - at leasdt not for current versions. SQL Server will pick up and use whatever processors the underlying OS supports. Now, depending on how you install the OS (Windows, that is) you may have ended up with a kernel and HAL that does not support multiprocessor. In this case, you have to change HAL. But you certainly don't have to reinstalsl SQL Server or Windows. Just a reboot (pretty normal when you add a CPU...)\n\nNow, there can be licensing issues if you are in per-processor licensing, but that's a completely different issue. Also, the \"Standard Edition\" only uses up to 4 CPUs, but again, that's a different issue.\n\n//Magnus\n", "msg_date": "Thu, 22 Apr 2004 23:21:33 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon patch for 7.4.1" } ]
[ { "msg_contents": "To all,\n\nEssentials: Running 7.4.1 on OSX on a loaded G5 with dual procs, 8GB \nmemory, direct attached via fibre channel to a fully optioned 3.5TB \nXRaid (14 spindles, 2 sets of 7 in RAID 5) box running RAID 50.\n\nBackground: We are loading what are essentially xml based access logs \nfrom about 20+ webservers daily, about 6GB of raw data. We have a \nclassic star schema. All the ETL tools are custom java code or standard \n*nix tools like sort, uniq etc...\n\nThe problem: We have about 46 million rows in a table with the \nfollowing schema:\n\nTable \"public.d_referral\"\n Column | Type | Modifiers\n--------------------+---------+-----------\n id | integer | not null\n referral_raw_url | text | not null\n job_control_number | integer | not null\nIndexes:\n \"d_referral_pkey\" primary key, btree (id)\n \"idx_referral_url\" btree (referral_raw_url)\n\nThis is one of our dimension tables. Part of the daily ETL process is \nto match all the new referral URL's against existing data in the \nd_referral table. Some of the values in referral_raw_url can be 5000 \ncharacters long :-( . The avg length is : 109.57 characters.\n\nI sort and uniq all the incoming referrals and load them into a temp table.\n\nTable \"public.referral_temp\"\n Column | Type | Modifiers\n--------+------+-----------\n url | text | not null\nIndexes:\n \"referral_temp_pkey\" primary key, btree (url)\n\nI then do a left join\n\nSELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER JOIN d_referral t1 \nON t2.url = t1.referral_raw_url ORDER BY t1.id\n\nThis is the output from an explain analyze (Please note that I do a set \nenable_index_scan = false prior to issuing this because it takes forever \nusing indexes.):\n\nexplain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER \nJOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4012064.81..4013194.45 rows=451856 width=115) (actual \ntime=1297320.823..1297739.813 rows=476176 loops=1)\n Sort Key: t1.id\n -> Hash Left Join (cost=1052345.95..3969623.10 rows=451856 \nwidth=115) (actual time=1146650.487..1290230.590 rows=476176 loops=1)\n Hash Cond: (\"outer\".url = \"inner\".referral_raw_url)\n -> Seq Scan on referral_temp t2 (cost=0.00..6645.56 \nrows=451856 width=111) (actual time=20.285..1449.634 rows=476176 loops=1)\n -> Hash (cost=729338.16..729338.16 rows=46034716 width=124) \n(actual time=1146440.710..1146440.710 rows=0 loops=1)\n -> Seq Scan on d_referral t1 (cost=0.00..729338.16 \nrows=46034716 width=124) (actual time=14.502..-1064277.123 rows=46034715 \nloops=1)\n Total runtime: 1298153.193 ms\n(8 rows)\n\n\n\nWhat I would like to know is if there are better ways to do the join? I \nneed to get all the rows back from the referral_temp table as they are \nused for assigning FK's for the fact table later in processing. When I \niterate over the values that I get back those with t1.id = null I assign \na new FK and push both into the d_referral table as new entries as well \nas a text file for later use. The matching records are written to a \ntext file for later use. \n\nIf we cannot improve the join performance my question becomes are there \nbetter tools to match up the 46 million and growing at the rate of 1 \nmillion every 3 days, strings outside of postgresql? 
We don't want to \nhave to invest in zillions of dollars worth of hardware but if we have \nto we will. I just want to make sure we have all the non hardware \npossibilities for improvement covered before we start investing in large \ndisk arrays. \n\nThanks.\n\n--sean\n", "msg_date": "Thu, 22 Apr 2004 17:56:59 -0400", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "hi,\n\nSean Shanny wrote, On 4/22/2004 23:56:\n> \n> SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER JOIN d_referral t1 \n> ON t2.url = t1.referral_raw_url ORDER BY t1.id\n\nindex on url (text) has no sense. Try to use and md5 (char(32) column) \nwhich contains the md5 hash of url field. and join these ones. You can \nhave a better index on this char 32 field.\n\ndo not forget to analyze the tables after data load, and you can fine \ntune you postgresql.conf, default_statistics_target for better index \ninfo, and others.\ncheck this info pages:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nC.\n", "msg_date": "Fri, 23 Apr 2004 01:05:20 +0200", "msg_from": "CoL <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "I should have included this as well:\n\n show all;\n name | setting\n--------------------------------+----------------\n add_missing_from | on\n australian_timezones | off\n authentication_timeout | 60\n check_function_bodies | on\n checkpoint_segments | 64\n checkpoint_timeout | 30\n checkpoint_warning | 30\n client_encoding | UNICODE\n client_min_messages | notice\n commit_delay | 0\n commit_siblings | 5\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n DateStyle | ISO, MDY\n db_user_namespace | off\n deadlock_timeout | 1000\n debug_pretty_print | off\n debug_print_parse | off\n debug_print_plan | off\n debug_print_rewritten | off\n default_statistics_target | 1000\n default_transaction_isolation | read committed\n default_transaction_read_only | off\n dynamic_library_path | $libdir\n effective_cache_size | 400000\n enable_hashagg | on\n enable_hashjoin | on\n enable_indexscan | on\n enable_mergejoin | on\n enable_nestloop | on\n enable_seqscan | on\n enable_sort | on\n enable_tidscan | on\n explain_pretty_print | on\n extra_float_digits | 0\n from_collapse_limit | 8\n fsync | on\n geqo | on\n geqo_effort | 1\n geqo_generations | 0\n geqo_pool_size | 0\n geqo_selection_bias | 2\n geqo_threshold | 11\n join_collapse_limit | 8\n krb_server_keyfile | unset\n lc_collate | C\n lc_ctype | C\n lc_messages | C\n lc_monetary | C\n lc_numeric | C\n lc_time | C\n log_connections | off\n log_duration | off\n log_error_verbosity | default\n log_executor_stats | off\n log_hostname | off\n log_min_duration_statement | -1\n log_min_error_statement | panic\n log_min_messages | notice\n log_parser_stats | off\n log_pid | off\n log_planner_stats | off\n log_source_port | off\n log_statement | off\n log_statement_stats | off\n log_timestamp | on\n max_connections | 100\n max_expr_depth | 10000\n max_files_per_process | 1000\n max_fsm_pages | 20000\n max_fsm_relations | 1000\n max_locks_per_transaction | 64\n password_encryption | on\n port | 5432\n pre_auth_delay | 0\n preload_libraries | unset\n random_page_cost | 4\n regex_flavor | advanced\n rendezvous_name | unset\n search_path | $user,public\n 
server_encoding | UNICODE\n server_version | 7.4.1\n shared_buffers | 4000\n silent_mode | off\n sort_mem | 64000\n sql_inheritance | on\n ssl | off\n statement_timeout | 0\n stats_block_level | on\n stats_command_string | on\n stats_reset_on_server_start | off\n stats_row_level | on\n stats_start_collector | on\n superuser_reserved_connections | 2\n syslog | 0\n syslog_facility | LOCAL0\n syslog_ident | postgres\n tcpip_socket | on\n TimeZone | unknown\n trace_notify | off\n transaction_isolation | read committed\n transaction_read_only | off\n transform_null_equals | off\n unix_socket_directory | unset\n unix_socket_group | unset\n unix_socket_permissions | 511\n vacuum_mem | 64000\n virtual_host | unset\n wal_buffers | 1024\n wal_debug | 0\n wal_sync_method | open_sync\n zero_damaged_pages | off\n\n\nSean Shanny wrote:\n\n> To all,\n>\n> Essentials: Running 7.4.1 on OSX on a loaded G5 with dual procs, 8GB \n> memory, direct attached via fibre channel to a fully optioned 3.5TB \n> XRaid (14 spindles, 2 sets of 7 in RAID 5) box running RAID 50.\n>\n> Background: We are loading what are essentially xml based access logs \n> from about 20+ webservers daily, about 6GB of raw data. We have a \n> classic star schema. All the ETL tools are custom java code or \n> standard *nix tools like sort, uniq etc...\n>\n> The problem: We have about 46 million rows in a table with the \n> following schema:\n>\n> Table \"public.d_referral\"\n> Column | Type | Modifiers\n> --------------------+---------+-----------\n> id | integer | not null\n> referral_raw_url | text | not null\n> job_control_number | integer | not null\n> Indexes:\n> \"d_referral_pkey\" primary key, btree (id)\n> \"idx_referral_url\" btree (referral_raw_url)\n>\n> This is one of our dimension tables. Part of the daily ETL process is \n> to match all the new referral URL's against existing data in the \n> d_referral table. Some of the values in referral_raw_url can be 5000 \n> characters long :-( . 
The avg length is : 109.57 characters.\n>\n> I sort and uniq all the incoming referrals and load them into a temp \n> table.\n>\n> Table \"public.referral_temp\"\n> Column | Type | Modifiers\n> --------+------+-----------\n> url | text | not null\n> Indexes:\n> \"referral_temp_pkey\" primary key, btree (url)\n>\n> I then do a left join\n>\n> SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER JOIN d_referral \n> t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id\n>\n> This is the output from an explain analyze (Please note that I do a \n> set enable_index_scan = false prior to issuing this because it takes \n> forever using indexes.):\n>\n> explain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER \n> JOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------ \n>\n> Sort (cost=4012064.81..4013194.45 rows=451856 width=115) (actual \n> time=1297320.823..1297739.813 rows=476176 loops=1)\n> Sort Key: t1.id\n> -> Hash Left Join (cost=1052345.95..3969623.10 rows=451856 \n> width=115) (actual time=1146650.487..1290230.590 rows=476176 loops=1)\n> Hash Cond: (\"outer\".url = \"inner\".referral_raw_url)\n> -> Seq Scan on referral_temp t2 (cost=0.00..6645.56 \n> rows=451856 width=111) (actual time=20.285..1449.634 rows=476176 loops=1)\n> -> Hash (cost=729338.16..729338.16 rows=46034716 width=124) \n> (actual time=1146440.710..1146440.710 rows=0 loops=1)\n> -> Seq Scan on d_referral t1 (cost=0.00..729338.16 \n> rows=46034716 width=124) (actual time=14.502..-1064277.123 \n> rows=46034715 loops=1)\n> Total runtime: 1298153.193 ms\n> (8 rows)\n>\n>\n>\n> What I would like to know is if there are better ways to do the join? \n> I need to get all the rows back from the referral_temp table as they \n> are used for assigning FK's for the fact table later in processing. \n> When I iterate over the values that I get back those with t1.id = null \n> I assign a new FK and push both into the d_referral table as new \n> entries as well as a text file for later use. The matching records \n> are written to a text file for later use.\n> If we cannot improve the join performance my question becomes are \n> there better tools to match up the 46 million and growing at the rate \n> of 1 million every 3 days, strings outside of postgresql? We don't \n> want to have to invest in zillions of dollars worth of hardware but if \n> we have to we will. 
I just want to make sure we have all the non \n> hardware possibilities for improvement covered before we start \n> investing in large disk arrays.\n> Thanks.\n>\n> --sean\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n", "msg_date": "Thu, 22 Apr 2004 19:30:53 -0400", "msg_from": "Sean Shanny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "One other thing: we are running with a block size of 32K.\n\n\nNick Shanny\n(Brother of above person)\n\nOn Apr 22, 2004, at 7:30 PM, Sean Shanny wrote:\n\n> I should have included this as well:\n>\n> show all;\n> name | setting\n> --------------------------------+----------------\n> add_missing_from | on\n> australian_timezones | off\n> authentication_timeout | 60\n> check_function_bodies | on\n> checkpoint_segments | 64\n> checkpoint_timeout | 30\n> checkpoint_warning | 30\n> client_encoding | UNICODE\n> client_min_messages | notice\n> commit_delay | 0\n> commit_siblings | 5\n> cpu_index_tuple_cost | 0.001\n> cpu_operator_cost | 0.0025\n> cpu_tuple_cost | 0.01\n> DateStyle | ISO, MDY\n> db_user_namespace | off\n> deadlock_timeout | 1000\n> debug_pretty_print | off\n> debug_print_parse | off\n> debug_print_plan | off\n> debug_print_rewritten | off\n> default_statistics_target | 1000\n> default_transaction_isolation | read committed\n> default_transaction_read_only | off\n> dynamic_library_path | $libdir\n> effective_cache_size | 400000\n> enable_hashagg | on\n> enable_hashjoin | on\n> enable_indexscan | on\n> enable_mergejoin | on\n> enable_nestloop | on\n> enable_seqscan | on\n> enable_sort | on\n> enable_tidscan | on\n> explain_pretty_print | on\n> extra_float_digits | 0\n> from_collapse_limit | 8\n> fsync | on\n> geqo | on\n> geqo_effort | 1\n> geqo_generations | 0\n> geqo_pool_size | 0\n> geqo_selection_bias | 2\n> geqo_threshold | 11\n> join_collapse_limit | 8\n> krb_server_keyfile | unset\n> lc_collate | C\n> lc_ctype | C\n> lc_messages | C\n> lc_monetary | C\n> lc_numeric | C\n> lc_time | C\n> log_connections | off\n> log_duration | off\n> log_error_verbosity | default\n> log_executor_stats | off\n> log_hostname | off\n> log_min_duration_statement | -1\n> log_min_error_statement | panic\n> log_min_messages | notice\n> log_parser_stats | off\n> log_pid | off\n> log_planner_stats | off\n> log_source_port | off\n> log_statement | off\n> log_statement_stats | off\n> log_timestamp | on\n> max_connections | 100\n> max_expr_depth | 10000\n> max_files_per_process | 1000\n> max_fsm_pages | 20000\n> max_fsm_relations | 1000\n> max_locks_per_transaction | 64\n> password_encryption | on\n> port | 5432\n> pre_auth_delay | 0\n> preload_libraries | unset\n> random_page_cost | 4\n> regex_flavor | advanced\n> rendezvous_name | unset\n> search_path | $user,public\n> server_encoding | UNICODE\n> server_version | 7.4.1\n> shared_buffers | 4000\n> silent_mode | off\n> sort_mem | 64000\n> sql_inheritance | on\n> ssl | off\n> statement_timeout | 0\n> stats_block_level | on\n> stats_command_string | on\n> stats_reset_on_server_start | off\n> stats_row_level | on\n> stats_start_collector | on\n> superuser_reserved_connections | 2\n> syslog | 0\n> syslog_facility | LOCAL0\n> syslog_ident | postgres\n> tcpip_socket | on\n> TimeZone | unknown\n> trace_notify | off\n> transaction_isolation | read committed\n> 
transaction_read_only | off\n> transform_null_equals | off\n> unix_socket_directory | unset\n> unix_socket_group | unset\n> unix_socket_permissions | 511\n> vacuum_mem | 64000\n> virtual_host | unset\n> wal_buffers | 1024\n> wal_debug | 0\n> wal_sync_method | open_sync\n> zero_damaged_pages | off\n>\n>\n> Sean Shanny wrote:\n>\n>> To all,\n>>\n>> Essentials: Running 7.4.1 on OSX on a loaded G5 with dual procs, 8GB \n>> memory, direct attached via fibre channel to a fully optioned 3.5TB \n>> XRaid (14 spindles, 2 sets of 7 in RAID 5) box running RAID 50.\n>>\n>> Background: We are loading what are essentially xml based access \n>> logs from about 20+ webservers daily, about 6GB of raw data. We have \n>> a classic star schema. All the ETL tools are custom java code or \n>> standard *nix tools like sort, uniq etc...\n>>\n>> The problem: We have about 46 million rows in a table with the \n>> following schema:\n>>\n>> Table \"public.d_referral\"\n>> Column | Type | Modifiers\n>> --------------------+---------+-----------\n>> id | integer | not null\n>> referral_raw_url | text | not null\n>> job_control_number | integer | not null\n>> Indexes:\n>> \"d_referral_pkey\" primary key, btree (id)\n>> \"idx_referral_url\" btree (referral_raw_url)\n>>\n>> This is one of our dimension tables. Part of the daily ETL process \n>> is to match all the new referral URL's against existing data in the \n>> d_referral table. Some of the values in referral_raw_url can be 5000 \n>> characters long :-( . The avg length is : 109.57 characters.\n>>\n>> I sort and uniq all the incoming referrals and load them into a temp \n>> table.\n>>\n>> Table \"public.referral_temp\"\n>> Column | Type | Modifiers\n>> --------+------+-----------\n>> url | text | not null\n>> Indexes:\n>> \"referral_temp_pkey\" primary key, btree (url)\n>>\n>> I then do a left join\n>>\n>> SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER JOIN d_referral \n>> t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id\n>>\n>> This is the output from an explain analyze (Please note that I do a \n>> set enable_index_scan = false prior to issuing this because it takes \n>> forever using indexes.):\n>>\n>> explain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER \n>> JOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n>> \n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> ---------------------------------------------------------------------- \n>> ----------\n>> Sort (cost=4012064.81..4013194.45 rows=451856 width=115) (actual \n>> time=1297320.823..1297739.813 rows=476176 loops=1)\n>> Sort Key: t1.id\n>> -> Hash Left Join (cost=1052345.95..3969623.10 rows=451856 \n>> width=115) (actual time=1146650.487..1290230.590 rows=476176 loops=1)\n>> Hash Cond: (\"outer\".url = \"inner\".referral_raw_url)\n>> -> Seq Scan on referral_temp t2 (cost=0.00..6645.56 \n>> rows=451856 width=111) (actual time=20.285..1449.634 rows=476176 \n>> loops=1)\n>> -> Hash (cost=729338.16..729338.16 rows=46034716 width=124) \n>> (actual time=1146440.710..1146440.710 rows=0 loops=1)\n>> -> Seq Scan on d_referral t1 (cost=0.00..729338.16 \n>> rows=46034716 width=124) (actual time=14.502..-1064277.123 \n>> rows=46034715 loops=1)\n>> Total runtime: 1298153.193 ms\n>> (8 rows)\n>>\n>>\n>>\n>> What I would like to know is if there are better ways to do the join? \n>> I need to get all the rows back from the referral_temp table as they \n>> are used for assigning FK's for the fact table later in processing. 
\n>> When I iterate over the values that I get back those with t1.id = \n>> null I assign a new FK and push both into the d_referral table as new \n>> entries as well as a text file for later use. The matching records \n>> are written to a text file for later use.\n>> If we cannot improve the join performance my question becomes are \n>> there better tools to match up the 46 million and growing at the rate \n>> of 1 million every 3 days, strings outside of postgresql? We don't \n>> want to have to invest in zillions of dollars worth of hardware but \n>> if we have to we will. I just want to make sure we have all the non \n>> hardware possibilities for improvement covered before we start \n>> investing in large disk arrays.\n>> Thanks.\n>>\n>> --sean\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 9: the planner will ignore your desire to choose an index scan if \n>> your\n>> joining column's datatypes do not match\n>>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n", "msg_date": "Thu, 22 Apr 2004 20:54:15 -0400", "msg_from": "Nicholas Shanny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "Sean Shanny <[email protected]> writes:\n> explain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER \n> JOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=4012064.81..4013194.45 rows=451856 width=115) (actual \n> time=1297320.823..1297739.813 rows=476176 loops=1)\n> Sort Key: t1.id\n> -> Hash Left Join (cost=1052345.95..3969623.10 rows=451856 \n> width=115) (actual time=1146650.487..1290230.590 rows=476176 loops=1)\n> Hash Cond: (\"outer\".url = \"inner\".referral_raw_url)\n> -> Seq Scan on referral_temp t2 (cost=0.00..6645.56 \n> rows=451856 width=111) (actual time=20.285..1449.634 rows=476176 loops=1)\n> -> Hash (cost=729338.16..729338.16 rows=46034716 width=124) \n> (actual time=1146440.710..1146440.710 rows=0 loops=1)\n> -> Seq Scan on d_referral t1 (cost=0.00..729338.16 \n> rows=46034716 width=124) (actual time=14.502..-1064277.123 rows=46034715 \n> loops=1)\n> Total runtime: 1298153.193 ms\n> (8 rows)\n\n> What I would like to know is if there are better ways to do the join?\n\nWhat have you got sort_mem set to? You might try increasing it to a gig\nor so, since you seem to have plenty of RAM in that box ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Apr 2004 22:03:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading " }, { "msg_contents": "Sean Shanny wrote:\n> explain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER \n> JOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n\n> What I would like to know is if there are better ways to do the join? I \n> need to get all the rows back from the referral_temp table as they are \n> used for assigning FK's for the fact table later in processing. When I \n> iterate over the values that I get back those with t1.id = null I assign \n> a new FK and push both into the d_referral table as new entries as well \n> as a text file for later use. 
The matching records are written to a \n> text file for later use.\n\nWould something like this work any better (without disabling index scans):\n\nSELECT t1.id, t2.url\nFROM referral_temp t2, d_referral t1\nWHERE t1.referral_raw_url = t2.url;\n\n<process rows with a match>\n\nSELECT t1.id, t2.url\nFROM referral_temp t2\nWHERE NOT EXISTS\n(select 1 FROM d_referral t1 WHERE t1.referral_raw_url = t2.url);\n\n<process rows without a match>\n\n?\n\nJoe\n", "msg_date": "Thu, 22 Apr 2004 21:38:05 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "By definition, it is equivalent to:\n\nSELECT t1.id, t2.url FROM referral_temp t2 LEFT /*OUTER*/ JOIN d_referral t1\nON t2.url = t1.referral_raw_url\nunion all\nSELECT null, url FROM referral_temp WHERE url is null\nORDER BY 1;\n\n\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Joe Conway\" <[email protected]>\nTo: \"Sean Shanny\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, April 23, 2004 12:38 AM\nSubject: Re: [PERFORM] Looking for ideas on how to speed up warehouse\nloading\n\n\n> Sean Shanny wrote:\n> > explain analyze SELECT t1.id, t2.url FROM referral_temp t2 LEFT OUTER\n> > JOIN d_referral t1 ON t2.url = t1.referral_raw_url ORDER BY t1.id;\n>\n> > What I would like to know is if there are better ways to do the join? I\n> > need to get all the rows back from the referral_temp table as they are\n> > used for assigning FK's for the fact table later in processing. When I\n> > iterate over the values that I get back those with t1.id = null I assign\n> > a new FK and push both into the d_referral table as new entries as well\n> > as a text file for later use. The matching records are written to a\n> > text file for later use.\n>\n> Would something like this work any better (without disabling index scans):\n>\n> SELECT t1.id, t2.url\n> FROM referral_temp t2, d_referral t1\n> WHERE t1.referral_raw_url = t2.url;\n>\n> <process rows with a match>\n>\n> SELECT t1.id, t2.url\n> FROM referral_temp t2\n> WHERE NOT EXISTS\n> (select 1 FROM d_referral t1 WHERE t1.referral_raw_url = t2.url);\n>\n> <process rows without a match>\n>\n> ?\n>\n> Joe\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n", "msg_date": "Fri, 23 Apr 2004 08:19:36 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" }, { "msg_contents": "\nOn Thu, 22 Apr 2004, Sean Shanny wrote:\n\n> I should have included this as well:\n> fsync | on\n> shared_buffers | 4000\n> sort_mem | 64000\n\nFor purposes of loading only, you can try turning off fsync, assuming this \nis a virgin load and you can just re-initdb should bad things happen (OS, \npostgresql crash, power plug pulled, etc...)\n\nAlso increasing sort_mem and shared_buffers might help. Especially \nsort_mem. But turn it back down to something reasonable after the import.\n\nAnd turn fsync back on after the import too. Note you have to restart \npostgresql to make fsync = off take effect.\n\n", "msg_date": "Fri, 23 Apr 2004 10:25:01 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for ideas on how to speed up warehouse loading" } ]
[ { "msg_contents": "Tested the sql on Quad 2.0GHz XEON/8GB RAM:\r\n \r\nDuring the first run, the CS shooted up more than 100k, and was randomly high/low\r\nSecond process made it consistently high 100k+\r\nThird brought it down to anaverage 80-90k\r\nFourth brought it down to an average of 50-60k/s\r\n \r\nBy cancelling the queries one-by-one, the CS started going up again.\r\n \r\n8 logical CPUs in 'top', all of them not at all too busy, load average stood around 2 all the time.\r\n \r\nThanks.\r\nAnjan\r\n \r\n-----Original Message----- \r\nFrom: Josh Berkus [mailto:[email protected]] \r\nSent: Tue 4/20/2004 12:59 PM \r\nTo: Anjan Dave; Dirk Lutzebäck; Tom Lane \r\nCc: [email protected]; Neil Conway \r\nSubject: Re: [PERFORM] Wierd context-switching issue on Xeon\r\n\r\n\r\n\r\n\tAnjan,\r\n\t\r\n\t> Quad 2.0GHz XEON with highest load we have seen on the applications, DB\r\n\t> performing great -\r\n\t\r\n\tCan you run Tom's test? It takes a particular pattern of data access to\r\n\treproduce the issue.\r\n\t\r\n\t--\r\n\tJosh Berkus\r\n\tAglio Database Solutions\r\n\tSan Francisco\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 9: the planner will ignore your desire to choose an index scan if your\r\n\t joining column's datatypes do not match", "msg_date": "Thu, 22 Apr 2004 22:27:55 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wierd context-switching issue on Xeon" } ]
[ { "msg_contents": "I need some help. I have 5 db servers running our database servers, and they \nall are having various degrees of performance problems. The problems we are \nexperiencing are:\n\n1. General slowness\n2. High loads\n\nAll of our db's are running on Dell Poweredge 2650 with 2 P4 Xeons (2.8 -> \n3.06 GHz) with 8 to 12 GB of memory. The databases are running on attached \nDell Powervault 220s running raid5.\n\nThe databases were created and taken into production before I started working \nhere and are very flat. Most of the major tables have a combined primary key \nusing an int field and a single char field. There are some additional \nindexes on some tables. Most queries I see in the logs are running at less \nthan .01 seconds with many significantly slower.\n\nWe are trying to narrow down the performance problem to either the db or the \nhardware. As the dba, I need to try and get these db's tuned to the best \npossible way considering the current db state. We are in the beginning of a \ncomplete db redesign and application re-write, but the completion and \ndeployment of the new db and app are quite a ways off.\n\nAnyway, we are running the following:\nPE 2650 w/ 2 cpus (2.8-3.06) - HT on\n8-12 GB memory\nOS on raid 0\nDB's on Powervaults 220S using raid 5 (over 6 disks)\nEach Postgresql cluster has 2 db up to almost 170db's (project to level out \nthe num of db's/cluster is being started)\nDB's are no bigger than a few GB in size (largest is about 11GB according to a \ndu -h)\nRunning RH ES 2.1\n\nHere is the postgresql.conf from the server with the 11GB db:\n\nmax_connections = 64\nshared_buffers = 32768\t\t# 256MB=32768(buffs)*8192(bytes/buff)\nmax_fsm_relations = 1000\t# min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 10000\t\t# min 1000, fsm is free space map, ~6 bytes\nsort_mem = 4096\t\t\t# 256MB=4096(bytes/proc)*64(procs or conns)\ncheckpoint_segments = 16\t# in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30\t\t# range 30-3600, in seconds\neffective_cache_size = 131072\t# typically 8KB each\nlog_connections = true\nlog_pid = true\nlog_statement = true\nlog_duration = true\nlog_timestamp = true\nstats_start_collector = true\nstats_reset_on_server_start = true\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\nLC_MESSAGES = 'en_US'\nLC_MONETARY = 'en_US'\nLC_NUMERIC = 'en_US'\nLC_TIME = 'en_US'\n\nHere is top (server running pretty good right now)\n 9:28am up 25 days, 16:02, 2 users, load average: 0.54, 0.33, 0.22\n94 processes: 91 sleeping, 3 running, 0 zombie, 0 stopped\nCPU0 states: 64.0% user, 0.1% system, 0.0% nice, 34.0% idle\nCPU1 states: 29.0% user, 9.0% system, 0.0% nice, 60.0% idle\nCPU2 states: 2.0% user, 0.1% system, 0.0% nice, 96.0% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 7720072K av, 7711648K used, 8424K free, 265980K shrd, 749888K buff\nSwap: 2096440K av, 22288K used, 2074152K free 6379304K \ncached\n\nHere is top from another server (with the most db's): \n 9:31am up 25 days, 16:05, 5 users, load average: 2.34, 3.39, 4.28\n147 processes: 145 sleeping, 2 running, 0 zombie, 0 stopped\nCPU0 states: 6.0% user, 1.0% system, 0.0% nice, 91.0% idle\nCPU1 states: 9.0% user, 4.0% system, 0.0% nice, 85.0% idle\nCPU2 states: 9.0% user, 3.0% system, 0.0% nice, 86.0% idle\nCPU3 states: 9.0% user, 4.0% system, 0.0% nice, 85.0% idle\nMem: 7721096K av, 7708040K used, 13056K free, 266132K shrd, 3151336K buff\nSwap: 2096440K av, 24208K used, 2072232K free 3746596K \ncached\n\nThanks 
for any help/advice,\n\nChris\n\n", "msg_date": "Fri, 23 Apr 2004 09:31:17 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with performance problems" }, { "msg_contents": "Your second server has queuing (load averages are highish), only 2 processes\nrunning, and almost all cycles are idle. You need to track down your\nbottleneck. Have you looked at iostat/vmstat? I think it would be useful to\npost these, ideally both before and after full vacuum analyze.\n\n/Aaron\n\n\n----- Original Message ----- \nFrom: \"Chris Hoover\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Friday, April 23, 2004 9:31 AM\nSubject: [PERFORM] Help with performance problems\n\n\nI need some help. I have 5 db servers running our database servers, and\nthey\nall are having various degrees of performance problems. The problems we are\nexperiencing are:\n\n1. General slowness\n2. High loads\n\nAll of our db's are running on Dell Poweredge 2650 with 2 P4 Xeons (2.8 ->\n3.06 GHz) with 8 to 12 GB of memory. The databases are running on attached\nDell Powervault 220s running raid5.\n\nThe databases were created and taken into production before I started\nworking\nhere and are very flat. Most of the major tables have a combined primary\nkey\nusing an int field and a single char field. There are some additional\nindexes on some tables. Most queries I see in the logs are running at less\nthan .01 seconds with many significantly slower.\n\nWe are trying to narrow down the performance problem to either the db or the\nhardware. As the dba, I need to try and get these db's tuned to the best\npossible way considering the current db state. We are in the beginning of a\ncomplete db redesign and application re-write, but the completion and\ndeployment of the new db and app are quite a ways off.\n\nAnyway, we are running the following:\nPE 2650 w/ 2 cpus (2.8-3.06) - HT on\n8-12 GB memory\nOS on raid 0\nDB's on Powervaults 220S using raid 5 (over 6 disks)\nEach Postgresql cluster has 2 db up to almost 170db's (project to level out\nthe num of db's/cluster is being started)\nDB's are no bigger than a few GB in size (largest is about 11GB according to\na\ndu -h)\nRunning RH ES 2.1\n\nHere is the postgresql.conf from the server with the 11GB db:\n\nmax_connections = 64\nshared_buffers = 32768 # 256MB=32768(buffs)*8192(bytes/buff)\nmax_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\nsort_mem = 4096 # 256MB=4096(bytes/proc)*64(procs or conns)\ncheckpoint_segments = 16 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30 # range 30-3600, in seconds\neffective_cache_size = 131072 # typically 8KB each\nlog_connections = true\nlog_pid = true\nlog_statement = true\nlog_duration = true\nlog_timestamp = true\nstats_start_collector = true\nstats_reset_on_server_start = true\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\nLC_MESSAGES = 'en_US'\nLC_MONETARY = 'en_US'\nLC_NUMERIC = 'en_US'\nLC_TIME = 'en_US'\n\nHere is top (server running pretty good right now)\n 9:28am up 25 days, 16:02, 2 users, load average: 0.54, 0.33, 0.22\n94 processes: 91 sleeping, 3 running, 0 zombie, 0 stopped\nCPU0 states: 64.0% user, 0.1% system, 0.0% nice, 34.0% idle\nCPU1 states: 29.0% user, 9.0% system, 0.0% nice, 60.0% idle\nCPU2 states: 2.0% user, 0.1% system, 0.0% nice, 96.0% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 7720072K 
av, 7711648K used, 8424K free, 265980K shrd, 749888K\nbuff\nSwap: 2096440K av, 22288K used, 2074152K free 6379304K\ncached\n\nHere is top from another server (with the most db's):\n 9:31am up 25 days, 16:05, 5 users, load average: 2.34, 3.39, 4.28\n147 processes: 145 sleeping, 2 running, 0 zombie, 0 stopped\nCPU0 states: 6.0% user, 1.0% system, 0.0% nice, 91.0% idle\nCPU1 states: 9.0% user, 4.0% system, 0.0% nice, 85.0% idle\nCPU2 states: 9.0% user, 3.0% system, 0.0% nice, 86.0% idle\nCPU3 states: 9.0% user, 4.0% system, 0.0% nice, 85.0% idle\nMem: 7721096K av, 7708040K used, 13056K free, 266132K shrd, 3151336K\nbuff\nSwap: 2096440K av, 24208K used, 2072232K free 3746596K\ncached\n\nThanks for any help/advice,\n\nChris\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Fri, 23 Apr 2004 11:16:13 -0400", "msg_from": "\"Aaron Werman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "Chris,\n\n> I need some help. I have 5 db servers running our database servers, and\n> they all are having various degrees of performance problems. The problems\n> we are experiencing are:\n\nI'mm confused. You're saying \"general slowness\" but say that most queries run \nin under .01 seconds. And you say \"high loads\" but the TOP snapshots you \nprovide show servers with 2 CPUs idle. \n\nAre you sure you actually *have* a performance issue?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 23 Apr 2004 09:42:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "I know the numbers look ok, but we are definetly suffering. Also, if I try to \nrun any sort of vacuum or other db activity during normal business hours, \nload goes through the roof. I have seen loads of over 10 when trying to \nvacuum the larger cluster and would have to kill the vacuums due to \ncomplaints. \n\nI think this is probably related to the hardware configuration, but I want to \nmake sure that there are no changes I could make configuration wise to the db \nthat might lighten the problem.\n\nI'm especially want to make sure that I have the memory parameters set to good \nnumbers for my db's so that I can minimize thrashing between the postgres \nmemory pools and the hard drive. I am thinking that this may be a big issue \nhere?\n\nThanks for any help,\n\nChris\nOn Friday 23 April 2004 12:42, Josh Berkus wrote:\n> Chris,\n>\n> > I need some help. I have 5 db servers running our database servers, and\n> > they all are having various degrees of performance problems. The\n> > problems we are experiencing are:\n>\n> I'mm confused. You're saying \"general slowness\" but say that most queries\n> run in under .01 seconds. And you say \"high loads\" but the TOP snapshots\n> you provide show servers with 2 CPUs idle.\n>\n> Are you sure you actually *have* a performance issue?\n\n", "msg_date": "Fri, 23 Apr 2004 13:16:03 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "On Fri, 23 Apr 2004, Chris Hoover wrote:\n\n> DB's on Powervaults 220S using raid 5 (over 6 disks)\n\nWhat controller is this, the adaptec? 
We've found it to be slower than \nthe LSI megaraid based controller, but YMMV.\n\n> Running RH ES 2.1\n\nAre you running the latest kernel for ES 2.1? Early 2.4 kernels are \npretty pokey and have some odd behaviour under load that later 2.4 \nkernels seemed to fix.\n\n> Here is the postgresql.conf from the server with the 11GB db:\n> \n> max_connections = 64\n> shared_buffers = 32768\t\t# 256MB=32768(buffs)*8192(bytes/buff)\n> max_fsm_relations = 1000\t# min 10, fsm is free space map, ~40 bytes\n> max_fsm_pages = 10000\t\t# min 1000, fsm is free space map, ~6 bytes\n\nIF you're doing lots of updates and such, you might want these higher.\nHave you vacuumed full the databases since taking over?\n\n> sort_mem = 4096\t\t\t# 256MB=4096(bytes/proc)*64(procs or conns)\n\nSorry, that's wrong. sort_mem is measure in kbytes. i.e. 8192 means 8 \nmegs sort_mem. Try setting it a bit higher (you've got LOTS of ram in these \nboxes) to something like 16 or 32 meg.\n\n> checkpoint_segments = 16\t# in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 30\t\t# range 30-3600, in seconds\n> effective_cache_size = 131072\t# typically 8KB each\n\nThis still looks low. On one machine you're showing kernel cache of about \n.7 gig, on the other it's 6 gig. 6 gigs of kernel cache would be a \nsetting of 800000. It's more of a nudge factor than an exact science, so \ndon't worry too much.\n\nIf you've got fast I/O look at lowering random page cost to something \nbetween 1 and 2. We use 1.3 to 1.4 on most of our machines with fast \ndrives under them.\n\nI'd use vmstat to see if you're I/O bound. \n\nalso, look for index bloat. Before 7.4 it was a serious problem. With \n7.4 regular vacuuming should reclaim most lost space, but there are corner \ncases where you still might need to re-index.\n\n", "msg_date": "Fri, 23 Apr 2004 11:21:32 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "Sorry for the confusion here. I can't run any sort of vacuum durin the day \ndue to performance hits. However, I have run vacuums at night. Several \nnights a week I run a vacuumdb -f -z on all of the clusters. I can take \nserveral hours to complete, but it does complete.\n\nDuring the day, I have tried to run a vacuumdb -v and a vacuumdb -z -v during \nthe day since I read it is supposed to help performance, but as I said, it \ncauses to much of a stress on the system.\n\nI did change the vacuumdb script to do set the vacuum_mem to 512 when \nvacuuming to try and help the situation (from the script: ${PATHNAME}psql \n$PSQLOPT $ECHOOPT -c \"SET vacuum_mem=524288;SET autocommit TO 'on';VACUUM \n$full $verbose $analyze $table\" -d $db ), and I reset it to 8192 at the end.\n\nAnyway, thank you for the ideas so far, and any additional will be greatly \nappreciated.\n\nChris\nOn Friday 23 April 2004 13:44, Kevin Barnard wrote:\n> Chris Hoover wrote:\n> >I know the numbers look ok, but we are definetly suffering. Also, if I\n> > try to run any sort of vacuum or other db activity during normal business\n> > hours, load goes through the roof. I have seen loads of over 10 when\n> > trying to vacuum the larger cluster and would have to kill the vacuums\n> > due to complaints.\n>\n> This is your problem then. You have to regularly vacuum the DB. You\n> might want to dump and reload or schedule a vacuum full. 
If you don't\n> it doesn't matter what you do you will never get decent performance.\n> Make sure you vacuum as a superuser this way you get system tables as well.\n>\n> Killing a vacuum is bad it tends to make the situation worse. If you\n> need to vaccuum one table at a time.\n>\n> >I think this is probably related to the hardware configuration, but I want\n> > to make sure that there are no changes I could make configuration wise to\n> > the db that might lighten the problem.\n> >\n> >I'm especially want to make sure that I have the memory parameters set to\n> > good numbers for my db's so that I can minimize thrashing between the\n> > postgres memory pools and the hard drive. I am thinking that this may be\n> > a big issue here?\n>\n> Get the vacuum done and don't worry about the hardware or the settings\n> until afterwords.\n\n", "msg_date": "Fri, 23 Apr 2004 13:53:17 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "On Friday 23 April 2004 13:21, scott.marlowe wrote:\n> On Fri, 23 Apr 2004, Chris Hoover wrote:\n> > DB's on Powervaults 220S using raid 5 (over 6 disks)\n>\n> What controller is this, the adaptec? We've found it to be slower than\n> the LSI megaraid based controller, but YMMV.\n>\nWe are using the perc3/di controller. Believe it is using the megaraid \ndriver.\n> > Running RH ES 2.1\n>\n> Are you running the latest kernel for ES 2.1? Early 2.4 kernels are\n> pretty pokey and have some odd behaviour under load that later 2.4\n> kernels seemed to fix.\n>\nI'm not sure we are at the latest and greatest for 2.1, but I am trying to get \nthere. Management won't let me do the upgrade w/o first testing/proving it \nwill not cause any more issues. Due to all of the current issues, and the \ncriticality of these systems to our bottom line, they are being very careful \nwith any change that may impact our users further.\n\nWe are waiting on our datacenter to plug in our test server and powervault so \nthat we can test the upgrades the the latest RH 2.1 kernel.\n> > Here is the postgresql.conf from the server with the 11GB db:\n> >\n> > max_connections = 64\n> > shared_buffers = 32768\t\t# 256MB=32768(buffs)*8192(bytes/buff)\n> > max_fsm_relations = 1000\t# min 10, fsm is free space map, ~40 bytes\n> > max_fsm_pages = 10000\t\t# min 1000, fsm is free space map, ~6 bytes\n>\n> IF you're doing lots of updates and such, you might want these higher.\n> Have you vacuumed full the databases since taking over?\n>\n> > sort_mem = 4096\t\t\t# 256MB=4096(bytes/proc)*64(procs or conns)\n>\n> Sorry, that's wrong. sort_mem is measure in kbytes. i.e. 8192 means 8\n> megs sort_mem. Try setting it a bit higher (you've got LOTS of ram in\n> these boxes) to something like 16 or 32 meg.\n>\n> > checkpoint_segments = 16\t# in logfile segments, min 1, 16MB each\n> > checkpoint_timeout = 30\t\t# range 30-3600, in seconds\n> > effective_cache_size = 131072\t# typically 8KB each\n>\n> This still looks low. On one machine you're showing kernel cache of about\n> .7 gig, on the other it's 6 gig. 6 gigs of kernel cache would be a\n> setting of 800000. It's more of a nudge factor than an exact science, so\n> don't worry too much.\nI believe changing this requires a restart of the cluster (correct?). If so, \nI'll try bumping up the effective_cache_size over the weekend.\n\nAlso, will all of the memory available to these machines, should I be running \nwith larger shared_buffers? 
It seems like 256M is a bit small.\n>\n> If you've got fast I/O look at lowering random page cost to something\n> between 1 and 2. We use 1.3 to 1.4 on most of our machines with fast\n> drives under them.\n>\n> I'd use vmstat to see if you're I/O bound.\n>\nIf we end up being I/O bound, should the random page cost be set higher?\n\n> also, look for index bloat. Before 7.4 it was a serious problem. With\n> 7.4 regular vacuuming should reclaim most lost space, but there are corner\n> cases where you still might need to re-index.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\nThanks for the help,\n\nChris\n\n", "msg_date": "Fri, 23 Apr 2004 14:01:29 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "Chris,\n\n> Sorry for the confusion here. I can't run any sort of vacuum durin the day\n> due to performance hits. However, I have run vacuums at night. Several\n> nights a week I run a vacuumdb -f -z on all of the clusters. I can take\n> serveral hours to complete, but it does complete.\n\nWell, here's your first problem: since your FSM pages is low, and you're only \nvacuuming once a day, you've got to have some serious table and index bloat. \nSO you're going to need to do VACUUM FULL on all of your databases, and then \nREINDEX on all of your indexes.\n\nAfter that, raise your max_fsm_pages to something useful, like 1,000,000. Of \ncourse, data on your real rate of updates would help more.\n\nIf you're getting severe disk choke when you vacuum, you probably are I/O \nbound. You may want to try something which allows you to vacuum one table \nat a time, either pg_autovacuum or a custom script.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 23 Apr 2004 11:15:23 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "Josh Berkus wrote:\n\n>Chris,\n>\n> \n>\n>>Sorry for the confusion here. I can't run any sort of vacuum durin the day\n>>due to performance hits. However, I have run vacuums at night. Several\n>>nights a week I run a vacuumdb -f -z on all of the clusters. I can take\n>>serveral hours to complete, but it does complete.\n>> \n>>\n>\n>Well, here's your first problem: since your FSM pages is low, and you're only \n>vacuuming once a day, you've got to have some serious table and index bloat. \n>SO you're going to need to do VACUUM FULL on all of your databases, and then \n>REINDEX on all of your indexes.\n>\n>After that, raise your max_fsm_pages to something useful, like 1,000,000. Of \n>course, data on your real rate of updates would help more.\n>\n>If you're getting severe disk choke when you vacuum, you probably are I/O \n>bound. You may want to try something which allows you to vacuum one table \n>at a time, either pg_autovacuum or a custom script.\n>\n> \n>\nTom and Josh recently gave me some help about setting the fsm settings \nwhich was quite useful. The full message is at\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00229.php\nand the 'most interesting' posrtion was:\n\n Actually, since he's running 7.4, there's an even better way. Do a\n \"VACUUM VERBOSE\" (full-database vacuum --- doesn't matter whether you\n ANALYZE or not). 
At the end of the very voluminous output, you'll see\n something like\n\n\n INFO: free space map: 240 relations, 490 pages stored; 4080 total pages needed\n DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.\n\n\n Here, I would need max_fsm_relations = 240 and max_fsm_pages = 4080 to\n exactly cover the present freespace needs of my system. I concur with\n the suggestion to bump that up a good deal, of course, but that gives\n you a real number to start from.\n\n\n The DETAIL part of the message shows my current settings (which are the\n defaults) and what the FSM is costing me in shared memory space.\n\nGood luck\nRon\n\n\n\n", "msg_date": "Fri, 23 Apr 2004 11:57:23 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "On Friday 23 April 2004 14:57, Ron St-Pierre wrote:\nDoes this apply to 7.3.4 also?\n> Actually, since he's running 7.4, there's an even better way. Do a\n> \"VACUUM VERBOSE\" (full-database vacuum --- doesn't matter whether you\n> ANALYZE or not). At the end of the very voluminous output, you'll see\n> something like\n>\n>\n> INFO: free space map: 240 relations, 490 pages stored; 4080 total pages\n> needed DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n> shared memory.\n>\n>\n> Here, I would need max_fsm_relations = 240 and max_fsm_pages = 4080 to\n> exactly cover the present freespace needs of my system. I concur with\n> the suggestion to bump that up a good deal, of course, but that gives\n> you a real number to start from.\n>\n>\n> The DETAIL part of the message shows my current settings (which are the\n> defaults) and what the FSM is costing me in shared memory space.\n>\n> Good luck\n> Ron\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 23 Apr 2004 15:27:24 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "On Fri, 23 Apr 2004, Chris Hoover wrote:\n\n> On Friday 23 April 2004 13:21, scott.marlowe wrote:\n> > On Fri, 23 Apr 2004, Chris Hoover wrote:\n> > > DB's on Powervaults 220S using raid 5 (over 6 disks)\n> >\n> > What controller is this, the adaptec? We've found it to be slower than\n> > the LSI megaraid based controller, but YMMV.\n> >\n> We are using the perc3/di controller. Believe it is using the megaraid \n> driver.\n\nNo, that's the adaptec, the PERC3/DC is the lsi megaraid. See if there \nare newer drivers for the RAID card. In terms of performance, the adaptec \nand lsi drivers have improved considerably in later versions. In terms of \nstability they've largely gotten better with a few in between releases on \nthe megaraid getting poor grades. The latest / greatest from Dell is \npretty up to date.\n\n> > > Running RH ES 2.1\n> >\n> > Are you running the latest kernel for ES 2.1? Early 2.4 kernels are\n> > pretty pokey and have some odd behaviour under load that later 2.4\n> > kernels seemed to fix.\n> >\n> I'm not sure we are at the latest and greatest for 2.1, but I am trying to get \n> there. Management won't let me do the upgrade w/o first testing/proving it \n> will not cause any more issues. 
Due to all of the current issues, and the \n> criticality of these systems to our bottom line, they are being very careful \n> with any change that may impact our users further.\n\nUnderstood. It's why my production box is still running a 2.4 kernel on \nrh 7.2 with pg 7.2. They just work, but for us stability AND performance \nare both good with our load.\n\nYou can install a new kernel and set up the machine to still boot off of \nthe old one, and test on the weekend to see how it behaves under \nsimulated load. Mining the logs for slow queries is a good way to build \none.\n\nwhile we don't upgrade our production server's applications to the latest \nand greatest all the time (i.e. php or postgresql or openldap) we always \nrun the latest security patches, and I think the latest kernels had \nsecurity fixes for ES 2.1, so NOT upgrading it dangerous. Late model \nlinux kernels (the 2.0.x and 2.2.x where x>20) tend to be VERY stable and \nvery conservatively backported and upgraded, so running a new one isn't \nusually a big risk.\n\n> > > Here is the postgresql.conf from the server with the 11GB db:\n> > >\n> > > max_connections = 64\n> > > shared_buffers = 32768\t\t# 256MB=32768(buffs)*8192(bytes/buff)\n> > > max_fsm_relations = 1000\t# min 10, fsm is free space map, ~40 bytes\n> > > max_fsm_pages = 10000\t\t# min 1000, fsm is free space map, ~6 bytes\n> >\n> > IF you're doing lots of updates and such, you might want these higher.\n> > Have you vacuumed full the databases since taking over?\n> >\n> > > sort_mem = 4096\t\t\t# 256MB=4096(bytes/proc)*64(procs or conns)\n> >\n> > Sorry, that's wrong. sort_mem is measure in kbytes. i.e. 8192 means 8\n> > megs sort_mem. Try setting it a bit higher (you've got LOTS of ram in\n> > these boxes) to something like 16 or 32 meg.\n> >\n> > > checkpoint_segments = 16\t# in logfile segments, min 1, 16MB each\n> > > checkpoint_timeout = 30\t\t# range 30-3600, in seconds\n> > > effective_cache_size = 131072\t# typically 8KB each\n> >\n> > This still looks low. On one machine you're showing kernel cache of about\n> > .7 gig, on the other it's 6 gig. 6 gigs of kernel cache would be a\n> > setting of 800000. It's more of a nudge factor than an exact science, so\n> > don't worry too much.\n> I believe changing this requires a restart of the cluster (correct?). If so, \n> I'll try bumping up the effective_cache_size over the weekend.\n> \n> Also, will all of the memory available to these machines, should I be running \n> with larger shared_buffers? It seems like 256M is a bit small.\n\nNo, you probably shouldn't. PostgreSQL doesn't \"cache\" in the classical \nsense. If all backends close, the stuff they had in their buffers \ndisappears in a flash. So, it's generally considered better to let the \nkernel do the bulk of the caching, and having the buffer area be large \nenough to hold a large portion, if not all, of your working set of data. \nBut between the cache management which is dirt simple and works but seems \nto have performance issues with large numbers of buffers, and the fact \nthat all the memory in it disappears when the last backend using it.\n\nfor instance, in doing the following seq scan select:\n\nexplain analyze select * from test;\n\nwhere test is a ~10 megabyte table, the first time I ran it it took 5 \nseconds to run. The second time took it 2.5, the third 1.9, and it \nlevelled out around there. Starting up another backend and running the \nsame query got a 1.9 second response also. 
Shutting down both \nconnections, and running the query again, with only the kernel for \ncaching, I got 1.9.\n\nThat's on a 2.4.2[2-4] kernel.\n\n> > If you've got fast I/O look at lowering random page cost to something\n> > between 1 and 2. We use 1.3 to 1.4 on most of our machines with fast\n> > drives under them.\n> >\n> > I'd use vmstat to see if you're I/O bound.\n> >\n> If we end up being I/O bound, should the random page cost be set higher?\n\nNot necessarily. Often times on a machine with a lot of memory, you are \nbetter off using index scans where disk seek time would be expensive, but \nwith indexes in ram, the page cost in comparison to seq pages is almost 1, \nwith a slight overhead cost. So, lowering the random page cost favors \nindexes, generally. If your I/O subsystem is doing a lot of seq scans, \nwhen only part of the data set is ever really being worked on, this tends \nto flush out the kernel cache, and we wind up going back to disk over and \nover. On the other hand, if your data is normally going to be \nsequentially accessed, then you'll have to invest in better RAID hardware \n/ more drives etc...\n\nbut with 12 gigs on one box, and an already reasonably fast I/O subsystem \nin place, I'd think a lower random page cost would help, not hurt \nperformance.\n\nHave you explain analyzed your slower queries?\n\n\n", "msg_date": "Fri, 23 Apr 2004 13:58:51 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "Chris Hoover wrote:\n\n>On Friday 23 April 2004 14:57, Ron St-Pierre wrote:\n>Does this apply to 7.3.4 also?\n>\nNo it doesn't, I didn't look back through the thread far enough to see \nwhat you were running. I tried it on 7.3.4 and none of the summary info \nlisted below was returned. FWIW one of our DBs was slowing down \nconsiderably on an update (30+ minutes) and after I changed \nmax_fsm_pages from the 7.4 default of 20,000 to 50,000, it completed in \nabout eight minutes.\n\nRon\n\n> \n>\n>> Actually, since he's running 7.4, there's an even better way. Do a\n>> \"VACUUM VERBOSE\" (full-database vacuum --- doesn't matter whether you\n>> ANALYZE or not). At the end of the very voluminous output, you'll see\n>> something like\n>>\n>>\n>> INFO: free space map: 240 relations, 490 pages stored; 4080 total pages\n>>needed DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB\n>>shared memory.\n>>\n>>\n>> Here, I would need max_fsm_relations = 240 and max_fsm_pages = 4080 to\n>> exactly cover the present freespace needs of my system. 
I concur with\n>> the suggestion to bump that up a good deal, of course, but that gives\n>> you a real number to start from.\n>>\n>>\n>> The DETAIL part of the message shows my current settings (which are the\n>> defaults) and what the FSM is costing me in shared memory space.\n>>\n>>Good luck\n>>Ron\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 7: don't forget to increase your free space map settings\n>> \n>>\n>\n>\n>\n> \n>\n\n\n", "msg_date": "Fri, 23 Apr 2004 14:36:31 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> Here is the postgresql.conf from the server with the 11GB db:\n\n> max_fsm_pages = 10000\t\t# min 1000, fsm is free space map, ~6 bytes\n\nIt's unlikely that that's enough for an 11Gb database, especially if\nyou're only vacuuming a few times a week. You should make your next run\nbe a \"vacuum verbose\" and look at the output to get an idea of what sort\nof table bloat you're seeing, but I'll bet it's bad ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Apr 2004 22:58:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance problems " }, { "msg_contents": "scott.marlowe wrote:\n> On Fri, 23 Apr 2004, Chris Hoover wrote:\n> \n> \n>>DB's on Powervaults 220S using raid 5 (over 6 disks)\n> \n> \n> What controller is this, the adaptec? We've found it to be slower than \n> the LSI megaraid based controller, but YMMV.\n\nWow, really? You got any more details of the chipset, mobo and kernel \ndriver ?\n\nI've been taken to my wits end wrestling with an LSI MegaRAID 320-1 \ncontroller on a supermicro board all weekend. I just couldn't get \nanything more than 10MB/sec out of it with megaraid driver v1 OR v2 in \nLinux 2.4.26, nor the version in 2.6.6-rc2. After 2 days of humming the \nAdaptec mantra I gave in and switched the array straight onto the \nonboard Adaptec 160 controller (same cable and everything). Software \nRAID 5 gets me over 40MB sec for a nominal cpu hit - more than 4 times \nwhat I could get out of the MegaRAID controller :( Even the 2nd SCSI-2 \nchannel gets 40MB/sec max (pg_xlog :)\n\nAnd HOW LONG does it take to detect drives during POST....ohhhh never \nmind ... I really just wanna rant :) There should be a free counseling \nservice for enraged sysops.\n\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n", "msg_date": "Mon, 26 Apr 2004 10:01:58 +0100", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Help with performance problems" }, { "msg_contents": "Ok, I was able to run a vacuumdb -f -v on my largest db over the weekend. \nHowever, I am having trouble reading the results of the table portion. Here \narea a couple of tables, what should I be looking at. First table is the key \ntable to the db, and the second is the largest table in the db. \n\nThanks Chris\n\nINFO: --Relation public.clmhdr--\nINFO: Pages 32191: Changed 0, reaped 5357, Empty 0, New 0; Tup 339351: Vac \n48358, Keep/VTL 0/0, UnUsed 129, MinLen 560, MaxLen 696; Re-using: Free/Av\nail. Space 42011004/32546120; EndEmpty/Avail. 
Pages 0/5310.\n CPU 0.53s/0.09u sec elapsed 0.61 sec.\nINFO: Index clmhdr_pkey: Pages 1429; Tuples 339351: Deleted 48358.\n CPU 0.06s/0.28u sec elapsed 4.54 sec.\nINFO: Index clmhdr_hdr_user_id_idx: Pages 1711; Tuples 339351: Deleted 48358.\n CPU 0.09s/0.31u sec elapsed 2.40 sec.\nINFO: Index clmhdr_hdr_clm_status_idx: Pages 1237; Tuples 339351: Deleted \n48358.\n CPU 0.03s/0.26u sec elapsed 1.66 sec.\nINFO: Index clmhdr_hdr_create_dt_idx: Pages 1475; Tuples 339351: Deleted \n48358.\n CPU 0.05s/0.24u sec elapsed 1.96 sec.\nINFO: Index clmhdr_inv_idx: Pages 1429; Tuples 339351: Deleted 48358.\n CPU 0.08s/0.22u sec elapsed 1.20 sec.\nINFO: Index clmhdr_userid_status_idx: Pages 2161; Tuples 339351: Deleted \n48358.\n CPU 0.05s/0.18u sec elapsed 3.02 sec.\nINFO: Rel clmhdr: Pages: 32191 --> 28247; Tuple(s) moved: 8257.\n CPU 0.37s/1.81u sec elapsed 16.24 sec.\nINFO: Index clmhdr_pkey: Pages 1429; Tuples 339351: Deleted 8257.\n CPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: Index clmhdr_hdr_user_id_idx: Pages 1743; Tuples 339351: Deleted 8257.\n CPU 0.00s/0.05u sec elapsed 0.04 sec.\nINFO: Index clmhdr_hdr_clm_status_idx: Pages 1265; Tuples 339351: Deleted \n8257.\n CPU 0.00s/0.04u sec elapsed 0.03 sec.\nINFO: Index clmhdr_hdr_create_dt_idx: Pages 1503; Tuples 339351: Deleted \n8257.\n CPU 0.00s/0.04u sec elapsed 0.12 sec.\nINFO: Index clmhdr_inv_idx: Pages 1429; Tuples 339351: Deleted 8257.\n CPU 0.00s/0.04u sec elapsed 0.03 sec.\nINFO: Index clmhdr_userid_status_idx: Pages 2203; Tuples 339351: Deleted \n8257.\n CPU 0.01s/0.03u sec elapsed 0.04 sec.\n\nINFO: --Relation public.sent837--\nINFO: Pages 463552: Changed 0, reaped 6690, Empty 0, New 0; Tup 27431539: Vac \n204348, Keep/VTL 0/0, UnUsed 2801, MinLen 107, MaxLen 347; Re-using: \nFree/Avail. Space 54541468/34925860; EndEmpty/Avail. Pages 0/70583.\n CPU 10.68s/2.18u sec elapsed 188.32 sec.\nINFO: Index sent837_pkey: Pages 124424; Tuples 27431539: Deleted 204348.\n CPU 4.24s/3.45u sec elapsed 144.79 sec.\nINFO: Rel sent837: Pages: 463552 --> 459954; Tuple(s) moved: 91775.\n CPU 1.12s/9.36u sec elapsed 20.13 sec.\nINFO: Index sent837_pkey: Pages 124424; Tuples 27431539: Deleted 91775.\n CPU 3.51s/2.03u sec elapsed 6.13 sec.\n\n", "msg_date": "Mon, 26 Apr 2004 13:20:56 -0400", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance problems" }, { "msg_contents": "On Mon, 26 Apr 2004, Rob Fielding wrote:\n\n> scott.marlowe wrote:\n> > On Fri, 23 Apr 2004, Chris Hoover wrote:\n> > \n> > \n> >>DB's on Powervaults 220S using raid 5 (over 6 disks)\n> > \n> > \n> > What controller is this, the adaptec? We've found it to be slower than \n> > the LSI megaraid based controller, but YMMV.\n> \n> Wow, really? You got any more details of the chipset, mobo and kernel \n> driver ?\n\nWe're running on a Dell 2650, the controller is the U320 LSI megaraid 2 \nchannel (they only make the one that I know of right now). Don't know my \nmobo chipset offhand, but might be able to find out what one dell includes \non the 2650. The kernel driver is the latest megaraid2 driver as of about \nFeb this year.\n\n> I've been taken to my wits end wrestling with an LSI MegaRAID 320-1 \n> controller on a supermicro board all weekend. I just couldn't get \n> anything more than 10MB/sec out of it with megaraid driver v1 OR v2 in \n> Linux 2.4.26, nor the version in 2.6.6-rc2. 
After 2 days of humming the \n> Adaptec mantra I gave in and switched the array straight onto the \n> onboard Adaptec 160 controller (same cable and everything). Software \n> RAID 5 gets me over 40MB sec for a nominal cpu hit - more than 4 times \n> what I could get out of the MegaRAID controller :( Even the 2nd SCSI-2 \n> channel gets 40MB/sec max (pg_xlog :)\n> \n> And HOW LONG does it take to detect drives during POST....ohhhh never \n> mind ... I really just wanna rant :) There should be a free counseling \n> service for enraged sysops.\n\nI wonder if your controller is broken or something? Or maybe on a PCI \nslow that has to share IRQs or something. I've had great luck with \nSuperMicro mobos in the past (we're talking dual PPro 200 mobos, so \nseriously, IN THE PAST here... ) Hell, my Dual PPro 200 with an old \nMegaRAID 428 got 18 Megs a second cfer rate no problem.\n\nHave you tried that lsi card in another machine / mobo combo? Can you \ndisable the onboard adaptec? We have on our Dell 2650s, the only active \ncontrollers are the onboard IDE and the add in LSI-320-2 controller.\n\nWe're running ours with 128 Meg cache (I think could be 64) set to write \nback. I think our throughput on a RAID-1 pair was somewhere around 40+ \nmegs a second reads with bonnie++ With RAID-5 it was not really much \nfaster at reads (something like 60 megs a second) but was much more \nscalable under heavy parellel read/write access for PostgreSQL.\n\nHave you updated the BIOS on the mobo to see if that helps? I'm just \nthrowing darts at the wall here.\n\n", "msg_date": "Tue, 27 Apr 2004 12:02:45 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Help with performance problems" } ]
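The configuration advice scattered through the thread above condenses into a short postgresql.conf sketch. This is only an illustration of the suggestions made (sort_mem is in kilobytes, effective_cache_size should roughly track the kernel cache, max_fsm_pages has to cover what vacuum frees, random_page_cost can come down on a fast array); the absolute values are assumptions for a box showing about 6 GB of kernel cache, not measurements from Chris's servers, and the FSM settings only take effect after a postmaster restart.

# Illustrative postgresql.conf excerpt based on the suggestions in this thread
shared_buffers = 32768           # ~256 MB; leave the bulk of RAM to the kernel cache
sort_mem = 16384                 # kilobytes, i.e. 16 MB per sort, not bytes per connection
effective_cache_size = 800000    # ~6 GB of kernel cache / 8 kB pages
max_fsm_pages = 1000000          # must cover pages freed between vacuum runs
max_fsm_relations = 1000
random_page_cost = 1.4           # favour index scans when the drives are fast

Watching vmstat or iostat while a vacuum runs, as suggested earlier in the thread, is still the quickest way to confirm whether the machines are I/O bound before leaning on random_page_cost.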
[ { "msg_contents": "\nPWFPM_DEV=# select * from pg_locks;\n relation | database | transaction | pid | mode |\ngranted\n----------+----------+-------------+-------+--------------------------+-----\n----\n 17472 | 17347 | | 2618 | ShareUpdateExclusiveLock | t\n | | 10858533 | 28778 | ExclusiveLock | t\n 17472 | 17347 | | 2618 | ShareUpdateExclusiveLock | t\n | | 10803814 | 2618 | ExclusiveLock | t\n 16759 | 17347 | | 28778 | AccessShareLock | t\n(5 rows)\n\nPWFPM_DEV=#\n\n17347 is the database PWFPM_DEV iod, The pids are below\n\n[root@murphy root]# ps -ef |grep 28778|grep -v \"grep\"\npostgres 28778 504 0 18:06 ? 00:00:00 postgres: scores PWFPM_DEV\n[local] idle\n[root@murphy root]# ps -ef |grep 2618|grep -v \"grep\"\npostgres 2618 504 8 Apr22 ? 02:31:00 postgres: postgres PWFPM_DEV\n[local] VACUUM\n[root@murphy root]#\nA vacuum is running now. I restarted the database, set vacuum_mem =\n'196608'; and started a new vacuum. I also stopped inserting into the\ndatabase.\nI hoping I will get some results.\n\nPWFPM_DEV=# select now();vacuum verbose analyze forecastelement;select\nnow();\n now\n-------------------------------\n 2004-04-22 13:38:02.083592+00\n(1 row)\n\nINFO: vacuuming \"public.forecastelement\"\nINFO: index \"forecastelement_rwv_idx\" now contains 391385895 row versions\nin 5051132 pages\nDETAIL: 27962015 index row versions were removed.\n771899 index pages have been deleted, 496872 are currently reusable.\nCPU 4499.54s/385.76u sec elapsed 55780.91 sec.\nINFO: \"forecastelement\": removed 33554117 row versions in 737471 pages\nDETAIL: CPU 135.61s/83.99u sec elapsed 1101.26 sec.\n-----Original Message-----\nFrom: Christopher Kings-Lynne [mailto:[email protected]]\nSent: Tuesday, April 20, 2004 9:26 PM\nTo: Shea,Dan [CIS]\nCc: [email protected]\nSubject: Re: [PERFORM] Why will vacuum not end?\n\n\n> No, but data is constantly being inserted by userid scores. It is\npostgres\n> runnimg the vacuum.\n> Dan.\n\nWell, inserts create some locks - perhaps that's the problem...\n\nOtherwise, check the pg_locks view to see if you can figure it out.\n\nChris\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Fri, 23 Apr 2004 14:19:04 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "Guys,\n\n> Well, inserts create some locks - perhaps that's the problem...\n>\n> Otherwise, check the pg_locks view to see if you can figure it out.\n\nFWIW, I've had this happen a couple of times, too. Unfortunately, it's \nhappend in the middle of the day so that I had to cancel the processes and \nget the system back to normal in too much of a hurry to consider documenting \nit.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 23 Apr 2004 11:47:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
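The pg_locks listing above is easier to act on when each pid is tied back to what that backend is running. A minimal sketch for 7.4, assuming stats_command_string is enabled (as in the postgresql.conf posted earlier) so that current_query is populated:

-- Sketch: pair each lock with the query of the backend holding it (7.4 catalog names)
SELECT l.pid,
       l.relation::regclass AS locked_relation,
       l.mode,
       l.granted,
       a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.pid;

In the output above every lock shows granted = t, so a join like this would mostly confirm that the inserts and the VACUUM are coexisting rather than blocking one another.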
[ { "msg_contents": "Hi, \n\nI have a query which I think should be using an index all of the time but \npostgres only uses the index part of the time. The index \n(ticket_crm_map_crm_id_suppid) has the where clause column (crm_id) listed \nfirst followed by the selected column (support_person_id). Wouldn't the \nmost efficient plan be to scan the index each time because the only columns \nneeded are in the index? Below is the table, 2 queries showing the \ndifference in plans, followed by the record distribution of ticket_crm_map. \nI first did a 'vacuum analyze' to update the statistics. \n\nThanks,\nBrad \n\n\nathenapost=> \\d ticket_crm_map\n Table \"public.ticket_crm_map\"\n Column | Type | \nModifiers\n ------------------------+-----------------------------+--------------------- \n -----------------------\ntcrm_map_id | integer | not null\nticket_id | integer | not null\ncrm_id | integer | not null\nsupport_person_id | integer | not null\nescalated_to_person_id | integer | not null\nstatus | character varying(50) | not null default \n'Open'::character varying\nclose_date | timestamp without time zone |\nupdated_date | timestamp without time zone |\nupdated_by | character varying(255) |\ncreated_date | timestamp without time zone |\ncreated_by | character varying(255) |\nadditional_info | text |\nsubject | character varying(255) |\nIndexes:\n \"ticket_crm_map_pkey\" primary key, btree (tcrm_map_id)\n \"ticket_crm_map_crm_id_key\" unique, btree (crm_id, ticket_id)\n \"ticket_crm_map_crm_id_suppid\" btree (crm_id, support_person_id)\n \"ticket_crm_map_status\" btree (status)\n \"ticket_crm_map_ticket_id\" btree (ticket_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (ticket_id) REFERENCES ticket(ticket_id)\n \"$2\" FOREIGN KEY (crm_id) REFERENCES company_crm(crm_id)\n \"$3\" FOREIGN KEY (support_person_id) REFERENCES person(person_id)\n \"$4\" FOREIGN KEY (escalated_to_person_id) REFERENCES person(person_id)\n \"$5\" FOREIGN KEY (status) REFERENCES ticket_status(status) \n\nathenapost=> explain analyze select distinct support_person_id from \nticket_crm_map where crm_id = 7;\n \nQUERY PLAN\n ---------------------------------------------------------------------------- \n ---------------------------------------------------------------------------- \n ----------\nUnique (cost=1262.99..1265.27 rows=1 width=4) (actual time=15.335..18.245 \nrows=20 loops=1)\n -> Sort (cost=1262.99..1264.13 rows=456 width=4) (actual \ntime=15.332..16.605 rows=2275 loops=1)\n Sort Key: support_person_id\n -> Index Scan using ticket_crm_map_crm_id_suppid on ticket_crm_map \n(cost=0.00..1242.85 rows=456 width=4) (actual time=0.055..11.281 rows=2275 \nloops=1)\n Index Cond: (crm_id = 7)\nTotal runtime: 18.553 ms\n(6 rows) \n\nTime: 20.598 ms\nathenapost=> explain analyze select distinct support_person_id from \nticket_crm_map where crm_id = 1;\n QUERY PLAN\n ---------------------------------------------------------------------------- \n -----------------------------------------------------\nUnique (cost=10911.12..11349.26 rows=32 width=4) (actual \ntime=659.102..791.517 rows=24 loops=1)\n -> Sort (cost=10911.12..11130.19 rows=87628 width=4) (actual \ntime=659.090..713.285 rows=93889 loops=1)\n Sort Key: support_person_id\n -> Seq Scan on ticket_crm_map (cost=0.00..3717.25 rows=87628 \nwidth=4) (actual time=0.027..359.299 rows=93889 loops=1)\n Filter: (crm_id = 1)\nTotal runtime: 814.601 ms\n(6 rows) \n\nTime: 817.095 ms\nathenapost=> select count(*), crm_id from ticket_crm_map group by crm_id;\ncount | crm_id\n 
-------+--------\n 2554 | 63\n 129 | 25\n 17 | 24\n 110 | 23\n 74 | 22\n 69 | 21\n 2 | 20\n 53 | 82\n 10 | 17\n 16 | 81\n46637 | 16\n 14 | 80\n 2 | 15\n 1062 | 79\n 87 | 78\n 93 | 77\n 60 | 44\n 363 | 76\n 225 | 10\n 4 | 74\n 83 | 9\n 27 | 73\n 182 | 8\n 2275 | 7\n 15 | 71\n 554 | 6\n 44 | 70\n 631 | 5\n 37 | 4\n 190 | 3\n 112 | 2\n93889 | 1\n(32 rows) \n\nTime: 436.697 ms\n", "msg_date": "Fri, 23 Apr 2004 15:21:21 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "index usage" }, { "msg_contents": "\nOn Fri, 23 Apr 2004 [email protected] wrote:\n\n> I have a query which I think should be using an index all of the time but\n> postgres only uses the index part of the time. The index\n> (ticket_crm_map_crm_id_suppid) has the where clause column (crm_id) listed\n> first followed by the selected column (support_person_id). Wouldn't the\n> most efficient plan be to scan the index each time because the only columns\n> needed are in the index? Below is the table, 2 queries showing the\n\nNot necessarily. The rows in the actual file still need to be checked to\nsee if they're visible to the select and if it's expected that the entire\nfile (or a reasonable % of the pages anyway) will need to be loaded using\nthe index isn't necessarily a win.\n\n> athenapost=> explain analyze select distinct support_person_id from\n> ticket_crm_map where crm_id = 1;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -----------------------------------------------------\n> Unique (cost=10911.12..11349.26 rows=32 width=4) (actual\n> time=659.102..791.517 rows=24 loops=1)\n> -> Sort (cost=10911.12..11130.19 rows=87628 width=4) (actual\n> time=659.090..713.285 rows=93889 loops=1)\n> Sort Key: support_person_id\n> -> Seq Scan on ticket_crm_map (cost=0.00..3717.25 rows=87628\n> width=4) (actual time=0.027..359.299 rows=93889 loops=1)\n> Filter: (crm_id = 1)\n> Total runtime: 814.601 ms\n\nHow far off is this from the index scan version in time? Try doing\nset enable_seqscan=off; and then explain analyzing again.\nIt's possible that you may wish to lower random_page_cost to change the\nestimated effect of how much more expensive random reads are compared to\nsequential ones.\n", "msg_date": "Mon, 26 Apr 2004 11:58:09 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage" }, { "msg_contents": "\nWhen checking an index in postgres the original table has to be checked for \neach result to find if the index entry is still valid? In which case you \ncan't blindly scan the whole index and assume the data is good. I was used \nto Oracle behavior where the index is up to date so it can do the scan \nwithout hitting the original table. \n\nDoes this sound correct to anyone? \n\nThanks,\nBrad \n\n\nStephan Szabo writes:\n> On Fri, 23 Apr 2004 [email protected] wrote: \n> \n>> I have a query which I think should be using an index all of the time but\n>> postgres only uses the index part of the time. The index\n>> (ticket_crm_map_crm_id_suppid) has the where clause column (crm_id) listed\n>> first followed by the selected column (support_person_id). Wouldn't the\n>> most efficient plan be to scan the index each time because the only columns\n>> needed are in the index? Below is the table, 2 queries showing the\n> \n> Not necessarily. 
The rows in the actual file still need to be checked to\n> see if they're visible to the select and if it's expected that the entire\n> file (or a reasonable % of the pages anyway) will need to be loaded using\n> the index isn't necessarily a win. \n> \n \n\n", "msg_date": "Mon, 26 Apr 2004 12:16:28 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: index usage" }, { "msg_contents": "On Mon, 26 Apr 2004, Stephan Szabo wrote:\n\n> \n> On Fri, 23 Apr 2004 [email protected] wrote:\n> \n> > I have a query which I think should be using an index all of the time but\n> > postgres only uses the index part of the time. The index\n> > (ticket_crm_map_crm_id_suppid) has the where clause column (crm_id) listed\n> > first followed by the selected column (support_person_id). Wouldn't the\n> > most efficient plan be to scan the index each time because the only columns\n> > needed are in the index? Below is the table, 2 queries showing the\n> \n> Not necessarily. The rows in the actual file still need to be checked to\n> see if they're visible to the select and if it's expected that the entire\n> file (or a reasonable % of the pages anyway) will need to be loaded using\n> the index isn't necessarily a win.\n\nWhile those of us familiar with PostgreSQL are well aware of the fact that \nindexes can't be used directly to garner information, but only as a lookup \nto a tuple in the table, it seems this misconception is quite common among \nthose coming to postgreSQL from other databases.\n\nIs there any information that directly reflects this issue in the docs? \nThere are tons of hints that it works this way in how they're written, but \nnothing that just comes out and says that with pgsql's mvcc \nimplementation, an index scan still has to hit the pages that contain the \ntuples, so often in pgsql a seq scan is a win where in other databases and \nindex scan would have been a win?\n\nIf not, where would I add it if I were going to write something up for the \ndocs? Just wondering...\n\n", "msg_date": "Wed, 28 Apr 2004 09:40:08 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> There are tons of hints that it works this way in how they're written, but \n> nothing that just comes out and says that with pgsql's mvcc \n> implementation, an index scan still has to hit the pages that contain the \n> tuples, so often in pgsql a seq scan is a win where in other databases and \n> index scan would have been a win?\n\n> If not, where would I add it if I were going to write something up for the \n> docs? Just wondering...\n\nAFAIR the only place in the docs that mentions seqscan or indexscan at\nall is the discussion of EXPLAIN in \"Performance Tips\". Perhaps a\nsuitably-enlarged version of that section could cover this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 2004 12:41:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage " } ]
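Stephan's suggestion in the thread above can be tried directly: force the index scan once, compare the actual runtimes, and only then adjust the cost model. A short sketch of that session (the random_page_cost value is just an example):

-- Compare the forced index scan against the seq scan chosen by default
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT DISTINCT support_person_id FROM ticket_crm_map WHERE crm_id = 1;
SET enable_seqscan = on;

-- If the index scan wins on this hardware, lower the estimated cost of
-- random reads rather than leaving seq scans disabled:
SET random_page_cost = 1.5;   -- test per session first, then move to postgresql.conf

Because every index hit still has to visit the heap page to check tuple visibility, it is quite possible the seq scan remains the cheaper plan for crm_id = 1, which matches the majority of the table.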
[ { "msg_contents": "Josh, how long should a vacuum take on a 87 GB table with a 39 GB index?\n\nI do not think that the verbose option of vacuum is verbose enough.\nThe vacuum keeps redoing the index, but there is no indication as to why it\nis doing this. \n\nI see alot of activity with transaction logs being recycled (15 to 30 every\n3 to 20 minutes). \nIs the vacuum causing this?\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Friday, April 23, 2004 2:48 PM\nTo: Shea,Dan [CIS]; 'Christopher Kings-Lynne'\nCc: [email protected]\nSubject: Re: [PERFORM] Why will vacuum not end?\n\n\nGuys,\n\n> Well, inserts create some locks - perhaps that's the problem...\n>\n> Otherwise, check the pg_locks view to see if you can figure it out.\n\nFWIW, I've had this happen a couple of times, too. Unfortunately, it's \nhappend in the middle of the day so that I had to cancel the processes and \nget the system back to normal in too much of a hurry to consider documenting\n\nit.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 24 Apr 2004 10:45:40 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "Dan,\n\n> Josh, how long should a vacuum take on a 87 GB table with a 39 GB index?\n\nDepends:\n-- What's your disk support?\n-- VACUUM, VACUUM ANALYZE, or VACUUM FULL?\n-- What's your vacuum_mem setting?\n-- What are checkpoint and wal settings?\n\n> I see alot of activity with transaction logs being recycled (15 to 30 every\n> 3 to 20 minutes).\n> Is the vacuum causing this?\n\nProbably, yes. How many checkpoint_buffers do you allow?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 24 Apr 2004 08:35:21 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "On Sat, 24 Apr 2004 10:45:40 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>[...] 87 GB table with a 39 GB index?\n\n>The vacuum keeps redoing the index, but there is no indication as to why it\n>is doing this. \n\nIf VACUUM finds a dead tuple, if does not immediately remove index\nentries pointing to that tuple. It instead collects such tuple ids and\nlater does a bulk delete, i.e. scans the whole index and removes all\nindex items pointing to one of those tuples. The number of tuple ids\nthat can be remembered is controlled by vacuum_mem: it is\n\n\tVacuumMem * 1024 / 6\n\nWhenever this number of dead tuples has been found, VACUUM scans the\nindex (which takes ca. 60000 seconds, more than 16 hours), empties the\nlist and continues to scan the heap ...\n\n From the number of dead tuples you can estimate how often your index\nwill be scanned. If dead tuples are evenly distributed, expect there to\nbe 15 index scans with your current vacuum_mem setting of 196608. So\nyour VACUUM will run for 11 days :-(\n\nOTOH this would mean that there are 500 million dead tuples. Do you\nthink this is possible?\n\nServus\n Manfred\n", "msg_date": "Sat, 24 Apr 2004 19:57:22 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
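Manfred's estimate above is worth spelling out as arithmetic, because it explains why the log shows the same index being scanned over and over:

-- Rough arithmetic, using the numbers quoted in this thread:
--   tuple ids held per pass = vacuum_mem * 1024 / 6
--                           = 196608 * 1024 / 6    = 33,554,432 (about 33.5 million)
--   index passes needed     = dead tuples / batch  = ~500 million / 33.5 million ~ 15
--   time per index pass     ~ 60,000 s, so 15 passes ~ 900,000 s, i.e. 10-11 days

Raising vacuum_mem, or vacuuming before hundreds of millions of rows are dead, attacks the first two lines of that calculation.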
[ { "msg_contents": "Manfred is indicating the reason it is taking so long is due to the number\nof dead tuples in my index and the vacuum_mem setting. \nThe last delete that I did before starting a vacuum had 219,177,133\ndeletions.\nDan.\n>Dan,\n\n>> Josh, how long should a vacuum take on a 87 GB table with a 39 GB index?\n\n>Depends:\n>-- What's your disk support?\n\n>-- VACUUM, VACUUM ANALYZE, or VACUUM FULL?\nVACUUM ANALYZE\n>-- What's your vacuum_mem setting?\nset vacuum_mem = '196608'\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync \n>-- What are checkpoint and wal settings?\nwal_buffers = 64 \ncheckpoint_segments = 30 \ncheckpoint_timeout = 300\n\n>> I see alot of activity with transaction logs being recycled (15 to 30\nevery\n>> 3 to 20 minutes).\n>> Is the vacuum causing this?\n\n>Probably, yes. How many checkpoint_buffers do you allow?\nI am not sure what the checkpoint_buffers are, we are running 7.4.0?\n>-- \n>Josh Berkus\n>Aglio Database Solutions\n>San Francisco\n", "msg_date": "Sat, 24 Apr 2004 15:48:19 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "On Sat, 24 Apr 2004 15:48:19 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>Manfred is indicating the reason it is taking so long is due to the number\n>of dead tuples in my index and the vacuum_mem setting. \n\n<nitpicking>\nNot dead tuples in the index, but dead tuples in the table.\n</nitpicking>\n\n>The last delete that I did before starting a vacuum had 219,177,133\n>deletions.\n\nOk, with vacuum_mem = 196608 the bulk delete batch size is ca. 33.5 M\ntuple ids. 219 M dead tuples will cause 7 index scans. The time for an\nindex scan is more or less constant, 60000 seconds in your case. So\nyes, a larger vacuum_mem will help, but only if you really have as much\n*free* memory. Forcing the machine into swapping would make things\nworse.\n\nBTW, VACUUM frees millions of index pages, is your FSM large enough?\n\nServus\n Manfred\n", "msg_date": "Sun, 25 Apr 2004 02:05:22 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
[ { "msg_contents": "There were defintely 219,177,133 deletions. \nThe deletions are most likely from the beginning, it was based on the\nreception_time of the data.\nI would rather not use re-index, unless it is faster then using vacuum.\nWhat do you think would be the best way to get around this?\nIncrease vacuum_mem to a higher amount 1.5 to 2 GB or try a re-index (rather\nnot re-index so that data can be queried without soing a seqscan).\nOnce the index is cleaned up, how does vacuum handle the table? \nDoes it take as long as the index or is it faster?\n\n\n\n-----Original Message-----\nFrom: Manfred Koizar [mailto:[email protected]]\nSent: Saturday, April 24, 2004 1:57 PM\nTo: Shea,Dan [CIS]\nCc: 'Josh Berkus'; [email protected]\nSubject: Re: [PERFORM] Why will vacuum not end?\n\n\nOn Sat, 24 Apr 2004 10:45:40 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>[...] 87 GB table with a 39 GB index?\n\n>The vacuum keeps redoing the index, but there is no indication as to why it\n>is doing this. \n\nIf VACUUM finds a dead tuple, if does not immediately remove index\nentries pointing to that tuple. It instead collects such tuple ids and\nlater does a bulk delete, i.e. scans the whole index and removes all\nindex items pointing to one of those tuples. The number of tuple ids\nthat can be remembered is controlled by vacuum_mem: it is\n\n\tVacuumMem * 1024 / 6\n\nWhenever this number of dead tuples has been found, VACUUM scans the\nindex (which takes ca. 60000 seconds, more than 16 hours), empties the\nlist and continues to scan the heap ...\n\n From the number of dead tuples you can estimate how often your index\nwill be scanned. If dead tuples are evenly distributed, expect there to\nbe 15 index scans with your current vacuum_mem setting of 196608. So\nyour VACUUM will run for 11 days :-(\n\nOTOH this would mean that there are 500 million dead tuples. Do you\nthink this is possible?\n\nServus\n Manfred\n", "msg_date": "Sat, 24 Apr 2004 15:58:08 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "On Sat, 24 Apr 2004 15:58:08 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>There were defintely 219,177,133 deletions. \n>The deletions are most likely from the beginning, it was based on the\n>reception_time of the data.\n>I would rather not use re-index, unless it is faster then using vacuum.\n\nI don't know whether it would be faster. But if you decide to reindex,\nmake sure sort_mem is *huge*!\n\n>What do you think would be the best way to get around this?\n>Increase vacuum_mem to a higher amount 1.5 to 2 GB or try a re-index (rather\n>not re-index so that data can be queried without soing a seqscan).\n\nJust out of curiosity: What kind of machine is this running on? And\nhow long does a seq scan take?\n\n>Once the index is cleaned up, how does vacuum handle the table? \n\nIf you are lucky VACUUM frees half the index pages. And if we assume\nthat the most time spent scanning an index goes into random page\naccesses, future VACUUMs will take \"only\" 30000 seconds per index scan.\n\nServus\n Manfred\n", "msg_date": "Sun, 25 Apr 2004 02:28:37 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" 
}, { "msg_contents": "Dan,\n\n> There were defintely 219,177,133 deletions.\n> The deletions are most likely from the beginning, it was based on the\n> reception_time of the data.\n\nYou need to run VACUUM more often, I think. Vacuuming out 219 million dead \ntuples is going to take a long time no matter how you look at it. If you \nvacuum more often, presumably there will be less for Vacuum to do each time.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 24 Apr 2004 18:49:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
[ { "msg_contents": "It is set at max_fsm_pages = 1500000 .\n\nWe are running a \nDELL PowerEdge 6650 with 4 CPU's\nMem: 3611320k av from top.\nThe database is on a shared device (SAN) raid5, 172 GB.\nQlogic Fibre optic cards(desc: \"QLogic Corp.|QLA2312 Fibre Channel Adapter\")\nconnected to the Dell version of an EMC SAN (FC4700 I believe).\n\nI have set vacuum_mem = 917504;\nand started another vacuum verbose on the table in question.\nTried to set vacuum_mem to 1114112 and vacuum failed, then tried 917504 and\nvacuum started.\n\nPWFPM_DEV=# set vacuum_mem = '1114112';\nSET\nPWFPM_DEV=# show vacuum_mem;\n vacuum_mem\n------------\n 1114112\n(1 row)\n\nPWFPM_DEV=# vacuum verbose forecastelement;\n\nINFO: vacuuming \"public.forecastelement\"\nERROR: invalid memory alloc request size 1140850686\nPWFPM_DEV=# set vacuum_mem = 917504;\nSET\nPWFPM_DEV=# show vacuum_mem;\n vacuum_mem\n------------\n 917504\n(1 row)\n\nPWFPM_DEV=# select now();vacuum verbose forecastelement;select now();\n now\n-------------------------------\n 2004-04-25 01:40:23.367123+00\n(1 row)\n\nINFO: vacuuming \"public.forecastelement\"\n\nI performed a query that used a seqscan\n\nPWFPM_DEV=# explain analyze select count(*) from forecastelement;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------------------------\n Aggregate (cost=16635987.60..16635987.60 rows=1 width=0) (actual\ntime=13111152.844..13111152.847 rows=1 loops=1)\n -> Seq Scan on forecastelement (cost=0.00..15403082.88 rows=493161888\nwidth=0) (actual time=243.562..12692714.422 rows=264422681 loops=1)\n Total runtime: 13111221.978 ms\n(3 rows)\n\nDan.\n\n-----Original Message-----\nFrom: Manfred Koizar [mailto:[email protected]]\nSent: Saturday, April 24, 2004 8:29 PM\nTo: Shea,Dan [CIS]\nCc: 'Josh Berkus'; [email protected]\nSubject: Re: [PERFORM] Why will vacuum not end?\n\n\nOn Sat, 24 Apr 2004 15:58:08 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>There were defintely 219,177,133 deletions. \n>The deletions are most likely from the beginning, it was based on the\n>reception_time of the data.\n>I would rather not use re-index, unless it is faster then using vacuum.\n\nI don't know whether it would be faster. But if you decide to reindex,\nmake sure sort_mem is *huge*!\n\n>What do you think would be the best way to get around this?\n>Increase vacuum_mem to a higher amount 1.5 to 2 GB or try a re-index\n(rather\n>not re-index so that data can be queried without soing a seqscan).\n\nJust out of curiosity: What kind of machine is this running on? And\nhow long does a seq scan take?\n\n>Once the index is cleaned up, how does vacuum handle the table? \n\nIf you are lucky VACUUM frees half the index pages. And if we assume\nthat the most time spent scanning an index goes into random page\naccesses, future VACUUMs will take \"only\" 30000 seconds per index scan.\n\nServus\n Manfred\n", "msg_date": "Sun, 25 Apr 2004 09:05:11 -0400", "msg_from": "\"Shea,Dan [CIS]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why will vacuum not end?" }, { "msg_contents": "On Sun, 25 Apr 2004 09:05:11 -0400, \"Shea,Dan [CIS]\" <[email protected]>\nwrote:\n>It is set at max_fsm_pages = 1500000 .\n\nThis might be too low. Your index has ca. 5 M pages, you are going to\ndelete half of its entries, and what you delete is a contiguous range of\nvalues. So up to 2.5 M index pages might be freed (minus inner nodes\nand pages not completely empty). 
And there will be lots of free heap\npages too ...\n\nI wrote:\n>If you are lucky VACUUM frees half the index pages. And if we assume\n>that the most time spent scanning an index goes into random page\n>accesses, future VACUUMs will take \"only\" 30000 seconds per index scan.\n\nAfter a closer look at the code and after having slept over it I'm not\nso sure any more that the number of tuple ids to be removed has only\nminor influence on the time spent for a bulk delete run. After the\ncurrent VACUUM has finished would you be so kind to run another VACUUM\nVERBOSE with only a few dead tuples and post the results here?\n\nServus\n Manfred\n", "msg_date": "Sun, 25 Apr 2004 22:46:54 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why will vacuum not end?" } ]
[ { "msg_contents": "I am using the oid of the table as the main key and I've found that is \nnot indexed (maybe because I have declared another primary key in the table)\n\nit is a good practice to create an index like this on the oid of a table?\nCREATE INDEX idoid annuncio400 USING btree (oid);\n\n\ndoes it work as a normal index?\n\nThank you\nEdoardo\n", "msg_date": "Mon, 26 Apr 2004 18:38:00 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "is a good practice to create an index on the oid?" }, { "msg_contents": "Yes, you can create an index on the oid, but unless you are selecting on\nit, it is of little use.\n\nyou would have to do select * from foo where oid=? to get any value out\nof the index.\n\nDave\nOn Mon, 2004-04-26 at 12:38, Edoardo Ceccarelli wrote:\n> I am using the oid of the table as the main key and I've found that is \n> not indexed (maybe because I have declared another primary key in the table)\n> \n> it is a good practice to create an index like this on the oid of a table?\n> CREATE INDEX idoid annuncio400 USING btree (oid);\n> \n> \n> does it work as a normal index?\n> \n> Thank you\n> Edoardo\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n> !DSPAM:408d7c38183971270217895!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Mon, 26 Apr 2004 19:04:07 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] is a good practice to create an index on the oid?" }, { "msg_contents": "> I am using the oid of the table as the main key and I've found that is \n> not indexed (maybe because I have declared another primary key in the \n> table)\n> \n> it is a good practice to create an index like this on the oid of a table?\n> CREATE INDEX idoid annuncio400 USING btree (oid);\n\nYes it is - in fact you really should add a unique index, not just a \nnormal index, as you want to enforce uniqueness of the oid column. It \nis theoretically possible to end up with duplicate oids in wraparound \nsituations.\n\nEven better though is to not use oids at all, of course...\n\nChris\n\n", "msg_date": "Tue, 27 Apr 2004 10:48:01 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] is a good practice to create an index on the oid?" }, { "msg_contents": "AFAIK, oids aren't used for anything internally, so duplicates don't\nreally matter. Besides, what would you do about duplicate oid's ?\n\nThe best suggestion is of course his last, don't use them.\n\n\nOn Mon, 2004-04-26 at 22:48, Christopher Kings-Lynne wrote:\n> > I am using the oid of the table as the main key and I've found that is \n> > not indexed (maybe because I have declared another primary key in the \n> > table)\n> > \n> > it is a good practice to create an index like this on the oid of a table?\n> > CREATE INDEX idoid annuncio400 USING btree (oid);\n> \n> Yes it is - in fact you really should add a unique index, not just a \n> normal index, as you want to enforce uniqueness of the oid column. 
It \n> is theoretically possible to end up with duplicate oids in wraparound \n> situations.\n> \n> Even better though is to not use oids at all, of course...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n> \n> !DSPAM:408dcc51235334924183622!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 27 Apr 2004 07:59:47 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "> AFAIK, oids aren't used for anything internally, so duplicates don't\n> really matter. Besides, what would you do about duplicate oid's ?\n\nIf he's using them _externally_, then he does have to worry about \nduplicates.\n\nChris\n\n", "msg_date": "Tue, 27 Apr 2004 23:01:22 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "Edoardo, \n\nAre you using them for referential integrity? If so you would be wise to\nuse sequences instead. \n\nChristopher: yes you are correct, I wasn't sure if that is what he was\ndoing.\n\nDave\nOn Tue, 2004-04-27 at 11:01, Christopher Kings-Lynne wrote:\n> > AFAIK, oids aren't used for anything internally, so duplicates don't\n> > really matter. Besides, what would you do about duplicate oid's ?\n> \n> If he's using them _externally_, then he does have to worry about \n> duplicates.\n> \n> Chris\n> \n> \n> \n> !DSPAM:408e75e0137721921318500!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n", "msg_date": "Tue, 27 Apr 2004 11:18:34 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "I am going to use them as primary key of the table, so I'll surely need \nthem unique :)\nthank you for you help\nEdoardo\n\nDave Cramer ha scritto:\n\n>Edoardo, \n>\n>Are you using them for referential integrity? If so you would be wise to\n>use sequences instead. \n>\n>Christopher: yes you are correct, I wasn't sure if that is what he was\n>doing.\n>\n>Dave\n>On Tue, 2004-04-27 at 11:01, Christopher Kings-Lynne wrote:\n> \n>\n>>>AFAIK, oids aren't used for anything internally, so duplicates don't\n>>>really matter. Besides, what would you do about duplicate oid's ?\n>>> \n>>>\n>>If he's using them _externally_, then he does have to worry about \n>>duplicates.\n>>\n>>Chris\n>>\n>>\n>>\n>>!DSPAM:408e75e0137721921318500!\n>>\n>>\n>> \n>>\n", "msg_date": "Tue, 27 Apr 2004 23:42:58 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "> I am going to use them as primary key of the table, so I'll surely need \n> them unique :)\n\nEduoardo, I REALLY suggest you don't use them at all. You should make a \nprimary key like this:\n\nCREATE TABLE blah (\n id SERIAL PRIMARY KEY,\n ...\n);\n\nAlso note that by default, OIDs are NOT dumped by pg_dump. 
You will \nneed to add extra switches to your pg_dump backup to ensure that they are.\n\nChris\n\n", "msg_date": "Wed, 28 Apr 2004 09:30:44 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "do you mean that, declaring an index serial, I'd never have to deal with \nincrementing its primary key? good to know!\nanyway in this particular situation I don't need such accurate \nbehaviour: this table is filled up with a lot of data twice per week and \nit's used only to answer queries.\nI could drop it whenever I want :)\n\nThanks again,\neddy\n\nChristopher Kings-Lynne ha scritto:\n\n>> I am going to use them as primary key of the table, so I'll surely \n>> need them unique :)\n>\n>\n> Eduoardo, I REALLY suggest you don't use them at all. You should make \n> a primary key like this:\n>\n> CREATE TABLE blah (\n> id SERIAL PRIMARY KEY,\n> ...\n> );\n>\n> Also note that by default, OIDs are NOT dumped by pg_dump. You will \n> need to add extra switches to your pg_dump backup to ensure that they \n> are.\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n>\n", "msg_date": "Wed, 28 Apr 2004 10:13:14 +0200", "msg_from": "Edoardo Ceccarelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "> do you mean that, declaring an index serial, I'd never have to deal with \n> incrementing its primary key? good to know!\n\nYep. You can use 'DEFAULT' as the value, eg:\n\nINSERT INTO blah (DEFAULT, ...);\n\n> anyway in this particular situation I don't need such accurate \n> behaviour: this table is filled up with a lot of data twice per week and \n> it's used only to answer queries.\n> I could drop it whenever I want :)\n\nSure.\n\nChris\n\n", "msg_date": "Wed, 28 Apr 2004 16:25:26 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" }, { "msg_contents": "On Wed, Apr 28, 2004 at 10:13:14 +0200,\n Edoardo Ceccarelli <[email protected]> wrote:\n> do you mean that, declaring an index serial, I'd never have to deal with \n> incrementing its primary key? good to know!\n\nThat isn't what is happening. Serial is a special type. It is int plus\na default rule linked to a sequence. No index is created by default\nfor the serial type. Declaring a column as a primary key will however\ncreate a unique index on that column.\n\nAlso note that you should only assume that the serial values are unique.\n(This assumes that you don't use setval and that you don't roll a sequence\nover.) Within a single session you can assume the sequence values will\nbe monotonicly increasing. The values that end up in your table can have\ngaps. Typically this happens when a transaction rolls back after obtaining\na new value from a sequence. 
It can also happen if you grab sequence\nvalues in larger blocks (which might be more efficient if a session normally\nacquires mulitple values from a particular sequence) than the default 1.\n\n> anyway in this particular situation I don't need such accurate \n> behaviour: this table is filled up with a lot of data twice per week and \n> it's used only to answer queries.\n> I could drop it whenever I want :)\n\nYou really don't want to use oids.\n", "msg_date": "Wed, 28 Apr 2004 03:32:23 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [JDBC] [PERFORM] is a good practice to create an index on the" } ]
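Putting the advice from this thread into one runnable sketch (the table is a cut-down stand-in for the poster's annuncio400 and the column is invented for illustration):

-- preferred approach: a SERIAL surrogate key instead of relying on OIDs.
-- SERIAL is just integer plus a default taken from a sequence; the unique
-- index comes from the PRIMARY KEY constraint, not from SERIAL itself.
CREATE TABLE annuncio (
    id          SERIAL PRIMARY KEY,
    description text
);

-- the key is filled in from the sequence automatically
INSERT INTO annuncio (description) VALUES ('some listing text');

-- if the OID really has to be used as an external key, at least make it
-- unique and indexed (this only works on tables that keep their OIDs)
CREATE UNIQUE INDEX annuncio_oid_idx ON annuncio (oid);

-- and remember that pg_dump only preserves OIDs when asked to:
--   pg_dump -o mydb > mydb.dump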
[ { "msg_contents": "Hi, \n\nI have a query which I think should be using an index all of the time but \npostgres only uses the index part of the time. The index \n(ticket_crm_map_crm_id_suppid) has the where clause column (crm_id) listed \nfirst followed by the selected column (support_person_id). Wouldn't the \nmost efficient plan be to scan the index regardless of crm_id because the \nonly columns needed are in the index? Below is the table, 2 queries showing \nthe difference in plans, followed by the record distribution of \nticket_crm_map. I first did a 'vacuum analyze' and am running postgres \n7.4.2. \n\nThanks,\nBrad \n\n\nathenapost=> \\d ticket_crm_map\n Table \"public.ticket_crm_map\"\n Column | Type | \nModifiers\n ------------------------+-----------------------------+--------------------- \n -----------------------\ntcrm_map_id | integer | not null\nticket_id | integer | not null\ncrm_id | integer | not null\nsupport_person_id | integer | not null\nescalated_to_person_id | integer | not null\nstatus | character varying(50) | not null default \n'Open'::character varying\nclose_date | timestamp without time zone |\nupdated_date | timestamp without time zone |\nupdated_by | character varying(255) |\ncreated_date | timestamp without time zone |\ncreated_by | character varying(255) |\nadditional_info | text |\nsubject | character varying(255) |\nIndexes:\n \"ticket_crm_map_pkey\" primary key, btree (tcrm_map_id)\n \"ticket_crm_map_crm_id_key\" unique, btree (crm_id, ticket_id)\n \"ticket_crm_map_crm_id_suppid\" btree (crm_id, support_person_id)\n \"ticket_crm_map_status\" btree (status)\n \"ticket_crm_map_ticket_id\" btree (ticket_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (ticket_id) REFERENCES ticket(ticket_id)\n \"$2\" FOREIGN KEY (crm_id) REFERENCES company_crm(crm_id)\n \"$3\" FOREIGN KEY (support_person_id) REFERENCES person(person_id)\n \"$4\" FOREIGN KEY (escalated_to_person_id) REFERENCES person(person_id)\n \"$5\" FOREIGN KEY (status) REFERENCES ticket_status(status) \n\nathenapost=> explain analyze select distinct support_person_id from \nticket_crm_map where crm_id = 7;\n \nQUERY PLAN\n ---------------------------------------------------------------------------- \n ---------------------------------------------------------------------------- \n ----------\nUnique (cost=1262.99..1265.27 rows=1 width=4) (actual time=15.335..18.245 \nrows=20 loops=1)\n -> Sort (cost=1262.99..1264.13 rows=456 width=4) (actual \ntime=15.332..16.605 rows=2275 loops=1)\n Sort Key: support_person_id\n -> Index Scan using ticket_crm_map_crm_id_suppid on ticket_crm_map \n(cost=0.00..1242.85 rows=456 width=4) (actual time=0.055..11.281 rows=2275 \nloops=1)\n Index Cond: (crm_id = 7)\nTotal runtime: 18.553 ms\n(6 rows) \n\nTime: 20.598 ms\nathenapost=> explain analyze select distinct support_person_id from \nticket_crm_map where crm_id = 1;\n QUERY PLAN\n ---------------------------------------------------------------------------- \n -----------------------------------------------------\nUnique (cost=10911.12..11349.26 rows=32 width=4) (actual \ntime=659.102..791.517 rows=24 loops=1)\n -> Sort (cost=10911.12..11130.19 rows=87628 width=4) (actual \ntime=659.090..713.285 rows=93889 loops=1)\n Sort Key: support_person_id\n -> Seq Scan on ticket_crm_map (cost=0.00..3717.25 rows=87628 \nwidth=4) (actual time=0.027..359.299 rows=93889 loops=1)\n Filter: (crm_id = 1)\nTotal runtime: 814.601 ms\n(6 rows) \n\nTime: 817.095 ms\nathenapost=> select count(*), crm_id from ticket_crm_map group by 
crm_id;\ncount | crm_id\n -------+--------\n2554 | 63\n129 | 25\n 17 | 24\n110 | 23\n 74 | 22\n 69 | 21\n 2 | 20\n 53 | 82\n 10 | 17\n 16 | 81\n46637 | 16\n 14 | 80\n 2 | 15\n1062 | 79\n 87 | 78\n 93 | 77\n 60 | 44\n363 | 76\n225 | 10\n 4 | 74\n 83 | 9\n 27 | 73\n182 | 8\n2275 | 7\n 15 | 71\n554 | 6\n 44 | 70\n631 | 5\n 37 | 4\n190 | 3\n112 | 2\n93889 | 1\n(32 rows) \n\nTime: 436.697 ms \n\n", "msg_date": "Mon, 26 Apr 2004 10:11:03 -0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "planner/optimizer question" }, { "msg_contents": "Hi,\n\nYou should try the next queries:\n\nselect support_person_id from ticket_crm_map where crm_id = 7 GROUP BY\nsupport_person_id;\nselect support_person_id from ticket_crm_map where crm_id = 1 GROUP BY\nsupport_person_id;\n\nIt can use the 'ticket_crm_map_crm_id_suppid' index. \n\nGenerally the Postgres use an k-column index if columns of your\nconditions are prefix of the index column.\nFor example:\nCREATE INDEX test_idx on test(col1,col2,col3,col4);\nSELECT * FROM test WHERE col1=3 AND col2=13; -- This can use the index.\n\nBut the next queries cannot use the index:\nSELECT * FROM test WHERE col1=3 AND col3=13;.\nSELECT * FROM test WHERE col2=3;\n\nIf you have problem with seq_scan or sort, you can disable globally and\nlocally: \nSET enable_seqscan=0;\nSET enable_sort = 0;\n\nRegards, Antal Attila\n\n\n", "msg_date": "Tue, 27 Apr 2004 14:54:27 +0200", "msg_from": "\"Atesz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "[email protected] writes:\n> ... Wouldn't the most efficient plan be to scan the index regardless\n> of crm_id because the only columns needed are in the index?\n\nNo. People coming from other databases often have the misconception\nthat queries can be answered by looking only at an index. That is never\ntrue in Postgres because row validity info is only stored in the table;\nso we must always visit the table entry to make sure the row is still\nvalid/visible for the current query.\n\nAccordingly, columns added to the index that aren't constrained by the\nWHERE clause are not very useful ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 2004 00:27:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "I know you will shoot me down, but...\n\nWhy is there an entry in the index for a row if the row is not valid? \nWouldn't it be better for the index entry validity to track the row validity. \nIf a particular data value for a query (join, where etc.) can be satisfied \nby the index entry itself this would be a big performance gain.\n\nCheers,\nGary.\n\nOn 28 Apr 2004 at 0:27, Tom Lane wrote:\n\n> [email protected] writes:\n> > ... Wouldn't the most efficient plan be to scan the index regardless\n> > of crm_id because the only columns needed are in the index?\n> \n> No. People coming from other databases often have the misconception\n> that queries can be answered by looking only at an index. 
That is never\n> true in Postgres because row validity info is only stored in the table;\n> so we must always visit the table entry to make sure the row is still\n> valid/visible for the current query.\n> \n> Accordingly, columns added to the index that aren't constrained by the\n> WHERE clause are not very useful ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n", "msg_date": "Wed, 28 Apr 2004 07:35:41 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "> Why is there an entry in the index for a row if the row is not valid? \n> Wouldn't it be better for the index entry validity to track the row validity. \n> If a particular data value for a query (join, where etc.) can be satisfied \n> by the index entry itself this would be a big performance gain.\n\nFor SELECTs, yes - but for INSERT, UPDATE and DELETE it would be a big \nperformance loss.\n\nChris\n\n", "msg_date": "Wed, 28 Apr 2004 15:04:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "I can understand the performance loss on non-selects for keeping the \nindex validity state tracking the row validity, but would that outweigh the \nperformance gains on selects? Depends on your mix of selects to non \nselects I guess, but other database systems seem to imply that keeping \nthe index on track is worth it overall.\n\nCheers,\nGary.\n\nOn 28 Apr 2004 at 15:04, Christopher Kings-Lynne wrote:\n\n> > Why is there an entry in the index for a row if the row is not valid? \n> > Wouldn't it be better for the index entry validity to track the row validity. \n> > If a particular data value for a query (join, where etc.) can be satisfied \n> > by the index entry itself this would be a big performance gain.\n> \n> For SELECTs, yes - but for INSERT, UPDATE and DELETE it would be a big \n> performance loss.\n> \n> Chris\n> \n\n\n", "msg_date": "Wed, 28 Apr 2004 08:08:03 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "> I can understand the performance loss on non-selects for keeping the \n> index validity state tracking the row validity, but would that outweigh the \n> performance gains on selects? Depends on your mix of selects to non \n> selects I guess, but other database systems seem to imply that keeping \n> the index on track is worth it overall.\n\nYes, some sort of flag on index creation would be sweet :)\n\nChris\n\n", "msg_date": "Wed, 28 Apr 2004 15:23:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On Wed, 28 Apr 2004 07:35:41 +0100, \"Gary Doades\" <[email protected]>\nwrote:\n>Why is there an entry in the index for a row if the row is not valid? \n\nBecause whether a row is seen as valid or not lies in the eye of the\ntransaction looking at it. Full visibility information is stored in the\nheap tuple header. 
The developers' consensus is that this overhead\nshould not be in every index tuple.\n\nServus\n Manfred\n", "msg_date": "Wed, 28 Apr 2004 09:34:59 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> On Wed, 28 Apr 2004 07:35:41 +0100, \"Gary Doades\" <[email protected]>\n> wrote:\n>> Why is there an entry in the index for a row if the row is not valid? \n\n> Because whether a row is seen as valid or not lies in the eye of the\n> transaction looking at it. Full visibility information is stored in the\n> heap tuple header. The developers' consensus is that this overhead\n> should not be in every index tuple.\n\nStoring that information would at least double the overhead space used\nfor each index tuple. The resulting index bloat would significantly\nslow index operations by requiring more I/O. So it's far from clear\nthat this would be a win, even for those who care only about select\nspeed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 2004 09:05:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On Wed, 28 Apr 2004 09:05:04 -0400, Tom Lane <[email protected]> wrote:\n>> [ ... visibility information in index tuples ... ]\n\n>Storing that information would at least double the overhead space used\n>for each index tuple. The resulting index bloat would significantly\n>slow index operations by requiring more I/O. So it's far from clear\n>that this would be a win, even for those who care only about select\n>speed.\n\nWhile the storage overhead could be reduced to 1 bit (not a joke) we'd\nstill have the I/O overhead of locating and updating index tuples for\nevery heap tuple deleted/updated.\n\nServus\n Manfred\n", "msg_date": "Thu, 29 Apr 2004 19:03:03 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On 29 Apr 2004 at 19:03, Manfred Koizar wrote:\n \n> While the storage overhead could be reduced to 1 bit (not a joke) we'd\n> still have the I/O overhead of locating and updating index tuples for\n> every heap tuple deleted/updated.\n\nBut this is what a lot of DBMSs do and seem to do well enough. I can see that the \nMVCC system gives additional problems, but maybe it shouldn't be dismissed so lightly.\n\nComing from a MS SQLServer platform I have spent a lot of time optimising SQL in \nPostgreSQL to be comparable to SQLServer. For the most part I have done this, but \nsome things are just slower in PostgreSQL.\n\nRecently I have been looking at raw performance (CPU, IO) rather than the plans. I \nhave some test queries that (as far as I can determine) use the same access plans on \nPostgreSQL and SQLServer. Getting to the detail, an index scan of an index on a \ninteger column (222512 rows) takes 60ms on SQLServer and 540ms on PostgreSQL.\nA full seq table scan on the same table without the index on the other hand takes 370ms \nin SQLServer and 420ms in PostgreSQL.\n\nI know that the platforms are different (windows 2000 vs Linux 2.6.3), but the statement \nwas executed several times to make sure the index and data was in cache (no disk io) \non both systems. 
Same data, Same CPU, Same disks, Same memory, Same \nmotherboards.\n\nThe only thing I can think of is the way that the index scan is performed on each \nplatform, SQLServer can use the data directly from the index. This makes the biggest \ndifference in multi join statements where several of the intermediate tables do not need \nto be accessed at all, the data is contained in the join indexes. This results in almost an \norder of magnitude performance difference for the same data.\n\nI would be nice to get a feel for how much performance loss would be incurred in \nmaintaining the index flags against possible performance gains for getting the data back \nout again.\n\nRegards,\nGary.\n\n", "msg_date": "Thu, 29 Apr 2004 19:13:51 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "> I would be nice to get a feel for how much performance loss would be incurred in \n> maintaining the index flags against possible performance gains for getting the data back \n> out again.\n\nI guess the real question is, why maintain index flags and not simply\ndrop the index entry altogether?\n\nA more interesting case would be to have the backend process record\nindex tuples that it would invalidate (if committed), then on commit\nsend that list to a garbage collection process.\n\nIt's still vacuum -- just the reaction time for it would be much\nquicker.\n\n\n", "msg_date": "Thu, 29 Apr 2004 15:12:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "> \n> I guess the real question is, why maintain index flags and not simply\n> drop the index entry altogether?\n> \n> A more interesting case would be to have the backend process record\n> index tuples that it would invalidate (if committed), then on commit\n> send that list to a garbage collection process.\n> \n> It's still vacuum -- just the reaction time for it would be much\n> quicker.\n> \nThis was my original question.\n\nI guess the problem is with MVCC. The row may have gone from your \ncurrent view of the table but not from someone elses. I don't (yet) \nunderstand the way it works to say for sure, but I still think it is worth \npursuing further for someone who does know the deep stuff. They seem \nto have concluded that it is not worth it however.\n\nCheers,\nGary.\n\n\n\n", "msg_date": "Thu, 29 Apr 2004 20:23:19 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "while you weren't looking, Gary Doades wrote:\n\n> Recently I have been looking at raw performance (CPU, IO) \n> rather than the plans. I have some test queries that (as far\n> as I can determine) use the same access plans on PostgreSQL\n> and SQLServer. Getting to the detail, an index scan of an\n> index on a integer column (222512 rows) takes 60ms on\n> SQLServer and 540ms on PostgreSQL.\n\nAfter a recent power outage, I had the opportunity to watch both\nPostgreSQL and MS SQL come back from forced shutdowns (clean,\nthough there were active connections, in one case a bulk insert).\nPostgreSQL was available and responsive as soon as the postmaster\nhad started. 
MS SQL, on the other hand, took the better part of\nan hour to become remotely usable again -- on a radically faster\nmachine (Dell 6650, versus the 6450 we run PostgreSQL on).\n\nDigging a bit, I noted that once MS SQL was up again, it was\nusing nearly 2GB main memory even when more or less idle. From\nthis, and having observed the performance differences between\nthe two, I'm left with little alternative but to surmise that\npart of MS SQL's noted performance advantage [1] is due to its\nforcibly storing its indices in main memory. Its startup lag\n(during which it was utterly unusable; even SELECTs blocked)\ncould be accounted for by reindexing the tables. [2]\n\nGranted, this is only a hypothesis, is rather unverifyable, and\nprobably belongs more on ADVOCACY than it does PERFORM, but it\nseemed relevant.\n\nIt's also entirely possible your indices are using inaccurate\nstatistical information. Have you ANALYZEd recently?\n\n/rls\n\n[1] Again, at least in our case, the comparison is entirely\n invalid, as MS SQL gets a hell of a lot more machine than\n PostgreSQL. Even so, for day-to-day work and queries, even\n our DBA, an until-recently fervent MS SQL advocate can't\n fault PostgreSQL's SELECT, INSERT or DELETE performance.\n We still can't get UPDATEs (at least bulk such) to pass\n muster.\n\n[2] This is further supported by having observed MS SQL run a\n \"recovery process\" on databases that were entirely unused,\n even for SELECT queries, at the time of the outage. The\n only thing it might conceivably need to recover on them\n is in-memory indices that were lost when power was lost.\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n", "msg_date": "Thu, 29 Apr 2004 15:12:29 -0500", "msg_from": "\"Rosser Schwarz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "\n> It's also entirely possible your indices are using inaccurate\n> statistical information. Have you ANALYZEd recently?\n> \n\nIn this example the statistics don't matter. The plans used were the same for \nMSSQL and Postgres. I was trying to eliminate the difference in plans \nbetween the two, which obviously does make a difference, sometimes in \nMSSQL favour and sometimes the other way round. Both systems, having \ndecided to do the same index scan, took noticably different times. The \nPostgres database was fully vacuumed and analysed anyway.\n\nI agree about MSSQL recovery time. it sucks. This is why they are making a \nbig point about the improved recovery time in \"yukon\". Although the recovery \ntime is important, I see this as an exception, whereas at the moment I am \ninterested in the everyday.\n\nCheers,\nGary.\n\n", "msg_date": "Thu, 29 Apr 2004 21:26:18 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "Gary,\n\n> In this example the statistics don't matter. The plans used were the same \nfor \n> MSSQL and Postgres. I was trying to eliminate the difference in plans \n> between the two, which obviously does make a difference, sometimes in \n> MSSQL favour and sometimes the other way round. Both systems, having \n> decided to do the same index scan, took noticably different times. The \n> Postgres database was fully vacuumed and analysed anyway.\n\nIt's also quite possble the MSSQL simply has more efficient index scanning \nimplementation that we do. 
They've certainly had incentive; their storage \nsystem sucks big time for random lookups and they need those fast indexes. \n(just try to build a 1GB adjacency list tree on SQL Server. I dare ya).\n\nCertainly the fact that MSSQL is essentially a single-user database makes \nthings easier for them. They don't have to maintain multiple copies of the \nindex tuples in memory. I think that may be our main performance loss.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 29 Apr 2004 13:54:33 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On 29 Apr 2004 at 15:35, Kenneth Marshall wrote:\n\n> Did you try to cluster based on the index?\n> \n> --Ken\n\nYes, This speeds up the index scan a little (12%). This to me just \nreinforces the overhead that subsequently having to go and fetch the \ndata tuple actually has on the performance.\n\nCheers,\nGary.\n\n", "msg_date": "Thu, 29 Apr 2004 22:08:28 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On 29 Apr 2004 at 13:54, Josh Berkus wrote:\n\n> Gary,\n> \n> \n> It's also quite possble the MSSQL simply has more efficient index scanning \n> implementation that we do. They've certainly had incentive; their storage \n> system sucks big time for random lookups and they need those fast indexes. \n> (just try to build a 1GB adjacency list tree on SQL Server. I dare ya).\n> \n> Certainly the fact that MSSQL is essentially a single-user database makes \n> things easier for them. They don't have to maintain multiple copies of the \n> index tuples in memory. I think that may be our main performance loss.\n> \n\nPossibly, but MSSQL certainly uses data from indexes and cuts out the \nsubsequent (possibly random seek) data fetch. This is also why the \n\"Index Tuning Wizard\" often recommends multi column compound \nindexes in some cases. I've tried these recommendations on occasions \nand they certainly speed up the selects significantly. If anyhing the index \nscan on the new compound index must be slower then the original single \ncolumn index and yet it still gets the data faster.\n\nThis indicates to me that it is not the scan (or IO) performance that is \nmaking the difference, but not having to go get the data row.\n\nCheers,\nGary.\n\n", "msg_date": "Thu, 29 Apr 2004 22:15:12 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "\"Gary Doades\" <[email protected]> writes:\n> In this example the statistics don't matter.\n\nDon't they?\n\nA prior poster mentioned that he thought MSSQL tries to keep all its\nindexes in memory. I wonder whether you are giving Postgres a fair\nchance to do the same. What postgresql.conf settings are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 2004 17:54:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On 29 Apr 2004 at 17:54, Tom Lane wrote:\n\n> \"Gary Doades\" <[email protected]> writes:\n> > In this example the statistics don't matter.\n> \n> Don't they?\n> \n> A prior poster mentioned that he thought MSSQL tries to keep all its\n> indexes in memory. I wonder whether you are giving Postgres a fair\n> chance to do the same. 
What postgresql.conf settings are you using?\n> \n> \t\t\tregards, tom lane\n\nAs far as I understand it the statistics only contribute to determining the query plan. \nOnce the access methods are determined, the stats don't matter during the running of \nthe query.\n\nI believe I have given Postgres exactly the same chance. The data is small enough to fit \ninto RAM (all the tables in the query add up to around 50meg) and I executed the query \nseveral times to get a consistent figure for the explain analyze. \n\nHaving picked out an index scan as being the highest time user I concentrated on that in \nthis case and compared the same index scan on MSSQL. At least MSSQL reported it as \nan index scan on the same index for the same number of rows.\n\nThere was nothing wrong with the query plan that Postgres used. As far as I could see it \nwas probably the best one to use, it just physically took longer than the same access \nplan on MSSQL.\n\nThe query and plan are included below, the main thing I was looking at was the index \nscan on staff_booking_pkey being 676ms long.\n\nThe only postgresql.conf parameters changed from the default are:\n\nshared_buffers = 3000\t\nsort_mem = 4096\neffective_cache_size = 15000\ndefault_statistics_target = 100\n\nThere was no disk IO (above the small background IO) during the final run of the query \nas reported by vmstat (Task Mangler on Windows).\n\nSELECT B.CONTRACT_ID,SUM(R.DURATION+1)/60.0 AS SUMDUR FROM \nSEARCH_REQT_RESULT TSR\nJOIN STAFF_BOOKING B ON (B.STAFF_ID = TSR.STAFF_ID)\nJOIN ORDER_REQT R ON (R.REQT_ID = B.REQT_ID)\nJOIN BOOKING_PLAN BP ON (BP.BOOKING_ID = B.BOOKING_ID) AND \nBP.BOOKING_DATE BETWEEN '2004-04-12' AND '2004-04-18' AND \nTSR.SEARCH_ID = 8 GROUP BY B.CONTRACT_ID\n\nQUERY PLAN\nHashAggregate (cost=11205.80..11209.81 rows=401 width=6) (actual \ntime=1179.729..1179.980 rows=50 loops=1)\n -> Nested Loop (cost=326.47..11203.79 rows=401 width=6) (actual \ntime=39.700..1177.149 rows=652 loops=1)\n -> Hash Join (cost=326.47..9990.37 rows=401 width=8) (actual \ntime=39.537..1154.807 rows=652 loops=1)\n Hash Cond: (\"outer\".staff_id = \"inner\".staff_id)\n -> Merge Join (cost=320.39..9885.06 rows=3809 width=12) (actual \ntime=38.316..1143.953 rows=4079 loops=1)\n Merge Cond: (\"outer\".booking_id = \"inner\".booking_id)\n -> Index Scan using staff_booking_pkey on staff_booking b \n(cost=0.00..8951.94 rows=222612 width=16) (actual time=0.218..676.219 rows=222609 \nloops=1)\n -> Sort (cost=320.39..329.91 rows=3808 width=4) (actual \ntime=26.225..32.754 rows=4079 loops=1)\n Sort Key: bp.booking_id\n -> Index Scan using booking_plan_idx2 on booking_plan bp \n(cost=0.00..93.92 rows=3808 width=4) (actual time=0.223..14.186 rows=4079 loops=1)\n Index Cond: ((booking_date >= '2004-04-12'::date) AND \n(booking_date <= '2004-04-18'::date))\n -> Hash (cost=5.59..5.59 rows=193 width=4) (actual time=1.139..1.139 \nrows=0 loops=1)\n -> Index Scan using fk_idx_search_reqt_result on search_reqt_result tsr \n(cost=0.00..5.59 rows=193 width=4) (actual time=0.213..0.764 rows=192 loops=1)\n Index Cond: (search_id = 8)\n -> Index Scan using order_reqt_pkey on order_reqt r (cost=0.00..3.01 rows=1 \nwidth=6) (actual time=0.023..0.025 rows=1 loops=652)\n Index Cond: (r.reqt_id = \"outer\".reqt_id)\nTotal runtime: 1181.239 ms\n\n\nCheers,\nGary.", "msg_date": "Thu, 29 Apr 2004 23:31:00 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Certainly the fact that MSSQL is essentially a single-user database makes \n> things easier for them.\n\nOur recent testing (cf the \"Xeon\" thread) says that the interlocking we\ndo to make the world safe for multiple backends has a fairly high cost\n(at least on some hardware) compared to the rest of the work in\nscenarios where you are doing zero-I/O scans of data already in memory.\nEspecially so for index scans. I'm not sure this completely explains\nthe differential that Gary is complaining about, but it could be part of\nit. Is it really true that MSSQL doesn't support concurrent operations?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 2004 19:17:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On 29 Apr 2004 at 19:17, Tom Lane wrote:\n\n> Josh Berkus <[email protected]> writes:\n> > Certainly the fact that MSSQL is essentially a single-user database makes \n> > things easier for them.\n> \n> Our recent testing (cf the \"Xeon\" thread) says that the interlocking we\n> do to make the world safe for multiple backends has a fairly high cost\n> (at least on some hardware) compared to the rest of the work in\n> scenarios where you are doing zero-I/O scans of data already in memory.\n> Especially so for index scans. I'm not sure this completely explains\n> the differential that Gary is complaining about, but it could be part of\n> it. Is it really true that MSSQL doesn't support concurrent operations?\n> \n> \t\t\tregards, tom lane\n\nAs far as I am aware SQLSever supports concurrent operations. It \ncertainly creates more threads for each connection. None of my \nobservations of the system under load (50 ish concurrent users, 150 ish \nconnections) suggest that it is serializing queries.\n\nThese tests are currentl on single processor Athlon XP 2000+ systems.\n\nRegards,\nGary.\n", "msg_date": "Fri, 30 Apr 2004 07:33:11 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "Manfred Koizar wrote:\n> On Wed, 28 Apr 2004 09:05:04 -0400, Tom Lane <[email protected]> wrote:\n>>> [ ... visibility information in index tuples ... ]\n>> \n>> Storing that information would at least double the overhead space used\n>> for each index tuple. The resulting index bloat would significantly\n>> slow index operations by requiring more I/O. 
So it's far from clear\n>> that this would be a win, even for those who care only about select\n>> speed.\n> \n> While the storage overhead could be reduced to 1 bit (not a joke)\n\nYou mean adding an isLossy bit and only where it is set the head \ntuple has to be checked for visibility, if it is not set the head \ntuple does not have to be checked?\n\n\n> we'd\n> still have the I/O overhead of locating and updating index tuples for\n> every heap tuple deleted/updated.\n\nWould there be additional I/O for the additional bit in the index \ntuple (I am unable to find the layout of index tuple headers in \nthe docs)?\n\nJochem\n\n-- \nI don't get it\nimmigrants don't work\nand steal our jobs\n - Loesje\n\n\n", "msg_date": "Fri, 30 Apr 2004 19:46:24 +0200", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On 30 Apr 2004 at 9:37, Kevin Barnard wrote:\n\n> \n> I was always under the impression that MSSQL used leaf and row level locking and therefore \n> was not a concurrent, in the same sense that postgres is, database. It would still allow for \n> concurrent connections and such but updates will get blocked/ delayed. I might be wrong.\n> \n\nUltimately you may be right. I don't know enough about SQLServer \ninternals to say either way. Anyway, most of our system is in selects for \n70% of the time. I could try and set up a test for this when I get a bit \nmore time.\n\nUnfortunately I suspect that this topic won't get taken much further. In \norder to test this it would mean modifying quite a bit of code. Whether \nputting additional info in the index header and not visiting the data row \nif all the required data is in the index would be beneficial would require \nquite a bit of work by someone who knows more than I do. I reckon that \nno-one has the time to do this at the moment.\n\nRegards,\nGary.\n\n", "msg_date": "Fri, 30 Apr 2004 19:59:38 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On Fri, 30 Apr 2004 19:46:24 +0200, Jochem van Dieten\n<[email protected]> wrote:\n>> While the storage overhead could be reduced to 1 bit (not a joke)\n>\n>You mean adding an isLossy bit and only where it is set the head \n>tuple has to be checked for visibility, if it is not set the head \n>tuple does not have to be checked?\n\nYes, something like this. Actually I imagined it the other way round: a\nvisible-to-all flag similar to the existing dead-to-all flag (search for\nLP_DELETE and ItemIdDeleted in nbtree.c).\n\n>> we'd\n>> still have the I/O overhead of locating and updating index tuples for\n>> every heap tuple deleted/updated.\n>\n>Would there be additional I/O for the additional bit in the index \n>tuple (I am unable to find the layout of index tuple headers in \n>the docs)?\n\nYes, the visible-to-all flag would be set as a by-product of an index\nscan, if the heap tuple is found to be visible to all active\ntransactions. This update is non-critical and, I think, not very\nexpensive.\n\nDeleting (and hence updating) a tuple is more critical, regarding both\nconsistency and performance. 
We'd have to locate all index entries\npointing to the heap tuple and set their visible-to-all flags to false.\n\n", "msg_date": "Fri, 30 Apr 2004 23:36:58 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> Yes, the visible-to-all flag would be set as a by-product of an index\n> scan, if the heap tuple is found to be visible to all active\n> transactions. This update is non-critical\n\nOh really? I think you need to think harder about the transition\nconditions.\n\nDead-to-all is reasonably safe to treat as a hint bit because *it does\nnot ever need to be undone*. Visible-to-all does not have that\nproperty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 2004 21:19:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "Tom Lane wrote:\n> Manfred Koizar <[email protected]> writes:\n>> \n>> Yes, the visible-to-all flag would be set as a by-product of an index\n>> scan, if the heap tuple is found to be visible to all active\n>> transactions. This update is non-critical\n> \n> Oh really? I think you need to think harder about the transition\n> conditions.\n> \n> Dead-to-all is reasonably safe to treat as a hint bit because *it does\n> not ever need to be undone*. Visible-to-all does not have that\n> property.\n\nYes, really :-)\n\nWhen a tuple is inserted the visible-to-all flag is set to false. \nThe effect of this is that every index scan that finds this tuple \nhas to visit the heap to verify visibility. If it turns out the \ntuple is not only visible to the current transaction, but to all \ncurrent transactions, the visible-to-all flag can be set to true.\nThis is non-critical, because if it is set to false scans will \nnot miss the tuple, they will just visit the heap to verify \nvisibility.\n\nThe moment the heap tuple is updated/deleted the visible-to-all \nflag needs to be set to false again in all indexes. This is \ncritical, and the I/O and (dead)lock costs of unsetting the \nvisible-to-all flag are unknown and might be big enough to ofset \nany advantage on the selects.\n\nBut I believe that for applications with a \"load, select, drop\" \nusage pattern (warehouses, archives etc.) having this \nvisible-to-all flag would be a clear winner.\n\nJochem\n\n-- \nI don't get it\nimmigrants don't work\nand steal our jobs\n - Loesje\n\n", "msg_date": "Sat, 01 May 2004 13:18:04 +0200", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "On 1 May 2004 at 13:18, Jochem van Dieten wrote:\n\n> Yes, really :-)\n> \n> When a tuple is inserted the visible-to-all flag is set to false. \n> The effect of this is that every index scan that finds this tuple \n> has to visit the heap to verify visibility. If it turns out the \n> tuple is not only visible to the current transaction, but to all \n> current transactions, the visible-to-all flag can be set to true.\n> This is non-critical, because if it is set to false scans will \n> not miss the tuple, they will just visit the heap to verify \n> visibility.\n> \n> The moment the heap tuple is updated/deleted the visible-to-all \n> flag needs to be set to false again in all indexes. 
This is \n> critical, and the I/O and (dead)lock costs of unsetting the \n> visible-to-all flag are unknown and might be big enough to ofset \n> any advantage on the selects.\n> \n> But I believe that for applications with a \"load, select, drop\" \n> usage pattern (warehouses, archives etc.) having this \n> visible-to-all flag would be a clear winner.\n> \n> Jochem\n> \n\nIf needs be this index maintenance could be set as a configuration \noption. It is likely that database usage patterns are reasonably well \nknown for a particular installation. This option could be set on or off \ndependant on typical transactions.\n\nIn my case with frequent large/complex selects and few very short \ninsert/updates I think it could be a big performance boost. If it works :-)\n\nRegards,\nGary.\n\n", "msg_date": "Sat, 01 May 2004 13:48:35 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "Jochem van Dieten <[email protected]> writes:\n> The moment the heap tuple is updated/deleted the visible-to-all \n> flag needs to be set to false again in all indexes. This is \n> critical,\n\nExactly. This gets you out of the hint-bit semantics and into a ton\nof interesting problems, such as race conditions. (Process A determines\nthat tuple X is visible-to-all, and goes to update the index tuple.\nBefore it can reacquire lock on the index page, process B updates the\nheap tuple and visits the index to clear the flag bit. Once A obtains\nlock it will set the flag bit. Oops.)\n\nBasically what you are buying into with such a thing is multiple copies\nof critical state. It may be only one bit rather than several words,\nbut updating it is no less painful than if it were a full copy of the\ntuple's commit status.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 2004 12:21:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On Sat, 01 May 2004 13:18:04 +0200, Jochem van Dieten\n<[email protected]> wrote:\n>Tom Lane wrote:\n>> Oh really? I think you need to think harder about the transition\n>> conditions.\n\nIndeed.\n\n>> \n>> Dead-to-all is reasonably safe to treat as a hint bit because *it does\n>> not ever need to be undone*. Visible-to-all does not have that\n>> property.\n>\n>Yes, really :-)\n\nNo, not really :-(\n\nAs Tom has explained in a nearby message his concern is that -- unlike\ndead-to-all -- visible-to-all starts as false, is set to true at some\npoint in time, and is eventually set to false again. Problems arise if\none backend wants to set visible-to-all to true while at the same time\nanother backend wants to set it to false.\n\nThis could be curable by using a second bit as a deleted flag (might be\neven the same bit that's now used as dead-to-all, but I'm not sure). An\nindex tuple having both the visible flag (formerly called\nvisible-to-all) and the deleted flag set would cause a heap tuple access\nto check visibility. But that leaves the question of what to do after\nthe deleting transaction has rolled back. 
I see no clean way from the\nvisible-and-deleted state to visible-to-all.\n\nThis obviously needs another round of hard thinking ...\n\n", "msg_date": "Sun, 02 May 2004 10:03:36 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" }, { "msg_contents": "Manfred Koizar said:\n>\n> As Tom has explained in a nearby message his concern is that --\n> unlike dead-to-all -- visible-to-all starts as false, is set to true\n> at some point in time, and is eventually set to false again.\n> Problems arise if one backend wants to set visible-to-all to true\n> while at the same time another backend wants to set it to false.\n\nGot it, I misinterpreted his concern as \"visible-to-all should not be\nset to true when the tuple is inserted\".\n\n\n> This could be curable by using a second bit as a deleted flag (might\n> be even the same bit that's now used as dead-to-all, but I'm not\n> sure). An index tuple having both the visible flag (formerly called\n> visible-to-all) and the deleted flag set would cause a heap tuple\n> access to check visibility.\n\nOr in a more generalized way: with 2 bits written at the same time you\ncan express 4 states. But only 3 actions need to be signalled:\ndead-to-all, visible-to-all and check-heap. So we can have 2 states\nthat both signal check-heap.\n\nThe immediate solution to the race condition Tom presented would be to\nhave the transaction that invalidates the heap tuple switch the index\ntuple from the one check-heap state to the other. The transaction that\nwants to update to visible-to-all can now see that the state has\nchanged (but not the meaning) and aborts its change.\n\n\n> But that leaves the question of what to\n> do after the deleting transaction has rolled back. I see no clean\n> way from the visible-and-deleted state to visible-to-all.\n\nI'm afraid I don't know enough about the inner workings of rollbacks\nto determine how the scenario \"A determines visible-to-all should be\nset, B invalidates tuple, B rolls back, C invalidates tuple, C\ncommits, A reaquires lock on index\" would work out. I guess I have\nsome more reading to do.\n\nBut if you don't roll back too often it wouldn't even be a huge\nproblem to just leave them in visible-and-deleted state until\neventually they go into the dead-to-all state.\n\nJochem\n\n\n\n\n", "msg_date": "Sun, 2 May 2004 16:30:47 +0200 (CEST)", "msg_from": "\"Jochem van Dieten\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question" } ]
[ { "msg_contents": "Hi,\n\nI came across a very intriguing thing:\n\nI had to join two tables and in both tables I wanted to restrict the \nresult set by some (text/varchar) attributes.\n\nHere is an example:\n\nTable \"item\" # 147 000 entries\n\n Column | Type | Modifiers\n---------------+-----------------------+------------\n id | integer | not null\n description | text |\n comment | text | not null\n order_id | integer |\n\n\nTable \"orders\" # 210 000 entries\n Column | Type | Modifiers\n-----------------+------------------------+-----------\n order_id | integer |\n order_name | character varying(255) |\n\n\nThe tables have 147 000 and 210 000 entries, respectively.\n\nFirst I tried the following query, which took ages:\n\n(Query 1)\nEXPLAIN ANALYZE\nSELECT item.id\nFROM item, orders\nWHERE orders.order_name ~* 'Smit'\n AND item.description ~* 'CD'\n and orders.order_id = item.order_id;\n\n\n\nI found out, that the change of the operator from '~*' to '=' for the \nitem.description brought a great boost in performance (425 secs to 1 \nsec!), but not in cost (Query plans at the end).\n\n(Query 2)\n EXPLAIN ANALYZE\nSELECT item.id\nFROM item, orders\nWHERE orders.order_name ~* 'Smit'\n AND item.description = 'CD'\n and orders.order_id = item.order_id;\n\n\nThe main difference was that Query 2 used the Hash join instead of the \nNested Loop, so I disabled the option 'NESTED LOOP' and got for Query 1 \na similar time as for Query 2.\n\n\nCan anyone tell me, why in one case the Hash join and in the other the \nmuch worse Nested Loop is prefered?\nAnd my second question is, is there any possibility to execute the first \nquery without disabling the Nested Loop first, but get the good \nperformance of the Hash join?\n\n\nMany thanks in advance for your help or suggestions\n\nSilke\n\n\nQUERY PLANS:\n\n#####################################\n\nQuery 1:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------ \n\n Nested Loop (cost=0.00..28836.75 rows=1 width=4) (actual \ntime=65350.780..452130.702 rows=6 loops=1)\n Join Filter: (\"inner\".order_id = \"outer\".order_id)\n -> Seq Scan on item (cost=0.00..28814.24 rows=1 width=8) (actual \ntime=33.180..1365.190 rows=716 loops=1)\n Filter: (description ~* 'CD'::text)\n -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) (actual \ntime=21.644..629.500 rows=18 loops=716)\n Filter: ((order_name)::text ~* 'Smith'::text)\n Total runtime: 452130.782 ms\n###########################################################################\n\nQuery 2:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------ \n\n Hash Join (cost=22.50..28840.44 rows=4 width=4) (actual \ntime=1187.798..1187.798 rows=0 loops=1)\n Hash Cond: (\"outer\".order_id = \"inner\".order_id)\n -> Seq Scan on item (cost=0.00..28814.24 rows=733 width=8) (actual \ntime=542.737..542.737 rows=0 loops=1)\n Filter: (description = 'CD'::text)\n -> Hash (cost=22.50..22.50 rows=1 width=4) (actual \ntime=645.042..645.042 rows=0 loops=1)\n -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) \n(actual time=22.373..644.996 rows=18 loops=1)\n Filter: ((order_name)::text ~* 'Smith'::text)\n Total runtime: 1187.865 ms\n############################################################################\n\n\nQuery 1 with 'set enable_nestloop to false'\n\n QUERY 
PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=22.50..28836.75 rows=1 width=4) (actual \ntime=1068.593..2003.330 rows=6 loops=1)\n Hash Cond: (\"outer\".item_id = \"inner\".item_id)\n -> Seq Scan on item (cost=0.00..28814.24 rows=1 width=8) (actual \ntime=33.347..1357.073 rows=716 loops=1)\n Filter: (description ~* 'CD'::text)\n -> Hash (cost=22.50..22.50 rows=1 width=4) (actual \ntime=645.287..645.287 rows=0 loops=1)\n -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) \n(actual time=22.212..645.239 rows=18 loops=1)\n Filter: ((order_name)::text ~* 'CD'::text)\n Total runtime: 2003.409 ms\n\n", "msg_date": "Tue, 27 Apr 2004 19:32:36 +0200", "msg_from": "Silke Trissl <[email protected]>", "msg_from_op": true, "msg_subject": "Join problem" }, { "msg_contents": "these two queries are not equal. Query1 returns 6 rows, query2 returns 0 \nrows, because '~*' and '=' operators are not same. BTW when you use '=', \nit could use index on \"item.description\".\nOn query1, \"Seq Scan on item\" estimates 1 row, on query2 it estimates \n733 rows. IMHO that's why query1 uses nested loop, query2 uses hash join.\n\nbye,\nSuller Andras\n\nSilke Trissl ďż˝rta:\n\n> Hi,\n>\n> Query 1:\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------ \n>\n> Nested Loop (cost=0.00..28836.75 rows=1 width=4) (actual \n> time=65350.780..452130.702 rows=6 loops=1)\n> Join Filter: (\"inner\".order_id = \"outer\".order_id)\n> -> Seq Scan on item (cost=0.00..28814.24 rows=1 width=8) (actual \n> time=33.180..1365.190 rows=716 loops=1)\n> Filter: (description ~* 'CD'::text)\n> -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) (actual \n> time=21.644..629.500 rows=18 loops=716)\n> Filter: ((order_name)::text ~* 'Smith'::text)\n> Total runtime: 452130.782 ms\n> ########################################################################### \n>\n>\n> Query 2:\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------ \n>\n> Hash Join (cost=22.50..28840.44 rows=4 width=4) (actual \n> time=1187.798..1187.798 rows=0 loops=1)\n> Hash Cond: (\"outer\".order_id = \"inner\".order_id)\n> -> Seq Scan on item (cost=0.00..28814.24 rows=733 width=8) \n> (actual time=542.737..542.737 rows=0 loops=1)\n> Filter: (description = 'CD'::text)\n> -> Hash (cost=22.50..22.50 rows=1 width=4) (actual \n> time=645.042..645.042 rows=0 loops=1)\n> -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) \n> (actual time=22.373..644.996 rows=18 loops=1)\n> Filter: ((order_name)::text ~* 'Smith'::text)\n> Total runtime: 1187.865 ms\n> ############################################################################ \n>\n>\n>\n> Query 1 with 'set enable_nestloop to false'\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------- \n>\n> Hash Join (cost=22.50..28836.75 rows=1 width=4) (actual \n> time=1068.593..2003.330 rows=6 loops=1)\n> Hash Cond: (\"outer\".item_id = \"inner\".item_id)\n> -> Seq Scan on item (cost=0.00..28814.24 rows=1 width=8) (actual \n> time=33.347..1357.073 rows=716 loops=1)\n> Filter: (description ~* 'CD'::text)\n> -> Hash (cost=22.50..22.50 rows=1 width=4) (actual \n> time=645.287..645.287 rows=0 loops=1)\n> -> Seq Scan on orders (cost=0.00..22.50 rows=1 width=4) \n> (actual time=22.212..645.239 rows=18 loops=1)\n> 
Filter: ((order_name)::text ~* 'CD'::text)\n> Total runtime: 2003.409 ms\n\n\n\n", "msg_date": "Wed, 28 Apr 2004 10:15:19 +0200", "msg_from": "=?ISO-8859-2?Q?Suller_Andr=E1s?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join problem" }, { "msg_contents": "Silke Trissl <[email protected]> writes:\n> I found out, that the change of the operator from '~*' to '=' for the \n> item.description brought a great boost in performance (425 secs to 1 \n> sec!), but not in cost (Query plans at the end).\n\nThe main problem seems to be bad estimation of the number of rows\nextracted from the item table. Have you ANALYZEd that table lately?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 2004 08:52:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join problem " } ]
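A short sketch of the two remedies that come out of this thread, using the item and orders tables from the quoted plans (the join column and index name here are guesses taken from those plans, not the real schema). Refreshing statistics addresses the bad row estimate Tom points to, and a left-anchored LIKE, unlike the case-insensitive '~*' match, can use an ordinary btree index on description (in a C locale, or, where available, with the text_pattern_ops operator class):

    ANALYZE item;
    ANALYZE orders;

    -- the case-insensitive regex cannot use a plain btree index on description
    EXPLAIN ANALYZE
    SELECT i.item_id
      FROM item i
      JOIN orders o ON o.order_id = i.order_id
     WHERE i.description ~* 'CD';

    -- a left-anchored pattern can
    CREATE INDEX item_description_idx ON item (description);
    EXPLAIN ANALYZE
    SELECT i.item_id
      FROM item i
      JOIN orders o ON o.order_id = i.order_id
     WHERE i.description LIKE 'CD%';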
[ { "msg_contents": "Hello pgsql-performance,\n\n I discussed the whole subject for some time in DevShed and didn't\n achieve much (as for results). I wonder if any of you guys can help\n out:\n\n http://forums.devshed.com/t136202/s.html\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\n", "msg_date": "Wed, 28 Apr 2004 00:27:40 +0300", "msg_from": "Vitaly Belman <[email protected]>", "msg_from_op": true, "msg_subject": "Simply join in PostrgeSQL takes too long" }, { "msg_contents": "Hi,\n\nYou can try some variation:\n\nSELECT \n book_id\nFROM \n bookgenres, genre_children\nWHERE\n bookgenres.genre_id = genre_children.genre_child_id AND \n genre_children.genre_id = 1\nGROUP BY book_id\nLIMIT 10\n\nThe next works if the 'genre_child_id' is UNIQUE on the 'genre_children'\ntable.\n\nSELECT \n book_id\nFROM \n bookgenres\nWHERE\n bookgenres.genre_id = (SELECT genre_child_id FROM genre_children\nWHERE genre_id = 1)\nGROUP BY book_id\nLIMIT 10\n\nYou may need some index. Try these with EXPLAIN!\nCREATE INDEX bookgenres_genre_id_book_id ON bookgenres(genre_id,\nbook_id); or\nCREATE INDEX bookgenres_book_id_genre_id ON bookgenres(book_id,\ngenre_id);\nCREATE INDEX genre_children_genre_id ON genre_children(genre_id);\n\nRegards, Antal Attila\n\n\n", "msg_date": "Tue, 27 Apr 2004 23:56:07 +0200", "msg_from": "\"Atesz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "Vitaly Belman wrote:\n> Hello pgsql-performance,\n> \n> I discussed the whole subject for some time in DevShed and didn't\n> achieve much (as for results). I wonder if any of you guys can help\n> out:\n> \n> http://forums.devshed.com/t136202/s.html\n> \n\nSo cutting and pasting:\n\n----- SCHEMA -----\nCREATE TABLE bv_bookgenres (\n book_id INT NOT NULL,\n genre_id INT NOT NULL\n);\nCREATE TABLE bv_genre_children (\n genre_id INT,\n genre_child_id INT\n);\n-------------------\n\n----- QUERY -----\nselect DISTINCT\n book_id\nfrom\n bookgenres,\n genre_children\nWHERE\n bookgenres.genre_id = genre_children.genre_child_id AND\n genre_children.genre_id = 1\nLIMIT 10\n-----------------\n\n----- EXPLAIN ANALYZE -----\nQUERY PLAN\nLimit (cost=6503.51..6503.70 rows=10 width=4) (actual \ntime=703.000..703.000 rows=10 loops=1)\n -> Unique (cost=6503.51..6738.20 rows=12210 width=4) (actual \ntime=703.000..703.000 rows=10 loops=1)\n -> Sort (cost=6503.51..6620.85 rows=46937 width=4) (actual \ntime=703.000..703.000 rows=24 loops=1)\n Sort Key: bv_bookgenres.book_id\n -> Merge Join (cost=582.45..2861.57 rows=46937 width=4) \n(actual time=46.000..501.000 rows=45082 loops=1)\n Merge Cond: (\"outer\".genre_id = \"inner\".genre_child_id)\n -> Index Scan using genre_id on bv_bookgenres \n(cost=0.00..1462.84 rows=45082 width=8) (actual time=0.000..158.000 \nrows=45082 loops=1)\n -> Sort (cost=582.45..598.09 rows=6256 width=2) \n(actual time=46.000..77.000 rows=49815 loops=1)\n Sort Key: bv_genre_children.genre_child_id\n -> Index Scan using genre_id2 on \nbv_genre_children (cost=0.00..187.98 rows=6256 width=2) (actual \ntime=0.000..31.000 rows=6379 loops=1)\n Index Cond: (genre_id = 1)\nTotal runtime: 703.000 ms\n-------------------------------\n\n----- CONF SETTINGS -----\nshared_buffers = 1000\t\t# min 16, at least max_connections*2, 8KB each\nsort_mem = 10000\n#work_mem = 1024\t\t# min 64, size in KB\n#maintenance_work_mem = 16384\t# min 1024, size in KB\n#max_stack_depth = 2048\t\t# min 100, size in 
KB\n-------------------------\n\nHave you VACUUM ANALYZED recently. If not do that then rerun the EXPLAIN \nANALYZE.\n\nYou might wanna bump shared_buffers. You have 512MB RAM right? You \nprobably want to bump shared_buffers to 10000, restart PG then run a \nVACUUM ANALYZE. Then rerun the EXPLAIN ANALYZE.\n\nIf that doesnt help try doing a\n\nALTER TABLE bv_genre_children ALTER COLUMN genre_child_id SET STATISTICS \n100;\n\nfollowed by a:\n\nVACUUM ANALYZE bv_genre_children;\n\nYou might also want to be tweaking the effective_cache_size parameter in \n postgresql.conf, but I am unsure how this would work on Windows. Does \nWindows have a kernel disk cache anyone?\n\n\n\n\nHTH\n\nNick\n\n\n\n\n\n", "msg_date": "Tue, 27 Apr 2004 23:00:28 +0100", "msg_from": "Nick Barr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "On Tue, 2004-04-27 at 17:27, Vitaly Belman wrote:\n> Hello pgsql-performance,\n> \n> I discussed the whole subject for some time in DevShed and didn't\n> achieve much (as for results). I wonder if any of you guys can help\n> out:\n> \n> http://forums.devshed.com/t136202/s.html\n\nYou're taking the wrong approach. Rather than using a select query to\nensure that the book_id is distinct, add a constraint to the table so\nthat is guaranteed.\n\n CREATE UNIQUE INDEX bv_bookgeneres_unq ON bv_bookgenres(book_id,\n genre_id);\n \nNow you can do a simple join (Drop the DISTINCT keyword) and achieve the\nsame results.\n\nThe point is that a book cannot be of a certain genre more than once.\n\nWithout the distinct, this should take a matter of a few milliseconds to\nexecute.\n\n\n", "msg_date": "Tue, 27 Apr 2004 18:01:34 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "Vitaly,\n\nI'm afraid that your helper on DevShed is right; 7.5 for Windows is still in \ndevelopment, we've not even *started* to check it for performance yet. \n\nSince the Merge Join is taking 90% of your query time, I might suggest \nincreasing shared_buffers and sort_mem to see if that helps. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 27 Apr 2004 15:37:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "On Tue, 27 Apr 2004 18:01:34 -0400, Rod Taylor <[email protected]> wrote:\n>On Tue, 2004-04-27 at 17:27, Vitaly Belman wrote:\n>> Hello pgsql-performance,\n>> \n>> I discussed the whole subject for some time in DevShed and didn't\n>> achieve much (as for results). I wonder if any of you guys can help\n>> out:\n>> \n>> http://forums.devshed.com/t136202/s.html\n\n>The point is that a book cannot be of a certain genre more than once.\n\nRod, he has a hierarchy of genres. Genre 1 has 6379 child genres and a\nbook can be in more than one of these.\n\nVitaly, though LIMIT makes this look like a small query, DISTINCT\nrequires the whole result set to be retrieved. 0.7 seconds doesn't look\nso bad for several thousand rows. Did you try with other genre_ids?\n\nMaybe a merge join is not the best choice. 
Set enable_mergejoin to\nfalse and see whether you get a (hopefully faster) hash join, assuming\nthat sort_mem is large enough to keep the hash table in memory.\n\nIf you send me your table contents I'll try it on Linux.\n\nServus\n Manfred\n", "msg_date": "Wed, 28 Apr 2004 10:24:41 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "> Rod, he has a hierarchy of genres. Genre 1 has 6379 child genres and a\n> book can be in more than one of these.\n\n bookgenres.genre_id = genre_children.genre_child_id AND\n genre_children.genre_id = 1\n\nI see, sorry. I didn't notice the genre_child_id in the where clause.\nFirst glance had them all as genre_id.\n\nWhen I run into this I usually create a 3rd table managed by triggers\nthat would relate the book to all genre entries. Insert takes a little\nlonger, but the selects can still be very quick.\n\nThe below plpgsql forces the kind of algorithm we wish the planner could\nchoose. It should be fairly quick irregardless of dataset.\n\n\nCREATE OR REPLACE FUNCTION book_results(numeric) RETURNS SETOF numeric\nAS\n'\nDECLARE\n v_genre ALIAS FOR $1;\n v_limit integer = 10;\n t_rows RECORD;\n v_transmitted integer = 0;\n\n v_transmitted_values numeric[] = ARRAY[1];\n\nBEGIN\n FOR t_rows IN SELECT book_id\n FROM bv_bookgenres AS b\n JOIN bv_genre_children AS g ON (b.genre_id =\ng.genre_child_id)\n WHERE g.genre_id = v_genre\n LOOP\n\n -- If this is a new value, transmit it to the end user\n IF NOT t_rows.book_id = ANY(v_transmitted_values) THEN\n v_transmitted_values := array_append(v_transmitted_values,\nt_rows.book_id);\n v_transmitted := v_transmitted + 1;\n RETURN NEXT t_rows.book_id;\n END IF;\n\n EXIT WHEN v_transmitted >= v_limit;\n END LOOP;\n\n RETURN;\nEND;\n' LANGUAGE plpgsql;\n\nEXPLAIN ANALYZE SELECT * FROM book_results(1);\nSELECT * FROM book_results(1);\n\n", "msg_date": "Wed, 28 Apr 2004 08:23:35 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "On Wed, 28 Apr 2004 08:23:35 -0400, Rod Taylor <[email protected]> wrote:\n>The below plpgsql forces the kind of algorithm we wish the planner could\n>choose. It should be fairly quick irregardless of dataset.\n\nThat reminds me of hash aggregation. So here's another idea for Vitaly:\n\n\tSELECT book_id\n\t FROM ...\n\t WHERE ...\n\t GROUP BY book_id\n\t LIMIT ...\n\nServus\n Manfred\n", "msg_date": "Thu, 29 Apr 2004 19:13:49 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "On Thu, 2004-04-29 at 13:13, Manfred Koizar wrote:\n> On Wed, 28 Apr 2004 08:23:35 -0400, Rod Taylor <[email protected]> wrote:\n> >The below plpgsql forces the kind of algorithm we wish the planner could\n> >choose. It should be fairly quick irregardless of dataset.\n> \n> That reminds me of hash aggregation. 
So here's another idea for Vitaly:\n\nThe reason for the function is that the sort routines (hash aggregation\nincluded) will not stop in mid-sort, although I believe that feature is\non the TODO list.\n\nI believe Vitaly will achieve 10ms or less query times using that\nfunction.\n\n\n", "msg_date": "Thu, 29 Apr 2004 13:36:47 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "On Thu, 29 Apr 2004 13:36:47 -0400, Rod Taylor <[email protected]> wrote:\n>The reason for the function is that the sort routines (hash aggregation\n>included) will not stop in mid-sort\n\nGood point.\n\nServus\n Manfred\n", "msg_date": "Thu, 29 Apr 2004 19:59:13 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "Hello Manfred,\n\nI thank everyone for helping me on this - So many tips.\n\nI am in the middle of going through them all, till now disabling the\nenable_mergejoin really helped.\n\nAlso, I agree that the design might be flawed (I could use triggers\nand stuff like that) but for now I am just comparing how my project\nwill run on PostgreSQL (Considering migration from MySQL).\n\nI'll be reporting back on how the other stuff helped.\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\nWednesday, April 28, 2004, 11:24:41 AM, you wrote:\n\nMK> On Tue, 27 Apr 2004 18:01:34 -0400, Rod Taylor <[email protected]> wrote:\n>>On Tue, 2004-04-27 at 17:27, Vitaly Belman wrote:\n>>> Hello pgsql-performance,\n>>> \n>>> I discussed the whole subject for some time in DevShed and didn't\n>>> achieve much (as for results). I wonder if any of you guys can help\n>>> out:\n>>> \n>>> http://forums.devshed.com/t136202/s.html\n\n>>The point is that a book cannot be of a certain genre more than once.\n\nMK> Rod, he has a hierarchy of genres. Genre 1 has 6379 child genres and a\nMK> book can be in more than one of these.\n\nMK> Vitaly, though LIMIT makes this look like a small query, DISTINCT\nMK> requires the whole result set to be retrieved. 0.7 seconds doesn't look\nMK> so bad for several thousand rows. Did you try with other genre_ids?\n\nMK> Maybe a merge join is not the best choice. Set enable_mergejoin to\nMK> false and see whether you get a (hopefully faster) hash join, assuming\nMK> that sort_mem is large enough to keep the hash table in memory.\n\nMK> If you send me your table contents I'll try it on Linux.\n\nMK> Servus\nMK> Manfred\n\n", "msg_date": "Fri, 30 Apr 2004 00:09:36 +0300", "msg_from": "Vitaly Belman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" }, { "msg_contents": "Vitaly,\n \n> I am in the middle of going through them all, till now disabling the\n> enable_mergejoin really helped.\n\nIn that case, your random_page_cost is probably too low. Check the ratio of \nper-tuple times on index vs. seqscan seeks.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 29 Apr 2004 15:10:08 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simply join in PostrgeSQL takes too long" } ]
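A compact way to try the suggestions from this thread together, kept as a sketch (object names are the ones posted; adjust them to the real schema). GROUP BY lets 7.4 use a HashAggregate, which SELECT DISTINCT cannot, the composite index is the one Atesz proposed, and turning merge joins off for one session shows whether the hash join plan wins. As Rod notes, the aggregation still reads its whole input before the LIMIT applies, so his set-returning function remains the only way to stop early:

    CREATE INDEX bv_bookgenres_genre_book ON bv_bookgenres (genre_id, book_id);
    ANALYZE bv_bookgenres;

    SET enable_mergejoin = false;   -- experiment only

    EXPLAIN ANALYZE
    SELECT b.book_id
      FROM bv_bookgenres b
      JOIN bv_genre_children g ON b.genre_id = g.genre_child_id
     WHERE g.genre_id = 1
     GROUP BY b.book_id
     LIMIT 10;

    RESET enable_mergejoin;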
[ { "msg_contents": "Bruno et al,\n\n\tAny self-repsecting lurker would know that oids as row identifiers are \ndepreciated in postgres. Can anyone provide a brief history regarding \nthe reasoning behind using them as row identifiers in the first place? \nI see a discussion of their use as various primary keys in he system \ncatalog in the oid-datatype doc page, but not regarding their history \nas 'user-space' row ids.\n\nThanks,\nJames\n\n----\nJames Robinson\nSocialserve.com\n\n", "msg_date": "Wed, 28 Apr 2004 09:29:15 -0400", "msg_from": "James Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "History of oids in postgres?" }, { "msg_contents": "James Robinson wrote:\n> Bruno et al,\n> \n> \tAny self-repsecting lurker would know that oids as row identifiers are \n> depreciated in postgres. Can anyone provide a brief history regarding \n> the reasoning behind using them as row identifiers in the first place? \n> I see a discussion of their use as various primary keys in he system \n> catalog in the oid-datatype doc page, but not regarding their history \n> as 'user-space' row ids.\n\nThey were added at Berkeley and I think are related to the\nObject-relational ability of PostgreSQL. I think the newer SQL\nstandards have a similar capability specified.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 5 May 2004 13:03:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: History of oids in postgres?" } ]
[ { "msg_contents": "All,\nAfter I upgraded postgres from 7.3.4 to 7.4.2, one of my program got following error:\nDRROR:\tout of memory\nDETAIL:\tFail on request of size 92.\n\nany idea??\ndoes memory management have big difference between 7.3.4 and 7.4.2???\nthis program using a chunk of share memory and a lot of temp tables.\n\n\nThanks.\n\n\n\nJie Liang\n", "msg_date": "Wed, 28 Apr 2004 11:12:17 -0700", "msg_from": "\"Jie Liang\" <[email protected]>", "msg_from_op": true, "msg_subject": "7.4.2 out of memory" }, { "msg_contents": "On Wed, 28 Apr 2004, Jie Liang wrote:\n\n> All,\n> After I upgraded postgres from 7.3.4 to 7.4.2, one of my program got following error:\n> DRROR:\tout of memory\n> DETAIL:\tFail on request of size 92.\n> \n> any idea??\n> does memory management have big difference between 7.3.4 and 7.4.2???\n> this program using a chunk of share memory and a lot of temp tables.\n\nMore than likely this is a hash aggregate problem (or can they spill to \ndisk in 7.4.2 yet? I don't think they can, but maybe we should ask Tom.\n\nTry setting this before running the query and see what happens:\n\nset enable_hashagg = false;\n\n\n\n", "msg_date": "Wed, 28 Apr 2004 13:57:21 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] 7.4.2 out of memory" } ]
[ { "msg_contents": "Sccot,\n\nThank you very much, I think taht you are right about this.\nI tested a single query, there is no problem. I'll do a full test with my program.\n\nJie Liang\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]]\nSent: Wednesday, April 28, 2004 12:57 PM\nTo: Jie Liang\nCc: [email protected]\nSubject: Re: [ADMIN] 7.4.2 out of memory\n\n\nOn Wed, 28 Apr 2004, Jie Liang wrote:\n\n> All,\n> After I upgraded postgres from 7.3.4 to 7.4.2, one of my program got following error:\n> DRROR:\tout of memory\n> DETAIL:\tFail on request of size 92.\n> \n> any idea??\n> does memory management have big difference between 7.3.4 and 7.4.2???\n> this program using a chunk of share memory and a lot of temp tables.\n\nMore than likely this is a hash aggregate problem (or can they spill to \ndisk in 7.4.2 yet? I don't think they can, but maybe we should ask Tom.\n\nTry setting this before running the query and see what happens:\n\nset enable_hashagg = false;\n\n\n\n", "msg_date": "Wed, 28 Apr 2004 13:41:21 -0700", "msg_from": "\"Jie Liang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] 7.4.2 out of memory" } ]
[ { "msg_contents": "All,\nThis is old topic, when I use:\nselect url from urlinfo where url like 'http://www.lycos.de%';\nit uses the index, good!\n\nbut if I use:\nselect url from urlinfo where url like 'http://%.lycos.de';\nit won't use index at all, NOT good!\nis there any way I can force secon query use index???\n\nThanks.\n\nJie Liang\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n Index Scan using urlinfo_ukey on urlinfo (cost=0.00..6.01 rows=1 width=33)\n Index Cond: ((url >= 'http://www.lycos.de/'::text) AND (url < 'http://www.lycos.de0'::text))\n Filter: (url ~ '^http://www\\\\.lycos\\\\.de/.*$'::text)\n(3 rows)\n\n QUERY PLAN \n-------------------------------------------------------------\n Seq Scan on urlinfo (cost=0.00..100440.48 rows=4 width=33)\n Filter: (url ~ '^http://.*\\\\.lycos\\\\.de$'::text)\n(2 rows)\n", "msg_date": "Wed, 28 Apr 2004 15:02:04 -0700", "msg_from": "\"Jie Liang\" <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE and INDEX" }, { "msg_contents": "> but if I use:\n> select url from urlinfo where url like 'http://%.lycos.de';\n> it won't use index at all, NOT good!\n> is there any way I can force secon query use index???\n\ncreate index nowww on urlinfo(replace(replace(url, 'http://', ''),\n'www.', '')));\n\nSELECT url\n FROM urlinfo\nWHERE replace(replace(url, 'http://', ''), 'www.', '') = 'lycos.de'\n AND url LIKE 'http://%.lycos.de' ;\n\nThe replace() will narrow the search down to all records containing\nlycos.de. Feel free to write a more complex alternative for replace()\nthat will deal with more than just optional www.\n\nOnce the results have been narrowed down, you may use the original like\nexpression to confirm it is your record.\n\n", "msg_date": "Wed, 05 May 2004 13:31:45 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE and INDEX" }, { "msg_contents": "Jie Liang wrote:\n> All,\n> This is old topic, when I use:\n> select url from urlinfo where url like 'http://www.lycos.de%';\n> it uses the index, good!\n> \n> but if I use:\n> select url from urlinfo where url like 'http://%.lycos.de';\n> it won't use index at all, NOT good!\n> is there any way I can force secon query use index???\n\nI've seen people define a reverse(text) function via plperl or similar \nthen build a functional index on reverse(url). Of course, that would \nrely on your knowing which end of your search pattern has the % wildcard.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 05 May 2004 19:12:38 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE and INDEX" } ]
[ { "msg_contents": "Hi,\n\nI am building an application using postgresql to store XML-records. There is a\ndebate within the group of developers about the best way to store our data. I\nhope you can help us make a decision.\n\nThe data consists of XML-records, with a lot of XML-fields. I want to store\nthe XML as it is, so taking the information from the XML-records and then\nstoring it in a different-from-XML-format is not an option.\n\nEach XML-record describes data about one book. If an update of bookdata comes,\nthe XML itself is not changed, but a new XML-record is stored with the updated\ndata. Via a complex scheme of combining a base record and its updates, the\nfinal dataset is produced that is used in the application.\n\nThere are different XML-formats that need to be combined. Right now, we can\nhandle three different XML-formats, each with its own structure (but all\ndescribing book-data).\n\nSearching is done via a simple table lookup on three different fields: title,\nauthor and subject. The data for these fields is extracted from the\ndatabase. Each book has a unique identifier (EAN13, derivative of ISBN).\n\nHere is one way to organize the database:\ntable title:\nTITLE | EAN13, indexing on TITLE\n\ntable author:\nAUTHOR | EAN13, indexing on AUTHOR\n\ntable subject:\nSUBJECT | EAN13, indexing on SUBJECT.\n\nFinally:\ntable record:\nEAN13 | ARRAY OF XML-records.\n\nIt's the last table that I am most curious (and worried) about, the question\nbeing mainly what the optimal way of structuring that table is. Option 1 is\nthe given option: adding/deleting an XML-record for the same book requires\nadding/deleting it to/from the array of XML-records.\n\nOption 2 would be something like this:\nEAN13 | XML-record \nwhere, if a book has several records describing it, there are multiple entries\nof the EAN13|XML-record - pair. Adding an XML-record for the same book,\nrequires adding a new entry to the table as a whole.\n\nSo, option 1-tables look like this:\nEAN13 | ARRAY OF XML-records\n0001 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n0002 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n\nOption-2 tables look like this:\nEAN13 | ARRAY OF XML-records\n0001 | <XML1>...</XML1>\n0001 | <XML2>...</XML2>\n0002 | <XML1>...</XML1>\n0002 | <XML2>...</XML2>\n\nWe can't decide which one is best. These are some issues we can think of:\n\nIndexing: For option 1, the EAN13-index remains unique, even if you have\nmultiple XML-records; for option 2 it does not, since multiple XML-records are\nstored as multiple tuples. On the other hand, an additional internal index can\nbe used to link the several tuples of option 2 to the information in the\n`lookup'-tables (author, title, keyword). Does any of these two options\nincrease query efficiency, ie. speed?\n\nDatabase growth: On average, the information about a book is updated three\ntimes per year. In option 1, this means that the length of the table does not\nincrease, but the width does. If we choose option 2, if we have three updates\nper book each year, the length of the table triples, but the width does\nnot. What is more costly to store for postgres, long arrays or long tables?\n\nIntegrity: Option 1 means that our software needs to keep track of all the\nbookkeeping for arrays, since such support is quite rudimentary in\npostgres. For example, it is hard to take out a record from the middle of an\narray. Also, a multidimensional array, which contains for each record the\nrecord itself and its type, is even harder to maintain. 
Option 2 has a simpler\ndatatype, so integrity can be easier inforced using the standard\npostgres-machinery of variable-types etc.\n\nArrays are non-standard SQL, and I hear that PHP-support for postgres & arrays\nis rudimentary. So that might be an argument to avoid using them, and go for\noption 2. From the standpoint of performance (or wisdom), can you help me\ndecide what I should choose? Or is there maybe an even better way to structure\nmy data?\n\nThanks for any contribution!\n\nRoelant.\n", "msg_date": "Thu, 29 Apr 2004 12:41:06 -0400", "msg_from": "Roelant Ossewaarde <[email protected]>", "msg_from_op": true, "msg_subject": "Use arrays or not?" }, { "msg_contents": "Roelant,\n\nYours is not a performance question, so I'm crossing it over to SQL for advice \non database design.\n\n> I am building an application using postgresql to store XML-records. There\n> is a debate within the group of developers about the best way to store our\n> data. I hope you can help us make a decision.\n>\n> The data consists of XML-records, with a lot of XML-fields. I want to store\n> the XML as it is, so taking the information from the XML-records and then\n> storing it in a different-from-XML-format is not an option.\n>\n> Each XML-record describes data about one book. If an update of bookdata\n> comes, the XML itself is not changed, but a new XML-record is stored with\n> the updated data. Via a complex scheme of combining a base record and its\n> updates, the final dataset is produced that is used in the application.\n>\n> There are different XML-formats that need to be combined. Right now, we can\n> handle three different XML-formats, each with its own structure (but all\n> describing book-data).\n>\n> Searching is done via a simple table lookup on three different fields:\n> title, author and subject. The data for these fields is extracted from the\n> database. Each book has a unique identifier (EAN13, derivative of ISBN).\n>\n> Here is one way to organize the database:\n> table title:\n> TITLE | EAN13, indexing on TITLE\n>\n> table author:\n> AUTHOR | EAN13, indexing on AUTHOR\n>\n> table subject:\n> SUBJECT | EAN13, indexing on SUBJECT.\n\nThis is a *very* strange way of setting up your database. Are you new to \nRelational Databases and SQL? If so, I'd recommend starting with a book on \nrelational database design.\n\nEither that, or you're a victim of UML design. \n\nIf only one author, title and subject are allowed per book, you should have:\n\ntable books\n\tEAN13 | TITLE | AUTHOR | SUBJECT\n\n> Finally:\n> table record:\n> EAN13 | ARRAY OF XML-records.\n>\n> It's the last table that I am most curious (and worried) about, the\n> question being mainly what the optimal way of structuring that table is.\n> Option 1 is the given option: adding/deleting an XML-record for the same\n> book requires adding/deleting it to/from the array of XML-records.\n>\n> Option 2 would be something like this:\n> EAN13 | XML-record\n> where, if a book has several records describing it, there are multiple\n> entries of the EAN13|XML-record - pair. Adding an XML-record for the same\n> book, requires adding a new entry to the table as a whole.\n\nIn my mind, there is no question that this is the best way to do things. 
It \nis a normalized data structure, as opposed to the arrays, which are now.\n\n>\n> So, option 1-tables look like this:\n> EAN13 | ARRAY OF XML-records\n> 0001 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n> 0002 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n>\n> Option-2 tables look like this:\n> EAN13 | ARRAY OF XML-records\n> 0001 | <XML1>...</XML1>\n> 0001 | <XML2>...</XML2>\n> 0002 | <XML1>...</XML1>\n> 0002 | <XML2>...</XML2>\n>\n> We can't decide which one is best. These are some issues we can think of:\n>\n> Indexing: For option 1, the EAN13-index remains unique, even if you have\n> multiple XML-records; for option 2 it does not, since multiple XML-records\n> are stored as multiple tuples. On the other hand, an additional internal\n> index can be used to link the several tuples of option 2 to the information\n> in the `lookup'-tables (author, title, keyword). Does any of these two\n> options increase query efficiency, ie. speed?\n>\n> Database growth: On average, the information about a book is updated three\n> times per year. In option 1, this means that the length of the table does\n> not increase, but the width does. If we choose option 2, if we have three\n> updates per book each year, the length of the table triples, but the width\n> does not. What is more costly to store for postgres, long arrays or long\n> tables?\n>\n> Integrity: Option 1 means that our software needs to keep track of all the\n> bookkeeping for arrays, since such support is quite rudimentary in\n> postgres. For example, it is hard to take out a record from the middle of\n> an array. Also, a multidimensional array, which contains for each record\n> the record itself and its type, is even harder to maintain. Option 2 has a\n> simpler datatype, so integrity can be easier inforced using the standard\n> postgres-machinery of variable-types etc.\n>\n> Arrays are non-standard SQL, and I hear that PHP-support for postgres &\n> arrays is rudimentary. So that might be an argument to avoid using them,\n> and go for option 2. From the standpoint of performance (or wisdom), can\n> you help me decide what I should choose? Or is there maybe an even better\n> way to structure my data?\n>\n> Thanks for any contribution!\n>\n> Roelant.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 29 Apr 2004 10:23:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use arrays or not?" }, { "msg_contents": "Hi,\n\nThanks for your fast response. But I'm afraid I stated some things unclear.\n\n> >>The data consists of XML-records, with a lot of XML-fields. I want to \n> >>store\n> >>the XML as it is, so taking the information from the XML-records and then\n> >>storing it in a different-from-XML-format is not an option.\n> \n> Actually, your data consists of data. The XML is just scaffolding to \n> enable you to locate and understand your data. Since you are storing it \n> in a relational database, that will use relational scaffolding for its \n> organisation instead. At least partially, you will *have* to parse the \n> values out and organise them differently.\nI do, and I know. But I want to store the XML-records as they are. So given an\nauthor, title and an XML-record that are related to that author and title, how\nto store that. That's the question. I have good reasons to store the\nXML-records as they are, without further parsing them.\n\n> >>Each XML-record describes data about one book. 
If an update of bookdata\n> >>comes, the XML itself is not changed, but a new XML-record is stored with\n> >>the updated data. Via a complex scheme of combining a base record and its\n> >>updates, the final dataset is produced that is used in the application.\n> >>\n> >>Searching is done via a simple table lookup on three different fields:\n> >>title, author and subject. The data for these fields is extracted from the\n> >>database. Each book has a unique identifier (EAN13, derivative of ISBN).\n> >>\n> >>Here is one way to organize the database:\n> >>table title:\n> >>TITLE | EAN13, indexing on TITLE\n> >>\n> >>table author:\n> >>AUTHOR | EAN13, indexing on AUTHOR\n> >>\n> >>table subject:\n> >>SUBJECT | EAN13, indexing on SUBJECT.\n> >\n> >\n> >This is a *very* strange way of setting up your database. Are you new to \n> >Relational Databases and SQL? If so, I'd recommend starting with a book \n> >on relational database design.\n> I agree with Josh - think about a book.\n\nThank your for the recommendations. But the above thing is just background\ninformation, it will not be stored as such. The important question for me is\nthe question whether to use arrays or not. With index in the above examples I\ndo not mean the actual postgres-index, I mean that those are the fields that\nare used in searching. One never searches on an EAN13-number, only on author,\ntitle and subject. And one never, by the way, searches for a specific\nXML-record, only the total of the stored XML-records per book should be retrieved.\n\n> \n> >If only one author, title and subject are allowed per book, you should \n> >have:\n> >\n> >table books\n> >\tEAN13 | TITLE | AUTHOR | SUBJECT\n> \n> If, on the other hand you can have multiple authors (likely) you'll want \n> something like:\n> \n> CREATE TABLE author (\n> ean13 varchar(13), -- Guessing ean13 format\n> author_num int4,\n> author_name text,\n> PRIMARY KEY (ean13, author_num)\n> );\n> \n> Then you can have rows like:\n> \n> ('my-ean-number-here', 1, 'Aaron Aardvark')\n> ('my-ean-number-here', 2, 'Betty Bee')\n> etc.\n\nYes, I have such a thing. There can be multiple titles, multiple authors and\nmultiple keywords per book. \n\n> \n> \n> >>Finally:\n> >>table record:\n> >>EAN13 | ARRAY OF XML-records.\n> >>\n> >>It's the last table that I am most curious (and worried) about, the\n> >>question being mainly what the optimal way of structuring that table is.\n> >>Option 1 is the given option: adding/deleting an XML-record for the same\n> >>book requires adding/deleting it to/from the array of XML-records.\n> >>\n> >>Option 2 would be something like this:\n> >>EAN13 | XML-record\n> >>where, if a book has several records describing it, there are multiple\n> >>entries of the EAN13|XML-record - pair. Adding an XML-record for the same\n> >>book, requires adding a new entry to the table as a whole.\n> >\n> >\n> >In my mind, there is no question that this is the best way to do things. \n> >It is a normalized data structure, as opposed to the arrays, which are now.\n> \n> Although your option 2 doesn't go quite far enough. You'll also want to \n> know what order these come in. So, assuming you can't have two updates \n> at the same time:\n> \n> CREATE TABLE book_history (\n> ean13 varchar(13), -- Guessing ean13 format\n> ts timestamp with time zone,\n> xml text,\n> PRIMARY KEY (ean13, ts)\n> );\n\nThe order is not important; the interpretation of the XML-records is done by\nan external module. 
The order is determined upon the content of the\nXML-records, because they can come from different sources and can be combined\nin different ways, depending on the application processing the\nXML-records. Order is not determined at the moment that the records are\nstored, but at the moment the records are interpreted.\n\n> As for your other concerns:\n> >>Indexing:\n> >>Database growth:\n> >>Integrity:\n> Just worry about the integrity - if you keep the design simple, \n> PostgreSQL will manage quite large growth on quite small hardware.\n\nWhat would be a situation in which one should use arrays then?\n\n\n> Now... I don't think you want to do what you're trying to do. Don't take \n> this personally, but unless you're extremely pushed for time and \n> resources this is almost certainly a bad design choice.\n\n> 1. Wrong tool for the job\n> Basically you're taking a relational database and treating it like a \n> filesystem. All you need for what you're doing is a directory-tree to \n> represent the ean13 structure and one file per xml-record. Index the \n> author/title fields with dbm/SQLite. You could write the whole thing in \n> a day - simple, efficient, leverages existing unix tools.\n\nThat is correct. I think nothing would beat a dbm-style solution qua\nperformance, and I'm still considering using that. The added value of a system\nlike postgresql is the client/server-interface and the omnipresent support of\nprogramming languages, and not in the least the familiarity of most people\nwith mysql/postgresql in comparison to dbm.\n\n> 2. Wrong job for the tool\n> How do I find out which publisher produced the most books in 2003 (I'm \n> assuming this is in your XML somewhere)? Which book is available in the \n> most languages?\n> How many updates were applied last month? How many different books did \n> they affect? Why do the numbers not match - which books had multiple \n> changes?\n> The first set of questions need you to write code, the second set don't. \n> Why? Because the second set rely on information stored simply and \n> explicitly in the database (book_history as it happens).\n\nWe know what set of questions will be asked: we only need to access through\nauthor, title and subject keywords. Another reason maybe to choose a\nnon-relational model.\n\n> 3. The medium isn't the message\n> You don't want to open up your XML records to store them in the \n> database, but I assume you have to in your PHP code, or you can't \n> process individual values. As it stands you're having to extract certain \n> information when an XML update arrives anyway. If the title of a book is \n> amended, then you'll need to remember to update the book_title table. If \n> it's simple to extract more, why not do so? If some of it is fiddly to \n> represent in an SQL database then at least extract everything that is \n> convenient.\n\nThere are good reasons for that. The XMLs not necessarily contain the input to\nauthor/title/subject-tables (or columns). I just want to store the XML,\ninterpret it later. (really, it makes sense!)\n\n> Oh - and I would probably store a \"current snapshot\" of the book's \n> record separately too. Saves your application having to recalculate it \n> every time it's needed.\n\nI *want* to recalculate it every time it's needed. Because the different\nXML-records can be combined in several ways, depending on the application that\nis accessing the database-client.\n\nSo, let me rephrase my questions:\n1. When and why would anyone use arrays?\n2. 
When designing the database, is it really true that there is no performance\ndifference between a table of which the number of tuples grow by a factor of,\nsay 10, and a table of which the size of the tuples grow by a factor of, say\n10?\n\nThanks, \n\nRoelant.\n", "msg_date": "Thu, 29 Apr 2004 16:26:40 -0400", "msg_from": "Roelant Ossewaarde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use arrays or not?" }, { "msg_contents": "Roelant,\n\n> So, let me rephrase my questions:\n> 1. When and why would anyone use arrays?\n\nWhen the data itself is an ordered set of items which is indivisible and lacks \nmeaning outside the ordered set. For example, a set of ordered pairs of \nmolecules in a gene snippet. Or a mathematical matrix.\n\n> 2. When designing the database, is it really true that there is no \nperformance\n> difference between a table of which the number of tuples grow by a factor \nof,\n> say 10, and a table of which the size of the tuples grow by a factor of, say\n> 10?\n\nNobody's tested anything. I would *tend* to think that PostgreSQL would \nhandle more-of-less-wide-rows somewhat better, but that's just a guess. \n\nHmmm ... not completely a guess. Postgres, by default, compresses fields \nover 8K in size (see TOAST in the docs). This makes those fields somewhat \nslower to update. So if 1 XML rec < 8k but 4 XML rec > 8k, there could be a \nsmall-but-noticeable performance loss from going to \"broad\" rows.\n\nIf I had your application, I would not go for the array approach, jjust to \navoid maintainence headaches. For example, what happens when the books \nstart having a variable number of XML records? Normalized designs are \nalmost always easier to deal with from a perspective of long-term \nmaintainence.\n\nThe arrays, as far as I can tell, gain you nothing in ethier performance or \nconvenience.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 29 Apr 2004 13:47:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use arrays or not?" }, { "msg_contents": "\nJust some comments from my experience:\n\nPgSQL Arrays are mainly for modeling arrays not relations.\nArrays can be very useful if they are not ever gonna be treated\nas relations and if performance is an issue (e.g. dynamic graphs, on the \nfly statistics etc..).\n\nAlso (besides other solutions) int[] arrays is a handy way of implementing\ntree structures in a database.\n\nFor your case as you set it the XML arrays will make your life difficult.\n\nO kyrios Josh Berkus egrapse stis Apr 29, 2004 :\n\n> Roelant,\n> \n> Yours is not a performance question, so I'm crossing it over to SQL for advice \n> on database design.\n> \n> > I am building an application using postgresql to store XML-records. There\n> > is a debate within the group of developers about the best way to store our\n> > data. I hope you can help us make a decision.\n> >\n> > The data consists of XML-records, with a lot of XML-fields. I want to store\n> > the XML as it is, so taking the information from the XML-records and then\n> > storing it in a different-from-XML-format is not an option.\n> >\n> > Each XML-record describes data about one book. If an update of bookdata\n> > comes, the XML itself is not changed, but a new XML-record is stored with\n> > the updated data. 
Via a complex scheme of combining a base record and its\n> > updates, the final dataset is produced that is used in the application.\n> >\n> > There are different XML-formats that need to be combined. Right now, we can\n> > handle three different XML-formats, each with its own structure (but all\n> > describing book-data).\n> >\n> > Searching is done via a simple table lookup on three different fields:\n> > title, author and subject. The data for these fields is extracted from the\n> > database. Each book has a unique identifier (EAN13, derivative of ISBN).\n> >\n> > Here is one way to organize the database:\n> > table title:\n> > TITLE | EAN13, indexing on TITLE\n> >\n> > table author:\n> > AUTHOR | EAN13, indexing on AUTHOR\n> >\n> > table subject:\n> > SUBJECT | EAN13, indexing on SUBJECT.\n> \n> This is a *very* strange way of setting up your database. Are you new to \n> Relational Databases and SQL? If so, I'd recommend starting with a book on \n> relational database design.\n> \n> Either that, or you're a victim of UML design. \n> \n> If only one author, title and subject are allowed per book, you should have:\n> \n> table books\n> \tEAN13 | TITLE | AUTHOR | SUBJECT\n> \n> > Finally:\n> > table record:\n> > EAN13 | ARRAY OF XML-records.\n> >\n> > It's the last table that I am most curious (and worried) about, the\n> > question being mainly what the optimal way of structuring that table is.\n> > Option 1 is the given option: adding/deleting an XML-record for the same\n> > book requires adding/deleting it to/from the array of XML-records.\n> >\n> > Option 2 would be something like this:\n> > EAN13 | XML-record\n> > where, if a book has several records describing it, there are multiple\n> > entries of the EAN13|XML-record - pair. Adding an XML-record for the same\n> > book, requires adding a new entry to the table as a whole.\n> \n> In my mind, there is no question that this is the best way to do things. It \n> is a normalized data structure, as opposed to the arrays, which are now.\n> \n> >\n> > So, option 1-tables look like this:\n> > EAN13 | ARRAY OF XML-records\n> > 0001 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n> > 0002 | {<XML1>...</XML1>, <XML2>...</XML2>, ...}\n> >\n> > Option-2 tables look like this:\n> > EAN13 | ARRAY OF XML-records\n> > 0001 | <XML1>...</XML1>\n> > 0001 | <XML2>...</XML2>\n> > 0002 | <XML1>...</XML1>\n> > 0002 | <XML2>...</XML2>\n> >\n> > We can't decide which one is best. These are some issues we can think of:\n> >\n> > Indexing: For option 1, the EAN13-index remains unique, even if you have\n> > multiple XML-records; for option 2 it does not, since multiple XML-records\n> > are stored as multiple tuples. On the other hand, an additional internal\n> > index can be used to link the several tuples of option 2 to the information\n> > in the `lookup'-tables (author, title, keyword). Does any of these two\n> > options increase query efficiency, ie. speed?\n> >\n> > Database growth: On average, the information about a book is updated three\n> > times per year. In option 1, this means that the length of the table does\n> > not increase, but the width does. If we choose option 2, if we have three\n> > updates per book each year, the length of the table triples, but the width\n> > does not. What is more costly to store for postgres, long arrays or long\n> > tables?\n> >\n> > Integrity: Option 1 means that our software needs to keep track of all the\n> > bookkeeping for arrays, since such support is quite rudimentary in\n> > postgres. 
For example, it is hard to take out a record from the middle of\n> > an array. Also, a multidimensional array, which contains for each record\n> > the record itself and its type, is even harder to maintain. Option 2 has a\n> > simpler datatype, so integrity can be easier inforced using the standard\n> > postgres-machinery of variable-types etc.\n> >\n> > Arrays are non-standard SQL, and I hear that PHP-support for postgres &\n> > arrays is rudimentary. So that might be an argument to avoid using them,\n> > and go for option 2. From the standpoint of performance (or wisdom), can\n> > you help me decide what I should choose? Or is there maybe an even better\n> > way to structure my data?\n> >\n> > Thanks for any contribution!\n> >\n> > Roelant.\n> \n> \n> \n\n-- \n-Achilleus\n\n", "msg_date": "Fri, 30 Apr 2004 08:49:41 +0300 (EEST)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use arrays or not?" } ]
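To make the normalized alternative concrete, a minimal sketch of what 'option 2' could look like; every name here, and the source and received columns, are illustrative guesses rather than a prescription. One narrow row per incoming XML record keeps inserts trivial and leaves the combination logic to the application, which is what Roelant wants:

    CREATE TABLE book (
        ean13     char(13) PRIMARY KEY
    );

    CREATE TABLE book_record (
        ean13     char(13)    NOT NULL REFERENCES book,
        source    text        NOT NULL,          -- which of the three XML formats
        received  timestamptz NOT NULL DEFAULT now(),
        xml       text        NOT NULL
    );
    CREATE INDEX book_record_ean13 ON book_record (ean13);

    CREATE TABLE book_author (
        ean13     char(13) NOT NULL REFERENCES book,
        author    text     NOT NULL
    );
    CREATE INDEX book_author_author ON book_author (author);

    -- fetch every stored record for one book; interpretation happens in the application
    SELECT xml FROM book_record WHERE ean13 = '9781234567897';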
[ { "msg_contents": "> \n> Having picked out an index scan as being the highest time user I \n> concentrated on that in this case and compared the same index scan on \n> MSSQL. At least MSSQL reported it as an index scan on the same index \n> for the same number of rows. \n> \n\nI should have also pointed out that MSSQL reported that same index scan as taking 65% of the overall query time.\nIt was just \"faster\". The overall query took 103ms in MSSQL.\n\nGary.\n\n", "msg_date": "Fri, 30 Apr 2004 00:01:10 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On Fri, 30 Apr 2004, Gary Doades wrote:\n\n> I should have also pointed out that MSSQL reported that same index scan\n> as taking 65% of the overall query time. It was just \"faster\". The\n> overall query took 103ms in MSSQL.\n\nAre your results based on a single client accessing the database and no \nconcurrent updates?\n\nWould adding more clients, and maybe having some client that\nupdates/inserts into the tables, still make mssql faster then pg? Maybe\nit's so simple as pg being optimized for more concurrent users then mssql?\n\nI'm just asking, I don't know much about the inner workings of \nmssql.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Fri, 30 Apr 2004 07:26:59 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On 30 Apr 2004 at 7:26, Dennis Bjorklund wrote:\n\n> On Fri, 30 Apr 2004, Gary Doades wrote:\n>\n> > I should have also pointed out that MSSQL reported that same index scan\n> > as taking 65% of the overall query time. It was just \"faster\". The\n> > overall query took 103ms in MSSQL.\n>\n> Are your results based on a single client accessing the database and no\n> concurrent updates?\n>\n> Would adding more clients, and maybe having some client that\n> updates/inserts into the tables, still make mssql faster then pg? Maybe\n> it's so simple as pg being optimized for more concurrent users then mssql?\n>\n> I'm just asking, I don't know much about the inner workings of\n> mssql.\n>\n> --\n> /Dennis Björklund\n>\n\nAt the moment it is difficult to set up many clients for testing concurrent\nstuff. In the past I have had several SQLServer clients under test,\nmainly select queries. MSSQL can certainly execute queries while other\nqueries are still running in the background.\n\nOur production app is fairly well biased towards selects. Currently it is\nabout 70% selects, 20% inserts, 6% deletes and 4% updates. Very few\nupdates are more than one row based on the primary key. Over 90% of\nthe time spend running SQL is in select queries.\n\nMy limited concurrent testing on Postgres gives very good performance\non updates, inserts, deletes, but it is suffering on the selects in certain\nareas which why I have been concentrating my efforts on that area.\n\nHaving got similar (or the same) access plans in both Postgres and\nMSSQL I was getting down to the next level of checking what was going\non when executing the already planned query.\n\nI do have another database system I could try. Sybase SQLAnywhere.\nThis is not the original Sybase Entrerprise which has the same roots as\nMSSQL. In the past my testing suggested that SQLAnywhere\nperformance was as godd or better than MSSQL. 
I mey try to set it up\nwith the same data in these tests for a more detailed comparison.\n\nRegards,\nGary.\n\n", "msg_date": "Fri, 30 Apr 2004 08:01:26 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "\nOn Apr 30, 2004, at 3:01 AM, Gary Doades wrote:\n\n[ pg query plan, etc ]\n\nI wonder if other parts of the plan are affecting the speed.\n\nI've recently run into a case where a merge join plan was chosen for \nthis query, which took 11 seconds to execute. Forcing it to pick a \nnested loop join dropped it to 3. (Updating my \ndefault_statistics_target to 500 caused the planner to choose nested \nloop join)\n\nSo, is the plan really the same?\n\n A better comparision query may be a simple \"select a from mytable \nwhere a between foo and bar\" to get an index scan. In that case its a \nstraight up, vanilla index scan. Nothing else getting in the way.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Fri, 30 Apr 2004 08:32:16 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "On 30 Apr 2004 at 8:32, Jeff wrote:\n> \n> A better comparision query may be a simple \"select a from mytable \n> where a between foo and bar\" to get an index scan. In that case its a \n> straight up, vanilla index scan. Nothing else getting in the way.\n> \n\nYes, you're right and I have done this just to prove to myself that it is the index scan that \nis the bottleneck. I have some complex SQL that executes very quickly with Postgres, \nsimilar to MSSQL, but the index scans in most of those only touch a few rows for a few \nloops. It seems to be a problem when the index scan is scanning very many rows and \nfor each of these it has to go to the table just to find out if the index it just looked at is \nstill valid.\n\nGary.\n\n", "msg_date": "Fri, 30 Apr 2004 19:29:44 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner/optimizer question " }, { "msg_contents": "\n\nOn Fri, 30 Apr 2004, Gary Doades wrote:\n\n> Yes, you're right and I have done this just to prove to myself that it\n> is the index scan that is the bottleneck. I have some complex SQL that\n> executes very quickly with Postgres, similar to MSSQL, but the index\n> scans in most of those only touch a few rows for a few loops. It seems\n> to be a problem when the index scan is scanning very many rows and for\n> each of these it has to go to the table just to find out if the index it\n> just looked at is still valid.\n> \n\nAnother way to speed this up is the TODO item: \"Use bitmaps to fetch \nheap pages in sequential order\" For an indexscan that fetches a number \nof rows those rows may be anywhere in the base table so as each index \nentry is found it fetches the corresponding table row from the heap. This \nis not ideal because you can be jumping around in the heap and end up \nfetching the same page multiple times because table rows are in the same \npage, but were found in different places in the index.\n\nKris Jurka\n", "msg_date": "Fri, 30 Apr 2004 14:48:32 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner/optimizer question " } ]
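Until the bitmap-fetch TODO item Kris mentions exists, the usual way to soften this cost is to make the heap order follow the index that the slow scan uses, so the per-row visibility checks hit far fewer pages. A hedged sketch with placeholder names, since the real index and table are not shown in the thread:

    CLUSTER some_index_name ON some_table;   -- rewrites the table in index order
    ANALYZE some_table;                      -- correlation of the leading column is now close to 1.0

CLUSTER takes an exclusive lock and the ordering decays as new rows arrive, so it has to be repeated periodically.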
[ { "msg_contents": "I have a table that is never updated, only INSERTED into. Is there a way \nI can prevent vacuum wasting time on this table besides vacuuming each \ntable in the db by itself and omitting this table?\n\nHow feasable would it be to have a marker somewhere in pg that is \n\"updated since last vacuum\" that would be cleared when vacuum runs, and \nif set vacuum will ignore that table?\n", "msg_date": "Thu, 29 Apr 2004 19:08:23 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Insert only tables and vacuum performance" }, { "msg_contents": "Joseph Shraibman wrote:\n> I have a table that is never updated, only INSERTED into. Is there a way \n> I can prevent vacuum wasting time on this table besides vacuuming each \n> table in the db by itself and omitting this table?\n> \n> How feasable would it be to have a marker somewhere in pg that is \n> \"updated since last vacuum\" that would be cleared when vacuum runs, and \n> if set vacuum will ignore that table?\n\nOr even better an offset into the datatable for the earliest deleted \nrow, so if you have a table where you update the row shortly after \ninsert and then never touch it vacuum can skip most of the table \n(inserts are done at the end of the table, right?)\n", "msg_date": "Thu, 29 Apr 2004 19:24:40 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert only tables and vacuum performance" }, { "msg_contents": "> Or even better an offset into the datatable for the earliest deleted \n> row, so if you have a table where you update the row shortly after \n> insert and then never touch it vacuum can skip most of the table \n> (inserts are done at the end of the table, right?)\n\nInserts are done at the end of the table as a last resort. But anyway,\nhow do you handle a rolled back insert?\n\n\n", "msg_date": "Thu, 29 Apr 2004 20:44:16 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert only tables and vacuum performance" }, { "msg_contents": "Rod Taylor wrote:\n>>Or even better an offset into the datatable for the earliest deleted \n>>row, so if you have a table where you update the row shortly after \n>>insert and then never touch it vacuum can skip most of the table \n>>(inserts are done at the end of the table, right?)\n> \n> \n> Inserts are done at the end of the table as a last resort.\n\nBut if most of the table is never updated then the inserts would tend to \nbe at the end, right?\n\n> But anyway,\n> how do you handle a rolled back insert?\n> \nIt is considered like a deleted row to be vacuumed.\n", "msg_date": "Thu, 29 Apr 2004 20:53:21 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert only tables and vacuum performance" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n> I have a table that is never updated, only INSERTED into. Is there a way \n> I can prevent vacuum wasting time on this table\n\nWhat makes you think vacuum is wasting much time on this table? AFAICS\nit will only update any unfixed hint bits ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 2004 00:30:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert only tables and vacuum performance " }, { "msg_contents": "Tom Lane wrote:\n> Joseph Shraibman <[email protected]> writes:\n> \n>>I have a table that is never updated, only INSERTED into. 
Is there a way \n>>I can prevent vacuum wasting time on this table\n> \n> \n> What makes you think vacuum is wasting much time on this table? AFAICS\n> it will only update any unfixed hint bits ...\n> \n> \t\t\tregards, tom lane\n\nINFO: \"elog\": found 0 removable, 12869411 nonremovable row versions in \n196195 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 5 unused item pointers.\n0 pages are entirely empty.\nCPU 31.61s/4.53u sec elapsed 1096.83 sec.\n\nIt took 1096.83 seconds, and what did it accomplish? And what are hint \nbits?\n", "msg_date": "Fri, 30 Apr 2004 01:57:44 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert only tables and vacuum performance" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n> INFO: \"elog\": found 0 removable, 12869411 nonremovable row versions in \n> 196195 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 5 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 31.61s/4.53u sec elapsed 1096.83 sec.\n\nHmm. These numbers suggest that your disk subsystem's bandwidth is\nonly about 1.4 Mbytes/sec. Was there a lot else going on?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 2004 23:04:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert only tables and vacuum performance " } ]
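One way to get the effect Joseph is after without naming every table by hand is to generate the per-table VACUUM list from the statistics collector, skipping anything that has seen no updates or deletes since the counters were last reset (this assumes stats_row_level is enabled):

    SELECT 'VACUUM ANALYZE '
           || quote_ident(schemaname) || '.' || quote_ident(relname) || ';'
      FROM pg_stat_user_tables
     WHERE n_tup_upd + n_tup_del > 0;

Feeding the output back through psql vacuums only the tables that can actually contain dead rows; a database-wide VACUUM is still needed once in a while for transaction-ID wraparound.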
[ { "msg_contents": "How does the analyzer/planner deal with rows clustered together? Does \nit just assume that if this col is clustered on then the actual data \nwill be clustered? What if the data in the table happens to be close \ntogether because it was inserted together originally?\n", "msg_date": "Thu, 29 Apr 2004 19:09:09 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "analyzer/planner and clustered rows" }, { "msg_contents": "On Thu, 29 Apr 2004 19:09:09 -0400, Joseph Shraibman\n<[email protected]> wrote:\n>How does the analyzer/planner deal with rows clustered together?\n\nThere's a correlation value per column. Just try\n\n\tSELECT attname, correlation\n\t FROM pg_stats\n\t WHERE tablename = '...';\n\nif you are interested. It indicates how well the hypothetical order of\ntuples if sorted by that column corresponds to the physical order. +1.0\nis perfect correlation, 0.0 is totally chaotic, -1.0 means reverse\norder. The optimizer is more willing to choose an index scan if\ncorrelation for the first index column is near +/-1.\n\n> What if the data in the table happens to be close \n>together because it was inserted together originally?\n\nHaving equal values close to each other is not enough, the values should\nbe increasing, too. Compare\n\n\t5 5 5 4 4 4 7 7 7 2 2 2 6 6 6 3 3 3 8 8 8 low correlation\nand\n\t2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 correlation = 1.0\n\n", "msg_date": "Fri, 30 Apr 2004 09:33:13 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: analyzer/planner and clustered rows" } ]
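Manfred's pg_stats query works for any table; when the correlation of a heavily range-scanned column is poor, CLUSTER can physically reorder the heap on that column's index. A sketch with placeholder names (note that CLUSTER takes an exclusive lock and rewrites the table):

    SELECT attname, correlation
      FROM pg_stats
     WHERE tablename = 'orders';           -- hypothetical table

    CLUSTER orders_created_idx ON orders;  -- 7.x syntax: index name first
    ANALYZE orders;                        -- correlation should now be close to 1.0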
[ { "msg_contents": "Lets say I have two columns, A and B. They are each indexed seperately. \n If I do a query like:\nSELECT * FROM table WHERE A = 1 AND B = 2;\npostgres can only use one index.\n\nI assume that postgres uses the index data to narrow down pages in the \ntable to visit when doing its search. Then it goes through and filters \non the second condition.\n\nMy question: why can't it go through the first index, get a list of \npages in the table, then go through the second index, union the result \nwith the results from first index, and then go into the table?\n", "msg_date": "Thu, 29 Apr 2004 19:19:09 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "why can't 2 indexes be used at once?" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n> My question: why can't it go through the first index, get a list of \n> pages in the table, then go through the second index, union the result \n> with the results from first index, and then go into the table?\n\nSee TODO list ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 2004 23:57:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why can't 2 indexes be used at once? " } ]
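Until the planner can combine two single-column indexes (the TODO item Tom points to), the usual workaround is a composite index covering both columns; sketched with placeholder names, since the thread describes the table only abstractly:

    CREATE INDEX t_a_b_idx ON t (a, b);

    -- a single index scan can now satisfy both conditions
    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;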
[ { "msg_contents": "Back in 2001, there was a lengthy thread on the PG Hackers list about PG \nand journaling file systems \n(http://archives.postgresql.org/pgsql-hackers/2001-05/msg00017.php), but \nthere was no decisive conclusion regarding what FS to use. At the time \nthe fly in the XFS ointment was that deletes were slow, but this was \nimproved with XFS 1.1.\n\nI think a journaling a FS is needed for PG data since large DBs could \ntake hours to recover on a non-journaling FS, but what about WAL files?\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n\n", "msg_date": "Sun, 02 May 2004 03:11:12 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Recommended File System Configuration" }, { "msg_contents": "[email protected] (James Thornton) writes:\n> Back in 2001, there was a lengthy thread on the PG Hackers list about\n> PG and journaling file systems\n> (http://archives.postgresql.org/pgsql-hackers/2001-05/msg00017.php),\n> but there was no decisive conclusion regarding what FS to use. At the\n> time the fly in the XFS ointment was that deletes were slow, but this\n> was improved with XFS 1.1.\n>\n> I think a journaling a FS is needed for PG data since large DBs could\n> take hours to recover on a non-journaling FS, but what about WAL files?\n\nIf the WAL files are on a small filesystem, it presumably won't take\nhours for that filesystem to recover at fsck time.\n\nThe results have not been totally conclusive...\n\n - Several have found JFS to be a bit faster than anything else on\n Linux, but some data loss problems have been experienced;\n\n - ext2 has the significant demerit that with big filesystems, fsck\n will \"take forever\" to run;\n\n - ext3 appears to be the slowest option out there, and there are some\n stories of filesystem corruption;\n\n - ReiserFS was designed to be real fast with tiny files, which is not\n the ideal \"use case\" for PostgreSQL; the designers there are\n definitely the most aggressive at pushing out \"bleeding edge\" code,\n which isn't likely the ideal;\n\n - XFS is neither fastest nor slowest, but there has been a lack of\n reports of \"spontaneous data loss\" under heavy load, which is a\n good thing. It's not part of \"official 2.4\" kernels, requiring\n backports, but once 2.6 gets more widely deployed, this shouldn't\n be a demerit anymore...\n\nI think that provides a reasonable overview of what has been seen...\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/oses.html\nDonny: Are these the Nazis, Walter?\nWalter: No, Donny, these men are nihilists. There's nothing to be\nafraid of. 
-- The Big Lebowski\n", "msg_date": "Mon, 03 May 2004 12:38:33 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended File System Configuration" }, { "msg_contents": "Chris Browne wrote:\n\n> The results have not been totally conclusive...\n> \n> - Several have found JFS to be a bit faster than anything else on\n> Linux, but some data loss problems have been experienced;\n> \n> - ext2 has the significant demerit that with big filesystems, fsck\n> will \"take forever\" to run;\n> \n> - ext3 appears to be the slowest option out there, and there are some\n> stories of filesystem corruption;\n\n\nIn an Oracle paper entitled Tuning an \"Oracle8i Database Running Linux\" \n(http://otn.oracle.com/oramag/webcolumns/2002/techarticles/scalzo_linux02.html), \nDr. Bert Scalzo says, \"The trouble with these tests-for example, Bonnie, \nBonnie++, Dbench, Iobench, Iozone, Mongo, and Postmark-is that they are \nbasic file system throughput tests, so their results generally do not \npertain in any meaningful fashion to the way relational database systems \naccess data files.\" Instead he suggests users benchmarking filesystems \nfor database applications should use these two well-known and widely \naccepted database benchmarks:\n\nAS3AP (http://www.benchmarkresources.com/handbook/5.html): a scalable, \nportable ANSI SQL relational database benchmark that provides a \ncomprehensive set of tests of database-processing power; has built-in \nscalability and portability for testing a broad range of systems; \nminimizes human effort in implementing and running benchmark tests; and \nprovides a uniform, metric, straightforward interpretation of the results.\n\nTPC-C (http://www.tpc.org/): an online transaction processing (OLTP) \nbenchmark that involves a mix of five concurrent transactions of various \ntypes and either executes completely online or queries for deferred \nexecution. The database comprises nine types of tables, having a wide \nrange of record and population sizes. This benchmark measures the number \nof transactions per second.\n\nI encourage you to read the paper -- Dr. Scalzo's results will surprise \nyou; however, while he benchmarked ext2, ext3, ReiserFS, JFS, and RAW, \nhe did not include XFS.\n\nSGI and IBM did a more detailed study on Linux filesystem performance, \nwhich included XFS, ext2, ext3 (various modes), ReiserFS, and JRS, and \nthe results are presented in a paper entitled \"Filesystem Performance \nand Scalability in Linux 2.4.17\" \n(http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf). This \npaper goes over the details on how to properly conduct a filesystem \nbenchmark and addresses scaling and load more so than Dr. Scalzo's tests.\n\nFor further study, I have compiled a list of Linux filesystem resources \nat: http://jamesthornton.com/hotlist/linux-filesystems/.\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n", "msg_date": "Tue, 04 May 2004 14:38:47 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended File System Configuration" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nYour bug report form on the web doesn't work.\n\n\nThis is very slow:\n\nSELECT urls.id FROM urls WHERE\n(\n\turls.id <> ALL (SELECT html.urlid FROM html)\n);\n\n...while this is quite fast:\n\nSELECT urls.id FROM urls WHERE\n(\n\tNOT (EXISTS (SELECT html.urlid FROM tml WHERE\n\t(\n\t\thtml.urlid = urls.id\n\t)))\n);\n\nRegards\nTimo\n- --\nhttp://nentwig.biz/ (J2EE)\nhttp://nitwit.de/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQFAlm53cmRm71Um+e0RAkJuAKChd+6zoFesZfBY/cGRsSVagnJeswCeMD5s\n++Es8hVsFlUpkIIsRfrBp4Y=\n=STbS\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 03 May 2004 18:08:23 +0200", "msg_from": "Timo Nentwig <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in optimizer" }, { "msg_contents": "On Mon, May 03, 2004 at 18:08:23 +0200,\n Timo Nentwig <[email protected]> wrote:\n> \n> This is very slow:\n\nThis kind of question should be asked on the performance list.\n\n> \n> SELECT urls.id FROM urls WHERE\n> (\n> \turls.id <> ALL (SELECT html.urlid FROM html)\n> );\n> \n> ...while this is quite fast:\n\nYou didn't provide your postgres version or an explain analyze so it is hard\nto answer your question definitivly. Most likely you are using a pre 7.4\nversion which is know to be slow for IN (which is what the above probably\ngets translated to).\n\n> \n> SELECT urls.id FROM urls WHERE\n> (\n> \tNOT (EXISTS (SELECT html.urlid FROM tml WHERE\n> \t(\n> \t\thtml.urlid = urls.id\n> \t)))\n> );\n", "msg_date": "Tue, 4 May 2004 11:36:19 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in optimizer" }, { "msg_contents": "Timo Nentwig wrote:\n\n> This is very slow:\n> \n> SELECT urls.id FROM urls WHERE\n> (\n> urls.id <> ALL (SELECT html.urlid FROM html)\n> );\n> \n> ...while this is quite fast:\n> \n> SELECT urls.id FROM urls WHERE\n> (\n> NOT (EXISTS (SELECT html.urlid FROM tml WHERE\n> (\n> html.urlid = urls.id\n> )))\n> );\n\nAre you using the version 7.4.x ?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n", "msg_date": "Wed, 05 May 2004 19:31:01 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in optimizer" } ]
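The same anti-join can also be written as an outer join, which gives the planner the option of a merge or hash join instead of a per-row subplan; this reuses the thread's urls and html tables:

    SELECT urls.id
      FROM urls
      LEFT JOIN html ON html.urlid = urls.id
     WHERE html.urlid IS NULL;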
[ { "msg_contents": "Hi,\n\nI test a configuration where one table is divided in 256 sub-table.\nAnd I use a RULE to offer a single view to the data.\n\nFor INSERT I have create 256 rules like:\nCREATE RULE ndicti_000 AS ON INSERT TO ndict\n WHERE (NEW.word_id & 255) = 000 DO INSTEAD\n INSERT INTO ndict_000 VALUES( NEW.url_id, 000, NEW.intag);\nCREATE RULE ndicti_001 AS ON INSERT TO ndict\n WHERE (NEW.word_id & 255) = 001 DO INSTEAD\n INSERT INTO ndict_001 VALUES( NEW.url_id, 001, NEW.intag);\nAnd that works, a bit slow.\n\nI try to do:\nCREATE RULE ndicti AS ON INSERT TO ndict\n DO INSTEAD INSERT INTO 'ndict_' || (NEW.word_id & 255)\n VALUES( NEW.url_id, NEW.word_id, NEW.intag);\nI got an error on 'ndict_' .\nI did not found the right syntax.\n\nAny help is welcomed.\n\n\nCordialement,\nJean-Gérard Pailloncy\n", "msg_date": "Mon, 3 May 2004 22:58:28 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT RULE" }, { "msg_contents": "> I try to do:\n> CREATE RULE ndicti AS ON INSERT TO ndict\n> DO INSTEAD INSERT INTO 'ndict_' || (NEW.word_id & 255)\n> VALUES( NEW.url_id, NEW.word_id, NEW.intag);\n> I got an error on 'ndict_' .\n> I did not found the right syntax.\nIn fact I discover that\nSELECT * FROM / INSERT INTO table\ndoesn't accept function that returns the name of the table as table, \nbut only function that returns rows....\n\nI'm dead.\n\nDoes this feature, is possible or plan ?\nIs there a trick to do it ?\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Tue, 4 May 2004 12:13:26 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT RULE" }, { "msg_contents": "Pailloncy Jean-G�rard wrote:\n>> I try to do:\n>> CREATE RULE ndicti AS ON INSERT TO ndict\n>> DO INSTEAD INSERT INTO 'ndict_' || (NEW.word_id & 255)\n>> VALUES( NEW.url_id, NEW.word_id, NEW.intag);\n>> I got an error on 'ndict_' .\n>> I did not found the right syntax.\n> \n> In fact I discover that\n> SELECT * FROM / INSERT INTO table\n> doesn't accept function that returns the name of the table as table, but \n> only function that returns rows....\n> \n> I'm dead.\n> \n> Does this feature, is possible or plan ?\n> Is there a trick to do it ?\n\nYou could call a plpgsql function and inside that use EXECUTE (or use \npltcl or some other interpreted language).\n\nNot sure what you're doing will help you much though. Are you aware that \nyou can have partial indexes?\n\nCREATE INDEX i123 ON ndict WHERE (word_id & 255)=123;\n\nThat might be what you're after, but it's difficult to be sure without \nknowing what problem you're trying to solve.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 06 May 2004 19:10:40 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT RULE" } ]
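Richard's suggestion of a plpgsql function with EXECUTE might look roughly like the sketch below, using 7.x-style quoting (quotes inside the function body are doubled) and assuming the ndict_000 .. ndict_255 naming from the rules above:

    CREATE FUNCTION ndict_insert(integer, integer, integer) RETURNS integer AS '
    DECLARE
        suffix text;
    BEGIN
        -- pick the bucket table from word_id & 255, zero-padded to three digits
        suffix := to_char($2 & 255, ''FM000'');
        EXECUTE ''INSERT INTO ndict_'' || suffix
             || '' VALUES ('' || $1 || '', '' || $2 || '', '' || $3 || '')'';
        RETURN 1;   -- dummy result; the INSERT above is the point
    END;
    ' LANGUAGE 'plpgsql';

    -- the application then calls:  SELECT ndict_insert(url_id, word_id, intag);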
[ { "msg_contents": "I have a big table with some int fields. I frequently need to do \nqueries like:\n\nSELECT if2, count(*) FROM table WHERE if1 = 20 GROUP BY if2;\n\nThe problem is that this is slow and frequently requires a seqscan. I'd \nlike to cache the results in a second table and update the counts with \ntriggers, but this would a) require another UPDATE for each \nINSERT/UPDATE which would slow down adding and updating of data and b) \nproduce a large amount of dead rows for vacuum to clear out.\n\nIt would also be nice if this small table could be locked into the pg \ncache somehow. It doesn't need to store the data on disk because the \ncounts can be generated from scratch?\n\nSo what is the best solution to this problem? I'm sure it must come up \npretty often.\n", "msg_date": "Mon, 03 May 2004 22:24:02 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "cache table" }, { "msg_contents": "On Mon, 3 May 2004, Joseph Shraibman wrote:\n\n> I have a big table with some int fields. I frequently need to do \n> queries like:\n> \n> SELECT if2, count(*) FROM table WHERE if1 = 20 GROUP BY if2;\n> \n> The problem is that this is slow and frequently requires a seqscan. I'd \n> like to cache the results in a second table and update the counts with \n> triggers, but this would a) require another UPDATE for each \n> INSERT/UPDATE which would slow down adding and updating of data and b) \n> produce a large amount of dead rows for vacuum to clear out.\n> \n> It would also be nice if this small table could be locked into the pg \n> cache somehow. It doesn't need to store the data on disk because the \n> counts can be generated from scratch?\n\nI think you might be interested in materialized views. You could create \nthis as a materialized view which should be very fast to just select * \nfrom.\n\nWhile materialized views aren't a standard part of PostgreSQL just yet, \nthere is a working implementation available from Jonathan Gardner at:\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nIt's all implemented with plpgsql and is quite interesting to read \nthrough. IT has a nice tutorial methodology to it.\n\n", "msg_date": "Tue, 4 May 2004 07:52:12 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cache table" }, { "msg_contents": "scott.marlowe wrote:\n\n> I think you might be interested in materialized views. You could create \n> this as a materialized view which should be very fast to just select * \n> from.\n\nThat seems to be the count table I envisioned. It just hides the \ndetails for me. It still has the problems of an extra UPDATE every time \nthe data table is updated and generating a lot of dead tuples.\n", "msg_date": "Tue, 04 May 2004 11:27:53 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cache table" }, { "msg_contents": "Joseph Shraibman <[email protected]> writes:\n\n> scott.marlowe wrote:\n> \n> > I think you might be interested in materialized views. You could create this\n> > as a materialized view which should be very fast to just select * from.\n> \n> That seems to be the count table I envisioned. It just hides the details for\n> me. It still has the problems of an extra UPDATE every time the data table is\n> updated and generating a lot of dead tuples.\n\nThe dead tuples is only going to be a problem if you have lots of updates. 
If\nthat's the case then you're also going to have problems with contention. This\ntrigger will essentially serialize all inserts, deletes, updates at least\nwithin a group. If you use transactions with multiple such updates then you\nwill also risk creating deadlocks.\n\nBut I think these problems are fundamental to the problem you've described.\nKeeping denormalized aggregate data like this inherently creates contention on\nthe data and generates lots of old data. It's only really useful when you have\nfew updates and many many reads.\n\nIf you know more about the behaviour of the updates then there might be other\noptions. Like, do you need precise data or only approximate data? If\napproximate perhaps you could just do a periodic refresh of the denormalized\nview and use that.\n\nAre all the updates to the data you'll be querying coming from within the same\napplication context? In which case you can keep a cache locally in the\napplication and update it locally. I often do this when I have rarely updated\nor insert-only data, I just do a lazy cache using a perl hash or equivalent.\n\nIf you're really happy with losing the cache, and you don't need complex\ntransactions or care about serializing updates then you could use something\nlike memcached (http://www.danga.com/memcached/). That might be your best fit\nfor how you describe your requirements.\n\n-- \ngreg\n\n", "msg_date": "04 May 2004 14:14:15 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cache table" } ]
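A rough sketch of the trigger-maintained summary table discussed above, with made-up table names; as Greg points out, every write to the base table then pays for an extra UPDATE and the summary rows churn quickly, so it only wins when reads vastly outnumber writes:

    CREATE TABLE counts (if1 int NOT NULL, if2 int NOT NULL, n bigint NOT NULL,
                         PRIMARY KEY (if1, if2));

    CREATE FUNCTION counts_maint() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE counts SET n = n + 1 WHERE if1 = NEW.if1 AND if2 = NEW.if2;
            -- a real version must also insert an (if1, if2, 1) row when nothing matched
        ELSIF TG_OP = ''DELETE'' THEN
            UPDATE counts SET n = n - 1 WHERE if1 = OLD.if1 AND if2 = OLD.if2;
        END IF;
        RETURN NULL;   -- AFTER trigger, the return value is ignored
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER counts_maint_trg AFTER INSERT OR DELETE ON bigtable
        FOR EACH ROW EXECUTE PROCEDURE counts_maint();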
[ { "msg_contents": "Hi,\n\nI have query:\nselect pg_stat_get_numscans(76529669),\npg_stat_get_blocks_fetched(76529669), \npg_stat_get_blocks_hit(76529669);\n\nThe result is:\n pg_stat_get_numscans | pg_stat_get_blocks_fetched |\npg_stat_get_blocks_hit\n----------------------+----------------------------+------------------------\n 0 | 23617 | \n 23595\n(1 row)\n\nMy questions are:\n1. How can index disk blocks be requested (either \nfrom disk or cache) without index scan?\n2. If I want to check if an index is used after \npg_stat_reset(), how can I get it? By number of scans\nor block requests, or some other ways?\n\nThanks,\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nWin a $20,000 Career Makeover at Yahoo! HotJobs \nhttp://hotjobs.sweepstakes.yahoo.com/careermakeover \n", "msg_date": "Tue, 4 May 2004 11:44:16 -0700 (PDT)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "pg_stat" } ]
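The same counters are exposed through the pg_stat system views, which avoids feeding OIDs to the functions by hand; assuming the statistics collector is enabled, something like this lists indexes that have not been scanned since the counters were last reset:

    SELECT relname, indexrelname, idx_scan, idx_tup_read
      FROM pg_stat_user_indexes
     WHERE idx_scan = 0
     ORDER BY relname, indexrelname;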
[ { "msg_contents": "I mentioned this at the tail end of a long post in another thread, but\nI have been researching how to configure Postgres for a RAID 10 SAME \nconfiguration as described in the Oracle paper \"Optimal Storage \nConfiguration Made Easy\" \n(http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \nanyone delved into this before?\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n\n", "msg_date": "Tue, 04 May 2004 14:34:02 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Adapting Oracle S.A.M.E. Methodology for Postgres" } ]
[ { "msg_contents": "Hello all,\n\n \n\nWe are using Postgres 7.3 with JBoss 3.2.3 on a Linux Fedora 1.0 box.\n\n \n\nWhen I am looking at CPU activity with \"top\", I often see something like:\n\n \n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n\n14154 postgres 25 0 3592 3592 2924 R 99.1 0.3\n93:53 0 postmaster\n\n \n\nAt the same time, \"mpstat\" gives me something like:\n\n \n\n11:27:21 AM CPU %user %nice %system %idle intr/s\n\n11:27:21 AM all 2.99 0.00 18.94 78.08 105.94\n\n \n\nThe system is not visibly slow and response time is satisfactory. Sometimes,\nthe CPU usage drops to 1 or 2%, but not for long usually. Also I have\nchecked the number of open connections to Postgres and there are only 5\n(maximum is set to the default: 32).\n\n \n\nShould I be worried that Postgres is eating up 99% of my CPU??? Or is this\n*expected* behaviour?\n\n \n\nPlease note that I am a developer, not a system administrator, so maybe I\nmisunderstand the usage of \"top\" here.\n\n \n\nAny help will be appreciated.\n\n \n\nCyrille.\n\n\n\n\n\n\n\n\n\n\nHello all,\n \nWe are using Postgres 7.3 with JBoss 3.2.3 on a Linux Fedora\n1.0 box.\n \nWhen I am looking at CPU activity with “top”, I\noften see something like:\n \n  PID USER     PRI  NI \nSIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND\n14154 postgres  25   0  3592 3592 \n2924        R    99.1      0.3\n       93:53   0 postmaster\n \nAt the same time, “mpstat” gives me something\nlike:\n \n11:27:21 AM  CPU   %user   %nice\n%system   %idle    intr/s\n11:27:21 AM  all       2.99   \n  0.00   18.94         78.08   \n105.94\n \nThe system is not visibly slow and response time is\nsatisfactory. Sometimes, the CPU usage drops to 1 or 2%, but not for long\nusually. Also I have checked the number of open connections to Postgres and\nthere are only 5 (maximum is set to the default: 32).\n \nShould I be worried that Postgres is eating up 99% of my\nCPU??? Or is this *expected*\nbehaviour?\n \nPlease note that I am a developer, not a system\nadministrator, so maybe I misunderstand the usage of “top” here.\n \nAny help will be appreciated.\n \nCyrille.", "msg_date": "Wed, 5 May 2004 11:34:41 +1200", "msg_from": "\"Cyrille Bonnet\" <[email protected]>", "msg_from_op": true, "msg_subject": "very high CPU usage in \"top\", but not in \"mpstat\"" }, { "msg_contents": "\"Cyrille Bonnet\" <[email protected]> writes:\n> Should I be worried that Postgres is eating up 99% of my CPU??? Or is this\n> *expected* behaviour?\n\nIt's not expected, unless you are running some very long-running query.\n\nThe conflict between what top says and what mpstat says is strange; I\nwonder if you might have a buggy version of one of them? You should\nprobably check some other tools (try \"vmstat 1\" for instance) to see if\nyou can get a consensus on whether the CPU is maxed out or not ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 2004 09:03:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very high CPU usage in \"top\", but not in \"mpstat\" " }, { "msg_contents": "I'm guessing you have a 4 cpu box:\n1 99 percent busy process on a 4 way box == about 25% busy overall.\n\n\nOn May 5, 2004, at 6:03 AM, Tom Lane wrote:\n\n> \"Cyrille Bonnet\" <[email protected]> writes:\n>> Should I be worried that Postgres is eating up 99% of my CPU??? 
Or is \n>> this\n>> *expected* behaviour?\n>\n> It's not expected, unless you are running some very long-running query.\n>\n> The conflict between what top says and what mpstat says is strange; I\n> wonder if you might have a buggy version of one of them? You should\n> probably check some other tools (try \"vmstat 1\" for instance) to see if\n> you can get a consensus on whether the CPU is maxed out or not ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n", "msg_date": "Wed, 5 May 2004 11:13:11 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very high CPU usage in \"top\", but not in \"mpstat\" " } ]
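If one backend really is pinned, pg_stat_activity can show what that process (PID 14154 in the top output above) is executing; on 7.3 this needs stats_command_string = true in postgresql.conf, otherwise current_query is not populated:

    SELECT procpid, usename, current_query
      FROM pg_stat_activity
     WHERE procpid = 14154;    -- the PID reported by top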
[ { "msg_contents": "On Sat, 2004-06-05 at 11:55, Carlos Eduardo Smanioto wrote:\n> What's the case of bigger database PostgreSQL (so greate and amount of\n> registers) that they know???\n\n\nYou might want to fix the month on your system time.\n\nWith respect to how big PostgreSQL databases can get in practice, these\nare our two biggest implementations:\n\n- 0.5 Tb GIS database (this maybe upwards of 600-700Gb now, I didn't\ncheck)\n\n- 10 Gb OLTP system with 70 million rows and a typical working set of\n2-3 Gb.\n\n\nPostgres is definitely capable of handling large pretty databases with\nease. There are some narrow types of workloads that it doesn't do so\nwell on, but for many normal DBMS loads it scales quite well.\n\n\nj. andrew rogers\n\n\n", "msg_date": "05 May 2004 14:11:29 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the PostgreSQL" }, { "msg_contents": "On Sat, 5 Jun 2004, Carlos Eduardo Smanioto wrote:\n\n> Hello all,\n> \n> What's the case of bigger database PostgreSQL (so greate and amount of\n> registers) that they know???\n\nhttp://www.postgresql.org/docs/faqs/FAQ.html#4.5\n\n", "msg_date": "Wed, 5 May 2004 15:23:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the PostgreSQL" }, { "msg_contents": ">>What's the case of bigger database PostgreSQL (so greate and amount of\n>>registers) that they know???\n\nDidn't someone say that RedSheriff had a 10TB postgres database or \nsomething?\n\nChris\n\n", "msg_date": "Thu, 06 May 2004 09:48:30 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the PostgreSQL" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n>>> What's the case of bigger database PostgreSQL (so greate and amount of\n>>> registers) that they know???\n> \n> \n> Didn't someone say that RedSheriff had a 10TB postgres database or \n> something?\n\n From http://www.redsheriff.com/us/news/news_4_201.html\n\n\"According to the company, RedSheriff processes 10 billion records a \nmonth and the total amount of data managed is more than 32TB. Griffin \nsaid PostgreSQL has been in production for 12 months with not a single \ndatabase fault in that time �The stability of the database can not be \nquestioned. Needless to say, we are extremely happy.\"\n\nI think it's safe to assume this is not on a spare Dell 600SC though.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 06 May 2004 09:13:10 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the PostgreSQL" }, { "msg_contents": "Richard Huxton wrote:\n\n> Christopher Kings-Lynne wrote:\n> \n>>>> What's the case of bigger database PostgreSQL (so greate and amount of\n>>>> registers) that they know???\n>> Didn't someone say that RedSheriff had a 10TB postgres database or \n>> something?\n> From http://www.redsheriff.com/us/news/news_4_201.html\n> \n> \"According to the company, RedSheriff processes 10 billion records a \n> month and the total amount of data managed is more than 32TB. Griffin \n> said PostgreSQL has been in production for 12 months with not a single \n> database fault in that time �The stability of the database can not be \n> questioned. 
Needless to say, we are extremely happy.\"\n> \n> I think it's safe to assume this is not on a spare Dell 600SC though.\n> \n\nI think we should have a case study for that. And publish it on our regular \nnews/press contacts(Can't imagine the flame war on /...Umm Yummy..:-)). It would \nmake a lot of noise and gain visibility for us.\n\nOf course Red Sherrif need to co-operate and spell the details and/or moderate \nwhat we write, but all in all, 32TB database is uber-cool..:-)\n\n Shridhar\n", "msg_date": "Thu, 06 May 2004 14:13:25 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [OFF-TOPIC] - Known maximum size of the PostgreSQL" }, { "msg_contents": "Here's a question a little off-topic.\n\nWhat would a 32TB database hardware configuration look like. I'm assuming \n200GB hard-drives which would total 160 of them. Double that if you mirror \nthem.\n\nAm I correct?\n\nAt 04:13 AM 06/05/2004, you wrote:\n>Christopher Kings-Lynne wrote:\n>>>>What's the case of bigger database PostgreSQL (so greate and amount of\n>>>>registers) that they know???\n>>\n>>Didn't someone say that RedSheriff had a 10TB postgres database or something?\n>\n> From http://www.redsheriff.com/us/news/news_4_201.html\n>\n>\"According to the company, RedSheriff processes 10 billion records a month \n>and the total amount of data managed is more than 32TB. Griffin said \n>PostgreSQL has been in production for 12 months with not a single database \n>fault in that time \"The stability of the database can not be questioned. \n>Needless to say, we are extremely happy.\"\n>\n>I think it's safe to assume this is not on a spare Dell 600SC though.\n>\n>--\n> Richard Huxton\n> Archonet Ltd\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n\nDon Vaillancourt\nDirector of Software Development\n\nWEB IMPACT INC.\n416-815-2000 ext. 245\nemail: [email protected]\nweb: http://www.webimpact.com\n\n\n\nThis email message is intended only for the addressee(s)\nand contains information that may be confidential and/or\ncopyright. If you are not the intended recipient please\nnotify the sender by reply email and immediately delete\nthis email. Use, disclosure or reproduction of this email\nby anyone other than the intended recipient(s) is strictly\nprohibited. No representation is made that this email or\nany attachments are free of viruses. Virus scanning is\nrecommended and is the responsibility of the recipient.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHere's a question a little off-topic.\nWhat would a 32TB database hardware configuration look like.  I'm\nassuming 200GB hard-drives which would total 160 of them.  Double\nthat if you mirror them.\nAm I correct?\nAt 04:13 AM 06/05/2004, you wrote:\nChristopher Kings-Lynne\nwrote:\nWhat's\nthe case of bigger database PostgreSQL (so greate and amount of\nregisters) that they know???\nDidn't someone say that RedSheriff had a 10TB postgres database or\nsomething?\n From\nhttp://www.redsheriff.com/us/news/news_4_201.html\n\"According to the company, RedSheriff processes 10 billion records a\nmonth and the total amount of data managed is more than 32TB. Griffin\nsaid PostgreSQL has been in production for 12 months with not a single\ndatabase fault in that time ���The stability of the database can not be\nquestioned. 
Needless to say, we are extremely happy.\"\nI think it's safe to assume this is not on a spare Dell 600SC\nthough.\n-- \n  Richard Huxton\n  Archonet Ltd\n---------------------------(end of\nbroadcast)---------------------------\nTIP 7: don't forget to increase your free space map\nsettings\n\nDon\nVaillancourt\nDirector\nof Software Development\nWEB\nIMPACT INC.\n416-815-2000\next. 245\nemail: [email protected]\nweb:\nhttp://www.webimpact.com\n\n\nThis email message is intended only for the addressee(s) \nand contains information that may be confidential and/or \ncopyright.  If you are not the intended recipient please \nnotify the sender by reply email and immediately delete \nthis email. Use, disclosure or reproduction of this email \nby anyone other than the intended recipient(s) is strictly \nprohibited. No representation is made that this email or \nany attachments are free of viruses. Virus scanning is \nrecommended and is the responsibility of the recipient.", "msg_date": "Thu, 06 May 2004 09:47:08 -0400", "msg_from": "Don Vaillancourt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the" }, { "msg_contents": "On Thu, 6 May 2004, Shridhar Daithankar wrote:\n\n> Richard Huxton wrote:\n>\n> > Christopher Kings-Lynne wrote:\n> >\n> >>>> What's the case of bigger database PostgreSQL (so greate and amount of\n> >>>> registers) that they know???\n> >> Didn't someone say that RedSheriff had a 10TB postgres database or\n> >> something?\n> > From http://www.redsheriff.com/us/news/news_4_201.html\n> >\n> > \"According to the company, RedSheriff processes 10 billion records a\n> > month and the total amount of data managed is more than 32TB. Griffin\n> > said PostgreSQL has been in production for 12 months with not a single\n> > database fault in that time “The stability of the database can not be\n> > questioned. Needless to say, we are extremely happy.\"\n> >\n> > I think it's safe to assume this is not on a spare Dell 600SC though.\n> >\n>\n> I think we should have a case study for that. And publish it on our regular\n> news/press contacts(Can't imagine the flame war on /...Umm Yummy..:-)). It would\n> make a lot of noise and gain visibility for us.\n>\n> Of course Red Sherrif need to co-operate and spell the details and/or moderate\n> what we write, but all in all, 32TB database is uber-cool..:-)\n\nI've tried contacting them. They will not return my phone calls or emails.\n\nGavin\n", "msg_date": "Fri, 7 May 2004 08:29:11 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [OFF-TOPIC] - Known maximum size of" }, { "msg_contents": "Don Vaillancourt wrote:\n\n> \n> Here's a question a little off-topic.\n> \n> What would a 32TB database hardware configuration look like. I'm \n> assuming 200GB hard-drives which would total 160 of them. Double that \n> if you mirror them.\n> \n> Am I correct?\n\nWhy do you have to mirror them ? 
Usually a SAN make data redundancy\nusing a RAID 4 or 5, this depend if you need read performances or\nwrite performances, in the case of Red Sherif I guess that guys are\nusing RAID 50 ( 0 + 5 ) sets so what you \"waste\" is a disk for each\nset.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n", "msg_date": "Sat, 08 May 2004 01:35:44 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OFF-TOPIC] - Known maximum size of the" }, { "msg_contents": "Hello all,\n\nWhat's the case of bigger database PostgreSQL (so greate and amount of\nregisters) that they know???\n\nThanks,\nCarlos Eduardo Smanioto\n\n", "msg_date": "Sat, 5 Jun 2004 15:55:32 -0300", "msg_from": "\"Carlos Eduardo Smanioto\" <[email protected]>", "msg_from_op": false, "msg_subject": "[OFF-TOPIC] - Known maximum size of the PostgreSQL Database" } ]
[ { "msg_contents": "All,\n\nSince I have not seen any follow up post,\nI am wondering if my question is not clear or\nwhat.\n\nAnyway, my postgres version is 7.3.2, and I\nwant to know:\n1. How to determine if some of indexes are used by\nany queries, like Oracle's:\nalter index my_index monitoring usage\n2. The relationship between \n\"number of index scans done\" and \n\"Number of disk block fetch requests for index\"\nas shown in the query.\n\nThank you!\n \n--- Litao Wu <[email protected]> wrote:\n> Hi,\n> \n> I have query:\n> select pg_stat_get_numscans(76529669),\n> pg_stat_get_blocks_fetched(76529669), \n> pg_stat_get_blocks_hit(76529669);\n> \n> The result is:\n> pg_stat_get_numscans | pg_stat_get_blocks_fetched |\n> pg_stat_get_blocks_hit\n>\n----------------------+----------------------------+------------------------\n> 0 | 23617 |\n> \n> 23595\n> (1 row)\n> \n> My questions are:\n> 1. How can index disk blocks be requested (either \n> from disk or cache) without index scan?\n> 2. If I want to check if an index is used after \n> pg_stat_reset(), how can I get it? By number of\n> scans\n> or block requests, or some other ways?\n> \n> Thanks,\n> \n> \n> \t\n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Win a $20,000 Career Makeover at Yahoo! HotJobs \n> http://hotjobs.sweepstakes.yahoo.com/careermakeover \n> \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nWin a $20,000 Career Makeover at Yahoo! HotJobs \nhttp://hotjobs.sweepstakes.yahoo.com/careermakeover \n", "msg_date": "Thu, 6 May 2004 08:44:43 -0700 (PDT)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_stat" } ]
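On the second question: pg_stat_get_blocks_fetched() counts every block request for the index and pg_stat_get_blocks_hit() the requests satisfied from the shared buffer cache, so the difference is the number of actual disk reads. Index pages are also touched when INSERT, UPDATE or VACUUM maintain the index, which is how block counts can grow while the scan count stays at zero. For the index above:

    -- 23617 fetched - 23595 hit = 22 blocks physically read
    SELECT pg_stat_get_blocks_fetched(76529669)
         - pg_stat_get_blocks_hit(76529669) AS blocks_read_from_disk;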
[ { "msg_contents": "Hi,\n\n I am a newbie here and just starting to use postgresql. My\nproblems is how to tune up my server because it its too slow.\n\nWe just ported from DBF to postgresql.\n\n \n\nThis is my PC specs: P4, 512Ram, Linux 9\n\n \n\nBecause I am working in a statistical organization we have a very large data\nvolume\n\nThese are my data:\n\n \n\nTable 1 with 60 million data but only with 10 fields\n\nTable 2 with 30 million data with 15 fields\n\nTable 3 with 30 million data with 10 fields\n\n \n\nI will only use this server for querying ... I already read and apply those\narticles found in the archives section but still the performance is not\ngood.\n\nI am planning to add another 512 RAM .Another question is how to calculate\nshared_buffer size ..\n\n \n\nThanks a lot and hoping for your kind answers ..\n\n \n\nMichael Puncia\n\nPhilippines\n\n \n\n \n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n            I\nam a newbie here and just starting to use postgresql. My problems is how to\ntune up my server because it its too slow.\nWe just ported from DBF to postgresql.\n \nThis is my PC specs: P4, 512Ram, Linux 9\n \nBecause I am working in a statistical organization we have a\nvery large data volume\nThese are my data:\n \nTable 1 with 60 million data but only with 10 fields\nTable 2 with 30 million data with 15 fields\nTable 3 with 30 million data with 10 fields\n \nI will only use this server for querying ….. I already\nread and apply those articles found in the archives section but still the\nperformance is not good.\nI am planning to add another 512 RAM …Another question\nis how to calculate shared_buffer size ..\n \nThanks a lot and hoping for your kind answers ..\n \nMichael Puncia\nPhilippines", "msg_date": "Fri, 7 May 2004 19:07:23 +0800", "msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help how to tune-up my Database" }, { "msg_contents": "On Fri, 7 May 2004, Michael Ryan S. Puncia wrote:\n\n> Hi,\n> \n> I am a newbie here and just starting to use postgresql. My\n> problems is how to tune up my server because it its too slow.\n\nFirst, read this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n> This is my PC specs: P4, 512Ram, Linux 9\n\nget more ram.\n\nHard Drives: interface, how many, RAID???\n\nFor a mostly read database IDEs are pretty good. Having multiple drives \nin a RAID-5 or RAID1+0 works well on a mostly read database too. Keep the \nstripe size small is setting up a RAID array for a database.\n\n> Because I am working in a statistical organization we have a very large data\n> volume\n> \n> These are my data:\n> \n> \n> \n> Table 1 with 60 million data but only with 10 fields\n> \n> Table 2 with 30 million data with 15 fields\n> \n> Table 3 with 30 million data with 10 fields\n\nThat's not really that big, but it's big enough you have to make sure your \nserver is tuned properly.\n\n> I will only use this server for querying ... I already read and apply those\n> articles found in the archives section but still the performance is not\n> good.\n> \n> I am planning to add another 512 RAM .Another question is how to calculate\n> shared_buffer size ..\n\nI'm assuming you've recently vacuumed full and analyzed your database...\n\nShared buffers should probably be between 1000 and 10000 on about 98% of \nall installations. Setting it higher than 25% of memory is usually a bad \nidea. 
Since they're in 8k blocks (unless you compiled with a customer \nblock size, you'd know if you did, it's not something you can accidentally \ndo by leaning on the wrong switch...) you probably want about 10000 blocks \nor so to start, which will give you about 80 megs of shared buffer.\n\nPostgreSQL doesn't really cache as well as the kernel, so it's better to \nleave more memory available for kernel cache than you allocate to buffer \ncache. On a machine with only 512Meg, I'm guessing you'll get about 128 \nto 200 megs of kernel cache if you're only running postgresql and you have \nit set to 10000 buffers.\n\nThe important things to check / set are things lik effective_cache_size. \nIt too is measured in 8k blocks, and reflects the approximate amount of \nkernel cache being dedicated to postgresql. assuming a single service \npostgresql only box, that will be the number that a server that's been up \nfor a while shows under top like so:\n\n 9:50am up 12:16, 4 users, load average: 0.00, 0.00, 0.00 \n104 processes: 102 sleeping, 2 running, 0 zombie, 0 stopped\nCPU states: 0.7% user, 0.3% system, 0.0% nice, 1.7% idle\nMem: 512924K av, 499248K used, 13676K free, 0K shrd, 54856K buff\nSwap: 2048248K av, 5860K used, 2042388K free 229572K cached\n\nthe 229572k cached entry shows about 230 megs. divided by 8192 we get \nabout 28000.\n\nsort_mem might do with a small bump, especially if you're only handling a \nfew connections at a time. Be careful, it's per sort, and measured in \nmegs, so it's easy for folks to set it too high and make their machine \nstart flushing too much kernel cache, which will slow down the other \nbackends that have to go to disk for data.\n\nA good starting point for testing is anywhere from 8192 to 32768. 32768 \nis 32 megs, which can starve a machine as small as yours if there are a \ncouple of queries each running a couple of sorts on large sets at the same \ntime.\n\nLastly, using explain analyze <your query here> you can see if postgresql \nis making a bad plan choice. compared estimated rows to actual rows. \nLook for things like nested loops being run on what the planner thinks \nwill be 80 rows but is, in fact, 8000 rows.\n\nYou can change random page cost to change the tendency of the server to \nfavor seq scans to index scans. Lower = greater tendency towards index \nscans. the default is 4, but most production servers with enough memory \nto cache most of their data will run well on a setting of 1.2 to 1.4. My \ndual 2800 with 2 gig ram runs quite well at 1.3 to 1.4. \n\nYou can also change the settings to random_page_cost, as well as turning \noff options to the planner with the following env vars:\n\nenable_hashagg\nenable_hashjoin\nenable_indexscan\nenable_mergejoin\nenable_nestloop\nenable_seqscan\nenable_sort\nenable_tidscan\n\nThey are all on by default, and shouldn't really be turned off by default \nfor the most part. but for an individual session to figure out if the \nquery planner is making the right plan you can set them to off to see if \nusing another plan works better. 
\n\nso, if you've got a nested loop running over 80000 rows that the planner \nthought was gonna be 80 rows, you can force it to stop using the nested \nloop for your session with:\n\nset enable_nestloop=off;\n\nand use explain analyze to see if it runs faster.\n\nYou can set effective_cache_size and sort_mem on the fly for a single \nconnection, or set them in postgresql.conf and restart or reload to make a \nchange in the default.\n\nshared_buffers is set on postgresql startup, and can't be changed without \nrestarting the database. Reloading won't do it.\n\n\n\n", "msg_date": "Fri, 7 May 2004 10:00:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help how to tune-up my Database" }, { "msg_contents": "scott.marlowe wrote:\n> sort_mem might do with a small bump, especially if you're only handling a \n> few connections at a time. Be careful, it's per sort, and measured in \n> megs, so it's easy for folks to set it too high and make their machine \n> start flushing too much kernel cache, which will slow down the other \n> backends that have to go to disk for data.\n<snip>\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n(under \"Memory\"), it says that sort_mem is set in KB. Is this document \nwrong (or outdated)?\n", "msg_date": "Fri, 07 May 2004 16:47:09 GMT", "msg_from": "Bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help how to tune-up my Database" }, { "msg_contents": "Sorry about that, I meant kbytes, not megs. My point being it's NOT \nmeasured in 8k blocks, like a lot of other settings. sorry for the mixup.\n\nOn Fri, 7 May 2004, Bricklen wrote:\n\n> scott.marlowe wrote:\n> > sort_mem might do with a small bump, especially if you're only handling a \n> > few connections at a time. Be careful, it's per sort, and measured in \n> > megs, so it's easy for folks to set it too high and make their machine \n> > start flushing too much kernel cache, which will slow down the other \n> > backends that have to go to disk for data.\n> <snip>\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> (under \"Memory\"), it says that sort_mem is set in KB. Is this document \n> wrong (or outdated)?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n", "msg_date": "Mon, 10 May 2004 14:23:20 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help how to tune-up my Database" }, { "msg_contents": "scott.marlowe wrote:\n\n> Sorry about that, I meant kbytes, not megs. My point being it's NOT \n> measured in 8k blocks, like a lot of other settings. sorry for the mixup.\n> \nNo worries, I just wanted to sort that out for my own benefit, and \nanyone else who may not have caught that.\n", "msg_date": "Mon, 10 May 2004 20:37:22 GMT", "msg_from": "Bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help how to tune-up my Database" } ]
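A compressed version of the experiment Scott walks through, using session-level SETs so nothing changes for other backends; the table and column names stand in for one of the poster's three tables, and the values are taken from the figures quoted above:

    SET effective_cache_size = 28000;   -- ~229 MB of kernel cache / 8 KB blocks
    SET sort_mem = 16384;               -- 16 MB per sort, this session only
    SET random_page_cost = 1.4;

    EXPLAIN ANALYZE SELECT count(*) FROM table1 WHERE field1 = 42;

    SET enable_seqscan = off;           -- steer the planner away from a seq scan
    EXPLAIN ANALYZE SELECT count(*) FROM table1 WHERE field1 = 42;  -- better or worse?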
[ { "msg_contents": "Hi.\n\n I´m new here and i´m not sure if this is the right email to solve my problem.\n\n Well, i have a very large database, with vary tables and very registers. Every day, too many operations are perfomed in that DB, with queries that insert, delete and update. Once a week some statistics are collected using vacuum analyze.\n \n The problem is after a period of time (one month, i think), the queries takes too much time to perform. A simple update can take 10 seconds or more to perform.\n\n If i make a backup, drop and recreate the DB, everything goes back normal.\n\n Could anyone give me any guidance?\n\n\n\n\n\n\n    Hi.\n \n    I´m new here and \ni´m not sure if this is the right email to solve my problem.\n \n    Well, i have a \nvery large database, with vary tables and very registers. Every day, too \nmany operations are perfomed in that DB, with queries that insert, delete and \nupdate. Once a week some statistics are collected using vacuum \nanalyze.\n    \n    The problem is \nafter a period of time (one month, i think), the queries takes too much \ntime to perform. A simple update can take 10 seconds or more to \nperform.\n \n    If i make a \nbackup, drop and recreate the DB, everything goes back \nnormal.\n \n    Could \nanyone give me any guidance?", "msg_date": "Mon, 10 May 2004 11:00:43 -0300", "msg_from": "\"Anderson Boechat Lopes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why queries takes too much time to execute?" }, { "msg_contents": "Anderson Boechat Lopes wrote:\n> Hi.\n> \n> I�m new here and i�m not sure if this is the right email to solve my \n> problem.\n> \n> Well, i have a very large database, with vary tables and very \n> registers. Every day, too many operations are perfomed in that DB, with \n> queries that insert, delete and update. Once a week some statistics are \n> collected using vacuum analyze.\n\ni guess you need to run it much more frequently than that. Thought you haven't \ngiven actual size of data etc., once or twice per day should be much better.\n> \n> The problem is after a period of time (one month, i think), the \n> queries takes too much time to perform. A simple update can take 10 \n> seconds or more to perform.\n\nYou need to vacuum full once in a while and setup FSM parameters correctly.\n> \n> If i make a backup, drop and recreate the DB, everything goes back \n> normal.\n> \n> Could anyone give me any guidance?\n\nCheck following for basic performance tuning\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nHTH\n\n Shridhar\n\n", "msg_date": "Mon, 10 May 2004 19:51:47 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why queries takes too much time to execute?" }, { "msg_contents": "> Well, i have a very large database, with vary tables and very \n> registers. Every day, too many operations are perfomed in that DB, with \n> queries that insert, delete and update. Once a week some statistics are \n> collected using vacuum analyze.\n\nHave vacuum analyze running once an HOUR if it's very busy. If you are \nusing 7.4, run the pg_autovacuum daemon that's in contrib/pg_autovacuum.\n\n> The problem is after a period of time (one month, i think), the \n> queries takes too much time to perform. 
A simple update can take 10 \n> seconds or more to perform.\n\nIf running vacuum analyze once an hour doesn't help, try running a \nvacuum full once a week or something to fix the problem.\n\nAlso, try upgrading to 7.4 which has indexes that won't suffer from bloat.\n\nChris\n\n", "msg_date": "Mon, 10 May 2004 22:24:24 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why queries takes too much time to execute?" }, { "msg_contents": " Hum... now i think i´m beginning to understand.\n\n The vacuum analyse is recommended to perform at least every day, after\nadding or deleting a large number of records, and not vacuum full analyse.\nI´ve performed the vacuum full analyse every day and after some time i´ve\nnoticed the database was corrupted. I couldn´t select anything any more.\n Do you think if i perform vacuum analyse once a day and perform vacuum\nfull analyse once a week, i will get to fix this problem?\n\n Thanks for help me, folks.\n\n PS: Sorry for my grammar mistakes. My writting is not so good. :)\n\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: \"Anderson Boechat Lopes\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, May 10, 2004 11:21 AM\nSubject: Re: [PERFORM] Why queries takes too much time to execute?\n\n\n> Anderson Boechat Lopes wrote:\n> > Hi.\n> >\n> > I´m new here and i´m not sure if this is the right email to solve my\n> > problem.\n> >\n> > Well, i have a very large database, with vary tables and very\n> > registers. Every day, too many operations are perfomed in that DB, with\n> > queries that insert, delete and update. Once a week some statistics are\n> > collected using vacuum analyze.\n>\n> i guess you need to run it much more frequently than that. Thought you\nhaven't\n> given actual size of data etc., once or twice per day should be much\nbetter.\n> >\n> > The problem is after a period of time (one month, i think), the\n> > queries takes too much time to perform. A simple update can take 10\n> > seconds or more to perform.\n>\n> You need to vacuum full once in a while and setup FSM parameters\ncorrectly.\n> >\n> > If i make a backup, drop and recreate the DB, everything goes back\n> > normal.\n> >\n> > Could anyone give me any guidance?\n>\n> Check following for basic performance tuning\n>\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n>\n> HTH\n>\n> Shridhar\n>\n>\n\n", "msg_date": "Mon, 10 May 2004 14:36:39 -0300", "msg_from": "\"Anderson Boechat Lopes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why queries takes too much time to execute?" }, { "msg_contents": "[email protected] (\"Anderson Boechat Lopes\") writes:\n> I´m new here and i´m not sure if this is the right email to\n> solve my problem.\n\nThis should be OK...\n\n> Well, i have a very large database, with vary tables and very\n> registers. Every day, too many operations are perfomed in that DB,\n> with queries that insert, delete and update.
Once a week some\n> statistics are collected using vacuum analyze.\n>\n> The problem is after a period of time (one month, i think), the\n> queries takes too much time to perform. A simple update can take 10\n> seconds or more to perform.\n\nIt seems fairly likely that two effects are coming in...\n\n-> The tables that are being updated have lots of dead tuples.\n\n-> The vacuums aren't doing much good because the number of dead\n tuples is so large that you blow out the FSM (Free Space Map), and\n thus they can't free up space.\n\n-> Another possibility is that if some tables shrink to small size,\n and build up to large size (we see this with the _rserv_log_1_\n and _rserv_log_2_ tables used by the eRServ replication system),\n the statistics may need to be updated a LOT more often.\n\nYou might want to consider running VACUUM a whole lot more often than\nonce a week. If there is any regular time that the system isn't\nterribly busy, you might want to vacuum some or all tables at that\ntime.\n\npg_autovacuum might be helpful; it will automatically do vacuums on\ntables when they have been updated heavily.\n\nThere may be more to your problem, but VACUUMing more would allow us\nto get rid of \"too many dead tuples around\" as a cause.\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://cbbrowne.com/info/x.html\nWould-be National Mottos:\nUSA: \"There oughta' be a law!\"\n", "msg_date": "Mon, 10 May 2004 14:11:21 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why queries takes too much time to execute?" }, { "msg_contents": "On Mon, 10 May 2004, Anderson Boechat Lopes wrote:\n\n> Hum... now i think i´m beginning to understand.\n> \n> The vacuum analyse is recommended to perform at least every day, after\n> adding or deleting a large number of records, and not vacuum full analyse.\n> I´ve performed the vacuum full analyse every day and after some time i´ve\n> noticed the database was corrupted. I couldn´t select anything any more.\n\nHold it right there, full stop.\n\nIf you've got corruption you've either found a rare corner case in \npostgresql (unlikely, corruption is not usually a big problem for \npostgresql) OR you have bad hardware. Test your RAM, CPUs, and hard \ndrives before going any further. Data corruption, 99% of the time, is \nnot the fault of postgresql but the fault of the hardware.\n\n\n", "msg_date": "Mon, 10 May 2004 13:59:12 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why queries takes too much time to execute?" } ]
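A concrete version of the advice in this thread; the FSM values are only illustrative starting points and belong in postgresql.conf (they take effect on restart), while the plain VACUUM ANALYZE can simply be scheduled nightly, or hourly on busy tables, instead of weekly:

    -- postgresql.conf (restart required), sized so the free space map can
    -- track the pages freed between vacuums (values here are only a guess):
    --   max_fsm_relations = 1000
    --   max_fsm_pages     = 100000

    VACUUM ANALYZE;        -- frequent, lightweight maintenance
    VACUUM FULL ANALYZE;   -- occasional, to shrink tables that have already bloated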
[ { "msg_contents": "My company is developing a PostgreSQL application. We're using 7.3.4\nbut will soon upgrade to 7.4.x. Our OS is RedHat 9. Our production\nmachines have 512 MB RAM and IDE disks. So far we've been using\ndefault configuration settings, but I have started to examine\nperformance and to modify these settings.\n\nOur typical transaction involves 5-10 SELECT, INSERT or UPDATEs,\n(usually 1/2 SELECT and the remainder a mixture of INSERT and UPDATE).\nThere are a few aggregation queries which need to scan an entire\ntable. We observed highly uneven performance for the small\ntransactions. A transaction usually runs in under 100 msec, but we\nwould see spikes as high as 40,000 msec. These spikes occurred\nregularly, every 4-5 minutes, and I speculated that checkpointing\nmight be the issue.\n\nI created a test case, based on a single table:\n\n create table test( \n id int not null, \n count int not null, \n filler varchar(200),\n primary key(id))\n\nI loaded a database with 1,000,000 rows, with the filler column always\nfilled with 200 characters.\n\nI then ran a test in which a random row was selected, and the count\ncolumn incremented. Each transaction contained ten such updates. In\nthis test, I set \n\n shared_buffers = 2000\n checkpoint_segments = 40\n checkpoint_timeout = 600\n wal_debug = 1\n\nI set checkpoint_segments high because I wanted to see whether the\nspikes correlated with checkpoints.\n\nMost transactions completed in under 60 msec. Approximately every 10th\ntransaction, the time went up to 500-600 msec, (which is puzzling, but\nnot my major concern). I did see a spike every 10 minutes, in which\ntransaction time goes up to 5000-8000 msec. The spikes were correlated\nwith checkpoint activity, occurring slightly before a log entry that \nlooks like this:\n\n 2004-05-09 16:34:19 LOG: INSERT @ 2/C2A0F628: prev 2/C2A0F5EC;\n xprev 0/0; xid 0: XLOG - checkpoint: redo 2/C2984D4C; undo 0/0; \n sui 36; xid 1369741; oid 6321782; online\n\nQuestions:\n\n1. Can someone provide an overview of checkpoint processing, to help\nme understand the performance issues?\n\n2. Is the spike due to the checkpoint process keeping the disk busy?\nOr is there some locking involved that blocks my application until the\ncheckpoint completes?\n\n3. The spikes are quite problematic for us. What can I do to minimize\nthe impact of checkpointing on my application? I understand how\ncheckpoint_segments and checkpoint_timeout determine when a checkpoint\noccurs; what can I do to lessen the impact of a checkpoint?\n\n4. I understand that a \"background writer\" is being contemplated for\n7.5. Will that replace or augment the checkpoint process? Any\ncomments on how that work will apply to my problem would be\nappreciated. I wouldn't mind seeing the average performance,\n(without the spikes) go up -- let's say -- 10%, in exchange for\nmore uniform performance. These spikes are a real problem.\n\nJack Orenstein\n\n----------------------------------------------------------------\nThis message was sent using IMP, the Internet Messaging Program.\n", "msg_date": "Mon, 10 May 2004 14:52:09 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "[email protected] wrote:\n> 4. I understand that a \"background writer\" is being contemplated for\n> 7.5. Will that replace or augment the checkpoint process? Any\n> comments on how that work will apply to my problem would be\n> appreciated. 
I wouldn't mind seeing the average performance,\n> (without the spikes) go up -- let's say -- 10%, in exchange for\n> more uniform performance. These spikes are a real problem.\n\nThe background writer is designed to address your specific problem. We\nwill stil checkpoint, but the spike should be greatly minimized.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 10 May 2004 15:09:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Bruce Momjian wrote:\n> [email protected] wrote:\n> \n>>4. I understand that a \"background writer\" is being contemplated for\n>>7.5. Will that replace or augment the checkpoint process? Any\n>>comments on how that work will apply to my problem would be\n>>appreciated. I wouldn't mind seeing the average performance,\n>>(without the spikes) go up -- let's say -- 10%, in exchange for\n>>more uniform performance. These spikes are a real problem.\n> \n> \n> The background writer is designed to address your specific problem. We\n> will stil checkpoint, but the spike should be greatly minimized.\n> \n\nThanks. Do you know when 7.5 is expected to be released?\n\nUntil then, is a workaround known? Also, are the delays I'm seeing out of the ordinary?\nI'm looking at one case in which two successive transactions, each updating a handful of\nrecords, take 26 and 18 *seconds* (not msec) to complete. These transactions normally complete\nin under 30 msec.\n\nJack Orenstein\n\n", "msg_date": "Mon, 10 May 2004 22:02:29 -0400", "msg_from": "Jack Orenstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Jack Orenstein wrote:\n> Bruce Momjian wrote:\n> > [email protected] wrote:\n> > \n> >>4. I understand that a \"background writer\" is being contemplated for\n> >>7.5. Will that replace or augment the checkpoint process? Any\n> >>comments on how that work will apply to my problem would be\n> >>appreciated. I wouldn't mind seeing the average performance,\n> >>(without the spikes) go up -- let's say -- 10%, in exchange for\n> >>more uniform performance. These spikes are a real problem.\n> > \n> > \n> > The background writer is designed to address your specific problem. We\n> > will stil checkpoint, but the spike should be greatly minimized.\n> > \n> \n> Thanks. Do you know when 7.5 is expected to be released?\n\n3-6 months.\n\n> Until then, is a workaround known? Also, are the delays I'm seeing out of the ordinary?\n> I'm looking at one case in which two successive transactions, each updating a handful of\n> records, take 26 and 18 *seconds* (not msec) to complete. These transactions normally complete\n> in under 30 msec.\n\nWow. Others might know the answer to that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 10 May 2004 22:33:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Jack Orenstein <[email protected]> writes:\n> I'm looking at one case in which two successive transactions, each\n> updating a handful of records, take 26 and 18 *seconds* (not msec) to\n> complete. These transactions normally complete in under 30 msec.\n\nI've seen installations in which it seemed that the \"normal\" query load\nwas close to saturating the available disk bandwidth, and the extra load\nimposed by a background VACUUM just pushed the entire system's response\ntime over a cliff. In an installation that has I/O capacity to spare,\na VACUUM doesn't really hurt foreground query response at all.\n\nI suspect that the same observations hold true for checkpoints, though\nI haven't specifically seen an installation suffering from that effect.\n\nAlready-committed changes for 7.5 include a background writer, which\nbasically will \"trickle\" out dirty pages between checkpoints, thereby\nhopefully reducing the volume of I/O forced at a checkpoint. We have\nalso got code in place that throttles the rate of I/O requests during\nVACUUM. It seems like it might be useful to similarly throttle the I/O\nrequest rate during a CHECKPOINT, though I'm not sure if there'd be any\nbad side effects from lengthening the elapsed time for a checkpoint.\n(Jan, any thoughts here?)\n\nNone of this is necessarily going to fix matters for an installation\nthat has no spare I/O capacity, though. And from the numbers you're\nquoting I fear you may be in that category. \"Buy faster disks\" may\nbe the only answer ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 May 2004 23:23:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints " }, { "msg_contents": "\n> \n> Jack Orenstein <[email protected]> writes:\n> > I'm looking at one case in which two successive transactions, each\n> > updating a handful of records, take 26 and 18 *seconds* (not msec) to\n> > complete. These transactions normally complete in under 30 msec.\n...\n> None of this is necessarily going to fix matters for an installation\n> that has no spare I/O capacity, though. And from the numbers you're\n> quoting I fear you may be in that category. \"Buy faster disks\" may\n> be the only answer ...\n> \n\nI had a computer once that had an out-of-the-box hard drive configuration\nthat provided horrible disk performance. I found a tutorial at O'Reilly\nthat explained how to use hdparm to dramatically speed up disk performance\non Linux. 
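The exact flags depend on the drive and chipset, so treat this strictly as a sketch, with the device name as an example only:

    /sbin/hdparm /dev/hda               # show current settings; look for using_dma = 0
    /sbin/hdparm -d1 -c1 -u1 /dev/hda   # turn on DMA, 32-bit I/O and IRQ unmasking
    /sbin/hdparm -tT /dev/hda           # rough before/after read-throughput check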
I've noticed on other computers I've set up recently that hdparm\nseems to be used by default out of the box to give good performance.\n\nMaybe your computer is using all of it's I/O capacity because it's using PIO\nmode or some other non-optimal method of accessing the disk.\n\nJust a suggestion, I hope it helps,\n\nMatthew Nuzum\t\t| ISPs: Make $200 - $5,000 per referral by\nwww.followers.net\t\t| recomending Elite CMS to your customers!\[email protected]\t| http://www.followers.net/isp\n\n", "msg_date": "Tue, 11 May 2004 09:35:18 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> If we would combine the background writer and the checkpointer,\n\n... which in fact is on my agenda of things to do ...\n\n> then a \n> \"checkpoint flush\" could actually be implemented as a temporary change \n> in that activity that basically is done by not reevaluating the list of \n> to be flushed blocks any more and switching to a constant amount of \n> blocks flushed per cycle. When that list get's empty, the checkpoint \n> flush is done, the checkpoint can complete and the background writer \n> resumes normal business.\n\nSounds like a plan. I'll do it that way. However, we might want to\nhave different configuration settings controlling the write rate during\ncheckpoint and the rate during normal background writing --- what do you\nthink?\n\nAlso, presumably a shutdown checkpoint should just whomp out the data\nwithout any delays. We can't afford to wait around and risk having\ninit decide we took too long.\n\n>> None of this is necessarily going to fix matters for an installation\n>> that has no spare I/O capacity, though.\n\n> As a matter of fact, the background writer increases the overall IO. It \n> writes buffers that possibly get modified again before a checkpoint or \n> their replacement requires them to be finally written. So if there is no \n> spare IO bandwidth, it makes things worse.\n\nRight, the trickle writes could be wasted effort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 May 2004 10:41:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints " }, { "msg_contents": "Matthew Nuzum wrote:\n>>Jack Orenstein <[email protected]> writes:\n>>\n>>>I'm looking at one case in which two successive transactions, each\n>>>updating a handful of records, take 26 and 18 *seconds* (not msec) to\n>>>complete. These transactions normally complete in under 30 msec.\n\n>>None of this is necessarily going to fix matters for an installation\n>>that has no spare I/O capacity, though. And from the numbers you're\n>>quoting I fear you may be in that category. \"Buy faster disks\" may\n>>be the only answer ...\n>>\n\n> I had a computer once that had an out-of-the-box hard drive configuration\n> that provided horrible disk performance. I found a tutorial at O'Reilly\n> that explained how to use hdparm to dramatically speed up disk performance\n> on Linux. I've noticed on other computers I've set up recently that hdparm\n> seems to be used by default out of the box to give good performance.\n> \n> Maybe your computer is using all of it's I/O capacity because it's using PIO\n> mode or some other non-optimal method of accessing the disk.\n\nThere's certainly some scope there. 
I have an SGI Octane whos SCSI 2 \ndisks were set-up by default with no write buffer and CTQ depth of zero \n:/ IDE drivers in Linux maybe not detecting your IDE chipset correctly \nand stepping down, however unlikely there maybe something odd going on \nbut you could check hdparm out. Ensure correct cables too, and the \naren't crushed or twisted too bad.... I digress...\n\nAssuming you're running with optimal schema and index design (ie you're \nnot doing extra work unnecessarily), and your backend has \nbetter-then-default config options set-up (plenty of tips around here), \nthen disk arrangement is critical to smoothing the ride.\n\nTaking things to a relative extreme, we implemented a set-up with issues \nsimilar sounding to yours. It was resolved by first optimising \neverything but hardware, then finally optimising hardware. This served \nus because it meant we squeezed as much out of the available hardware, \nbefore finally throwing more at it, getting us the best possible returns \n(plus further post optimisation on the new hardware).\n\nFirst tip would to take your pg_xlog and put it on another disk (and \nchannel). Next if you're running a journalled fs, get that journal off \nonto another disk (and channel). Finally, get as many disks for the data \nstore and spread the load across spindles. You're aiming here to \ndistribute the contention and disk I/O more evenly to remove the \ncongestion. sar and iostat help out as part of the analysis.\n\nYou say you're using IDE, for which I'd highly recommend switching to \nSCSI and mutliple controllers because IDE isn't great for lots of other \nreasons. Obviously budgets count, and playing with SCSI certainly limits \nthat. We took a total of 8 disks across 2 SCSI 160 channels and split up \nthe drives into a number of software RAID arrays. RAID0 mirrors for the \nos, pg_xlog, data disk journal and swap and the rest became a RAID5 \narray for the data. You could instead implement your DATA disk as \nRAID1+0 if you wanted more perf at the cost of free space. Anyway, it's \ncertainly not the fastest config out there, but it made all the \ndifference to this particular application. Infact, we had so much free \nI/O we recently installed another app on there (based on mysql, sorry) \nwhich runs concurrently, and itself 4 times faster than it originally did...\n\nYMMV, just my 2p.\n\n-- \n\nRob Fielding\[email protected]\n\nwww.dsvr.co.uk Development Designer Servers Ltd\n\n", "msg_date": "Tue, 11 May 2004 16:12:20 +0100", "msg_from": "Rob Fielding <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Quoting Rob Fielding <[email protected]>:\n\n> Assuming you're running with optimal schema and index design (ie you're \n> not doing extra work unnecessarily), and your backend has \n> better-then-default config options set-up (plenty of tips around here), \n> then disk arrangement is critical to smoothing the ride.\n\nThe schema and queries are extremely simple. I've been experimenting\nwith config options. One possibility I'm looking into is whether \nshared_buffers is too high, at 12000. We have some preliminary evidence\nthat setting it lower (1000) reduces the demand for IO bandwidth to\na point where the spikes become almost tolerable.\n\n> First tip would to take your pg_xlog and put it on another disk (and \n> channel). 
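(On 7.3/7.4 that tip amounts to stopping the postmaster and replacing the directory with a symlink; a sketch, where the second disk's mount point is only an example:

    pg_ctl -D $PGDATA stop
    mv $PGDATA/pg_xlog /mnt/disk2/pg_xlog
    ln -s /mnt/disk2/pg_xlog $PGDATA/pg_xlog
    pg_ctl -D $PGDATA start
)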
\n\nThat's on my list of things to try.\n\n> Next if you're running a journalled fs, get that journal off \n> onto another disk (and channel). Finally, get as many disks for the data \n> store and spread the load across spindles. \n\nDumb question: how do I spread the data across spindles? Do you have\na pointer to something I could read?\n\nJack Orenstein\n\n----------------------------------------------------------------\nThis message was sent using IMP, the Internet Messaging Program.\n", "msg_date": "Tue, 11 May 2004 12:52:32 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "On Tue, 11 May 2004 [email protected] wrote:\n\n> Quoting Rob Fielding <[email protected]>:\n> \n> > Assuming you're running with optimal schema and index design (ie you're \n> > not doing extra work unnecessarily), and your backend has \n> > better-then-default config options set-up (plenty of tips around here), \n> > then disk arrangement is critical to smoothing the ride.\n> \n> The schema and queries are extremely simple. I've been experimenting\n> with config options. One possibility I'm looking into is whether \n> shared_buffers is too high, at 12000. We have some preliminary evidence\n> that setting it lower (1000) reduces the demand for IO bandwidth to\n> a point where the spikes become almost tolerable.\n\nIf the shared_buffers are large, postgresql seems to have a performance \nissue with handling them. Plus they can cause the kernel to dump cache on \nthings that would otherwise be right there and therefore forces the \ndatabase to hit the drives. You might wanna try settings between 1000 and \n10000 and see where your sweet spot is.\n\n> > First tip would to take your pg_xlog and put it on another disk (and \n> > channel). \n> \n> That's on my list of things to try.\n> \n> > Next if you're running a journalled fs, get that journal off \n> > onto another disk (and channel). Finally, get as many disks for the data \n> > store and spread the load across spindles. \n> \n> Dumb question: how do I spread the data across spindles? Do you have\n> a pointer to something I could read?\n\nLook into a high quality hardware RAID controller with battery backed \ncache on board. We use the ami/lsi megaraid and I'm quite pleased with \nits writing performance.\n\nHow you configure your drives is up to you. For smaller numbers of \ndrives (6 or less) RAID 1+0 is usually a clear winner. For medium numbers \nof drives, say 8 to 20, RAID 5 works well. For more drives than that, \nmany folks report RAID 5+0 or 0+5 to work well.\n\nI've only played around with 12 or fewer drives, so I'm saying RAID 5+0 is \na good choice from my experience, just reflecting back what I've heard \nhere on the performance mailing list.\n\nIf you're not doing much writing, then a software RAID may be a good \nintermediate solution, especially RAID1 with >2 disks under linux seems a \ngood setup for a mostly read database.\n\n", "msg_date": "Tue, 11 May 2004 11:30:31 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "The king of statistics in these cases, is probably vmstat. one can \ndrill down on specific things from there, but first you should send \nsome vmstat output.\n\nReducing cache -> reducing IO suggests to me the OS might be paging out \nshared buffers. 
This is indicated by activity in the \"si\" and \"so\" \ncolumns of vmstat. intentional disk activity by the \napplciation(postgres) shows up in the \t\"bi\" and \"bo\" columns.\n\nIf you are having a \"write storm\" or bursty writes that's burying \nperformance, a scsi raid controler with writeback cache will greatly \nimprove the situation, but I do believe they run around $1-2k. If \nit's write specific problem, the cache matters more than the striping, \nexcept to say that write specfic perf problems should avoid raid5\n\nplease send the output of \"vmstat 10\" for about 10 minutes, spanning \ngood performance and bad performance.\n\n\n\n\n\nOn May 11, 2004, at 9:52 AM, [email protected] wrote:\n\n> Quoting Rob Fielding <[email protected]>:\n>\n>> Assuming you're running with optimal schema and index design (ie \n>> you're\n>> not doing extra work unnecessarily), and your backend has\n>> better-then-default config options set-up (plenty of tips around \n>> here),\n>> then disk arrangement is critical to smoothing the ride.\n>\n> The schema and queries are extremely simple. I've been experimenting\n> with config options. One possibility I'm looking into is whether\n> shared_buffers is too high, at 12000. We have some preliminary evidence\n> that setting it lower (1000) reduces the demand for IO bandwidth to\n> a point where the spikes become almost tolerable.\n>\n>> First tip would to take your pg_xlog and put it on another disk (and\n>> channel).\n>\n> That's on my list of things to try.\n>\n>> Next if you're running a journalled fs, get that journal off\n>> onto another disk (and channel). Finally, get as many disks for the \n>> data\n>> store and spread the load across spindles.\n>\n> Dumb question: how do I spread the data across spindles? Do you have\n> a pointer to something I could read?\n>\n> Jack Orenstein\n>\n> ----------------------------------------------------------------\n> This message was sent using IMP, the Internet Messaging Program.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 11 May 2004 12:12:35 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "On Tue, 11 May 2004, Paul Tuckfield wrote:\n\n> If you are having a \"write storm\" or bursty writes that's burying \n> performance, a scsi raid controler with writeback cache will greatly \n> improve the situation, but I do believe they run around $1-2k. If \n> it's write specific problem, the cache matters more than the striping, \n> except to say that write specfic perf problems should avoid raid5\n\nActually, a single channel MegaRAID 320-1 (single channel ultra 320) is \nonly $421 at http://www.siliconmechanics.com/c248/u320-scsi.php It works \npretty well for me, having 6 months of a production server on one with \nzero hickups and very good performance. 
They have a dual channel intel \ncard for only $503, but I'm not at all familiar with that card.\n\nThe top of the line megaraid is the 320-4, which is only $1240, which \nain't bad for a four channel RAID controller.\n\nBattery backed cache is an addon, but I think it's only about $80 or so.\n\n", "msg_date": "Tue, 11 May 2004 14:22:40 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of scott.marlowe\nSent: Tuesday, May 11, 2004 2:23 PM\nTo: Paul Tuckfield\nCc: [email protected]; Matthew Nuzum; [email protected]; Rob\nFielding\nSubject: Re: [PERFORM] Configuring PostgreSQL to minimize impact of\ncheckpoints\n\nOn Tue, 11 May 2004, Paul Tuckfield wrote:\n\n> If you are having a \"write storm\" or bursty writes that's burying \n> performance, a scsi raid controler with writeback cache will greatly \n> improve the situation, but I do believe they run around $1-2k. If \n> it's write specific problem, the cache matters more than the striping, \n> except to say that write specfic perf problems should avoid raid5\n\nActually, a single channel MegaRAID 320-1 (single channel ultra 320) is \nonly $421 at http://www.siliconmechanics.com/c248/u320-scsi.php It works \npretty well for me, having 6 months of a production server on one with \nzero hickups and very good performance. They have a dual channel intel \ncard for only $503, but I'm not at all familiar with that card.\n\nThe top of the line megaraid is the 320-4, which is only $1240, which \nain't bad for a four channel RAID controller.\n\nBattery backed cache is an addon, but I think it's only about $80 or so.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n-----------------------------\n\nIf you don't mind slumming on ebay :-) keep an eye out for PERC III cards,\nthey are dell branded LSI cards. Perc = Power Edge Raid Controller. There\nare models on there dual channel u320 and dell usually sells them with\nbattery backed cache. That's how I have acquired all my high end raid\ncards.\n\nRob\n\n", "msg_date": "Tue, 11 May 2004 16:47:05 -0500", "msg_from": "\"Rob Sell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "Love that froogle.\n\nIt looks like a nice card. One thing I didn't get straight is if \nthe cache is writethru or write back.\n\nIf the original posters problem is truly a burst write problem (and not \nlinux caching or virtual memory overcommitment) then writeback is key.\n\n\n\n\n> On Tue, 11 May 2004, Paul Tuckfield wrote:\n>\n>> If you are having a \"write storm\" or bursty writes that's burying\n>> performance, a scsi raid controler with writeback cache will greatly\n>> improve the situation, but I do believe they run around $1-2k. 
If\n>> it's write specific problem, the cache matters more than the striping,\n>> except to say that write specfic perf problems should avoid raid5\n>\n> Actually, a single channel MegaRAID 320-1 (single channel ultra 320) is\n> only $421 at http://www.siliconmechanics.com/c248/u320-scsi.php It \n> works\n> pretty well for me, having 6 months of a production server on one with\n> zero hickups and very good performance. They have a dual channel intel\n> card for only $503, but I'm not at all familiar with that card.\n>\n> The top of the line megaraid is the 320-4, which is only $1240, which\n> ain't bad for a four channel RAID controller.\n>\n> Battery backed cache is an addon, but I think it's only about $80 or \n> so.\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 11 May 2004 14:52:57 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "On Tue, 11 May 2004, Paul Tuckfield wrote:\n\n> Love that froogle.\n> \n> It looks like a nice card. One thing I didn't get straight is if \n> the cache is writethru or write back.\n> \n> If the original posters problem is truly a burst write problem (and not \n> linux caching or virtual memory overcommitment) then writeback is key.\n\nthe MegaRaid can be configured either way. it defaults to writeback if \nthe battery backed cache is present, I believe.\n\n", "msg_date": "Tue, 11 May 2004 16:07:36 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "On Tue, 11 May 2004, Rob Sell wrote:\n\n\n> \n> If you don't mind slumming on ebay :-) keep an eye out for PERC III cards,\n> they are dell branded LSI cards. Perc = Power Edge Raid Controller. There\n> are models on there dual channel u320 and dell usually sells them with\n> battery backed cache. That's how I have acquired all my high end raid\n> cards.\n\nNot all Perc3s are lsi, many are adaptec. The perc3di is adaptec, the \nperc3dc is lsi/megaraid.\n\n", "msg_date": "Tue, 11 May 2004 16:08:20 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "On Tue, 2004-05-11 at 14:52, Paul Tuckfield wrote:\n> Love that froogle.\n> \n> It looks like a nice card. One thing I didn't get straight is if \n> the cache is writethru or write back.\n\n\nThe LSI MegaRAID reading/writing/caching behavior is user configurable.\nIt will support both write-back and write-through, and IIRC, three\ndifferent algorithms for reading (none, read-ahead, adaptive). Plenty\nof configuration options.\n\nIt is a pretty mature and feature complete hardware RAID implementation.\n\n\nj. andrew rogers\n\n", "msg_date": "11 May 2004 15:32:41 -0700", "msg_from": "\"J. 
Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of" }, { "msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\nTL> Jack Orenstein <[email protected]> writes:\n>> I'm looking at one case in which two successive transactions, each\n>> updating a handful of records, take 26 and 18 *seconds* (not msec) to\n>> complete. These transactions normally complete in under 30 msec.\n\nTL> I've seen installations in which it seemed that the \"normal\" query load\nTL> was close to saturating the available disk bandwidth, and the extra load\nTL> imposed by a background VACUUM just pushed the entire system's response\nTL> time over a cliff. In an installation that has I/O capacity to spare,\n\nme stand up waving hand... ;-) This is my only killer problem left.\nI always peg my disk usage at 100% when vacuum runs, and other queries\nare slow too. When not running vacuum, my queries are incredibly\nzippy fast, including joins and counts and group by's on upwards of\n100k rows at a time.\n\nTL> I suspect that the same observations hold true for checkpoints, though\nTL> I haven't specifically seen an installation suffering from that effect.\n\nI don't see that. But I also set checkpoint segments to about 50 on\nmy big server.\n\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 12 May 2004 14:57:58 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": ">>>>> \"JAR\" == J Andrew Rogers <[email protected]> writes:\n\n\nJAR> The LSI MegaRAID reading/writing/caching behavior is user configurable.\nJAR> It will support both write-back and write-through, and IIRC, three\nJAR> different algorithms for reading (none, read-ahead, adaptive). Plenty\nJAR> of configuration options.\n\nFor PG max performance, you want to set it for write-back and\nread-ahead (adaptive has been claimed to be bad, but I got similar\nperformace from read-ahead and adaptive, so YMMV).\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 12 May 2004 15:05:05 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of" }, { "msg_contents": "Quoting Vivek Khera <[email protected]>:\n\n> >>>>> \"TL\" == Tom Lane <[email protected]> writes:\n> \n> TL> Jack Orenstein <[email protected]> writes:\n> >> I'm looking at one case in which two successive transactions, each\n> >> updating a handful of records, take 26 and 18 *seconds* (not msec) to\n> >> complete. These transactions normally complete in under 30 msec.\n> \n> TL> I've seen installations in which it seemed that the \"normal\" query load\n> TL> was close to saturating the available disk bandwidth, and the extra load\n> TL> imposed by a background VACUUM just pushed the entire system's response\n> TL> time over a cliff. 
In an installation that has I/O capacity to spare,\n> ...\n> TL> I suspect that the same observations hold true for checkpoints, though\n> TL> I haven't specifically seen an installation suffering from that effect.\n> \n> I don't see that. But I also set checkpoint segments to about 50 on\n> my big server.\n\nBut wouldn't that affect checkpoint frequency, not checkpoint cost?\n\nJack Orenstein\n\n\n----------------------------------------------------------------\nThis message was sent using IMP, the Internet Messaging Program.\n", "msg_date": "Wed, 12 May 2004 15:22:47 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" }, { "msg_contents": "\nOn May 12, 2004, at 3:22 PM, [email protected] wrote:\n\n>>\n>> I don't see that. But I also set checkpoint segments to about 50 on\n>> my big server.\n>\n> But wouldn't that affect checkpoint frequency, not checkpoint cost\n\nSeems reasonable. I suppose checkpointing doesn't cost as much disk \nI/O as vacuum does. My checkpoints are also on a separate RAID volume \non a separate RAID channel, so perhaps that gives me extra bandwidth to \nperform the checkpoints.\n\n\n", "msg_date": "Wed, 12 May 2004 15:29:11 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring PostgreSQL to minimize impact of checkpoints" } ]
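Pulling the thread together, the knobs that came up look like this in postgresql.conf -- the numbers are the ones discussed above for a 3 GB machine on 7.3/7.4 and are starting points, not recommendations:

    shared_buffers      = 10000    # reports in the thread put the sweet spot between 1000 and 10000
    checkpoint_segments = 40       # fewer, better-spread checkpoints
    checkpoint_timeout  = 600      # seconds between forced checkpoints

plus, outside the config file, pg_xlog (and any filesystem journal) on its own spindle and a battery-backed write-back RAID controller in front of the data disks.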
[ { "msg_contents": "Hi,\n\nI am curious if there are any real life production quad processor setups \nrunning postgresql out there. Since postgresql lacks a proper \nreplication/cluster solution, we have to buy a bigger machine.\n\nRight now we are running on a dual 2.4 Xeon, 3 GB Ram and U160 SCSI \nhardware-raid 10.\n\nHas anyone experiences with quad Xeon or quad Opteron setups? I am \nlooking at the appropriate boards from Tyan, which would be the only \noption for us to buy such a beast. The 30k+ setups from Dell etc. don't \nfit our budget.\n\nI am thinking of the following:\n\nQuad processor (xeon or opteron)\n5 x SCSI 15K RPM for Raid 10 + spare drive\n2 x IDE for system\nICP-Vortex battery backed U320 Hardware Raid\n4-8 GB Ram\n\nWould be nice to hear from you.\n\nRegards,\nBjoern\n", "msg_date": "Tue, 11 May 2004 21:06:58 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Quad processor options" }, { "msg_contents": "it's very good to understand specific choke points you're trying to \naddress by upgrading so you dont get disappointed. Are you truly CPU \nconstrained, or is it memory footprint or IO thruput that makes you \nwant to upgrade?\n\nIMO The best way to begin understanding system choke points is vmstat \noutput.\n\nWould you mind forwarding the output of \"vmstat 10 120\" under peak load \nperiod? (I'm asusming this is linux or unix variant) a brief \ndescription of what is happening during the vmstat sample would help a \nlot too.\n\n\n\n> I am curious if there are any real life production quad processor \n> setups running postgresql out there. Since postgresql lacks a proper \n> replication/cluster solution, we have to buy a bigger machine.\n>\n> Right now we are running on a dual 2.4 Xeon, 3 GB Ram and U160 SCSI \n> hardware-raid 10.\n>\n> Has anyone experiences with quad Xeon or quad Opteron setups? I am \n> looking at the appropriate boards from Tyan, which would be the only \n> option for us to buy such a beast. The 30k+ setups from Dell etc. \n> don't fit our budget.\n>\n> I am thinking of the following:\n>\n> Quad processor (xeon or opteron)\n> 5 x SCSI 15K RPM for Raid 10 + spare drive\n> 2 x IDE for system\n> ICP-Vortex battery backed U320 Hardware Raid\n> 4-8 GB Ram\n>\n> Would be nice to hear from you.\n>\n> Regards,\n> Bjoern\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 11 May 2004 13:14:59 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004, Bjoern Metzdorf wrote:\n\n> Hi,\n> \n> I am curious if there are any real life production quad processor setups \n> running postgresql out there. Since postgresql lacks a proper \n> replication/cluster solution, we have to buy a bigger machine.\n> \n> Right now we are running on a dual 2.4 Xeon, 3 GB Ram and U160 SCSI \n> hardware-raid 10.\n> \n> Has anyone experiences with quad Xeon or quad Opteron setups? I am \n> looking at the appropriate boards from Tyan, which would be the only \n> option for us to buy such a beast. The 30k+ setups from Dell etc. 
don't \n> fit our budget.\n> \n> I am thinking of the following:\n> \n> Quad processor (xeon or opteron)\n> 5 x SCSI 15K RPM for Raid 10 + spare drive\n> 2 x IDE for system\n> ICP-Vortex battery backed U320 Hardware Raid\n> 4-8 GB Ram\n\nWell, from what I've read elsewhere on the internet, it would seem the \nOpterons scale better to 4 CPUs than the basic Xeons do. Of course, the \nexception to this is SGI's altix, which uses their own chipset and runs \nthe itanium with very good memory bandwidth.\n\nBut, do you really need more CPU horsepower?\n\nAre you I/O or CPU or memory or memory bandwidth bound? If you're sitting \nat 99% idle, and iostat says your drives are only running at some small \npercentage of what you know they could, you might be memory or memory \nbandwidth limited. Adding two more CPUs will not help with that \nsituation.\n\nIf your I/O is saturated, then the answer may well be a better RAID \narray, with many more drives plugged into it. Do you have any spare \ndrives you can toss on the machine to see if that helps? Sometimes going \nfrom 4 drives in a RAID 1+0 to 6 or 8 or more can give a big boost in \nperformance.\n\nIn short, don't expect 4 CPUs to solve the problem if the problem isn't \nreally the CPUs being maxed out.\n\nAlso, what type of load are you running? Mostly read, mostly written, few \nconnections handling lots of data, lots of connections each handling a \nlittle data, lots of transactions, etc...\n\nIf you are doing lots of writing, make SURE you have a controller that \nsupports battery backed cache and is configured to write-back, not \nwrite-through.\n\n", "msg_date": "Tue, 11 May 2004 14:16:36 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "scott.marlowe wrote:\n\n> Well, from what I've read elsewhere on the internet, it would seem the \n> Opterons scale better to 4 CPUs than the basic Xeons do. Of course, the \n> exception to this is SGI's altix, which uses their own chipset and runs \n> the itanium with very good memory bandwidth.\n\nThis is basically what I read too. But I cannot spent money on a quad \nopteron just for testing purposes :)\n\n> But, do you really need more CPU horsepower?\n> \n> Are you I/O or CPU or memory or memory bandwidth bound? If you're sitting \n> at 99% idle, and iostat says your drives are only running at some small \n> percentage of what you know they could, you might be memory or memory \n> bandwidth limited. Adding two more CPUs will not help with that \n> situation.\n\nRight now we have a dual xeon 2.4, 3 GB Ram, Mylex extremeraid \ncontroller, running 2 Compaq BD018122C0, 1 Seagate ST318203LC and 1 \nQuantum ATLAS_V_18_SCA.\n\niostat show between 20 and 60 % user avg-cpu. And this is not even peak \ntime.\n\nI attached a \"vmstat 10 120\" output for perhaps 60-70% peak load.\n\n> If your I/O is saturated, then the answer may well be a better RAID \n> array, with many more drives plugged into it. Do you have any spare \n> drives you can toss on the machine to see if that helps? Sometimes going \n> from 4 drives in a RAID 1+0 to 6 or 8 or more can give a big boost in \n> performance.\n\nNext drives I'll buy will certainly be 15k scsi drives.\n\n> In short, don't expect 4 CPUs to solve the problem if the problem isn't \n> really the CPUs being maxed out.\n> \n> Also, what type of load are you running? 
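For the CPU-versus-I/O question, a few minutes of observation under real load usually settles it; the flags are only a sketch, and iostat -x needs the sysstat package installed:

    vmstat 10      # plenty of idle CPU alongside slow queries points at the disks
    iostat -x 10   # per-device utilisation and queue sizes
    top            # shows whether the backends really are pegging the CPUs

As for the load itself: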
Mostly read, mostly written, few \n> connections handling lots of data, lots of connections each handling a \n> little data, lots of transactions, etc...\n\nIn peak times we can get up to 700-800 connections at the same time. \nThere are quite some updates involved, without having exact numbers I'll \nthink that we have about 70% selects and 30% updates/inserts.\n\n> If you are doing lots of writing, make SURE you have a controller that \n> supports battery backed cache and is configured to write-back, not \n> write-through.\n\nCould you recommend a certain controller type? The only battery backed \none that I found on the net is the newest model from icp-vortex.com.\n\nRegards,\nBjoern", "msg_date": "Tue, 11 May 2004 22:41:28 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "Paul Tuckfield wrote:\n\n> Would you mind forwarding the output of \"vmstat 10 120\" under peak load \n> period? (I'm asusming this is linux or unix variant) a brief \n> description of what is happening during the vmstat sample would help a \n> lot too.\n\nsee my other mail.\n\nWe are running Linux, Kernel 2.4. As soon as the next debian version \ncomes out, I'll happily switch to 2.6 :)\n\nRegards,\nBjoern\n", "msg_date": "Tue, 11 May 2004 22:46:35 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004, Bjoern Metzdorf wrote:\n\n> scott.marlowe wrote:\n> \n> > Well, from what I've read elsewhere on the internet, it would seem the \n> > Opterons scale better to 4 CPUs than the basic Xeons do. Of course, the \n> > exception to this is SGI's altix, which uses their own chipset and runs \n> > the itanium with very good memory bandwidth.\n> \n> This is basically what I read too. But I cannot spent money on a quad \n> opteron just for testing purposes :)\n\nWouldn't it be nice to just have a lab full of these things?\n\n> > If your I/O is saturated, then the answer may well be a better RAID \n> > array, with many more drives plugged into it. Do you have any spare \n> > drives you can toss on the machine to see if that helps? Sometimes going \n> > from 4 drives in a RAID 1+0 to 6 or 8 or more can give a big boost in \n> > performance.\n> \n> Next drives I'll buy will certainly be 15k scsi drives.\n\nBetter to buy more 10k drives than fewer 15k drives. Other than slightly \nfaster select times, the 15ks aren't really any faster.\n\n> > In short, don't expect 4 CPUs to solve the problem if the problem isn't \n> > really the CPUs being maxed out.\n> > \n> > Also, what type of load are you running? Mostly read, mostly written, few \n> > connections handling lots of data, lots of connections each handling a \n> > little data, lots of transactions, etc...\n> \n> In peak times we can get up to 700-800 connections at the same time. \n> There are quite some updates involved, without having exact numbers I'll \n> think that we have about 70% selects and 30% updates/inserts.\n\nWow, a lot of writes then.\n\n> > If you are doing lots of writing, make SURE you have a controller that \n> > supports battery backed cache and is configured to write-back, not \n> > write-through.\n> \n> Could you recommend a certain controller type? The only battery backed \n> one that I found on the net is the newest model from icp-vortex.com.\n\nSure, adaptec makes one, so does lsi megaraid. 
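If you are not sure which of the two is already sitting in a given box, something like this is usually enough to tell (assuming pciutils is installed):

    lspci | grep -i raid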
Dell resells both of \nthese, the PERC3DI and the PERC3DC are adaptec, then lsi in that order, I \nbelieve. We run the lsi megaraid with 64 megs battery backed cache.\n\nIntel also makes one, but I've heard nothing about it.\n\nIf you get the LSI megaraid, make sure you're running the latest megaraid \n2 driver, not the older, slower 1.18 series. If you are running linux, \nlook for the dkms packaged version. dkms, (Dynamic Kernel Module System) \nautomagically compiles and installs source rpms for drivers when you \ninstall them, and configures the machine to use them to boot up. Most \ndrivers seem to be slowly headed that way in the linux universe, and I \nreally like the simplicity and power of dkms.\n\nI haven't directly tested anything but the adaptec and the lsi megaraid. \nHere at work we've had massive issues trying to get the adaptec cards \nconfigured and installed on, while the megaraid was a snap. Installed RH, \ninstalled the dkms rpm, installed the dkms enabled megaraid driver and \nrebooted. Literally, that's all it took.\n\n", "msg_date": "Tue, 11 May 2004 15:02:24 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "scott.marlowe wrote:\n>>Next drives I'll buy will certainly be 15k scsi drives.\n> \n> Better to buy more 10k drives than fewer 15k drives. Other than slightly \n> faster select times, the 15ks aren't really any faster.\n\nGood to know. I'll remember that.\n\n>>In peak times we can get up to 700-800 connections at the same time. \n>>There are quite some updates involved, without having exact numbers I'll \n>>think that we have about 70% selects and 30% updates/inserts.\n> \n> Wow, a lot of writes then.\n\nYes, it certainly could also be only 15-20% updates/inserts, but this is \nalso not negligible.\n\n> Sure, adaptec makes one, so does lsi megaraid. Dell resells both of \n> these, the PERC3DI and the PERC3DC are adaptec, then lsi in that order, I \n> believe. We run the lsi megaraid with 64 megs battery backed cache.\n\nThe LSI sounds good.\n\n> Intel also makes one, but I've heard nothing about it.\n\nIt could well be the ICP Vortex one, ICP was bought by Intel some time ago..\n\n> I haven't directly tested anything but the adaptec and the lsi megaraid. \n> Here at work we've had massive issues trying to get the adaptec cards \n> configured and installed on, while the megaraid was a snap. Installed RH, \n> installed the dkms rpm, installed the dkms enabled megaraid driver and \n> rebooted. Literally, that's all it took.\n\nI didn't hear anything about dkms for debian, so I will be hand-patching \nas usual :)\n\nRegards,\nBjoern\n\n", "msg_date": "Tue, 11 May 2004 23:11:12 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004, Bjoern Metzdorf wrote:\n\n> scott.marlowe wrote:\n> > Sure, adaptec makes one, so does lsi megaraid. Dell resells both of \n> > these, the PERC3DI and the PERC3DC are adaptec, then lsi in that order, I \n> > believe. We run the lsi megaraid with 64 megs battery backed cache.\n> \n> The LSI sounds good.\n> \n> > Intel also makes one, but I've heard nothing about it.\n> \n> It could well be the ICP Vortex one, ICP was bought by Intel some time ago..\n\nAlso, there are bigger, faster external RAID boxes as well, that make the \ninternal cards seem puny. 
They're nice because all you need in your main \nbox is a good U320 controller to plug into the external RAID array.\n\nThat URL I mentioned earlier that had prices has some of the external \nboxes listed. No price, not for sale on the web, get out the checkbook \nand write a blank check is my guess. I.e. they're not cheap.\n\nThe other nice thing about the LSI cards is that you can install >1 and \nthe act like one big RAID array. i.e. install two cards with a 20 drive \nRAID0 then make a RAID1 across them, and if one or the other cards itself \nfails, you've still got 100% of your data sitting there. Nice to know you \ncan survive the complete failure of one half of your chain.\n\n> > I haven't directly tested anything but the adaptec and the lsi megaraid. \n> > Here at work we've had massive issues trying to get the adaptec cards \n> > configured and installed on, while the megaraid was a snap. Installed RH, \n> > installed the dkms rpm, installed the dkms enabled megaraid driver and \n> > rebooted. Literally, that's all it took.\n> \n> I didn't hear anything about dkms for debian, so I will be hand-patching \n> as usual :)\n\nYeah, it seems to be an RPM kinda thing. But, I'm thinking the 2.0 \ndrivers got included in the latest 2.6 kernels, so no biggie. I was \nlooking around in google, and it definitely appears the 2.x and 1.x \nmegaraid drivers were merged into \"unified\" driver in 2.6 kernel.\n\n", "msg_date": "Tue, 11 May 2004 15:29:46 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bjoern Metzdorf\nSent: Tuesday, May 11, 2004 3:11 PM\nTo: scott.marlowe\nCc: [email protected]; Pgsql-Admin (E-mail)\nSubject: Re: [PERFORM] Quad processor options\n\nscott.marlowe wrote:\n>>Next drives I'll buy will certainly be 15k scsi drives.\n> \n> Better to buy more 10k drives than fewer 15k drives. Other than slightly \n> faster select times, the 15ks aren't really any faster.\n\nGood to know. I'll remember that.\n\n>>In peak times we can get up to 700-800 connections at the same time. \n>>There are quite some updates involved, without having exact numbers I'll \n>>think that we have about 70% selects and 30% updates/inserts.\n> \n> Wow, a lot of writes then.\n\nYes, it certainly could also be only 15-20% updates/inserts, but this is \nalso not negligible.\n\n> Sure, adaptec makes one, so does lsi megaraid. Dell resells both of \n> these, the PERC3DI and the PERC3DC are adaptec, then lsi in that order, I \n> believe. We run the lsi megaraid with 64 megs battery backed cache.\n\nThe LSI sounds good.\n\n> Intel also makes one, but I've heard nothing about it.\n\nIt could well be the ICP Vortex one, ICP was bought by Intel some time ago..\n\n> I haven't directly tested anything but the adaptec and the lsi megaraid. \n> Here at work we've had massive issues trying to get the adaptec cards \n> configured and installed on, while the megaraid was a snap. Installed RH,\n\n> installed the dkms rpm, installed the dkms enabled megaraid driver and \n> rebooted. Literally, that's all it took.\n\nI didn't hear anything about dkms for debian, so I will be hand-patching \nas usual :)\n\nRegards,\nBjoern\n\n\n-------------------------\n\nPersonally I would stay away from anything intel over 2 processors. I have\ndone some research and if memory serves it something like this. 
Intel's\narchitecture makes each processor compete for bandwidth on the bus to the\nram. Amd differs in that each proc has its own bus to the ram.\n\nDon't take this as god's honest fact but just keep it in mind when\nconsidering a Xeon solution, it may be worth your time to do some deeper\nresearch into this. There is some on this here\nhttp://www4.tomshardware.com/cpu/20030422/ \n\nRob\n\n", "msg_date": "Tue, 11 May 2004 16:42:57 -0500", "msg_from": "\"Rob Sell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "On 2004-05-11T15:29:46-0600, scott.marlowe wrote:\n> The other nice thing about the LSI cards is that you can install >1 and \n> the act like one big RAID array. i.e. install two cards with a 20 drive \n> RAID0 then make a RAID1 across them, and if one or the other cards itself \n> fails, you've still got 100% of your data sitting there. Nice to know you \n> can survive the complete failure of one half of your chain.\n\n... unless that dying controller corrupted your file system. Depending\non your tolerance for risk, you may not want to operate for long with a\nfile system in an unknown state.\n\nBtw, the Intel and LSI Logic RAID controller cards have suspeciously\nsimilar specificationsi, so I would be surprised if one is an OEM.\n\n\n/Allan\n-- \nAllan Wind\nP.O. Box 2022\nWoburn, MA 01888-0022\nUSA", "msg_date": "Tue, 11 May 2004 18:13:15 -0400", "msg_from": "[email protected] (Allan Wind)", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "On Tue, 2004-05-11 at 12:06, Bjoern Metzdorf wrote:\n> Has anyone experiences with quad Xeon or quad Opteron setups? I am \n> looking at the appropriate boards from Tyan, which would be the only \n> option for us to buy such a beast. The 30k+ setups from Dell etc. don't \n> fit our budget.\n> \n> I am thinking of the following:\n> \n> Quad processor (xeon or opteron)\n> 5 x SCSI 15K RPM for Raid 10 + spare drive\n> 2 x IDE for system\n> ICP-Vortex battery backed U320 Hardware Raid\n> 4-8 GB Ram\n\n\nJust to add my two cents to the fray:\n\nWe use dual Opterons around here and prefer them to the Xeons for\ndatabase servers. As others have pointed out, the Opteron systems will\nscale well to more than two processors unlike the Xeon. I know a couple\npeople with quad Opterons and it apparently scales very nicely, unlike\nquad Xeons which don't give you much more. On some supercomputing\nhardware lists I'm on, they seem to be of the opinion that the current\nOpteron fabric won't really show saturation until you have 6-8 CPUs\nconnected to it.\n\nLike the other folks said, skip the 15k drives. Those will only give\nyou a marginal improvement for an integer factor price increase over 10k\ndrives. Instead spend your money on a nice RAID controller with a fat\ncache and a backup battery, and maybe some extra spindles for your\narray. I personally like the LSI MegaRAID 320-2, which I always max out\nto 256Mb of cache RAM and the required battery. A maxed out LSI 320-2\nshould set you back <$1k. Properly configured, you will notice large\nimprovements in the performance of your disk subsystem, especially if\nyou have a lot of writing going on.\n\nI would recommend getting the Opterons, and spending the relatively\nmodest amount of money to get nice RAID controller with a large\nwrite-back cache while sticking with 10k drives.\n\nDepending on precisely how you configure it, this should cost you no\nmore than $10-12k. 
We just built a very similar configuration, but with\ndual Opterons on an HDAMA motherboard rather than a quad Tyan, and it\ncost <$6k inclusive of everything. Add the money for 4 of the 8xx\nprocessors and the Tyan quad motherboard, and the sum comes out to a\nvery reasonable number for what you are getting.\n\n\nj. andrew rogers\n\n\n\n", "msg_date": "11 May 2004 15:23:29 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004, Allan Wind wrote:\n\n> On 2004-05-11T15:29:46-0600, scott.marlowe wrote:\n> > The other nice thing about the LSI cards is that you can install >1 and \n> > the act like one big RAID array. i.e. install two cards with a 20 drive \n> > RAID0 then make a RAID1 across them, and if one or the other cards itself \n> > fails, you've still got 100% of your data sitting there. Nice to know you \n> > can survive the complete failure of one half of your chain.\n> \n> ... unless that dying controller corrupted your file system. Depending\n> on your tolerance for risk, you may not want to operate for long with a\n> file system in an unknown state.\n\nIt would have to be the primary controller for that to happen. The way \nthe LSI's work is that you disable the BIOS on the 2nd to 4th cards, and \nthe first card, with the active BIOS acts as the primary controller.\n\nIn this case, that means the main card is doing the RAID1 work, then \nhanding off the data to the subordinate cards.\n\nThe subordinate cards do all their own RAID0 work.\n\nmobo ---controller 1--<array1 of disks in RAID0\n.....|--controller 2--<array2 of disks in RAID0\n\nand whichever controller fails just kind of disappears.\n\nNote that if it is the master controller, then you'll have to shut down \nand enable the BIOS on one of the secondardy (now primary) controllers.\n\nSo while it's possible for the master card failing to corrupt the RAID1 \nset, it's still a more reliable system that with just one card.\n\nBut nothing is 100% reliable, sadly.\n\n> Btw, the Intel and LSI Logic RAID controller cards have suspeciously\n> similar specificationsi, so I would be surprised if one is an OEM.\n\nHmmm. I'll take a closer look.\n\n", "msg_date": "Tue, 11 May 2004 16:44:21 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "I'm confused why you say the system is 70% busy: the vmstat output \nshows 70% *idle*.\n\nThe vmstat you sent shows good things and ambiguous things:\n- si and so are zero, so your not paging/swapping. Thats always step \n1. you're fine.\n- bi and bo (physical IO) shows pretty high numbers for how many disks \nyou have.\n (assuming random IO) so please send an \"iostat 10\" sampling during \npeak.\n- note that cpu is only 30% busy. that should mean that adding cpus \nwill *not* help.\n- the \"cache\" column shows that linux is using 2.3G for cache. (way too \nmuch)\n you generally want to give memory to postgres to keep it \"close\" to \nthe user,\n not leave it unused to be claimed by linux cache (need to leave \n*some* for linux tho)\n\nMy recommendations:\n- I'll bet you have a low value for shared buffers, like 10000. On \nyour 3G system\n you should ramp up the value to at least 1G (125000 8k buffers) \nunless something\n else runs on the system. 
It's best to not do things too \ndrastically, so if Im right and\n you sit at 10000 now, try going to 30000 then 60000 then 125000 or \nabove.\n\n- if the above is off base, then I wonder why we see high runque \nnumbers in spite\n of over 60% idle cpu. Maybe some serialization happening somewhere. \n Also depending\n on how you've laid out your 4 disk drives, you may see all IOs going \nto one drive. the 7M/sec\n is on the high side, if that's the case. iostat numbers will reveal \nif it's skewed, and if it's random,\n tho linux iostat doesn't seem to report response times (sigh) \nResponse times are the golden\n metric when diagnosing IO thruput in OLTP / stripe situation.\n\n\n\n\nOn May 11, 2004, at 1:41 PM, Bjoern Metzdorf wrote:\n\n> scott.marlowe wrote:\n>\n>> Well, from what I've read elsewhere on the internet, it would seem \n>> the Opterons scale better to 4 CPUs than the basic Xeons do. Of \n>> course, the exception to this is SGI's altix, which uses their own \n>> chipset and runs the itanium with very good memory bandwidth.\n>\n> This is basically what I read too. But I cannot spent money on a quad \n> opteron just for testing purposes :)\n>\n>> But, do you really need more CPU horsepower?\n>> Are you I/O or CPU or memory or memory bandwidth bound? If you're \n>> sitting at 99% idle, and iostat says your drives are only running at \n>> some small percentage of what you know they could, you might be \n>> memory or memory bandwidth limited. Adding two more CPUs will not \n>> help with that situation.\n>\n> Right now we have a dual xeon 2.4, 3 GB Ram, Mylex extremeraid \n> controller, running 2 Compaq BD018122C0, 1 Seagate ST318203LC and 1 \n> Quantum ATLAS_V_18_SCA.\n>\n> iostat show between 20 and 60 % user avg-cpu. And this is not even \n> peak time.\n>\n> I attached a \"vmstat 10 120\" output for perhaps 60-70% peak load.\n>\n>> If your I/O is saturated, then the answer may well be a better RAID \n>> array, with many more drives plugged into it. Do you have any spare \n>> drives you can toss on the machine to see if that helps? Sometimes \n>> going from 4 drives in a RAID 1+0 to 6 or 8 or more can give a big \n>> boost in performance.\n>\n> Next drives I'll buy will certainly be 15k scsi drives.\n>\n>> In short, don't expect 4 CPUs to solve the problem if the problem \n>> isn't really the CPUs being maxed out.\n>> Also, what type of load are you running? Mostly read, mostly \n>> written, few connections handling lots of data, lots of connections \n>> each handling a little data, lots of transactions, etc...\n>\n> In peak times we can get up to 700-800 connections at the same time. \n> There are quite some updates involved, without having exact numbers \n> I'll think that we have about 70% selects and 30% updates/inserts.\n>\n>> If you are doing lots of writing, make SURE you have a controller \n>> that supports battery backed cache and is configured to write-back, \n>> not write-through.\n>\n> Could you recommend a certain controller type? 
The only battery backed \n> one that I found on the net is the newest model from icp-vortex.com.\n>\n> Regards,\n> Bjoern\n> ~# vmstat 10 120\n> procs memory swap io system \n> cpu\n> r b w swpd free buff cache si so bi bo in cs \n> us sy id\n> 1 1 0 24180 10584 32468 2332208 0 1 0 2 1 2 \n> 2 0 0\n> 0 2 0 24564 10480 27812 2313528 8 0 7506 574 1199 8674 \n> 30 7 63\n> 2 1 0 24692 10060 23636 2259176 0 18 8099 298 2074 6328 \n> 25 7 68\n> 2 0 0 24584 18576 21056 2299804 3 6 13208 305 1598 8700 \n> 23 6 71\n> 1 21 1 24504 16588 20912 2309468 4 0 1442 1107 754 6874 \n> 42 13 45\n> 6 1 0 24632 13148 19992 2319400 0 0 2627 499 1184 9633 \n> 37 6 58\n> 5 1 0 24488 10912 19292 2330080 5 0 3404 150 1466 10206 \n> 32 6 61\n> 4 1 0 24488 12180 18824 2342280 3 0 2934 40 1052 3866 \n> 19 3 78\n> 0 0 0 24420 14776 19412 2347232 6 0 403 216 1123 4702 \n> 22 3 74\n> 0 0 0 24548 14408 17380 2321780 4 0 522 715 965 6336 \n> 25 5 71\n> 4 0 0 24676 12504 17756 2322988 0 0 564 830 883 7066 \n> 31 6 63\n> 0 3 0 24676 14060 18232 2325224 0 0 483 388 1097 3401 \n> 21 3 76\n> 0 2 1 24676 13044 18700 2322948 0 0 701 195 1078 5187 \n> 23 3 74\n> 2 0 0 24676 21576 18752 2328168 0 0 467 177 1552 3574 \n> 18 3 78\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n\n", "msg_date": "Tue, 11 May 2004 15:46:25 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004, Bjoern Metzdorf wrote:\n\n> I am curious if there are any real life production quad processor setups \n> running postgresql out there. Since postgresql lacks a proper \n> replication/cluster solution, we have to buy a bigger machine.\n\nDu you run the latest version of PG? I've read the thread bug have not \nseen any information about what pg version. All I've seen was a reference \nto debian which might just as well mean that you run pg 7.2 (probably not \nbut I have to ask).\n\nSome classes of queries run much faster in pg 7.4 then in older versions\nso if you are lucky that can help.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Wed, 12 May 2004 06:03:41 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "...and on Tue, May 11, 2004 at 03:02:24PM -0600, scott.marlowe used the keyboard:\n> \n> If you get the LSI megaraid, make sure you're running the latest megaraid \n> 2 driver, not the older, slower 1.18 series. If you are running linux, \n> look for the dkms packaged version. dkms, (Dynamic Kernel Module System) \n> automagically compiles and installs source rpms for drivers when you \n> install them, and configures the machine to use them to boot up. 
Most \n> drivers seem to be slowly headed that way in the linux universe, and I \n> really like the simplicity and power of dkms.\n> \n\nHi,\n\nGiven the fact LSI MegaRAID seems to be a popular solution around here, and\nmany of you folx use Linux as well, I thought sharing this piece of info\nmight be of use.\n\nRunning v2 megaraid driver on a 2.4 kernel is actually not a good idea _at_\n_all_, as it will silently corrupt your data in the event of a disk failure.\n\nSorry to have to say so, but we tested it (on kernels up to 2.4.25, not sure\nabout 2.4.26 yet) and it comes out it doesn't do hotswap the way it should.\n\nSomehow the replaced disk drives are not _really_ added to the array, which\ncontinues to work in degraded mode for a while and (even worse than that)\nthen starts to think the replaced disk is in order without actually having\nresynced it, thus beginning to issue writes to non-existant areas of it.\n\nThe 2.6 megaraid driver indeed seems to be a merged version of the above\ndriver and the old one, giving both improved performance and correct\nfunctionality in the event of a hotswap taking place.\n\nHope this helped,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/", "msg_date": "Wed, 12 May 2004 06:58:32 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "BM> see my other mail.\n\nBM> We are running Linux, Kernel 2.4. As soon as the next debian version \nBM> comes out, I'll happily switch to 2.6 :)\n\nit's very simple to use 2.6 with testing version, but if you like\nwoody - you can simple install several packets from testing or\nbackports.org\n\nif you think about perfomance you must use lastest version of\npostgresql server - it can be installed from testing or backports.org\ntoo (but postgresql from testing depend from many other testing\npackages).\n\ni think if you upgade existing system you can use backports.org for\nnevest packages, if you install new - use testing - it can be used\non production servers today\n\n", "msg_date": "Wed, 12 May 2004 09:34:34 +0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "On Tue, 2004-05-11 at 15:46 -0700, Paul Tuckfield wrote:\n\n> - the \"cache\" column shows that linux is using 2.3G for cache. (way too \n> much) you generally want to give memory to postgres to keep it \"close\" to \n> the user, not leave it unused to be claimed by linux cache (need to leave \n> *some* for linux tho)\n> \n> My recommendations:\n> - I'll bet you have a low value for shared buffers, like 10000. On \n> your 3G system you should ramp up the value to at least 1G (125000 8k buffers) \n> unless something else runs on the system. 
It's best to not do things too \n> drastically, so if Im right and you sit at 10000 now, try going to\n> 30000 then 60000 then 125000 or above.\n\nHuh?\n\nDoesn't this run counter to almost every piece of PostgreSQL performance\ntuning advice given?\n\nI run my own boxes with buffers set to around 10000-20000 and an\neffective_cache_size = 375000 (8k pages - around 3G).\n\nThat's working well with PostgreSQL 7.4.2 under Debian \"woody\" (using\nOliver Elphick's backported packages from\nhttp://people.debian.org/~elphick/debian/).\n\nRegards,\n\t\t\t\t\tAndrew.\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\nQ: How much does it cost to ride the Unibus?\nA: 2 bits.\n-------------------------------------------------------------------------\n\n", "msg_date": "Wed, 12 May 2004 20:58:13 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad processor options" }, { "msg_contents": "On Tue, 11 May 2004 15:46:25 -0700, Paul Tuckfield <[email protected]>\nwrote:\n>- the \"cache\" column shows that linux is using 2.3G for cache. (way too \n>much)\n\nThere is no such thing as \"way too much cache\".\n\n> you generally want to give memory to postgres to keep it \"close\" to \n>the user,\n\nYes, but only a moderate amount of memory.\n\n> not leave it unused to be claimed by linux cache\n\nCache is not unused memory.\n\n>- I'll bet you have a low value for shared buffers, like 10000. On \n>your 3G system\n> you should ramp up the value to at least 1G (125000 8k buffers) \n\nIn most cases this is almost the worst thing you can do. The only thing\neven worse would be setting it to 1.5 G.\n\nPostgres is just happy with a moderate shared_buffers setting. We\nusually recommend something like 10000. You could try 20000, but don't\nincrease it beyond that without strong evidence that it helps in your\nparticular case.\n\nThis has been discussed several times here, on -hackers and on -general.\nSearch the archives for more information.\n\nServus\n Manfred\n", "msg_date": "Wed, 12 May 2004 12:17:27 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "\nOn 12 May 2004, at 12:17 PM, Manfred Koizar wrote:\n\n> On Tue, 11 May 2004 15:46:25 -0700, Paul Tuckfield <[email protected]>\n> wrote:\n>\n>> - I'll bet you have a low value for shared buffers, like 10000. On\n>> your 3G system\n>> you should ramp up the value to at least 1G (125000 8k buffers)\n>\n> In most cases this is almost the worst thing you can do. The only \n> thing\n> even worse would be setting it to 1.5 G.\n>\n> Postgres is just happy with a moderate shared_buffers setting. We\n> usually recommend something like 10000. You could try 20000, but don't\n> increase it beyond that without strong evidence that it helps in your\n> particular case.\n>\n> This has been discussed several times here, on -hackers and on \n> -general.\n> Search the archives for more information.\n\nWe have definitely found this to be true here. We have some fairly \ncomplex queries running on a rather underpowered box (beautiful but \nsteam-driven old Silicon Graphics Challenge DM). 
We ended up using a \nvery slight increase to shared buffers, but gaining ENORMOUSLY through \nproper optimisation of queries, appropriate indices and the use of \noptimizer-bludgeons like \"SET ENABLE_SEQSCAN = OFF\"\n\nHal\n\n", "msg_date": "Wed, 12 May 2004 12:27:18 +0200", "msg_from": "Halford Dace <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "On Wed, 12 May 2004, Grega Bremec wrote:\n\n> ...and on Tue, May 11, 2004 at 03:02:24PM -0600, scott.marlowe used the keyboard:\n> > \n> > If you get the LSI megaraid, make sure you're running the latest megaraid \n> > 2 driver, not the older, slower 1.18 series. If you are running linux, \n> > look for the dkms packaged version. dkms, (Dynamic Kernel Module System) \n> > automagically compiles and installs source rpms for drivers when you \n> > install them, and configures the machine to use them to boot up. Most \n> > drivers seem to be slowly headed that way in the linux universe, and I \n> > really like the simplicity and power of dkms.\n> > \n> \n> Hi,\n> \n> Given the fact LSI MegaRAID seems to be a popular solution around here, and\n> many of you folx use Linux as well, I thought sharing this piece of info\n> might be of use.\n> \n> Running v2 megaraid driver on a 2.4 kernel is actually not a good idea _at_\n> _all_, as it will silently corrupt your data in the event of a disk failure.\n> \n> Sorry to have to say so, but we tested it (on kernels up to 2.4.25, not sure\n> about 2.4.26 yet) and it comes out it doesn't do hotswap the way it should.\n> \n> Somehow the replaced disk drives are not _really_ added to the array, which\n> continues to work in degraded mode for a while and (even worse than that)\n> then starts to think the replaced disk is in order without actually having\n> resynced it, thus beginning to issue writes to non-existant areas of it.\n> \n> The 2.6 megaraid driver indeed seems to be a merged version of the above\n> driver and the old one, giving both improved performance and correct\n> functionality in the event of a hotswap taking place.\n\nThis doesn't make any sense to me, since the hot swapping is handled by \nthe card autonomously. I also tested it with a hot spare and pulled one \ndrive and it worked fine during our acceptance testing.\n\nHowever, I've got a hot spare machine I can test on, so I'll try it again \nand see if I can make it fail.\n\nwhen testing it, was the problem present in certain RAID configurations or \nonly one type or what? I'm curious to try and reproduce this problem, \nsince I've never heard of it before.\n\nAlso, what firmware version were those megaraid cards, ours is fairly \nnew, as we got it at the beginning of this year, and I'm wondering if it \nis a firmware issue.\n\n", "msg_date": "Wed, 12 May 2004 07:44:37 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "Hi,\n\nat first, many thanks for your valuable replies. 
On my quest for the \nultimate hardware platform I'll try to summarize the things I learned.\n\n-------------------------------------------------------------\n\nThis is our current setup:\n\nHardware:\nDual Xeon DP 2.4 on a TYAN S2722-533 with HT enabled\n3 GB Ram (2 x 1 GB + 2 x 512 MB)\nMylex Extremeraid Controller U160 running RAID 10 with 4 x 18 GB SCSI \n10K RPM, no other drives involved (system, pgdata and wal are all on the \nsame volume).\n\nSoftware:\nDebian 3.0 Woody\nPostgresql 7.4.1 (selfcompiled, no special optimizations)\nKernel 2.4.22 + fixes\n\nDatabase specs:\nSize of a gzipped -9 full dump is roughly 1 gb\n70-80% selects, 20-30% updates (roughly estimated)\nup to 700-800 connections during peak times\nkernel.shmall = 805306368\nkernel.shmmax = 805306368\nmax_connections = 900\nshared_buffers = 20000\nsort_mem = 16384\ncheckpoint_segments = 6\nstatistics collector is enabled (for pg_autovacuum)\n\nLoads:\nWe are experiencing average CPU loads of up to 70% during peak hours. As \nPaul Tuckfield correctly pointed out, my vmstat output didn't support \nthis. This output was not taken during peak times, it was freshly \ngrabbed when I wrote my initial mail. It resembles perhaps 50-60% peak \ntime load (30% cpu usage). iostat does not give results about disk \nusage, I don't know exactly why, the blk_read/wrtn columns are just \nempty. (Perhaps due to the Mylex rd driver, I don't know).\n\n-------------------------------------------------------------\n\nSuggestions and solutions given:\n\nAnjan Dave reported, that he is pretty confident with his Quad Xeon \nsetups, which will cost less than $20K at Dell with a reasonable \nhardware setup. ( Dell 6650 with 2.0GHz/1MB cache/8GB Memory, 5 internal \ndrives (4 in RAID 10, 1 spare) on U320, 128MB cache on the PERC controller)\n\nScott Marlowe pointed out, that one should consider more than 4 drives \n(6 to 8, 10K rpm is enough, 15K is rip-off) for a Raid 10 setup, because \nthat can boost performance quite a lot. One should also be using a \nbattery backed raid controller. Scott has good experiences with the LSI \nMegaraid single channel controller, which is reasonably priced at ~ \n$500. He also stated, that 20-30% writes on a database is quite a lot.\n\nNext Rob Sell told us about his research on more-than-2-way Intel based \nsystems. The memory bandwidth on the xeon platform is always shared \nbetween the cpus. While a 2way xeon may perform quite well, a 4way \nsystem will be suffering due to the reduced memory bandwith available \nfor each processor.\n\nJ. Andrew Roberts supports this. He said that 4way opteron systems scale \nmuch better than a 4way xeon system. Scaling limits begin at 6-8 cpus on \nthe opteron platform. He also says that a fully equipped dual channel \nLSI Megaraid 320 with 256MB cache ram will be less that $1K. A complete \n4way opteron system will be at $10K-$12K.\n\nPaul Tuckfield then gave the suggestion to bump up my shared_buffers. \nWith a 3GB memory system, I could happily be using 1GB for shared \nbuffers (125000). 
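(Doing the arithmetic: 125000 x 8 KB buffers is roughly 1 GB of shared
memory, so following that advice would also mean raising my current
kernel.shmmax of 805306368 -- i.e. 768 MB -- before the postmaster could
even allocate a segment that large.)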
This was questioned by Andrew McMillian, Manfred \nKolzar and Halford Dace, who say that common tuning advices limit \nreasonable settings to 10000-20000 shared buffers, because the OS is \nbetter at caching than the database.\n\n-------------------------------------------------------------\n\nConclusion:\n\nAfter having read some comparisons between n-way xeon and opteron systems:\n\nhttp://www.anandtech.com/IT/showdoc.html?i=1982\nhttp://www.aceshardware.com/read.jsp?id=60000275\n\nI was given the impression, that an opteron system is the way to go.\n\nThis is what I am considering the ultimate platform for postgresql:\n\nHardware:\nTyan Thunder K8QS board\n2-4 x Opteron 848 in NUMA mode\n4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\nLSI Megaraid 320-2 with 256 MB cache ram and battery backup\n6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over both \nchannels (3 + 4) for pgdata including indexes and wal.\n2 x 80 GB S-ATA IDE for system, running linux software raid 1 or \navailable onboard hardware raid (perhaps also 2 x 36 GB SCSI)\n\nSoftware:\nDebian Woody in amd64 biarch mode, or perhaps Redhat/SuSE Enterprise \n64bit distributions.\nKernel 2.6\nPostgres 7.4.2 in 64bit mode\nshared_buffers = 20000\na bumbed up effective_cache_size\n\nNow the only problem left (besides my budget) is the availability of \nsuch a system.\n\nI have found some vendors which ship similar systems, so I will have to \ntalk to them about my dream configuration. I will not self build this \nsystem, there are too many obstacles.\n\nI expect this system to come out on about 12-15K Euro. Very optimistic, \nI know :)\n\nThese are the vendors I found up to now:\n\nhttp://www.appro.com/product/server_4144h.asp\nhttp://www.appro.com/product/server_4145h.asp\nhttp://www.pyramid.de/d/builttosuit/server/4opteron.shtml\nhttp://www.rainbow-it.co.uk/productslist.aspx?CategoryID=4&selection=2\nhttp://www.quadopteron.com/\n\nThey all seem to sell more or less the same system. I found also some \nother vendors which built systems on celestica or amd boards, but they \nare way too expensive.\n\nBuying such a machine is worth some good thoughts. If budget is a limit \nand such a machine might not be maxed out during the next few months, it \nwould make more sense to go for a slightly slower system and an upgrade \nwhen more power is needed.\n\nThanks again for all your replies. I hope to have given a somehow clear \nsummary.\n\nRegards,\nBjoern\n\n", "msg_date": "Thu, 13 May 2004 00:40:58 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Quad processor options - summary" }, { "msg_contents": "This is somthing I wish more of us did on the lists. The list archives\nhave solutions and workarounds for every variety of problem but very few\nsummary emails exist. A good example of this practice is in the\nsun-managers mailling list. The original poster sends a \"SUMMARY\" reply\nto the list with the original problem included and all solutions found.\nAlso makes searching the list archives easier.\n\nSimply a suggestion for us all including myself.\n\nGreg\n\n\nBjoern Metzdorf wrote:\n> Hi,\n> \n> at first, many thanks for your valuable replies. 
On my quest for the \n> ultimate hardware platform I'll try to summarize the things I learned.\n> \n> -------------------------------------------------------------\n> \n> This is our current setup:\n> \n> Hardware:\n> Dual Xeon DP 2.4 on a TYAN S2722-533 with HT enabled\n> 3 GB Ram (2 x 1 GB + 2 x 512 MB)\n> Mylex Extremeraid Controller U160 running RAID 10 with 4 x 18 GB SCSI \n> 10K RPM, no other drives involved (system, pgdata and wal are all on the \n> same volume).\n> \n> Software:\n> Debian 3.0 Woody\n> Postgresql 7.4.1 (selfcompiled, no special optimizations)\n> Kernel 2.4.22 + fixes\n> \n> Database specs:\n> Size of a gzipped -9 full dump is roughly 1 gb\n> 70-80% selects, 20-30% updates (roughly estimated)\n> up to 700-800 connections during peak times\n> kernel.shmall = 805306368\n> kernel.shmmax = 805306368\n> max_connections = 900\n> shared_buffers = 20000\n> sort_mem = 16384\n> checkpoint_segments = 6\n> statistics collector is enabled (for pg_autovacuum)\n> \n> Loads:\n> We are experiencing average CPU loads of up to 70% during peak hours. As \n> Paul Tuckfield correctly pointed out, my vmstat output didn't support \n> this. This output was not taken during peak times, it was freshly \n> grabbed when I wrote my initial mail. It resembles perhaps 50-60% peak \n> time load (30% cpu usage). iostat does not give results about disk \n> usage, I don't know exactly why, the blk_read/wrtn columns are just \n> empty. (Perhaps due to the Mylex rd driver, I don't know).\n> \n> -------------------------------------------------------------\n> \n> Suggestions and solutions given:\n> \n> Anjan Dave reported, that he is pretty confident with his Quad Xeon \n> setups, which will cost less than $20K at Dell with a reasonable \n> hardware setup. ( Dell 6650 with 2.0GHz/1MB cache/8GB Memory, 5 internal \n> drives (4 in RAID 10, 1 spare) on U320, 128MB cache on the PERC controller)\n> \n> Scott Marlowe pointed out, that one should consider more than 4 drives \n> (6 to 8, 10K rpm is enough, 15K is rip-off) for a Raid 10 setup, because \n> that can boost performance quite a lot. One should also be using a \n> battery backed raid controller. Scott has good experiences with the LSI \n> Megaraid single channel controller, which is reasonably priced at ~ \n> $500. He also stated, that 20-30% writes on a database is quite a lot.\n> \n> Next Rob Sell told us about his research on more-than-2-way Intel based \n> systems. The memory bandwidth on the xeon platform is always shared \n> between the cpus. While a 2way xeon may perform quite well, a 4way \n> system will be suffering due to the reduced memory bandwith available \n> for each processor.\n> \n> J. Andrew Roberts supports this. He said that 4way opteron systems scale \n> much better than a 4way xeon system. Scaling limits begin at 6-8 cpus on \n> the opteron platform. He also says that a fully equipped dual channel \n> LSI Megaraid 320 with 256MB cache ram will be less that $1K. A complete \n> 4way opteron system will be at $10K-$12K.\n> \n> Paul Tuckfield then gave the suggestion to bump up my shared_buffers. \n> With a 3GB memory system, I could happily be using 1GB for shared \n> buffers (125000). 
This was questioned by Andrew McMillian, Manfred \n> Kolzar and Halford Dace, who say that common tuning advices limit \n> reasonable settings to 10000-20000 shared buffers, because the OS is \n> better at caching than the database.\n> \n> -------------------------------------------------------------\n> \n> Conclusion:\n> \n> After having read some comparisons between n-way xeon and opteron systems:\n> \n> http://www.anandtech.com/IT/showdoc.html?i=1982\n> http://www.aceshardware.com/read.jsp?id=60000275\n> \n> I was given the impression, that an opteron system is the way to go.\n> \n> This is what I am considering the ultimate platform for postgresql:\n> \n> Hardware:\n> Tyan Thunder K8QS board\n> 2-4 x Opteron 848 in NUMA mode\n> 4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\n> LSI Megaraid 320-2 with 256 MB cache ram and battery backup\n> 6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over both \n> channels (3 + 4) for pgdata including indexes and wal.\n> 2 x 80 GB S-ATA IDE for system, running linux software raid 1 or \n> available onboard hardware raid (perhaps also 2 x 36 GB SCSI)\n> \n> Software:\n> Debian Woody in amd64 biarch mode, or perhaps Redhat/SuSE Enterprise \n> 64bit distributions.\n> Kernel 2.6\n> Postgres 7.4.2 in 64bit mode\n> shared_buffers = 20000\n> a bumbed up effective_cache_size\n> \n> Now the only problem left (besides my budget) is the availability of \n> such a system.\n> \n> I have found some vendors which ship similar systems, so I will have to \n> talk to them about my dream configuration. I will not self build this \n> system, there are too many obstacles.\n> \n> I expect this system to come out on about 12-15K Euro. Very optimistic, \n> I know :)\n> \n> These are the vendors I found up to now:\n> \n> http://www.appro.com/product/server_4144h.asp\n> http://www.appro.com/product/server_4145h.asp\n> http://www.pyramid.de/d/builttosuit/server/4opteron.shtml\n> http://www.rainbow-it.co.uk/productslist.aspx?CategoryID=4&selection=2\n> http://www.quadopteron.com/\n> \n> They all seem to sell more or less the same system. I found also some \n> other vendors which built systems on celestica or amd boards, but they \n> are way too expensive.\n> \n> Buying such a machine is worth some good thoughts. If budget is a limit \n> and such a machine might not be maxed out during the next few months, it \n> would make more sense to go for a slightly slower system and an upgrade \n> when more power is needed.\n> \n> Thanks again for all your replies. I hope to have given a somehow clear \n> summary.\n> \n> Regards,\n> Bjoern\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n-- \nGreg Spiegelberg\n Product Development Manager\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nTechnology. Integrity. Focus. V-Solve!\n\n", "msg_date": "Thu, 13 May 2004 09:15:20 -0400", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Off Topic - Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "Bjoern Metzdorf wrote:\n\n> Hi,\n> \n> at first, many thanks for your valuable replies. 
On my quest for the \n> ultimate hardware platform I'll try to summarize the things I learned.\n> \n> -------------------------------------------------------------\n\n> This is what I am considering the ultimate platform for postgresql:\n> \n> Hardware:\n> Tyan Thunder K8QS board\n> 2-4 x Opteron 848 in NUMA mode\n> 4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\n> LSI Megaraid 320-2 with 256 MB cache ram and battery backup\n> 6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over both \n> channels (3 + 4) for pgdata including indexes and wal.\n\nYou might also consider configuring the Postgres data drives for a RAID \n10 SAME configuration as described in the Oracle paper \"Optimal Storage \nConfiguration Made Easy\" \n(http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \nanyone delved into this before?\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n", "msg_date": "Thu, 13 May 2004 16:02:08 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "James Thornton wrote:\n\n>> This is what I am considering the ultimate platform for postgresql:\n>>\n>> Hardware:\n>> Tyan Thunder K8QS board\n>> 2-4 x Opteron 848 in NUMA mode\n>> 4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\n>> LSI Megaraid 320-2 with 256 MB cache ram and battery backup\n>> 6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over both \n>> channels (3 + 4) for pgdata including indexes and wal.\n> \n> You might also consider configuring the Postgres data drives for a RAID \n> 10 SAME configuration as described in the Oracle paper \"Optimal Storage \n> Configuration Made Easy\" \n> (http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \n> anyone delved into this before?\n\nOk, if I understand it correctly the papers recommends the following:\n\n1. Get many drives and stripe them into a RAID0 with a stripe width of \n1MB. I am not quite sure if this stripe width is to be controlled at the \napplication level (does postgres support this?) or if e.g. the \"chunk \nsize\" of the linux software driver is meant. Normally a chunk size of \n4KB is recommended, so 1MB sounds fairly large.\n\n2. Mirror your RAID0 and get a RAID10.\n\n3. Use primarily the fast, outer regions of your disks. In practice this \nmight be achieved by putting only half of the disk (the outer half) into \nyour stripe set. E.g. put only the outer 18GB of your 36GB disks into \nthe stripe set. Btw, is it common for all drives that the outer region \nis on the higher block numbers? Or is it sometimes on the lower block \nnumbers?\n\n4. Subset data by partition, not disk. If you have 8 disks, then don't \ntake a 4 disk RAID10 for data and the other one for log or indexes, but \nmake a global 8 drive RAID10 and have it partitioned the way that data \nand log + indexes are located on all drives.\n\nThey say, which is very interesting, as it is really contrary to what is \nnormally recommended, that it is good or better to have one big stripe \nset over all disks available, than to put log + indexes on a separated \nstripe set. Having one big stripe set means that the speed of this big \nstripe set is available to all data. 
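(For the postgres side of point 4 -- splitting pgdata from the WAL by
partition rather than by dedicated disks -- 7.4 has no tablespaces, so at
least for the WAL the usual trick is a symlinked pg_xlog, e.g.

    mv $PGDATA/pg_xlog /bigraid/wal/pg_xlog
    ln -s /bigraid/wal/pg_xlog $PGDATA/pg_xlog

with the postmaster stopped while moving it; the paths here are only
illustrative.)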
In practice this setup is as fast \nas or even faster than the \"old\" approach.\n\n----------------------------------------------------------------\n\nBottom line for a normal, less than 10 disk setup:\n\nGet many disks (8 + spare), create a RAID0 with 4 disks and mirror it to \nthe other 4 disks for a RAID10. Make sure to create the RAID on the \nouter half of the disks (setup may depend on the disk model and raid \ncontroller used), leaving the inner half empty.\nUse a logical volume manager (LVM), which always helps when adding disk \nspace, and create 2 partitions on your RAID10. One for data and one for \nlog + indexes. This should look like this:\n\n----- ----- ----- -----\n| 1 | | 1 | | 1 | | 1 |\n----- ----- ----- ----- <- outer, faster half of the disk\n| 2 | | 2 | | 2 | | 2 | part of the RAID10\n----- ----- ----- -----\n| | | | | | | |\n| | | | | | | | <- inner, slower half of the disk\n| | | | | | | | not used at all\n----- ----- ----- -----\n\nPartition 1 for data, partition 2 for log + indexes. All mirrored to the \nother 4 disks not shown.\n\nIf you take 36GB disks, this should end up like this:\n\nRAID10 has size of 36 / 2 * 4 = 72GB\nPartition 1 is 36 GB\nPartition 2 is 36 GB\n\nIf 36GB is not enough for your pgdata set, you might consider moving to \n72GB disks, or (even better) make a 16 drive RAID10 out of 36GB disks, \nwhich both will end up in a size of 72GB for your data (but the 16 drive \nversion will be faster).\n\nAny comments?\n\nRegards,\nBjoern\n", "msg_date": "Thu, 13 May 2004 23:53:31 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "Bjoern Metzdorf wrote:\n\n>> You might also consider configuring the Postgres data drives for a \n>> RAID 10 SAME configuration as described in the Oracle paper \"Optimal \n>> Storage Configuration Made Easy\" \n>> (http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \n>> anyone delved into this before?\n> \n> Ok, if I understand it correctly the papers recommends the following:\n> \n> 1. Get many drives and stripe them into a RAID0 with a stripe width of \n> 1MB. I am not quite sure if this stripe width is to be controlled at the \n> application level (does postgres support this?) or if e.g. the \"chunk \n> size\" of the linux software driver is meant. Normally a chunk size of \n> 4KB is recommended, so 1MB sounds fairly large.\n> \n> 2. Mirror your RAID0 and get a RAID10.\n\nDon't use RAID 0+1 -- use RAID 1+0 instead. Performance is the same, but \nif a disk fails in a RAID 0+1 configuration, you are left with a RAID 0 \narray. In a RAID 1+0 configuration, multiple disks can fail.\n\nA few weeks ago I called LSI asking about the Dell PERC4-Di card, which \nis actually an LSI Megaraid 320-2. Dell's documentation said that its \nsupport for RAID 10 was in the form of RAID-1 concatenated, but LSI said \nthat this is incorrect and that it supports RAID 10 proper.\n\n> 3. Use primarily the fast, outer regions of your disks. In practice this \n> might be achieved by putting only half of the disk (the outer half) into \n> your stripe set. E.g. put only the outer 18GB of your 36GB disks into \n> the stripe set. 
\n\nYou can still use the inner-half of the drives, just relegate it to \nless-frequently accessed data.\n\nYou also need to consider the filesystem.\n\nSGI and IBM did a detailed study on Linux filesystem performance, which \nincluded XFS, ext2, ext3 (various modes), ReiserFS, and JFS, and the \nresults are presented in a paper entitled \"Filesystem Performance and \nScalability in Linux 2.4.17\" \n(http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf).\n\nThe scaling and load are key factors when selecting a filesystem. Since \nPostgres data is stored in large files, ReiserFS is not the ideal choice \nsince it has been optimized for small files. XFS is probably the best \nchoice for a database server running on a quad processor box.\n\nHowever, Dr. Bert Scalzo of Quest argues that general file system \nbenchmarks aren't ideal for benchmarking a filesystem for a database \nserver. In a paper entitled \"Tuning an Oracle8i Database running Linux\" \n(http://otn.oracle.com/oramag/webcolumns/2002/techarticles/scalzo_linux02.html), \n he says, \"The trouble with these tests-for example, Bonnie, Bonnie++, \nDbench, Iobench, Iozone, Mongo, and Postmark-is that they are basic file \nsystem throughput tests, so their results generally do not pertain in \nany meaningful fashion to the way relational database systems access \ndata files.\" Instead he suggests using these two well-known and widely \naccepted database benchmarks:\n\n* AS3AP: a scalable, portable ANSI SQL relational database benchmark \nthat provides a comprehensive set of tests of database-processing power; \nhas built-in scalability and portability for testing a broad range of \nsystems; minimizes human effort in implementing and running benchmark \ntests; and provides a uniform, metric, straightforward interpretation of \nthe results.\n\n* TPC-C: an online transaction processing (OLTP) benchmark that involves \na mix of five concurrent transactions of various types and either \nexecutes completely online or queries for deferred execution. The \ndatabase comprises nine types of tables, having a wide range of record \nand population sizes. This benchmark measures the number of transactions \nper second.\n\nIn the paper, Scalzo benchmarks ext2, ext3, ReiserFS, JFS, but not XFS. \nSurprisingly ext3 won, but Scalzo didn't address scaling/load. The \nresults are surprising because most think ext3 is just ext2 with \njournaling, thus having extra overhead from journaling.\n\nIf you read papers on ext3, you'll discover that has some optimizations \nthat reduce disk head movement. For example, Daniel Robbins' \"Advanced \nfilesystem implementor's guide, Part 7: Introducing ext3\" \n(http://www-106.ibm.com/developerworks/library/l-fs7/) says:\n\n\"The approach that the [ext3 Journaling Block Device layer API] uses is \ncalled physical journaling, which means that the JBD uses complete \nphysical blocks as the underlying currency for implementing the \njournal...the use of full blocks allows ext3 to perform some additional \noptimizations, such as \"squishing\" multiple pending IO operations within \na single block into the same in-memory data structure. This, in turn, \nallows ext3 to write these multiple changes to disk in a single write \noperation, rather than many. 
In addition, because the literal block data \nis stored in memory, little or no massaging of the in-memory data is \nrequired before writing it to disk, greatly reducing CPU overhead.\"\n\nI suspect that less writes may be the key factor in ext3 winning \nScalzo's DB benchmark. But as I said, Scalzo didn't benchmark XFS and he \ndidn't address scaling.\n\nXFS has a feature called delayed allocation that reduces IO \n(http://www-106.ibm.com/developerworks/library/l-fs9/), and it scales \nmuch better than ext3 so while I haven't tested it, I suspect that it \nmay be the ideal choice for large Linux DB servers:\n\n\"XFS handles allocation by breaking it into a two-step process. First, \nwhen XFS receives new data to be written, it records the pending \ntransaction in RAM and simply reserves an appropriate amount of space on \nthe underlying filesystem. However, while XFS reserves space for the new \ndata, it doesn't decide what filesystem blocks will be used to store the \ndata, at least not yet. XFS procrastinates, delaying this decision to \nthe last possible moment, right before this data is actually written to \ndisk.\n\nBy delaying allocation, XFS gains many opportunities to optimize write \nperformance. When it comes time to write the data to disk, XFS can now \nallocate free space intelligently, in a way that optimizes filesystem \nperformance. In particular, if a bunch of new data is being appended to \na single file, XFS can allocate a single, contiguous region on disk to \nstore this data. If XFS hadn't delayed its allocation decision, it may \nhave unknowingly written the data into multiple non-contiguous chunks, \nreducing write performance significantly. But, because XFS delayed its \nallocation decision, it was able to write the data in one fell swoop, \nimproving write performance as well as reducing overall filesystem \nfragmentation.\n\nDelayed allocation also has another performance benefit. In situations \nwhere many short-lived temporary files are created, XFS may never need \nto write these files to disk at all. Since no blocks are ever allocated, \nthere's no need to deallocate any blocks, and the underlying filesystem \nmetadata doesn't even get touched.\"\n\nFor further study, I have compiled a list of Linux filesystem resources \nat: http://jamesthornton.com/hotlist/linux-filesystems/.\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n", "msg_date": "Thu, 13 May 2004 17:50:45 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "I see you've got an LSI Megaraid card with oodles of Cache. However,\ndon't underestimate the power of the software RAID implementation that\nRed Hat Linux comes with.\n\nWe're using RHE 2.1 and I can recommend Red Hat Enterprise Linux if you\nwant an excellent implementation of software RAID. In fact we have\nfound the software implementation more flexible than that of some\nexpensive hardware controllers. In addition there are also tools to\nenhance the base implementation even further, making setup and\nmaintenance even easier. An advantage of the software implementation is\nbeing able to RAID by partition, not necessarily entire disks.\n\nTo answer question 1, if you use software raid the chunk size is part of\nthe /etc/raidtab file that is used on initial container creation. 
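(For anyone who hasn't seen one, a minimal sketch of a raidtab entry --
chunk-size is given in kB, and every name and number below is just a
placeholder:

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda6
        raid-disk               0
        device                  /dev/sdb6
        raid-disk               1

For RAID 1+0 you define two such RAID0 arrays and then a further raiddev
of raid-level 1 whose two device lines are /dev/md0 and /dev/md1.)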
4KB is\nthe standard and a LARGE chunk size of 1MB may affect performance if\nyou're not writing down to blocks in that size continuously. If you\nmake it to big and you're constantly needing to write out smaller chunks\nof information, then you will find the disk \"always\" working and would\nbe an inefficient use of the blocks. There is some free info around\nabout calculating the ideal chunk size. Looking for \"Calculating chunk\nsize for RAID\" through google.\n\nIn the software implementation, after setup the raidtab is uncessary as\nthe superblocks of the disks now contain their relevant information.\nAs for the application knowing any of this, no, the application layers\nare entirely unaware of the lower implementation. They simply function\nas normal by writing to directories that are now mounted a different\nway. The kernel takes care of the underlying RAID writes and syncs.\n3 is easy to implement with software raid under linux. You simply\npartition the drive like normal, mark the partitions you want to \"raid\"\nas 'fd' 'linux raid autodetect', then configure the /etc/raidtab and do\na mkraid /dev/mdxx where mdxx is the matching partition for the raid\nsetup. You can map them anyway you want, but it can get confusing if\nyou're mapping /dev/sda6 > /dev/sdb8 and calling it /dev/md7.\nWe've found it easier to make them all line up, /dev/sda6 > /dev/sdb6 >\n/dev/md6\n\nFYI, if you want better performance, use 15K SCSI disks, and make sure\nyou've got more than 8MB of cache per disk. Also, you're correct in\nsplitting the drives across the channel, that's a trap for young players\n;-)\n\nBjoern is right to recommend an LVM, it will allow you to dynamically\nallocate new size to the RAID volume when you add more disks. However\nI've no experience in implementation with an LVM under the software RAID\nfor Linux, though I believe it can be done. \n\nThe software RAID implementation allows you to stop and start software\nRAID devices as desired, add new hot spare disks to the containers as\nneeded and rebuild containers on the fly. You can even change kernel\noptions to speed up or slow down the sync speed when rebuilding the\ncontainer.\n\nAnyway, have fun, cause striping is the hot rod of the RAID\nimplementations ;-)\n\nRegards.\n Hadley\n\n\nOn Fri, 2004-05-14 at 09:53, Bjoern Metzdorf wrote:\n\n> James Thornton wrote:\n> \n> >> This is what I am considering the ultimate platform for postgresql:\n> >>\n> >> Hardware:\n> >> Tyan Thunder K8QS board\n> >> 2-4 x Opteron 848 in NUMA mode\n> >> 4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\n> >> LSI Megaraid 320-2 with 256 MB cache ram and battery backup\n> >> 6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over both \n> >> channels (3 + 4) for pgdata including indexes and wal.\n> > \n> > You might also consider configuring the Postgres data drives for a RAID \n> > 10 SAME configuration as described in the Oracle paper \"Optimal Storage \n> > Configuration Made Easy\" \n> > (http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \n> > anyone delved into this before?\n> \n> Ok, if I understand it correctly the papers recommends the following:\n> \n> 1. Get many drives and stripe them into a RAID0 with a stripe width of \n> 1MB. I am not quite sure if this stripe width is to be controlled at the \n> application level (does postgres support this?) or if e.g. the \"chunk \n> size\" of the linux software driver is meant. 
Normally a chunk size of \n> 4KB is recommended, so 1MB sounds fairly large.\n> \n> 2. Mirror your RAID0 and get a RAID10.\n> \n> 3. Use primarily the fast, outer regions of your disks. In practice this \n> might be achieved by putting only half of the disk (the outer half) into \n> your stripe set. E.g. put only the outer 18GB of your 36GB disks into \n> the stripe set. Btw, is it common for all drives that the outer region \n> is on the higher block numbers? Or is it sometimes on the lower block \n> numbers?\n> \n> 4. Subset data by partition, not disk. If you have 8 disks, then don't \n> take a 4 disk RAID10 for data and the other one for log or indexes, but \n> make a global 8 drive RAID10 and have it partitioned the way that data \n> and log + indexes are located on all drives.\n> \n> They say, which is very interesting, as it is really contrary to what is \n> normally recommended, that it is good or better to have one big stripe \n> set over all disks available, than to put log + indexes on a separated \n> stripe set. Having one big stripe set means that the speed of this big \n> stripe set is available to all data. In practice this setup is as fast \n> as or even faster than the \"old\" approach.\n> \n> ----------------------------------------------------------------\n> \n> Bottom line for a normal, less than 10 disk setup:\n> \n> Get many disks (8 + spare), create a RAID0 with 4 disks and mirror it to \n> the other 4 disks for a RAID10. Make sure to create the RAID on the \n> outer half of the disks (setup may depend on the disk model and raid \n> controller used), leaving the inner half empty.\n> Use a logical volume manager (LVM), which always helps when adding disk \n> space, and create 2 partitions on your RAID10. One for data and one for \n> log + indexes. This should look like this:\n> \n> ----- ----- ----- -----\n> | 1 | | 1 | | 1 | | 1 |\n> ----- ----- ----- ----- <- outer, faster half of the disk\n> | 2 | | 2 | | 2 | | 2 | part of the RAID10\n> ----- ----- ----- -----\n> | | | | | | | |\n> | | | | | | | | <- inner, slower half of the disk\n> | | | | | | | | not used at all\n> ----- ----- ----- -----\n> \n> Partition 1 for data, partition 2 for log + indexes. All mirrored to the \n> other 4 disks not shown.\n> \n> If you take 36GB disks, this should end up like this:\n> \n> RAID10 has size of 36 / 2 * 4 = 72GB\n> Partition 1 is 36 GB\n> Partition 2 is 36 GB\n> \n> If 36GB is not enough for your pgdata set, you might consider moving to \n> 72GB disks, or (even better) make a 16 drive RAID10 out of 36GB disks, \n> which both will end up in a size of 72GB for your data (but the 16 drive \n> version will be faster).\n> \n> Any comments?\n> \n> Regards,\n> Bjoern\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n
", "msg_date": "Fri, 14 May 2004 10:59:16 +1200", "msg_from": "Hadley Willan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "Hadley Willan wrote:\n\n> To answer question 1, if you use software raid the chunk size is part of \n> the /etc/raidtab file that is used on initial container creation. 4KB is \n> the standard and a LARGE chunk size of 1MB may affect performance if \n> you're not writing down to blocks in that size continuously. 
If you \n> make it to big and you're constantly needing to write out smaller chunks \n> of information, then you will find the disk \"always\" working and would \n> be an inefficient use of the blocks. There is some free info around \n> about calculating the ideal chunk size. Looking for \"Calculating chunk \n> size for RAID\" through google.\n\n\"Why does the SAME configuration recommend a one megabyte stripe width? \nLet’s examine the reasoning behind this choice. Why not use a stripe \ndepth smaller than one megabyte? Smaller stripe depths can improve disk \nthroughput for a single process by spreading a single IO across multiple \ndisks. However IOs that are much smaller than a megabyte can cause seek \ntime to becomes a large fraction of the total IO time. Therefore, the \noverall efficiency of the storage system is reduced. In some cases it \nmay be worth trading off some efficiency for the increased throughput \nthat smaller stripe depths provide. In general it is not necessary to do \nthis though. Parallel execution at database level achieves high disk \nthroughput while keeping efficiency high. Also, remember that the degree \nof parallelism can be dynamically tuned, whereas the stripe depth is \nvery costly to change.\n\nWhy not use a stripe depth bigger than one megabyte? One megabyte is \nlarge enough that a sequential scan will spend most of its time \ntransferring data instead of positioning the disk head. A bigger stripe \ndepth will improve scan efficiency but only modestly. One megabyte is \nsmall enough that a large IO operation will not “hog” a single disk for \nvery long before moving to the next one. Further, one megabyte is small \nenough that Oracle’s asynchronous readahead operations access multiple \ndisks. One megabyte is also small enough that a single stripe unit will \nnot become a hot-spot. Any access hot-spot that is smaller than a \nmegabyte should fit comfortably in the database buffer cache. Therefore \nit will not create a hot-spot on disk.\"\n\nThe SAME configuration paper says to ensure that that large IO \noperations aren't broken up between the DB and the disk, you need to be \nable to ensure that the database file multi-block read count (Oracle has \na param called db_file_multiblock_read_count, does Postgres?) is the \nsame size as the stripe width and the OS IO limits should be at least \nthis size.\n\nAlso, it says, \"Ideally we would like to stripe the log files using the \nsame one megabyte stripe width as the rest of the files. However, the \nlog files are written sequentially, and many storage systems limit the \nmaximum size of a single write operation to one megabyte (or even less). \nIf the maximum write size is limited, then using a one megabyte stripe \nwidth for the log files may not work well. In this case, a smaller \nstripe width such as 64K may work better. Caching RAID controllers are \nan exception to this. If the storage subsystem can cache write \noperations in nonvolatile RAM, then a one megabyte stripe width will \nwork well for the log files. 
In this case, the write operation will be \nbuffered in cache and the next log writes can be issued before the \nprevious write is destaged to disk.\"\n\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n", "msg_date": "Thu, 13 May 2004 18:36:16 -0500", "msg_from": "James Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "One big caveat re. the \"SAME\" striping strategy, is that readahead can \nreally hurt an OLTP you.\n\nMind you, if you're going from a few disks to a caching array with many \ndisks, it'll be hard to not have a big improvement\n\nBut if you push the envelope of the array with a \"SAME\" configuration, \nreadahead will hurt. Readahead is good for sequential reads but bad \nfor random reads, because the various caches (array and filesystem) get \nflooded with all the blocks that happen to come after whatever random \nblocks you're reading. Because they're random reads these extra \nblocks are genarally *not* read by subsequent queries if the database \nis large enough to be much larger than the cache itself. Of course, \nthe readahead blocks are good if you're doing sequential scans, but \nyou're not doing sequential scans because it's an OLTP database, right?\n\n\nSo this'll probably incite flames but:\nIn an OLTP environment of decent size, readahead is bad. The ideal \nwould be to adjust it dynamically til optimum (likely no readahead) if \nthe array allows it, but most people are fooled by good performance of \nreadahead on simple singlethreaded or small dataset tests, and get \nbitten by this under concurrent loads or large datasets.\n\n\nJames Thornton wrote:\n>\n>>> This is what I am considering the ultimate platform for postgresql:\n>>>\n>>> Hardware:\n>>> Tyan Thunder K8QS board\n>>> 2-4 x Opteron 848 in NUMA mode\n>>> 4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)\n>>> LSI Megaraid 320-2 with 256 MB cache ram and battery backup\n>>> 6 x 36GB SCSI 10K drives + 1 spare running in RAID 10, split over \n>>> both channels (3 + 4) for pgdata including indexes and wal.\n>> You might also consider configuring the Postgres data drives for a \n>> RAID 10 SAME configuration as described in the Oracle paper \"Optimal \n>> Storage Configuration Made Easy\" \n>> (http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has \n>> anyone delved into this before?\n>\n> Ok, if I understand it correctly the papers recommends the following:\n>\n> 1. Get many drives and stripe them into a RAID0 with a stripe width of \n> 1MB. I am not quite sure if this stripe width is to be controlled at \n> the application level (does postgres support this?) or if e.g. the \n> \"chunk size\" of the linux software driver is meant. Normally a chunk \n> size of 4KB is recommended, so 1MB sounds fairly large.\n>\n> 2. Mirror your RAID0 and get a RAID10.\n>\n> 3. Use primarily the fast, outer regions of your disks. In practice \n> this might be achieved by putting only half of the disk (the outer \n> half) into your stripe set. E.g. put only the outer 18GB of your 36GB \n> disks into the stripe set. Btw, is it common for all drives that the \n> outer region is on the higher block numbers? Or is it sometimes on the \n> lower block numbers?\n>\n> 4. Subset data by partition, not disk. 
If you have 8 disks, then don't \n> take a 4 disk RAID10 for data and the other one for log or indexes, \n> but make a global 8 drive RAID10 and have it partitioned the way that \n> data and log + indexes are located on all drives.\n>\n> They say, which is very interesting, as it is really contrary to what \n> is normally recommended, that it is good or better to have one big \n> stripe set over all disks available, than to put log + indexes on a \n> separated stripe set. Having one big stripe set means that the speed \n> of this big stripe set is available to all data. In practice this \n> setup is as fast as or even faster than the \"old\" approach.\n>\n> ----------------------------------------------------------------\n>\n> Bottom line for a normal, less than 10 disk setup:\n>\n> Get many disks (8 + spare), create a RAID0 with 4 disks and mirror it \n> to the other 4 disks for a RAID10. Make sure to create the RAID on the \n> outer half of the disks (setup may depend on the disk model and raid \n> controller used), leaving the inner half empty.\n> Use a logical volume manager (LVM), which always helps when adding \n> disk space, and create 2 partitions on your RAID10. One for data and \n> one for log + indexes. This should look like this:\n>\n> ----- ----- ----- -----\n> | 1 | | 1 | | 1 | | 1 |\n> ----- ----- ----- ----- <- outer, faster half of the disk\n> | 2 | | 2 | | 2 | | 2 | part of the RAID10\n> ----- ----- ----- -----\n> | | | | | | | |\n> | | | | | | | | <- inner, slower half of the disk\n> | | | | | | | | not used at all\n> ----- ----- ----- -----\n>\n> Partition 1 for data, partition 2 for log + indexes. All mirrored to \n> the other 4 disks not shown.\n>\n> If you take 36GB disks, this should end up like this:\n>\n> RAID10 has size of 36 / 2 * 4 = 72GB\n> Partition 1 is 36 GB\n> Partition 2 is 36 GB\n>\n> If 36GB is not enough for your pgdata set, you might consider moving \n> to 72GB disks, or (even better) make a 16 drive RAID10 out of 36GB \n> disks, which both will end up in a size of 72GB for your data (but the \n> 16 drive version will be faster).\n>\n> Any comments?\n>\n> Regards,\n> Bjoern\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Thu, 13 May 2004 17:51:42 -0700", "msg_from": "Paul Tuckfield <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" }, { "msg_contents": "I would recommend trying out several stripe sizes, and making your own \nmeasurements.\n\nA while ago I was involved in building a data warehouse system (Oracle, \nDB2) and after several file and db benchmark exercises we used 256K \nstripes, as these gave the best overall performance results for both \nsystems.\n\nI am not saying \"1M is wrong\", but I am saying \"1M may not be right\" :-)\n\nregards\n\nMark\n\nBjoern Metzdorf wrote:\n\n>\n> 1. Get many drives and stripe them into a RAID0 with a stripe width of \n> 1MB. I am not quite sure if this stripe width is to be controlled at \n> the application level (does postgres support this?) or if e.g. the \n> \"chunk size\" of the linux software driver is meant.
Normally a chunk \n> size of 4KB is recommended, so 1MB sounds fairly large.\n>\n>\n", "msg_date": "Fri, 14 May 2004 22:17:10 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options - summary" } ]
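A back-of-the-envelope sketch of the stripe arithmetic discussed above (this is an illustration added for clarity, not something from the thread; the 8-disk RAID10 and the 8 kB PostgreSQL block size are assumptions chosen to match the examples mentioned): the full stripe of an array is the per-disk chunk size times the number of data disks, so the chunk sizes people quote can be compared directly.

    # Illustrative sketch (assumptions: 8 kB PostgreSQL block size,
    # an 8-disk RAID10, i.e. 4 data disks -- figures taken from the examples above).
    PG_BLOCK_KB = 8

    def stripe_summary(total_disks, chunk_kb, mirrored=True):
        """Return (data_disks, full_stripe_kb, pg_blocks_per_chunk)."""
        data_disks = total_disks // 2 if mirrored else total_disks
        return data_disks, data_disks * chunk_kb, chunk_kb / PG_BLOCK_KB

    for chunk_kb in (4, 256, 1024):  # chunk sizes mentioned in the thread
        data, stripe_kb, blocks = stripe_summary(total_disks=8, chunk_kb=chunk_kb)
        print(f"chunk {chunk_kb:>4} kB: {data} data disks, "
              f"full stripe {stripe_kb} kB, {blocks:g} pg blocks per chunk")

Read this way, Mark's 256K chunk on a 4-data-disk RAID10 and the paper's one megabyte figure are not necessarily far apart -- it depends on whether the paper means the per-disk chunk or the full stripe, which the quoted text (and the thread itself) leaves somewhat ambiguous.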
[ { "msg_contents": "We use XEON Quads (PowerEdge 6650s) and they work nice, provided you configure the postgres properly. Dell is the cheapest quad you can buy I think. You shouldn't be paying 30K unless you are getting high CPU-cache on each processor and tons of memory.\r\n \r\nI am actually curious, have you researched/attempted any postgresql clustering solutions? I agree, you can't just keep buying bigger machines.\r\n \r\nThey have 5 internal drives (4 in RAID 10, 1 spare) on U320, 128MB cache on the PERC controller, 8GB RAM.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Bjoern Metzdorf [mailto:[email protected]] \r\n\tSent: Tue 5/11/2004 3:06 PM \r\n\tTo: [email protected] \r\n\tCc: Pgsql-Admin (E-mail) \r\n\tSubject: [PERFORM] Quad processor options\r\n\t\r\n\t\r\n\r\n\tHi,\r\n\t\r\n\tI am curious if there are any real life production quad processor setups\r\n\trunning postgresql out there. Since postgresql lacks a proper\r\n\treplication/cluster solution, we have to buy a bigger machine.\r\n\t\r\n\tRight now we are running on a dual 2.4 Xeon, 3 GB Ram and U160 SCSI\r\n\thardware-raid 10.\r\n\t\r\n\tHas anyone experiences with quad Xeon or quad Opteron setups? I am\r\n\tlooking at the appropriate boards from Tyan, which would be the only\r\n\toption for us to buy such a beast. The 30k+ setups from Dell etc. don't\r\n\tfit our budget.\r\n\t\r\n\tI am thinking of the following:\r\n\t\r\n\tQuad processor (xeon or opteron)\r\n\t5 x SCSI 15K RPM for Raid 10 + spare drive\r\n\t2 x IDE for system\r\n\tICP-Vortex battery backed U320 Hardware Raid\r\n\t4-8 GB Ram\r\n\t\r\n\tWould be nice to hear from you.\r\n\t\r\n\tRegards,\r\n\tBjoern\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 4: Don't 'kill -9' the postmaster\r\n\t\r\n\r\n", "msg_date": "Tue, 11 May 2004 15:32:57 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "Anjan Dave wrote:\n\n> We use XEON Quads (PowerEdge 6650s) and they work nice,\n > provided you configure the postgres properly.\n > Dell is the cheapest quad you can buy I think.\n > You shouldn't be paying 30K unless you are getting high CPU-cache\n > on each processor and tons of memory.\n\ngood to hear, I tried to online configure a quad xeon here at dell \ngermany, but the 6550 is not available for online configuration. at dell \nusa it works. I will give them a call tomorrow.\n\n> I am actually curious, have you researched/attempted any \n > postgresql clustering solutions?\n > I agree, you can't just keep buying bigger machines.\n\nThere are many asynchronous, trigger based solutions out there (eRserver \netc..), but what we need is basically a master <-> master setup, which \nseems not to be available soon for postgresql.\n\nOur current dual Xeon runs at 60-70% average cpu load, which is really a \nlot. I cannot afford any trigger overhead here. This machine is \nresponsible for over 30M page impressions per month, 50 page impressions \nper second at peak times. The autovacuum daemon is a godsend :)\n\nI'm curious how the recently announced mysql cluster will perform, \nalthough it is not an option for us. postgresql has far superior \nfunctionality.\n\n> They have 5 internal drives (4 in RAID 10, 1 spare) on U320, \n > 128MB cache on the PERC controller, 8GB RAM.\n\nCould you tell me what you paid approximately for this setup?\n\nHow does it perform?
It certainly won't be twice as fast as a dual xeon, \nbut I remember benchmarking a quad P3 xeon some time ago, and it was \ndisappointingly slow...\n\nRegards,\nBjoern\n", "msg_date": "Tue, 11 May 2004 22:28:12 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" } ]
[ { "msg_contents": "Did you mean to say the trigger-based clustering solution is loading the dual CPUs 60-70% right now?\r\n \r\nPerformance will not be linear with more processors, but it does help with more processes. We haven't benchmarked it, but we haven't had any problems also so far in terms of performance.\r\n \r\nPrice would vary with your relation/yearly purchase, etc, but a 6650 with 2.0GHz/1MB cache/8GB Memory, RAID card, drives, etc, should definitely cost you less than 20K USD.\r\n \r\n-anjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Bjoern Metzdorf [mailto:[email protected]] \r\n\tSent: Tue 5/11/2004 4:28 PM \r\n\tTo: Anjan Dave \r\n\tCc: [email protected]; Pgsql-Admin (E-mail) \r\n\tSubject: Re: [PERFORM] Quad processor options\r\n\t\r\n\t\r\n\r\n\tAnjan Dave wrote:\r\n\t\r\n\t> We use XEON Quads (PowerEdge 6650s) and they work nice,\r\n\t > provided you configure the postgres properly.\r\n\t > Dell is the cheapest quad you can buy I think.\r\n\t > You shouldn't be paying 30K unless you are getting high CPU-cache\r\n\t > on each processor and tons of memory.\r\n\t\r\n\tgood to hear, I tried to online configure a quad xeon here at dell\r\n\tgermany, but the 6550 is not available for online configuration. at dell\r\n\tusa it works. I will give them a call tomorrow.\r\n\t\r\n\t> I am actually curious, have you researched/attempted any\r\n\t > postgresql clustering solutions?\r\n\t > I agree, you can't just keep buying bigger machines.\r\n\t\r\n\tThere are many asynchronous, trigger based solutions out there (eRserver\r\n\tetc..), but what we need is basically a master <-> master setup, which\r\n\tseems not to be available soon for postgresql.\r\n\t\r\n\tOur current dual Xeon runs at 60-70% average cpu load, which is really a\r\n\tlot. I cannot afford any trigger overhead here. This machine is\r\n\tresponsible for over 30M page impressions per month, 50 page impressions\r\n\tper second at peak times. The autovacuum daemon is a godsend :)\r\n\t\r\n\tI'm curious how the recently announced mysql cluster will perform,\r\n\talthough it is not an option for us. postgresql has far superior\r\n\tfunctionality.\r\n\t\r\n\t> They have 5 internal drives (4 in RAID 10, 1 spare) on U320,\r\n\t > 128MB cache on the PERC controller, 8GB RAM.\r\n\t\r\n\tCould you tell me what you paid approximately for this setup?\r\n\t\r\n\tHow does it perform? It certainly won't be twice as fast as a dual xeon,\r\n\tbut I remember benchmarking a quad P3 xeon some time ago, and it was\r\n\tdisappointingly slow...\r\n\t\r\n\tRegards,\r\n\tBjoern\r\n\t\r\n\r\n", "msg_date": "Tue, 11 May 2004 16:38:28 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Quad processor options" }, { "msg_contents": "Anjan Dave wrote:\n\n> Did you mean to say the trigger-based clustering solution \n > is loading the dual CPUs 60-70% right now?\n\nNo, this is without any triggers involved.\n\n> Performance will not be linear with more processors, \n > but it does help with more processes.\n > We haven't benchmarked it, but we haven't had any\n > problems also so far in terms of performance.\n\n From the number of processes point of view, we certainly can saturate a quad \nsetup :)\n\n> Price would vary with your relation/yearly purchase, etc, \n > but a 6650 with 2.0GHz/1MB cache/8GB Memory, RAID card,\n > drives, etc, should definitely cost you less than 20K USD.\n\nWhich is still quite a lot.
Anyone have experience with a self-built quad \nxeon, using the Tyan Thunder board?\n\nRegards,\nBjoern\n", "msg_date": "Tue, 11 May 2004 22:45:13 +0200", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Quad processor options" } ]
[ { "msg_contents": "Hello,\n I've been having some performance issues with a DB I use. I'm trying to \ncome up with some performance recommendations to send to the \"administrator\".\n\nHardware:\nCPU0: Pentium III (Coppermine) 1000MHz (256k cache)\nCPU1: Pentium III (Coppermine) 1000MHz (256k cache)\nMemory: 3863468 kB (4 GB)\nOS: Red Hat Linux release 7.2 (Enigma)\nKernel: 2.4.9-31smp\nI/O I believe is a 3-disk raid 5.\n\n/proc/sys/kernel/shmmax and /proc/sys/kernel/shmall were set to 2G\n\nPostgres version: 7.3.4\n\nI know it's a bit dated, and upgrades are planned, but several months out. \nLoad average seems to hover between 1.0 and 5.0-ish during peak hours. CPU \nseems to be the limiting factor but I'm not positive (cpu utilization seems \nto be 40-50%). We have 2 of those set up as the back end to 3 web-servers \neach... supposedly load-balanced, but one of the 2 dbs consistently has \nhigher load. We have a home-grown replication system that keeps them in \nsync with each other... peer to peer (master/master).\n\nThe DB schema is, well to put it nicely... not exactly normalized. No \nconstraints to speak of except for the requisite not-nulls on the primary \nkeys (many of which are compound). Keys are mostly varchar(256) fields.\n\nOk for what I'm uncertain of...\nshared_buffers:\nAccording to http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nIt's more of a staging area and more isn't necessarily better. That psql \nrelies on the OS to cache data for later use.\nBut according to \nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node3.html it's \nwhere psql caches previous data for queries because the OS cache is slower, \nand should be as big as possible without causing swap.\nThose seem to be conflicting statements. In our case, the \"administrator\" \nkept increasing this until performance seemed to increase, which means it's \nnow 250000 (x 8k is 2G).\nIs this just a staging area for data waiting to move to the OS cache, or is \nthis really the area that psql caches its data?\n\neffective_cache_size:\nAgain, according to the Varlena guide this tells psql how much system \nmemory is available for it to do its work in.\nUntil recently, this was set at the default value of 1000. It was just \nrecently increased to 180000 (1.5G).\nAccording to \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html it \nshould be about 25% of memory?\n\nFinally sort_mem:\nWas until recently left at the default of 1000. Is now 16000.\n\nIncreasing the effective cache and sort mem didn't seem to make much of a \ndifference. I'm guessing the eff cache was probably raised a bit too much, \nand shared_buffers is way too high.\n\nWhat can I do to help determine what the proper settings should be and/or \nlook at other possible choke points? What should I look for in iostat, \nmpstat, or vmstat as red flags that we are cpu, memory, or i/o bound?\n\nDB maintenance-wise, I don't believe they were running vacuum full until I \ntold them a few months ago that regular vacuum analyze no longer cleans out \ndead tuples. Now normal vac is run daily, vac full weekly (supposedly). How \ncan I tell from the output of vacuum if the vac fulls aren't being done, or \nnot done often enough? Or from the system tables, what can I read?\n\nIs there anywhere else I can look for possible clues? I have access to the \nDB super-user, but not the system root/user.\n\nThank you for your time. Please let me know any help or suggestions you may \nhave.
Unfortunately upgrading postgres, OS, kernel, or re-writing schema is \nmost likely not an option.\n\n", "msg_date": "Tue, 11 May 2004 17:36:31 -0400", "msg_from": "Doug Y <[email protected]>", "msg_from_op": true, "msg_subject": "Clarification on some settings" }, { "msg_contents": "Doug Y wrote:\n\n> Hello,\n> I've been having some performance issues with a DB I use. I'm trying \n> to come up with some performance recommendations to send to the \n> \"administrator\".\n> \n> Hardware:\n> CPU0: Pentium III (Coppermine) 1000MHz (256k cache)\n> CPU1: Pentium III (Coppermine) 1000MHz (256k cache)\n> Memory: 3863468 kB (4 GB)\n> OS: Red Hat Linux release 7.2 (Enigma)\n> Kernel: 2.4.9-31smp\n> I/O I believe is a 3-disk raid 5.\n> \n> /proc/sys/kernel/shmmax and /proc/sys/kernel/shmall were set to 2G\n> \n> Postgres version: 7.3.4\n > The DB schema is, well to put it nicely... not exactly normalized. No\n> constraints to speak of except for the requisite not-nulls on the \n> primary keys (many of which are compound). Keys are mostly varchar(256) \n> fields.\n> \n> Ok for what I'm uncertain of...\n> shared_buffers:\n> According to http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> It's more of a staging area and more isn't necessarily better. That psql \n> relies on the OS to cache data for later use.\n> But according to \n> http://www.ca.postgresql.org/docs/momjian/hw_performance/node3.html it's \n> where psql caches previous data for queries because the OS cache is \n> slower, and should be as big as possible without causing swap.\n> Those seem to be conflicting statements. In our case, the \n> \"administrator\" kept increasing this until performance seemed to \n> increase, which means it's now 250000 (x 8k is 2G).\n> Is this just a staging area for data waiting to move to the OS cache, or \n> is this really the area that psql caches its data?\n\nIt is the area where postgresql works. It updates data in this area and pushes \nit to OS cache for disk writes later.\n\nFrom experience, larger does not mean better for this parameter. For multi-Gig RAM \nmachines, the best value (on average, for a wide variety of loads) has been found to be \naround 10000-15000. Maybe even lower.\n\nIt is a well known fact that raising this parameter unnecessarily decreases \nperformance. You indicate that best performance occurred at 250000. This is very, \nvery large compared to other people's experience.\n> \n> effective_cache_size:\n> Again, according to the Varlena guide this tells psql how much system \n> memory is available for it to do its work in.\n> Until recently, this was set at the default value of 1000. It was just \n> recently increased to 180000 (1.5G).\n> According to \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html \n> it should be about 25% of memory?\n\nThere is no rule of thumb. It is the amount of memory the OS will dedicate to postgresql data \nbuffers. Depending upon what else you run on the machine, it could be a \nstraightforward or a tricky value to calculate. For a 4GB machine, 1.5GB is quite \ngood, but coupled with 2G of shared buffers it could push the machine into a swap \nstorm. And swapping shared buffers is a big performance hit.\n\n> \n> Finally sort_mem:\n> Was until recently left at the default of 1000. Is now 16000.\n\nSort memory is per sort, not per query or per connection. So depending upon how \nmany concurrent connections you entertain, it could take quite a chunk of RAM.\n> \n> Increasing the effective cache and sort mem didn't seem to make much of \n> a difference.
I'm guessing the eff cache was probably raised a bit too \n> much, and shared_buffers is way too high.\n\nI agree. For shared buffers start with 5000 and increase in batches of 1000. Or \nset it to a high value and check with ipcs for maximum shared memory usage. If \nshared memory usage peaks at 100MB, you don't need more than say 120MB of buffers.\n\n> \n> What can I do to help determine what the proper settings should be \n> and/or look at other possible choke points? What should I look for in \n> iostat, mpstat, or vmstat as red flags that we are cpu, memory, or i/o bound?\n\nYes. vmstat is usually a lot of help to locate the bottleneck.\n\n> DB maintenance-wise, I don't believe they were running vacuum full until \n> I told them a few months ago that regular vacuum analyze no longer \n> cleans out dead tuples. Now normal vac is run daily, vac full weekly \n> (supposedly). How can I tell from the output of vacuum if the vac fulls \n> aren't being done, or not done often enough? Or from the system tables, \n> what can I read?\n\nIn 7.4 you can do vacuum full verbose and it will tell you the stats at the end. \nFor 7.3.x, it's not there.\n\nI suggest you vacuum full the database once. (For a large database, dumping and restoring \nmight work faster. Dump/restore and vacuum full both lock the database \nexclusively, i.e. downtime. So I guess the faster the better for you. But there is no \ntool/guideline to determine which way to go.)\n\n> Is there anywhere else I can look for possible clues? I have access to \n> the DB super-user, but not the system root/user.\n\nOther than hardware tuning, find out the slow/frequent queries. Use explain analyze \nto determine why they are so slow. Forgetting to typecast a where clause and \nusing a sequential scan could cost you a lot more than a mistuned postgresql \nconfiguration.\n\n> Thank you for your time. Please let me know any help or suggestions you \n> may have. Unfortunately upgrading postgres, OS, kernel, or re-writing \n> schema is most likely not an option.\n\nI hope you can change your queries.\n\nHTH\n\n Shridhar\n\n", "msg_date": "Wed, 12 May 2004 14:32:54 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on some settings" }, { "msg_contents": "On Wed, 2004-05-12 at 05:02, Shridhar Daithankar wrote:\n> I agree. For shared buffers start with 5000 and increase in batches of 1000. Or \n> set it to a high value and check with ipcs for maximum shared memory usage. If \n> shared memory usage peaks at 100MB, you don't need more than say 120MB of buffers.\n\nIf your DB touches more than 100MB worth of buffers over time, shared\nmemory consumption won't peak at 100MB. PG shared buffers are only\n\"recycled\" when there are no unused buffers available, so this isn't a\nreally valid way to determine the right shared_buffers setting.\n\n-Neil\n\n\n", "msg_date": "Thu, 13 May 2004 02:44:07 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on some settings" }, { "msg_contents": "Note that effective_cache_size is merely a hint to the planner to say \n\"I have this much os buffer cache to use\" - it is not actually allocated.\n\nIt is shared_buffers that will hurt you if it is too high (10000 - 25000 \nis the usual sweet spot).\n\nbest wishes\n\nMark\n\n\nShridhar Daithankar wrote:\n\n>\n>>\n>> Increasing the effective cache and sort mem didn't seem to make much \n>> of a difference.
I'm guessing the eff cache was probably raised a bit \n>> too much, and shared_buffers is way too high.\n>\n>\n> I agree. For shared buffers start with 5000 and increase in batches of \n> 1000. Or set it to a high value and check with ipcs for maximum shared \n> memory usage. If shared memory usage peaks at 100MB, you don't need \n> more than say 120MB of buffers.\n>\n>\n>\n", "msg_date": "Thu, 13 May 2004 19:31:29 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on some settings" } ]
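To make the page counting in this thread concrete, here is a small sketch (an added illustration, not part of the original exchange) that converts the values discussed above into megabytes, using the 8 kB page size referred to earlier. shared_buffers and effective_cache_size are counted in 8 kB pages, while sort_mem is counted in kilobytes and is allocated per sort; the figure of 50 concurrent sorts below is an invented example, not a number from the discussion.

    # Illustrative sketch: converting the settings discussed above into MB.
    # Assumptions: 8 kB pages; sort_mem is in kB and is per sort, not per connection.
    PAGE_KB = 8

    def pages_to_mb(pages):
        return pages * PAGE_KB / 1024.0

    settings = [
        ("shared_buffers (as configured)", 250000),        # the value questioned above
        ("shared_buffers (suggested range, upper)", 15000),
        ("effective_cache_size (as configured)", 180000),
    ]
    for name, pages in settings:
        print(f"{name:42s} {pages:7d} pages ~ {pages_to_mb(pages):6.0f} MB")

    sort_mem_kb = 16000       # the value mentioned in the thread
    concurrent_sorts = 50     # hypothetical concurrency, not from the thread
    print(f"sort_mem worst case ~ {sort_mem_kb * concurrent_sorts / 1024.0:.0f} MB")

The arithmetic lines up with the figures quoted in the messages above: 250000 pages is roughly the 2G mentioned, 15000 pages lands near the 120MB Shridhar refers to, and 180000 pages is about the 1.5G reported for effective_cache_size.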