[
{
"msg_contents": "Hi,\n\nI have a pretty trivial query that seems to take an excessive amount of time\nto run, and while doing so is only consuming 5-10% of CPU. It looks like the\nrest of the time is spent doing disk access (the hd is certainly grinding\naway a lot).\n\nI'm not sure whether this is able to be improved, and if it is, what to\ntweak.\n\nAdmittedly the tables are pretty large, but a test using dd shows that\nI'm getting something like 30M/sec off disk (without hitting cache), and\nwhile the query is running the kernel has ~1.7G worth of cache available.\n\nThe tables and indices in question appear to be around 200M each, which\nI would have thought would fit in cache quite nicely.\n\nThe machine itself is a 3GHz P4 w/ 2G memory. I don't have root on it, so\nI haven't been able to play with hdparm too much, but I have requested that\nit be set up with hdparm -u1 -d1 -m16 (which is my default guess for disk\ntuning parameters).\n\nThanks,\n\nToby.\n\nThe relevant data (sorry about the long lines) is:\n\npostgres config:\n\nshared_buffers = 32768\nmax_fsm_relations = 100\nmax_fsm_pages = 50000\nsort_mem = 16384\nvacuum_mem = 65536\neffective_cache_size = 163840\n\ntable sizes:\n\nsargeant=> select relname, relpages from pg_class where relname like 'seq_text%' order by relpages desc;\n relname | relpages\n-----------------------------+----------\n seq_text | 55764\n seq_text_text_index | 30343\n seq_text_text_lindex | 30343\n seq_text_map | 28992\n seq_text_map_seq_index | 22977\n seq_text_pkey | 7528\n seq_text_map_seq_text_index | 6478\n seq_text_id_seq | 1\n(8 rows)\n\nquery:\n\nsargeant=> explain analyze select seq_md5sum, seq_alphabet from seq_text_map, seq_text where lower(text) like '%porin%' and seq_text.id = seq_text_map.seq_text_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..448139.41 rows=14786 width=52) (actual time=1582.24..125624.92 rows=2140 loops=1)\n Merge Cond: (\"outer\".seq_text_id = \"inner\".id)\n -> Index Scan using seq_text_map_seq_text_index on seq_text_map (cost=0.00..154974.74 rows=2957158 width=48) (actual time=23.04..110877.65 rows=2956147 loops=1)\n -> Index Scan using seq_text_pkey on seq_text (cost=0.00..285540.03 rows=17174 width=4) (actual time=71.51..12260.38 rows=3077 loops=1)\n Filter: (lower(text) ~~ '%porin%'::text)\n Total runtime: 125627.45 msec\n(6 rows)",
"msg_date": "Tue, 1 Jul 2003 13:19:23 +1000",
"msg_from": "Toby Sargeant <[email protected]>",
"msg_from_op": true,
"msg_subject": "excessive disk access during query"
},
{
"msg_contents": "Toby Sargeant <[email protected]> writes:\n> Merge Join (cost=0.00..448139.41 rows=14786 width=52) (actual time=1582.24..125624.92 rows=2140 loops=1)\n> Merge Cond: (\"outer\".seq_text_id = \"inner\".id)\n> -> Index Scan using seq_text_map_seq_text_index on seq_text_map (cost=0.00..154974.74 rows=2957158 width=48) (actual time=23.04..110877.65 rows=2956147 loops=1)\n> -> Index Scan using seq_text_pkey on seq_text (cost=0.00..285540.03 rows=17174 width=4) (actual time=71.51..12260.38 rows=3077 loops=1)\n> Filter: (lower(text) ~~ '%porin%'::text)\n> Total runtime: 125627.45 msec\n\nI'm surprised it doesn't try to use a hash join instead. Are the\ndatatypes of seq_text_id and id different (if so, can you make them the\nsame?) What sorts of plans and timings do you get if you flip\nenable_mergejoin and/or enable_indexscan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jul 2003 09:42:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: excessive disk access during query "
}
] |
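Tom's suggestion can be tried directly from psql without touching the server configuration; the SET commands below affect only the current session. This is a minimal sketch that reuses the query and table names from the post above:

    -- confirm the join columns have matching types (a mismatch can rule out a hash join)
    \d seq_text
    \d seq_text_map

    -- compare plans with the merge join (and then index scans) disabled for this session
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE
      SELECT seq_md5sum, seq_alphabet
        FROM seq_text_map, seq_text
       WHERE lower(text) LIKE '%porin%'
         AND seq_text.id = seq_text_map.seq_text_id;

    SET enable_indexscan = off;
    EXPLAIN ANALYZE
      SELECT seq_md5sum, seq_alphabet
        FROM seq_text_map, seq_text
       WHERE lower(text) LIKE '%porin%'
         AND seq_text.id = seq_text_map.seq_text_id;

    RESET enable_mergejoin;
    RESET enable_indexscan;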
[
{
"msg_contents": "I insert data every second in my table. Every minute I delete from the \ntable some row to keep max 10000 rows in the table.\nAt the beginning deletes consume about 20% CPU time. After 24 houts \nevery delete needs up tu 100% CPU time (updates too).\nVacuuming doesn't help.\nAfter I restart postmaster, it works again very quick.\nAny ideas?\n\nThanks,\nJuraj\n\nDelete query:\n\n DELETE FROM tbl\nWHERE time_stamp >= 0.0 AND\n time_stamp < (SELECT max(time_stamp)\n FROM (SELECT time_stamp\n FROM tbl ORDER BY time_stamp, \nid_event_archive ASC LIMIT 222) AS t)\n\nPK: id_event_archive\nIndex: time_stamp\n\nPostgres version: 7.3.3.\nOS: Solaris 2.8\n\n\n",
"msg_date": "Tue, 01 Jul 2003 09:47:17 +0200",
"msg_from": "Juraj Porada <[email protected]>",
"msg_from_op": true,
"msg_subject": "slower with the time"
},
{
"msg_contents": "On Tuesday 01 July 2003 13:17, Juraj Porada wrote:\n> I insert data every second in my table. Every minute I delete from the\n> table some row to keep max 10000 rows in the table.\n> At the beginning deletes consume about 20% CPU time. After 24 houts\n> every delete needs up tu 100% CPU time (updates too).\n> Vacuuming doesn't help.\n> After I restart postmaster, it works again very quick.\n> Any ideas?\n\nPostmaster does not consume CPU for simple things like this unless it does not \nhave enough shared buffers. \n\nWhat is your shared buffer setting? Can you tune it according to available \nRAM, dataset size and type of workload?\n\n Shridhar\n\n",
"msg_date": "Tue, 1 Jul 2003 13:18:00 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slower with the time"
},
{
"msg_contents": "shared_buffers = 32\n\nI don't have much experience in tuning the database, but I think there \nis a problem with a fragmentation of memory or so.\nI don't known backgrounds.\n\nJuraj\n\nShridhar Daithankar schrieb:\n\n>On Tuesday 01 July 2003 13:17, Juraj Porada wrote:\n> \n>\n>>I insert data every second in my table. Every minute I delete from the\n>>table some row to keep max 10000 rows in the table.\n>>At the beginning deletes consume about 20% CPU time. After 24 houts\n>>every delete needs up tu 100% CPU time (updates too).\n>>Vacuuming doesn't help.\n>>After I restart postmaster, it works again very quick.\n>>Any ideas?\n>> \n>>\n>\n>Postmaster does not consume CPU for simple things like this unless it does not \n>have enough shared buffers. \n>\n>What is your shared buffer setting? Can you tune it according to available \n>RAM, dataset size and type of workload?\n>\n> Shridhar\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n> \n>\n\n\n\n\n\n\n\n\nshared_buffers = 32\n\nI don't have much experience in tuning the database, but I think there is\na problem with a fragmentation of memory or so.\nI don't known backgrounds. \n\nJuraj\n\nShridhar Daithankar schrieb:\n\nOn Tuesday 01 July 2003 13:17, Juraj Porada wrote:\n \n\nI insert data every second in my table. Every minute I delete from the\ntable some row to keep max 10000 rows in the table.\nAt the beginning deletes consume about 20% CPU time. After 24 houts\nevery delete needs up tu 100% CPU time (updates too).\nVacuuming doesn't help.\nAfter I restart postmaster, it works again very quick.\nAny ideas?\n \n\n\nPostmaster does not consume CPU for simple things like this unless it does not \nhave enough shared buffers. \n\nWhat is your shared buffer setting? Can you tune it according to available \nRAM, dataset size and type of workload?\n\n Shridhar\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org",
"msg_date": "Tue, 01 Jul 2003 10:10:08 +0200",
"msg_from": "Juraj Porada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slower with the time"
},
{
"msg_contents": "On Tuesday 01 July 2003 13:40, Juraj Porada wrote:\n> shared_buffers = 32\n\nThat is 32*8=256KB of memory. I thought default was 64. How much physical \nmemory you have?\n\nI suggest you set it up something like 256 to start with. That may be too \nsmall as well but you haven't provided enough details to come up with a \nbetter one.\n\n>\n> I don't have much experience in tuning the database, but I think there\n> is a problem with a fragmentation of memory or so.\n> I don't known backgrounds.\n\nRead postgresql.conf and admin guide about runtime parameters. You need to \ntune shared buffers and effective_cache_size at least.\n\nSearch performance archives about tuning these two. There is lot of material \nto cover in a single mail.\n\n HTH\n\n Shridhar\n\n",
"msg_date": "Tue, 1 Jul 2003 13:47:06 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slower with the time"
}
] |
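Shridhar's advice amounts to a postgresql.conf change plus a postmaster restart. A rough sketch, with 256 buffers only as a starting figure rather than a recommendation for this particular machine:

    # postgresql.conf (PostgreSQL 7.3)
    shared_buffers = 256    # 256 * 8 kB = 2 MB of shared memory; raise further
                            # once the machine's RAM and workload are known

For a table that takes inserts every second and deletes every minute, 7.3-era setups also commonly needed the index rebuilt now and then in addition to frequent vacuuming; this is an extra, hedged suggestion rather than something stated in the thread:

    VACUUM ANALYZE tbl;
    REINDEX TABLE tbl;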
[
{
"msg_contents": "I'm just trying to improve performance on version 7 before doing some tests and hopefully upgrading to 7.3.\n\nAt the moment we have \nB=64 (no of shared buffers)\nN=32 (no of connections)\nin postmaster.opt which I take it is the equivalent of the new postgresql.conf file.\n\n From all that is being written about later versions I suspect that this is far too low. Would I be fairly safe in making the no of shared buffers larger? Also is there an equivalent of effective_cache_size that I can set for version 7?\n\nMany thanks in advance\nHilary\n\n\n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n",
"msg_date": "Tue, 01 Jul 2003 13:10:08 +0100",
"msg_from": "Hilary Forbes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Version 7 question"
},
{
"msg_contents": "I have my shared buffers at 8192 and my effective cache at 64000 (which is\n500 megs). Depends a lot on how much RAM you have. I have 1.5 gigs and\nI've been asking my boss for another 512megs for over a month now. I have\nno idea if my buffers are too high/low.\n\nMichael\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Hilary\n> Forbes\n> Sent: Tuesday, July 01, 2003 2:10 PM\n> To: [email protected]\n> Subject: [PERFORM] Version 7 question\n>\n>\n> I'm just trying to improve performance on version 7 before doing\n> some tests and hopefully upgrading to 7.3.\n>\n> At the moment we have\n> B=64 (no of shared buffers)\n> N=32 (no of connections)\n> in postmaster.opt which I take it is the equivalent of the new\n> postgresql.conf file.\n>\n> From all that is being written about later versions I suspect\n> that this is far too low. Would I be fairly safe in making the\n> no of shared buffers larger? Also is there an equivalent of\n> effective_cache_size that I can set for version 7?\n>\n> Many thanks in advance\n> Hilary\n>\n>\n>\n>\n> Hilary Forbes\n> -------------\n> DMR Computer Limited: http://www.dmr.co.uk/\n> Direct line: 01689 889950\n> Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> E-mail: [email protected]\n>\n> **********************************************************\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\n",
"msg_date": "Tue, 1 Jul 2003 14:17:08 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "On Tue, 2003-07-01 at 08:10, Hilary Forbes wrote:\n> I'm just trying to improve performance on version 7 before doing some\ntests and hopefully upgrading to 7.3.\n> \n> At the moment we have \n> B=64 (no of shared buffers)\n> N=32 (no of connections)\n> in postmaster.opt which I take it is the equivalent of the new\npostgresql.conf file.\n> \n> From all that is being written about later versions I suspect that\n>this is far too low. Would I be fairly safe in making the no of shared\n>buffers larger? \n\nyes, I'd say start with about 25% of RAM, then adjust from there. If 25%\ntakes you over your SHMMAX then start at your SHMMAX. \n\n>Also is there an equivalent of effective_cache_size that I can set for\n>version 7?\n> \n\nIf by 7 your mean 7.0.x then I don't believe so, been awhile though, I\ncould be wrong. IMHO no amount of tuning you can do in 7.0 would be as\neffective as an upgrade, after setting your shared buffers up, I'd put\nyour efforts into upgrading. (Note Beta test for 7.4 starts in 2 weeks) \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "01 Jul 2003 08:52:26 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "8192 is only 64 megs of RAM, not much, but a good number. Keep in mind \nthat the kernel tends to be better at buffering huge amounts of disk, \nwhile postgresql is better left to use buffers that are large enough for \nthe current working set (i.e. not your whole database, just the largest \namount of data you're slinging about on a regular basis in one query.)\n\nOn a machine with 1.5 gig of RAM, I've found settings as high as 32768 \n(256 megs of ram) to run well, but anything over that doesn't help. Of \ncourse, we don't toss around more than a hundred meg or so at a time. If \nour result sets were in the gigabyte range, I'd A: want more memory and B: \nGive more of it to postgresql.\n\nThe original poster was, I believe running 7.0.x, which is way old, so no, \nI don't think there was an equivalent of effective_cache_size in that \nversion. Upgrading would be far easier than performance tuning 7.0. since \nthe query planner was much simpler (i.e. more prone to make bad decisions) \nin 7.0.\n\nOn Tue, 1 Jul 2003, Michael Mattox wrote:\n\n> I have my shared buffers at 8192 and my effective cache at 64000 (which is\n> 500 megs). Depends a lot on how much RAM you have. I have 1.5 gigs and\n> I've been asking my boss for another 512megs for over a month now. I have\n> no idea if my buffers are too high/low.\n> \n> Michael\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Hilary\n> > Forbes\n> > Sent: Tuesday, July 01, 2003 2:10 PM\n> > To: [email protected]\n> > Subject: [PERFORM] Version 7 question\n> >\n> >\n> > I'm just trying to improve performance on version 7 before doing\n> > some tests and hopefully upgrading to 7.3.\n> >\n> > At the moment we have\n> > B=64 (no of shared buffers)\n> > N=32 (no of connections)\n> > in postmaster.opt which I take it is the equivalent of the new\n> > postgresql.conf file.\n> >\n> > From all that is being written about later versions I suspect\n> > that this is far too low. Would I be fairly safe in making the\n> > no of shared buffers larger? Also is there an equivalent of\n> > effective_cache_size that I can set for version 7?\n> >\n> > Many thanks in advance\n> > Hilary\n> >\n> >\n> >\n> >\n> > Hilary Forbes\n> > -------------\n> > DMR Computer Limited: http://www.dmr.co.uk/\n> > Direct line: 01689 889950\n> > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > E-mail: [email protected]\n> >\n> > **********************************************************\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faqs/FAQ.html\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n",
"msg_date": "Tue, 1 Jul 2003 06:55:55 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "> yes, I'd say start with about 25% of RAM, then adjust from there. If 25%\n> takes you over your SHMMAX then start at your SHMMAX.\n\nYou're the first person I've seen to suggest that many buffers. I've read\nthat too many can slow down performance. I have 1.5 gigs of RAM on my\nserver but I'm also running a few other java programs that take up probably\n500 megs total of memory, leaving me 1gig for Postgres. Should I set my\nshared buffers to be 25% of 1gig? That would be 32768. Then what should my\neffective cache be? Right now I have it set to 64000 which would be\n512megs. Between the buffers and cache that'd be a total of 768megs,\nleaving approximately 768 for my other java apps & the OS.\n\nSounds reasonable to me.\n\nMichael\n\n\n\n",
"msg_date": "Tue, 1 Jul 2003 15:02:21 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "On Tue, 1 Jul 2003 15:02:21 +0200, \"Michael Mattox\"\n<[email protected]> wrote:\n>I have 1.5 gigs of RAM on my\n>server but I'm also running a few other java programs that take up probably\n>500 megs total of memory, leaving me 1gig for Postgres. Should I set my\n>shared buffers to be 25% of 1gig? That would be 32768. Then what should my\n>effective cache be? Right now I have it set to 64000 which would be\n>512megs. Between the buffers and cache that'd be a total of 768megs,\n>leaving approximately 768 for my other java apps & the OS.\n\nMichael, by setting effective_cache_size you do not allocate anything.\nThis configuration variable is just a *hint* to the planner how much\nRAM is used for caching on your system (as shown by top or free).\n\nServus\n Manfred\n",
"msg_date": "Tue, 01 Jul 2003 20:01:54 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "Hi Hillary,\n\nI'd suggest around 1000 to 2000 shared buffers and bump your max connections\nto at least 64.\n\nMake sure you're kernel allowed enough shared memory for the above (2000 *\n8k = 16MB)\n\nChris\n\n----- Original Message ----- \nFrom: \"Hilary Forbes\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, July 01, 2003 8:10 PM\nSubject: [PERFORM] Version 7 question\n\n\n> I'm just trying to improve performance on version 7 before doing some\ntests and hopefully upgrading to 7.3.\n>\n> At the moment we have\n> B=64 (no of shared buffers)\n> N=32 (no of connections)\n> in postmaster.opt which I take it is the equivalent of the new\npostgresql.conf file.\n>\n> From all that is being written about later versions I suspect that this\nis far too low. Would I be fairly safe in making the no of shared buffers\nlarger? Also is there an equivalent of effective_cache_size that I can set\nfor version 7?\n>\n> Many thanks in advance\n> Hilary\n>\n>\n>\n>\n> Hilary Forbes\n> -------------\n> DMR Computer Limited: http://www.dmr.co.uk/\n> Direct line: 01689 889950\n> Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> E-mail: [email protected]\n>\n> **********************************************************\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n",
"msg_date": "Wed, 2 Jul 2003 09:37:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
}
] |
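Since 7.0 predates postgresql.conf, Christopher's numbers would be applied through the postmaster's -B and -N switches (or whichever startup file feeds B= and N= on this system). The figures below are illustrative only, and the kernel's SHMMAX must allow the roughly 16 MB of shared memory they imply:

    # PostgreSQL 7.0.x: buffers and connections as postmaster switches
    postmaster -D $PGDATA -B 2000 -N 64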
[
{
"msg_contents": "What would be the best value range for effective_cache_size\non Postgres 7.3.2, assuming say 1.5 GB of RAM and\nshared_buffers set to 8192, and shmmax set to 750mb?\n\nAnd what are the most important factors one should take\ninto account in determining the value?\n\n\n\n> -----Original Message-----\n> From:\tscott.marlowe [SMTP:[email protected]]\n> Sent:\t01 July 2003 02:56\n> To:\tMichael Mattox\n> Cc:\tHilary Forbes; [email protected]\n> Subject:\tRe: [PERFORM] Version 7 question\n> \n> 8192 is only 64 megs of RAM, not much, but a good number. Keep in mind \n> that the kernel tends to be better at buffering huge amounts of disk, \n> while postgresql is better left to use buffers that are large enough for \n> the current working set (i.e. not your whole database, just the largest \n> amount of data you're slinging about on a regular basis in one query.)\n> \n> On a machine with 1.5 gig of RAM, I've found settings as high as 32768 \n> (256 megs of ram) to run well, but anything over that doesn't help. Of \n> course, we don't toss around more than a hundred meg or so at a time. If\n> \n> our result sets were in the gigabyte range, I'd A: want more memory and B:\n> \n> Give more of it to postgresql.\n> \n> The original poster was, I believe running 7.0.x, which is way old, so no,\n> \n> I don't think there was an equivalent of effective_cache_size in that \n> version. Upgrading would be far easier than performance tuning 7.0. since\n> \n> the query planner was much simpler (i.e. more prone to make bad decisions)\n> \n> in 7.0.\n> \n> On Tue, 1 Jul 2003, Michael Mattox wrote:\n> \n> > I have my shared buffers at 8192 and my effective cache at 64000 (which\n> is\n> > 500 megs). Depends a lot on how much RAM you have. I have 1.5 gigs and\n> > I've been asking my boss for another 512megs for over a month now. I\n> have\n> > no idea if my buffers are too high/low.\n> > \n> > Michael\n> > \n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Hilary\n> > > Forbes\n> > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > To: [email protected]\n> > > Subject: [PERFORM] Version 7 question\n> > >\n> > >\n> > > I'm just trying to improve performance on version 7 before doing\n> > > some tests and hopefully upgrading to 7.3.\n> > >\n> > > At the moment we have\n> > > B=64 (no of shared buffers)\n> > > N=32 (no of connections)\n> > > in postmaster.opt which I take it is the equivalent of the new\n> > > postgresql.conf file.\n> > >\n> > > From all that is being written about later versions I suspect\n> > > that this is far too low. Would I be fairly safe in making the\n> > > no of shared buffers larger? 
Also is there an equivalent of\n> > > effective_cache_size that I can set for version 7?\n> > >\n> > > Many thanks in advance\n> > > Hilary\n> > >\n> > >\n> > >\n> > >\n> > > Hilary Forbes\n> > > -------------\n> > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > Direct line: 01689 889950\n> > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > E-mail: [email protected]\n> > >\n> > > **********************************************************\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > >\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faqs/FAQ.html\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Tue, 1 Jul 2003 15:06:04 +0200 ",
"msg_from": "Howard Oblowitz <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: Version 7 question"
},
{
"msg_contents": "The best way to set it is to let the machine run under normal load for a \nwhile, then look at the cache / buffer usage using top (or some other \nprogram, top works fine for this).\n\nMy server with 1.5 gig ram shows 862328K cache right now. So, just divide \nby page size (usually 8192) and I get ~ 100,000 blocks.\n\nOn Tue, 1 Jul 2003, Howard Oblowitz wrote:\n\n> What would be the best value range for effective_cache_size\n> on Postgres 7.3.2, assuming say 1.5 GB of RAM and\n> shared_buffers set to 8192, and shmmax set to 750mb?\n> \n> And what are the most important factors one should take\n> into account in determining the value?\n> \n> \n> \n> > -----Original Message-----\n> > From:\tscott.marlowe [SMTP:[email protected]]\n> > Sent:\t01 July 2003 02:56\n> > To:\tMichael Mattox\n> > Cc:\tHilary Forbes; [email protected]\n> > Subject:\tRe: [PERFORM] Version 7 question\n> > \n> > 8192 is only 64 megs of RAM, not much, but a good number. Keep in mind \n> > that the kernel tends to be better at buffering huge amounts of disk, \n> > while postgresql is better left to use buffers that are large enough for \n> > the current working set (i.e. not your whole database, just the largest \n> > amount of data you're slinging about on a regular basis in one query.)\n> > \n> > On a machine with 1.5 gig of RAM, I've found settings as high as 32768 \n> > (256 megs of ram) to run well, but anything over that doesn't help. Of \n> > course, we don't toss around more than a hundred meg or so at a time. If\n> > \n> > our result sets were in the gigabyte range, I'd A: want more memory and B:\n> > \n> > Give more of it to postgresql.\n> > \n> > The original poster was, I believe running 7.0.x, which is way old, so no,\n> > \n> > I don't think there was an equivalent of effective_cache_size in that \n> > version. Upgrading would be far easier than performance tuning 7.0. since\n> > \n> > the query planner was much simpler (i.e. more prone to make bad decisions)\n> > \n> > in 7.0.\n> > \n> > On Tue, 1 Jul 2003, Michael Mattox wrote:\n> > \n> > > I have my shared buffers at 8192 and my effective cache at 64000 (which\n> > is\n> > > 500 megs). Depends a lot on how much RAM you have. I have 1.5 gigs and\n> > > I've been asking my boss for another 512megs for over a month now. I\n> > have\n> > > no idea if my buffers are too high/low.\n> > > \n> > > Michael\n> > > \n> > > > -----Original Message-----\n> > > > From: [email protected]\n> > > > [mailto:[email protected]]On Behalf Of Hilary\n> > > > Forbes\n> > > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > > To: [email protected]\n> > > > Subject: [PERFORM] Version 7 question\n> > > >\n> > > >\n> > > > I'm just trying to improve performance on version 7 before doing\n> > > > some tests and hopefully upgrading to 7.3.\n> > > >\n> > > > At the moment we have\n> > > > B=64 (no of shared buffers)\n> > > > N=32 (no of connections)\n> > > > in postmaster.opt which I take it is the equivalent of the new\n> > > > postgresql.conf file.\n> > > >\n> > > > From all that is being written about later versions I suspect\n> > > > that this is far too low. Would I be fairly safe in making the\n> > > > no of shared buffers larger? 
Also is there an equivalent of\n> > > > effective_cache_size that I can set for version 7?\n> > > >\n> > > > Many thanks in advance\n> > > > Hilary\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > Hilary Forbes\n> > > > -------------\n> > > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > > Direct line: 01689 889950\n> > > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > > E-mail: [email protected]\n> > > >\n> > > > **********************************************************\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > >\n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > > \n> > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n",
"msg_date": "Tue, 1 Jul 2003 07:19:31 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Version 7 question"
},
{
"msg_contents": "My understanding is to use as much effect cache as possible, so figure out\nhow much ram you need for your other applications & OS and then give the\nrest to postgres as effective cache.\n\nWhat I learned to day is the shared_buffers 25% of RAM guideline.\n\nMichael\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Howard\n> Oblowitz\n> Sent: Tuesday, July 01, 2003 3:06 PM\n> To: [email protected]\n> Subject: FW: [PERFORM] Version 7 question\n>\n>\n> What would be the best value range for effective_cache_size\n> on Postgres 7.3.2, assuming say 1.5 GB of RAM and\n> shared_buffers set to 8192, and shmmax set to 750mb?\n>\n> And what are the most important factors one should take\n> into account in determining the value?\n>\n>\n>\n> > -----Original Message-----\n> > From:\tscott.marlowe [SMTP:[email protected]]\n> > Sent:\t01 July 2003 02:56\n> > To:\tMichael Mattox\n> > Cc:\tHilary Forbes; [email protected]\n> > Subject:\tRe: [PERFORM] Version 7 question\n> >\n> > 8192 is only 64 megs of RAM, not much, but a good number. Keep in mind\n> > that the kernel tends to be better at buffering huge amounts of disk,\n> > while postgresql is better left to use buffers that are large\n> enough for\n> > the current working set (i.e. not your whole database, just the largest\n> > amount of data you're slinging about on a regular basis in one query.)\n> >\n> > On a machine with 1.5 gig of RAM, I've found settings as high as 32768\n> > (256 megs of ram) to run well, but anything over that doesn't help. Of\n> > course, we don't toss around more than a hundred meg or so at a\n> time. If\n> >\n> > our result sets were in the gigabyte range, I'd A: want more\n> memory and B:\n> >\n> > Give more of it to postgresql.\n> >\n> > The original poster was, I believe running 7.0.x, which is way\n> old, so no,\n> >\n> > I don't think there was an equivalent of effective_cache_size in that\n> > version. Upgrading would be far easier than performance tuning\n> 7.0. since\n> >\n> > the query planner was much simpler (i.e. more prone to make bad\n> decisions)\n> >\n> > in 7.0.\n> >\n> > On Tue, 1 Jul 2003, Michael Mattox wrote:\n> >\n> > > I have my shared buffers at 8192 and my effective cache at\n> 64000 (which\n> > is\n> > > 500 megs). Depends a lot on how much RAM you have. I have\n> 1.5 gigs and\n> > > I've been asking my boss for another 512megs for over a month now. I\n> > have\n> > > no idea if my buffers are too high/low.\n> > >\n> > > Michael\n> > >\n> > > > -----Original Message-----\n> > > > From: [email protected]\n> > > > [mailto:[email protected]]On Behalf Of Hilary\n> > > > Forbes\n> > > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > > To: [email protected]\n> > > > Subject: [PERFORM] Version 7 question\n> > > >\n> > > >\n> > > > I'm just trying to improve performance on version 7 before doing\n> > > > some tests and hopefully upgrading to 7.3.\n> > > >\n> > > > At the moment we have\n> > > > B=64 (no of shared buffers)\n> > > > N=32 (no of connections)\n> > > > in postmaster.opt which I take it is the equivalent of the new\n> > > > postgresql.conf file.\n> > > >\n> > > > From all that is being written about later versions I suspect\n> > > > that this is far too low. Would I be fairly safe in making the\n> > > > no of shared buffers larger? 
Also is there an equivalent of\n> > > > effective_cache_size that I can set for version 7?\n> > > >\n> > > > Many thanks in advance\n> > > > Hilary\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > Hilary Forbes\n> > > > -------------\n> > > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > > Direct line: 01689 889950\n> > > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > > E-mail: [email protected]\n> > > >\n> > > > **********************************************************\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n",
"msg_date": "Tue, 1 Jul 2003 15:55:22 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "I think you're confusing effect_cache_size with shared_buffers. \neffective_cache_size tells the planner about how much disk cache the OS is \nusing for postgresql behind its back, so to speak.\n\nOn Tue, 1 Jul 2003, Michael Mattox wrote:\n\n> My understanding is to use as much effect cache as possible, so figure out\n> how much ram you need for your other applications & OS and then give the\n> rest to postgres as effective cache.\n> \n> What I learned to day is the shared_buffers 25% of RAM guideline.\n> \n> Michael\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Howard\n> > Oblowitz\n> > Sent: Tuesday, July 01, 2003 3:06 PM\n> > To: [email protected]\n> > Subject: FW: [PERFORM] Version 7 question\n> >\n> >\n> > What would be the best value range for effective_cache_size\n> > on Postgres 7.3.2, assuming say 1.5 GB of RAM and\n> > shared_buffers set to 8192, and shmmax set to 750mb?\n> >\n> > And what are the most important factors one should take\n> > into account in determining the value?\n> >\n> >\n> >\n> > > -----Original Message-----\n> > > From:\tscott.marlowe [SMTP:[email protected]]\n> > > Sent:\t01 July 2003 02:56\n> > > To:\tMichael Mattox\n> > > Cc:\tHilary Forbes; [email protected]\n> > > Subject:\tRe: [PERFORM] Version 7 question\n> > >\n> > > 8192 is only 64 megs of RAM, not much, but a good number. Keep in mind\n> > > that the kernel tends to be better at buffering huge amounts of disk,\n> > > while postgresql is better left to use buffers that are large\n> > enough for\n> > > the current working set (i.e. not your whole database, just the largest\n> > > amount of data you're slinging about on a regular basis in one query.)\n> > >\n> > > On a machine with 1.5 gig of RAM, I've found settings as high as 32768\n> > > (256 megs of ram) to run well, but anything over that doesn't help. Of\n> > > course, we don't toss around more than a hundred meg or so at a\n> > time. If\n> > >\n> > > our result sets were in the gigabyte range, I'd A: want more\n> > memory and B:\n> > >\n> > > Give more of it to postgresql.\n> > >\n> > > The original poster was, I believe running 7.0.x, which is way\n> > old, so no,\n> > >\n> > > I don't think there was an equivalent of effective_cache_size in that\n> > > version. Upgrading would be far easier than performance tuning\n> > 7.0. since\n> > >\n> > > the query planner was much simpler (i.e. more prone to make bad\n> > decisions)\n> > >\n> > > in 7.0.\n> > >\n> > > On Tue, 1 Jul 2003, Michael Mattox wrote:\n> > >\n> > > > I have my shared buffers at 8192 and my effective cache at\n> > 64000 (which\n> > > is\n> > > > 500 megs). Depends a lot on how much RAM you have. I have\n> > 1.5 gigs and\n> > > > I've been asking my boss for another 512megs for over a month now. 
I\n> > > have\n> > > > no idea if my buffers are too high/low.\n> > > >\n> > > > Michael\n> > > >\n> > > > > -----Original Message-----\n> > > > > From: [email protected]\n> > > > > [mailto:[email protected]]On Behalf Of Hilary\n> > > > > Forbes\n> > > > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > > > To: [email protected]\n> > > > > Subject: [PERFORM] Version 7 question\n> > > > >\n> > > > >\n> > > > > I'm just trying to improve performance on version 7 before doing\n> > > > > some tests and hopefully upgrading to 7.3.\n> > > > >\n> > > > > At the moment we have\n> > > > > B=64 (no of shared buffers)\n> > > > > N=32 (no of connections)\n> > > > > in postmaster.opt which I take it is the equivalent of the new\n> > > > > postgresql.conf file.\n> > > > >\n> > > > > From all that is being written about later versions I suspect\n> > > > > that this is far too low. Would I be fairly safe in making the\n> > > > > no of shared buffers larger? Also is there an equivalent of\n> > > > > effective_cache_size that I can set for version 7?\n> > > > >\n> > > > > Many thanks in advance\n> > > > > Hilary\n> > > > >\n> > > > >\n> > > > >\n> > > > >\n> > > > > Hilary Forbes\n> > > > > -------------\n> > > > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > > > Direct line: 01689 889950\n> > > > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > > > E-mail: [email protected]\n> > > > >\n> > > > > **********************************************************\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > > >\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Tue, 1 Jul 2003 08:19:20 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
},
{
"msg_contents": "On Tue, 1 Jul 2003, Michael Mattox wrote:\n\n> My understanding is to use as much effect cache as possible, so figure out\n> how much ram you need for your other applications & OS and then give the\n> rest to postgres as effective cache.\n> \n> What I learned to day is the shared_buffers 25% of RAM guideline.\n\n\nNote that the best guideline is the one that your testing shows you makes \nthe most sense. If you never access more than a few megs at a time, then \nthere's no need to have 25% of a machine with 1 gig given over to the \ndatabase's shared buffers, it's better to let the machine cache that for \nyou. If you access hundreds of megs at a time, then 25% of RAM is a good \nidea. Usually 25% of RAM is about the max that gives good results, but in \nsome corner cases, using more still makes sense. Usually at that point, \nyou've also increased sort_mem up a bit too, but be careful, sort_mem is \nPER SORT, not per backend or per database cluster, so it can add up very \nquickly and make the machine run out of RAM.\n\nSetting these settings is a lot like playing Jenga (the game with the \nwooden blocks stacked up where you pull one out and put them on top one at \na time.) Everything seems just fine, the machine's getting faster and \nfaster, everybody's loving life, then you crank one up a little too high, \ncause a swap storm, and the whole thing slows to a crawl.\n\n",
"msg_date": "Tue, 1 Jul 2003 08:22:28 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version 7 question"
}
] |
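Scott's method turns straight into a number: take the kernel's cache figure from top (or free), in kilobytes, and divide by the 8 kB PostgreSQL page size. Using the figure quoted above as a worked example:

    #   862328 kB of kernel disk cache / 8 kB per page ≈ 107,791 pages
    effective_cache_size = 100000    # in 8 kB pages, rounded down as in the post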
[
{
"msg_contents": "Thanks.\n\nSome theoretical questions.\n\nThe documentation says that Effective Cache Size \"sets the optimizer's\nassumption\nabout the effective size of the disk cache ( that is, the portion of the\nkernel's disk \ncache that will be used for PostgreSQL data files ).\n\nWhat then will be the effect of setting this too high?\n\nAnd too low?\n\nHow does it impact on other applications eg Java ?\n\n\n> -----Original Message-----\n> From:\tscott.marlowe [SMTP:[email protected]]\n> Sent:\t01 July 2003 03:20\n> To:\tHoward Oblowitz\n> Cc:\[email protected]\n> Subject:\tRe: FW: [PERFORM] Version 7 question\n> \n> The best way to set it is to let the machine run under normal load for a \n> while, then look at the cache / buffer usage using top (or some other \n> program, top works fine for this).\n> \n> My server with 1.5 gig ram shows 862328K cache right now. So, just divide\n> \n> by page size (usually 8192) and I get ~ 100,000 blocks.\n> \n> On Tue, 1 Jul 2003, Howard Oblowitz wrote:\n> \n> > What would be the best value range for effective_cache_size\n> > on Postgres 7.3.2, assuming say 1.5 GB of RAM and\n> > shared_buffers set to 8192, and shmmax set to 750mb?\n> > \n> > And what are the most important factors one should take\n> > into account in determining the value?\n> > \n> > \n> > \n> > > -----Original Message-----\n> > > From:\tscott.marlowe [SMTP:[email protected]]\n> > > Sent:\t01 July 2003 02:56\n> > > To:\tMichael Mattox\n> > > Cc:\tHilary Forbes; [email protected]\n> > > Subject:\tRe: [PERFORM] Version 7 question\n> > > \n> > > 8192 is only 64 megs of RAM, not much, but a good number. Keep in\n> mind \n> > > that the kernel tends to be better at buffering huge amounts of disk, \n> > > while postgresql is better left to use buffers that are large enough\n> for \n> > > the current working set (i.e. not your whole database, just the\n> largest \n> > > amount of data you're slinging about on a regular basis in one query.)\n> > > \n> > > On a machine with 1.5 gig of RAM, I've found settings as high as 32768\n> \n> > > (256 megs of ram) to run well, but anything over that doesn't help.\n> Of \n> > > course, we don't toss around more than a hundred meg or so at a time.\n> If\n> > > \n> > > our result sets were in the gigabyte range, I'd A: want more memory\n> and B:\n> > > \n> > > Give more of it to postgresql.\n> > > \n> > > The original poster was, I believe running 7.0.x, which is way old, so\n> no,\n> > > \n> > > I don't think there was an equivalent of effective_cache_size in that \n> > > version. Upgrading would be far easier than performance tuning 7.0.\n> since\n> > > \n> > > the query planner was much simpler (i.e. more prone to make bad\n> decisions)\n> > > \n> > > in 7.0.\n> > > \n> > > On Tue, 1 Jul 2003, Michael Mattox wrote:\n> > > \n> > > > I have my shared buffers at 8192 and my effective cache at 64000\n> (which\n> > > is\n> > > > 500 megs). Depends a lot on how much RAM you have. 
I have 1.5 gigs\n> and\n> > > > I've been asking my boss for another 512megs for over a month now.\n> I\n> > > have\n> > > > no idea if my buffers are too high/low.\n> > > > \n> > > > Michael\n> > > > \n> > > > > -----Original Message-----\n> > > > > From: [email protected]\n> > > > > [mailto:[email protected]]On Behalf Of Hilary\n> > > > > Forbes\n> > > > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > > > To: [email protected]\n> > > > > Subject: [PERFORM] Version 7 question\n> > > > >\n> > > > >\n> > > > > I'm just trying to improve performance on version 7 before doing\n> > > > > some tests and hopefully upgrading to 7.3.\n> > > > >\n> > > > > At the moment we have\n> > > > > B=64 (no of shared buffers)\n> > > > > N=32 (no of connections)\n> > > > > in postmaster.opt which I take it is the equivalent of the new\n> > > > > postgresql.conf file.\n> > > > >\n> > > > > From all that is being written about later versions I suspect\n> > > > > that this is far too low. Would I be fairly safe in making the\n> > > > > no of shared buffers larger? Also is there an equivalent of\n> > > > > effective_cache_size that I can set for version 7?\n> > > > >\n> > > > > Many thanks in advance\n> > > > > Hilary\n> > > > >\n> > > > >\n> > > > >\n> > > > >\n> > > > > Hilary Forbes\n> > > > > -------------\n> > > > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > > > Direct line: 01689 889950\n> > > > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > > > E-mail: [email protected]\n> > > > >\n> > > > > **********************************************************\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > > >\n> > > > \n> > > > \n> > > > \n> > > > ---------------------------(end of\n> broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > > \n> > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > > \n> > > \n> > > \n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > > \n> > > http://archives.postgresql.org\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > \n",
"msg_date": "Tue, 1 Jul 2003 15:50:14 +0200 ",
"msg_from": "Howard Oblowitz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Effective Cache Size"
},
{
"msg_contents": "On 1 Jul 2003 at 15:50, Howard Oblowitz wrote:\n\n> The documentation says that Effective Cache Size \"sets the optimizer's\n> assumption\n> about the effective size of the disk cache ( that is, the portion of the\n> kernel's disk \n> cache that will be used for PostgreSQL data files ).\n> \n> What then will be the effect of setting this too high?\n> \n> And too low?\n\nLet's say postgresql is preparing a plan for a scan and it estimates data set \nsize as 100MB whereas your shared buffers+effective cache is 80M. So postgresql \nwould deduce that it would be better off with sequential scan rather than index \nscan. Where in fact you have much more memory to make a file system cache and \nthe machine can afford index scan.\n\nThere is nothing too low or too high of a setting. This isn't exactly \nperformance tuning paramter as other. This is more of information to \npostgresql. The closer it gets to truer, the plans produced would get optimal.\n\nAbout how to set this parameter, it is roughly\n\neffective cache size= (Physical RAM size-shared buffers-requirement for other \napps) * 0.8\n\nThis is very very rough. You need to make sure that some setting does not \ntrigger a swap avelanche\n\nHTH\n\nBye\n Shridhar\n\n--\nSlous' Contention:\tIf you do a job too well, you'll get stuck with it.\n\n",
"msg_date": "Tue, 01 Jul 2003 19:32:49 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effective Cache Size"
},
{
"msg_contents": "Good questions. Basically, telling postgresql it has a larger disk cache \nmakes it favor index operations, smaller makes it favor seq scans.\n\nIf your machine has super fast I/O then you may want it to favor seq \nscans, whereas if you have more CPU power than I/O bandwidth then you'd \nlikely want it to favor index operations.\n\nNote that even if you are running java and it is using a few hundred megs \nof ram, it's quite likely that postgresql is still using most of the disk \ncache, as the memory java is using is likely allocated to hold its created \ndata structures, not stuff loaded from disk.\n\nOn Tue, 1 Jul 2003, Howard Oblowitz wrote:\n\n> Thanks.\n> \n> Some theoretical questions.\n> \n> The documentation says that Effective Cache Size \"sets the optimizer's\n> assumption\n> about the effective size of the disk cache ( that is, the portion of the\n> kernel's disk \n> cache that will be used for PostgreSQL data files ).\n> \n> What then will be the effect of setting this too high?\n> \n> And too low?\n> \n> How does it impact on other applications eg Java ?\n> \n> \n> > -----Original Message-----\n> > From:\tscott.marlowe [SMTP:[email protected]]\n> > Sent:\t01 July 2003 03:20\n> > To:\tHoward Oblowitz\n> > Cc:\[email protected]\n> > Subject:\tRe: FW: [PERFORM] Version 7 question\n> > \n> > The best way to set it is to let the machine run under normal load for a \n> > while, then look at the cache / buffer usage using top (or some other \n> > program, top works fine for this).\n> > \n> > My server with 1.5 gig ram shows 862328K cache right now. So, just divide\n> > \n> > by page size (usually 8192) and I get ~ 100,000 blocks.\n> > \n> > On Tue, 1 Jul 2003, Howard Oblowitz wrote:\n> > \n> > > What would be the best value range for effective_cache_size\n> > > on Postgres 7.3.2, assuming say 1.5 GB of RAM and\n> > > shared_buffers set to 8192, and shmmax set to 750mb?\n> > > \n> > > And what are the most important factors one should take\n> > > into account in determining the value?\n> > > \n> > > \n> > > \n> > > > -----Original Message-----\n> > > > From:\tscott.marlowe [SMTP:[email protected]]\n> > > > Sent:\t01 July 2003 02:56\n> > > > To:\tMichael Mattox\n> > > > Cc:\tHilary Forbes; [email protected]\n> > > > Subject:\tRe: [PERFORM] Version 7 question\n> > > > \n> > > > 8192 is only 64 megs of RAM, not much, but a good number. Keep in\n> > mind \n> > > > that the kernel tends to be better at buffering huge amounts of disk, \n> > > > while postgresql is better left to use buffers that are large enough\n> > for \n> > > > the current working set (i.e. not your whole database, just the\n> > largest \n> > > > amount of data you're slinging about on a regular basis in one query.)\n> > > > \n> > > > On a machine with 1.5 gig of RAM, I've found settings as high as 32768\n> > \n> > > > (256 megs of ram) to run well, but anything over that doesn't help.\n> > Of \n> > > > course, we don't toss around more than a hundred meg or so at a time.\n> > If\n> > > > \n> > > > our result sets were in the gigabyte range, I'd A: want more memory\n> > and B:\n> > > > \n> > > > Give more of it to postgresql.\n> > > > \n> > > > The original poster was, I believe running 7.0.x, which is way old, so\n> > no,\n> > > > \n> > > > I don't think there was an equivalent of effective_cache_size in that \n> > > > version. Upgrading would be far easier than performance tuning 7.0.\n> > since\n> > > > \n> > > > the query planner was much simpler (i.e. 
more prone to make bad\n> > decisions)\n> > > > \n> > > > in 7.0.\n> > > > \n> > > > On Tue, 1 Jul 2003, Michael Mattox wrote:\n> > > > \n> > > > > I have my shared buffers at 8192 and my effective cache at 64000\n> > (which\n> > > > is\n> > > > > 500 megs). Depends a lot on how much RAM you have. I have 1.5 gigs\n> > and\n> > > > > I've been asking my boss for another 512megs for over a month now.\n> > I\n> > > > have\n> > > > > no idea if my buffers are too high/low.\n> > > > > \n> > > > > Michael\n> > > > > \n> > > > > > -----Original Message-----\n> > > > > > From: [email protected]\n> > > > > > [mailto:[email protected]]On Behalf Of Hilary\n> > > > > > Forbes\n> > > > > > Sent: Tuesday, July 01, 2003 2:10 PM\n> > > > > > To: [email protected]\n> > > > > > Subject: [PERFORM] Version 7 question\n> > > > > >\n> > > > > >\n> > > > > > I'm just trying to improve performance on version 7 before doing\n> > > > > > some tests and hopefully upgrading to 7.3.\n> > > > > >\n> > > > > > At the moment we have\n> > > > > > B=64 (no of shared buffers)\n> > > > > > N=32 (no of connections)\n> > > > > > in postmaster.opt which I take it is the equivalent of the new\n> > > > > > postgresql.conf file.\n> > > > > >\n> > > > > > From all that is being written about later versions I suspect\n> > > > > > that this is far too low. Would I be fairly safe in making the\n> > > > > > no of shared buffers larger? Also is there an equivalent of\n> > > > > > effective_cache_size that I can set for version 7?\n> > > > > >\n> > > > > > Many thanks in advance\n> > > > > > Hilary\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > > Hilary Forbes\n> > > > > > -------------\n> > > > > > DMR Computer Limited: http://www.dmr.co.uk/\n> > > > > > Direct line: 01689 889950\n> > > > > > Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> > > > > > E-mail: [email protected]\n> > > > > >\n> > > > > > **********************************************************\n> > > > > >\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > > broadcast)---------------------------\n> > > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > > >\n> > > > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > > > >\n> > > > > \n> > > > > \n> > > > > \n> > > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > > \n> > > > > http://www.postgresql.org/docs/faqs/FAQ.html\n> > > > > \n> > > > \n> > > > \n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > > \n> > > > http://archives.postgresql.org\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n",
"msg_date": "Tue, 1 Jul 2003 08:18:34 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effective Cache Size"
},
{
"msg_contents": "On Tue, 1 Jul 2003 15:50:14 +0200 , Howard Oblowitz\n<[email protected]> wrote:\n>What then will be the effect of setting this too high?\n\nThe planner might choose an index scan where a sequential scan would\nbe faster.\n\n>And too low?\n\nThe planner might choose a sequential scan where an index scan would\nbe faster.\n\n>How does it impact on other applications eg Java ?\n\nIt doesn't -- at least not directly. (There could be very subtle\neffects when Postgres does a sequential scan over a large relation\nthus pushing everything else out of the cache, where an index scan\nwould have read only a small number of pages. Or when a large index\nscan turns your machine from CPU bound to I/O bound.)\n\nServus\n Manfred\n",
"msg_date": "Tue, 01 Jul 2003 20:22:13 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effective Cache Size"
}
] |
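Putting Shridhar's rough formula together with the machine Howard describes (1.5 GB of RAM, shared_buffers = 8192) gives a concrete starting point. The 512 MB allowance for other applications is an assumed figure, not something stated in this thread:

    #   RAM                  1536 MB
    #   shared_buffers       8192 * 8 kB = 64 MB
    #   other applications   ~512 MB (assumed)
    #
    #   (1536 - 64 - 512) * 0.8 ≈ 768 MB ≈ 98,000 pages of 8 kB
    effective_cache_size = 98000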
[
{
"msg_contents": "\nOn 02/07/2003 21:04 Matthew Hixson wrote:\n> We currently have a public website that is serving customers, or at \n> least trying to. This machine is underpowered but we are going to be \n> upgrading soon. In the meantime we need to keep the current site alive.\n> We are running a Java application server. It is receiving \n> 'transaction timed out' SQLExceptions from the JDBC driver. I am \n> wondering if it would be better to raise the transaction timeout or to \n> lower it. On one hand it seems like raising it might improve things. \n> It might let the transactions complete, even though it would make the \n> user experience less enjoyable having to wait longer. On the other hand \n> I could see raising the transaction timeout just cause there to be more \n> transactions in process which would thereby degrade performance since \n> the machine would have even more work to do. Would, in fact, lowering \n> the transaction timeout at least cause the machine to fail fast and \n> return either an error or the page in a more timely manner on a per-user \n> level? I'd like to keep people visiting the site while at the same time \n> relieving some stress from the machine.\n> We have also done little to no performance tuning of Postgres' \n> configuration. We do have indexes on all of the important columns and \n> we have reindexed. Any pointers would be greatly appreciated.\n\nAs well as the tuning postgresql advice which others have given, there's \nanother thing you could try:\n\nAssuming you're using connection pooling, try reducing the maximum number \nof connections. This will take some of the stress off the database. \n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 3 Jul 2003 09:47:44 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: raise or lower transaction timeout?"
}
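A small sketch of how one might check whether the pool size is pressing against the server's connection limit before shrinking it; this assumes the statistics collector is running, and the interpretation is only a rule of thumb:

SELECT count(*) AS active_backends FROM pg_stat_activity;
SHOW max_connections;
-- If the pool regularly keeps active_backends near max_connections on an
-- underpowered box, lowering the pool's maximum trades some queueing in
-- the application server for less contention inside PostgreSQL.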
] |
[
{
"msg_contents": "> CREATE VIEW foo AS {complex_slow_query};\n> \n> SET random_page_cost = 1.5; EXPLAIN ANALYZE SELECT * FROM foo;\n> \n> Note the time taken. Repeat a few times to get the average.\n\nYou pulled everything off disk and tossed it into memory with the first\nrun so the results will NOT match your normal situation (some data on\ndisk, some cached in memory) for your second run and further runs unless\nthere is a LONG timeframe between runs.\n\nThat said, if you test with several other queries and get the same\nresults, it's probably good enough for your system.",
"msg_date": "03 Jul 2003 08:26:13 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to optimize monstrous query, sorts instead of"
}
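A sketch of the caching effect being warned about, reusing the foo view from the quoted message; the behaviour described in the comments is the general pattern, not a measured result:

SET random_page_cost = 1.5;
EXPLAIN ANALYZE SELECT * FROM foo;   -- first run: pages come off disk
EXPLAIN ANALYZE SELECT * FROM foo;   -- later runs: pages come from cache,
                                     -- so the average flatters the setting
-- Repeating the test with several different queries, or after enough time
-- for the cache to turn over, gives a more honest picture of steady state.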
] |
[
{
"msg_contents": "What are the odds of going through and revamping some of the tunables\nin postgresql.conf for the 7.4 release? I was just working with\nsomeone on IRC and on their 7800 RPM IDE drives, their\nrandom_page_cost was ideally suited to be 0.32: a far cry from 4.\nDoing so has been a win across the board and the problem query went\nfrom about 40sec (seq scan) down to 0.25ms (using idx, higher than\n0.32 resulted in a query time jump to 2sec, and at 0.4 it went back up\nto a full seq scan at 40sec).\n\nI know Josh is working on revamping the postgresql.conf file, but\nwould it be possible to include suggested values for various bits of\nhardware and then solicit contributions from admins on this list who\nhave tuned their DB correctly?\n\n## random_page_cost -- units are one sequential page fetch cost\n#random_page_cost = 4 # default - very conservative\n#random_page_cost = 0.9 # IDE 5200 RPM, 8MB disk cache\n#random_page_cost = 0.3 # IDE 7800 RPM, 4MB disk cache\n#random_page_cost = 0.1 # SCSI RAID 5, 10,000RPM, 64MB cache\n#random_page_cost = 0.05 # SCSI RAID 1+0, 15,000RPM, 128MB cache\n#...\n\n## next_hardware_dependent_tunable....\n#hardware_dependent_tunable\n\nI know these tables could get somewhat lengthy or organized\ndifferently, but given the file is read _once_ at _startup_, seen by\nthousands of DBAs, is visited at least once for every installation (at\nthe least to turn on TCP connections), is often the only file other\nthan pg_hba.conf that gets modified or looked at, this could be a very\nnice way of introducing DBAs to tuning PostgreSQL and reducing the\nnumber of people crying \"PostgreSQL's slow.\" Having postgresql.conf a\nclearing house for tunable values for various bits of hardware would\nbe a huge win for the community and would hopefully radically change\nthis database's perception. At the top of the file, it would be\nuseful to include a blurb to the effect of:\n\n# The default values for PostgreSQL are extremely conservative and are\n# likely far from ideal for a site's needs. Included in this\n# configuration, however, are _suggested_ values to help aid in\n# tuning. The values below are not authoritative, merely contributed\n# suggestions from PostgreSQL DBAs and committers who have\n# successfully tuned their databases. Please take these values as\n# advisory only and remember that they will very likely have to be\n# adjusted according to your site's specific needs. If you have a\n# piece of hardware that isn't mentioned below and have tuned your\n# configuration aptly and have found a suggested value that the\n# PostgreSQL community would benefit from, please send a description\n# of the hardware, the name of the tunable, and the tuned value to\n# [email protected] to be considered for inclusion in future\n# releases.\n#\n# It should also go without saying that the PostgreSQL Global\n# Development Group and its community of committers, contributors,\n# administrators, and commercial supporters are absolved from any\n# responsibility or liability with regards to the use of its software\n# (see this software's license for details). Any data loss,\n# corruption, or performance degradation is the responsibility of the\n# individual or group of individuals using/managing this installation.\n#\n# Hints to DBAs:\n#\n# *) Setup a regular backup schedule (hint: pg_dump(1)/pg_dumpall(1) +\n# cron(8))\n#\n# *) Tuning: Use psql(1) to test out values before changing values for\n# the entire database. 
In psql(1), type:\n#\n# 1) SHOW [tunabe_name];\n# 2) SET [tunable_name] = [value];\n# 3) [run query]\n# 4) [repeat adjustments as necessary before setting a value here in\n# the postgresql.conf].\n# 5) [Send a SIGHUP signal to the backend to have the config values\n# re-read]\n#\n# *) Never use kill -9 on the backend to shut it down.\n#\n# *) VACUUM ANALYZE your databases regularly.\n#\n# *) Use EXPLAIN ANALYZE [query] to tune queries.\n#\n# *) Read the online documentation at:\n# http://www.postgresql.org/docs/\n#\n# -- PostgreSQL Global Development Group\n\nJust a thought. A bit lengthy, but given that out of the box most\nevery value is set to be extremely conservative (detrimentally so, esp\nsince the majority of users aren't running PostgreSQL in embedded\ndevices, are on reasonably new hardware > 3 years old), and the config\nis only read in once and generally the only file viewed by DBAs, it'd\nmake PostgreSQL more competitive in the performance dept if there were\nsome kind of suggested values for various tunables. Having someone\nwhine, \"my PostgreSQL database is slow\" is really getting old when its\nreally not and it's a lack of tuning that is at fault, lowering the\nbar to a successful and speedy PostgreSQL installation would be a win\nfor everyone. The person who I was helping also had the same data,\nschema, and query running on MySQL and the fastest it could go was\n2.7s (about 40M rows in the table).\n\n<gets_off_of_soap_box_to_watch_and_listen/> -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 3 Jul 2003 12:05:02 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Moving postgresql.conf tunables into 2003..."
},
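A condensed, hypothetical version of the tuning loop sketched in the hints above; the table, column, and candidate values are placeholders:

SHOW random_page_cost;                              -- 1) check the current value
SET random_page_cost = 2;                           -- 2) try a candidate (this session only)
EXPLAIN ANALYZE SELECT * FROM some_big_table
    WHERE created > now() - '1 day'::interval;      -- 3) run the problem query
SET random_page_cost = 1.5;                         -- 4) adjust and repeat
-- 5) once satisfied, write the value into postgresql.conf and send the
--    postmaster a SIGHUP so the new setting is re-read.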
{
"msg_contents": "On Thu, 3 Jul 2003, Sean Chittenden wrote:\n\n> What are the odds of going through and revamping some of the tunables\n> in postgresql.conf for the 7.4 release? I was just working with\n> someone on IRC and on their 7800 RPM IDE drives, their\n> random_page_cost was ideally suited to be 0.32: a far cry from 4.\n> Doing so has been a win across the board and the problem query went\n> from about 40sec (seq scan) down to 0.25ms (using idx, higher than\n> 0.32 resulted in a query time jump to 2sec, and at 0.4 it went back up\n> to a full seq scan at 40sec).\n\nI'm the guy who advocates settings of 1 to 2, and that still sounds low to \nme. :-) I'm wondering if the effective_cache_size was set properly, as \nwell as there be enough buffers allocated.\n\nI generally set effective cache size to 100,000 pages (800 megs or so) on \nmy box, which is where it sits most days. with this setting I've found \nthat settings of under 1 are not usually necessary to force the planner to \ntake the path of righteousness (i.e. the fastest one :-) 1.2 to 1.4 are \noptimal to me.\n\nSince theoretically a random page of of 1 means no penalty to move the \nheads around, and there's ALWAYS a penalty for moving the heads around, we \nhave to assume:\n\n1: That either the planner is making poor decisions on some \nother variable, and we can whack the planner in the head with a really low \nrandom page count.\n\nOR \n\n2: The other settings are suboptimal (buffers, sort_mem, \neffective_cache_size, etc...) and lowering random page costs helps there.\n\nI've always wondered if most performance issues aren't a bit of both. \n\nThe answer, of course, is fixing the planner so that a random_page_cost of \nanything less than 1 would never be needed, since by design, anything \nunder 1 represents a computer that likely doesn't exist (in theory of \ncourse.) A 1 would be a machine that was using solid state hard drives \nand had the same cost in terms of OS paths to do random accesses as \nsequential.\n\nWhat constants in the planner, and / or formulas would be the likely\nculprits I wonder? 
I've wandered through that page and wasn't sure what \nto play with.\n\n> I know Josh is working on revamping the postgresql.conf file, but\n> would it be possible to include suggested values for various bits of\n> hardware and then solicit contributions from admins on this list who\n> have tuned their DB correctly?\n> \n> ## random_page_cost -- units are one sequential page fetch cost\n> #random_page_cost = 4 # default - very conservative\n> #random_page_cost = 0.9 # IDE 5200 RPM, 8MB disk cache\n> #random_page_cost = 0.3 # IDE 7800 RPM, 4MB disk cache\n> #random_page_cost = 0.1 # SCSI RAID 5, 10,000RPM, 64MB cache\n> #random_page_cost = 0.05 # SCSI RAID 1+0, 15,000RPM, 128MB cache\n> #...\n> \n> ## next_hardware_dependent_tunable....\n> #hardware_dependent_tunable\n> \n> I know these tables could get somewhat lengthy or organized\n> differently, but given the file is read _once_ at _startup_, seen by\n> thousands of DBAs, is visited at least once for every installation (at\n> the least to turn on TCP connections), is often the only file other\n> than pg_hba.conf that gets modified or looked at, this could be a very\n> nice way of introducing DBAs to tuning PostgreSQL and reducing the\n> number of people crying \"PostgreSQL's slow.\" Having postgresql.conf a\n> clearing house for tunable values for various bits of hardware would\n> be a huge win for the community and would hopefully radically change\n> this database's perception. At the top of the file, it would be\n> useful to include a blurb to the effect of:\n> \n> # The default values for PostgreSQL are extremely conservative and are\n> # likely far from ideal for a site's needs. Included in this\n> # configuration, however, are _suggested_ values to help aid in\n> # tuning. The values below are not authoritative, merely contributed\n> # suggestions from PostgreSQL DBAs and committers who have\n> # successfully tuned their databases. Please take these values as\n> # advisory only and remember that they will very likely have to be\n> # adjusted according to your site's specific needs. If you have a\n> # piece of hardware that isn't mentioned below and have tuned your\n> # configuration aptly and have found a suggested value that the\n> # PostgreSQL community would benefit from, please send a description\n> # of the hardware, the name of the tunable, and the tuned value to\n> # [email protected] to be considered for inclusion in future\n> # releases.\n> #\n> # It should also go without saying that the PostgreSQL Global\n> # Development Group and its community of committers, contributors,\n> # administrators, and commercial supporters are absolved from any\n> # responsibility or liability with regards to the use of its software\n> # (see this software's license for details). Any data loss,\n> # corruption, or performance degradation is the responsibility of the\n> # individual or group of individuals using/managing this installation.\n> #\n> # Hints to DBAs:\n> #\n> # *) Setup a regular backup schedule (hint: pg_dump(1)/pg_dumpall(1) +\n> # cron(8))\n> #\n> # *) Tuning: Use psql(1) to test out values before changing values for\n> # the entire database. 
In psql(1), type:\n> #\n> # 1) SHOW [tunabe_name];\n> # 2) SET [tunable_name] = [value];\n> # 3) [run query]\n> # 4) [repeat adjustments as necessary before setting a value here in\n> # the postgresql.conf].\n> # 5) [Send a SIGHUP signal to the backend to have the config values\n> # re-read]\n> #\n> # *) Never use kill -9 on the backend to shut it down.\n> #\n> # *) VACUUM ANALYZE your databases regularly.\n> #\n> # *) Use EXPLAIN ANALYZE [query] to tune queries.\n> #\n> # *) Read the online documentation at:\n> # http://www.postgresql.org/docs/\n> #\n> # -- PostgreSQL Global Development Group\n> \n> Just a thought. A bit lengthy, but given that out of the box most\n> every value is set to be extremely conservative (detrimentally so, esp\n> since the majority of users aren't running PostgreSQL in embedded\n> devices, are on reasonably new hardware > 3 years old), and the config\n> is only read in once and generally the only file viewed by DBAs, it'd\n> make PostgreSQL more competitive in the performance dept if there were\n> some kind of suggested values for various tunables. Having someone\n> whine, \"my PostgreSQL database is slow\" is really getting old when its\n> really not and it's a lack of tuning that is at fault, lowering the\n> bar to a successful and speedy PostgreSQL installation would be a win\n> for everyone. The person who I was helping also had the same data,\n> schema, and query running on MySQL and the fastest it could go was\n> 2.7s (about 40M rows in the table).\n> \n> <gets_off_of_soap_box_to_watch_and_listen/> -sc\n> \n> \n\n",
"msg_date": "Thu, 3 Jul 2003 14:14:37 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
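The arithmetic behind the 100,000-page figure quoted above, shown as a session-level experiment; whether these numbers suit any particular machine is exactly the open question in this thread:

-- effective_cache_size is counted in 8 KB pages:
--   100,000 pages x 8 KB/page = 800,000 KB, roughly the "800 megs or so" cited.
SET effective_cache_size = 100000;
SET random_page_cost = 1.4;          -- upper end of the 1.2-1.4 range suggested
-- Both settings can be trialled per session before being committed to
-- postgresql.conf.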
{
"msg_contents": "I don't have much to add because I'm pretty new to Postgres and have been\nsoliciting advice here recently, but I totally agree with everything you\nsaid. I don't mind if it's in the postgres.conf file or in a faq that is\neasy to find, I just would like it to be in one place. A good example of\nthe need for this is when I was tuning \"effective_cache\" I thought that was\ncreating a cache for Postgres when in fact as it was pointed out to me, it's\njust hinting to postgres the size of the OS cache. Lots of ways for people\nto get really confused here.\n\nAlso some people have said I should have used MySQL and to be honest I did\nconsider trying it out. Because Postgres is hard to tune a lot of people\nthink it's slower than MySQL. So anything that improves the quality of the\ndocumentation and makes it easier to tune will improve Postgres' reputation\nwhich will in turn encourage more people to use it!\n\nMichael\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Sean\n> Chittenden\n> Sent: Thursday, July 03, 2003 9:05 PM\n> To: [email protected]\n> Subject: [PERFORM] Moving postgresql.conf tunables into 2003...\n>\n>\n> What are the odds of going through and revamping some of the tunables\n> in postgresql.conf for the 7.4 release? I was just working with\n> someone on IRC and on their 7800 RPM IDE drives, their\n> random_page_cost was ideally suited to be 0.32: a far cry from 4.\n> Doing so has been a win across the board and the problem query went\n> from about 40sec (seq scan) down to 0.25ms (using idx, higher than\n> 0.32 resulted in a query time jump to 2sec, and at 0.4 it went back up\n> to a full seq scan at 40sec).\n>\n> I know Josh is working on revamping the postgresql.conf file, but\n> would it be possible to include suggested values for various bits of\n> hardware and then solicit contributions from admins on this list who\n> have tuned their DB correctly?\n>\n> ## random_page_cost -- units are one sequential page fetch cost\n> #random_page_cost = 4 # default - very conservative\n> #random_page_cost = 0.9 # IDE 5200 RPM, 8MB disk cache\n> #random_page_cost = 0.3 # IDE 7800 RPM, 4MB disk cache\n> #random_page_cost = 0.1 # SCSI RAID 5, 10,000RPM, 64MB cache\n> #random_page_cost = 0.05 # SCSI RAID 1+0, 15,000RPM, 128MB cache\n> #...\n>\n> ## next_hardware_dependent_tunable....\n> #hardware_dependent_tunable\n>\n> I know these tables could get somewhat lengthy or organized\n> differently, but given the file is read _once_ at _startup_, seen by\n> thousands of DBAs, is visited at least once for every installation (at\n> the least to turn on TCP connections), is often the only file other\n> than pg_hba.conf that gets modified or looked at, this could be a very\n> nice way of introducing DBAs to tuning PostgreSQL and reducing the\n> number of people crying \"PostgreSQL's slow.\" Having postgresql.conf a\n> clearing house for tunable values for various bits of hardware would\n> be a huge win for the community and would hopefully radically change\n> this database's perception. At the top of the file, it would be\n> useful to include a blurb to the effect of:\n>\n> # The default values for PostgreSQL are extremely conservative and are\n> # likely far from ideal for a site's needs. Included in this\n> # configuration, however, are _suggested_ values to help aid in\n> # tuning. 
The values below are not authoritative, merely contributed\n> # suggestions from PostgreSQL DBAs and committers who have\n> # successfully tuned their databases. Please take these values as\n> # advisory only and remember that they will very likely have to be\n> # adjusted according to your site's specific needs. If you have a\n> # piece of hardware that isn't mentioned below and have tuned your\n> # configuration aptly and have found a suggested value that the\n> # PostgreSQL community would benefit from, please send a description\n> # of the hardware, the name of the tunable, and the tuned value to\n> # [email protected] to be considered for inclusion in future\n> # releases.\n> #\n> # It should also go without saying that the PostgreSQL Global\n> # Development Group and its community of committers, contributors,\n> # administrators, and commercial supporters are absolved from any\n> # responsibility or liability with regards to the use of its software\n> # (see this software's license for details). Any data loss,\n> # corruption, or performance degradation is the responsibility of the\n> # individual or group of individuals using/managing this installation.\n> #\n> # Hints to DBAs:\n> #\n> # *) Setup a regular backup schedule (hint: pg_dump(1)/pg_dumpall(1) +\n> # cron(8))\n> #\n> # *) Tuning: Use psql(1) to test out values before changing values for\n> # the entire database. In psql(1), type:\n> #\n> # 1) SHOW [tunabe_name];\n> # 2) SET [tunable_name] = [value];\n> # 3) [run query]\n> # 4) [repeat adjustments as necessary before setting a value here in\n> # the postgresql.conf].\n> # 5) [Send a SIGHUP signal to the backend to have the config values\n> # re-read]\n> #\n> # *) Never use kill -9 on the backend to shut it down.\n> #\n> # *) VACUUM ANALYZE your databases regularly.\n> #\n> # *) Use EXPLAIN ANALYZE [query] to tune queries.\n> #\n> # *) Read the online documentation at:\n> # http://www.postgresql.org/docs/\n> #\n> # -- PostgreSQL Global Development Group\n>\n> Just a thought. A bit lengthy, but given that out of the box most\n> every value is set to be extremely conservative (detrimentally so, esp\n> since the majority of users aren't running PostgreSQL in embedded\n> devices, are on reasonably new hardware > 3 years old), and the config\n> is only read in once and generally the only file viewed by DBAs, it'd\n> make PostgreSQL more competitive in the performance dept if there were\n> some kind of suggested values for various tunables. Having someone\n> whine, \"my PostgreSQL database is slow\" is really getting old when its\n> really not and it's a lack of tuning that is at fault, lowering the\n> bar to a successful and speedy PostgreSQL installation would be a win\n> for everyone. The person who I was helping also had the same data,\n> schema, and query running on MySQL and the fastest it could go was\n> 2.7s (about 40M rows in the table).\n>\n> <gets_off_of_soap_box_to_watch_and_listen/> -sc\n>\n> --\n> Sean Chittenden\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Thu, 3 Jul 2003 22:17:42 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Thu, 2003-07-03 at 19:05, Sean Chittenden wrote:\n> What are the odds of going through and revamping some of the tunables\n> in postgresql.conf for the 7.4 release? I was just working with\n> someone on IRC and on their 7800 RPM IDE drives, their\n> random_page_cost was ideally suited to be 0.32: a far cry from 4.\n\nI find it very very hard to believe a random read was cheaper than a\nsequential read. Something is shifty in your testing.",
"msg_date": "03 Jul 2003 20:24:01 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Hear, hear!\nWell said Sean. I know that there has been disagreement about this in \nthe past (recommending settings, with some very good reasons), however \nas a programmer/part-time DBA, something like this would be extremely \nuseful. Our company recently developed a web-based app for a client who \nis very happy with their product (we used postgresql db) and we are just \nbeginning to revive a standalone app using postgresql instead of db2 as \nthe database. As I'm the DBA only on a part-time basis it is really time \nconsuming to have to 1) find all relevant documentation and 2) learn it \nsufficiently to try to tune the db properly and 3) forget about most of \nit until we set up a new project in another year. I like postgresql and \nhave convinced two of our clients to use it, but if I could fine tune it \nso it could 'fly', it would be easier for me (and others) to get more \npeople to use it.\n\nRon St.Pierre\n\nBTW I'm looking forward to Josh's configuration doc.\n\nSean Chittenden wrote:\n\n>What are the odds of going through and revamping some of the tunables\n>in postgresql.conf for the 7.4 release? I was just working with\n>someone on IRC and on their 7800 RPM IDE drives, their\n>random_page_cost was ideally suited to be 0.32: a far cry from 4.\n>Doing so has been a win across the board and the problem query went\n>from about 40sec (seq scan) down to 0.25ms (using idx, higher than\n>0.32 resulted in a query time jump to 2sec, and at 0.4 it went back up\n>to a full seq scan at 40sec).\n>\n>I know Josh is working on revamping the postgresql.conf file, but\n>would it be possible to include suggested values for various bits of\n>hardware and then solicit contributions from admins on this list who\n>have tuned their DB correctly?\n>\n>## random_page_cost -- units are one sequential page fetch cost\n>#random_page_cost = 4 # default - very conservative\n>#random_page_cost = 0.9 # IDE 5200 RPM, 8MB disk cache\n>#random_page_cost = 0.3 # IDE 7800 RPM, 4MB disk cache\n>#random_page_cost = 0.1 # SCSI RAID 5, 10,000RPM, 64MB cache\n>#random_page_cost = 0.05 # SCSI RAID 1+0, 15,000RPM, 128MB cache\n>#...\n>\n>## next_hardware_dependent_tunable....\n>#hardware_dependent_tunable\n>\n>I know these tables could get somewhat lengthy or organized\n>differently, but given the file is read _once_ at _startup_, seen by\n>thousands of DBAs, is visited at least once for every installation (at\n>the least to turn on TCP connections), is often the only file other\n>than pg_hba.conf that gets modified or looked at, this could be a very\n>nice way of introducing DBAs to tuning PostgreSQL and reducing the\n>number of people crying \"PostgreSQL's slow.\" Having postgresql.conf a\n>clearing house for tunable values for various bits of hardware would\n>be a huge win for the community and would hopefully radically change\n>this database's perception. At the top of the file, it would be\n>useful to include a blurb to the effect of:\n>\n># The default values for PostgreSQL are extremely conservative and are\n># likely far from ideal for a site's needs. Included in this\n># configuration, however, are _suggested_ values to help aid in\n># tuning. The values below are not authoritative, merely contributed\n># suggestions from PostgreSQL DBAs and committers who have\n># successfully tuned their databases. Please take these values as\n># advisory only and remember that they will very likely have to be\n># adjusted according to your site's specific needs. 
If you have a\n># piece of hardware that isn't mentioned below and have tuned your\n># configuration aptly and have found a suggested value that the\n># PostgreSQL community would benefit from, please send a description\n># of the hardware, the name of the tunable, and the tuned value to\n># [email protected] to be considered for inclusion in future\n># releases.\n>#\n># It should also go without saying that the PostgreSQL Global\n># Development Group and its community of committers, contributors,\n># administrators, and commercial supporters are absolved from any\n># responsibility or liability with regards to the use of its software\n># (see this software's license for details). Any data loss,\n># corruption, or performance degradation is the responsibility of the\n># individual or group of individuals using/managing this installation.\n>#\n># Hints to DBAs:\n>#\n># *) Setup a regular backup schedule (hint: pg_dump(1)/pg_dumpall(1) +\n># cron(8))\n>#\n># *) Tuning: Use psql(1) to test out values before changing values for\n># the entire database. In psql(1), type:\n>#\n># 1) SHOW [tunabe_name];\n># 2) SET [tunable_name] = [value];\n># 3) [run query]\n># 4) [repeat adjustments as necessary before setting a value here in\n># the postgresql.conf].\n># 5) [Send a SIGHUP signal to the backend to have the config values\n># re-read]\n>#\n># *) Never use kill -9 on the backend to shut it down.\n>#\n># *) VACUUM ANALYZE your databases regularly.\n>#\n># *) Use EXPLAIN ANALYZE [query] to tune queries.\n>#\n># *) Read the online documentation at:\n># http://www.postgresql.org/docs/\n>#\n># -- PostgreSQL Global Development Group\n>\n>Just a thought. A bit lengthy, but given that out of the box most\n>every value is set to be extremely conservative (detrimentally so, esp\n>since the majority of users aren't running PostgreSQL in embedded\n>devices, are on reasonably new hardware > 3 years old), and the config\n>is only read in once and generally the only file viewed by DBAs, it'd\n>make PostgreSQL more competitive in the performance dept if there were\n>some kind of suggested values for various tunables. Having someone\n>whine, \"my PostgreSQL database is slow\" is really getting old when its\n>really not and it's a lack of tuning that is at fault, lowering the\n>bar to a successful and speedy PostgreSQL installation would be a win\n>for everyone. The person who I was helping also had the same data,\n>schema, and query running on MySQL and the fastest it could go was\n>2.7s (about 40M rows in the table).\n>\n><gets_off_of_soap_box_to_watch_and_listen/> -sc\n>\n> \n>\n\n\n",
"msg_date": "Thu, 03 Jul 2003 13:28:21 -0700",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean Chittenden <[email protected]> writes:\n> What are the odds of going through and revamping some of the tunables\n> in postgresql.conf for the 7.4 release?\n\nI was arguing awhile back for bumping the default shared_buffers up,\nbut the discussion trailed off with no real resolution.\n\n> I was just working with\n> someone on IRC and on their 7800 RPM IDE drives, their\n> random_page_cost was ideally suited to be 0.32: a far cry from 4.\n\nIt is not physically sensible for random_page_cost to be less than one.\nThe system only lets you set it there for experimental purposes; there\nis no way that postgresql.conf.sample will recommend it. If you needed\nto push it below one to force indexscans, there is some other problem\nthat needs to be solved. (I'd wonder about index correlation myself;\nwe know that that equation is pretty bogus.)\n\n> I know Josh is working on revamping the postgresql.conf file, but\n> would it be possible to include suggested values for various bits of\n> hardware and then solicit contributions from admins on this list who\n> have tuned their DB correctly?\n\nI think such material belongs in the SGML docs, not hidden away in a\nconfig file that people may not look at...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jul 2003 16:38:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
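One way to look at the correlation statistic mentioned above (exposed through the pg_stats view since 7.2); the table name here is a placeholder for whichever table is misbehaving:

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'some_big_table';
-- Values near +1 or -1 mean the column's ordering closely follows the
-- physical row order, so an index scan touches far fewer random pages
-- than the planner's pessimistic estimate assumes.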
{
"msg_contents": "I'm curious how many of the configuration values can be determined \nautomatically, or with the help of some script. It seem like there \ncould be some perl script in contrib that could help figure this out. \nPossibly you are asked a bunch of questions and then the values are \ncomputed based on that. Something like:\n\nHow many tables will the system have?\nHow much memory will be available to the postmaster?\nHow many backends will there typically be?\nWhat is the avg seek time of the drive?\nWhat's the transfer rate of the drive?\n\nSeems to me that a lot of reasonable default values can be figure out \nfrom these basic questions. FSM settings, Sort Mem, Random Page Cost, \nEffective Cache Size, Shared Memor, etc, etc.\n\n\nOn Thursday, July 3, 2003, at 02:14 PM, scott.marlowe wrote:\n\n> On Thu, 3 Jul 2003, Sean Chittenden wrote:\n>\n>> What are the odds of going through and revamping some of the tunables\n>> in postgresql.conf for the 7.4 release? I was just working with\n>> someone on IRC and on their 7800 RPM IDE drives, their\n>> random_page_cost was ideally suited to be 0.32: a far cry from 4.\n>> Doing so has been a win across the board and the problem query went\n>> from about 40sec (seq scan) down to 0.25ms (using idx, higher than\n>> 0.32 resulted in a query time jump to 2sec, and at 0.4 it went back up\n>> to a full seq scan at 40sec).\n>\n> I'm the guy who advocates settings of 1 to 2, and that still sounds \n> low to\n> me. :-) I'm wondering if the effective_cache_size was set properly, as\n> well as there be enough buffers allocated.\n>\n> I generally set effective cache size to 100,000 pages (800 megs or so) \n> on\n> my box, which is where it sits most days. with this setting I've found\n> that settings of under 1 are not usually necessary to force the \n> planner to\n> take the path of righteousness (i.e. the fastest one :-) 1.2 to 1.4 are\n> optimal to me.\n>\n> Since theoretically a random page of of 1 means no penalty to move the\n> heads around, and there's ALWAYS a penalty for moving the heads \n> around, we\n> have to assume:\n>\n> 1: That either the planner is making poor decisions on some\n> other variable, and we can whack the planner in the head with a really \n> low\n> random page count.\n>\n> OR\n>\n> 2: The other settings are suboptimal (buffers, sort_mem,\n> effective_cache_size, etc...) and lowering random page costs helps \n> there.\n>\n> I've always wondered if most performance issues aren't a bit of both.\n>\n> The answer, of course, is fixing the planner so that a \n> random_page_cost of\n> anything less than 1 would never be needed, since by design, anything\n> under 1 represents a computer that likely doesn't exist (in theory of\n> course.) A 1 would be a machine that was using solid state hard drives\n> and had the same cost in terms of OS paths to do random accesses as\n> sequential.\n>\n> What constants in the planner, and / or formulas would be the likely\n> culprits I wonder? 
I've wandered through that page and wasn't sure \n> what\n> to play with.\n>\n>> I know Josh is working on revamping the postgresql.conf file, but\n>> would it be possible to include suggested values for various bits of\n>> hardware and then solicit contributions from admins on this list who\n>> have tuned their DB correctly?\n>>\n>> ## random_page_cost -- units are one sequential page fetch cost\n>> #random_page_cost = 4 # default - very conservative\n>> #random_page_cost = 0.9 # IDE 5200 RPM, 8MB disk cache\n>> #random_page_cost = 0.3 # IDE 7800 RPM, 4MB disk cache\n>> #random_page_cost = 0.1 # SCSI RAID 5, 10,000RPM, 64MB cache\n>> #random_page_cost = 0.05 # SCSI RAID 1+0, 15,000RPM, 128MB \n>> cache\n>> #...\n>>\n>> ## next_hardware_dependent_tunable....\n>> #hardware_dependent_tunable\n>>\n>> I know these tables could get somewhat lengthy or organized\n>> differently, but given the file is read _once_ at _startup_, seen by\n>> thousands of DBAs, is visited at least once for every installation (at\n>> the least to turn on TCP connections), is often the only file other\n>> than pg_hba.conf that gets modified or looked at, this could be a very\n>> nice way of introducing DBAs to tuning PostgreSQL and reducing the\n>> number of people crying \"PostgreSQL's slow.\" Having postgresql.conf a\n>> clearing house for tunable values for various bits of hardware would\n>> be a huge win for the community and would hopefully radically change\n>> this database's perception. At the top of the file, it would be\n>> useful to include a blurb to the effect of:\n>>\n>> # The default values for PostgreSQL are extremely conservative and are\n>> # likely far from ideal for a site's needs. Included in this\n>> # configuration, however, are _suggested_ values to help aid in\n>> # tuning. The values below are not authoritative, merely contributed\n>> # suggestions from PostgreSQL DBAs and committers who have\n>> # successfully tuned their databases. Please take these values as\n>> # advisory only and remember that they will very likely have to be\n>> # adjusted according to your site's specific needs. If you have a\n>> # piece of hardware that isn't mentioned below and have tuned your\n>> # configuration aptly and have found a suggested value that the\n>> # PostgreSQL community would benefit from, please send a description\n>> # of the hardware, the name of the tunable, and the tuned value to\n>> # [email protected] to be considered for inclusion in future\n>> # releases.\n>> #\n>> # It should also go without saying that the PostgreSQL Global\n>> # Development Group and its community of committers, contributors,\n>> # administrators, and commercial supporters are absolved from any\n>> # responsibility or liability with regards to the use of its software\n>> # (see this software's license for details). Any data loss,\n>> # corruption, or performance degradation is the responsibility of the\n>> # individual or group of individuals using/managing this installation.\n>> #\n>> # Hints to DBAs:\n>> #\n>> # *) Setup a regular backup schedule (hint: pg_dump(1)/pg_dumpall(1) +\n>> # cron(8))\n>> #\n>> # *) Tuning: Use psql(1) to test out values before changing values for\n>> # the entire database. 
In psql(1), type:\n>> #\n>> # 1) SHOW [tunabe_name];\n>> # 2) SET [tunable_name] = [value];\n>> # 3) [run query]\n>> # 4) [repeat adjustments as necessary before setting a value here \n>> in\n>> # the postgresql.conf].\n>> # 5) [Send a SIGHUP signal to the backend to have the config values\n>> # re-read]\n>> #\n>> # *) Never use kill -9 on the backend to shut it down.\n>> #\n>> # *) VACUUM ANALYZE your databases regularly.\n>> #\n>> # *) Use EXPLAIN ANALYZE [query] to tune queries.\n>> #\n>> # *) Read the online documentation at:\n>> # http://www.postgresql.org/docs/\n>> #\n>> # -- PostgreSQL Global Development Group\n>>\n>> Just a thought. A bit lengthy, but given that out of the box most\n>> every value is set to be extremely conservative (detrimentally so, esp\n>> since the majority of users aren't running PostgreSQL in embedded\n>> devices, are on reasonably new hardware > 3 years old), and the config\n>> is only read in once and generally the only file viewed by DBAs, it'd\n>> make PostgreSQL more competitive in the performance dept if there were\n>> some kind of suggested values for various tunables. Having someone\n>> whine, \"my PostgreSQL database is slow\" is really getting old when its\n>> really not and it's a lack of tuning that is at fault, lowering the\n>> bar to a successful and speedy PostgreSQL installation would be a win\n>> for everyone. The person who I was helping also had the same data,\n>> schema, and query running on MySQL and the fastest it could go was\n>> 2.7s (about 40M rows in the table).\n>>\n>> <gets_off_of_soap_box_to_watch_and_listen/> -sc\n>>\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Thu, 3 Jul 2003 15:33:34 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > What are the odds of going through and revamping some of the\n> > tunables in postgresql.conf for the 7.4 release? I was just\n> > working with someone on IRC and on their 7800 RPM IDE drives,\n> > their random_page_cost was ideally suited to be 0.32: a far cry\n> > from 4. Doing so has been a win across the board and the problem\n> > query went from about 40sec (seq scan) down to 0.25ms (using idx,\n> > higher than 0.32 resulted in a query time jump to 2sec, and at 0.4\n> > it went back up to a full seq scan at 40sec).\n> \n> I'm the guy who advocates settings of 1 to 2, and that still sounds\n> low to me. :-) I'm wondering if the effective_cache_size was set\n> properly, as well as there be enough buffers allocated.\n> \n> I generally set effective cache size to 100,000 pages (800 megs or\n> so) on my box, which is where it sits most days. with this setting\n> I've found that settings of under 1 are not usually necessary to\n> force the planner to take the path of righteousness (i.e. the\n> fastest one :-) 1.2 to 1.4 are optimal to me.\n\nThis is a nightly report that's run, cache sizes won't impact\nperformance of the query at all. The planner was consistently\nchoosing a sequential scan over using the index until the\nrandom_page_cost was set to 0.32. After adjustment, the query just\nflies ([email protected] vs [email protected] vs. 40s@>0.4). Since it's a nightly\nreport that only gets performed once a day and data is COPY'ed in once\nevery few minutes, there's a huge amount of data that's not cached nor\nshould it be.\n\n> Since theoretically a random page of of 1 means no penalty to move\n> the heads around, and there's ALWAYS a penalty for moving the heads\n> around, we have to assume:\n> \n> 1: That either the planner is making poor decisions on some other\n> variable, and we can whack the planner in the head with a really low\n> random page count.\n\nBy all accounts of having played with this query+data, this is the\ncorrect assumption from what I can tell.\n\n> OR \n> \n> 2: The other settings are suboptimal (buffers, sort_mem,\n> effective_cache_size, etc...) and lowering random page costs helps\n> there.\n\nNone of those other than possibly sort_mem had any impact on the\nquery, but even then, lower sort_mem doesn't help until the data's\nbeen picked out of the table. Sorting ~16k of rows is quicker with\nmore sort_mem. Higher sort_mem has zero impact on fetching ~16K rows\nout of a table with 40M rows of data. Getting the planner to pick\nusing the index to filter out data inserted in the last 3 days over\ndoing a seq scan... well, I don't know how you could do that without\nchanging the random_page_cost. A good thump to the side of the head\nwould be welcome too if I'm wrong, just make sure it's a good thump\nwith the appropriate clue-bat.\n\n> I've always wondered if most performance issues aren't a bit of both. \n\nEh, in my experience, it's generally that random_page_cost needs to be\nadjusted to match the hardware and this value every year with new\nhardware, seems to be getting lower.\n\n> The answer, of course, is fixing the planner so that a\n> random_page_cost of anything less than 1 would never be needed,\n> since by design, anything under 1 represents a computer that likely\n> doesn't exist (in theory of course.) A 1 would be a machine that\n> was using solid state hard drives and had the same cost in terms of\n> OS paths to do random accesses as sequential.\n\nWell, this could be a bug then, but I'm skeptical. 
What's odd to me\nis that changing the value between 0.32, 0.33, and 0.4 radically\nchanges the performance of the query.\n\n> What constants in the planner, and / or formulas would be the likely\n> culprits I wonder? I've wandered through that page and wasn't sure\n> what to play with.\n\nrandom_page_cost should be proportional to the seek time necessary for\nthe disk to find a page of data on its platters. It makes sense that\nthis value, as time progresses, gets smaller as hardware gets faster.\n\n-sc\n\n-- \nSean Chittenden\n",

"msg_date": "Thu, 3 Jul 2003 16:23:51 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> I'm curious how many of the configuration values can be determined \n> automatically, or with the help of some script. It seem like there \n> could be some perl script in contrib that could help figure this out. \n> Possibly you are asked a bunch of questions and then the values are \n> computed based on that. Something like:\n> \n> How many tables will the system have?\n> How much memory will be available to the postmaster?\n> How many backends will there typically be?\n> What is the avg seek time of the drive?\n> What's the transfer rate of the drive?\n> \n> Seems to me that a lot of reasonable default values can be figure out \n> from these basic questions. FSM settings, Sort Mem, Random Page Cost, \n> Effective Cache Size, Shared Memor, etc, etc.\n\nSomeone was working on a thing called pg_autotune or some such program\nthat'd do exactly what you're thinking of. \n\nhttp://archives.postgresql.org/pgsql-performance/2002-10/msg00101.php\nhttp://gborg.postgresql.org/project/pgautotune/projdisplay.php\n\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 3 Jul 2003 16:25:35 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > What are the odds of going through and revamping some of the\n> > tunables in postgresql.conf for the 7.4 release? I was just\n> > working with someone on IRC and on their 7800 RPM IDE drives,\n> > their random_page_cost was ideally suited to be 0.32: a far cry\n> > from 4.\n> \n> I find it very very hard to believe a random read was cheaper than a\n> sequential read. Something is shifty in your testing.\n\nThis is the procedure used to zero in on the number:\n\nSET random_page_cost = 3;\n[run query three times]\nSET random_page_cost = 2;\n[run query three times]\nSET random_page_cost = 1;\n[run query three times]\nSET random_page_cost = 0.01; -- verify that this tunable would make\n\t\t\t -- a difference eventually\n[run query three times]\nSET random_page_cost = 0.5;\n[run query three times]\nSET random_page_cost = 0.2; -- this was the 1st query that didn't\n\t\t\t -- do a seq scan\n[run query three times]\nSET random_page_cost = 0.4; -- back to a seq scan\n[run query three times]\nSET random_page_cost = 0.3; -- idx scan, how high can I push the rpc?\n[run query three times]\nSET random_page_cost = 0.35; -- interesting, the query time jumped to\n\t\t\t -- about 0.2s... better than 40s, but not as\n\t\t\t -- nice as the 0.25ms when the rpc was at 0.3\n[run query three times]\nSET random_page_cost = 0.32; -- Sweet, 0.25ms for the query\n[run query three times]\nSET random_page_cost = 0.33; -- Bah, back up to 0.2s\n[run query three times]\nSET random_page_cost = 0.31; -- Down to 0.25ms, too low\n[run query three times]\nSET random_page_cost = 0.33; -- Double check that it wasn't an errant\n\t\t\t -- performance at 0.33\n[run query three times]\nSET random_page_cost = 0.32; -- Double check that 0.32 is the magic number\n[run query three times]\n\n[edit postgresql.conf && killall -SIGHUP postmaster]\n\n-sc\n\n-- \nSean Chittenden",
"msg_date": "Thu, 3 Jul 2003 16:32:38 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > What are the odds of going through and revamping some of the tunables\n> > in postgresql.conf for the 7.4 release?\n> \n> I was arguing awhile back for bumping the default shared_buffers up,\n> but the discussion trailed off with no real resolution.\n> \n> > I was just working with someone on IRC and on their 7800 RPM IDE\n> > drives, their random_page_cost was ideally suited to be 0.32: a\n> > far cry from 4.\n> \n> It is not physically sensible for random_page_cost to be less than\n> one. The system only lets you set it there for experimental\n> purposes; there is no way that postgresql.conf.sample will recommend\n> it. If you needed to push it below one to force indexscans, there\n> is some other problem that needs to be solved. (I'd wonder about\n> index correlation myself; we know that that equation is pretty\n> bogus.)\n\nCould be. I had him create a multi-column index on the date and a\nnon-unique highly redundant id. This is a production machine so the\nload times are heavier now than they were earlier. The stats sample\nwas increased to 1000 too to see if that made any difference in the\nplanners estimations.\n\nmss_masterlog=> SHOW random_page_cost;\n random_page_cost\n------------------\n 4\n(1 row)\n\nmss_masterlog=> EXPLAIN ANALYZE SELECT srca, COUNT(srca) FROM mss_fwevent WHERE\nmss_masterlog-> sensorid = 7 AND evtime > (now() - '6 hours'::INTERVAL)\nmss_masterlog-> AND NOT action GROUP BY srca ORDER BY COUNT DESC LIMIT 20;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=240384.69..240384.74 rows=20 width=12) (actual time=24340.04..24340.39 rows=20 loops=1)\n -> Sort (cost=240384.69..240426.80 rows=16848 width=12) (actual time=24340.02..24340.14 rows=21 loops=1)\n Sort Key: count(srca)\n -> Aggregate (cost=237938.36..239201.95 rows=16848 width=12) (actual time=24322.84..24330.73 rows=23 loops=1)\n -> Group (cost=237938.36..238780.75 rows=168478 width=12) (actual time=24322.57..24328.45 rows=320 loops=1)\n -> Sort (cost=237938.36..238359.55 rows=168478 width=12) (actual time=24322.55..24324.34 rows=320 loops=1)\n Sort Key: srca\n -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12) (actual time=24253.66..24319.87 rows=320 loops=1)\n Filter: ((sensorid = 7) AND (evtime > (now() - '06:00'::interval)) AND (NOT \"action\"))\n Total runtime: 24353.67 msec\n(10 rows)\n\nmss_masterlog=> SET enable_seqscan = false;\nSET\nmss_masterlog=> EXPLAIN ANALYZE SELECT srca, COUNT(srca) FROM mss_fwevent WHERE\nmss_masterlog-> sensorid = 7 AND evtime > (now() - '6 hours'::INTERVAL)\nmss_masterlog-> AND NOT action GROUP BY srca ORDER BY COUNT DESC LIMIT 20;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2459596.79..2459596.84 rows=20 width=12) (actual time=162.92..163.25 rows=20 loops=1)\n -> Sort (cost=2459596.79..2459638.91 rows=16848 width=12) (actual time=162.90..163.01 rows=21 loops=1)\n Sort Key: count(srca)\n -> Aggregate (cost=2457150.46..2458414.05 rows=16848 width=12) (actual time=135.62..143.46 rows=23 loops=1)\n -> Group (cost=2457150.46..2457992.85 rows=168478 width=12) (actual time=135.35..141.22 rows=320 loops=1)\n -> Sort (cost=2457150.46..2457571.66 rows=168478 width=12) (actual time=135.33..137.14 rows=320 loops=1)\n Sort Key: srca\n -> Index Scan 
using mss_fwevent_evtime_sensorid_idx on mss_fwevent (cost=0.00..2442524.70 rows=168478 width=12) (actual time=68.36..132.84 rows=320 loops=1)\n Index Cond: ((evtime > (now() - '06:00'::interval)) AND (sensorid = 7))\n Filter: (NOT \"action\")\n Total runtime: 163.60 msec\n(11 rows)\nmss_masterlog=> SET enable_seqscan = true;\nSET\nmss_masterlog=> SET random_page_cost = 0.32;\nSET\nmss_masterlog=> EXPLAIN ANALYZE SELECT srca, COUNT(srca) FROM mss_fwevent WHERE\nmss_masterlog-> sensorid = 7 AND evtime > (now() - '6 hours'::INTERVAL)\nmss_masterlog-> AND NOT action GROUP BY srca ORDER BY COUNT DESC LIMIT 20;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=227274.85..227274.90 rows=20 width=12) (actual time=28.42..28.75 rows=20 loops=1)\n -> Sort (cost=227274.85..227316.97 rows=16848 width=12) (actual time=28.41..28.52 rows=21 loops=1)\n Sort Key: count(srca)\n -> Aggregate (cost=224828.52..226092.11 rows=16848 width=12) (actual time=20.26..28.13 rows=23 loops=1)\n -> Group (cost=224828.52..225670.91 rows=168478 width=12) (actual time=19.99..25.86 rows=320 loops=1)\n -> Sort (cost=224828.52..225249.72 rows=168478 width=12) (actual time=19.98..21.76 rows=320 loops=1)\n Sort Key: srca\n -> Index Scan using mss_fwevent_evtime_sensorid_idx on mss_fwevent (cost=0.00..210202.76 rows=168478 width=12) (actual time=0.35..17.61 rows=320 loops=1)\n Index Cond: ((evtime > (now() - '06:00'::interval)) AND (sensorid = 7))\n Filter: (NOT \"action\")\n Total runtime: 29.09 msec\n(11 rows)\n\nAnd there 'ya have it. The times are different from when I had him\nsend me the queries this morning, but they're within an order of\nmagnitude difference between each and show the point. Oh, today they\ndid a bunch of pruning of old data (nuked June's data)... the runtime\ndifferences are basically the same though.\n\n> > I know Josh is working on revamping the postgresql.conf file, but\n> > would it be possible to include suggested values for various bits of\n> > hardware and then solicit contributions from admins on this list who\n> > have tuned their DB correctly?\n> \n> I think such material belongs in the SGML docs, not hidden away in a\n> config file that people may not look at...\n\nThe config file isn't hidden though and is very visible in the tuning\nprocess and to DBAs. I don't know if a PostgreSQL distributions ship\nwith TCP connections enabled by default (FreeBSD doesn't), so the\nconfig is always seen and viewed by DBAs. If it's not the TCP\nconnections setting, it's the max connections setting or sort_mem,\netc... having the values dup'ed in the SGML, however, would be good\ntoo, but it's of most practical relevance in the actual config: as an\nadmin setting up a DB, I'd rather not have to fish around on\npostgresql.org to find a recommended setting, having it inline and\njust having to uncomment it is by far and away the most DBA friendly\nand likely to be used in the wild by admins.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 3 Jul 2003 17:06:46 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean Chittenden <[email protected]> writes:\n> Getting the planner to pick\n> using the index to filter out data inserted in the last 3 days over\n> doing a seq scan... well, I don't know how you could do that without\n> changing the random_page_cost.\n\nThis sounds a *whole* lot like a correlation issue. If the data in\nquestion were scattered randomly in the table, it's likely that an\nindexscan would be a loser. The recently-inserted data is probably\nclustered near the end of the table (especially if they're doing VACUUM\nFULL after data purges; are they?). But the planner's correlation stats\nare much too crude to recognize that situation, if the rest of the table\nis not well-ordered.\n\nIf their typical process involves a periodic data purge and then a\nVACUUM FULL, it might be worth experimenting with doing a CLUSTER on the\ntimestamp index instead of the VACUUM FULL. The CLUSTER would reclaim\nspace as effectively as VACUUM FULL + REINDEX, and it would leave the\ntable with an unmistakable 1.0 correlation ... which should tilt the\nplanner towards an indexscan without needing a physically impossible\nrandom_page_cost to do it. I think CLUSTER would probably be a little\nslower than VACUUM FULL but it's hard to be sure without trying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jul 2003 21:57:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
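A sketch of the purge-and-cluster cycle being suggested, using the table and index names from the EXPLAIN output earlier in the thread; the purge predicate is made up, and the CLUSTER statement uses the 7.x syntax (index name first):

DELETE FROM mss_fwevent
WHERE evtime < now() - '30 days'::interval;              -- periodic purge (illustrative)
CLUSTER mss_fwevent_evtime_sensorid_idx ON mss_fwevent;  -- instead of VACUUM FULL + REINDEX
ANALYZE mss_fwevent;                                      -- refresh statistics
-- CLUSTER rewrites the table in index order, reclaiming dead space and
-- leaving evtime with correlation 1.0, which tilts the planner toward the
-- index scan without resorting to a physically impossible random_page_cost.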
{
"msg_contents": "Sean, Tom, Rod, Michael, Brian, Ron:\n\nI'm going to paste everything into one monumental response. So be prepared to \nscroll.\n\nSean Asks:\n> What are the odds of going through and revamping some of the tunables\n> in postgresql.conf for the 7.4 release? \n\nPoor. The time to do this would have been 3 weeks ago, when I announced that \nI was re-organizing them and that Bruce was changing many names. We're past \nFeature Freeze now, and we have a *lot* of bug-checking to do with the \nback-end changes.\n\n> I know Josh is working on revamping the postgresql.conf file, but\n> would it be possible to include suggested values for various bits of\n> hardware and then solicit contributions from admins on this list who\n> have tuned their DB correctly?\n\nSure, but this is not a short-term project. I started this list, and have \n100% of list e-mails archived, and I can tell you that there is little \nagreement on many of the parameters ... plus I think we'd need about 15-25 \ne-mails about the best way to implement it, as my ideas are different from \nyours and Tom's are different from both of us.\n\nI'd also suggest that this is a good thing to do *after* we have created a \ncomprehensive benchmarking package that allows us to difinitively test the \nargued values for various parameters. Right now, the \"conventional wisdom\" \nwe have is strictly anecdotal; for example, all of the discussions on this \nlist about the value of shared_buffers encompasses only about 14 servers and \n3 operating systems.\n\n> # The default values for PostgreSQL are extremely conservative and are\n> # likely far from ideal for a site's needs. Included in this\n> # configuration, however, are _suggested_ values to help aid in\n<snip>\n\nThis sort of narrative belongs in the SGML docs, not in a CONF file. In fact, \none could argue that we should take *all* commentary out of the CONF file in \norder to force people to read the docs.\n\nMichael Says:\n> I don't have much to add because I'm pretty new to Postgres and have been\n> soliciting advice here recently, but I totally agree with everything you\n> said. I don't mind if it's in the postgres.conf file or in a faq that is\n> easy to find, I just would like it to be in one place. \n\nI spent a bunch of hours this last period re-organizing the official docs so \nthat they are easier to read. Check them out in the 7.4 dev docs. To \nfurther enhance this, Shridhar and I will be putting together a broad \ncommentary and putting it on one of the postgresql web sites. Eventually \nwhen recommendations are tested a lot of this commentary will make its way \ninto the official docs.\n\nRon Says:\n> the database. As I'm the DBA only on a part-time basis it is really time \n> consuming to have to 1) find all relevant documentation and 2) learn it \n> sufficiently to try to tune the db properly and 3) forget about most of \n> it until we set up a new project in another year. I like postgresql and \n> have convinced two of our clients to use it, but if I could fine tune it \n> so it could 'fly', it would be easier for me (and others) to get more \n> people to use it.\n\nDatabase performance tuning will always be a \"black art,\" as it necessitates a \nbroad knowledge of PostgreSQL, OS architecture, and computer hardware. 
So I \ndoubt that we can post docs that would allow any 10% time DBA to make \nPostgreSQL \"fly\", but hopefully over the next year we can make enough \nknowledge public to allow anyone to make PostgreSQL \"sprint\".\n\nTom Comments:\n> I was arguing awhile back for bumping the default shared_buffers up,\n> but the discussion trailed off with no real resolution.\n\nI think we ran up against the still far-too-low SHMMAX settings in most \n*nixes. We could raise this default once we can supply a script which will \nhelp the user bump up the OS's memory settings at, say, initDB time.\n\nBrian Suggests:\n> I'm curious how many of the configuration values can be determined \n> automatically, or with the help of some script. It seem like there \n> could be some perl script in contrib that could help figure this out. \n> Possibly you are asked a bunch of questions and then the values are \n> computed based on that. Something like:\n\nThis would be great! Wanna be in charge of it?\n\nSean Replies:\n> Someone was working on a thing called pg_autotune or some such program\n> that'd do exactly what you're thinking of. \n\nJustin. Unfortunately, pg_autotune didn't get very far, plus its design is \nnot very friendly to collaborative programming. So it's the right idea, but \nneeds to be reworked from the whiteboard, probably in Perl.\n\tKevin Brown and I followed that up by trying to build a downloadable public \ndomain database that could be used for benchmarking. However, he got an FT \njob and I got distracted by prep for 7.4. So, a little help?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 3 Jul 2003 19:38:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Kevin Brown and I followed that up by trying to build a downloadable public \n> domain database that could be used for benchmarking. However, he got an FT \n> job and I got distracted by prep for 7.4. So, a little help?\n\nBTW, OSDL (Linus' new home ;-)) is starting to release Postgres-friendly\nversions of some of the open-source benchmarks they've been working on.\nI wouldn't put all my faith in any one benchmark, but with a few to\nchoose from we might start to get someplace.\n\nAlso, for any of you who will be at O'Reilly next week, OSDL is just up\nthe road and I'm expecting to meet with them Tuesday afternoon/evening.\nAnyone else interested in going?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jul 2003 23:13:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Sean Asks:\n>> What are the odds of going through and revamping some of the tunables\n>> in postgresql.conf for the 7.4 release? \n\n> Poor. The time to do this would have been 3 weeks ago, when I\n> announced that I was re-organizing them and that Bruce was changing\n> many names. We're past Feature Freeze now, and we have a *lot* of\n> bug-checking to do with the back-end changes.\n\nFWIW, I think what Sean is suggesting would amount purely to a\ndocumentation change, and as such would be perfectly legitimate during\nbeta. However, I quite agree with your point that we are far from\nhaving a consensus on good numbers to put in. To get to consensus will\ntake a lot more work on benchmarking than we've done to date.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jul 2003 23:52:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
{
"msg_contents": "That's one heck of a poor estimate for the number of rows returned.\n\n> -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12) (actual time=24253.66..24319.87 rows=320 loops=1)",
"msg_date": "04 Jul 2003 08:29:04 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom Comments:\n>> I was arguing awhile back for bumping the default shared_buffers up,\n>> but the discussion trailed off with no real resolution.\n\n> I think we ran up against the still far-too-low SHMMAX settings in most \n> *nixes. We could raise this default once we can supply a script which will \n> help the user bump up the OS's memory settings at, say, initDB time.\n\nActually, I think it would not be hard to get initdb to test whether\nlarger shared-memory settings would work. We could do something like\ntry -B of 64, 256, 512, 1024, and insert into postgresql.conf the\nlargest value that works. I would want it to top out at a few thousand\nat most, because I don't think a default installation should try to\ncommandeer the whole machine, but if we could get the typical\ninstallation to be running with even 1000 buffers rather than 64,\nwe'd be *way* better off. (See \"Postgres vs MySQL\" thread nearby.)\n\nWe could possibly also have initdb print some kind of message if it's\nforced to use an unreasonably small value for shared_buffers, so that\npeople might have a clue that they need to do kernel reconfiguration.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 13:15:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
{
"msg_contents": "> Sean Chittenden <[email protected]> writes:\n> > Getting the planner to pick\n> > using the index to filter out data inserted in the last 3 days over\n> > doing a seq scan... well, I don't know how you could do that without\n> > changing the random_page_cost.\n> \n> This sounds a *whole* lot like a correlation issue. If the data in\n> question were scattered randomly in the table, it's likely that an\n> indexscan would be a loser. The recently-inserted data is probably\n> clustered near the end of the table (especially if they're doing\n> VACUUM FULL after data purges; are they?). But the planner's\n> correlation stats are much too crude to recognize that situation, if\n> the rest of the table is not well-ordered.\n\nData isn't scattered randomly from what I can tell and is basically\nalready clustered just because the data is inserted linearly and\nbased off of time. I don't think they're doing a VACUUM FULL after a\npurge, but I'll double check on that on Monday when they get in. Is\nthere an easy way of determining or setting a planner stat to suggest\nthat data is ordered around a column in a permanent way? CLUSTER has\nalways been a one shot deal and its effects wear off quickly depending\non the way that data is inserted. It seems as though that this would\nbe a circumstance in which preallocated disk space would be a win\n(that way data wouldn't always be appended to the heap and could be\ninserted in order, of most use for non-time related data: ex, some\nnon-unique ID).\n\n> If their typical process involves a periodic data purge and then a\n> VACUUM FULL, it might be worth experimenting with doing a CLUSTER on\n> the timestamp index instead of the VACUUM FULL. The CLUSTER would\n> reclaim space as effectively as VACUUM FULL + REINDEX, and it would\n> leave the table with an unmistakable 1.0 correlation ... which\n> should tilt the planner towards an indexscan without needing a\n> physically impossible random_page_cost to do it. I think CLUSTER\n> would probably be a little slower than VACUUM FULL but it's hard to\n> be sure without trying.\n\nHrm, I understand what clustering does, I'm just not convinced that\nit'll \"fix\" this performance problem unless CLUSTER sets some kind of\nhint that ANALYZE uses to modify the way in which it collects\nstatistics. Like I said, I'll let you know on Monday when they're\nback in the shop, but I'm not holding my breath. I know\nrandom_page_cost is set to something physically impossible, but in\nterms of performance, it's always been the biggest win for me to set\nthis puppy quite low. Bug in the planner, or documentation\nsurrounding what this knob does, I'm not sure, but setting this to a\nlow value consistently yields good results for me. Faster the drive,\nthe lower the random_page_cost value. *shrug*\n\n> That's one heck of a poor estimate for the number of rows returned.\n> \n> > -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12) (actual time=24253.66..24319.87 rows=320 loops=1)\n\nThe stats for the columns are already set to 1000 to aid with\nthis... don't know what else I can do here. Having the planner off by\nas much as even half the actual size isn't uncommon in my experience.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Sat, 5 Jul 2003 14:04:41 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > # The default values for PostgreSQL are extremely conservative and\n> > # are likely far from ideal for a site's needs. Included in this\n> > # configuration, however, are _suggested_ values to help aid in >\n> > # <snip>\n> \n> This sort of narrative belongs in the SGML docs, not in a CONF file.\n> In fact, one could argue that we should take *all* commentary out of\n> the CONF file in order to force people to read the docs.\n\nThe SGML docs aren't in the DBA's face and are way out of the way for\nDBAs rolling out a new system or who are tuning the system. SGML ==\nDeveloper, conf == DBA.\n\n> Database performance tuning will always be a \"black art,\" as it\n> necessitates a broad knowledge of PostgreSQL, OS architecture, and\n> computer hardware. So I doubt that we can post docs that would\n> allow any 10% time DBA to make PostgreSQL \"fly\", but hopefully over\n> the next year we can make enough knowledge public to allow anyone to\n> make PostgreSQL \"sprint\".\n\nI'm highly resistant to/disappointed in this attitude and firmly\nbelieve that there are well understood algorithms that DBAs use to\ndiagnose and solve performance problems. It's only a black art\nbecause it hasn't been documented. Performance tuning isn't voodoo,\nit's adjusting constraints to align with the execution of applications\nand we know what the applications do, therefore the database can mold\nto the applications' needs. Some of those parameters are based on\nhardware constraints and should be pooled and organized as such.\n\nrandom_page_cost ==\n\tavg cost of a random disk seek/read (eg: disk seek time) ==\n\tconstant integer for a given piece of hardware\n\nThere are other settings that are RAM based as well, which should be\nformulaic and derived though a formula hasn't been defined to date.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Sat, 5 Jul 2003 14:12:56 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean,\n\n> The SGML docs aren't in the DBA's face and are way out of the way for\n> DBAs rolling out a new system or who are tuning the system. SGML ==\n> Developer, conf == DBA.\n\nThat's exactly my point. We cannot provide enough documentation in the CONF \nfile without septupling its length. IF we remove all commentary, and instead \nprovide a pointer to the documentation, more DBAs will read it.\n\n> Some of those parameters are based on\n> hardware constraints and should be pooled and organized as such.\n> \n> random_page_cost ==\n> \tavg cost of a random disk seek/read (eg: disk seek time) ==\n> \tconstant integer for a given piece of hardware\n\nBut, you see, this is exactly what I'm talking about. random_page_cost isn't \nstatic to a specific piece of hardware ... it depends as well on what else is \non the disk/array, concurrent disk activity, disk controller settings, \nfilesystem, OS, distribution of records and tables, and arrangment of the \npartitions on disk. One can certainly get a \"good enough\" value by \nbenchmarking the disk's random seek and calculating based on that ... but to \nget an \"ideal\" value requires a long interactive session by someone with \nexperience and in-depth knowledge of the machine and database.\n\n> There are other settings that are RAM based as well, which should be\n> formulaic and derived though a formula hasn't been defined to date.\n\nYou seem pretty passionate about this ... how about you help me an Kevin \ndefine a benchmarking suite when I get back into the country (July 17)? If \nwe're going to define formulas, it requires that we have a near-comprehensive \nand consistent test database and test battery that we can run on a variety of \nmachines and platforms.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Sat, 5 Jul 2003 15:58:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > The SGML docs aren't in the DBA's face and are way out of the way\n> > for DBAs rolling out a new system or who are tuning the system.\n> > SGML == Developer, conf == DBA.\n> \n> That's exactly my point. We cannot provide enough documentation in\n> the CONF file without septupling its length. IF we remove all\n> commentary, and instead provide a pointer to the documentation, more\n> DBAs will read it.\n\nWhich I don't think would happen and why I think the terse bits that\nare included are worth while. :)\n\n> > Some of those parameters are based on hardware constraints and\n> > should be pooled and organized as such.\n> > \n> > random_page_cost ==\n> > \tavg cost of a random disk seek/read (eg: disk seek time) ==\n> > \tconstant integer for a given piece of hardware\n> \n> But, you see, this is exactly what I'm talking about.\n> random_page_cost isn't static to a specific piece of hardware ... it\n> depends as well on what else is on:\n\n*) the disk/array\n\ntranslation: how fast data is accessed and over how many drives.\n\n*) concurrent disk activity\n\nA disk/database activity metric is different than the cost of a seek\non the platters. :) Because PostgreSQL doesn't currently support such\na disk concurrency metric doesn't mean that its definition should get\nrolled into a different number in an attempt to accommodate for a lack\nthereof.\n\n*) disk controller settings\n\nThis class of settings falls into the same settings that affect random\nseeks on the platters/disk array(s).\n\n*) filesystem\n\nAgain, this influences avg seek time\n\n*) OS\n\nAgain, avg seek time\n\n*) distribution of records and tables\n\nThis has nothing to do with PostgreSQL's random_page_cost setting\nother than that if data is fragmented on the platter, the disk is\ngoing to have to do a lot of seeking. This is a stat that should get\nset by ANALYZE, not by a human.\n\n*) arrangement of the partitions on disk\n\nAgain, avg seek time.\n\n> One can certainly get a \"good enough\" value by benchmarking the\n> disk's random seek and calculating based on that ... but to get an\n> \"ideal\" value requires a long interactive session by someone with\n> experience and in-depth knowledge of the machine and database.\n\nAn \"ideal\" value isn't obtained via guess and check. Checking is only\nthe verification of some calculable set of settings....though right now\nthose calculated settings are guessed, unfortunately.\n\n> > There are other settings that are RAM based as well, which should\n> > be formulaic and derived though a formula hasn't been defined to\n> > date.\n> \n> You seem pretty passionate about this ... how about you help me an\n> Kevin define a benchmarking suite when I get back into the country\n> (July 17)? If we're going to define formulas, it requires that we\n> have a near-comprehensive and consistent test database and test\n> battery that we can run on a variety of machines and platforms.\n\nWorks for me, though a benchmark will be less valuable than adding a\ndisk concurrency stat, improving data trend/distribution analysis, and\nusing numbers that are concrete and obtainable through the OS kernel\nAPI or an admin manually plunking numbers in. I'm still recovering\nfrom my move from Cali to WA so with any luck, I'll be settled in by\nthen.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Sat, 5 Jul 2003 17:24:13 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean,\n\n> > That's exactly my point. We cannot provide enough documentation in\n> > the CONF file without septupling its length. IF we remove all\n> > commentary, and instead provide a pointer to the documentation, more\n> > DBAs will read it.\n>\n> Which I don't think would happen and why I think the terse bits that\n> are included are worth while. :)\n\nDepressingly enough, you are probably correct, unless we assemble a more \nuser-friendly \"getting started\" guide.\n\n> *) concurrent disk activity\n>\n> A disk/database activity metric is different than the cost of a seek\n> on the platters. :) Because PostgreSQL doesn't currently support such\n> a disk concurrency metric doesn't mean that its definition should get\n> rolled into a different number in an attempt to accommodate for a lack\n> thereof.\n\nI was talking about concurrent activity by *other* applications. For example, \nif a DBA has a java app that is accessing XML on the same array as postgres \n500 times/minute, then you'd need to adjust random_page_cost upwards to allow \nfor the resource contest.\n\n> An \"ideal\" value isn't obtained via guess and check. Checking is only\n> the verification of some calculable set of settings....though right now\n> those calculated settings are guessed, unfortunately.\n\n> Works for me, though a benchmark will be less valuable than adding a\n> disk concurrency stat, improving data trend/distribution analysis, and\n> using numbers that are concrete and obtainable through the OS kernel\n> API or an admin manually plunking numbers in. I'm still recovering\n> from my move from Cali to WA so with any luck, I'll be settled in by\n> then.\n\nThe idea is that for a lot of statistics, we're only going to be able to \nobtain valid numbers if you have something constant to check them against.\n\nTalk to you later this month!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 5 Jul 2003 18:27:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> \n> Brian Suggests:\n> > I'm curious how many of the configuration values can be determined\n> > automatically, or with the help of some script. It seem like there\n> > could be some perl script in contrib that could help figure this out.\n> > Possibly you are asked a bunch of questions and then the values are\n> > computed based on that. Something like:\n> \n> This would be great! Wanna be in charge of it?\n> \n\nIs there a to-do list for this kind of stuff? Maybe there could be a \"help\nwanted\" sign on the website. Seems like there are lot's of good ideas that\nfly around here but never get followed up on.\n\nAdditionally, I have an increasingly large production database that I would\nbe willing to do some test-cases on. I don't really know how to do it\nthough... If someone where able to give instructions I could run tests on\nthree different platforms.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n",
"msg_date": "Sun, 6 Jul 2003 22:11:06 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> > This sort of narrative belongs in the SGML docs, not in a CONF file.\n> > In fact, one could argue that we should take *all* commentary out of\n> > the CONF file in order to force people to read the docs.\n> \n> The SGML docs aren't in the DBA's face and are way out of the way for\n> DBAs rolling out a new system or who are tuning the system. SGML ==\n> Developer, conf == DBA.\n> \n> > Database performance tuning will always be a \"black art,\" as it\n> > necessitates a broad knowledge of PostgreSQL, OS architecture, and\n> > computer hardware. So I doubt that we can post docs that would\n> > allow any 10% time DBA to make PostgreSQL \"fly\", but hopefully over\n> > the next year we can make enough knowledge public to allow anyone to\n> > make PostgreSQL \"sprint\".\n> \n> I'm highly resistant to/disappointed in this attitude and firmly\n> believe that there are well understood algorithms that DBAs use to\n> diagnose and solve performance problems. It's only a black art\n> because it hasn't been documented. Performance tuning isn't voodoo,\n> it's adjusting constraints to align with the execution of applications\n> and we know what the applications do, therefore the database can mold\n> to the applications' needs. \n\nI agree.\n\nWe often seem to forget simple lessons in human nature. Expecting someone\nto spend 20 extra seconds to do something is often too much. In many cases,\nthe only \"manual\" that a person will see is the .conf files.\n\nAt the very least, if there is good documentation for these parameters,\nmaybe the conf file should provide a link to this info. \n\nAbout the documentation... The few times I've tried reading these sections\nof the docs it was like reading a dictionary.\n\nBruce's book is a much better writing style because it starts out with a\nbasic concept and then expands on it, sometimes several times until a\nthorough (but not exhaustive) example has been given.\n\nThe exhaustive material in the docs is good when you know what you're\nlooking for, and therefore is a critical piece of reference work. I don't\nwant to belittle the authors of that material in any way. An illustration\nof this would be to compare the O'Reilly \"... Nutshell\" book series to\nsomething like the [fictitious] book \"Learn PostgreSQL in 24 hours\".\n\nTo close this message, I would just like to add that one of the most\nsuccessful open source projects of all time could be used as an example.\nThe Apache httpd project is one of the few open source projects in wide\nspread use that holds more market share than all competing products\ncombined.\n\nIt uses a three phase (if not more) documentation level. The .conf file\ncontains detailed instructions in an easy to read and not-to-jargon-ish\nstructure. The docs provide detailed tutorials and papers that expand on\nconfiguration params in an easy to read format. Both of these refer to the\nthorough reference manual that breaks each possible option down into it's\nnitty gritty details so that a user can get more information if they so\ndesire.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n\n\n",
"msg_date": "Sun, 6 Jul 2003 22:33:13 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Sun, 6 Jul 2003, Matthew Nuzum wrote:\n\n> At the very least, if there is good documentation for these parameters,\n> maybe the conf file should provide a link to this info. \n\nI believe that is what Josh is proposing:\n\nhttp://archives.postgresql.org/pgsql-performance/2003-07/msg00102.php\n\n> [Apache httpd] uses a three phase (if not more) documentation level. \n> The .conf file contains detailed instructions in an easy to read and\n> not-to-jargon-ish structure. The docs provide detailed tutorials and\n> papers that expand on configuration params in an easy to read format. \n> Both of these refer to the thorough reference manual that breaks each\n> possible option down into it's nitty gritty details so that a user can\n> get more information if they so desire.\n\nI agree that Apache's approach is primo. Often the .conf comments are\nenough to jog my memory about a directive I haven't used for a while. Or\nthe comments are enough to let me know I don't need a directive, or that I\nneed to go to the manual and read more. I appreciate that.\n\nmichael\n\n",
"msg_date": "Sun, 6 Jul 2003 23:13:07 -0400 (EDT)",
"msg_from": "Michael Pohl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Michael Pohl wrote:\n> On Sun, 6 Jul 2003, Matthew Nuzum wrote:\n> \n> \n>>At the very least, if there is good documentation for these parameters,\n>>maybe the conf file should provide a link to this info. \n> \n> \n> I believe that is what Josh is proposing:\n> \n> http://archives.postgresql.org/pgsql-performance/2003-07/msg00102.php\n> \n> \n>>[Apache httpd] uses a three phase (if not more) documentation level. \n>>The .conf file contains detailed instructions in an easy to read and\n>>not-to-jargon-ish structure. The docs provide detailed tutorials and\n>>papers that expand on configuration params in an easy to read format. \n>>Both of these refer to the thorough reference manual that breaks each\n>>possible option down into it's nitty gritty details so that a user can\n>>get more information if they so desire.\n> \n> \n> I agree that Apache's approach is primo. Often the .conf comments are\n> enough to jog my memory about a directive I haven't used for a while. Or\n> the comments are enough to let me know I don't need a directive, or that I\n> need to go to the manual and read more. I appreciate that.\n> \n> michael\n> \n\n\nOne thing that may also help, is to include more sample .conf files. \nFor example, you could include settings that would be commonly seen for \ndecicated databases with generic specs and another with less resources \nand not dedicated for use with Postgres.\n\nThis would allow users to see how certain setting changes will work. \nThe default .conf is great if you want to setup a small test bed, but \nfor a real life example chances are it won't exactly be what your \nlooking for.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n",
"msg_date": "Sun, 06 Jul 2003 21:42:47 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Why couldn't Postgres learn for itself what the optimal performance \nsettings would be? The easy one seems to be the effective_cache_size. \nTop shows us this information. Couldn't Postgres read that value from \nthe same place top reads it instead of relying on a config file value? \nSeems like it could even adjust to changing conditions as the cache \nsize changes.\n Wouldn't it be great to set a single parameter in postgresql.conf \nlike:\n\nlearn = on\n\nThis would make Postgres run the same queries multiple times with \ndifferent settings, trying to find the ones that made the query run the \nfastest. Obviously you wouldn't want this on all the time because \nPostgres would be doing more work than it needs to satisfy the \napplications that are asking it for data. You'd leave it running like \nthis for as long as you think it would need to get a sampling of real \nworld use for your specific application.\n Something like this could automagically adapt to load, hardware, \nschema, and operating system. If you drop another 1GB of RAM into the \nmachine, just turn the learning option on and let Postgres tune itself \nagain.\n -M@\n\n\nOn Thursday, July 3, 2003, at 04:25 PM, Sean Chittenden wrote:\n\n>> I'm curious how many of the configuration values can be determined\n>> automatically, or with the help of some script. It seem like there\n>> could be some perl script in contrib that could help figure this out.\n>> Possibly you are asked a bunch of questions and then the values are\n>> computed based on that. Something like:\n>>\n>> How many tables will the system have?\n>> How much memory will be available to the postmaster?\n>> How many backends will there typically be?\n>> What is the avg seek time of the drive?\n>> What's the transfer rate of the drive?\n>>\n>> Seems to me that a lot of reasonable default values can be figure out\n>> from these basic questions. FSM settings, Sort Mem, Random Page Cost,\n>> Effective Cache Size, Shared Memor, etc, etc.\n>\n> Someone was working on a thing called pg_autotune or some such program\n> that'd do exactly what you're thinking of.\n>\n> http://archives.postgresql.org/pgsql-performance/2002-10/msg00101.php\n> http://gborg.postgresql.org/project/pgautotune/projdisplay.php\n>\n>\n> -- \n> Sean Chittenden\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n",
"msg_date": "Mon, 7 Jul 2003 00:19:33 -0700",
"msg_from": "Matthew Hixson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Sat, Jul 05, 2003 at 02:12:56PM -0700, Sean Chittenden wrote:\n> The SGML docs aren't in the DBA's face and are way out of the way for\n> DBAs rolling out a new system or who are tuning the system. SGML ==\n> Developer, conf == DBA.\n\nI could not disagree more. I'd say more like, if the dba won't read\nthe manual, get yourself a real dba. Sorry, but so-called\nprofessionals who won't learn their tools have no home in my shop. \n\nRecently, someone pointed out to me that there _was_ a deficiency\nin the docs -- one I thought was not there. And in that case, I was\nmore than willing to be chastised. But claiming that inadequate\ncomments in the config file are the only things dbas will lok at,\nwell, frankly, I'm offended.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 7 Jul 2003 05:22:52 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Andrew Sullivan\n> Sent: Monday, July 07, 2003 5:23 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Moving postgresql.conf tunables into 2003...\n> \n> On Sat, Jul 05, 2003 at 02:12:56PM -0700, Sean Chittenden wrote:\n> > The SGML docs aren't in the DBA's face and are way out of the way for\n> > DBAs rolling out a new system or who are tuning the system. SGML ==\n> > Developer, conf == DBA.\n> \n> I could not disagree more. I'd say more like, if the dba won't read\n> the manual, get yourself a real dba. Sorry, but so-called\n> professionals who won't learn their tools have no home in my shop.\n> \n\nI don' want to come off confrontational, so please don't take this as an\nattack.\n\nAre you willing to say that the PostgreSQL database system should only be\nused by DBAs? I believe that Postgres is such a good and useful tool that\nanyone should be able to start using it with little or no barrier to entry.\n\nI don't believe I'm alone in this opinion either. As a matter of fact, this\nphilosophy is being adopted by many in the software industry. Note that\nLinux and many other OSs that act as servers are being made more secure and\neasier to use __out of the box__ so that a person can simply install from cd\nand start using the tool with out too much difficulty.\n\nMaybe your definition of \"dba\" is broader than mine and what you mean is,\n\"someone who installs a postgres database\". Also, by manual, are you\nreferring to the 213 page Administration guide, or are you talking about the\n340 page Reference Manual? Let us rephrase your statement like this: \"If\nthe [person who installs a postgres database] won't read the [340 page\nreference] manual, then that person should go find a different database to\nuse.\"\n\nI think that the postgres installation procedure, .conf files and\ndocumentation can be modified in such a way that a newbie (we were all\nnewbies once) can have a good \"out of box experience\" with little effort.\nThat means they can __quickly__ get a __good performing__ database up and\nrunning with __little effort__ and without needing to subscribe to a mailing\nlist or read a book.\n\nI have seen software projects that have what I call an \"elitist\" attitude;\nmeaning they expect you to be an expert or dedicated to their software in\norder to use it. Invariably this mentality stifles the usefulness of the\nproduct. It seems that there is a relative minority of people on this list\nwho feel that you have to be \"elite\" in order to have a good working\npostgres installation. I don't feel that should be a requirement or even a\nconsideration.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n\n",
"msg_date": "Mon, 7 Jul 2003 09:11:39 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> Are you willing to say that the PostgreSQL database system should only be\n> used by DBAs? I believe that Postgres is such a good and useful tool that\n> anyone should be able to start using it with little or no barrier\n> to entry.\n\nThis is a good point. After reading previous responses I was starting to\nfeel like the only non-DBA Postgres user on this list. I'm a java\narchitect/developer and until recently I knew very little about databases.\nI just learned what an index was while trying to tune Postgres. I imagine\nsome of you are even laughing reading this but it's true. In Java land, the\nO/R mapping tools are getting so good that you don't have to be a database\nexpert to use the database. I'm using JDO which generates my database\ntables and indexes automatically. But you do need to learn about the\ndatabase in order to increase performance and optimize the settings. I'm\nsure I'm not the only developer who is overwhelmed by the Postgres\ndocumentation and configuration files.\n\nRegards,\nMichael\n\n\n",
"msg_date": "Mon, 7 Jul 2003 15:23:40 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n> Are you willing to say that the PostgreSQL database system should only be\n> used by DBAs? I believe that Postgres is such a good and useful tool that\n> anyone should be able to start using it with little or no barrier to entry.\n\nI quite agree. But there is a difference between saying \"you should get\ndecent performance with no effort\" and \"you should get optimal\nperformance with no effort\". I think we can get to the first with\nrelatively little trouble (like boosting the default shared_buffers to\n1000), but the second is an impractical goal.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jul 2003 09:30:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
{
"msg_contents": "Matthew Nuzum wrote:\n\n>>I'm highly resistant to/disappointed in this attitude and firmly\n>>believe that there are well understood algorithms that DBAs use to\n>>diagnose and solve performance problems. It's only a black art\n>>because it hasn't been documented. Performance tuning isn't voodoo,\n>>it's adjusting constraints to align with the execution of applications\n>>and we know what the applications do, therefore the database can mold\n>>to the applications' needs. \n>> \n>>\n>\n>I agree.\n>\n>We often seem to forget simple lessons in human nature. Expecting someone\n>to spend 20 extra seconds to do something is often too much. In many cases,\n>the only \"manual\" that a person will see is the .conf files.\n>\n> \n>\nIn my opinion, a serious RDBMS system will *always* require the admin to \nbe doing research in order to learn how to use it effectively. We are \nnot talking about a word processor here.\n\nThat being said, I think that a good part of the problem is that admins \ndon't know where to look for the appropriate documentation and what is \nneeded. Expecting someone to spend 20 seconds looking for a piece of \ninfo is not too bad, but expecting them to spend hours trying to figure \nout what info is relavent is not going to get us anywhere.\n\nFor those who have been following the discussion relating to MySQL vs \nPostgreSQL, I think this is relavent here. MySQL does much of its \ntuning at compile time, and the MySQL team very carefully controls the \nbuild process for the various binary distriutions they offer. If you \nwant to see a real mess, try compiling MySQL from source. Talk about \nhaving to read documentation on items which *should* be handled by the \nconfigure script. \n\nOTOH, PostgreSQL is optomized using configuration files and is tunable \non the fly. This is, I think, a better approach but it needs to be \nbetter documented. Maybe a \"Beginner's guide to database server tuning\" \nor something like that.\n\nSecondly, documenting the tuning algorythms well my allow PostgreSQL to \nautomatically tune itself to some extent or for the development of \nperformance tuning tools for the server. This would be a big win for \nthe project. Unfortunately I am not knowledgable on this topic to \nreally do this subject justice.\n\nBest Wishes,\nChris Travers\n\n",
"msg_date": "Mon, 07 Jul 2003 10:08:50 -0700",
"msg_from": "Chris Travers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Mon, Jul 07, 2003 at 10:08:50AM -0700, Chris Travers wrote:\n> In my opinion, a serious RDBMS system will *always* require the admin to \n> be doing research in order to learn how to use it effectively. We are \n> not talking about a word processor here.\n> \n> That being said, I think that a good part of the problem is that admins \n> don't know where to look for the appropriate documentation and what is \n> needed. Expecting someone to spend 20 seconds looking for a piece of \n> info is not too bad, but expecting them to spend hours trying to figure \n> out what info is relavent is not going to get us anywhere.\n \nSomething else to consider is that this is made worse because tuning for\npgsql is quite different than tuning for something like Oracle or DB2,\nwhich don't deal as much with metrics such as random access cost v.\nsequential access. They also take the approach of 'give me as much\nmemory as you can; I'll take it from there, thankyouverymuch', which\nmakes effective_cache_size a bit of a mystery.\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 7 Jul 2003 16:40:48 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": ">>Are you willing to say that the PostgreSQL database system should only be\n>>used by DBAs? I believe that Postgres is such a good and useful tool that\n>>anyone should be able to start using it with little or no barrier to entry.\n> \n> \n> I quite agree. But there is a difference between saying \"you should get\n> decent performance with no effort\" and \"you should get optimal\n> performance with no effort\". I think we can get to the first with\n> relatively little trouble (like boosting the default shared_buffers to\n> 1000), but the second is an impractical goal.\n\n\nJust wanted to repeat some of the thoughts already been expressed.\n\nThere are no reasons why shouldn't PostgreSQL be reasonably well \nconfigured for a particular platform out of the box. Not for maximum \nperformance but for good enough performance. The many complaints by new \nusers about PostgreSQL being suprisingly slow and the all the so \nstandard answers (vacuum, pump up memory settings) imho prove that the \ndefault installatio can be improved. Already mentioned in the mail \nlists: using multiple standard conf files, quering system info and \ndynamically generating all or some parts of the conf file, automating \nthe vacuum process...\n\nKaarel\n\n",
"msg_date": "Wed, 09 Jul 2003 21:33:35 +0300",
"msg_from": "Kaarel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Wed, 9 Jul 2003, Kaarel wrote:\n\n> >>Are you willing to say that the PostgreSQL database system should only be\n> >>used by DBAs? I believe that Postgres is such a good and useful tool that\n> >>anyone should be able to start using it with little or no barrier to entry.\n> > \n> > \n> > I quite agree. But there is a difference between saying \"you should get\n> > decent performance with no effort\" and \"you should get optimal\n> > performance with no effort\". I think we can get to the first with\n> > relatively little trouble (like boosting the default shared_buffers to\n> > 1000), but the second is an impractical goal.\n> \n> \n> Just wanted to repeat some of the thoughts already been expressed.\n> \n> There are no reasons why shouldn't PostgreSQL be reasonably well \n> configured for a particular platform out of the box. Not for maximum \n> performance but for good enough performance. The many complaints by new \n> users about PostgreSQL being suprisingly slow and the all the so \n> standard answers (vacuum, pump up memory settings) imho prove that the \n> default installatio can be improved. Already mentioned in the mail \n> lists: using multiple standard conf files, quering system info and \n> dynamically generating all or some parts of the conf file, automating \n> the vacuum process...\n\nIt would be nice to have a program that could run on any OS postgresql \nruns on and could report on the current limits of the kernel, and make \nrecommendations for changes the admin might want to make.\n\nOne could probably make a good stab at effective cache size during \ninstall. Anything reasonably close would probably help.\n\nReport what % of said resources could be consumed by postgresql under \nvarious circumstances...\n\n",
"msg_date": "Wed, 9 Jul 2003 13:40:45 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Scott Marlowe wrote:\n> \n> It would be nice to have a program that could run on any OS postgresql \n> runs on and could report on the current limits of the kernel, and make \n> recommendations for changes the admin might want to make.\n> \n> One could probably make a good stab at effective cache size during \n> install. Anything reasonably close would probably help.\n> \n> Report what % of said resources could be consumed by postgresql under \n> various circumstances...\n> \n\nOne of the issues that automating the process would encounter are limits \nin the kernel that are too low for PostgreSQL to handle. The BSD's come \nto mind where they need values manually increased in the kernel before \nyou can reach a reasonable maximum connection count.\n\nAnother example is how OpenBSD will outright crash when trying to test \nthe database during install time. It seems that most of the tests fail \nbecause the maximum amount of processes allowed is too low for the test \nto succeed. While FreeBSD will work just fine on those same tests.\n\nIf PostgreSQL automates the configuration, that would be a plus. But \nalso detect the platform and inform the person that these changes should \nbe done to the kernel, sysctl or whatever in order to have that \nconfiguration run.\n\nPerl may be useful in this for a few reasons. It's portable enough to \nrun on multiple Unix variants and the tools would be fairly standard, so \nthe code would require less considerations for more exotic implementations.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Wed, 09 Jul 2003 15:45:05 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> I don't have much to add because I'm pretty new to Postgres and have\n> been soliciting advice here recently, but I totally agree with\n> everything you said. I don't mind if it's in the postgres.conf file\n> or in a faq that is easy to find, I just would like it to be in one\n> place. A good example of the need for this is when I was tuning\n> \"effective_cache\" I thought that was creating a cache for Postgres\n> when in fact as it was pointed out to me, it's just hinting to\n> postgres the size of the OS cache. Lots of ways for people to get\n> really confused here.\n\nI looked through the src/doc/runtime.sgml for a good place to stick\nthis and couldn't find a place that this seemed appropriate, but on\nFreeBSD, this can be determined with a great deal of precision in a\nprogrammatic manner:\n\necho \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n\nThe same OID is available via C too. It'd be slick if PostgreSQL\ncould tune itself (on FreeBSD) at initdb time with the above code. If\nLinux exports this info via /proc and can whip out the appropriate\nmagic, even better. An uncommented out good guess that shows up in\npostgresql.conf would be stellar and quite possible with the use of\nsed.\n\nMaybe an initdb switch could be added to have initdb tune the config\nit generates? If a -n is added, have it generate a config and toss it\nto stdout?\n\n\ncase `uname` in\n\"FreeBSD\")\n echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n ;;\n*)\n echo \"Unable to automatically determine the effective cache size\" >> /dev/stderr\n ;;\nesac\n\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 9 Jul 2003 16:30:31 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean Chittenden wrote:\n> \n> I looked through the src/doc/runtime.sgml for a good place to stick\n> this and couldn't find a place that this seemed appropriate, but on\n> FreeBSD, this can be determined with a great deal of precision in a\n> programmatic manner:\n> \n> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> \n> The same OID is available via C too. It'd be slick if PostgreSQL\n> could tune itself (on FreeBSD) at initdb time with the above code. If\n> Linux exports this info via /proc and can whip out the appropriate\n> magic, even better. An uncommented out good guess that shows up in\n> postgresql.conf would be stellar and quite possible with the use of\n> sed.\n> \n> Maybe an initdb switch could be added to have initdb tune the config\n> it generates? If a -n is added, have it generate a config and toss it\n> to stdout?\n> \n> \n> case `uname` in\n> \"FreeBSD\")\n> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> ;;\n> *)\n> echo \"Unable to automatically determine the effective cache size\" >> /dev/stderr\n> ;;\n> esac\n> \n> \n> -sc\n> \n\nSimplest way may be to create a 'auto-tune' directory with scripts for \nconfigured platforms. When postgres installs the databases, it checks \nfor 'tune.xxx' and if found uses that to generate the script itself?\n\nThis would allow for defaults on platforms that do not have them and \noptimization for those that do.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Wed, 09 Jul 2003 18:23:44 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "\nKeep in mind that if we auto-tune, we will only be able to do it for\nsome platforms, so we will need a table that shows which settings are\nautotuned for each platform.\n\n---------------------------------------------------------------------------\n\nSean Chittenden wrote:\n> > I don't have much to add because I'm pretty new to Postgres and have\n> > been soliciting advice here recently, but I totally agree with\n> > everything you said. I don't mind if it's in the postgres.conf file\n> > or in a faq that is easy to find, I just would like it to be in one\n> > place. A good example of the need for this is when I was tuning\n> > \"effective_cache\" I thought that was creating a cache for Postgres\n> > when in fact as it was pointed out to me, it's just hinting to\n> > postgres the size of the OS cache. Lots of ways for people to get\n> > really confused here.\n> \n> I looked through the src/doc/runtime.sgml for a good place to stick\n> this and couldn't find a place that this seemed appropriate, but on\n> FreeBSD, this can be determined with a great deal of precision in a\n> programmatic manner:\n> \n> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> \n> The same OID is available via C too. It'd be slick if PostgreSQL\n> could tune itself (on FreeBSD) at initdb time with the above code. If\n> Linux exports this info via /proc and can whip out the appropriate\n> magic, even better. An uncommented out good guess that shows up in\n> postgresql.conf would be stellar and quite possible with the use of\n> sed.\n> \n> Maybe an initdb switch could be added to have initdb tune the config\n> it generates? If a -n is added, have it generate a config and toss it\n> to stdout?\n> \n> \n> case `uname` in\n> \"FreeBSD\")\n> echo \"effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))\"\n> ;;\n> *)\n> echo \"Unable to automatically determine the effective cache size\" >> /dev/stderr\n> ;;\n> esac\n> \n> \n> -sc\n> \n> -- \n> Sean Chittenden\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 20 Jul 2003 15:20:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Michael Pohl wrote:\n> On Sun, 6 Jul 2003, Matthew Nuzum wrote:\n> \n> > At the very least, if there is good documentation for these parameters,\n> > maybe the conf file should provide a link to this info. \n> \n> I believe that is what Josh is proposing:\n> \n> http://archives.postgresql.org/pgsql-performance/2003-07/msg00102.php\n> \n> > [Apache httpd] uses a three phase (if not more) documentation level. \n> > The .conf file contains detailed instructions in an easy to read and\n> > not-to-jargon-ish structure. The docs provide detailed tutorials and\n> > papers that expand on configuration params in an easy to read format. \n> > Both of these refer to the thorough reference manual that breaks each\n> > possible option down into it's nitty gritty details so that a user can\n> > get more information if they so desire.\n> \n> I agree that Apache's approach is primo. Often the .conf comments are\n> enough to jog my memory about a directive I haven't used for a while. Or\n> the comments are enough to let me know I don't need a directive, or that I\n> need to go to the manual and read more. I appreciate that.\n\nIsn't that what we have now --- isn't postgresql.conf clear enough to\njog people's memory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 20 Jul 2003 15:21:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "[jumping in late due to vacation]\n\nOn Thu, 3 Jul 2003 17:06:46 -0700, Sean Chittenden\n<[email protected]> wrote:\n>> is some other problem that needs to be solved. (I'd wonder about\n>> index correlation myself; we know that that equation is pretty\n>> bogus.)\n>\n>Could be. I had him create a multi-column index on the date and a\n>non-unique highly redundant id.\n\nTom has already suspected index correlation to be a possible source of\nthe problem and recommended to CLUSTER on the index. A weakness of\nthe current planner implementation is that a multi column index is\nalways thought to have low correlation. In your case even after\nCLUSTER the 2-column index on (date, sensorid) is treated like a\nsingle column index with correlation 0.5.\n\nI have an experimental patch lying around somewhere that tries to work\naround these problems by offering different estimation methods for\nindex scans. If you are interested, I'll dig it out.\n\nIn the meantime have him try with a single column index on date.\n\nOn 04 Jul 2003 08:29:04 -0400, Rod Taylor <[email protected]> wrote:\n|That's one heck of a poor estimate for the number of rows returned.\n|\n|> -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12)\n| (actual time=24253.66..24319.87 rows=320 loops=1)\n\n> -> Index Scan using mss_fwevent_evtime_sensorid_idx on mss_fwevent\n> (cost=0.00..2442524.70 rows=168478 width=12)\n> (actual time=68.36..132.84 rows=320 loops=1)\n> Index Cond: ((evtime > (now() - '06:00'::interval)) AND (sensorid = 7))\n> Filter: (NOT \"action\")\n\nEstimated number of rows being wrong by a factor 500 seems to be the\nmain problem hiding everything else. With statistics already set to\n1000, does this mean that sensorid, evtime, and action are not\nindependent? It'd be interesting to know whether the estimation error\ncomes from \"Index Cond\" or from \"Filter\".\n\nServus\n Manfred\n",
"msg_date": "Thu, 31 Jul 2003 19:37:40 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> >> is some other problem that needs to be solved. (I'd wonder about\n> >> index correlation myself; we know that that equation is pretty\n> >> bogus.)\n> >\n> >Could be. I had him create a multi-column index on the date and a\n> >non-unique highly redundant id.\n> \n> Tom has already suspected index correlation to be a possible source\n> of the problem and recommended to CLUSTER on the index. A weakness\n> of the current planner implementation is that a multi column index\n> is always thought to have low correlation. In your case even after\n> CLUSTER the 2-column index on (date, sensorid) is treated like a\n> single column index with correlation 0.5.\n\nHowdy. Well, I got far enough with the guy in the testing to figure\nout that it wasn't a single vs multi-column index problem, however I\nhaven't heard back from him regarding the use of CLUSTER. Ce est la\nIRC. :-p\n\n> I have an experimental patch lying around somewhere that tries to\n> work around these problems by offering different estimation methods\n> for index scans. If you are interested, I'll dig it out.\n\nSure, I'll take a gander... had my head in enough Knuth recently to\neven hopefully have some kind of a useful response to the patch.\n\n> In the meantime have him try with a single column index on date.\n\nBeen there, done that: no change.\n\n> |That's one heck of a poor estimate for the number of rows returned.\n> |\n> |> -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12)\n> | (actual time=24253.66..24319.87 rows=320 loops=1)\n> \n> > -> Index Scan using mss_fwevent_evtime_sensorid_idx on mss_fwevent\n> > (cost=0.00..2442524.70 rows=168478 width=12)\n> > (actual time=68.36..132.84 rows=320 loops=1)\n> > Index Cond: ((evtime > (now() - '06:00'::interval)) AND (sensorid = 7))\n> > Filter: (NOT \"action\")\n> \n> Estimated number of rows being wrong by a factor 500 seems to be the\n> main problem hiding everything else. With statistics already set to\n> 1000, does this mean that sensorid, evtime, and action are not\n> independent? It'd be interesting to know whether the estimation\n> error comes from \"Index Cond\" or from \"Filter\".\n\nHrm... sensorid is sequence and grows proportional with evtime,\nobviously. Action is a char(1) or something like that (ie: not\nunique). See the EXPLAIN ANALYZEs that I posted in msgid:\[email protected].. or at the bottom of this\nmsg.\n\nHaving spent a fair amount of time looking at the two following plans,\nit seems as though an additional statistic is needed to change the\ncost of doing an index lookup when the index is linearly ordered.\nWhether CLUSTER does this or not, I don't know, I never heard back\nfrom him after getting the runtime down to a few ms. :-/ Are indexes\non linearly ordered data rebalanced somehow? I thought CLUSTER only\nreordered data on disk. 
-sc\n\n\n\nPlan for normal random_page_cost:\n\nmss_masterlog=> SHOW random_page_cost;\n random_page_cost\n------------------\n 4\n(1 row)\n\nmss_masterlog=> EXPLAIN ANALYZE SELECT srca, COUNT(srca) FROM mss_fwevent WHERE\nmss_masterlog-> sensorid = 7 AND evtime > (now() - '6 hours'::INTERVAL)\nmss_masterlog-> AND NOT action GROUP BY srca ORDER BY COUNT DESC LIMIT 20;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=240384.69..240384.74 rows=20 width=12) (actual time=24340.04..24340.39 rows=20 loops=1)\n -> Sort (cost=240384.69..240426.80 rows=16848 width=12) (actual time=24340.02..24340.14 rows=21 loops=1)\n Sort Key: count(srca)\n -> Aggregate (cost=237938.36..239201.95 rows=16848 width=12) (actual time=24322.84..24330.73 rows=23 loops=1)\n -> Group (cost=237938.36..238780.75 rows=168478 width=12) (actual time=24322.57..24328.45 rows=320 loops=1)\n -> Sort (cost=237938.36..238359.55 rows=168478 width=12) (actual time=24322.55..24324.34 rows=320 loops=1)\n Sort Key: srca\n -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12) (actual time=24253.66..24319.87 rows=320 loops=1)\n Filter: ((sensorid = 7) AND (evtime > (now() - '06:00'::interval)) AND (NOT \"action\"))\n Total runtime: 24353.67 msec\n(10 rows)\n\n\nPlan for altered random_page_cost:\n\nmss_masterlog=> SET random_page_cost = 0.32;\nSET\nmss_masterlog=> EXPLAIN ANALYZE SELECT srca, COUNT(srca) FROM mss_fwevent WHERE\nmss_masterlog-> sensorid = 7 AND evtime > (now() - '6 hours'::INTERVAL)\nmss_masterlog-> AND NOT action GROUP BY srca ORDER BY COUNT DESC LIMIT 20;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------\n Limit (cost=227274.85..227274.90 rows=20 width=12) (actual time=28.42..28.75 rows=20 loops=1)\n -> Sort (cost=227274.85..227316.97 rows=16848 width=12) (actual time=28.41..28.52 rows=21 loops=1)\n Sort Key: count(srca)\n -> Aggregate (cost=224828.52..226092.11 rows=16848 width=12) (actual time=20.26..28.13 rows=23 loops=1)\n -> Group (cost=224828.52..225670.91 rows=168478 width=12) (actual time=19.99..25.86 rows=320 loops=1)\n -> Sort (cost=224828.52..225249.72 rows=168478 width=12) (actual time=19.98..21.76 rows=320 loops=1)\n Sort Key: srca\n -> Index Scan using mss_fwevent_evtime_sensorid_idx on mss_fwevent (cost=0.00..210202.76 rows=168478 width=12) (actual time=0.35..17.61\nrows=320 loops=1)\n Index Cond: ((evtime > (now() - '06:00'::interval)) AND (sensorid = 7))\n Filter: (NOT \"action\")\n Total runtime: 29.09 msec\n(11 rows)\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 5 Aug 2003 15:26:09 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "On Tue, 5 Aug 2003 15:26:09 -0700, Sean Chittenden\n<[email protected]> wrote:\n>> I have an experimental patch lying around somewhere that tries to\n>> work around these problems by offering different estimation methods\n>> for index scans. If you are interested, I'll dig it out.\n>\n>Sure, I'll take a gander... had my head in enough Knuth recently to\n>even hopefully have some kind of a useful response to the patch.\n\nSean, the patch is at http://www.pivot.at/pg/16-correlation-732.diff.\nA short description of its usage can be found at\nhttp://archives.postgresql.org/pgsql-performance/2002-11/msg00256.php.\nIf you are interested how the different interpolation methods work,\nread the source - it shouldn't be too hard to find.\n\nYou might also want to read the thread starting at\nhttp://archives.postgresql.org/pgsql-hackers/2002-10/msg00072.php.\n\n>> does this mean that sensorid, evtime, and action are not\n>> independent?\n>\n>Hrm... sensorid is sequence and grows proportional with evtime,\n>obviously.\n\nSo a *low* sensorid (7) is quite uncommon for a *late* evtime? This\nwould help understand the problem. Unfortunately I have no clue what\nto do about it. :-(\n\n>Having spent a fair amount of time looking at the two following plans,\n>it seems as though an additional statistic is needed to change the\n>cost of doing an index lookup when the index is linearly ordered.\n\nI'm not sure I understand what you mean by \"index is linearly\nordered\", but I guess correlation is that statistic you are talking\nabout. However, it is calculated per column, not per index.\n\n>Whether CLUSTER does this or not, I don't know,\n\nIf you CLUSTER on an index and then ANALYSE, you get a correlation of\n1.0 (== optimum) for the first column of the index.\n\n> I never heard back\n>from him after getting the runtime down to a few ms. :-/\n\nPity! I'd have liked to see EXPLAIN ANALYSE for\n\n\tSELECT *\n\t FROM mss_fwevent\n\t WHERE sensorid = 7\n\t AND evtime > (now() - '6 hours'::INTERVAL)\n\t AND NOT action;\n\n\tSELECT *\n\t FROM mss_fwevent\n\t WHERE sensorid = 7\n\t AND evtime > (now() - '6 hours'::INTERVAL);\n\n\tSELECT *\n\t FROM mss_fwevent\n\t WHERE evtime > (now() - '6 hours'::INTERVAL);\n\n\tSELECT *\n\t FROM mss_fwevent\n\t WHERE sensorid = 7;\n\n\n> Are indexes\n>on linearly ordered data rebalanced somehow? I thought CLUSTER only\n>reordered data on disk. -sc\n\nAFAIK CLUSTER re-creates all indices belonging to the table.\n\nServus\n Manfred\n",
"msg_date": "Thu, 07 Aug 2003 16:44:41 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "> >> I have an experimental patch lying around somewhere that tries to\n> >> work around these problems by offering different estimation methods\n> >> for index scans. If you are interested, I'll dig it out.\n> >\n> >Sure, I'll take a gander... had my head in enough Knuth recently to\n> >even hopefully have some kind of a useful response to the patch.\n> \n> Sean, the patch is at http://www.pivot.at/pg/16-correlation-732.diff.\n> A short description of its usage can be found at\n> http://archives.postgresql.org/pgsql-performance/2002-11/msg00256.php.\n> If you are interested how the different interpolation methods work,\n> read the source - it shouldn't be too hard to find.\n> \n> You might also want to read the thread starting at\n> http://archives.postgresql.org/pgsql-hackers/2002-10/msg00072.php.\n\nHrm... let me bop back in my archives and reply there... very\ninteresting work though. I hope a reasonable algorythm can be found\nin time for 7.5, or even 7.4 as this seems to be biting many people\nand the current algo is clearly not right.\n\n> >> does this mean that sensorid, evtime, and action are not\n> >> independent?\n> >\n> >Hrm... sensorid is sequence and grows proportional with evtime,\n> >obviously.\n> \n> So a *low* sensorid (7) is quite uncommon for a *late* evtime? This\n> would help understand the problem. Unfortunately I have no clue what\n> to do about it. :-(\n\nCorrect.\n\n> >Having spent a fair amount of time looking at the two following plans,\n> >it seems as though an additional statistic is needed to change the\n> >cost of doing an index lookup when the index is linearly ordered.\n> \n> I'm not sure I understand what you mean by \"index is linearly\n> ordered\", but I guess correlation is that statistic you are talking\n> about. However, it is calculated per column, not per index.\n\nIf two rows are id's 123456 and 123457, what are the odds that the\ntuples are going to be on the same page? ie, if 123456 is read, is\n123457 already in the OS or PostgreSQL's disk cache?\n\n> >Whether CLUSTER does this or not, I don't know,\n> \n> If you CLUSTER on an index and then ANALYSE, you get a correlation of\n> 1.0 (== optimum) for the first column of the index.\n\nCorrelating of what to what? Of data to nearby data? Of data to\nrelated data (ie, multi-column index?)? Of related data to pages on\ndisk? Not 100% sure in what context you're using the word\ncorrelation...\n\nBut that value will degrade after time and at what rate? Does ANALYZE\nmaintain that value so that it's kept acurrate? The ANALYZE page was\nlacking in terms of implementation details in terms of how many rows\nANALYZE actually scans on big tables, which could dramatically affect\nthe correlation of a table after time if ANALYZE is maintaining the\ncorrelation for a column.\n\n> > I never heard back from him after getting the runtime down to a\n> > few ms. :-/\n> \n> Pity! I'd have liked to see EXPLAIN ANALYSE for\n> \n> \tSELECT *\n> \t FROM mss_fwevent\n> \t WHERE sensorid = 7\n> \t AND evtime > (now() - '6 hours'::INTERVAL)\n> \t AND NOT action;\n> \n> \tSELECT *\n> \t FROM mss_fwevent\n> \t WHERE sensorid = 7\n> \t AND evtime > (now() - '6 hours'::INTERVAL);\n> \n> \tSELECT *\n> \t FROM mss_fwevent\n> \t WHERE evtime > (now() - '6 hours'::INTERVAL);\n> \n> \tSELECT *\n> \t FROM mss_fwevent\n> \t WHERE sensorid = 7;\n\nditto\n\n> > Are indexes\n> >on linearly ordered data rebalanced somehow? I thought CLUSTER only\n> >reordered data on disk. 
-sc\n> \n> AFAIK CLUSTER re-creates all indices belonging to the table.\n\nAs of 7.3 or 7.4, yes. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 7 Aug 2003 13:24:26 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003..."
},
{
"msg_contents": "Sean Chittenden <[email protected]> writes:\n>> If you CLUSTER on an index and then ANALYSE, you get a correlation of\n>> 1.0 (== optimum) for the first column of the index.\n\n> Correlating of what to what? Of data to nearby data? Of data to\n> related data (ie, multi-column index?)? Of related data to pages on\n> disk? Not 100% sure in what context you're using the word\n> correlation...\n\nThe correlation is between index order and heap order --- that is, are\nthe tuples in the table physically in the same order as the index?\nThe better the correlation, the fewer heap-page reads it will take to do\nan index scan.\n\nNote it is possible to measure correlation without regard to whether\nthere actually is any index; ANALYZE is simply looking to see whether\nthe values appear in increasing order according to the datatype's\ndefault sort operator.\n\nOne problem we have is extrapolating from the single-column correlation\nstats computed by ANALYZE to appropriate info for multi-column indexes.\nIt might be that the only reasonable fix for this is for ANALYZE to\ncompute multi-column stats too when multi-column indexes are present.\nPeople are used to the assumption that you don't need to re-ANALYZE\nafter creating a new index, but maybe we'll have to give that up.\n\n> But that value will degrade after time and at what rate? Does ANALYZE\n> maintain that value so that it's kept acurrate?\n\nYou keep it up to date by ANALYZE-ing at suitable intervals. It's no\ndifferent from any other statistic.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Aug 2003 19:31:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Moving postgresql.conf tunables into 2003... "
},
{
"msg_contents": "On Thu, 07 Aug 2003 19:31:52 -0400, Tom Lane <[email protected]>\nwrote:\n>The correlation is between index order and heap order --- that is, are\n>the tuples in the table physically in the same order as the index?\n>The better the correlation, the fewer heap-page reads it will take to do\n>an index scan.\n\nThis is true for a column that is the first column of a btree index.\nCorrelation doesn't help with additional index columns and with\nfunctional indices.\n\n>Note it is possible to measure correlation without regard to whether\n>there actually is any index;\n\nBut there is no need to, because the correlation is only used for\nindex access cost estimation.\n\n>One problem we have is extrapolating from the single-column correlation\n>stats computed by ANALYZE to appropriate info for multi-column indexes.\n>It might be that the only reasonable fix for this is for ANALYZE to\n>compute multi-column stats too when multi-column indexes are present.\n\nI wonder whether it would be better to drop column correlation and\ncalculate index correlation instead, i.e. correlation of index tuples\nto heap tuple positions. This would solve both the multi-column index\nand the functional index cost estimation problem.\n\n>People are used to the assumption that you don't need to re-ANALYZE\n>after creating a new index, but maybe we'll have to give that up.\n\nIndex correlation would be computed on CREATE INDEX and whenever the\nheap relation is analysed ...\n\nServus\n Manfred\n",
"msg_date": "Fri, 08 Aug 2003 18:52:44 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Index correlation (was: Moving postgresql.conf tunables into 2003...\n )"
},
{
"msg_contents": "I have a reporting system that does regular queries on a table with a\nmultipart index. I am running version 7.3.4. Here is the table\ndefinition:\n\n Table \"public.ds_rec_fld\"\n Column | Type | Modifiers\n---------------+-------------------------+-----------\n dsid | character varying(20) | not null\n recid | integer | not null\n field_name | character varying(20) | not null\n option_tag | character varying(10) | not null\n option_value | integer |\n field_text | character varying(2000) |\n field_type_cd | character varying(8) |\nIndexes: ds_rf_ndx1 btree (recid, field_name, option_value)\n\nNormally queries are done using recid and field_name, so Postgresql\nreturns rows very quickly as expected. Here is a sample explain\nanalyze output for a typical query:\n\ndb=> explain analyze\ndb-> select field_name, option_tag from ds_rec_fld where recid = 3000\nand field_name = 'Q3A1';\n QUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using ds_rf_ndx1 on ds_rec_fld (cost=0.00..163.09 rows=40\nwidth=38) (actual time=0.06..0.07 rows=1 loops=1)\n Index Cond: ((recid = 3001) AND (field_name = 'Q3A1'::character\nvarying))\n Total runtime: 0.12 msec\n(3 rows)\n \nThe problem comes in when we are selecting multiple field_name values\nin one query. The normal SQL syntax we have been using is like this:\n\nselect field_name, option_tag from ds_rec_fld where recid = 3001 and\nfield_name in ('Q3A1', 'Q3A9');\n\nThis is just a simplified example, at times there can be a lot of\nfield_name values in one query in the \"in\" clause. Here postgresql\nrefuses to use the full index, instead doing a filter based on part of\nthe first recid part of index. Here is the explain analyze output:\n\n Index Scan using ds_rf_ndx1 on ds_rec_fld (cost=0.00..30425.51\nrows=80 width=38) (actual time=0.18..1.08 rows=2 loops=1)\n Index Cond: (recid = 3001)\n Filter: ((field_name = 'Q3A1'::character varying) OR (field_name =\n'Q3A9'::character varying))\n Total runtime: 1.12 msec\n(4 rows)\n\nSo, 10 times longer. This is an issue because at times we are\niterating through thousands of recid values.\n\nI did a vacuum analyze, adjusted random_page_cost, etc. all to no\navail. \n\nI also noticed that the problem goes away when I reformat the query\nlike this:\n\nselect field_name, option_tag from ds_rec_fld where \n(recid = 3001 and field_name = 'Q3A1') or\n(recid = 3001 and field_name = 'Q3A9')\n\nHere is the explain analyze output for this:\n\n Index Scan using ds_rf_ndx1, ds_rf_ndx1 on ds_rec_fld \n(cost=0.00..326.57 rows=80 width=38) (actual time=0.07..0.10 rows=2\nloops=1)\n Index Cond: (((recid = 3001) AND (field_name = 'Q3A1'::character\nvarying)) OR ((recid = 3001) AND (field_name = 'Q3A9'::character\nvarying)))\n Total runtime: 0.16 msec\n(3 rows)\n\nMuch better. So I have partially solved my own problem, but there are\nother places that this is not this simple to fix.\n\nTherefore, my question is, is there some way to force postgresql to use\nthe full index and still stick with the shorter \"field_name in ('...',\n'...')\" syntax? \n\nIf anyone has any thoughts please let me know. Also it strikes me that\nperhaps the optimizer could be tweaked to treat the first case like the\nsecond one. Thanks in advance,\n\nRob\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search\nhttp://shopping.yahoo.com\n",
"msg_date": "Thu, 23 Oct 2003 11:18:31 -0700 (PDT)",
"msg_from": "Rob Messer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Use of multipart index with \"IN\""
},
{
"msg_contents": "Rob Messer <[email protected]> writes:\n> The problem comes in when we are selecting multiple field_name values\n> in one query. The normal SQL syntax we have been using is like this:\n\n> select field_name, option_tag from ds_rec_fld where recid = 3001 and\n> field_name in ('Q3A1', 'Q3A9');\n\nYou'd have better luck if field_name were the first column of the\ntwo-column index. See the archives.\n\nImproving this situation is on the to-do list but it seems not trivial\nto fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Oct 2003 12:17:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of multipart index with \"IN\" "
}
] |
[
{
"msg_contents": "Hi,\n\nhas anybody tested PostgreSQL 7.3.x tables agains MySQL 4.0.12/13 with InnoDB?\n\n\nRegards,\nRafal \n\n",
"msg_date": "Fri, 04 Jul 2003 12:03:03 +0200",
"msg_from": "Rafal Kedziorski <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Friday 04 Jul 2003 11:03 am, Rafal Kedziorski wrote:\n> Hi,\n>\n> has anybody tested PostgreSQL 7.3.x tables agains MySQL 4.0.12/13 with\n> InnoDB?\n\nLots of people probably. The big problem is that unless the tester's setup \nmatches your intended usage the results are of little worth.\n\nFor the tests to be meaningful, you need the same:\n - hardware\n - OS\n - query complexity\n - usage patterns\n - tuning options\n\nI'd suggest running your own tests with real data where possible. Just to make \nthe situation more interesting, the best way to solve a problem in PG isn't \nnecessarily the same in MySQL.\n\n From my experience and general discussion on the lists, I'd say MySQL can win \nfor:\n - simple selects\n - some aggregates (e.g. count(*))\nPG wins for:\n - complex queries\n - large numbers of clients\n - stored procedures/functions\n - SQL compliance\n\n-- \n Richard Huxton\n",
"msg_date": "Fri, 4 Jul 2003 12:12:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "I recently took a system from MySQL to Postgres. Same HW, SW, same data.\nThe major operations where moderately complex queries (joins on 8 tables).\n\nThe results we got was that Postgres was fully 3 times slower than MySql.\nWe were on this list a fair bit looking for answers and tried all the\nstandard answers. It was still much much much slower.\n\nBrian Tarbox\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Rafal\nKedziorski\nSent: Friday, July 04, 2003 6:03 AM\nTo: [email protected]\nSubject: [PERFORM] PostgreSQL vs. MySQL\n\n\nHi,\n\nhas anybody tested PostgreSQL 7.3.x tables agains MySQL 4.0.12/13 with\nInnoDB?\n\n\nRegards,\nRafal\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Fri, 4 Jul 2003 08:27:36 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n>\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nI'm curious what the usage was. How many concurrent processes were\nperforming the complex queries? I've heard that Postgres does better when\nthe number of concurrent users is high and MySQL does better when the number\nis low. I have no idea if that is true or not.\n\nMichael\n\n\n",
"msg_date": "Fri, 4 Jul 2003 14:36:00 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "I'm actually leaving this list but I can answer this question. Our results\nwere with a single user and we were running Inodb. We were running on\nRedHat 8.0 / 9.0 with vanilla linux settings.\n\nBrian\n\n-----Original Message-----\nFrom: Michael Mattox [mailto:[email protected]]\nSent: Friday, July 04, 2003 8:36 AM\nTo: Brian Tarbox; Rafal Kedziorski; [email protected]\nSubject: RE: [PERFORM] PostgreSQL vs. MySQL\n\n\n> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n>\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nI'm curious what the usage was. How many concurrent processes were\nperforming the complex queries? I've heard that Postgres does better when\nthe number of concurrent users is high and MySQL does better when the number\nis low. I have no idea if that is true or not.\n\nMichael\n\n",
"msg_date": "Fri, 4 Jul 2003 08:43:24 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I'm actually leaving this list but I can answer this question.\n> Our results\n> were with a single user and we were running Inodb. We were running on\n> RedHat 8.0 / 9.0 with vanilla linux settings.\n\nThat's funny, you make a statement that Postgres was 3 times slower than\nMySQL and then you promptly leave the list! Just kidding.\n\nIt'd be interesting to see what happens if you test your system with a\nhundred users. If it's a webapp you can use JMeter to do this really\neasily.\n\nMichael\n\n\n",
"msg_date": "Fri, 4 Jul 2003 14:46:15 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Friday 04 July 2003 18:16, Michael Mattox wrote:\n> > I'm actually leaving this list but I can answer this question.\n> > Our results\n> > were with a single user and we were running Inodb. We were running on\n> > RedHat 8.0 / 9.0 with vanilla linux settings.\n>\n> That's funny, you make a statement that Postgres was 3 times slower than\n> MySQL and then you promptly leave the list! Just kidding.\n>\n> It'd be interesting to see what happens if you test your system with a\n> hundred users. If it's a webapp you can use JMeter to do this really\n> easily.\n\nHundred users is a later scenario. I am curious about \"vanilla linux settings\" \nWhat does that mean.\n\n Postgresql communmity would always like to help who need it but this thread \nso far gives me impression that OP isn't willing to provide sufficient \ninformation..\n\n Shridhar\n\n",
"msg_date": "Fri, 4 Jul 2003 18:21:26 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Friday 04 July 2003 17:57, Brian Tarbox wrote:\n> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n>\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nThis invites the slew of questions thereof. Can you provide more information \non\n\n1. Hardware\n2. Postgresql version\n3. Postgresql tuning you did\n4. data size\n5. nature of queries\n6. mysql benchmarks to rate against.\n\nUnless you provide these, it's difficult to help..\n\n Shridhar\n\n",
"msg_date": "Fri, 4 Jul 2003 18:23:40 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> Unless you provide these, it's difficult to help..\n\nhttp://archives.postgresql.org/pgsql-performance/2003-05/msg00299.php\n\nNote the thread with Tom and Brian.",
"msg_date": "04 Jul 2003 09:11:10 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On 4 Jul 2003 at 9:11, Rod Taylor wrote:\n\n> > Unless you provide these, it's difficult to help..\n> \n> http://archives.postgresql.org/pgsql-performance/2003-05/msg00299.php\n\nWell, even in that thread there wasn't enough information I asked for in other \nmail. It was bit too vague to be a comfortable DB tuning problem.\n\nAm I reading the thread wrong? Please correct me.\n\n\nBye\n Shridhar\n\n--\nAhead warp factor one, Mr. Sulu.\n\n",
"msg_date": "Fri, 04 Jul 2003 18:50:19 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Fri, 2003-07-04 at 09:20, Shridhar Daithankar wrote:\n> On 4 Jul 2003 at 9:11, Rod Taylor wrote:\n> \n> > > Unless you provide these, it's difficult to help..\n> > \n> > http://archives.postgresql.org/pgsql-performance/2003-05/msg00299.php\n> \n> Well, even in that thread there wasn't enough information I asked for in other \n> mail. It was bit too vague to be a comfortable DB tuning problem.\n\nCompletely too little information, and it stopped with Tom asking for\nadditional information. I don't think Brian has any interest in being\nhelped. Many here would be more than happy to do so if the information\nwere to flow.",
"msg_date": "04 Jul 2003 09:45:30 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n>\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nI have never found a query in MySQL that was faster than one in\nPostgreSQL.\n\nChris\n\n\n",
"msg_date": "Fri, 4 Jul 2003 21:50:26 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\"Brian Tarbox\" <[email protected]> writes:\n> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nCould we see the details? It's not very fair to not give us a chance to\nlearn about problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 09:59:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "Ok, I'll give more data :-)\n\nUnder both MySql and Postgres the tests were run on a variety of systems,\nall with similar results. My own personal testing was done on a P4 2.4Mhz,\n512 mb memory, latest production versions of each database. By vanilla\nRedHat I mean that I installed RH on a clean system, said install everything\nand did no customization of RH settings.\nWe had about 40 tables in the db, with joined queries on about 8-12 tables.\nSome tables had 10,000 records, some 1000 records, other tables had dozens\nof records. There were indexes on all join fields, and all join fields were\nlisted as foriegn keys. All join fields were unique primary keys in their\nhome table (so the index distribution would be very spread out). I'm not\npermitted to post the actual tables as per company policy.\n\nI did no tuning of MySql. The only tuning for PG was to vacuum and vacuum\nanalyze.\n\nI'll also mention that comments like this one are not productive:\n\n>I don't think Brian has any interest in being helped.\n\nPlease understand the limits of how much information a consultant can submit\nto an open list like this about a client's confidential information. I've\nanswered every question I _can_ answer and when I get hostility in response\nall I can do is sigh and move on.\nI'm sorry if Shridhar is upset that I can't validate his favorite db but ad\nhominin comments aren't helpful.\n\nBrian\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Shridhar\nDaithankar\nSent: Friday, July 04, 2003 8:54 AM\nTo: [email protected]\nSubject: Re: [PERFORM] PostgreSQL vs. MySQL\n\n\nOn Friday 04 July 2003 17:57, Brian Tarbox wrote:\n> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n>\n> The results we got was that Postgres was fully 3 times slower than MySql.\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. It was still much much much slower.\n\nThis invites the slew of questions thereof. Can you provide more information\non\n\n1. Hardware\n2. Postgresql version\n3. Postgresql tuning you did\n4. data size\n5. nature of queries\n6. mysql benchmarks to rate against.\n\nUnless you provide these, it's difficult to help..\n\n Shridhar\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Fri, 4 Jul 2003 10:07:46 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On 4 Jul 2003 at 10:07, Brian Tarbox wrote:\n\n> Ok, I'll give more data :-)\n> \n> Under both MySql and Postgres the tests were run on a variety of systems,\n> all with similar results. My own personal testing was done on a P4 2.4Mhz,\n> 512 mb memory, latest production versions of each database. By vanilla\n> RedHat I mean that I installed RH on a clean system, said install everything\n> and did no customization of RH settings.\n> We had about 40 tables in the db, with joined queries on about 8-12 tables.\n> Some tables had 10,000 records, some 1000 records, other tables had dozens\n> of records. There were indexes on all join fields, and all join fields were\n> listed as foriegn keys. All join fields were unique primary keys in their\n> home table (so the index distribution would be very spread out). I'm not\n> permitted to post the actual tables as per company policy.\n> \n> I did no tuning of MySql. The only tuning for PG was to vacuum and vacuum\n> analyze.\n\nNo wonder pg bombed out so badly. In fact I am surprised it was slower only by \nfactor of 3. \n\nRule of thumb is if you have more than 1K records in any table, you got to tune \npostgresql.conf. I don't think I need to elaborate what difference tuning in \npostgresql.conf can make.\n\n> \n> I'll also mention that comments like this one are not productive:\n> \n> >I don't think Brian has any interest in being helped.\n> \n> Please understand the limits of how much information a consultant can submit\n> to an open list like this about a client's confidential information. I've\n> answered every question I _can_ answer and when I get hostility in response\n> all I can do is sigh and move on.\n\nWell, definition of threshold of hostile response differ from person to person. \nThat is understood but by internet standards, I don't think you have received \nany hostile response. But that's not the topic I would like to continue to \ndiscuss.\n\nWhat I would suggest you is to look at some other performance problem \ndescription submitted earlier. I don't think these guys have permission to \ndisclose sensitive data either but they did everything they could in their \nlimits.\n\nLook at, http://archives.postgresql.org/pgsql-performance/2003-06/msg00134.php \nand the thread thereof. You can reach there from \nhttp://archives.postgresql.org/pgsql-performance/2003-06/threads.php\n\nThere is a reason why Michael got so many and so detailed responses. Within \nyour limits, I am sure you could have posted more and earlier rather than \nposting details when original thread is long gone.\n\n\n> I'm sorry if Shridhar is upset that I can't validate his favorite db but ad\n> hominin comments aren't helpful.\n\nI have no problems personally if postgresql does not work with you. The very \nfirst reason I stick with postgresql is that it works best for me. The moment \nit does not work for somebody else, there is a potential problem which I would \nlike to rectify ASAP. That is the idea of getting on lists and forums.\n\nIt's not about product as much it is about helping each other.\n\nAnd certainly. I have posted weirder qeuries here and I disagree that you \ncouldn't post more. However this is a judgement from what you have posted and \nby all chances it is wrong. Never mind that.\n\nAt the end, it's the problem and solution that matters. Peace..\n\nBye\n Shridhar\n\n--\nMurphy's Laws:\t(1) If anything can go wrong, it will.\t(2) Nothing is as easy as \nit looks.\t(3) Everything takes longer than you think it will.\n\n",
"msg_date": "Fri, 04 Jul 2003 19:51:30 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n>> It was bit too vague to be a comfortable DB tuning problem.\n\n> Completely too little information, and it stopped with Tom asking for\n> additional information.\n\nThere was something awfully fishy about that. Brian was saying that he\ngot a seqscan plan out of \"WHERE foo = 100\", where foo is an integer\nprimary key. That's just not real credible, at least not once you get\npast the couple of standard issues that were mentioned in the thread.\nAnd we never did get word one of information about his join problems.\n\n> I don't think Brian has any interest in being helped.\n\nI suspect he'd made up his mind already. Which is his privilege, but\nit'd be nice to have some clue what the problem was ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 10:23:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 10:07:46AM -0400, Brian Tarbox wrote:\n> 512 mb memory, latest production versions of each database. By vanilla\n> RedHat I mean that I installed RH on a clean system, said install everything\n> and did no customization of RH settings.\n\nDoes that include no customization of the Postgres settings? \n\n> We had about 40 tables in the db, with joined queries on about 8-12 tables.\n\nSELECTs only? because. . .\n\n> of records. There were indexes on all join fields, and all join fields were\n> listed as foriegn keys. All join fields were unique primary keys in their\n\n. . .you know that FK constraints in Postgres are not cheap, right?\n\n> I did no tuning of MySql. The only tuning for PG was to vacuum and vacuum\n> analyze.\n\nThis appears to be a \"yes\" answer to my question above. Out of the\nbox, PostgreSQL is set up to be able to run on a 1992-vintage SGI\nIndy with 8 M of RAM (ok, I may be exaggerating, but only by a bit);\nit is not tuned for performance. Running without even tweaking the\nshared buffers is guaranteed to get you lousy performance.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 10:28:00 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> Please understand the limits of how much information a consultant can submit\n> to an open list like this about a client's confidential information. I've\n> answered every question I _can_ answer and when I get hostility in response\n> all I can do is sigh and move on.\n\nIs there any chance you could show us an EXPLAIN ANALYZE output of the\npoor performing query in question?\n\n> I'm sorry if Shridhar is upset that I can't validate his favorite db but ad\n> hominin comments aren't helpful.\n\nIt was me who gave the comment based upon previous threads which\nrequested information that had gone unanswered (not even a response\nstating such information could not be provided).\n\nThe database you describe is quite small, so I'm not surprised MySQL\ndoes well with it. That said, it isn't normal to experience poor\nperformance with PostgreSQL unless you've stumbled upon a poor spot (IN\nbased sub-queries used to be poor performing, aggregates can be slow,\nmismatched datatypes, etc.).\n\nOutput of EXPLAIN ANALYZE of a contrived query representative of the\ntype of work done (that demonstrates the problem) with renamed tables\nand columns would go a long way to helping us help you.",
"msg_date": "04 Jul 2003 10:34:20 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> This appears to be a \"yes\" answer to my question above. Out of the\n> box, PostgreSQL is set up to be able to run on a 1992-vintage SGI\n> Indy with 8 M of RAM (ok, I may be exaggerating, but only by a bit);\n> it is not tuned for performance. Running without even tweaking the\n> shared buffers is guaranteed to get you lousy performance.\n\nI see this as a major problem. How many people run postgres, decide it's\ntoo slow and give up without digging into the documentation or coming to\nthis group? This seems to be pretty common. Even worst, they tell 10\nothers how slow Postgres is and then it gets a bad reputation.\n\nIn my opinion the defaults should be set up for a typical database server\nmachine.\n\nMichael\n\n\n",
"msg_date": "Fri, 4 Jul 2003 16:35:03 +0200",
"msg_from": "\"Michael Mattox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\"Brian Tarbox\" <[email protected]> writes:\n> I'm not permitted to post the actual tables as per company policy.\n\nNobody wants to see your data, only the table schemas and queries. If\nyou feel that even that contains some sensitive information, just rename\nthe table and field names to something meaningless. But the kinds of\nproblems I am interested in finding out about require seeing the column\ndatatypes and the form of the queries. The hardware and platform\ndetails you gave mean nothing to me (and probably not to anyone else\neither, given that you were comparing to MySQL on the same platform).\n\n> I did no tuning of MySql. The only tuning for PG was to vacuum and vacuum\n> analyze.\n\nIf you didn't at least bump up shared_buffers, you were deliberately\nskewing the results against Postgres. Surely you can't have been\nsubscribed to pgsql-performance very long without knowing that the\ndefault postgresql.conf settings are set up for a toy installation.\n\n> all I can do is sigh and move on.\n\nYou're still looking for reasons not to answer our questions, aren't\nyou? Do you actually want to find out what the problem was here?\nIf not, you're wasting our list bandwidth. I'd like to find out,\nif only so I can try to fix it in future releases, but without useful\ninformation I'll just have to write this off as an unsubstantiated report.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 10:35:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "Moving to -hackers.\n\n> In my opinion the defaults should be set up for a typical database server\n> machine.\n\nOk.. thats fair. The first problem would be to define typical for\ncurrent PostgreSQL installations, and typical for non-postgresql\ninstallations (the folks we want to convert).\n\nAfter that, do we care if it works with the typical OS installation or\nwith all default OS installations (think shared memory settings)?\n\n\nI agree, this is something we should tackle somewhere along the 7.5\ntimeline.",
"msg_date": "04 Jul 2003 10:49:01 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On 4 Jul 2003 at 16:35, Michael Mattox wrote:\n\n> I see this as a major problem. How many people run postgres, decide it's\n> too slow and give up without digging into the documentation or coming to\n> this group? This seems to be pretty common. Even worst, they tell 10\n> others how slow Postgres is and then it gets a bad reputation.\n> \n> In my opinion the defaults should be set up for a typical database server\n> machine.\n\nWell, there are few major reasons defaults are the way they are and the reason \nit hurts the way they are\n\n1. Postgresql expects to start on every machine on which it can run. Now some \nof the arcane platforms need kernel recompilation to raise SHMMAX and defaults \nto 1MB.\n\n2. Postgresql uses shared memory being process based architecture. Mysql uses \nprocess memory being threaded application. It does not need kernel settings to \nwork and usually works best it can.\n\n3. We expect users/admins to be reading docs. If one does not read docs, it \ndoes not matter what defaults are. Sooner or later, it is going to fall on it's \nface.\n\n4. Unlike likes of Oracle, postgresql does not pre-claim resources and starts \nhogging the system, replacing OS whereever possible. No it does not work that \nway..\n\nOne thing always strikes me. Lot of people(Not you Michael!..:-)) would \ncomplain that postgresql is slow and needs tweaking are not bothered by the \nfact that oracle needs almost same kind of and same amount of tweaking to get \nsomewhere. Perception matterrs a lot.\n\nI would have whined for java as well but this is not the forum for that..:-)\n\nOn a positive note, me and Josh are finishing a bare bone performance article \nthat would answer lot of your questions. I am counting on you to provide \nvaluable feedback. I expect it out tomorrow or on sunday..Josh will confirm \nthat..\n\n\n \nBye\n Shridhar\n\n--\nTheorem: a cat has nine tails.Proof:\tNo cat has eight tails. A cat has one tail \nmore than no cat.\tTherefore, a cat has nine tails.\n\n",
"msg_date": "Fri, 04 Jul 2003 20:19:16 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "hi,\n\nAt 20:19 04.07.2003 +0530, Shridhar Daithankar wrote:\n[...]\n\n>On a positive note, me and Josh are finishing a bare bone performance article\n\nwhere will be this article published?\n\n>that would answer lot of your questions. I am counting on you to provide\n>valuable feedback. I expect it out tomorrow or on sunday..Josh will confirm\n>that..\n\n\nRafal\n\n",
"msg_date": "Fri, 04 Jul 2003 16:55:04 +0200",
"msg_from": "Rafal Kedziorski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> 2. Postgresql uses shared memory being process based architecture. Mysql uses \n> process memory being threaded application. It does not need kernel settings to \n> work and usually works best it can.\n\nMySQL has other issues with the kernel due to their threading choice \nsuch as memory limits per process, or poor threaded SMP support on some\nplatforms (inability for a single process to use more than one CPU at a \ntime regardless of thread count).\n\nThreads aren't an easy way around kernel limitations, which is probably\nwhy Apache has gone for a combination of the two -- but of course that\nadds complexity.",
"msg_date": "04 Jul 2003 11:06:03 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Friday 04 July 2003 20:36, Rod Taylor wrote:\n> > 2. Postgresql uses shared memory being process based architecture. Mysql\n> > uses process memory being threaded application. It does not need kernel\n> > settings to work and usually works best it can.\n>\n> MySQL has other issues with the kernel due to their threading choice\n> such as memory limits per process, or poor threaded SMP support on some\n> platforms (inability for a single process to use more than one CPU at a\n> time regardless of thread count).\n>\n> Threads aren't an easy way around kernel limitations, which is probably\n> why Apache has gone for a combination of the two -- but of course that\n> adds complexity.\n\nCorrect. It's not debate about whether threading is better or not. But it \ncertainly affects the default way with which these two applications work.\n\n Shridhar\n\n",
"msg_date": "Fri, 4 Jul 2003 20:48:39 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 04:35:03PM +0200, Michael Mattox wrote:\n\n> I see this as a major problem. How many people run postgres, decide it's\n> too slow and give up without digging into the documentation or coming to\n> this group? This seems to be pretty common. Even worst, they tell 10\n> others how slow Postgres is and then it gets a bad reputation.\n\nThere have been various proposals to do things of this sort. But\nthere are always problems with it. For instance, on many OSes,\nPostgres would not run _at all_ when you first compiled it if its\ndefaults were set more agressively. Then how many people would\ncomplain, \"It just doesn't work,\" and move on without asking about\nit?\n\nI cannot, for the life of me, understand how anyone can install some\nsoftware which is supposed to provide meaningful results under\nproduction conditions, and not bother to read even the basic\n\"quickstart\"-type stuff that is kicking around. There is _no secret_\nthat Postgres is configured as a toy out of the box. One presumes\nthat DBAs are hired to do _some_ little bit of work.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 11:26:20 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Friday 04 July 2003 20:56, Andrew Sullivan wrote:\n> On Fri, Jul 04, 2003 at 04:35:03PM +0200, Michael Mattox wrote:\n> > I see this as a major problem. How many people run postgres, decide it's\n> > too slow and give up without digging into the documentation or coming to\n> > this group? This seems to be pretty common. Even worst, they tell 10\n> > others how slow Postgres is and then it gets a bad reputation.\n>\n> There have been various proposals to do things of this sort. But\n> there are always problems with it. For instance, on many OSes,\n> Postgres would not run _at all_ when you first compiled it if its\n> defaults were set more agressively. Then how many people would\n> complain, \"It just doesn't work,\" and move on without asking about\n> it?\n\nThere was a proposal to ship various postgresql.conf.sample like one for large \nservers, one for medium, one for update intensive purpose etc.\n\nI was thinking over it. Actaully we could tweak initdb script to be \ninteractiev and get inputs from users and tune it accordingly. Of course it \nwould be nowhere near the admin reading the docs. but at least it won't fall \nflat on performance groundas the way falls now.\n\n Shridhar\n\n",
"msg_date": "Fri, 4 Jul 2003 20:58:12 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "That would be something that I'd like to see. Being new to PostgreSQL some\nof the basics of tuning the database were a little hard to find. The reason\npeople go with MySQL is because it's fast and easy to use. That's why I had\nbeen using it for years. Then when a problem came along and I couldn't use\nMySQL I checked out PostgreSQL and found that it would fill the gap, but I\nhad been able to get by on doing very little in terms of administration for\nMySQL (which performed well for me) and I was expecting PostgreSQL to be\nsimilar. As with many people I have the hat of DB admin, server admin,\nprogrammer and designer and the less I have to do in any of those areas\nmakes my life a lot easier.\n\nWhen I first started using PostgreSQL I installed it and entered my data\nwithout any thought of having to tune it because I never had to before. If\nthere were some program that could be inserted to the end of the make\nprocess or something it might help dimwits like me :-) realize that there\nwas more that needs to be done once the installation has been completed.\n\nKevin\n\n\n----- Original Message ----- \nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Friday, July 04, 2003 10:28 AM\nSubject: Re: [PERFORM] PostgreSQL vs. MySQL\n\n\n> On Friday 04 July 2003 20:56, Andrew Sullivan wrote:\n> > On Fri, Jul 04, 2003 at 04:35:03PM +0200, Michael Mattox wrote:\n> > > I see this as a major problem. How many people run postgres, decide\nit's\n> > > too slow and give up without digging into the documentation or coming\nto\n> > > this group? This seems to be pretty common. Even worst, they tell 10\n> > > others how slow Postgres is and then it gets a bad reputation.\n> >\n> > There have been various proposals to do things of this sort. But\n> > there are always problems with it. For instance, on many OSes,\n> > Postgres would not run _at all_ when you first compiled it if its\n> > defaults were set more agressively. Then how many people would\n> > complain, \"It just doesn't work,\" and move on without asking about\n> > it?\n>\n> There was a proposal to ship various postgresql.conf.sample like one for\nlarge\n> servers, one for medium, one for update intensive purpose etc.\n>\n> I was thinking over it. Actaully we could tweak initdb script to be\n> interactiev and get inputs from users and tune it accordingly. Of course\nit\n> would be nowhere near the admin reading the docs. but at least it won't\nfall\n> flat on performance groundas the way falls now.\n>\n> Shridhar\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 4 Jul 2003 10:42:52 -0500",
"msg_from": "\"Kevin Schroeder\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I don't think Brian has any interest in being helped.\n>I suspect he'd made up his mind already.\n\n\nWith all due respect Tom, I don't think I'm the one demonstrating a closed\nmind.\nRather than trying to figure out whats going on in my head, how about\nfiguring out whats going on in my database? :-)\n\nI'm answering every question I can. I supplied HW info because someone\nasked, and then Tom said: \"The hardware and platform details you gave mean\nnothing to me...\". Which would you like guys??\n\nI am not allowed to share schemas...sorry but thats what the contract says.\nThe queries represent code, thus intellectual property, thus I can't post\nthem.\n\nI posted an Explain output at some point and was told my database was too\nsmall to be fast. So, I added 10,000 records, vacummed, and my selects were\nstill the same speed.\n\nHow many people on this list have asked for a tuning/performance doc? I\nhear that there is one coming soon. Thats great. Saying RTM is fine too,\nif the manual is clear. Look at Michael Mattox's thread on this very topic\non 6/24. Michael said:\n\"I think the biggest area of confusion for me was that the various\nparameters\nare very briefly described and no context is given for their parameters.?\n\nShridhar then suggested he change OSes, upgrade his kernel (with specific\npatches), get different HW, etc. That goes a bit beyond casual tuning.\n\n\nI'm not saying (and never did say) that postgres could not be fast. All I\never said was that with the same minimal effort applied to both DBs,\npostgres was slower.\n\nI really wasn't looking for battle this fine day....I'm going outside to\nBBQ! (and if you conclude from that that I'm not interested in this or\nthat, there's nothing I can do about that. It is a beautiful day out and\nbbq does sound more fun than this list. sorry)\n\nBrian\n\n",
"msg_date": "Fri, 4 Jul 2003 12:10:46 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "\nPostgreSQL (as being a really advanced RDBMS),\ngenerally requires some tuning in order to get\nthe best performance.\n\nYour best bet is to try both.\n\nAlso check to see IF mysql has\n-Referential integrity\n-subselects\n-transactions\n-(other usefull features like arrays,user defined types,etc..)\n(its probable that you will need some of the above)\n\nOn Fri, 4 Jul 2003, Rafal Kedziorski wrote:\n\n> Hi,\n> \n> has anybody tested PostgreSQL 7.3.x tables agains MySQL 4.0.12/13 with InnoDB?\n> \n> \n> Regards,\n> Rafal \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: achill at matrix dot gatewaynet dot com\n mantzios at softlab dot ece dot ntua dot gr\n\n",
"msg_date": "Fri, 4 Jul 2003 14:14:02 -0200 (GMT+2)",
"msg_from": "Achilleus Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I'm not saying (and never did say) that postgres could not be fast.\n> All I ever said was that with the same minimal effort applied to both\n> DBs, postgres was slower.\n\nAfaik, your original posting said postgresql was 3 times slower than mysql\nand that you are going to leave this list now. This implied that you have\nmade your decision between postgresql and mysql, taking mysql because it is\nfaster.\n\nNow you say your testing setup has minimal effort applied. Well, it is not\nvery surprising that mysql is faster in standard configurations. As Shridhar\npointed out, postgresql has very conservative default values, so that it\nstarts on nearly every machine.\n\nIf I was your client and gave you the task to choose a suitable database for\nmy application and you evaluated suitable databases this way, then something\nis seriously wrong with your work.\n\nRegards,\nBjoern\n\n\n",
"msg_date": "Fri, 4 Jul 2003 18:22:43 +0200",
"msg_from": "\"Bjoern Metzdorf\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "My goodness people!! If you are just going to bash people who are trying to\nlearn PostgreSQL then you have no chance of ever getting new people using\nit! Cut out this crap and do what this list is meant to do, which is, I'm\nassuming, helping people figure out why their installations aren't running\nas fast as they would like. This is pathetic!!\n\nKevin\n\n----- Original Message ----- \nFrom: \"Bjoern Metzdorf\" <[email protected]>\nTo: \"Postgresql Performance\" <[email protected]>\nSent: Friday, July 04, 2003 11:22 AM\nSubject: Re: [PERFORM] PostgreSQL vs. MySQL\n\n\n> > I'm not saying (and never did say) that postgres could not be fast.\n> > All I ever said was that with the same minimal effort applied to both\n> > DBs, postgres was slower.\n>\n> Afaik, your original posting said postgresql was 3 times slower than mysql\n> and that you are going to leave this list now. This implied that you have\n> made your decision between postgresql and mysql, taking mysql because it\nis\n> faster.\n>\n> Now you say your testing setup has minimal effort applied. Well, it is not\n> very surprising that mysql is faster in standard configurations. As\nShridhar\n> pointed out, postgresql has very conservative default values, so that it\n> starts on nearly every machine.\n>\n> If I was your client and gave you the task to choose a suitable database\nfor\n> my application and you evaluated suitable databases this way, then\nsomething\n> is seriously wrong with your work.\n>\n> Regards,\n> Bjoern\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n",
"msg_date": "Fri, 4 Jul 2003 11:39:49 -0500",
"msg_from": "\"Kevin Schroeder\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "\nOn Fri, 4 Jul 2003, Brian Tarbox wrote:\n\n> > I don't think Brian has any interest in being helped.\n> >I suspect he'd made up his mind already.\n>\n>\n> With all due respect Tom, I don't think I'm the one demonstrating a closed\n> mind.\n> Rather than trying to figure out whats going on in my head, how about\n> figuring out whats going on in my database? :-)\n\nWell, in the case of getting a sequential scan on something like\nselect * from foo where col=10;\nwhere col is a primary key, the things I can think of to check\nare does select * from foo where col='10'; give a different plan?\n\nIn general for cases where you can't post queries or schema we're kinda\nstuck and not really able to give intelligent advice since it's often\nschema/query specific, so the general questions/comments are things like\n(which you've probably heard, but I think they should get put into this\nthread if only to move the thread towards usefulness)\n\nWhat is the relative costs/plan if you set enable_seqscan to false before\nexplain analyzing the query? If there are places you think that it should\nbe able to do an index scan and it still doesn't, make sure that there\naren't cross datatype issues (especially with int constants).\n\nAlso, using explain analyze, where is the time being taken, it's often not\nwhere the cost factor would expect it.\n\nDo the row estimates match reality in the explain analyze output, if not\ndoes analyzing help, if not does raising the statistics target (to say 50,\n100, 1000) with alter table and then analyzing help?\n\nDoes vacuuming help, what about vacuum full? If the latter does and the\nformer doesn't, you may need to look at raising the fsm settings.\n\nIf shared_buffers is less than 1000, does setting it to something between\n1000-8000 raise performance?\n\nHow much memory does the machine have that's being used for caching, if\nit's alot, try raising effective_cache_size to see if that helps the\nchoice of plan by making a more reasonable guess as to cache hit rates.\n\nAre there any sorts in the query, if so, how large would expect the result\nset that's being sorted to be, can you afford to make sort_mem cover that\n(either permanently by changing conf files or before the query with a set\ncommand)?\n\nIs it possible to avoid some sorts in the plan with a multi-column index?\n\nFor 7.3 and earlier, does the query use IN or =ANY, if so it might help to\ntry to convert to an exists form.\n\nDoes the query use any mix/max aggregates, it might help to look for a\nworkaround, this is one case that is truly slow.\n\n\n\nPostgreSQL really does require more than minimal optimization at start,\neffective_cache_size, shared_buffers, sort_mem and the fsm settings really\nneed to be set at a level for the machine/queries you have. Without the\nqueries we can't be too specific. Big speed losses I can think of are the\ndatatype mismatch confusion, followed quickly by row estimates that don't\nmatch reality (generally requiring a greater statistics target on the\ncolumn) and issues with correlation (I'm not really sure there's a good\nsolution for this currently, maybe someone will know -- I've not run into\nit really on anything I've looked at).\n\n",
"msg_date": "Fri, 4 Jul 2003 10:05:56 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
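A minimal SQL sketch of the first few checks in the list above. The table and column names (foo, col) are just the placeholders already used in the message, the sort_mem value is illustrative, and the settings shown are standard parameters of the 7.3-era server discussed here:

    -- Does quoting the constant change the plan?  (cross-datatype check)
    EXPLAIN ANALYZE SELECT * FROM foo WHERE col = 10;
    EXPLAIN ANALYZE SELECT * FROM foo WHERE col = '10';

    -- Compare plans and costs with sequential scans discouraged, for this session only.
    SET enable_seqscan TO off;
    EXPLAIN ANALYZE SELECT * FROM foo WHERE col = 10;
    SET enable_seqscan TO on;

    -- Give one session more sort memory (in kB) before running a query with big sorts.
    SET sort_mem = 32768;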
{
"msg_contents": "On Fri, Jul 04, 2003 at 12:10:46PM -0400, Brian Tarbox wrote:\n> I am not allowed to share schemas...sorry but thats what the contract says.\n> The queries represent code, thus intellectual property, thus I can't post\n> them.\n\nIf you ask for help, but say, \"I can't tell you anything,\" no-one\nwill be able to help you. \n\nI think what people are reacting to angrily is that you complain that\nPostgreSQL is slow, it appears you haven't tuned it correctly, and\nyou're not willing to share with anyone what you did. In that case,\nyou shouldn't be reporting, \"MySQL was faster that PostgreSQL for\nme.\" You should at most be reporting, \"MySQL was faster than\nPostgreSQL for me, but I haven't any idea how to tune PostgreSQL, and\ndidn't know how to learn to do so.\" That, at least, gives people a\nfighting chance to evaluate the utility of your report.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 13:08:32 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Brian,\n\nHowdy! I'm Josh Berkus, I'm also on the Core Team for PostgreSQL, and I \nwanted to give some closure on your issue before you quit with a bad taste in \nyour mouth.\n\nYour posting hit a sore point in the collective PostgreSQL community, so you \ngot a strong reaction from several people on the list -- probably out of \nproportion to your posting. \n\nOr, to put it another way, you posted something intended to offend people out \nof your frustration, and got a very offended reaction back.\n\n> Rather than trying to figure out whats going on in my head, how about\n> figuring out whats going on in my database? :-)\n> I am not allowed to share schemas...sorry but thats what the contract says.\n> The queries represent code, thus intellectual property, thus I can't post\n> them.\n\nI think you recognize, now, that this list cannot help you under those \ncircumstances? \n\nA significant portion of my income derives from clients who need tuning help \nunder NDA. If, however, you don't need any capabilites that PostgreSQL has \nwhich MySQL doesn't, hiring a consultant would not be money well spent.\n\n> I really wasn't looking for battle this fine day....I'm going outside to\n> BBQ! (and if you conclude from that that I'm not interested in this or\n> that, there's nothing I can do about that. It is a beautiful day out and\n> bbq does sound more fun than this list. sorry)\n\nNo arguments there ... wish I didn't have to work :-(\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 4 Jul 2003 10:12:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> Andrew Sullivan wrote:\n\n> I cannot, for the life of me, understand how anyone can \n> install some software which is supposed to provide meaningful \n> results under production conditions, and not bother to read \n> even the basic \"quickstart\"-type stuff that is kicking \n> around.\nThen please point out where it sais, in the documentation, that the\nvalue for the shared_memory of 64 is too low and that 4000 is a nice\nvalue to start with?\n\nPlease, also point out the part of the documentation that explains how\nhigh the fsm-settings should be, what the impact of a lower or higher\nsort_mem-setting is, what kind of value the effective_cache_size should\nhave and the best way to determine that.\n\nIf you can find the above in the default-documentation, like the\n\"getting started\"-documents or the administration documentation, than be\nso kind to give direct links or quotes to that. I was unable to find\nthat, now in a 15 minute search in the docs themselves and I have read\nmost part of them (in the past)...\n\nEspecially in chapter 10 \"Performance hints\" I was surprised not to see\nsuch information, although it could be considered an administration\ntask, but there it wasn't in chapter 10 (monitoring database usage)\neither.\n\n\nI'm sorry to put this in a such a confronting manner, but you simply\ncan't expect people to search for information that they don't know the\nexistence of... Actually, that doesn't appear to exist, at least not on\nthe places you'd expect that information to be placed. I, myself, have\nread Bruce's document on performance tuning, but even that document\ndoesn't provide the detail of information that can be read in this\nmailing-list.\n\nHaving said that, this list only has 461 subscribers and I can hardly\nbelieve that that are _all_ users of postgresql, as long as it's not the\ndefault way of trying to gather data, it shouldn't be expected that\nanyone actually tries to find his information in this list.\n\nAnyway, I saw that there has been done some effort to create a document\nthat does describe such parameters, I'd be happy to see and read that :)\n\n> There is _no secret_ that Postgres is configured as \n> a toy out of the box. One presumes that DBAs are hired to do \n> _some_ little bit of work.\nI don't see it on the frontpage, nor in the documentation. Anyway, see\nabove :)\n\nRegards,\n\nArjen\n\nBtw, I've tried to tune my postgresql database using the administration\nand tech documents, and saw quite a few queries run quite a lot faster\non mysql, I'll try to set up a more useful test environment and supply\nthis list with information to allow me to tune it to run more or less\nequal to mysql. I do see 3x runs, even with the shared memory and sort\nmem settings cranked up and having done little to none tuning on mysql\n:)\n\n\n\n",
"msg_date": "Fri, 4 Jul 2003 20:07:18 +0200",
"msg_from": "\"Arjen van der Meijden\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Why is such a simple list of questions not somewhere in the\ndocumentation? :(\n\nOf course a few of your questions are relatively case-dependent, but the\nothers are very general. Such information should be in the documentation\nand easy to access :)\n\nRegards,\n\nArjen\n\n> Stephan Szabo wrote a nice list of helpful questions\n\n\n\n",
"msg_date": "Fri, 4 Jul 2003 20:15:21 +0200",
"msg_from": "\"Arjen van der Meijden\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 08:07:18PM +0200, Arjen van der Meijden wrote:\n> > Andrew Sullivan wrote:\n> > results under production conditions, and not bother to read \n> > even the basic \"quickstart\"-type stuff that is kicking \n> > around.\n> Then please point out where it sais, in the documentation, that the\n> value for the shared_memory of 64 is too low and that 4000 is a nice\n> value to start with?\n\nI think I did indeed speak too soon, as the criticism is a fair one:\nnowhere in the installation instructions or the \"getting started\"\ndocs does it say that you really ought to do some tuning once you\nhave the system installed. Can I suggest for the time being that\nsomething along these lines should go in 14.6.3, \"Tuning the\ninstallation\":\n\n---snip---\nBy default, PostgreSQL is configured to run on minimal hardware. As\na result, some tuning of your installation will be necessary before\nusing it for anything other than extremely small databases. At the\nvery least, it will probably be necessary to increase your shared\nbuffers setting. See Chapter 16 for details on what tuning options\nare available to you.\n---snip---\n\n> I'm sorry to put this in a such a confronting manner, but you simply\n> can't expect people to search for information that they don't know the\n> existence of.\n\nNo need to apologise; I think you're right.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 14:28:53 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "People:\n\n> I think I did indeed speak too soon, as the criticism is a fair one:\n> nowhere in the installation instructions or the \"getting started\"\n> docs does it say that you really ought to do some tuning once you\n> have the system installed. Can I suggest for the time being that\n> something along these lines should go in 14.6.3, \"Tuning the\n> installation\":\n> \n> ---snip---\n> By default, PostgreSQL is configured to run on minimal hardware. As\n> a result, some tuning of your installation will be necessary before\n> using it for anything other than extremely small databases. At the\n> very least, it will probably be necessary to increase your shared\n> buffers setting. See Chapter 16 for details on what tuning options\n> are available to you.\n> ---snip---\n\nI think we actually need much more than this. Kaarel on the Advocacy list has \nvolunteered to try to extend our \"getting started\" section to encompass some \nbasic tuning stuff. Of course, more people would be better.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 4 Jul 2003 11:37:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> ---snip---\n>> By default, PostgreSQL is configured to run on minimal hardware. As\n>> a result, some tuning of your installation will be necessary before\n>> using it for anything other than extremely small databases. At the\n>> very least, it will probably be necessary to increase your shared\n>> buffers setting. See Chapter 16 for details on what tuning options\n>> are available to you.\n>> ---snip---\n\n> I think we actually need much more than this.\n\nI am about to propose a patch that will cause the default shared_buffers\nto be more realistic, say 1000, on machines where the kernel will allow\nit. Not sure if people will let me get away with applying it\npost-feature-freeze, but if so that would change the terms of this\ndebate noticeably.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 15:18:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "Tom,\n\n> I am about to propose a patch that will cause the default shared_buffers\n> to be more realistic, say 1000, on machines where the kernel will allow\n> it. Not sure if people will let me get away with applying it\n> post-feature-freeze, but if so that would change the terms of this\n> debate noticeably.\n\n+1\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Fri, 4 Jul 2003 12:20:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> I am about to propose a patch that will cause the default shared_buffers\n> to be more realistic, say 1000, on machines where the kernel will allow\n> it. Not sure if people will let me get away with applying it\n> post-feature-freeze, but if so that would change the terms of this\n> debate noticeably.\n\nIt's not a feature change, it's a bug fix -- bug being an oversight.",
"msg_date": "04 Jul 2003 19:29:38 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 10:49:01AM -0400, Rod Taylor wrote:\n> > In my opinion the defaults should be set up for a typical database\n> > server machine.\n> \n> Ok.. thats fair. The first problem would be to define typical for\n> current PostgreSQL installations, and typical for non-postgresql\n> installations (the folks we want to convert).\n\nIt's been a while since the last one of these discussions, so stop me\nif this has been suggested before, but...\n\nDo we actually want to have a default configuration file?\n\nSeriously, if we provide, say, 4 or 5 files based on various system\nassumptions (conf.MINIMAL, conf.AVERAGE, conf.MULTIDISK, or whatever),\nthen we might be able to get away with not providing an actual\ndefault. Change the installation instructions to say\n\n>>>\nPostgreSQL requires a configuration file, which it expects to be\nlocated in $DIR. Provided are several example configurations (in\n$DIR/eg/). If you're just starting with PostrgreSQL, we recommend\nreading through those and selecting one which most closely matches\nyour machine.\n\nIf you're in doubt as to which file to use, try $AVERAGE. If you're\nstill having difficulty getting PostgreSQL to run, try\n$MINIMAL. $MINIMAL should work on every supported platform, but is not\noptimized for modern hardware -- PostgreSQL will not run well in this\nconfiguration.\n<<<\n\nThis makes the installation process slightly less simple, but only in\nthe way that we want it to be. That is, it forces the end user to the\nrealization that there actually is configuration to be done, and\nforces them into a minimally interactive way to deal with it.\n\nIt also doesn't require any kernel-test coding, or really any\ndevelopment at all, so we should theoretically be able to get it\nfinished and ready to go more quickly.\n\nThoughts?\n\n-johnnnnnnnnnnn\n",
"msg_date": "Fri, 4 Jul 2003 15:39:57 -0500",
"msg_from": "johnnnnnn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": ">Afaik, your original posting said postgresql was 3 times slower than mysql\n>and that you are going to leave this list now. This implied that you have\n>made your decision between postgresql and mysql, taking mysql because it is\n>faster.\n\nWell, that shows what you get for making implications. The client is\nsticking with postgres and we are coding around the issue in other ways.\n\n\n>If I was your client and gave you the task to choose a suitable database\nfor\n>my application and you evaluated suitable databases this way, then\nsomething\n>is seriously wrong with your work.\n>\n>Regards,\n>Bjoern\n\nGlad to see you're not getting personal with this. Ad hominin attacks are\nfor folks with no better answers.\n\nPlease go read the posts by Kevin Schroeder and Arjen va der Meijden before\nslinging any more 'help'.\n\nover and out.\n\n\n\n\n\n",
"msg_date": "Fri, 4 Jul 2003 17:14:01 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "Being new to Postgres, I understand how frustrating tuning is. I've been\nworking on some very basic queries, and trying to get some decent performance.\nI know the problem isn't with io on the system, since I can use other tests\nthat far exceed the amount of data being written through postgres, so I can\nonly assume that the rdbms needs to be tuned.\n\neven beyond doing different configuration files (as mysql does, fyi), just\nhaving 'guidelines' for where to start with tuning the various items in\npostresql.conf would be helpful. Something like\n\nfoo_val = 100 # 0-1024; higher numbers for more complex queries\n\nI don't think that'd be too difficult. Anyone who's worked with a database\nof any type, understands that tuning needs to happen.\n\nHaving the three different sizes of servers (as suggested below) would be \nhelpful. MySQL does this, and i've used their default configurations in the\npast to help me troubleshoot probllems i was having with a complex query on\nsub-standard hardware. After changing some of the values around, the query\nran slowly, but it actually ran, whereas before, it didn't. \n\nSlightly off subject, as I've been advocating use of Postgres, and people \nhave been trying it, some of the quirks that one runs into when goinng from \nrdbms to rdbms are frustrating as well. Granted, these things happen with\ngoing from any rdbms to another, but it'd be nice if there were a guide to say\nsomething like, in MySQL, you use 'show tables from tablename', and in \nPostgres, you use \\d tablename to achieve the same results.\n\nJust my $.02 worth.\n\nTim\n\n\nOn Fri, Jul 04, 2003 at 03:39:57PM -0500, johnnnnnn wrote:\n> On Fri, Jul 04, 2003 at 10:49:01AM -0400, Rod Taylor wrote:\n> > > In my opinion the defaults should be set up for a typical database\n> > > server machine.\n> > \n> > Ok.. thats fair. The first problem would be to define typical for\n> > current PostgreSQL installations, and typical for non-postgresql\n> > installations (the folks we want to convert).\n> \n> It's been a while since the last one of these discussions, so stop me\n> if this has been suggested before, but...\n> \n> Do we actually want to have a default configuration file?\n> \n> Seriously, if we provide, say, 4 or 5 files based on various system\n> assumptions (conf.MINIMAL, conf.AVERAGE, conf.MULTIDISK, or whatever),\n> then we might be able to get away with not providing an actual\n> default. Change the installation instructions to say\n> \n> >>>\n> PostgreSQL requires a configuration file, which it expects to be\n> located in $DIR. Provided are several example configurations (in\n> $DIR/eg/). If you're just starting with PostrgreSQL, we recommend\n> reading through those and selecting one which most closely matches\n> your machine.\n> \n> If you're in doubt as to which file to use, try $AVERAGE. If you're\n> still having difficulty getting PostgreSQL to run, try\n> $MINIMAL. $MINIMAL should work on every supported platform, but is not\n> optimized for modern hardware -- PostgreSQL will not run well in this\n> configuration.\n> <<<\n> \n> This makes the installation process slightly less simple, but only in\n> the way that we want it to be. 
That is, it forces the end user to the\n> realization that there actually is configuration to be done, and\n> forces them into a minimally interactive way to deal with it.\n> \n> It also doesn't require any kernel-test coding, or really any\n> development at all, so we should theoretically be able to get it\n> finished and ready to go more quickly.\n> \n> Thoughts?\n> \n> -johnnnnnnnnnnn\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n",
"msg_date": "Fri, 4 Jul 2003 17:28:32 -0400",
"msg_from": "Tim Conrad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": ">> Afaik, your original posting said postgresql was 3 times slower than\n>> mysql and that you are going to leave this list now. This implied\n>> that you have made your decision between postgresql and mysql,\n>> taking mysql because it is faster.\n>\n> Well, that shows what you get for making implications. The client is\n> sticking with postgres and we are coding around the issue in other\n> ways.\n\nAs many other guys here pointed out, there are numerous ways to tune\npostgresql for maximum performance. If you are willing to share more\ninformation about your particular project, we might be able to help you out\nand optimize your application, without the need to code around the issue as\nmuch as you may be doing right now.\nEven if it is not possible for you to share enough information, there are a\nlot of places where you can read about performance tuning (if not in the\ndocs then in the archives).\n\n>> If I was your client and gave you the task to choose a suitable\n>> database for my application and you evaluated suitable databases\n>> this way, then something is seriously wrong with your work.\n>>\n> Glad to see you're not getting personal with this. Ad hominin attacks\n> are for folks with no better answers.\n\nYep, you're right. Sorry for that, I didn't mean to get personal. I was\nsomehow irritated that you come here, post your database comparison and want\nto leave right afterwards, without going into detail (what should be the\ncase normally).\n\nAgain our offer: Post (possibly obfuscated) schema information, and we will\ncertainly be able to help you with performance tuning.\n\nRegards,\nBjoern\n\n",
"msg_date": "Sat, 5 Jul 2003 00:24:18 +0200",
"msg_from": "\"Bjoern Metzdorf\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL "
},
{
"msg_contents": "...and on Sat, Jul 05, 2003 at 12:24:18AM +0200, Bjoern Metzdorf used the keyboard:\n> >> Afaik, your original posting said postgresql was 3 times slower than\n> >> mysql and that you are going to leave this list now. This implied\n> >> that you have made your decision between postgresql and mysql,\n> >> taking mysql because it is faster.\n> >\n> > Well, that shows what you get for making implications. The client is\n> > sticking with postgres and we are coding around the issue in other\n> > ways.\n> \n> As many other guys here pointed out, there are numerous ways to tune\n> postgresql for maximum performance. If you are willing to share more\n> information about your particular project, we might be able to help you out\n> and optimize your application, without the need to code around the issue as\n> much as you may be doing right now.\n> Even if it is not possible for you to share enough information, there are a\n> lot of places where you can read about performance tuning (if not in the\n> docs then in the archives).\n> \n\nAlso, I should think the clients would not be too offended if Brian posted\nsome hint about the actual quantity of data involved here, both the total\nexpected database size and some info about the estimated \"working set\" size,\nsuch as a sum of sizes of tables most commonly used in JOIN queries and the\npercentage of data being shuffled around in those. Are indexes big? Are\nthere any multicolumn indexes in use? Lots of sorting expected? Lots of\nUPDATEs/INSERTs/DELETEs?\n\nAlso, it would be helpful to know just how normalized the database is, to\nprovide some advice about possible query optimization, which could again\nprove helpful in speeding the machinery up.\n\nAnother useful piece of information would be the amount of memory consumed\nby other applications vs. the amount of memory reserved by the OS for cache,\nand the nature of those other applications running - are they big cache\nconsumers, such as Apache with static content and a large load would be,\nor do they keep a low profile?\n\nI think this would, in combination with the information already posted, such\nas the amount of memory and I/O subsystem info, at least enable us to advise\nabout the recommended shared_buffers, effective_cache_size, sort_mem,\nvacuum_mem, and others, without compromising the intellectual property of\nBrian's clients.\n\n> > over and out.\n\nI CC'd this post over to you, Brian, 'cause this signoff made me rather\nunsure as to whether or not you're still on the list. Hope you don't mind.\n\nSincerely,\n-- \n Grega Bremec\n System Administration & Development Support\n grega.bremec-at-noviforum.si\n http://najdi.si/\n http://www.noviforum.si/\n",
"msg_date": "Sat, 5 Jul 2003 18:39:37 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\nOn Friday, July 4, 2003, at 07:07 AM, Brian Tarbox wrote:\n\n> We had about 40 tables in the db, with joined queries on about 8-12 \n> tables.\n>\n\nA while ago a tested a moderately complex schema on MySQL, Pg, and \nOracle. I usually heavily normalize schemas and then define views as a \ndenormalized API, which sends MySQL to the book of toys already. The \nviews more often than not would join anywhere from 6-12 tables, using \nplain (as opposed to compound) foreign keys to primary key straight \njoins.\n\nI noticed that Pg was more than an order of magnitude slower for joins \n > 8 tables than Oracle. I won't claim that none of this can have been \ndue to lack of tuning. My point is the following though. After I dug in \nit turned out that of the 4 secs Pg needed to execute the query it \nspent 3.9 secs in the planner. The execution plan Pg came up with was \npretty good - it just needed an extraordinary amount of time to arrive \nat it, spoiling its own results.\n\nAsking this list I then learned how to tweak GEQO such that it would \npick up the planning and do it faster than it would otherwise. I was \nable to get the planner time down to a quarter - still a multitude of \nthe actual execution time.\n\nI was told on this list that query planning suffers from combinatorial \nexplosion very quickly - and I completely buy that. It's just - Oracle \nplanned the same query in a fraction of a second, using the cost-based \noptimizer, on a slower machine. I've seen it plan 15-table joins in \nmuch less than a second, and I have no idea how it would do that. In \naddition, once you've prepared a query in Oracle, the execution plan is \npre-compiled.\n\nIf I were a CS student I'd offer myself to the hall of humiliation and \nset out to write a fast query planner for Pg ...\n\n\t-hilmar\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n",
"msg_date": "Sat, 5 Jul 2003 16:40:52 -0700",
"msg_from": "Hilmar Lapp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
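For the many-table join planning problem Hilmar describes, two knobs that existed in the 7.3-era planner are the GEQO threshold and explicit JOIN syntax, which in releases of that era fixed the join order the planner would consider. The query below is purely hypothetical (table and column names are invented) and the threshold value is only illustrative:

    -- Hand the join-order search to the genetic optimizer earlier than the
    -- default threshold, trading some plan quality for planning time.
    SET geqo_threshold = 8;

    -- Or constrain the search by spelling out the join order explicitly;
    -- explicit JOIN clauses were planned in the order written at the time.
    SELECT o.id, c.name, p.sku
    FROM orders o
      JOIN customers c ON c.id = o.customer_id
      JOIN order_items i ON i.order_id = o.id
      JOIN products p ON p.id = i.product_id;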
{
"msg_contents": "Brian Tarbox kirjutas R, 04.07.2003 kell 15:27:\n> I recently took a system from MySQL to Postgres. Same HW, SW, same data.\n> The major operations where moderately complex queries (joins on 8 tables).\n> The results we got was that Postgres was fully 3 times slower than MySql.\n\nFor each and every query ??\n\n> We were on this list a fair bit looking for answers and tried all the\n> standard answers. \n\nCould you post the list of \"standard answers\" you tried ?\n\n> It was still much much much slower.\n\nWas this with InnoDB ?\n\nwhat kind of joins were they (i.e \n\"FROM a JOIN b on a.i=b.i\" \nor \"FROM a,b WHERE a.i = b.i\" ?\n\nWhat was the ratio of planning time to actual execution time in pgsql?\n\nWhere the queries originally optimized for MySQL ?\n\n----------------\nHannu\n",
"msg_date": "06 Jul 2003 03:11:58 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Fri, 4 Jul 2003, Brian Tarbox wrote:\n\n> I'm actually leaving this list but I can answer this question. Our results\n> were with a single user and we were running Inodb. We were running on\n> RedHat 8.0 / 9.0 with vanilla linux settings.\n\nHi Brian, I just wanted to add that if you aren't testing your setup for \nmultiple users, you are doing yourself a disservice. The performance of \nyour app with one user is somewhat interesting, the performance of the \nsystem with a dozen or a hundred users is of paramount importance.\n\nA server that dies under heavy parallel load is useless, no matter how \nfast it ran when tested for one user. Conversely, one would prefer a \nserver that was a little slow for single users but can hold up under load.\n\nWhen I first built my test box a few years ago, I tested postgresql / \napache / php at 100 or more parallel users. That's where things start \ngetting ugly, and you've got to test for it now, before you commit to a \nplatform.\n\nPostgresql is designed to work on anything out of the box, which means \nit's not optimized for high performance, but for running on old Sparc 2s \nwith 128 meg of ram. If you're going to test it against MySQL, be fair to \nyourself and performance tune them both before testing, they're \nperformance on vanilla linux with vanilla configuration tuning teachs you \nlittle about how they'll behave in production on heavy iron.\n\nGood luck on your testing, and please, don't quit testing at the first \nsign one or the other is faster, be throrough and complete, including \nheavy parallel load testing with reads AND writes. Know the point at \nwhich each system begins to fail / become unresponsive, and how they \nbehave in overload.\n\n\n",
"msg_date": "Mon, 7 Jul 2003 11:35:24 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
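One way to run the kind of parallel read/write test Scott describes is contrib/pgbench, which shipped with PostgreSQL in that era. The scale factor, client count and transaction count below are only illustrative, and the database name is made up:

    # Build a test database at scale factor 10 (roughly a million rows in "accounts").
    pgbench -i -s 10 benchdb

    # Hammer it with 50 concurrent clients running 1000 transactions each
    # of the default TPC-B-like read/write mix.
    pgbench -c 50 -t 1000 benchdb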
{
"msg_contents": "Oddly enough, the particular application in question will have an extremely\nsmall user base...perhaps a few simultainous users at most.\n\nAs to the testing, I neglected to say early in this thread that my manager\ninstructed me _not_ to do further performance testing...so as a good\nconsultant I complied. I'm not going to touch if that was a smart\ninstruction to give :-)\n\nBrian\n\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]]\nSent: Monday, July 07, 2003 1:35 PM\nTo: Brian Tarbox\nCc: [email protected]; Rafal Kedziorski;\[email protected]\nSubject: Re: [PERFORM] PostgreSQL vs. MySQL\n\n\nOn Fri, 4 Jul 2003, Brian Tarbox wrote:\n\n> I'm actually leaving this list but I can answer this question. Our\nresults\n> were with a single user and we were running Inodb. We were running on\n> RedHat 8.0 / 9.0 with vanilla linux settings.\n\nHi Brian, I just wanted to add that if you aren't testing your setup for\nmultiple users, you are doing yourself a disservice. The performance of\nyour app with one user is somewhat interesting, the performance of the\nsystem with a dozen or a hundred users is of paramount importance.\n\nA server that dies under heavy parallel load is useless, no matter how\nfast it ran when tested for one user. Conversely, one would prefer a\nserver that was a little slow for single users but can hold up under load.\n\nWhen I first built my test box a few years ago, I tested postgresql /\napache / php at 100 or more parallel users. That's where things start\ngetting ugly, and you've got to test for it now, before you commit to a\nplatform.\n\nPostgresql is designed to work on anything out of the box, which means\nit's not optimized for high performance, but for running on old Sparc 2s\nwith 128 meg of ram. If you're going to test it against MySQL, be fair to\nyourself and performance tune them both before testing, they're\nperformance on vanilla linux with vanilla configuration tuning teachs you\nlittle about how they'll behave in production on heavy iron.\n\nGood luck on your testing, and please, don't quit testing at the first\nsign one or the other is faster, be throrough and complete, including\nheavy parallel load testing with reads AND writes. Know the point at\nwhich each system begins to fail / become unresponsive, and how they\nbehave in overload.\n\n",
"msg_date": "Mon, 7 Jul 2003 14:16:16 -0400",
"msg_from": "\"Brian Tarbox\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Mon, 7 Jul 2003, Brian Tarbox wrote:\n\n> Oddly enough, the particular application in question will have an extremely\n> small user base...perhaps a few simultainous users at most.\n> \n> As to the testing, I neglected to say early in this thread that my manager\n> instructed me _not_ to do further performance testing...so as a good\n> consultant I complied. I'm not going to touch if that was a smart\n> instruction to give :-)\n\nBut remember, you can always rename your performance testing as \ncompliance testing and then it's ok, as long as you don't keep any \ndetailed records about the time it took to run the \"compliance testing\" \nqueries.\n\nDefinitely look at the output from explain analyze select ... to see what \nthe planner THINKS the query is gonna cost versus what it really costs. \nIf you see a huge difference between, say estimated rows and actual rows, \nor some other value, it points to the analyzer not getting the right data \nfor the planner. You can adjust the percentage of a table sampled with \nalter table to force more data into analyze.\n\n",
"msg_date": "Mon, 7 Jul 2003 13:58:24 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
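The estimate-versus-reality check Scott describes boils down to a short session. The table and column names here are hypothetical, and the statistics target of 100 is just an example (the default was 10):

    -- Look for a row-count estimate that is far from the actual count.
    EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'open';

    -- Gather more detailed statistics for the skewed column, then re-check.
    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 100;
    ANALYZE orders;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'open';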
{
"msg_contents": "On Sunday 13 July 2003 10:23, Ron Johnson wrote:\n> On Fri, 2003-07-04 at 09:49, Shridhar Daithankar wrote:\n> > On 4 Jul 2003 at 16:35, Michael Mattox wrote:\n>\n> [snip]\n>\n> > On a positive note, me and Josh are finishing a bare bone performance\n> > article that would answer lot of your questions. I am counting on you to\n> > provide valuable feedback. I expect it out tomorrow or on sunday..Josh\n> > will confirm that..\n>\n> Hello,\n>\n> Is this doc publicly available yet?\n\nYes. See http://www.varlena.com/GeneralBits/\n\nI thought I announved it on performance.. anyways..\n\n Shridhar\n\n",
"msg_date": "Sun, 13 Jul 2003 15:41:08 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "\nI think the issue with multiple users is that a car is good for moving a\nfew people, but it can't move lots of large boxes. A truck can move\nlarge boxes, but it can't move a few people efficiently. PostgreSQL is\nmore like a truck, while MySQL is more like a car.\n\nAs an aside, I think Solaris is slower than other OS's because it is\nbuilt to scale efficiently to many CPU's, and that takes a performance\nhit in a machine with just a few CPU's, though they are working on\ntuning those cases.\n\nOf course, this is all just a generalization.\n\n---------------------------------------------------------------------------\n\nscott.marlowe wrote:\n> On Fri, 4 Jul 2003, Brian Tarbox wrote:\n> \n> > I'm actually leaving this list but I can answer this question. Our results\n> > were with a single user and we were running Inodb. We were running on\n> > RedHat 8.0 / 9.0 with vanilla linux settings.\n> \n> Hi Brian, I just wanted to add that if you aren't testing your setup for \n> multiple users, you are doing yourself a disservice. The performance of \n> your app with one user is somewhat interesting, the performance of the \n> system with a dozen or a hundred users is of paramount importance.\n> \n> A server that dies under heavy parallel load is useless, no matter how \n> fast it ran when tested for one user. Conversely, one would prefer a \n> server that was a little slow for single users but can hold up under load.\n> \n> When I first built my test box a few years ago, I tested postgresql / \n> apache / php at 100 or more parallel users. That's where things start \n> getting ugly, and you've got to test for it now, before you commit to a \n> platform.\n> \n> Postgresql is designed to work on anything out of the box, which means \n> it's not optimized for high performance, but for running on old Sparc 2s \n> with 128 meg of ram. If you're going to test it against MySQL, be fair to \n> yourself and performance tune them both before testing, they're \n> performance on vanilla linux with vanilla configuration tuning teachs you \n> little about how they'll behave in production on heavy iron.\n> \n> Good luck on your testing, and please, don't quit testing at the first \n> sign one or the other is faster, be throrough and complete, including \n> heavy parallel load testing with reads AND writes. Know the point at \n> which each system begins to fail / become unresponsive, and how they \n> behave in overload.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 20 Jul 2003 19:23:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Brian Tarbox wrote:\n> Oddly enough, the particular application in question will have an extremely\n> small user base...perhaps a few simultainous users at most.\n> \n> As to the testing, I neglected to say early in this thread that my manager\n> instructed me _not_ to do further performance testing...so as a good\n> consultant I complied. I'm not going to touch if that was a smart\n> instruction to give :-)\n\nPerformance is probably 'good enough', and you can revisit it later when\nyou have more time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 20 Jul 2003 19:24:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\nDo we need to add a mention of the need for tuning to the install docs?\n\n---------------------------------------------------------------------------\n\nAndrew Sullivan wrote:\n> On Fri, Jul 04, 2003 at 08:07:18PM +0200, Arjen van der Meijden wrote:\n> > > Andrew Sullivan wrote:\n> > > results under production conditions, and not bother to read \n> > > even the basic \"quickstart\"-type stuff that is kicking \n> > > around.\n> > Then please point out where it sais, in the documentation, that the\n> > value for the shared_memory of 64 is too low and that 4000 is a nice\n> > value to start with?\n> \n> I think I did indeed speak too soon, as the criticism is a fair one:\n> nowhere in the installation instructions or the \"getting started\"\n> docs does it say that you really ought to do some tuning once you\n> have the system installed. Can I suggest for the time being that\n> something along these lines should go in 14.6.3, \"Tuning the\n> installation\":\n> \n> ---snip---\n> By default, PostgreSQL is configured to run on minimal hardware. As\n> a result, some tuning of your installation will be necessary before\n> using it for anything other than extremely small databases. At the\n> very least, it will probably be necessary to increase your shared\n> buffers setting. See Chapter 16 for details on what tuning options\n> are available to you.\n> ---snip---\n> \n> > I'm sorry to put this in a such a confronting manner, but you simply\n> > can't expect people to search for information that they don't know the\n> > existence of.\n> \n> No need to apologise; I think you're right.\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 16 Aug 2003 20:36:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Sat, Aug 16, 2003 at 08:36:57PM -0400, Bruce Momjian wrote:\n> \n> Do we need to add a mention of the need for tuning to the install docs?\n\nWouldn't be a bad idea, as far as I'm concerned.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sun, 17 Aug 2003 11:06:19 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL vs. MySQL"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Sat, Aug 16, 2003 at 08:36:57PM -0400, Bruce Momjian wrote:\n> > \n> > Do we need to add a mention of the need for tuning to the install docs?\n> \n> Wouldn't be a bad idea, as far as I'm concerned.\n\nOK, I added a 'Tuning' section to the install instructions. I can make\nadjustments.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/installation.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/installation.sgml,v\nretrieving revision 1.143\ndiff -c -c -r1.143 installation.sgml\n*** doc/src/sgml/installation.sgml\t13 Sep 2003 17:01:09 -0000\t1.143\n--- doc/src/sgml/installation.sgml\t26 Sep 2003 17:40:53 -0000\n***************\n*** 1156,1161 ****\n--- 1156,1181 ----\n <title>Post-Installation Setup</title>\n \n <sect2>\n+ <title>Tuning</title>\n+ \n+ <indexterm>\n+ <primary>tuning</primary>\n+ </indexterm>\n+ \n+ <para>\n+ By default, <productname>PostgreSQL</> is configured to run on minimal\n+ hardware. This allows it to start up with almost any hardware\n+ configuration. However, the default configuration is not designed for\n+ optimum performance. To achieve optimum performance, several server\n+ variables must be adjusted, the two most common being\n+ <varname>shared_buffers</varname> and <varname> sort_mem</varname>\n+ mentioned in <xref linkend=\"runtime-config-resource-memory\">. Other\n+ paramters in <xref linkend=\"runtime-config-resource\"> also affect \n+ performance.\n+ </para>\n+ </sect2>\n+ \n+ <sect2>\n <title>Shared Libraries</title>\n \n <indexterm>",
"msg_date": "Fri, 26 Sep 2003 13:47:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL vs. MySQL"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Fri, Jul 04, 2003 at 08:07:18PM +0200, Arjen van der Meijden wrote:\n> > > Andrew Sullivan wrote:\n> > > results under production conditions, and not bother to read \n> > > even the basic \"quickstart\"-type stuff that is kicking \n> > > around.\n> > Then please point out where it sais, in the documentation, that the\n> > value for the shared_memory of 64 is too low and that 4000 is a nice\n> > value to start with?\n> \n> I think I did indeed speak too soon, as the criticism is a fair one:\n> nowhere in the installation instructions or the \"getting started\"\n> docs does it say that you really ought to do some tuning once you\n> have the system installed. Can I suggest for the time being that\n> something along these lines should go in 14.6.3, \"Tuning the\n> installation\":\n> \n> ---snip---\n> By default, PostgreSQL is configured to run on minimal hardware. As\n> a result, some tuning of your installation will be necessary before\n> using it for anything other than extremely small databases. At the\n> very least, it will probably be necessary to increase your shared\n> buffers setting. See Chapter 16 for details on what tuning options\n> are available to you.\n> ---snip---\n> \n> > I'm sorry to put this in a such a confronting manner, but you simply\n> > can't expect people to search for information that they don't know the\n> > existence of.\n> \n> No need to apologise; I think you're right.\n\nAgreed. Text added to install docs:\n\n <para>\n By default, <productname>PostgreSQL</> is configured to run on minimal\n hardware. This allows it to start up with almost any hardware\n configuration. However, the default configuration is not designed for\n optimum performance. To achieve optimum performance, several server\n variables must be adjusted, the two most common being\n <varname>shared_buffers</varname> and <varname> sort_mem</varname>\n mentioned in <![%standalone-include[the documentation]]>\n <![%standalone-ignore[<xref linkend=\"runtime-config-resource-memory\">]]>.\n Other parameters in <![%standalone-include[the documentation]]>\n <![%standalone-ignore[<xref linkend=\"runtime-config-resource\">]]>\n also affect performance.\n </para>\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 13:28:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Bruce,\n\n> Agreed. Text added to install docs:\n>\n> <para>\n> By default, <productname>PostgreSQL</> is configured to run on minimal\n> hardware. This allows it to start up with almost any hardware\n> configuration. However, the default configuration is not designed for\n> optimum performance. To achieve optimum performance, several server\n> variables must be adjusted, the two most common being\n> <varname>shared_buffers</varname> and <varname> sort_mem</varname>\n> mentioned in <![%standalone-include[the documentation]]>\n> <![%standalone-ignore[<xref\n> linkend=\"runtime-config-resource-memory\">]]>. Other parameters in\n> <![%standalone-include[the documentation]]> <![%standalone-ignore[<xref\n> linkend=\"runtime-config-resource\">]]> also affect performance.\n> </para>\n\nWhat would you think of adding a condensed version of my and Shridhar's guide \nto the install docs? I think I can offer a 3-paragraph version which would \ncover the major points of setting PostgreSQL.conf.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 10:49:55 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bruce,\n> \n> > Agreed. Text added to install docs:\n> >\n> > <para>\n> > By default, <productname>PostgreSQL</> is configured to run on minimal\n> > hardware. This allows it to start up with almost any hardware\n> > configuration. However, the default configuration is not designed for\n> > optimum performance. To achieve optimum performance, several server\n> > variables must be adjusted, the two most common being\n> > <varname>shared_buffers</varname> and <varname> sort_mem</varname>\n> > mentioned in <![%standalone-include[the documentation]]>\n> > <![%standalone-ignore[<xref\n> > linkend=\"runtime-config-resource-memory\">]]>. Other parameters in\n> > <![%standalone-include[the documentation]]> <![%standalone-ignore[<xref\n> > linkend=\"runtime-config-resource\">]]> also affect performance.\n> > </para>\n> \n> What would you think of adding a condensed version of my and Shridhar's guide \n> to the install docs? I think I can offer a 3-paragraph version which would \n> cover the major points of setting PostgreSQL.conf.\n\nYes, I think that is a good idea --- now, does it go in the install\ndocs, or in the docs next to each GUC item?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 13:58:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Bruce,\n\n> Yes, I think that is a good idea --- now, does it go in the install\n> docs, or in the docs next to each GUC item?\n\nHmmm ... both, I think. The Install Docs should have:\n\n\"Here are the top # things you will want to adjust in your PostgreSQL.conf:\n1) Shared_buffers <link>\n2) Sort_mem <link>\n3) effective_cache_size <link>\n4) random_page_cost <link>\n5) Fsync <link>\netc.\"\n\nThen next to each of these items in the Docs, I add 1-2 sentences about how to \nset that item.\n\nHmmm ... do we have similar instructions for setting connection options and \npg_hba.conf? We should have a P telling people they need to do this.\n\nBarring an objection, I'll get to work on this.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 11:05:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\nTotally agree.\n\n---------------------------------------------------------------------------\n\nJosh Berkus wrote:\n> Bruce,\n> \n> > Yes, I think that is a good idea --- now, does it go in the install\n> > docs, or in the docs next to each GUC item?\n> \n> Hmmm ... both, I think. The Install Docs should have:\n> \n> \"Here are the top # things you will want to adjust in your PostgreSQL.conf:\n> 1) Shared_buffers <link>\n> 2) Sort_mem <link>\n> 3) effective_cache_size <link>\n> 4) random_page_cost <link>\n> 5) Fsync <link>\n> etc.\"\n> \n> Then next to each of these items in the Docs, I add 1-2 sentences about how to \n> set that item.\n> \n> Hmmm ... do we have similar instructions for setting connection options and \n> pg_hba.conf? We should have a P telling people they need to do this.\n> \n> Barring an objection, I'll get to work on this.\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 14:15:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": "On Wed, 2003-10-08 at 14:05, Josh Berkus wrote:\n> Hmmm ... both, I think. The Install Docs should have:\n> \n> \"Here are the top # things you will want to adjust in your PostgreSQL.conf:\n> 1) Shared_buffers <link>\n> 2) Sort_mem <link>\n> 3) effective_cache_size <link>\n> 4) random_page_cost <link>\n> 5) Fsync <link>\n> etc.\"\n\n> Barring an objection, I'll get to work on this.\n\nI think this kind of information belongs in the documentation proper,\nnot in the installation instructions. I think you should put this kind\nof tuning information in the \"Performance Tips\" chapter, and include a\npointer to it in the installation instructions.\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 14:28:45 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
{
"msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Hmmm ... both, I think. The Install Docs should have:\n\nJB> \"Here are the top # things you will want to adjust in your PostgreSQL.conf:\nJB> 1) Shared_buffers <link>\nJB> 2) Sort_mem <link>\nJB> 3) effective_cache_size <link>\nJB> 4) random_page_cost <link>\nJB> 5) Fsync <link>\nJB> etc.\"\n\nAdd:\n\nmax_fsm_relations (perhaps it is ok with current default)\nmax_fsm_pages\n\nI don't think you really want to diddle with fsync in the name of\nspeed at the cost of safety.\n\nand possibly:\n\ncheckpoint_segments (if you do a lot of writes to the DB for extended\n durations of time) With 7.4 it warns you in the\n logs if you should increase this.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Wed, 08 Oct 2003 15:58:03 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
},
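Pulling Josh's list and Vivek's additions together, a postgresql.conf fragment for the parameters mentioned might look like the sketch below. The numbers are illustrative only -- they assume a dedicated 7.3/7.4-era server with about 1 GB of RAM -- and each one should be checked against the resource-configuration chapter of the docs rather than copied as-is:

    shared_buffers = 4096          # 8 kB buffers (~32 MB); the shipped default was far lower
    sort_mem = 8192                # kB per sort/hash operation, per backend
    effective_cache_size = 65536   # 8 kB pages (~512 MB) the OS is expected to cache
    random_page_cost = 3           # drop below the default 4 when data is mostly cached
    max_fsm_relations = 1000       # free space map: number of relations tracked
    max_fsm_pages = 100000         # free space map: pages with reusable space
    checkpoint_segments = 8        # raise for write-heavy loads (7.4 logs a hint when too low)

fsync is deliberately left at its default here, per Vivek's caution about trading safety for speed.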
{
"msg_contents": "On Wed, Oct 08, 2003 at 01:28:53PM -0400, Bruce Momjian wrote:\n> \n> Agreed. Text added to install docs:\n\n[&c.]\n\nI think this is just right. It tells a user where to find the info\nneeded, doesn't reproduce it all over the place, and still points out\nthat this is something you'd better do. Combined with the new\nprobe-to-set-shared-buffers bit at install time, I think the reports\nof 400 billion times worse performance than MySQL will probably\ndiminish.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 9 Oct 2003 05:53:38 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs. MySQL"
}
] |
[
{
"msg_contents": "On 2 Jul 2003 at 16:17, Mats Kling wrote:\n\n> \n> Hi all!\n> \n> I have a big trouble with a database and hope you can help out on how to \n> improve the time vacuum takes.\n> \n> The database grovs to ~60Gb and after a 'vacuum full' it's ~31Gb, after \n> about a week the database it up to 55-60Gb again and i have to do a \n> 'vacuum alalyze full' to gain disk (the disk is 70Gb so I'm living on \n> the edge here ;(\n> I have a maintenancewindow once a week but vacuuming this database takes \n> around 10-14 hours and I really wanna cut that time down.\n> Since the disk is >85% full i tried to vacuum table by table instead of \n> doing the whole database and my feeling was that i think I gained a \n> speedup if I vacuumed the tables that takes most disk first (so the rest \n> of the tables have more disk to work on)..\n> can this be true or was it just a feeling?\n\nI know this is a very late reply but it might be of some help.\n\n1. What is nature of your updates/deletes? Do you have many deletes or few \nupdates or other way round.\n\n2. Have you tried using pgavd?\n\nIf your load involves lot's of updates, pgavd can definitely help you.\n\nHTH\n\nBye\n Shridhar\n\n--\nBumper sticker:\tAll the parts falling off this car are of the very finest\t\nBritish manufacture.\n\n",
"msg_date": "Fri, 04 Jul 2003 18:55:21 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: can multiple vacuums gain speed?"
}
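If pgavd is not an option, the manual equivalent of the suggestion above is simply to vacuum the heavily-updated tables much more often than the weekly full pass, and to make sure the free space map is large enough to remember what those vacuums free. The table name and the setting value below are hypothetical:

    -- Run this (e.g. from cron) every hour or two against the hottest tables;
    -- plain VACUUM does not lock out readers and writers the way VACUUM FULL does.
    VACUUM ANALYZE big_hot_table;

    -- postgresql.conf: without enough FSM pages, freed space cannot be reused
    -- and the table keeps growing anyway (value illustrative).
    -- max_fsm_pages = 500000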
] |
[
{
"msg_contents": "The problem is that people often benchmark the so called vanilla \ninstallation of PostgreSQL.\n\nI understand why the PostgreSQL team has decided to have an overly \nconservative default conf file. But no matter what the reason is or \nwho's to blame that a tester has not tuned PostgreSQL configuration, the \nword is being spread that PostgreSQL is featurerich but _slow_. The \nongoing discussion currently in performance list is just one example. I \nhave seen it announce more than once: \"we did not do any configuration \ntuning on the test systems\". Take \nhttp://www.hwaci.com/sw/sqlite/speed.html as another example.\n\n\"The PostgreSQL and MySQL servers used were as delivered by default on \nRedHat 7.2. (PostgreSQL version 7.1.3 and MySQL version 3.23.41.) No \neffort was made to tune these engines. Note in particular the the \ndefault MySQL configuration on RedHat 7.2 does not support transactions.\"\n\nI remember a discussion in the general list about having multiple \ndefault conf files to choose from. Ala low-end, average and high-end \ninstallations. A tool to read some system information and dynamically \ngenerating a proper configuration file was also mentioned.\n\nThe other issue that a lot of new PostgreSQL users seem to have is the \nVACUUM ANALYZE. They just don't know about it. Perhaps some more active \nones will read the documentation or ask for help in email lists. But a \nlot of them are surely leaving things and thinking of PostgreSQL as a \nslow system. Remember they too spread the word of their experience.\n\nI'm not an expert of PostgreSQL by any means I have just been reading \nPostgreSQL email lists for only about a month or so. So I believe I have \nread that there is a auto-vacuum being worked on? In my opinion this \nshould be included in the main installation by default. This is just the \nkind of job that a machine should do...when a big portion of data has \nchanged do VACUUM ANALYCE automagically.\n\nIs these improvements actually being implemented and how far are they?\n\nThe technical side of these problems is not for this list of course. \nHowever the \"side-effects\" (reputation of being slow) of these problems \ndireclty relate to advocacy and PostgreSQL popularity. Maybe these \nproblems are already worked on or maybe I'm over exaggerating the \nsituation but I do believe solving these issues would only benefit \nPostgreSQL.\n\nJust my 2 c\nKaarel\n\n",
"msg_date": "Fri, 04 Jul 2003 19:45:56 +0300",
"msg_from": "Kaarel <[email protected]>",
"msg_from_op": true,
"msg_subject": "About the default performance"
},
{
"msg_contents": "Kaarel:\n\n(cross-posted back to Performance because I don't want to post twice on the \nsame topic)\n\n> The problem is that people often benchmark the so called vanilla\n> installation of PostgreSQL.\n<snip>\n> I remember a discussion in the general list about having multiple\n> default conf files to choose from. Ala low-end, average and high-end\n> installations. A tool to read some system information and dynamically\n> generating a proper configuration file was also mentioned.\n\nYes. So far, only Justin, Kevin B., Shridhar and I have volunteered to do any \nwork on that task -- and all of us have been swamped with 7.4-related stuff.\n\nI would like to see, before the end of the year, some if not all of the stuff \nthat Kaarel is posting about. Obviously, my first task is to set up a \nframework so that everyone can contribute to the project.\n\n> I'm not an expert of PostgreSQL by any means I have just been reading\n> PostgreSQL email lists for only about a month or so. So I believe I have\n> read that there is a auto-vacuum being worked on? In my opinion this\n> should be included in the main installation by default. This is just the\n> kind of job that a machine should do...when a big portion of data has\n> changed do VACUUM ANALYCE automagically.\n>\n> Is these improvements actually being implemented and how far are they?\n\nThe auto-vacuum daemon (pgavd) is finished. However, it will still require \nthe user to turn it on; we don't want to run potentially RAM-sucking \nbackground processes without user invitiation. So obviously that needs to be \npart of a comprehensive \"quick start\" guide.\n\nSo, Kaarel .... you want to write the \"quick start\" guide for 7.4? All of \nthe detail material is available online, you mainly need to provide narrative \nand links of the form of ... first, read this: <link>, then do this ...\n\n> The technical side of these problems is not for this list of course.\n> However the \"side-effects\" (reputation of being slow) of these problems\n> direclty relate to advocacy and PostgreSQL popularity. Maybe these\n> problems are already worked on or maybe I'm over exaggerating the\n> situation but I do believe solving these issues would only benefit\n> PostgreSQL.\n\nYou're absolutely correct .... so let's do something about it. From my \nperspective, the first step is improved docs, becuase we can have those out \nby 7.4 release.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 4 Jul 2003 09:56:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the default performance"
},
{
"msg_contents": "Hi everybody\n\nIs there any 'official' PostgreSQL banner?\n\nGreetings\nConni\n",
"msg_date": "Sun, 6 Jul 2003 15:05:37 +0200",
"msg_from": "\"Cornelia Boenigk\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL banner"
},
{
"msg_contents": "I'm willing to help too. I'm basically a DBA / developer type, with mild \nC hacking skills (I develop in PHP, so my C coding is quite rusty \nnowadays.)\n\nIf nothing else testing on different equipment / OSes.\n\nOn Fri, 4 Jul 2003, Josh Berkus wrote:\n\n> Kaarel:\n> \n> (cross-posted back to Performance because I don't want to post twice on the \n> same topic)\n> \n> > The problem is that people often benchmark the so called vanilla\n> > installation of PostgreSQL.\n> <snip>\n> > I remember a discussion in the general list about having multiple\n> > default conf files to choose from. Ala low-end, average and high-end\n> > installations. A tool to read some system information and dynamically\n> > generating a proper configuration file was also mentioned.\n> \n> Yes. So far, only Justin, Kevin B., Shridhar and I have volunteered to do any \n> work on that task -- and all of us have been swamped with 7.4-related stuff.\n> \n> I would like to see, before the end of the year, some if not all of the stuff \n> that Kaarel is posting about. Obviously, my first task is to set up a \n> framework so that everyone can contribute to the project.\n> \n> > I'm not an expert of PostgreSQL by any means I have just been reading\n> > PostgreSQL email lists for only about a month or so. So I believe I have\n> > read that there is a auto-vacuum being worked on? In my opinion this\n> > should be included in the main installation by default. This is just the\n> > kind of job that a machine should do...when a big portion of data has\n> > changed do VACUUM ANALYCE automagically.\n> >\n> > Is these improvements actually being implemented and how far are they?\n> \n> The auto-vacuum daemon (pgavd) is finished. However, it will still require \n> the user to turn it on; we don't want to run potentially RAM-sucking \n> background processes without user invitiation. So obviously that needs to be \n> part of a comprehensive \"quick start\" guide.\n> \n> So, Kaarel .... you want to write the \"quick start\" guide for 7.4? All of \n> the detail material is available online, you mainly need to provide narrative \n> and links of the form of ... first, read this: <link>, then do this ...\n> \n> > The technical side of these problems is not for this list of course.\n> > However the \"side-effects\" (reputation of being slow) of these problems\n> > direclty relate to advocacy and PostgreSQL popularity. Maybe these\n> > problems are already worked on or maybe I'm over exaggerating the\n> > situation but I do believe solving these issues would only benefit\n> > PostgreSQL.\n> \n> You're absolutely correct .... so let's do something about it. From my \n> perspective, the first step is improved docs, becuase we can have those out \n> by 7.4 release.\n> \n> \n\n",
"msg_date": "Mon, 7 Jul 2003 12:00:17 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the default performance"
},
{
"msg_contents": "\nI can help with this too.\n\n---------------------------------------------------------------------------\n\nscott.marlowe wrote:\n> I'm willing to help too. I'm basically a DBA / developer type, with mild \n> C hacking skills (I develop in PHP, so my C coding is quite rusty \n> nowadays.)\n> \n> If nothing else testing on different equipment / OSes.\n> \n> On Fri, 4 Jul 2003, Josh Berkus wrote:\n> \n> > Kaarel:\n> > \n> > (cross-posted back to Performance because I don't want to post twice on the \n> > same topic)\n> > \n> > > The problem is that people often benchmark the so called vanilla\n> > > installation of PostgreSQL.\n> > <snip>\n> > > I remember a discussion in the general list about having multiple\n> > > default conf files to choose from. Ala low-end, average and high-end\n> > > installations. A tool to read some system information and dynamically\n> > > generating a proper configuration file was also mentioned.\n> > \n> > Yes. So far, only Justin, Kevin B., Shridhar and I have volunteered to do any \n> > work on that task -- and all of us have been swamped with 7.4-related stuff.\n> > \n> > I would like to see, before the end of the year, some if not all of the stuff \n> > that Kaarel is posting about. Obviously, my first task is to set up a \n> > framework so that everyone can contribute to the project.\n> > \n> > > I'm not an expert of PostgreSQL by any means I have just been reading\n> > > PostgreSQL email lists for only about a month or so. So I believe I have\n> > > read that there is a auto-vacuum being worked on? In my opinion this\n> > > should be included in the main installation by default. This is just the\n> > > kind of job that a machine should do...when a big portion of data has\n> > > changed do VACUUM ANALYCE automagically.\n> > >\n> > > Is these improvements actually being implemented and how far are they?\n> > \n> > The auto-vacuum daemon (pgavd) is finished. However, it will still require \n> > the user to turn it on; we don't want to run potentially RAM-sucking \n> > background processes without user invitiation. So obviously that needs to be \n> > part of a comprehensive \"quick start\" guide.\n> > \n> > So, Kaarel .... you want to write the \"quick start\" guide for 7.4? All of \n> > the detail material is available online, you mainly need to provide narrative \n> > and links of the form of ... first, read this: <link>, then do this ...\n> > \n> > > The technical side of these problems is not for this list of course.\n> > > However the \"side-effects\" (reputation of being slow) of these problems\n> > > direclty relate to advocacy and PostgreSQL popularity. Maybe these\n> > > problems are already worked on or maybe I'm over exaggerating the\n> > > situation but I do believe solving these issues would only benefit\n> > > PostgreSQL.\n> > \n> > You're absolutely correct .... so let's do something about it. From my \n> > perspective, the first step is improved docs, becuase we can have those out \n> > by 7.4 release.\n> > \n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 20 Jul 2003 22:49:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the default performance"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe're run into a rather odd problem here, and we're puzzling out\nwhat's going on. But while we do, I thought I'd see if anyone else\nhas anything similar to report.\n\nThis is for 7.2.4 on Solaris 8.\n\nWe have a query for which EXPLAIN ANALYSE on a local psql connection\nalways returns a time of between about 325 msec and 850 msec\n(depending on other load, whether the result is in cache, &c. -- this\nis an aggregates query involving min() and count()).\n\nIf I connect using -h 127.0.0.1, however, I can _sometimes_ get the\nquery to take as long as 1200 msec. The effect is sporadic (of\ncourse. If it were totally predictable, the computing gods wouldn't\nbe having any fun with me), but it is certainly there off and on. \n(We discovered it because our application is regularly reporting\ntimes on this query roughly twice as long as I was able to get with\npsql, until I connected via TCP/IP.)\n\nI'll have more to report as we investigate further -- at the moment,\nthis has cropped up on a production system, and so we're trying to\nreproduce it in our test environment. Naturally, we're looking at\nthe TCP/IP stack configuration, among other stuff. In the meantime,\nhowever, I wondered if anyone knows which bits I ought to be prodding\nat to look for sub-optimal libraries, &c.; or whether anyone else has\nrun into similar problems on Solaris or elsewhere.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 13:22:05 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "\n'K, this is based on \"old information\", I don't know if Sun changed it\n'yet again' ... but, when I was working at the University, one of our IT\ndirectors gave me a report that deal with something Sun did (god, I'm so\ndetailed here, eh?) to \"mimic\" how Microsoft broke the TCP/IP protocol\n... the report was in relation to Web services, and how the change\nactually made Sun/Solaris appear to be slower then Microsoft ...\n\nAnd Sun made this the 'default' setting, but it was disablable in\n/etc/systems ...\n\nSorry for being so vague, but if I recall correctly, it had something to\ndo with adding an extra ACK to each packet ... maybe even as vague as the\nabove is, it will jar a memory for someone else?\n\n\nOn Fri, 4 Jul 2003, Andrew Sullivan wrote:\n\n> Hi all,\n>\n> We're run into a rather odd problem here, and we're puzzling out\n> what's going on. But while we do, I thought I'd see if anyone else\n> has anything similar to report.\n>\n> This is for 7.2.4 on Solaris 8.\n>\n> We have a query for which EXPLAIN ANALYSE on a local psql connection\n> always returns a time of between about 325 msec and 850 msec\n> (depending on other load, whether the result is in cache, &c. -- this\n> is an aggregates query involving min() and count()).\n>\n> If I connect using -h 127.0.0.1, however, I can _sometimes_ get the\n> query to take as long as 1200 msec. The effect is sporadic (of\n> course. If it were totally predictable, the computing gods wouldn't\n> be having any fun with me), but it is certainly there off and on.\n> (We discovered it because our application is regularly reporting\n> times on this query roughly twice as long as I was able to get with\n> psql, until I connected via TCP/IP.)\n>\n> I'll have more to report as we investigate further -- at the moment,\n> this has cropped up on a production system, and so we're trying to\n> reproduce it in our test environment. Naturally, we're looking at\n> the TCP/IP stack configuration, among other stuff. In the meantime,\n> however, I wondered if anyone knows which bits I ought to be prodding\n> at to look for sub-optimal libraries, &c.; or whether anyone else has\n> run into similar problems on Solaris or elsewhere.\n>\n> A\n>\n> --\n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n",
"msg_date": "Fri, 4 Jul 2003 14:35:18 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "http://grotto11.com/blog/slash.html?+1039831658\n\nSummary: IE and IIS cheat at TCP level by leaving out various SYN and ACK\npackets, thereby making IE requests from IIS servers blazingly fast, and\nmaking IE requests to non-IIS servers infuriatingly slow.\n\nBut since this only relates to making and breaking TCP connections, I don't\nthink this is relevant for a larger query time. It's probably normal for a TCP\nconnection to be slightly slower than a unix socket, but I don't think that's\nwat Andrew is experiencing.\n\nOn 2003-07-04 14:35:18 -0300, The Hermit Hacker wrote:\n> \n> 'K, this is based on \"old information\", I don't know if Sun changed it\n> 'yet again' ... but, when I was working at the University, one of our IT\n> directors gave me a report that deal with something Sun did (god, I'm so\n> detailed here, eh?) to \"mimic\" how Microsoft broke the TCP/IP protocol\n> ... the report was in relation to Web services, and how the change\n> actually made Sun/Solaris appear to be slower then Microsoft ...\n> \n> And Sun made this the 'default' setting, but it was disablable in\n> /etc/systems ...\n> \n> Sorry for being so vague, but if I recall correctly, it had something to\n> do with adding an extra ACK to each packet ... maybe even as vague as the\n> above is, it will jar a memory for someone else?\n> \n> \n> On Fri, 4 Jul 2003, Andrew Sullivan wrote:\n> \n> > Hi all,\n> >\n> > We're run into a rather odd problem here, and we're puzzling out\n> > what's going on. But while we do, I thought I'd see if anyone else\n> > has anything similar to report.\n> >\n> > This is for 7.2.4 on Solaris 8.\n> >\n> > We have a query for which EXPLAIN ANALYSE on a local psql connection\n> > always returns a time of between about 325 msec and 850 msec\n> > (depending on other load, whether the result is in cache, &c. -- this\n> > is an aggregates query involving min() and count()).\n> >\n> > If I connect using -h 127.0.0.1, however, I can _sometimes_ get the\n> > query to take as long as 1200 msec. The effect is sporadic (of\n> > course. If it were totally predictable, the computing gods wouldn't\n> > be having any fun with me), but it is certainly there off and on.\n> > (We discovered it because our application is regularly reporting\n> > times on this query roughly twice as long as I was able to get with\n> > psql, until I connected via TCP/IP.)\n> >\n> > I'll have more to report as we investigate further -- at the moment,\n> > this has cropped up on a production system, and so we're trying to\n> > reproduce it in our test environment. Naturally, we're looking at\n> > the TCP/IP stack configuration, among other stuff. In the meantime,\n> > however, I wondered if anyone knows which bits I ought to be prodding\n> > at to look for sub-optimal libraries, &c.; or whether anyone else has\n> > run into similar problems on Solaris or elsewhere.\n> >\n> > A\n> >\n> > --\n> > ----\n> > Andrew Sullivan 204-4141 Yonge Street\n> > Liberty RMS Toronto, Ontario Canada\n> > <[email protected]> M2P 2A8\n> > +1 416 646 3304 x110\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> >\n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Fri, 4 Jul 2003 19:55:12 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "> If I connect using -h 127.0.0.1, however, I can _sometimes_ get the\n> query to take as long as 1200 msec. The effect is sporadic (of\n\nSSL plays havoc with our system when using local loopback for the host \non both Solaris 7 and 8. It was probably key renegotiation which 7.4\nhas addressed.",
"msg_date": "04 Jul 2003 18:07:38 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 07:55:12PM +0200, Vincent van Leeuwen wrote:\n\n> But since this only relates to making and breaking TCP connections,\n> I don't think this is relevant for a larger query time. It's\n> probably normal for a TCP connection to be slightly slower than a\n> unix socket, but I don't think that's wat Andrew is experiencing.\n\nNo, it's not. And my colleague Sorin Iszlai pointed out to me\nsomething else about it: we're getting different numbers reported by\nEXPLAIN ANALYSE itself. How is that even possible?\n\nIf we try it here on a moderately-loaded Sun box, it seems we're able\nto reproduce it, as well. \n\nHow could it be the transport affects the time for the query as\nreported by the back end?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 4 Jul 2003 17:20:53 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "Andrew Sullivan <[email protected]> writes:\n> How could it be the transport affects the time for the query as\n> reported by the back end?\n\nHow much data is being sent back by the query?\n\nDo you have SSL enabled? SSL encryption overhead is nontrivial,\nespecially if any renegotiations happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jul 2003 17:47:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets "
},
{
"msg_contents": "On Fri, Jul 04, 2003 at 05:47:27PM -0400, Tom Lane wrote:\n> Andrew Sullivan <[email protected]> writes:\n> > How could it be the transport affects the time for the query as\n> > reported by the back end?\n> \n> How much data is being sent back by the query?\n\nIn this case, it's an all-aggregate query:\n\nselect count(*), min(id) from sometable where owner = int4;\n\n(Yeah, yeah, I know. I didn't write it.)\n\nBut it's the EXPLAIN ANALYSE that's reporting different times\ndepending on the transport. That's what I find so strange.\n\n> Do you have SSL enabled? SSL encryption overhead is nontrivial,\n> especially if any renegotiations happen.\n\nNo.\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sat, 5 Jul 2003 16:58:15 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
},
{
"msg_contents": "Hi all,\n\nYou may remember in my last report, I said that it appeared that\nTCP/IP connections caused EXPLAIN ANALYSE to return (repeatably but\nnot consistently) slower times than when connected over UNIX domain\nsockets. \n\nThis turns out to be false. We (well, Chris Browne, actually) ran\nsome tests which demonstrated that the performance problem turned up\nover the UNIX socket, as well. It was just a statistical fluke that\nour smaller sample always found the problem on TCP/IP.\n\nOf course, now we have some other work to do, but we can rule out the\ntransport at least. Chalk one up for sane results. If we discover\nany more, I'll post it here.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 8 Jul 2003 16:54:48 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange result: UNIX vs. TCP/IP sockets"
}
] |
[
{
"msg_contents": "The only time that I have ever seen load averages of 30 or more under \nOpenBSD is when one of my scripts goes wild. However, I can say that \nI am also seeing these load averages under PostgreSQL 7.3.2 after a \nmigration to it from MySQL.\n\nMySQL Statistics:\nUptime: 1055352 Threads: 178 Questions: 75161710 Slow queries: 46 \nOpens: 1084 Flush tables: 1 Open tables: 206 Queries per second avg: \n71.220\n\nThe above are statistics from older generation scripts that would make \nuse of MySQL as to give an idea of what's going on. That generation of \nscripts would handle the referential integrity, since foreign key \nconstraints are not enforced under that system. However, the system \nhandled 250 concurrent users without a singular problem, while under \nPostgres with new scripts using functions, referential integrity, \ntransactions and lighter code, the system starts to buckle at even less \nthen 70 users.\n\nWhat I would like to know is. Why? The kernel has been compiled to \nhandle the number of concurrent connections, the server may not be the \nbest, but it should be able to handle the requests: PIII 1Ghz, 1GB \nSDRAM, 2 IDE 20GB drives.\n\nI have changed settings to take advantage of the memory. So the \nfollowing settings are of interest:\n\tshared_buffers = 16384\n\twal_buffers = 256\n\tsort_mem = 16384\n\tvacuum_mem = 32768\n\nStatistics gathering has now been disabled, and logging is done through \nsyslog. I do not expect those settings to cripple system performance \nhowever.\n\nThe scripts are heavy SELECTS with a fair dose of UPDATES and INSERTS. \n To get a concept of what these scripts done, you can look at Ethereal \nRealms (http://www.ethereal-realms.org) which are running the PostgreSQL \nscript variants or consider that this is a chat site.\n\nAnyone have ideas? Is the use of connection pooling consider bad? \nShould flush be run more then once a day? I have no intention of going \nback to MySQL, and would like to make this new solution work.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sat, 05 Jul 2003 22:54:57 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extreme high load averages"
},
{
"msg_contents": "On Sunday 06 Jul 2003 5:54 am, Martin Foster wrote:\n> The only time that I have ever seen load averages of 30 or more under\n> OpenBSD is when one of my scripts goes wild. However, I can say that\n> I am also seeing these load averages under PostgreSQL 7.3.2 after a\n> migration to it from MySQL.\n[snip]\n> However, the system\n> handled 250 concurrent users without a singular problem, while under\n> Postgres with new scripts using functions, referential integrity,\n> transactions and lighter code, the system starts to buckle at even less\n> then 70 users.\n[snip]\n> PIII 1Ghz, 1GB\n> SDRAM, 2 IDE 20GB drives.\n>\n> I have changed settings to take advantage of the memory. So the\n> following settings are of interest:\n> \tshared_buffers = 16384\n> \twal_buffers = 256\n> \tsort_mem = 16384\n> \tvacuum_mem = 32768\n\nYou do know that sort_mem is in kB per sort (not per connection, but per sort \nbeing done by a connection). That's 16MB per sort you've allowed in main \nmemory, or for 70 concurrent sorts up to 1.1GB of memory allocated to \nsorting. You're not going into swap by any chance?\n\nMight want to try halving shared_buffers too and see what happens.\n\nI don't know the *BSDs myself, but do you have the equivalent of iostat/vmstat \noutput you could get for us? Also a snapshot of \"top\" output? People are \ngoing to want to see:\n - overall memory usage (free/buffers/cache/swap)\n - memory usage per process\n - disk activity (blocks in/out)\n\n From that lot, someone will be able to point towards the issue, I'm sure.\n-- \n Richard Huxton\n",
"msg_date": "Sun, 6 Jul 2003 08:49:00 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
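A rough illustration of the sort_mem arithmetic in the message above, assuming the 1GB machine and roughly 70 concurrent connections described in this thread (the figures are back-of-the-envelope, not measurements from the poster's system):

    # sort_mem is allocated per sort, in kB (postgresql.conf)
    #   sort_mem = 16384  ->  70 sorts * 16MB = ~1120MB worst case
    #   sort_mem =  8192  ->  70 sorts *  8MB =  ~560MB worst case
    #   sort_mem =  4096  ->  70 sorts *  4MB =  ~280MB worst case
    sort_mem = 8192
    shared_buffers = 8192   # half the original 16384, i.e. 64MB, as suggested above

Whether 8192 or something smaller is right depends on how many of those connections actually run large sorts at the same time.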
{
"msg_contents": "On 5 Jul 2003 at 22:54, Martin Foster wrote:\n> What I would like to know is. Why? The kernel has been compiled to \n> handle the number of concurrent connections, the server may not be the \n> best, but it should be able to handle the requests: PIII 1Ghz, 1GB \n> SDRAM, 2 IDE 20GB drives.\n> \n> I have changed settings to take advantage of the memory. So the \n> following settings are of interest:\n> \tshared_buffers = 16384\n> \twal_buffers = 256\n> \tsort_mem = 16384\n> \tvacuum_mem = 32768\n\nAs somebody else has already pointed out, your sort_mem is bit too high\nthan required. Try lowering it.\n\nSecondly did you tune effective_cache_size?\n\nHTH\nBye\n Shridhar\n\n--\nPower, n.:\tThe only narcotic regulated by the SEC instead of the FDA.\n\n",
"msg_date": "Sun, 06 Jul 2003 15:14:18 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extreme high load averages"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n\n> On 5 Jul 2003 at 22:54, Martin Foster wrote:\n> \n>>What I would like to know is. Why? The kernel has been compiled to \n>>handle the number of concurrent connections, the server may not be the \n>>best, but it should be able to handle the requests: PIII 1Ghz, 1GB \n>>SDRAM, 2 IDE 20GB drives.\n>>\n>>I have changed settings to take advantage of the memory. So the \n>>following settings are of interest:\n>>\tshared_buffers = 16384\n>>\twal_buffers = 256\n>>\tsort_mem = 16384\n>>\tvacuum_mem = 32768\n> \n> \n> As somebody else has already pointed out, your sort_mem is bit too high\n> than required. Try lowering it.\n> \n> Secondly did you tune effective_cache_size?\n> \n> HTH\n> Bye\n> Shridhar\n> \n> --\n> Power, n.:\tThe only narcotic regulated by the SEC instead of the FDA.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\nI dropped the size of the sort_mem down to 8 megs. Since I am not \nswapping to cache at all this should not post much of a problem at that \nvalue.\n\neffective_cache_size seems interesting, though the description is \nsomewhat lacking. Is this related to the swap partition and how much of \nit will be used by PostgreSQL? If I am correct, this should be fairly low?\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 06 Jul 2003 04:26:21 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extreme high load averages"
},
{
"msg_contents": "Richard Huxton wrote:\n\n> On Sunday 06 Jul 2003 5:54 am, Martin Foster wrote:\n> \n>>The only time that I have ever seen load averages of 30 or more under\n>>OpenBSD is when one of my scripts goes wild. However, I can say that\n>>I am also seeing these load averages under PostgreSQL 7.3.2 after a\n>>migration to it from MySQL.\n> \n> [snip]\n> \n>>However, the system\n>>handled 250 concurrent users without a singular problem, while under\n>>Postgres with new scripts using functions, referential integrity,\n>>transactions and lighter code, the system starts to buckle at even less\n>>then 70 users.\n> \n> [snip]\n> \n>>PIII 1Ghz, 1GB\n>>SDRAM, 2 IDE 20GB drives.\n>>\n>>I have changed settings to take advantage of the memory. So the\n>>following settings are of interest:\n>>\tshared_buffers = 16384\n>>\twal_buffers = 256\n>>\tsort_mem = 16384\n>>\tvacuum_mem = 32768\n> \n> \n> You do know that sort_mem is in kB per sort (not per connection, but per sort \n> being done by a connection). That's 16MB per sort you've allowed in main \n> memory, or for 70 concurrent sorts up to 1.1GB of memory allocated to \n> sorting. You're not going into swap by any chance?\n> \n> Might want to try halving shared_buffers too and see what happens.\n> \n> I don't know the *BSDs myself, but do you have the equivalent of iostat/vmstat \n> output you could get for us? Also a snapshot of \"top\" output? People are \n> going to want to see:\n> - overall memory usage (free/buffers/cache/swap)\n> - memory usage per process\n> - disk activity (blocks in/out)\n> \n>>From that lot, someone will be able to point towards the issue, I'm sure.\n\nActually, no I did not. Which is probably why it was as high as it is. \n When looking at the PostgreSQL Hardware Performance Tuning page, it \nseems to imply that you should calculate based on RAM to give it an \nappropriate value.\n\n http://www.postgresql.org/docs/aw_pgsql_book/hw_performance/node8.html\n\nI dropped that value, and will see if that helps. The thing is, the \nsystem always indicated plenty of memory available. Even when at a 30 \nload level the free memory was still roughly 170MB.\n\nTomorrow will be a good gage to see if the changes will actually help \nmatters. And if they do not, I will include vmstat, iostat, and top \nas requested.\n\nThanks!\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 06 Jul 2003 04:31:52 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "On Sunday 06 July 2003 15:56, Martin Foster wrote:\n> effective_cache_size seems interesting, though the description is\n> somewhat lacking. Is this related to the swap partition and how much of\n> it will be used by PostgreSQL? If I am correct, this should be fairly\n> low? Martin Foster\n\nIt gives hint to psotgresql how much file system cache is available in the \nsystem. \n\nYou have 1GB memory and your application requirement does not exceed 400MB. So \nOS can use roughly 600MB for file system cache. In that case you can set this \nparameter to 400MB cache to leave room for other application in FS cache.\n\nIIRC, BSD needs sysctl tuning to make more memory available for FS cache other \nwise they max out at 300MB.\n\nRoughly this setting should be (total memory -application \nrequirement)*(0.7/0.8)\n\nI guess that high kernel load you are seeing due to increased interaction \nbetween postgresql and OS when data is swapped to/fro in shared memory. If OS \ncache does well, postgresql should reduce this interaction as well.\n\n\nBTW, since you have IDE disks, heavy disk activity can eat CPU as well. Is \nyour disk bandwidth totally maxed out? Check with vmstat or whatever \nequivalent you have on BSD.\n\n Shridhar\n\n",
"msg_date": "Sun, 6 Jul 2003 16:04:48 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extreme high load averages"
},
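A worked example of the rule of thumb given above for the 1GB machine in this thread; effective_cache_size is expressed in 8kB disk pages, and both the 400MB application figure and the 0.75 factor are simply the assumptions quoted in the message:

    # (total memory - application requirement) * 0.7-0.8
    #   (1024MB - 400MB) * 0.75 = ~470MB expected in the OS file system cache
    #   470MB / 8kB per page    = ~60000 pages
    effective_cache_size = 60000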
{
"msg_contents": "Martin Foster <[email protected]> writes:\n>> The only time that I have ever seen load averages of 30 or more under\n>> OpenBSD is when one of my scripts goes wild.\n\nNote also that \"high load average\" is not per se an indication that\nanything is wrong. In Postgres, if you have thirty queries waiting\nfor disk I/O, that's thirty processes --- so if that's the average\nstate then the kernel will report a load average of thirty. While\nI'm no MySQL expert, I believe that the equivalent condition in MySQL\nwould be thirty threads blocked for I/O within one process. Depending\non how your kernel is written, that might show as a load average of\none ... but the difference is completely illusory, because what counts\nis the number of disk I/Os in flight, and that's the same.\n\nYou didn't say whether you were seeing any real performance problems,\nlike slow queries or performance dropping when query load rises, but\nthat is the aspect to concentrate on.\n\nI concur with the nearby recommendations to drop your resource settings.\nThe thing you have to keep in mind about Postgres is that it likes to\nhave a lot of physical RAM available for kernel disk buffers (disk\ncache). In a correctly tuned system that's been up for any length of\ntime, \"free memory\" should be nearly nada, and the amount of RAM used\nfor disk buffers should be sizable (50% or more of RAM would be good\nIMHO).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jul 2003 11:19:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Extreme high load averages "
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> \n> It gives hint to psotgresql how much file system cache is available in the \n> system. \n> \n> You have 1GB memory and your application requirement does not exceed 400MB. So \n> OS can use roughly 600MB for file system cache. In that case you can set this \n> parameter to 400MB cache to leave room for other application in FS cache.\n> \n> IIRC, BSD needs sysctl tuning to make more memory available for FS cache other \n> wise they max out at 300MB.\n> \n> Roughly this setting should be (total memory -application \n> requirement)*(0.7/0.8)\n> \n> I guess that high kernel load you are seeing due to increased interaction \n> between postgresql and OS when data is swapped to/fro in shared memory. If OS \n> cache does well, postgresql should reduce this interaction as well.\n> \n> \n> BTW, since you have IDE disks, heavy disk activity can eat CPU as well. Is \n> your disk bandwidth totally maxed out? Check with vmstat or whatever \n> equivalent you have on BSD.\n> \n> Shridhar\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\nI changed the value of effective_cache_size seems interesting to 512. \nThe database restarted without any problems and load averages seem to be \na bit lower as a result.\n\nSince people have been asking for it, I added in most of the stat \ncommand outputs that I could think of. All located below my signature \nblock, this will show you what roughly 127 client connections with \nPostgre will generate. The numbers are a lot nicer to see then a 30 \nload level.\n\nNote, that the high number of connections is a side effect of connection \npooling under Apache using Apache::DBI. This means that for every \nclient on the http server there is a connection to Postgres even if the \nconnection is idle.\n\nThe above may be a factor of performance as well. As I had noticed \nthat with an idle child setting being too high, that server would show \nvery high load averages as well. 
Probably an indication that the \nsystem is continually forking new children trying to just keep the idle \nchild count at the right level.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\nvmstat:\n 2:09PM up 16:45, 1 user, load averages: 0.36, 0.30, 0.35\n\nvmstat:\n procs memory\n r b w avm fre\n 1 0 0 234036 687548\n\n page\n flt re pi po fr sr\n 621 0 0 0 0 0\n\n faults cpu\n in sy cs us sy id\n 364 396 88 19 1 79\n\niostat:\n tty wd0 wd1 cpu\n tin tout KB/t t/s MB/s KB/t t/s MB/s us ni sy in id\n 0 1023 4.53 1 0.01 9.72 11 0.10 19 0 1 0 79\n\npstat -s:\n Device 512-blocks Used Avail Capacity Priority\n swap_device 4194288 0 4194288 0% 0\n\ntop header:\n load averages: 0.31, 0.35, 0.42 \n\n 147 processes: 2 running, 145 idle\n CPU states: 32.9% user, 0.0% nice, 0.9% system, 0.0% interrupt, 66.2% \nidle\n Memory: Real: 263M/377M act/tot Free: 630M Swap: 0K/2048M used/tot\n\nps -uax:\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\npostgres 1561 0.0 0.5 2120 4812 p0 I 1:48PM 0:00.10 \n/usr/local/bin/postmaster (postgres)\npostgres 9935 0.0 2.8 3832 29744 p0 I 1:48PM 0:00.74 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 7436 0.0 0.6 3640 6636 p0 S 1:48PM 0:00.92 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 18814 0.0 7.0 3876 72904 p0 I 1:48PM 0:04.53 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 15346 0.0 4.1 3820 42468 p0 I 1:48PM 0:00.93 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 13621 0.0 6.9 3832 71824 p0 I 1:48PM 0:02.66 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 20916 0.0 4.7 3812 49164 p0 I 1:48PM 0:00.59 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 21684 0.0 2.2 3688 23356 p0 S 1:48PM 0:01.27 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 19472 0.0 6.9 3824 72452 p0 I 1:48PM 0:02.61 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 27361 0.0 0.7 3664 6976 p0 S 1:48PM 0:00.91 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 28925 0.0 2.8 3840 29528 p0 I 1:48PM 0:00.46 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 12790 0.0 2.7 3800 28080 p0 I 1:48PM 0:01.11 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 13957 0.0 6.8 3820 71476 p0 I 1:48PM 0:02.26 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 29129 0.0 2.8 3828 29096 p0 I 1:48PM 0:01.50 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 24279 0.0 2.7 3824 27992 p0 S 1:48PM 0:01.08 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 20382 0.0 0.6 3640 6748 p0 S 1:48PM 0:00.91 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 28258 0.0 6.9 3872 71912 p0 S 1:48PM 0:03.01 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 16961 0.0 0.6 3664 6612 p0 S 1:48PM 0:00.96 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\npostgres 8599 0.0 0.6 3664 6656 p0 S 1:48PM 0:00.90 \npostmaster: ethereal ethereal 192.168.1.6 idle in tra\n\n\n",
"msg_date": "Sun, 06 Jul 2003 14:16:37 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extreme high load averages"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Martin Foster <[email protected]> writes:\n> \n>>>The only time that I have ever seen load averages of 30 or more under\n>>>OpenBSD is when one of my scripts goes wild.\n> \n> \n> Note also that \"high load average\" is not per se an indication that\n> anything is wrong. In Postgres, if you have thirty queries waiting\n> for disk I/O, that's thirty processes --- so if that's the average\n> state then the kernel will report a load average of thirty. While\n> I'm no MySQL expert, I believe that the equivalent condition in MySQL\n> would be thirty threads blocked for I/O within one process. Depending\n> on how your kernel is written, that might show as a load average of\n> one ... but the difference is completely illusory, because what counts\n> is the number of disk I/Os in flight, and that's the same.\n> \n> You didn't say whether you were seeing any real performance problems,\n> like slow queries or performance dropping when query load rises, but\n> that is the aspect to concentrate on.\n> \n> I concur with the nearby recommendations to drop your resource settings.\n> The thing you have to keep in mind about Postgres is that it likes to\n> have a lot of physical RAM available for kernel disk buffers (disk\n> cache). In a correctly tuned system that's been up for any length of\n> time, \"free memory\" should be nearly nada, and the amount of RAM used\n> for disk buffers should be sizable (50% or more of RAM would be good\n> IMHO).\n> \n> \t\t\tregards, tom lane\n\nUnder a circumstance where we have 250 concurrent users, MySQL would \nreport an uptime of 0.5 sometimes 0.8 depending on the tasks being \nperformed.\n\nThis would translate to wait times averaging less then a second, and \nunder a heavy resource script 4 seconds. That system had less RAM \nhowever.\n\nThis new system when showing a load average of 30, produced wait times \nof 12 seconds averages and about 30 seconds for the heavy resource \nscript. The web server itself showed a load average of 0.5 showing \nthat it was not heavy client interaction slowing things down.\n\nSo there is a very noticeable loss of performance when the system \nskyrockets like that. All of the load as indicated by top is at user \nlevel, and not swap is even touched.\n\nThis may help show why I was slightly concerned.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 06 Jul 2003 14:28:48 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "Richard Huxton wrote:\n> \n> I don't know the *BSDs myself, but do you have the equivalent of iostat/vmstat \n> output you could get for us? Also a snapshot of \"top\" output? People are \n> going to want to see:\n> - overall memory usage (free/buffers/cache/swap)\n> - memory usage per process\n> - disk activity (blocks in/out)\n> \n\nI changed a bit of the scripting code to cut down on the weight of a \nquery being run. This is the only thing in the entire system that \nwould cause scripts to run at high processor times for extended lengths. \n With the corrections, postgres processes average more closely to < 1% \nthen before.\n\nThis is not stopping the system from getting high load averages. \nAttached, is an example of the site running at 160 users with very slow \nresponse rates (30 seconds for some scripts). According to top, and ps \nnothing is taking up all that processing time.\n\nThe processor seems to be purposely sitting there twiddling it's thumbs. \n Which leads me to believe that perhaps the nice levels have to be \nchanged on the server itself? And perhaps increase the file system \nbuffer to cache files in memory instead of always fetching/writing them?\n\nAnyone more ideas?\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]",
"msg_date": "Sun, 06 Jul 2003 22:21:29 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "On Sun, 6 Jul 2003, Martin Foster wrote:\n\n> The processor seems to be purposely sitting there twiddling it's thumbs. \n> Which leads me to believe that perhaps the nice levels have to be \n> changed on the server itself?\n\nIt could also be all the usual things that affect performance. Are your \nqueries using indexes where it should? Do you vacuum analyze after you \nhave updated/inserted a lot of data?\n\nIt could be that some of your queries is not as efficient as it should, \nlike doing a sequenctial scan over a table instead of an index scan. That \ntranslates into more IO needed and slower response times. Especially when \nyou have more connections figthing for the available IO.\n\n-- \n/Dennis\n\n",
"msg_date": "Mon, 7 Jul 2003 11:07:15 +0200 (CEST)",
"msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "On Sun, 6 Jul 2003, Martin Foster wrote:\n\n> Shridhar Daithankar wrote:\n> > \n> > It gives hint to psotgresql how much file system cache is available in the \n> > system. \n> > \n> > You have 1GB memory and your application requirement does not exceed 400MB. So \n> > OS can use roughly 600MB for file system cache. In that case you can set this \n> > parameter to 400MB cache to leave room for other application in FS cache.\n> > \n> > IIRC, BSD needs sysctl tuning to make more memory available for FS cache other \n> > wise they max out at 300MB.\n> > \n> > Roughly this setting should be (total memory -application \n> > requirement)*(0.7/0.8)\n> > \n> > I guess that high kernel load you are seeing due to increased interaction \n> > between postgresql and OS when data is swapped to/fro in shared memory. If OS \n> > cache does well, postgresql should reduce this interaction as well.\n> > \n> > \n> > BTW, since you have IDE disks, heavy disk activity can eat CPU as well. Is \n> > your disk bandwidth totally maxed out? Check with vmstat or whatever \n> > equivalent you have on BSD.\n> > \n> > Shridhar\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> I changed the value of effective_cache_size seems interesting to 512. \n> The database restarted without any problems and load averages seem to be \n> a bit lower as a result.\n\nI would try a few things. First off, effective_cache_size is the size \nmeasured in 8k blocks, so 512 would be a setting of 4 Megs. Probably a \nlittle low. If you average 512Meg free, that would be a setting of 65536.\n\nNote that the higer the effective_cache_size, the more the planner will \nfavor index scans, and the lower, the more it will favor sequential scans.\n\nGenerally speaking, index scans cost in CPU terms, while seq scans cost in \nI/O time.\n\nSince you're reporting low CPU usage, I'm guessing you're getting a lot of \nseq scans.\n\nDo you have any type mismatches anywhere that could be the culprit? \nrunning vacuum and analyze regurlarly? Any tables that are good \ncandidates for clustering?\n\nA common problem is a table like this:\n\ncreate table test (info text, id int8 primary key);\ninsert into test values ('ted',1);\n.. a few thousand more inserts;\nvacuum full;\nanalyze;\nselect * from test where id=1;\n\nwill result in a seq scan, always, because the 1 by itself is \nautoconverted to int4, which doesn't match int8 automatically. This \nquery:\n\nselect * from test where id=1::int8\n\nwill cast the 1 to an int8 so the index can be used.\n\n",
"msg_date": "Mon, 7 Jul 2003 13:35:17 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extreme high load averages"
},
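One way to check whether a query is hitting the int4/int8 mismatch described above is to compare the plans with and without the cast. This is only a sketch against the hypothetical "test" table from the example, with the plan lines abbreviated:

    -- the bare literal is treated as int4, so the int8 primary key index is skipped
    EXPLAIN SELECT * FROM test WHERE id = 1;
    --   Seq Scan on test  (cost=...)

    -- casting the literal (or quoting it) lets the index be considered
    EXPLAIN SELECT * FROM test WHERE id = 1::int8;
    EXPLAIN SELECT * FROM test WHERE id = '1';
    --   Index Scan using test_pkey on test  (cost=...)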
{
"msg_contents": "scott.marlowe wrote:\n> \n> \n> I would try a few things. First off, effective_cache_size is the size \n> measured in 8k blocks, so 512 would be a setting of 4 Megs. Probably a \n> little low. If you average 512Meg free, that would be a setting of 65536.\n> \n> Note that the higer the effective_cache_size, the more the planner will \n> favor index scans, and the lower, the more it will favor sequential scans.\n> \n> Generally speaking, index scans cost in CPU terms, while seq scans cost in \n> I/O time.\n> \n> Since you're reporting low CPU usage, I'm guessing you're getting a lot of \n> seq scans.\n> \n> Do you have any type mismatches anywhere that could be the culprit? \n> running vacuum and analyze regurlarly? Any tables that are good \n> candidates for clustering?\n> \n> A common problem is a table like this:\n> \n> create table test (info text, id int8 primary key);\n> insert into test values ('ted',1);\n> .. a few thousand more inserts;\n> vacuum full;\n> analyze;\n> select * from test where id=1;\n> \n> will result in a seq scan, always, because the 1 by itself is \n> autoconverted to int4, which doesn't match int8 automatically. This \n> query:\n> \n> select * from test where id=1::int8\n> \n> will cast the 1 to an int8 so the index can be used.\n> \n\nThat last trick actually listed seemed to have solved on the larger \nslowdowns I had. It would seem that a view was making use of INTERVAL \nand CURRENT_TIMESTAMP. However, the datatype did not make use of \ntimezones and that caused significant slowdowns.\n\nBy using ::TIMESTAMP, it essentially dropped the access time from 4.98+ \nto 0.98 seconds. This alone makes my day, as it shows that Postgres is \nperforming well, but is just a bit more picky about the queries.\n\nI changed the settings as you recommended, locked the memory to 768 megs \nso that PostgreSQL cannot go beyond that and made the database priority \nhigher. All of those changes seems to have increase overall performance.\n\nI do have a site question:\n\n ENABLE_HASHJOIN (boolean)\n ENABLE_INDEXSCAN (boolean)\n ENABLE_MERGEJOIN (boolean)\n ENABLE_TIDSCAN (boolean)\n\nAll of the above, state that they are for debugging the query planner. \n Does this mean that disabling these reduces debugging overhead and \nstreamlines things? The documentation is rather lacking for \ninformation on these.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Mon, 07 Jul 2003 17:29:50 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extreme high load averages"
},
{
"msg_contents": "> A common problem is a table like this:\n> \n> create table test (info text, id int8 primary key);\n> insert into test values ('ted',1);\n> .. a few thousand more inserts;\n> vacuum full;\n> analyze;\n> select * from test where id=1;\n> \n> will result in a seq scan, always, because the 1 by itself is\n> autoconverted to int4, which doesn't match int8 automatically. This\n> query:\n> \n> select * from test where id=1::int8\n> \n> will cast the 1 to an int8 so the index can be used.\n> \n> \n\nHey Scott, this is a little scary because I probably have a lot of this\ngoing on...\n\nIs there a way to log something so that after a day or so I can go back and\nlook for things like this that would be good candidates for optimization?\n\nI've got fast enough servers that currently the impact of this problem might\nnot be too obvious, but I suspect that after the server gets loaded up the\nimpact will become more of a problem.\n\nBy the way, I must say that this thread has been very useful.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n",
"msg_date": "Mon, 7 Jul 2003 20:47:00 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extreme high load averages"
},
{
"msg_contents": "On Mon, 7 Jul 2003, Matthew Nuzum wrote:\n\n> > A common problem is a table like this:\n> > \n> > create table test (info text, id int8 primary key);\n> > insert into test values ('ted',1);\n> > .. a few thousand more inserts;\n> > vacuum full;\n> > analyze;\n> > select * from test where id=1;\n> > \n> > will result in a seq scan, always, because the 1 by itself is\n> > autoconverted to int4, which doesn't match int8 automatically. This\n> > query:\n> > \n> > select * from test where id=1::int8\n> > \n> > will cast the 1 to an int8 so the index can be used.\n> > \n> > \n> \n> Hey Scott, this is a little scary because I probably have a lot of this\n> going on...\n> \n> Is there a way to log something so that after a day or so I can go back and\n> look for things like this that would be good candidates for optimization?\n> \n> I've got fast enough servers that currently the impact of this problem might\n> not be too obvious, but I suspect that after the server gets loaded up the\n> impact will become more of a problem.\n> \n> By the way, I must say that this thread has been very useful.\n\nWell, you can turn on some of the newer logging features that tell you how \nlong the query took to run.\n\nLook here:\n\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-LOGGING\n\nand here:\n\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-STATISTICS\n\nNote that those are the developer docs. I'm pretty sure the first one has \na corrolary to the 7.3.x docs, but the second set (log_statement_stats, \nparser_stats, etc...) looks new for 7.4\n\n",
"msg_date": "Tue, 8 Jul 2003 09:26:34 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extreme high load averages"
},
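For reference, a sketch of the kind of postgresql.conf logging being referred to above; log_statement and log_duration exist in 7.3, while log_min_duration_statement is one of the 7.4 additions that makes this sort of audit easy:

    # 7.3: log every statement and how long it took
    log_statement = true
    log_duration = true

    # 7.4: log only statements slower than this many milliseconds
    log_min_duration_statement = 1000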
{
"msg_contents": "Dennis Bj�rklund wrote:\n\n> On Sun, 6 Jul 2003, Martin Foster wrote:\n> \n> \n>>The processor seems to be purposely sitting there twiddling it's thumbs. \n>> Which leads me to believe that perhaps the nice levels have to be \n>>changed on the server itself?\n> \n> \n> It could also be all the usual things that affect performance. Are your \n> queries using indexes where it should? Do you vacuum analyze after you \n> have updated/inserted a lot of data?\n> \n> It could be that some of your queries is not as efficient as it should, \n> like doing a sequenctial scan over a table instead of an index scan. That \n> translates into more IO needed and slower response times. Especially when \n> you have more connections figthing for the available IO.\n> \n\nI actually got a bit more respect for PostgreSQL tonight. It seems that \none of my scripts was not committing changes after maintenance was \nconducted. Meaning that rows that would normally be removed after \noffline archiving was completed were in fact still around.\n\nNormally at any given point in time this table would grow 50K rows \nduring a day, be archived that night and then loose rows that were no \nlonger needed. This process, is what allowed MySQL to maintain any \nstability as the size of this table can balloon significantly.\n\nPostgreSQL with tweaking was handling a table with nearly 300K rows. \nThat size alone would of dragged the MySQL system down to a near grind, \nand since most of those rows are not needed. One can imagine that \nqueries are needlessly processing rows that should be outright ignored.\n\nThis probably explains why row numbering based searches greatly \naccelerated the overall process.\n\nBy fixing the script and doing the appropriate full vacuum and re-index, \nthe system is behaving much more like it should. Even if the process \nmay seem a bit odd to some.\n\nThe reason for removing rows on a daily basis is due to the perishable \nnature of the information. Since this is a chat site, posts over a day \nold are rarely needed for any reason. Which is why they are archived \ninto dumps in case we really need to retrieve the information itself and \nthis gives us the added bonus of smaller backup sizes and smaller \ndatabase sizes.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Wed, 09 Jul 2003 23:37:50 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "On 9 Jul 2003 at 23:37, Martin Foster wrote:\n\n> By fixing the script and doing the appropriate full vacuum and re-index, \n> the system is behaving much more like it should. Even if the process \n> may seem a bit odd to some.\n> \n> The reason for removing rows on a daily basis is due to the perishable \n> nature of the information. Since this is a chat site, posts over a day \n> old are rarely needed for any reason. Which is why they are archived \n> into dumps in case we really need to retrieve the information itself and \n> this gives us the added bonus of smaller backup sizes and smaller \n> database sizes.\n\nI have an idea.\n\nHow about creating a table for each day. Use it for a while and rename it. \nSince you can rename a table in transaction, it should not be a problem.\n\nYou can use inheritance if you want to query all of them. Using indexes and \nforegin keys on inherited tables is a problem though.\n\nThat way deletion would be avoided and so would vacuum. It should be mich \nlighter on the system overall as well.\n\nTell us if it works.\n\nBye\n Shridhar\n\n--\nKaufman's Law:\tA policy is a restrictive document to prevent a recurrence\tof a \nsingle incident, in which that incident is never mentioned.\n\n",
"msg_date": "Thu, 10 Jul 2003 12:00:16 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Extreme high load averages"
},
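A minimal sketch of the per-day table scheme being suggested in the message above, using made-up table and column names; as noted there, indexes and foreign keys would still have to be created on each child table by hand:

    -- base table that reads go against
    CREATE TABLE posts (author text, body text, posted timestamp);

    -- current day's child table; new posts are inserted here
    CREATE TABLE posts_current () INHERITS (posts);

    -- nightly rollover, done inside a transaction so readers never see a gap
    BEGIN;
    ALTER TABLE posts_current RENAME TO posts_archive_2003_07_10;
    CREATE TABLE posts_current () INHERITS (posts);
    COMMIT;

    -- selecting from the base table returns rows from every child
    SELECT count(*) FROM posts WHERE posted > now() - interval '1 day';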
{
"msg_contents": "Shridhar Daithankar wrote:\n> \n> \n> I have an idea.\n> \n> How about creating a table for each day. Use it for a while and rename it. \n> Since you can rename a table in transaction, it should not be a problem.\n> \n> You can use inheritance if you want to query all of them. Using indexes and \n> foregin keys on inherited tables is a problem though.\n> \n> That way deletion would be avoided and so would vacuum. It should be mich \n> lighter on the system overall as well.\n> \n> Tell us if it works.\n> \n> Bye\n> Shridhar\n> \n\n\nGenerally I won't be pulling 250K rows from that table. It's \nmaintained nightly during the general cleanup process where stale users, \nrooms and posts are removed from the system. Then the system runs a \nnormal VACUUM ANALYSE to get things going again smoothly.\n\nOnce a week a more detailed archiving takes place which runs an all out \nvaccume and re-index. That's the so called plan at least.\n\nAs for creating a new table, that in itself is a nice idea. But it \nwould cause issues for people currently in the realm. Their posts \nwould essentially dissapear from site and cause more confusion then its \nworth.\n\nInheritance would work, but the database would essentially just grow and \ngrow and grow right?\n\nBTW, I can't thank you all enough for this general advice. It's \nhelping me get this thing running very smoothly.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Thu, 10 Jul 2003 00:43:22 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [NOVICE] Extreme high load averages"
},
{
"msg_contents": "On 10 Jul 2003 at 0:43, Martin Foster wrote:\n> As for creating a new table, that in itself is a nice idea. But it \n> would cause issues for people currently in the realm. Their posts \n> would essentially dissapear from site and cause more confusion then its \n> worth.\n\nNo they won't. Say you have a base table and your current post table is child \nof that. You can query on base table and get rows from child table. That way \nall the data would always be there.\n\nWhile inserting posts, you would insert in child table. While qeurying you \nwould query on base table. That way things will be optimal.\n\n> Inheritance would work, but the database would essentially just grow and \n> grow and grow right?\n\nRight. But there are two advantages.\n\n1. It will always contain valid posts. No dead tuples.\n2. You can work in chuncks of data. Each child table can be dealt with \nseparately without affecting other child tables, whereas in case of a single \nlarge table, entire site is affected..\n\nDeleting 100K posts from 101K rows table is vastly different than deleting 10K \nposts from 2M rows table. Later one would unnecessary starve the table with \ndead tuples and IO whereas in former case you can do create table as select \nfrom and drop the original..\n\nHTH\n\nBye\n Shridhar\n\n--\n\"[In 'Doctor' mode], I spent a good ten minutes telling Emacs what Ithought of \nit. (The response was, 'Perhaps you could try to be lessabusive.')\"(By Matt \nWelsh)\n\n",
"msg_date": "Thu, 10 Jul 2003 12:41:16 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Extreme high load averages"
},
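A sketch of the "create table as select ... and drop the original" purge mentioned above, using a hypothetical post table; since DDL in PostgreSQL is transactional, the swap can be done atomically:

BEGIN;
CREATE TABLE post_keep AS
    SELECT * FROM post
     WHERE posttimestamp > CURRENT_TIMESTAMP - INTERVAL '1 day';
DROP TABLE post;
ALTER TABLE post_keep RENAME TO post;
-- CREATE TABLE AS copies only the data: re-create indexes, defaults
-- and constraints here, then ANALYZE the new table.
COMMIT;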
{
"msg_contents": "> I actually got a bit more respect for PostgreSQL tonight. It seems\n> that one of my scripts was not committing changes after maintenance\n> was conducted. Meaning that rows that would normally be removed\n> after offline archiving was completed were in fact still around.\n> \n> Normally at any given point in time this table would grow 50K rows\n> during a day, be archived that night and then loose rows that were\n> no longer needed. This process, is what allowed MySQL to maintain\n> any stability as the size of this table can balloon significantly.\n> \n> PostgreSQL with tweaking was handling a table with nearly 300K rows.\n> That size alone would of dragged the MySQL system down to a near\n> grind, and since most of those rows are not needed. One can imagine\n> that queries are needlessly processing rows that should be outright\n> ignored.\n\nHaving used MySQL once upon a time and run into it's problems when you\nhave more than 1M rows in a table, it took me a while when 1st using\nPostgreSQL to trust that PostgreSQL can reliably handle millions or\nbillions of rows without crapping out randomly and corrupting itself.\nIf you would have let this grow, you'd run out of disk space long\nbefore you hit anything close to a stability, reliability, or\nperformance problem with PostgreSQL. I have one table in particular\nthat has about 1.9B rows in it right now and it conservatively takes\nabout 0.04ms for non-complex queries to run against the table. In\nMySQL land, I wouldn't dare let something grow that big... which\nwould've been a huge problem because the table mentioned above isn't\nlogging data or something I can routinely purge. It's a strange\nfeeling at first to not have to design your application around size or\ntuple limitations of the database any more. :) I'm glad you're\nenjoying PostgreSQL. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 10 Jul 2003 13:22:07 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Extreme high load averages"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> On 10 Jul 2003 at 0:43, Martin Foster wrote:\n> \n>>As for creating a new table, that in itself is a nice idea. But it \n>>would cause issues for people currently in the realm. Their posts \n>>would essentially dissapear from site and cause more confusion then its \n>>worth.\n> \n> \n> No they won't. Say you have a base table and your current post table is child \n> of that. You can query on base table and get rows from child table. That way \n> all the data would always be there.\n> \n> While inserting posts, you would insert in child table. While qeurying you \n> would query on base table. That way things will be optimal.\n> \n> \n>>Inheritance would work, but the database would essentially just grow and \n>>grow and grow right?\n> \n> \n> Right. But there are two advantages.\n> \n> 1. It will always contain valid posts. No dead tuples.\n> 2. You can work in chuncks of data. Each child table can be dealt with \n> separately without affecting other child tables, whereas in case of a single \n> large table, entire site is affected..\n> \n> Deleting 100K posts from 101K rows table is vastly different than deleting 10K \n> posts from 2M rows table. Later one would unnecessary starve the table with \n> dead tuples and IO whereas in former case you can do create table as select \n> from and drop the original..\n> \n> HTH\n> \n> Bye\n> Shridhar\n\nWhile your idea is sound, I can easily report that this is as bad or \neven worse then removing thousands of rows at any given point in time. \n Trying to remove a child table, will pretty much guarantee a complete \nand total deadlock in the database.\n\nWhile it's waiting for a lock, it's locking out authenticating users but \nallows existing connections to go through. And considering this goes on \nfor tens of minutes and people keep piling on requests to the server, \nthis quickly disintegrates into one hell of a mess. I.E. requires a \ncold boot to get this thing up again.\n\nPerhaps it is more efficient, but until I can remove archived tables \nentirely, I do not exactly see a compelling reason to use inheritance.\n\nAlso, some questions are not answered from documentation. Such as are \nindexes carried forth, if you call the parent table, or do you have to \nre-create them all manually. And what happens to the primary key \nconstraints that no longer show up.\n\nThanks for the tip though. Just wish it worked better then it does.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Thu, 10 Jul 2003 18:23:11 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [NOVICE] Extreme high load averages"
},
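On the question of whether indexes carry forward: they do not. Neither indexes nor PRIMARY KEY constraints are inherited by child tables, so each child needs them declared explicitly. A small sketch with hypothetical names:

CREATE TABLE post_child () INHERITS (post_parent);
-- The primary key has to be re-declared on the child itself.
ALTER TABLE post_child
    ADD CONSTRAINT post_child_pkey PRIMARY KEY (postidnumber);
-- Likewise any index you expect the planner to use.
CREATE INDEX idx_post_child_stamp ON post_child (posttimestamp);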
{
"msg_contents": "Shridhar Daithankar wrote:\n\n> On 10 Jul 2003 at 0:43, Martin Foster wrote:\n> \n>>As for creating a new table, that in itself is a nice idea. But it \n>>would cause issues for people currently in the realm. Their posts \n>>would essentially dissapear from site and cause more confusion then its \n>>worth.\n> \n> \n> No they won't. Say you have a base table and your current post table is child \n> of that. You can query on base table and get rows from child table. That way \n> all the data would always be there.\n> \n> While inserting posts, you would insert in child table. While qeurying you \n> would query on base table. That way things will be optimal.\n> \n> \n>>Inheritance would work, but the database would essentially just grow and \n>>grow and grow right?\n> \n> \n> Right. But there are two advantages.\n> \n> 1. It will always contain valid posts. No dead tuples.\n> 2. You can work in chuncks of data. Each child table can be dealt with \n> separately without affecting other child tables, whereas in case of a single \n> large table, entire site is affected..\n> \n> Deleting 100K posts from 101K rows table is vastly different than deleting 10K \n> posts from 2M rows table. Later one would unnecessary starve the table with \n> dead tuples and IO whereas in former case you can do create table as select \n> from and drop the original..\n> \n> HTH\n> \n> Bye\n> Shridhar\n> \n> --\n> \"[In 'Doctor' mode], I spent a good ten minutes telling Emacs what Ithought of \n> it. (The response was, 'Perhaps you could try to be lessabusive.')\"(By Matt \n> Welsh)\n> \n\nWhen I ran EXPLAIN on the views and queries making use of the inherited \ntables, I noticed that everything worked based on sequence scans and it \navoided all indexes. While making use of ONLY kicked in full indexes.\n\nThis is even after having created a child table with the same indexes as \nthe parent. Is this a known issue, or just some sort of oddity on my \nsetup?\n\nTables still cannot be removed easily, but I found a way to work around \nit for a day-to-day basis. Essentailly I just clean out the tables \ncontaining old rows and delete them later. However based on the above, \nI doubt performance would get any better.\n\nThanks for the advice however!\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Fri, 11 Jul 2003 00:09:38 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [NOVICE] Extreme high load averages"
}
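One way to narrow down why the inherited query falls back to sequential scans is to make sure each child has its own index and fresh statistics, then compare the plans for the parent, for ONLY the parent, and for a child queried directly (names hypothetical). The inherited query is planned as an Append over parent and children, and whether a child's index gets used inside it still depends on the statistics and cost estimates:

ANALYZE post_child;
EXPLAIN SELECT * FROM post_base
 WHERE posttimestamp > LOCALTIMESTAMP - INTERVAL '10 minutes';
EXPLAIN SELECT * FROM ONLY post_base
 WHERE posttimestamp > LOCALTIMESTAMP - INTERVAL '10 minutes';
EXPLAIN SELECT * FROM post_child
 WHERE posttimestamp > LOCALTIMESTAMP - INTERVAL '10 minutes';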
] |
[
{
"msg_contents": "Hi all,\n\nIn addition to Tom's patch, this patch asks tuning parameters right away, \nwhile doing initdb. I have also changed the notice displayed after initdb is \ndone.\n\nJust an attempt to make defaults user friendly. I would also like to add other \nparamters to this approach, like fsync and random_page_cost but first I \nthought others should have look at these.\n\nAnd one more thing, can we get all the parameters in postgresql.conf to follow \nsimilar units? Some settings aer in 8KB pages, some in bytes etc. Can we haev \nall of them to follow say MB or KB?\n\nI tried but guc.h and guc.c were bit too much to be gulped at one go. I will \ntry again.\n\n Shridhar",
"msg_date": "Sun, 6 Jul 2003 19:40:32 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another POC initdb patch"
},
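For reference, the mixed units being complained about look roughly like this in postgresql.conf (values illustrative only): shared_buffers and effective_cache_size are counted in 8 KB pages, while sort_mem and vacuum_mem are in kilobytes:

shared_buffers = 2000           # 8 KB pages, roughly 16 MB
effective_cache_size = 65536    # 8 KB pages, roughly 512 MB
sort_mem = 16384                # KB, 16 MB
vacuum_mem = 65536              # KB, 64 MB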
{
"msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> In addition to Tom's patch, this patch asks tuning parameters right away, \n> while doing initdb.\n\nSorry, there is zero chance of putting any interactivity into initdb.\nMost RPM installations run it from the RPM install script and would be\nunable to cope with this.\n\nI disagree with the concept of expecting someone to supply useful values\nat install time anyway, since a newbie is the *least* likely to have any\nidea what to say at that time. Heck, the experts can hardly agree on\nwhat to use ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jul 2003 11:25:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another POC initdb patch "
},
{
"msg_contents": "On Sunday 06 July 2003 20:55, Tom Lane wrote:\n> Shridhar Daithankar <[email protected]> writes:\n> > In addition to Tom's patch, this patch asks tuning parameters right away,\n> > while doing initdb.\n>\n> Sorry, there is zero chance of putting any interactivity into initdb.\n> Most RPM installations run it from the RPM install script and would be\n> unable to cope with this.\n\nHmm.. If distro. vendors can put a wrapper of service script around it on \ntheir own, how much difficult it is to modify initdb to revert back to \noriginal behaviour?\n\nI think it would be fair on linux distro. vendors part if they decide to put a \nreasonable default for shared_buffers. Unlike postgresql community, they \ndon't haev to worry about OS protability because they know that it is going \nto run on only linux.\n\nI mailed mandrake long time back, requesting a config file for service script \nwhere it would allow to specify database location. No reply.\n\n> I disagree with the concept of expecting someone to supply useful values\n> at install time anyway, since a newbie is the *least* likely to have any\n> idea what to say at that time. Heck, the experts can hardly agree on\n> what to use ...\n\nWhatever user says might not be the best or optimum but will be likely to be \nmuch better than 64.\n\nI agree that this is not part of rpm philosophy. Install and configure is only \nfollowed by debian IIRC.\n\nFurthermore this could take care of user complains that postgresql does not \nhave reasonable defaults. Problems with such approach w.r.t. linux distro.s \naren't exactly impossible to solve. May be we could add it to release notes, \nfor their convinience.\n\nAnother proposal is to take out everything interactive I put it in patch, \nwrite another shell script, that would tune postgresql.conf, like the \nconfiguration wizard you suggested.\n\nNo matter where it goes, users would be very happy to have a tool which hand \nholds them to a reasonanly working config file and points to right \ndocumentation thereafter.\n\n Shridhar\n\n",
"msg_date": "Mon, 7 Jul 2003 12:15:59 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Another POC initdb patch"
}
] |
[
{
"msg_contents": "Our production database is running under 7.2.4; our test database\nrunning almost the same data is at 7.3.3. One table has about 400,000\nrows in each schema. A query against an indexed column uses an index\nscan under 7.2.4, but a sequential scan under 7.3.3. A count of the\ntable in question shows that they have comparable numbers of matching\nrows.\n\nOn 7.2.4:\n\nselect count(*) from articles;\n count\n--------\n 420213\n\nselect count(*) from articles\n where path_base like 'http://news.findlaw.com/hdocs%';\n count\n-------\n 38\n\n(and it returns this nearly instantaneously)\n\nexplain select count(*) from articles\n where path_base like 'http://news.findlaw.com/hdocs%'\n \nAggregate (cost=6.02..6.02 rows=1 width=0)\n -> Index Scan using ix_articles_3 on articles (cost=0.00..6.01\nrows=1 width=0)\n \nOn 7.3.3:\n\nselect count(*) from articles;\n count\n--------\n 406319\n\nselect count(*) from articles\n where path_base like 'http://news.findlaw.com/hdocs%'\n count\n-------\n 23\n\n(and it takes many seconds to return)\n\nexplain select count(*) from articles\n where path_base like 'http://news.findlaw.com/hdocs%'\n\n Aggregate (cost=205946.65..205946.65 rows=1 width=0)\n -> Seq Scan on articles (cost=0.00..205946.65 rows=1 width=0)\n Filter: (path_base ~~ 'http://news.findlaw.com/hdocs%'::text)\n\nI can't find any differences between the indexes (ix_articles_3 exists\nin both schemas); the column statistics are set up the same (the\ndefault); and the optimizer settings (costs in postgresql.conf) are the\nsame.\n\n-- \nJeff Boes vox 269.226.9550 ext 24\nDatabase Engineer fax 269.349.9076\nNexcerpt, Inc. http://www.nexcerpt.com\n ...Nexcerpt... Extend your Expertise\n\n",
"msg_date": "07 Jul 2003 10:17:35 -0400",
"msg_from": "Jeff Boes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer differences between 7.2 and 7.3"
},
{
"msg_contents": "On Monday 07 Jul 2003 3:17 pm, Jeff Boes wrote:\n> Our production database is running under 7.2.4; our test database\n> running almost the same data is at 7.3.3. One table has about 400,000\n> rows in each schema. A query against an indexed column uses an index\n> scan under 7.2.4, but a sequential scan under 7.3.3. A count of the\n> table in question shows that they have comparable numbers of matching\n> rows.\n[snip[\n> select count(*) from articles\n> where path_base like 'http://news.findlaw.com/hdocs%';\n> count\n> -------\n> 38\n[snip]\n> I can't find any differences between the indexes (ix_articles_3 exists\n> in both schemas); the column statistics are set up the same (the\n> default); and the optimizer settings (costs in postgresql.conf) are the\n> same.\n\nCheck the locale the database was initdb'd to. You'll probably find 7.2.4 is \nin the \"C\" locale whereas 7.3.3 isn't. The \"like\" comparison can only use \nindexes in the \"C\" locale. I believe you might need to initdb again to fix \nthis.\n\n-- \n Richard Huxton\n",
"msg_date": "Mon, 7 Jul 2003 15:40:30 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer differences between 7.2 and 7.3"
},
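A sketch of how to confirm and fix this, assuming pg_controldata and initdb's --locale switch behave as I remember for this release; the locale is fixed at initdb time, so changing it means a dump, a fresh initdb, and a reload:

-- From the shell (not SQL): check what the cluster was initialized with.
--   pg_controldata $PGDATA          (look for LC_COLLATE / LC_CTYPE)
-- If it is not "C", rebuild after dumping:
--   pg_dumpall > all.sql
--   initdb --locale=C -D /path/to/new/data
--   psql -f all.sql template1
-- Then re-test whether the index gets picked up:
EXPLAIN SELECT count(*) FROM articles
 WHERE path_base LIKE 'http://news.findlaw.com/hdocs%';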
{
"msg_contents": "A bit OT:\n\ndo regex ops (~, ~*) use index scan in non-\"C\" locales? Is it worth to\nconvert LIKE to regex?\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Richard Huxton\" <[email protected]>\nSent: Monday, July 07, 2003 4:40 PM\n\n\nCheck the locale the database was initdb'd to. You'll probably find 7.2.4 is\nin the \"C\" locale whereas 7.3.3 isn't. The \"like\" comparison can only use\nindexes in the \"C\" locale. I believe you might need to initdb again to fix\nthis.\n\n-- \n Richard Huxton\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n",
"msg_date": "Mon, 21 Jul 2003 10:25:51 +0200",
"msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer differences between 7.2 and 7.3"
},
{
"msg_contents": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]> writes:\n> do regex ops (~, ~*) use index scan in non-\"C\" locales? Is it worth to\n> convert LIKE to regex?\n\nThe locale issues are the same either way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jul 2003 09:38:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer differences between 7.2 and 7.3 "
}
] |
[
{
"msg_contents": "\nSomeone asked a hypothetical question about how to retrieve all records of a\ntable twice in SQL. It got me thinking about whether there was a way to do\nthis efficiently.\n\n\"Obviously\" if you do it using the UNION ALL approach postgres isn't going to\ndo two separate scans, doing it otherwise would be quite hard.\n\nHowever using the join approach it seems postgres ought to be able to do a\nsingle sequential scan and return every tuple it finds twice. It doesn't do\nthis:\n\nslo=> explain analyze select * from region, (select 1 union all select 2) as x;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..11162.00 rows=5534 width=108) (actual time=0.13..541.19 rows=5534 loops=1)\n -> Subquery Scan x (cost=0.00..2.00 rows=2 width=0) (actual time=0.03..0.08 rows=2 loops=1)\n -> Append (cost=0.00..2.00 rows=2 width=0) (actual time=0.02..0.05 rows=2 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n -> Result (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n -> Result (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n -> Seq Scan on region (cost=0.00..2813.00 rows=2767 width=104) (actual time=0.03..123.44 rows=2767 loops=2)\n Total runtime: 566.24 msec\n(9 rows)\n\nWouldn't it be faster to drive the nested loop the other way around?\n\n(I'm also a bit puzzled why the optimizer is calculating that 2,813 * 2 = 5,534)\n\nThis is tested on 7.3. I haven't tried CVS yet.\n\n-- \ngreg\n\n",
"msg_date": "07 Jul 2003 14:22:00 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizer picks smaller table to drive nested loops?"
},
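For comparison, the two formulations discussed above: the UNION ALL form, which is executed as two separate scans of the table, and the cross-join form from the EXPLAIN, which pairs each row with a two-row constant relation:

-- two passes over the table:
SELECT * FROM region
UNION ALL
SELECT * FROM region;

-- one pass, provided the nested loop is driven from the big table:
SELECT region.*
  FROM region, (SELECT 1 UNION ALL SELECT 2) AS x;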
{
"msg_contents": "On Monday 07 July 2003 12:22 pm, you wrote:\n\n> loops=1) -> Seq Scan on region (cost=0.00..2813.00 rows=2767 width=104)\n> (actual time=0.03..123.44 rows=2767 loops=2) Total runtime: 566.24 msec\n> (9 rows)\n>\n> (I'm also a bit puzzled why the optimizer is calculating that 2,813 * 2 =\n> 5,534)\n\nYou should read it 2767 (rows) * 2 = 5534 (rows)\n2813.00 is part of the cost.\n",
"msg_date": "Mon, 7 Jul 2003 14:58:54 -0600",
"msg_from": "Randy Neumann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizer picks smaller table to drive nested loops?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> slo=> explain analyze select * from region, (select 1 union all select 2) as x;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..11162.00 rows=5534 width=108) (actual time=0.13..541.19 rows=5534 loops=1)\n> -> Subquery Scan x (cost=0.00..2.00 rows=2 width=0) (actual time=0.03..0.08 rows=2 loops=1)\n> -> Append (cost=0.00..2.00 rows=2 width=0) (actual time=0.02..0.05 rows=2 loops=1)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n> -> Result (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n> -> Result (cost=0.00..1.00 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n> -> Seq Scan on region (cost=0.00..2813.00 rows=2767 width=104) (actual time=0.03..123.44 rows=2767 loops=2)\n> Total runtime: 566.24 msec\n> (9 rows)\n\n> Wouldn't it be faster to drive the nested loop the other way around?\n\nYou seem to be using a rather wacko value of cpu_tuple_cost; those\nResult nodes ought to be costed at 0.01 not 1.00. With the default\ncost settings I get an other-way-around plan for a similar test.\n(I used tenk1 from the regression database as the outer table.)\n\nHowever, it looks to me like the subquery-scan-outside plan probably\nis the faster one, on both my machine and yours. I get\n\nregression=# explain analyze select * from tenk1, (select 1 union all select 2) as x;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..858.00 rows=20000 width=248) (actual time=0.42..3648.61 rows=20000 loops=1)\n -> Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=244) (actual time=0.23..199.97 rows=10000 loops=1)\n -> Subquery Scan x (cost=0.00..0.02 rows=2 width=0) (actual time=0.07..0.24 rows=2 loops=10000)\n -> Append (cost=0.00..0.02 rows=2 width=0) (actual time=0.05..0.17 rows=2 loops=10000)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..0.01 rows=1 width=0) (actual time=0.03..0.06 rows=1 loops=10000)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=10000)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..0.01 rows=1 width=0) (actual time=0.03..0.06 rows=1 loops=10000)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=10000)\n Total runtime: 3807.39 msec\n(9 rows)\n\nregression=# set cpu_tuple_cost = 1;\nSET\nregression=# explain analyze select * from tenk1, (select 1 union all select 2) as x;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..40718.00 rows=20000 width=248) (actual time=0.39..1214.42 rows=20000 loops=1)\n -> Subquery Scan x (cost=0.00..2.00 rows=2 width=0) (actual time=0.10..0.31 rows=2 loops=1)\n -> Append (cost=0.00..2.00 rows=2 width=0) (actual time=0.06..0.22 rows=2 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.05..0.08 rows=1 loops=1)\n -> Result (cost=0.00..1.00 rows=1 width=0) (actual time=0.03..0.04 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.00 rows=1 width=0) (actual time=0.05..0.08 rows=1 loops=1)\n -> Result (cost=0.00..1.00 rows=1 width=0) (actual 
time=0.02..0.03 rows=1 loops=1)\n -> Seq Scan on tenk1 (cost=0.00..10358.00 rows=10000 width=244) (actual time=0.17..188.37 rows=10000 loops=2)\n Total runtime: 1371.17 msec\n(9 rows)\n\nThe flipover point between the two plans is cpu_tuple_cost = 0.04 in\nmy tests.\n\nIt looks to me like we've neglected to charge any cost associated with\nSubquery Scan or Append nodes. Certainly Subquery Scan ought to charge\nat least a cpu_tuple_cost per row. Perhaps Append ought to as well ---\nalthough since it doesn't do selection or projection, I'm not quite sure\nwhere the time is going in that case. (Hmmm... time to get out the\nprofiler...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jul 2003 14:04:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizer picks smaller table to drive nested loops? "
},
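A quick way to check for the stray setting Tom spotted and put it back to the shipped default for the session before re-running the comparison:

SHOW cpu_tuple_cost;
SET cpu_tuple_cost = 0.01;   -- the default value
EXPLAIN ANALYZE
SELECT * FROM region, (SELECT 1 UNION ALL SELECT 2) AS x;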
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> You seem to be using a rather wacko value of cpu_tuple_cost; those\n> Result nodes ought to be costed at 0.01 not 1.00. With the default\n\noops yes, thanks. that was left over from other experimentation.\n\n> However, it looks to me like the subquery-scan-outside plan probably\n> is the faster one, on both my machine and yours. I get\n> \n> regression=# explain analyze select * from tenk1, (select 1 union all select 2) as x;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..858.00 rows=20000 width=248) (actual time=0.42..3648.61 rows=20000 loops=1)\n> -> Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=244) (actual time=0.23..199.97 rows=10000 loops=1)\n> -> Subquery Scan x (cost=0.00..0.02 rows=2 width=0) (actual time=0.07..0.24 rows=2 loops=10000)\n...\n> Total runtime: 3807.39 msec\n\n> Nested Loop (cost=0.00..40718.00 rows=20000 width=248) (actual time=0.39..1214.42 rows=20000 loops=1)\n> -> Subquery Scan x (cost=0.00..2.00 rows=2 width=0) (actual time=0.10..0.31 rows=2 loops=1)\n> -> Seq Scan on tenk1 (cost=0.00..10358.00 rows=10000 width=244) (actual time=0.17..188.37 rows=10000 loops=2)\n> Total runtime: 1371.17 msec\n\nWoah, that's pretty whacky. It seems like it ought to be way faster to do a\nsingle sequential scan and return two records for each tuple read rather than\ndo an entire unnecessary sequential scan, even if most or even all of the\nsecond one is cached.\n\n-- \ngreg\n\n",
"msg_date": "14 Jul 2003 14:40:37 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizer picks smaller table to drive nested loops?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> However, it looks to me like the subquery-scan-outside plan probably\n>> is the faster one, on both my machine and yours. I get\n\n> Woah, that's pretty whacky. It seems like it ought to be way faster to do a\n> single sequential scan and return two records for each tuple read rather than\n> do an entire unnecessary sequential scan, even if most or even all of the\n> second one is cached.\n\nThe problem is the CPU expense of executing \"SELECT 1 UNION SELECT 2\"\nover and over. Doing that for every row of the outer table adds up.\n\nWe were both testing on relatively small tables --- I suspect the\nresults would be different if the outer table were too large to fit\nin disk cache.\n\nI am not sure why the planner did not choose to stick a Materialize\nnode atop the Subquery Scan, though. It looks to me like it should\nhave considered that option --- possibly the undercharging for Subquery\nScan is the reason it wasn't chosen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jul 2003 16:58:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizer picks smaller table to drive nested loops? "
},
{
"msg_contents": "I said:\n> I am not sure why the planner did not choose to stick a Materialize\n> node atop the Subquery Scan, though. It looks to me like it should\n> have considered that option --- possibly the undercharging for Subquery\n> Scan is the reason it wasn't chosen.\n\nIndeed, after fixing the unrealistic estimate for SubqueryScan, I get\nthis:\n\nregression=# explain analyze select * from tenk1, (select 1 union all select 2) as x;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.06..858.06 rows=20000 width=248) (actual time=0.25..1448.19 rows=20000 loops=1)\n -> Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=244) (actual time=0.06..162.48 rows=10000 loops=1)\n -> Materialize (cost=0.06..0.08 rows=2 width=4) (actual time=0.01..0.03 rows=2 loops=10000)\n -> Subquery Scan x (cost=0.00..0.06 rows=2 width=4) (actual time=0.10..0.27 rows=2 loops=1)\n -> Append (cost=0.00..0.04 rows=2 width=0) (actual time=0.07..0.20 rows=2 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..0.02 rows=1 width=0) (actual time=0.05..0.08 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.03..0.03 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=0) (actual time=0.03..0.06 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n Total runtime: 1627.26 msec\n(10 rows)\n\nwhich is probably the best way to do it, all things considered.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jul 2003 18:41:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizer picks smaller table to drive nested loops? "
}
] |
[
{
"msg_contents": "Dear All\n\nPlease note: I'm using Version 7.1\n\nFollowing everyone's advice, I increased my max_connections to 64 and shared_buffers to 2000. However, the postmaster then started to issue me error messages saying\n\npg_recvbuf: unexpected EOF on client connection\n\nHelp! (Needless to say I've restored the default postgresql.conf file and am ok)\n:-(\n\nObviously I've fouled up somewhere - advice awaited :-)\n\nAs a point of interest, I searched the documentation fo pg_recvbuf and the search returned no results.\n\nThanks\nHilary\n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n",
"msg_date": "Tue, 08 Jul 2003 20:46:37 +0100",
"msg_from": "Hilary Forbes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Config file problem"
},
{
"msg_contents": "> Please note: I'm using Version 7.1\n\nUpgrade to 7.3 :)\n\n> Following everyone's advice, I increased my max_connections to 64 and\nshared_buffers to 2000. However, the postmaster then started to issue me\nerror messages saying\n>\n> pg_recvbuf: unexpected EOF on client connection\n\nThat's odd. I have no idea how your changes and that error can possibly be\nrelated. I get those all the time in my logs, but that seems to be\nsomething to do with how clients disconnect in Apache sometimes or\nsomething.\n\nChris\n\n",
"msg_date": "Wed, 9 Jul 2003 09:13:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Config file problem"
}
] |
[
{
"msg_contents": "As I keep looking through code to see where I can make things more \nefficient, I noticed that in some cases timestamps seem horribly \ninefficient. This leads to very long run times for certain queries.\n\nHere is an example:\n\n-- USING TIMESTAMPS TO NARROW DOWN --\n\nSELECT\n Post.PostIDNumber,\n Post.PuppeteerLogin,\n Post.PuppetName,\n Post.PostCmd,\n Post.PostClass\n FROM ethereal.Post\n WHERE Post.PostTimeStamp > (LOCALTIMESTAMP - INTERVAL '10 Minutes')\n AND Post.RealmName='Amalgam'\n AND (Post.PostTo='all' OR Post.PostTo='root')\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetLogin\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore.PuppetIgnore='global'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetLogin=Post.PuppeteerLogin)\n OR Post.PuppeteerLogin IS NULL)\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetName\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore='single'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetName=Post.PuppetName)\n OR Post.PuppetName IS NULL)\n ORDER BY Post.PostIDNumber LIMIT 100\n\n-- Explain of Above--\nLimit (cost=0.00..260237.32 rows=100 width=48)\n -> Index Scan using pkpost on post (cost=0.00..3020594.00 rows=1161 \nwidth=48)\n Filter: ((posttimestamp > (('now'::text)::timestamp(6) without \ntime zone - '00:10'::interval)) AND (realmname = 'Amalgam'::character \nvarying) AND ((postto = 'all'::character varying) OR (postto = \n'root'::character varying)) AND ((NOT (subplan)) OR (puppeteerlogin IS \nNULL)) AND ((NOT (subplan)) OR (puppetname IS NULL)))\n SubPlan\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..13.31 rows=1 width=10)\n Index Cond: (puppeteerlogin = 'root'::character varying)\n Filter: ((puppetignore = 'global'::character varying) \nAND (puppetlogin = $0))\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..5.84 rows=1 width=15)\n Index Cond: ((puppeteerlogin = 'root'::character \nvarying) AND (puppetname = $1))\n Filter: (puppetignore = 'single'::character varying)\n\n\nResult : 22 rows fetched (17.21 sec)\n\n\n-- USING A GENERATED ID NUMBER --\n\nSELECT\n Post.PostIDNumber,\n Post.PuppeteerLogin,\n Post.PuppetName,\n Post.PostCmd,\n Post.PostClass\n FROM ethereal.Post\n WHERE Post.PostIDNumber > 1\n AND Post.RealmName='Amalgam'\n AND (Post.PostTo='all' OR Post.PostTo='root')\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetLogin\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore.PuppetIgnore='global'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetLogin=Post.PuppeteerLogin)\n OR Post.PuppeteerLogin IS NULL)\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetName\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore='single'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetName=Post.PuppetName)\n OR Post.PuppetName IS NULL)\n ORDER BY Post.PostIDNumber LIMIT 100\n\n-- Explain of Above--\nLimit (cost=0.00..86712.10 rows=100 width=48)\n -> Index Scan using pkpost on post (cost=0.00..3019119.56 rows=3482 \nwidth=48)\n Index Cond: (postidnumber > 1)\n Filter: ((realmname = 'Amalgam'::character varying) AND \n((postto = 'all'::character varying) OR (postto = 'root'::character \nvarying)) AND ((NOT (subplan)) OR (puppeteerlogin IS NULL)) AND ((NOT \n(subplan)) OR (puppetname IS NULL)))\n SubPlan\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..13.31 rows=1 width=10)\n Index Cond: (puppeteerlogin = 'root'::character varying)\n Filter: ((puppetignore = 'global'::character varying) \nAND (puppetlogin = $0))\n -> Index Scan using 
pkpuppetignore on puppetignore \n(cost=0.00..5.84 rows=1 width=15)\n Index Cond: ((puppeteerlogin = 'root'::character \nvarying) AND (puppetname = $1))\n Filter: (puppetignore = 'single'::character varying)\n\n\nResult : 100 rows fetched ( 0.19 sec)\n\n\n-- USING A MIXTURE OF BOTH --\n\nSELECT\n Post.PostIDNumber,\n Post.PuppeteerLogin,\n Post.PuppetName,\n Post.PostCmd,\n Post.PostClass\n FROM ethereal.Post\n WHERE Post.PostIDNumber > (SELECT MIN(PostIDNumber)\n FROM ethereal.Post\n WHERE Post.PostTimeStamp > (LOCALTIMESTAMP - INTERVAL '10 minutes'))::INT\n AND Post.RealmName='Amalgam'\n AND (Post.PostTo='all' OR Post.PostTo='root')\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetLogin\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore.PuppetIgnore='global'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetLogin=Post.PuppeteerLogin)\n OR Post.PuppeteerLogin IS NULL)\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetName\n FROM ethereal.PuppetIgnore\n WHERE PuppetIgnore='single'\n AND PuppetIgnore.PuppeteerLogin='root'\n AND PuppetIgnore.PuppetName=Post.PuppetName)\n OR Post.PuppetName IS NULL)\n ORDER BY Post.PostIDNumber LIMIT 100\n\n-- Explain of Above--\nLimit (cost=0.00..87101.38 rows=100 width=48)\n InitPlan\n -> Aggregate (cost=12412.82..12412.82 rows=1 width=4)\n -> Index Scan using idxpost_timestamp on post \n(cost=0.00..12282.42 rows=52160 width=4)\n Index Cond: (posttimestamp > \n(('now'::text)::timestamp(6) without time zone - '00:10'::interval))\n -> Index Scan using pkpost on post (cost=0.00..1010992.25 rows=1161 \nwidth=48)\n Index Cond: (postidnumber > $0)\n Filter: ((realmname = 'Amalgam'::character varying) AND \n((postto = 'all'::character varying) OR (postto = 'root'::character \nvarying)) AND ((NOT (subplan)) OR (puppeteerlogin IS NULL)) AND ((NOT \n(subplan)) OR (puppetname IS NULL)))\n SubPlan\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..13.31 rows=1 width=10)\n Index Cond: (puppeteerlogin = 'root'::character varying)\n Filter: ((puppetignore = 'global'::character varying) \nAND (puppetlogin = $1))\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..5.84 rows=1 width=15)\n Index Cond: ((puppeteerlogin = 'root'::character \nvarying) AND (puppetname = $2))\n Filter: (puppetignore = 'single'::character varying)\n\n\nResult : 18 rows fetched ( 0.04 sec)\n\n\nBoth PostIDNumber and PostTimestamp are indexed, so that should not be a \nbottleneck in itself. However, as you can see in the third example \nthe use of a sub-query actually accelerates the process considerably, \nmeaning that integer based searching is much much faster.\n\nUnder MySQL timestamps where in Unix time, which is why I may have never \nnoticed such an extreme slowdown when doing similar on that script. 
Of \ncourse to boggle the mind, here is a view that works very well:\n\nCREATE VIEW ethereal.Who AS\n SELECT\n Po.PuppetName AS PuppetName,\n Po.PuppeteerLogin AS PuppeteerLogin,\n Po.RealmName AS RealmName,\n Re.RealmPublic AS RealmPublic,\n Re.RealmVerified AS RealmVerified\n FROM ethereal.Post Po, ethereal.Puppet Ch, ethereal.Realm Re\n WHERE Po.PuppeteerLogin = Ch.PuppeteerLogin\n AND Po.RealmName = Re.RealmName\n AND Po.PostTimestamp > (LOCALTIMESTAMP - INTERVAL '10 minutes')\n AND Po.PuppetName IS NOT NULL\n GROUP BY Po.PuppeteerLogin, Po.PuppetName, Po.RealmName, \nRe.RealmPublic, Re.RealmVerified\n ORDER BY Po.RealmName, Po.PuppetName;\n\nSort (cost=309259.89..309629.34 rows=147780 width=79)\n Sort Key: po.realmname, po.puppetname\n -> Group (cost=270648.27..292815.19 rows=147780 width=79)\n -> Sort (cost=270648.27..274342.75 rows=1477795 width=79)\n Sort Key: po.puppeteerlogin, po.puppetname, po.realmname, \nre.realmpublic, re.realmverified\n -> Merge Join (cost=22181.60..41087.65 rows=1477795 \nwidth=79)\n Merge Cond: (\"outer\".puppeteerlogin = \n\"inner\".puppeteerlogin)\n -> Sort (cost=17172.82..17300.26 rows=50978 width=69)\n Sort Key: po.puppeteerlogin\n -> Hash Join (cost=12.41..13186.95 \nrows=50978 width=69)\n Hash Cond: (\"outer\".realmname = \n\"inner\".realmname)\n -> Index Scan using idxpost_timestamp \non post po (cost=0.00..12282.42 rows=50978 width=42)\n Index Cond: (posttimestamp > \n(('now'::text)::timestamp(6) without time zone - '00:10'::interval))\n Filter: (puppetname IS NOT NULL)\n -> Hash (cost=11.93..11.93 rows=193 \nwidth=27)\n -> Seq Scan on realm re \n(cost=0.00..11.93 rows=193 width=27)\n -> Sort (cost=5008.78..5100.22 rows=36574 width=10)\n Sort Key: ch.puppeteerlogin\n -> Seq Scan on puppet ch \n(cost=0.00..2236.74 rows=36574 width=10)\n\n\nResult : 48 rows fetched ( 0.55 sec)\n\n\nIt uses the exact same time restraint as the first three examples, looks \nthrough the same table, does a tipple join and still gets off at higher \nspeeds. This seems to indicate that timestamps are actually efficient, \nwhich contradicts above examples.\n\nAny ideas? Code for the table creation is below signature:\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n--\n--\n-- NAME : Post\n-- REFERENCES : Realm*\n-- Puppet*\n-- PuppeteerLogin*\n--\n-- DESCRIPTION : Post is the hive of activity for all realms. 
\nAssociated with all three\n-- major tables, it is not actually linked because of the \nnature of storing\n-- posts for statistics and auditing.\n\nCREATE TABLE ethereal.Post (\n PostIDNumber INT NOT NULL DEFAULT \nNEXTVAL('ethereal.seqPost'),\n RealmName VARCHAR(30) NOT NULL,\n PuppetName VARCHAR(30),\n PuppeteerLogin VARCHAR(10),\n PostTo VARCHAR(30),\n PostTimestamp TIMESTAMP NOT NULL DEFAULT LOCALTIMESTAMP,\n PostClass VARCHAR(10) NOT NULL DEFAULT 'general',\n PostCmd VARCHAR(10) NOT NULL DEFAULT 'none',\n PostFullFormat TEXT,\n PostImagelessFormat TEXT,\n PostPartialFormat TEXT,\n CONSTRAINT pkPost PRIMARY KEY (PostIDNumber),\n CONSTRAINT enumPostClass CHECK (PostCLass IN \n('banner','dice','duplicate','general','play','private','special','system')),\n CONSTRAINT enumPostCmd CHECK (PostCmd IN \n('general','none','play','stream'))\n) WITHOUT OIDS;\n\n-- STANDARD INDEX\nCREATE INDEX idxPost_Class ON ethereal.Post\n(\n PostClass\n);\n\nCREATE INDEX idxPost_Login ON ethereal.Post\n(\n PuppeteerLogin\n);\n\nCREATE INDEX idxPost_Puppet ON ethereal.Post\n(\n PuppetName\n);\n\nCREATE INDEX idxPost_Realm ON ethereal.Post\n(\n RealmName\n);\n\nCREATE INDEX idxPost_Timestamp ON ethereal.Post\n(\n PostTimestamp\n);\n\n\n",
"msg_date": "Tue, 08 Jul 2003 18:27:36 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiency of timestamps"
},
{
"msg_contents": "\nOn Tue, 8 Jul 2003, Martin Foster wrote:\n\n> As I keep looking through code to see where I can make things more\n> efficient, I noticed that in some cases timestamps seem horribly\n> inefficient. This leads to very long run times for certain queries.\n>\n> Here is an example:\n>\n> -- USING TIMESTAMPS TO NARROW DOWN --\n>\n> SELECT\n> Post.PostIDNumber,\n> Post.PuppeteerLogin,\n> Post.PuppetName,\n> Post.PostCmd,\n> Post.PostClass\n> FROM ethereal.Post\n> WHERE Post.PostTimeStamp > (LOCALTIMESTAMP - INTERVAL '10 Minutes')\n> AND Post.RealmName='Amalgam'\n> AND (Post.PostTo='all' OR Post.PostTo='root')\n> AND (NOT EXISTS (SELECT PuppetIgnore.PuppetLogin\n> FROM ethereal.PuppetIgnore\n> WHERE PuppetIgnore.PuppetIgnore='global'\n> AND PuppetIgnore.PuppeteerLogin='root'\n> AND PuppetIgnore.PuppetLogin=Post.PuppeteerLogin)\n> OR Post.PuppeteerLogin IS NULL)\n> AND (NOT EXISTS (SELECT PuppetIgnore.PuppetName\n> FROM ethereal.PuppetIgnore\n> WHERE PuppetIgnore='single'\n> AND PuppetIgnore.PuppeteerLogin='root'\n> AND PuppetIgnore.PuppetName=Post.PuppetName)\n> OR Post.PuppetName IS NULL)\n> ORDER BY Post.PostIDNumber LIMIT 100\n>\n> -- Explain of Above--\n> Limit (cost=0.00..260237.32 rows=100 width=48)\n> -> Index Scan using pkpost on post (cost=0.00..3020594.00 rows=1161\n> width=48)\n> Filter: ((posttimestamp > (('now'::text)::timestamp(6) without\n> time zone - '00:10'::interval)) AND (realmname = 'Amalgam'::character\n> varying) AND ((postto = 'all'::character varying) OR (postto =\n> 'root'::character varying)) AND ((NOT (subplan)) OR (puppeteerlogin IS\n> NULL)) AND ((NOT (subplan)) OR (puppetname IS NULL)))\n\nI think you might get better results with some kind of multi-column index.\nIt's using the index to avoid a sort it looks like, but it's not helping\nto find the conditions. I can't remember the correct ordering, but maybe\n(posttimestamp, realmname, postidnumber). Having separate indexes on the\nfields won't help currently since only one index will get chosen for the\nscan. 
Also, what does explain analyze show?\n\n\n> -- NAME : Post\n> -- REFERENCES : Realm*\n> -- Puppet*\n> -- PuppeteerLogin*\n> --\n> -- DESCRIPTION : Post is the hive of activity for all realms.\n> Associated with all three\n> -- major tables, it is not actually linked because of the\n> nature of storing\n> -- posts for statistics and auditing.\n>\n> CREATE TABLE ethereal.Post (\n> PostIDNumber INT NOT NULL DEFAULT\n> NEXTVAL('ethereal.seqPost'),\n> RealmName VARCHAR(30) NOT NULL,\n> PuppetName VARCHAR(30),\n> PuppeteerLogin VARCHAR(10),\n> PostTo VARCHAR(30),\n> PostTimestamp TIMESTAMP NOT NULL DEFAULT LOCALTIMESTAMP,\n> PostClass VARCHAR(10) NOT NULL DEFAULT 'general',\n> PostCmd VARCHAR(10) NOT NULL DEFAULT 'none',\n> PostFullFormat TEXT,\n> PostImagelessFormat TEXT,\n> PostPartialFormat TEXT,\n> CONSTRAINT pkPost PRIMARY KEY (PostIDNumber),\n> CONSTRAINT enumPostClass CHECK (PostCLass IN\n> ('banner','dice','duplicate','general','play','private','special','system')),\n> CONSTRAINT enumPostCmd CHECK (PostCmd IN\n> ('general','none','play','stream'))\n> ) WITHOUT OIDS;\n>\n> -- STANDARD INDEX\n> CREATE INDEX idxPost_Class ON ethereal.Post\n> (\n> PostClass\n> );\n>\n> CREATE INDEX idxPost_Login ON ethereal.Post\n> (\n> PuppeteerLogin\n> );\n>\n> CREATE INDEX idxPost_Puppet ON ethereal.Post\n> (\n> PuppetName\n> );\n>\n> CREATE INDEX idxPost_Realm ON ethereal.Post\n> (\n> RealmName\n> );\n>\n> CREATE INDEX idxPost_Timestamp ON ethereal.Post\n> (\n> PostTimestamp\n> );\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n\n",
"msg_date": "Tue, 8 Jul 2003 17:55:51 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiency of timestamps"
},
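A sketch of the multi-column index suggested above, using Stephan's guessed column order; it is worth experimenting with the ordering and dropping whichever variants the planner never uses:

CREATE INDEX idxpost_stamp_realm_id
    ON ethereal.post (posttimestamp, realmname, postidnumber);
ANALYZE ethereal.post;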
{
"msg_contents": "Stephan Szabo wrote:\n> \n> \n> I think you might get better results with some kind of multi-column index.\n> It's using the index to avoid a sort it looks like, but it's not helping\n> to find the conditions. I can't remember the correct ordering, but maybe\n> (posttimestamp, realmname, postidnumber). Having separate indexes on the\n> fields won't help currently since only one index will get chosen for the\n> scan. Also, what does explain analyze show?\n> \n\nHope that shed's light on the matter.\n\n Limit (cost=0.00..260237.32 rows=100 width=48) (actual \ntime=68810.26..68820.83 rows=55 loops=1)\n -> Index Scan using pkpost on post (cost=0.00..3020594.00 \nrows=1161 width=48) (actual time=68810.25..68820.72 rows=55 loops=1)\n Filter: ((posttimestamp > (('now'::text)::timestamp(6) without \ntime zone - '00:10'::interval)) AND (realmname = 'Amalgam'::character \nvarying) AND ((postto = 'all'::character varying) OR (postto = \n'root'::character varying)) AND ((NOT (subplan)) OR (puppeteerlogin IS \nNULL)) AND ((NOT (subplan)) OR (puppetname IS NULL)))\n SubPlan\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..13.31 rows=1 width=10) (actual time=0.02..0.02 rows=0 loops=55)\n Index Cond: (puppeteerlogin = 'root'::character varying)\n Filter: ((puppetignore = 'global'::character varying) \nAND (puppetlogin = $0))\n -> Index Scan using pkpuppetignore on puppetignore \n(cost=0.00..5.84 rows=1 width=15) (actual time=0.01..0.01 rows=0 loops=55)\n Index Cond: ((puppeteerlogin = 'root'::character \nvarying) AND (puppetname = $1))\n Filter: (puppetignore = 'single'::character varying)\n Total runtime: 68821.11 msec\n\n-- \n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Tue, 08 Jul 2003 19:37:16 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficiency of timestamps"
},
{
"msg_contents": "On Tue, 8 Jul 2003, Martin Foster wrote:\n\n> Stephan Szabo wrote:\n> >\n> >\n> > I think you might get better results with some kind of multi-column index.\n> > It's using the index to avoid a sort it looks like, but it's not helping\n> > to find the conditions. I can't remember the correct ordering, but maybe\n> > (posttimestamp, realmname, postidnumber). Having separate indexes on the\n> > fields won't help currently since only one index will get chosen for the\n> > scan. Also, what does explain analyze show?\n> >\n>\n> Hope that shed's light on the matter.\n>\n> Limit (cost=0.00..260237.32 rows=100 width=48) (actual\n> time=68810.26..68820.83 rows=55 loops=1)\n> -> Index Scan using pkpost on post (cost=0.00..3020594.00\n> rows=1161 width=48) (actual time=68810.25..68820.72 rows=55 loops=1)\n> Filter: ((posttimestamp > (('now'::text)::timestamp(6) without\n> time zone - '00:10'::interval)) AND (realmname = 'Amalgam'::character\n> varying) AND ((postto = 'all'::character varying) OR (postto =\n> 'root'::character varying)) AND ((NOT (subplan)) OR (puppeteerlogin IS\n> NULL)) AND ((NOT (subplan)) OR (puppetname IS NULL)))\n> SubPlan\n> -> Index Scan using pkpuppetignore on puppetignore\n> (cost=0.00..13.31 rows=1 width=10) (actual time=0.02..0.02 rows=0 loops=55)\n> Index Cond: (puppeteerlogin = 'root'::character varying)\n> Filter: ((puppetignore = 'global'::character varying)\n> AND (puppetlogin = $0))\n> -> Index Scan using pkpuppetignore on puppetignore\n> (cost=0.00..5.84 rows=1 width=15) (actual time=0.01..0.01 rows=0 loops=55)\n> Index Cond: ((puppeteerlogin = 'root'::character\n> varying) AND (puppetname = $1))\n> Filter: (puppetignore = 'single'::character varying)\n> Total runtime: 68821.11 msec\n\nThe row estimate is high. How many rows meet the various conditions and\nsome of the combinations? And how many rows does it estimate if you do a\nsimpler query on those with explain?\n\nI still think some variety of multi-column index to make the above index\nconditions would help, but you'd probably need to play with which ones\nhelp, and with the cost cut for the limit, I don't know if it'd actually\nget a better plan, but it may be worth trying a bunch and seeing which\nones are useful and then dropping the rest.\n\n\n",
"msg_date": "Tue, 8 Jul 2003 19:49:04 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiency of timestamps"
},
{
"msg_contents": "Stephan Szabo wrote:\n> \n> \n> The row estimate is high. How many rows meet the various conditions and\n> some of the combinations? And how many rows does it estimate if you do a\n> simpler query on those with explain?\n> \n> I still think some variety of multi-column index to make the above index\n> conditions would help, but you'd probably need to play with which ones\n> help, and with the cost cut for the limit, I don't know if it'd actually\n> get a better plan, but it may be worth trying a bunch and seeing which\n> ones are useful and then dropping the rest.\n> \n> \n\nAt any given point in time you would not expect to see much more then 30 \nposts applying for a time based search. That is primarily a result of \nhaving more then one room for which posts are attached to, and then some \nposts exist just to show people are there et cetera.\n\nSimpler queries seem to do quiet well. That view makes use of the same \ntable and seems to have no performance impact from doing as such, and \nthe position based search is considerably faster.\n\nI can show EXPLAIN ANALYSE for all of those if you wish.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Tue, 08 Jul 2003 22:23:38 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficiency of timestamps"
},
{
"msg_contents": "On Tue, 8 Jul 2003, Martin Foster wrote:\n\n> Stephan Szabo wrote:\n>\n> > The row estimate is high. How many rows meet the various conditions and\n> > some of the combinations? And how many rows does it estimate if you do a\n> > simpler query on those with explain?\n> >\n> > I still think some variety of multi-column index to make the above index\n> > conditions would help, but you'd probably need to play with which ones\n> > help, and with the cost cut for the limit, I don't know if it'd actually\n> > get a better plan, but it may be worth trying a bunch and seeing which\n> > ones are useful and then dropping the rest.\n> >\n> At any given point in time you would not expect to see much more then 30\n> posts applying for a time based search. That is primarily a result of\n> having more then one room for which posts are attached to, and then some\n> posts exist just to show people are there et cetera.\n>\n> Simpler queries seem to do quiet well. That view makes use of the same\n> table and seems to have no performance impact from doing as such, and\n> the position based search is considerably faster.\n\nWell, the reason I asked is to see both whether the estimates for the\nvarious columns were somewhere near reality (if not, then you may need to\nraise the statistics target for the column) which might affect whether\nit'd consider using a multi-column index for the conditions and sort\nrather than the index scan it was using.\n\n",
"msg_date": "Tue, 8 Jul 2003 22:01:33 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiency of timestamps"
},
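A sketch of raising the per-column statistics target Stephan mentions, so the planner's row estimates have more to work with (100 is just an example; the shipped default target is 10):

ALTER TABLE ethereal.post
    ALTER COLUMN posttimestamp SET STATISTICS 100;
ANALYZE ethereal.post;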
{
"msg_contents": "Stephan Szabo wrote:\n> \n> Well, the reason I asked is to see both whether the estimates for the\n> various columns were somewhere near reality (if not, then you may need to\n> raise the statistics target for the column) which might affect whether\n> it'd consider using a multi-column index for the conditions and sort\n> rather than the index scan it was using.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nI'm going to have to pull out the 'Practical PostgreSQL' book and brush \nup on optimizing. This level of optimization is not something I have \nhad to deal with in the past.\n\nAlso to make this interesting. The sub-query method is faster at times \nand slower in others. But doing two separate queries and working on \nthe PostIDNumber field exclusively is always blazingly fast...\n\n Martin Foster\n Creator/Designer Ethereal Realms\n [email protected]\n\n\n\n",
"msg_date": "Wed, 09 Jul 2003 00:51:42 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficiency of timestamps"
}
] |
[
{
"msg_contents": "Hi All, \n\n I'm sure some of you know me from previous questions on other lists,\nbut this one has myself and Marc completely stumped. We've got a\ndatabase that has about 89 Million rows, under PostgreSQL 7.3.3 on a\ndual PIII 1.2 with 4 GBytes of RAM on a 5 disk RAID 5 array. The dataset\nitself is about 26+ GBYtes in size, all of it in the one table. \n\n To give you some perspective on the size of the dataset and the\nperformance level we are hitting, here are some \"good\" results based on\nsome explains:\n\njnlstats=# explain analyze select count(*) from some_table where\nsome_time::date='2003-05-21';\n \nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1794562.35..1794562.35 rows=1 width=0) (actual\ntime=3013.55..3013.55 rows=1 loops=1)\n -> Index Scan using some_table_ix_0 on some_table \n(cost=0.00..1793446.02 rows=446531 width=0) (actual time=48.40..2721.26\nrows=249837 loops=1)\n Index Cond: ((some_time)::date = '2003-05-21'::date)\n Total runtime: 3015.02 msec\n(4 rows)\n\njnlstats=# explain analyze select count(*) from stats_raw where\nsome_time::date='2003-05-21';\n QUERY\nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1794562.35..1794562.35 rows=1 width=0) (actual\ntime=1401.23..1401.23 rows=1 loops=1)\n -> Index Scan using some_table_ix_0 on some_table \n(cost=0.00..1793446.02 rows=446531 width=0) (actual time=0.50..1118.92\nrows=249837 loops=1)\n Index Cond: ((some_time)::date = '2003-05-21'::date)\n Total runtime: 1401.42 msec\n\n There are about 249837 items that the query is identifying as valid\nresults and the results range between 1-1.4 seconds over ten runs with\nthe initial query taking 3 seconds, this average is how 90% of the\nqueries resopond, but we've got several peaks that we can not explain in\nany way. For instance:\n\njnlstats=# explain analyze select count(*) from some_table where\nsome_time::date='2003-05-26';\n \nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1794562.35..1794562.35 rows=1 width=0) (actual\ntime=307025.65..307025.65 rows=1 loops=1)\n -> Index Scan using some_table_ix_0 on some_table \n(cost=0.00..1793446.02 rows=446531 width=0) (actual\ntime=51.05..306256.93 rows=374540 loops=1)\n Index Cond: ((some_time)::date = '2003-05-26'::date)\n Total runtime: 307025.81 msec\n(4 rows)\n\njnlstats=# explain analyze select count(*) from some_table where\nsome_time::date='2003-05-26';\n \nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1794562.35..1794562.35 rows=1 width=0) (actual\ntime=10837.86..10837.86 rows=1 loops=1)\n -> Index Scan using some_table_ix_0 on some_table \n(cost=0.00..1793446.02 rows=446531 width=0) (actual time=1.01..10304.78\nrows=374540 loops=1)\n Index Cond: ((some_time)::date = '2003-05-26'::date)\n Total runtime: 10838.04 msec\n\n The total number of items counted is 374540 items, so not too much more\nthen the previous query, but the 300 second runtime was unexpected (we\nwere expecting ~4-5 seconds and then ~1-2 seconds for the caches\nresults. 
I have 5 other dates that all exhibit this information,but it's\nONLY those dates that run that slow and the one I presented above here\nis the largest of them all. The database server is configured with a 5\nMByte shared mamory buffer, but even a larger shared memory buffer does\nnot help (we have had it set to 800 MBytes before). The disk is getting\nhit the heviest durring that last query, with iostat results being:\n\n tty da0 da1 da2 \ncpu \n tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in\nid\n 0 25 12.22 1 0.02 7.68 0 0.00 7.68 0 0.00 0 0 0 0\n99\n 4 3758 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 6 0 38 0\n56\n 0 151 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 9 0 43 0\n48 \n 0 148 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 10 0 40 0\n49\n 0 153 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 10 0 40 0\n49\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 10 0 40 1\n49\n 0 150 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 9 0 42 0\n49\n 0 153 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 9 0 41 1\n49\n 0 149 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 8 0 45 0\n48\n 0 148 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 11 0 41 0\n48\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 12 0 38 0\n50\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 14 0 37 0\n49\n 0 152 16.00 20 0.31 8.00 9 0.07 8.00 16 0.12 0 0 1 0\n98\n 0 152 2.00 1 0.00 0.00 0 0.00 1.00 2 0.00 0 0 0 0\n100\n 0 152 11.33 6 0.07 0.00 0 0.00 1.00 2 0.00 0 0 0 0\n99\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 1 0 0 0\n99\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 1 0 0 0\n99\n 0 152 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0 0 0 0\n100\n 0 154 0.00 0 0.00 5.89 18 0.10 0.00 0 0.00 0 0 1 0\n98\n\n The database is vacuumed about once every hour, the pg_stats table\nshows:\n\njnlstats=# select tablename, attname, n_distinct from pg_stats where\ntablename = 'some_table';\n tablename | attname | n_distinct \n-----------+------------------+------------\n some_table | some_time | -0.24305\n\n So the column is very distinct (assuming that's what a negative number\nmeans). What I'm looking for is any form of explaination that might be\ncasusing those spikes, but at the same time a possible solution that\nmight help me bring it down some...\n-- \nChris Bowlby <[email protected]>\nHub.Org Networking Services\n\n",
"msg_date": "09 Jul 2003 14:29:38 -0300",
"msg_from": "Chris Bowlby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some very weird behaviour...."
},
{
"msg_contents": "> To give you some perspective on the size of the dataset and the\n> performance level we are hitting, here are some \"good\" results based on\n> some explains:\n\nBefore Tom jumps in taking all the fun out of trying to solve it...\n\n\nThe estimates in the slow queries seem perfectly reasonable. In fact,\nthe cost estimates of both the slow and fast queries are the same which\nis what would be expected if all of the data was distributed evenly\namongst the table.\n\nGiven it's a date, I would guess that the data is generally inserted\ninto the table in an order following the date but for some reason those\n'high' dates have their data distributed more evenly amongst the table. \nClustered data will have fewer disk seeks and deal with fewer pages of\ninformation in general which makes for a much faster query. Distributed\ndata will have to pull out significantly more information from the disk,\nthrowing most of it away.\n\nI would guess that sometime on 2002-05-25 someone did a bit of data\ncleaning (deleting records). Next day the free space map had entries\navailable in various locations within the table, and used them rather\nthan appending to the end. With 89 Million records with date being\nsignificant, I'm guessing there aren't very many modifications or\ndeletes on it.\n\nSo.. How to solve the problem? If this is the type of query that occurs\nmost often, you do primarily inserts, and the inserts are generally\ncreated following date, cluster the table by index \"some_table_ix_0\". \nThe clustering won't degrade very much since that is how you naturally\ninsert the data.",
"msg_date": "09 Jul 2003 17:42:35 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some very weird behaviour...."
},
{
"msg_contents": "On Wed, 2003-07-09 at 14:42, Rod Taylor wrote:\n\n Clustering definatly helped with that case, and appears to have helped\nwith all of the dates I have had high execution times for... thanks for\nthe tip..\n\n> > To give you some perspective on the size of the dataset and the\n> > performance level we are hitting, here are some \"good\" results based on\n> > some explains:\n> \n> Before Tom jumps in taking all the fun out of trying to solve it...\n> \n> \n> The estimates in the slow queries seem perfectly reasonable. In fact,\n> the cost estimates of both the slow and fast queries are the same which\n> is what would be expected if all of the data was distributed evenly\n> amongst the table.\n> \n> Given it's a date, I would guess that the data is generally inserted\n> into the table in an order following the date but for some reason those\n> 'high' dates have their data distributed more evenly amongst the table. \n> Clustered data will have fewer disk seeks and deal with fewer pages of\n> information in general which makes for a much faster query. Distributed\n> data will have to pull out significantly more information from the disk,\n> throwing most of it away.\n> \n> I would guess that sometime on 2002-05-25 someone did a bit of data\n> cleaning (deleting records). Next day the free space map had entries\n> available in various locations within the table, and used them rather\n> than appending to the end. With 89 Million records with date being\n> significant, I'm guessing there aren't very many modifications or\n> deletes on it.\n> \n> So.. How to solve the problem? If this is the type of query that occurs\n> most often, you do primarily inserts, and the inserts are generally\n> created following date, cluster the table by index \"some_table_ix_0\". \n> The clustering won't degrade very much since that is how you naturally\n> insert the data.\n-- \nChris Bowlby <[email protected]>\nHub.Org Networking Services\n\n",
"msg_date": "10 Jul 2003 12:09:10 -0300",
"msg_from": "Chris Bowlby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some very weird behaviour...."
}
] |
[
{
"msg_contents": "About a month ago I asked the general list about plpgsql functions that\noccasionally significantly underperform their straight SQL equivalents. \nTom noted that a different query plan was almost certainly being chosen by\nthe plpgsql function:\n\nhttp://archives.postgresql.org/pgsql-general/2003-05/msg00966.php\nhttp://archives.postgresql.org/pgsql-general/2003-05/msg00998.php\n\nTom suggested checking for sloppy datatype declarations in the plpgsql \nfunctions. Double-checked, a-ok.\n\nTom also suggested that indexscans might not get picked by the plpgsql\nfunction if I have some very skewed statistics. Is there a way to verify\nthe plpgsql function's planner choices?\n\nMy casual observations are that this problem occurs with aggregates, and\nthat the big performance hit is not consistent. I'd like advice on more\nformal troubleshooting.\n\nI can provide examples (my latest problem function is currently taking\nover 4 seconds vs. .04 seconds for its straight SQL equivalent), table\nschema, explain output for the straight SQL, etc., if anyone cares to work\nthrough this with me.\n\nthanks,\n\nmichael\n\n",
"msg_date": "Wed, 9 Jul 2003 21:48:36 -0400 (EDT)",
"msg_from": "Michael Pohl <[email protected]>",
"msg_from_op": true,
"msg_subject": "plpgsql vs. SQL performance (again)"
},
{
"msg_contents": "On Thu, 10 Jul 2003, Tom Rochester wrote:\n\n> I would like to achive something along the lines of:\n> \n> SELECT field FROM table WHERE field ILIKE '$searchterm' ORDER BY\n> substr_count(field, '$searchterm');\n\nIf you have plperl installed:\n\ncreate or replace function substr_count(\n varchar(255),\n varchar(255)\n)\nreturns int as '\n my ($field, $searchterm) = @_;\n my $count = $field =~ s/$searchterm//g;\n return $count;\n' language 'plperl';\n\nmichael\n\n",
"msg_date": "Thu, 10 Jul 2003 08:55:59 -0400 (EDT)",
"msg_from": "Michael Pohl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: substr_count"
}
] |
[
{
"msg_contents": "Hello all!\n I'm a new to Postgresql , I have never used it before.\n I am having an issue with configure the postgresql.conf file.\n The machine itself is a 2.66GHz P4 w/ 2G memory.\n Would you mind to send me a copy of examples .(postgresql.conf)\n Maybe you can tell me how to configure these parameters.\n Thanks\n Sincerely,\n\nChris.Wu\n\n\n\n\n",
"msg_date": "Thu, 10 Jul 2003 09:51:53 +0800",
"msg_from": "\"Chris_Wu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can you help me?"
}
] |
[
{
"msg_contents": "With 2 GB RAM I'd go for about this: (assuming you're using Linux)\n\nshared_buffers = 32768\nsort_mem = 61440\nvacuum_mem = 32768\neffective_cache_size = 64000\n\nthat should give you a start.\n\nYou might have to adjust your shmall and shmmax\nparameters, again assuming you're using linux.\n(you have to edit the /etc/sysctl.conf file)\n\nThere's good stuff on the web, try google and feed it with\n\"postgres shared_buffers effective_cache_size\" or\n\"linux shmall shmmax\"\n\nregards,\nOli\n\n\n-----Ursprüngliche Nachricht-----\nVon: Chris_Wu [mailto:[email protected]]\nGesendet: Donnerstag, 10. Juli 2003 03:52\nAn: [email protected]\nBetreff: [PERFORM] Can you help me?\n\n\nHello all!\n I'm a new to Postgresql , I have never used it before.\n I am having an issue with configure the postgresql.conf file.\n The machine itself is a 2.66GHz P4 w/ 2G memory.\n Would you mind to send me a copy of examples .(postgresql.conf)\n Maybe you can tell me how to configure these parameters.\n Thanks\n Sincerely,\n\nChris.Wu\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 10 Jul 2003 10:29:46 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can you help me?"
}
] |
[
{
"msg_contents": "Hello,\n\nI am wondering if there is a way to force the use of a particular index\nwhen doing a query. I have two tables that are pretty big (each >3\nmillion rows), and when I do a join between them the performance is\ngenerally quite poor as it does not use the indexes that I think it\nshould use. Here is an example query:\n\n SELECT DISTINCT f.name,fl.fmin,fl.fmax,fl.strand,f.type_id,f.feature_id\n FROM feature f, featureloc fl\n WHERE\n f.feature_id = fl.feature_id and\n fl.srcfeature_id = 6 and fl.fmin <= 2585581 and fl.fmax >= 2565581 and\n f.type_id = 219\n\nNow, I know that if the query planner will use an index on featureloc on\n(srcfeature_id, fmin, fmax) that will reduce the amount of data from the\nfeatureloc table from over 3 million to at most a few thousand, and it\nwill go quite quickly (if I drop other indexes on this table, it does\nuse that index and completes in about 1/1000th of the time). After\nthat, the join with the feature table should go quite quickly as well\nusing the primary key on feature.\n\nSo, the question is, is there a way I can force the query planner to use\nthe index I want it to use? I have experimented with using INNER JOIN\nand changing the order of the tables in the join clause, but nothing\nseems to work. Any suggestions?\n\nThanks much,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "10 Jul 2003 11:18:01 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "force the use of a particular index"
},
{
"msg_contents": "On Thu, 2003-07-10 at 15:18, Scott Cain wrote:\n> Hello,\n> \n> I am wondering if there is a way to force the use of a particular index\n> when doing a query. I have two tables that are pretty big (each >3\n> million rows), and when I do a join between them the performance is\n> generally quite poor as it does not use the indexes that I think it\n> should use. Here is an example query:\n\nPlease send the EXPLAIN ANALYZE results for that query with and without\nsequential scans enabled.\n\nset enable_seqscan = true;\nEXPLAIN ANALYZE <query>;\n\nset enable_seqscan = false;\nEXPLAIN ANALYZE <query>;",
"msg_date": "11 Jul 2003 10:51:13 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "The problem (at least as it appears to me) is not that it is performing\na table scan instead of an index scan, it is that it is using the wrong\nindex. Here is the output from EXPLAIN ANALYZE:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=494008.47..494037.59 rows=166 width=54) (actual time=114660.37..114660.38 rows=1 loops=1)\n -> Sort (cost=494008.47..494012.63 rows=1664 width=54) (actual time=114660.37..114660.37 rows=1 loops=1)\n Sort Key: f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id\n -> Nested Loop (cost=0.00..493919.44 rows=1664 width=54) (actual time=2596.13..114632.90 rows=1 loops=1)\n -> Index Scan using feature_pkey on feature f (cost=0.00..134601.43 rows=52231 width=40) (actual time=105.74..56048.87 rows=13825 loops=1)\n Filter: (type_id = 219)\n -> Index Scan using featureloc_idx1 on featureloc fl (cost=0.00..6.87 rows=1 width=14) (actual time=4.23..4.23 rows=0 loops=13825)\n Index Cond: (\"outer\".feature_id = fl.feature_id)\n Filter: ((srcfeature_id = 6) AND (fmin <= 2585581) AND (fmax >= 2565581))\n Total runtime: 114660.91 msec\n\nThis is the same regardless of enable_seqscan's setting. The index that\nit is using on featureloc (featureloc_idx1) is on the foreign key\nfeature_id. It should instead be using another index, featureloc_idx3,\nwhich is built on (srcfeature_id, fmin, fmax).\n\nI should also mention that I've done a VACUUM FULL ANALYZE on this\ndatabase, and I've been using it for a while, and this is the primary\ntype of query I perform on the database.\n\nThanks,\nScott\n\n\n\nOn Fri, 2003-07-11 at 06:51, Rod Taylor wrote:\n> On Thu, 2003-07-10 at 15:18, Scott Cain wrote:\n> > Hello,\n> > \n> > I am wondering if there is a way to force the use of a particular index\n> > when doing a query. I have two tables that are pretty big (each >3\n> > million rows), and when I do a join between them the performance is\n> > generally quite poor as it does not use the indexes that I think it\n> > should use. Here is an example query:\n> \n> Please send the EXPLAIN ANALYZE results for that query with and without\n> sequential scans enabled.\n> \n> set enable_seqscan = true;\n> EXPLAIN ANALYZE <query>;\n> \n> set enable_seqscan = false;\n> EXPLAIN ANALYZE <query>;\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "11 Jul 2003 09:17:40 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "On Fri, 2003-07-11 at 13:17, Scott Cain wrote:\n> The problem (at least as it appears to me) is not that it is performing\n> a table scan instead of an index scan, it is that it is using the wrong\n> index. Here is the output from EXPLAIN ANALYZE:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=494008.47..494037.59 rows=166 width=54) (actual time=114660.37..114660.38 rows=1 loops=1)\n> -> Sort (cost=494008.47..494012.63 rows=1664 width=54) (actual time=114660.37..114660.37 rows=1 loops=1)\n> Sort Key: f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id\n> -> Nested Loop (cost=0.00..493919.44 rows=1664 width=54) (actual time=2596.13..114632.90 rows=1 loops=1)\n> -> Index Scan using feature_pkey on feature f (cost=0.00..134601.43 rows=52231 width=40) (actual time=105.74..56048.87 rows=13825 loops=1)\n> Filter: (type_id = 219)\n> -> Index Scan using featureloc_idx1 on featureloc fl (cost=0.00..6.87 rows=1 width=14) (actual time=4.23..4.23 rows=0 loops=13825)\n> Index Cond: (\"outer\".feature_id = fl.feature_id)\n> Filter: ((srcfeature_id = 6) AND (fmin <= 2585581) AND (fmax >= 2565581))\n> Total runtime: 114660.91 msec\n\n> it is using on featureloc (featureloc_idx1) is on the foreign key\n> feature_id. It should instead be using another index, featureloc_idx3,\n> which is built on (srcfeature_id, fmin, fmax).\n\nNope.. The optimizer is right in the decision to use featureloc_idx1. \nYou will notice it is expecting to retrieve a single row from this\nindex, but the featureloc_idx3 is bound to be larger (due to indexing\nmore data), thus take more disk reads for the exact same information (or\nin this case, lack thereof).\n\nWhat is taking a long time is the scan on feature_pkey. It looks like it\nis throwing away a ton of rows that are not type_id = 219. Either that,\nor you do a pile of deletes and haven't run REINDEX recently.\n\nCreate an index consisting of (feature_id, type_id). This will probably\nmake a significant different in execution time.",
"msg_date": "11 Jul 2003 13:38:16 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> So, the question is, is there a way I can force the query planner to use\n> the index I want it to use?\n\nNo (and I don't think there should be). Given that it *can* generate\nthe plan you want, this is clearly an estimation failure. What is the\nindex it does use? Would you show us EXPLAIN ANALYZE results when\nusing each index?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jul 2003 11:24:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index "
},
{
"msg_contents": "Rod,\n\nI see what you mean about the scan on the feature_pkey taking a long\ntime. I tried several things to remedy that. I created an index on\nfeature (feature_id,type_id) (which I don't think makes sense since\nfeature_id is the primary key, so add another column really doesn't\nhelp). I also created a index on feature (type_id, feature_id), but the\nplanner doesn't use it. Also, there was an already existing index on\nfeature (type_id) that the planner never used.\n\nOne thing I tried that changed the query plan and improved performance\nslightly (but still nowhere near what I need) was to add a partial index\non featureloc on (fmin,fmax) where scrfeature_id=6. This is something I\ncould realistically do since there are relatively few (>30)\nsrcfeature_ids that I am interested in, so putting in place a partial\nindex for each of them would not be a big deal. Nevertheless, the\nperformance is still not there. Here is the EXPLAIN ANALYZE for this\nsituation:\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=156172.23..156200.11 rows=159 width=54) (actual time=63631.93..63631.93 rows=1 loops=1)\n -> Sort (cost=156172.23..156176.21 rows=1594 width=54) (actual time=63631.93..63631.93 rows=1 loops=1)\n Sort Key: f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id\n -> Hash Join (cost=135100.30..156087.46 rows=1594 width=54) (actual time=63631.29..63631.79 rows=1 loops=1)\n Hash Cond: (\"outer\".feature_id = \"inner\".feature_id)\n -> Index Scan using featureloc_src_6 on featureloc fl (cost=0.00..18064.99 rows=101883 width=14) (actual time=26.11..430.00 rows=570 loops=1)\n Index Cond: ((fmin <= 2585581) AND (fmax >= 2565581))\n Filter: (srcfeature_id = 6)\n -> Hash (cost=134601.43..134601.43 rows=48347 width=40) (actual time=63182.86..63182.86 rows=0 loops=1)\n -> Index Scan using feature_pkey on feature f (cost=0.00..134601.43 rows=48347 width=40) (actual time=69.98..62978.27 rows=13825 loops=1)\n Filter: (type_id = 219)\n Total runtime: 63632.28 msec\n(12 rows)\n\nAny other ideas?\n\nThanks,\nScott\n\nOn Fri, 2003-07-11 at 09:38, Rod Taylor wrote:\n> On Fri, 2003-07-11 at 13:17, Scott Cain wrote:\n> > The problem (at least as it appears to me) is not that it is performing\n> > a table scan instead of an index scan, it is that it is using the wrong\n> > index. 
Here is the output from EXPLAIN ANALYZE:\n> > \n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Unique (cost=494008.47..494037.59 rows=166 width=54) (actual time=114660.37..114660.38 rows=1 loops=1)\n> > -> Sort (cost=494008.47..494012.63 rows=1664 width=54) (actual time=114660.37..114660.37 rows=1 loops=1)\n> > Sort Key: f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id\n> > -> Nested Loop (cost=0.00..493919.44 rows=1664 width=54) (actual time=2596.13..114632.90 rows=1 loops=1)\n> > -> Index Scan using feature_pkey on feature f (cost=0.00..134601.43 rows=52231 width=40) (actual time=105.74..56048.87 rows=13825 loops=1)\n> > Filter: (type_id = 219)\n> > -> Index Scan using featureloc_idx1 on featureloc fl (cost=0.00..6.87 rows=1 width=14) (actual time=4.23..4.23 rows=0 loops=13825)\n> > Index Cond: (\"outer\".feature_id = fl.feature_id)\n> > Filter: ((srcfeature_id = 6) AND (fmin <= 2585581) AND (fmax >= 2565581))\n> > Total runtime: 114660.91 msec\n> \n> > it is using on featureloc (featureloc_idx1) is on the foreign key\n> > feature_id. It should instead be using another index, featureloc_idx3,\n> > which is built on (srcfeature_id, fmin, fmax).\n> \n> Nope.. The optimizer is right in the decision to use featureloc_idx1. \n> You will notice it is expecting to retrieve a single row from this\n> index, but the featureloc_idx3 is bound to be larger (due to indexing\n> more data), thus take more disk reads for the exact same information (or\n> in this case, lack thereof).\n> \n> What is taking a long time is the scan on feature_pkey. It looks like it\n> is throwing away a ton of rows that are not type_id = 219. Either that,\n> or you do a pile of deletes and haven't run REINDEX recently.\n> \n> Create an index consisting of (feature_id, type_id). This will probably\n> make a significant different in execution time.\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
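For readers following along, the partial index described above would have been declared roughly like this (reconstructed from the description and the index name shown in the plan, so treat it as a sketch):

  CREATE INDEX featureloc_src_6 ON featureloc (fmin, fmax)
   WHERE srcfeature_id = 6;
  ANALYZE featureloc;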
"msg_date": "11 Jul 2003 11:36:19 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "Hi Tom,\n\nEmbarrassingly, I can't. I've been monkeying with the database so much\nthat I can't seem to get it back to the state where I reproduce the\nbehavior I want. A database drop and reload may be the only way, but\nsince that is a time consuming thing to do, I won't be able to do it\nuntil this evening.\n\nThanks,\nScott\n\nOn Fri, 2003-07-11 at 11:24, Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n> > So, the question is, is there a way I can force the query planner to use\n> > the index I want it to use?\n> \n> No (and I don't think there should be). Given that it *can* generate\n> the plan you want, this is clearly an estimation failure. What is the\n> index it does use? Would you show us EXPLAIN ANALYZE results when\n> using each index?\n> \n> \t\t\tregards, tom lane\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "11 Jul 2003 11:38:47 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> Embarrassingly, I can't. I've been monkeying with the database so much\n> that I can't seem to get it back to the state where I reproduce the\n> behavior I want.\n\nIf the thing works as desired after a VACUUM ANALYZE, then I suggest\nthe estimation failure was just due to out-of-date statistics ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jul 2003 11:44:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index "
},
{
"msg_contents": "On Fri, 2003-07-11 at 11:36, Scott Cain wrote:\n> Rod,\n> \n> I see what you mean about the scan on the feature_pkey taking a long\n> time. I tried several things to remedy that. I created an index on\n> feature (feature_id,type_id) (which I don't think makes sense since\n> feature_id is the primary key, so add another column really doesn't\n\nIt may be the primary key, but the system looked like it was throwing\naway many rows based on type_id. If it was throwing away many more rows\nthan found, the index with type_id may have been cheaper.\n\nIt is difficult to tell from an EXPLAIN ANALYZE as it doesn't tell you\nexactly how many rows were filtered, just the cost to read them and how\nmany were used after the filter.\n\n> help). I also created a index on feature (type_id, feature_id), but the\n> planner doesn't use it. Also, there was an already existing index on\n> feature (type_id) that the planner never used.\n\nIt cannot use more than one index for a given table scan at the moment. \nThere are proposals on how to 'fix' that, but those require significant \noverhauls of various systems.\n\n> Any other ideas?\n\nOut of curiosity, what do you get if you disable hash joins?\n\nset enable_hashjoin = false;\n\n\nHow about a partial index on (feature_id) where type_id = 219?",
"msg_date": "11 Jul 2003 12:20:51 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "On Fri, 2003-07-11 at 12:20, Rod Taylor wrote:\n> On Fri, 2003-07-11 at 11:36, Scott Cain wrote:\n> > Any other ideas?\n> \n> Out of curiosity, what do you get if you disable hash joins?\n> \n> set enable_hashjoin = false;\n\nBINGO!\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=161718.69..161738.38 rows=113 width=53) (actual time=529.03..529.03 rows=1 loops=1)\n -> Sort (cost=161718.69..161721.50 rows=1125 width=53) (actual time=529.02..529.02 rows=1 loops=1)\n Sort Key: f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id\n -> Merge Join (cost=26493.64..161661.65 rows=1125 width=53) (actual time=416.46..528.77 rows=1 loops=1)\n Merge Cond: (\"outer\".feature_id = \"inner\".feature_id)\n -> Index Scan using feature_pkey on feature f (cost=0.00..134592.43 rows=47912 width=39) (actual time=0.46..502.50 rows=431 loops=1)\n Filter: (type_id = 219)\n -> Sort (cost=26493.64..26722.33 rows=91476 width=14) (actual time=23.98..24.38 rows=570 loops=1)\n Sort Key: fl.feature_id\n -> Index Scan using featureloc_src_6 on featureloc fl (cost=0.00..18039.22 rows=91476 width=14) (actual time=15.16..21.85 rows=570 loops=1)\n Index Cond: ((fmin <= 2585581) AND (fmax >= 2565581))\n Filter: (srcfeature_id = 6)\n Total runtime: 529.52 msec\n(13 rows)\n\n> \n> How about a partial index on (feature_id) where type_id = 219?\n\nThat is a possiblity. type_id is a foreign key on another table that\nhas several thousand rows, but in practice, there will be only a subset\nof those that we are interested in using with this query, so it may not\nbe too unwieldy to do for each interesting type_id in practice. \nHowever, for testing I just created the partial index on type_id=219 and\nit was not used, so it may not make a difference anyway.\n\nThanks much,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "11 Jul 2003 13:20:59 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "> > set enable_hashjoin = false;\n> \n> BINGO!\n\nI'm not so sure about that. Your dataset seems to have changed fairly\nsignificantly since the last test.\n\n> -> Index Scan using feature_pkey on feature f (cost=0.00..134592.43 rows=47912 width=39) (actual time=0.46..502.50 rows=431 loops=1)\n\nNotice it only pulled out 431 rows where prior runs pulled out several\nthousand (~13000). I think what really happened was something came\nalong and deleted a bunch of stuff, then vacuum ran.",
"msg_date": "11 Jul 2003 14:14:45 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: force the use of a particular index"
},
{
"msg_contents": "On Fri, 2003-07-11 at 14:14, Rod Taylor wrote:\n> > > set enable_hashjoin = false;\n> > \n> > BINGO!\n> \n> I'm not so sure about that. Your dataset seems to have changed fairly\n> significantly since the last test.\n> \n> > -> Index Scan using feature_pkey on feature f (cost=0.00..134592.43 rows=47912 width=39) (actual time=0.46..502.50 rows=431 loops=1)\n> \n> Notice it only pulled out 431 rows where prior runs pulled out several\n> thousand (~13000). I think what really happened was something came\n> along and deleted a bunch of stuff, then vacuum ran.\n\nThere is nearly a zero chance that happened. This database is\naccessible only by me, I haven't deleted anything. The only things I\nhave done is to create and drop various indexes and run vacuum. Is\nthere anything else that could explain the difference? Is the index\nscan on feature_pkey using information from the index scan on\nfeatureloc_src_6 to limit the number of rows to get from feature?\n\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "11 Jul 2003 14:23:04 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: force the use of a particular index"
}
] |
[
{
"msg_contents": "Hi,\n\nI have this machine with a 10 million records:\n* Dual Xeon 2.0 (HyperThreading enabled), 3 7200 SCSI , Adaptec 2110S,\nRAID 5 - 32k chunk size, 1 GB Ram DDR 266 ECC, RH 8.0 - 2.4.18\n\nThe database is mirrored with contrib/dbmirror in a P4 1 Gb Ram + IDE\n\nIf a disk failure occurs, I can use the server in the mirror.\n\nI will format the main server in this weekend and I have seen in the list\nsome people that recomends a Software RAID instead HW.\n\nI think too remove the RAID 5 and turn a RAID 1 for data in 2 HDs.\nSO, WAL and swap in the thrid HD.\n\nMy questions:\n\n1) I will see best disk performance changing the disk layout like above\n2) HyperThreading really improve a procces basead program, like postgres\n\nThank�s for all\n\nAlexandre\n\n\n",
"msg_date": "Thu, 10 Jul 2003 14:43:25 -0300 (BRT)",
"msg_from": "\"alexandre arruda paes :: aldeia digital\"\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Dual Xeon + HW RAID question"
},
{
"msg_contents": "> 2) HyperThreading really improve a procces basead program, like postgres\n\nI've not seen the results of this type of measurement posted, so really\ncouldn't say.",
"msg_date": "11 Jul 2003 10:55:59 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
}
] |
[
{
"msg_contents": "Greetings,\nAll the recommendations I can locate concerning the use of (the various \nflavors of) VACUUM suggest running it at regular intervals. Is there any \nway, for a given table, to discover how many/what percentage of rows are \nlikely to be VACUUMable at a given point, so that some kind of \nthreshold-based VACUUM could be done by an application? We have several \ntables that undergo bursty UPDATEs, where large numbers of transactions \noccur \"relatively\" infrequently; and others where the UPDATEs occur at \nregular intervals (a few seconds or so).\n\nThanks for any advice!\n\n Regards,\n Rich Cullingford\n [email protected]\n\n",
"msg_date": "Thu, 10 Jul 2003 14:19:26 -0400",
"msg_from": "Rich Cullingford <[email protected]>",
"msg_from_op": true,
"msg_subject": "pre-Vacuum statistics"
},
{
"msg_contents": "On Thu, 2003-07-10 at 18:19, Rich Cullingford wrote:\n> Greetings,\n> All the recommendations I can locate concerning the use of (the various \n> flavors of) VACUUM suggest running it at regular intervals. Is there any \n> way, for a given table, to discover how many/what percentage of rows are \n\nNot nicely, but you may want to look at pg_autovacuum either off gborg\nor in the 7.4 /contrib/pg_autovacuum directory.\n\nIt will fire off a periodic vacuum based on table activity from the\nstatistics system.",
"msg_date": "11 Jul 2003 10:57:47 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pre-Vacuum statistics"
}
] |
[
{
"msg_contents": "Hi all,\n\nFew days back, I promised an article of postgresql tuning. It is already \npublished and I had the notification in my inbox but somwhow missed that. \n\nThe articles are available at http://www.varlena.com/GeneralBits/. I would be \nlooking forward to the feedback.\n\nAnd BTW, what's up with the lists? Not a single message on general and hackers? \nKind of strange, I would say..\n\nBye\n Shridhar\n\n--\nArnold's Addendum:\tAnything not fitting into these categories causes cancer in \nrats.\n\n",
"msg_date": "Fri, 11 Jul 2003 16:02:43 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql General Bits issue"
}
] |
[
{
"msg_contents": "Upon testing queries with EXPLAIN ANALYSE, I started to notice that the \nplanner would avoid using indexes when available. Instead it would \njump to sequence scans, ignoring the index and increasing overall time \nit took to get results.\n\nI have been looking up documentation and noticed that you can somewhat \nforce Postgres into using Indexes when available. So I changed the \nfollowing two lines in the .conf file:\n\n enable_seqscan = false\n enable_nestloop = false\n\nThis was recommended in the documentation, and to say the least things \nhave really changed in performance. Queries have halved the time \nneeded to execute even if the estimates are insanely high compared.\n\nI also increased this value, which apparently helps when running ANALYSE \non tables:\n default_statistics_target = 1000\n\nNow how sane is it to keep those options turned off? And what side \neffects can I expect from changing default_statistics_target? And any \nway to have the planner quiet guessing tens of thousands of rows will be \nreturn when there are at most hundred?\n\nI included the EXPLAIN ALALYSE results in an attachment to maintain \nformatting of the output. Thanks in advance!\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]",
"msg_date": "Fri, 11 Jul 2003 20:17:34 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer Parameters"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> force Postgres into using Indexes when available. So I changed the \n> following two lines in the .conf file:\n\n> enable_seqscan = false\n> enable_nestloop = false\n\n> This was recommended in the documentation,\n\nWhere would you say that setting those off in the config file is\n\"recommended\"?\n\n> Now how sane is it to keep those options turned off?\n\nIt isn't. If you have to force them off for a particular query, do\nso right before you issue that query, and turn them on again after.\nTurning them off globally is sure to cause you pain later.\n\n> And any \n> way to have the planner quiet guessing tens of thousands of rows will be \n> return when there are at most hundred?\n\n> AND Po.PostTimestamp > (LOCALTIMESTAMP - INTERVAL '10 minutes')\n> AND Po.PuppetName IS NOT NULL\n\n> -> Seq Scan on post po (cost=0.00..14369.84 rows=40513 width=41) (actual time=2820.88..2826.30 rows=392 loops=1)\n> Filter: ((posttimestamp > (('now'::text)::timestamp(6) without time zone - '00:10'::interval)) AND (puppetname IS NOT NULL))\n\nNot with that coding technique; \"LOCALTIMESTAMP - INTERVAL '10 minutes'\"\nisn't a constant and so the planner can't look at its statistics to\nsee that only a small part of the table will be selected.\n\nThere are two standard workarounds for this:\n\n1. Do the timestamp arithmetic on the client side, so that the query\nyou send the backend has a simple constant:\n\n ... AND Po.PostTimestamp > '2003-07-12 16:27'\n\n2. Create a function that is falsely marked immutable, viz:\n\ncreate function ago(interval) returns timestamp without time zone as\n'select localtimestamp - $1' language sql immutable strict;\n\n ... AND Po.PostTimestamp > ago('10 minutes')\n\nBecause the function is marked immutable, the planner will reduce\n\"ago('10 minutes')\" to a constant on sight, and then use that value\nfor planning purposes. This technique can cause problems, since\nin some contexts the reduction will occur prematurely, but as long\nas you only use ago() in interactively-issued queries it works okay.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Jul 2003 16:46:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Parameters "
},
{
"msg_contents": "Tom Lane wrote:\n> \n>>force Postgres into using Indexes when available. So I changed the \n>>following two lines in the .conf file:\n> \n> \n>> enable_seqscan = false\n>> enable_nestloop = false\n> >This was recommended in the documentation,\n> \n> \n> Where would you say that setting those off in the config file is\n> \"recommended\"?\n> \n> \n>>Now how sane is it to keep those options turned off?\n> \n> \n> It isn't. If you have to force them off for a particular query, do\n> so right before you issue that query, and turn them on again after.\n> Turning them off globally is sure to cause you pain later.\n> \n> \n>>And any \n>>way to have the planner quiet guessing tens of thousands of rows will be \n>>return when there are at most hundred?\n> \n> \n>> AND Po.PostTimestamp > (LOCALTIMESTAMP - INTERVAL '10 minutes')\n>> AND Po.PuppetName IS NOT NULL\n> \n> \n>> -> Seq Scan on post po (cost=0.00..14369.84 rows=40513 width=41) (actual time=2820.88..2826.30 rows=392 loops=1)\n>> Filter: ((posttimestamp > (('now'::text)::timestamp(6) without time zone - '00:10'::interval)) AND (puppetname IS NOT NULL))\n> \n> \n> Not with that coding technique; \"LOCALTIMESTAMP - INTERVAL '10 minutes'\"\n> isn't a constant and so the planner can't look at its statistics to\n> see that only a small part of the table will be selected.\n> \n> There are two standard workarounds for this:\n> \n> 1. Do the timestamp arithmetic on the client side, so that the query\n> you send the backend has a simple constant:\n> \n> ... AND Po.PostTimestamp > '2003-07-12 16:27'\n> \n> 2. Create a function that is falsely marked immutable, viz:\n> \n> create function ago(interval) returns timestamp without time zone as\n> 'select localtimestamp - $1' language sql immutable strict;\n> \n> ... AND Po.PostTimestamp > ago('10 minutes')\n> \n> Because the function is marked immutable, the planner will reduce\n> \"ago('10 minutes')\" to a constant on sight, and then use that value\n> for planning purposes. This technique can cause problems, since\n> in some contexts the reduction will occur prematurely, but as long\n> as you only use ago() in interactively-issued queries it works okay.\n> \n> \t\t\tregards, tom lane\n\nhttp://www.postgresql.org/docs/7.3/static/indexes-examine.html\n\nThe conf file does not make a mention of it, other then perhaps being \nused to debug. The above link points to disabling it, but tells you \nnothing about potential consequences and what to do if it works better \nthen it did before.\n\nHowever, when I tried out your functions things started to work much \nbetter then previously. This to say the least is a great sign as it \nwill increase overall performance.\n\nSo thanks for that! As a side note, would you recommend disabling \nfsync for added performance? This would be joined with a healthy dose \nof a kernel file system buffer.\n\nSimply curious, as I have been increasing certain options for the WAL to \nmean it writes less often (transactions are numerous so that's not an \nissue) to the hard drives.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n\n",
"msg_date": "Sat, 12 Jul 2003 18:16:29 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer Parameters"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> As a side note, would you recommend disabling \n> fsync for added performance?\n\nOnly if you are willing to sacrifice crash-safety in the name of speed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Jul 2003 22:14:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Optimizer Parameters "
}
] |
[
{
"msg_contents": "Alexandre,\n\nSince you want the fastest speed I would do the 2 data\ndisks in RAID 0 (striping) not RAID 1 (mirroring).\n\nIf you would care about not loosing any transactions\nyou would keep all 3 disks in RAID 5.\n\nDon't know the answer to the Hyperthreading question. \nWhy don't you run a test to find out?\n\nRegards,\nNikolaus\n\nOn Thu, 10 Jul 2003 14:43:25 -0300 (BRT), \"alexandre\narruda paes :: aldeia digital\" wrote:\n\n> \n> Hi,\n> \n> I have this machine with a 10 million records:\n> * Dual Xeon 2.0 (HyperThreading enabled), 3 7200 SCSI\n,\n> Adaptec 2110S,\n> RAID 5 - 32k chunk size, 1 GB Ram DDR 266 ECC, RH 8.0\n-\n> 2.4.18\n> \n> The database is mirrored with contrib/dbmirror in a P4\n> 1 Gb Ram + IDE\n> \n> If a disk failure occurs, I can use the server in the\n> mirror.\n> \n> I will format the main server in this weekend and I\n> have seen in the list\n> some people that recomends a Software RAID instead HW.\n> \n> I think too remove the RAID 5 and turn a RAID 1 for\n> data in 2 HDs.\n> SO, WAL and swap in the thrid HD.\n> \n> My questions:\n> \n> 1) I will see best disk performance changing the disk\n> layout like above\n> 2) HyperThreading really improve a procces basead\n> program, like postgres\n> \n> Thank´s for all\n> \n> Alexandre\n> \n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send\n> an appropriate\n> subscribe-nomail command to\n> [email protected] so that your\n> message can get through to the mailing list\n> cleanly\n",
"msg_date": "Sat, 12 Jul 2003 11:25:14 -0700 (PDT)",
"msg_from": "\"Nikolaus Dilger\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "Back in the day, we got good performance from similar sized tables\nusing VMS, a small VAX with only 256MB RAM and narrow SCSI 1GB disks.\nThe RDBMS was DEC's own Rdb/VMS. A \"small\" mainframe (6 MIPS, 8MB \nRAM) also gave good performance.\n\nSo, this old curmudgeon asks, why such beefy h/w for such small \ndatabases.\n\nOn Sat, 2003-07-12 at 13:25, Nikolaus Dilger wrote:\n> Alexandre,\n> \n> Since you want the fastest speed I would do the 2 data\n> disks in RAID 0 (striping) not RAID 1 (mirroring).\n> \n> If you would care about not loosing any transactions\n> you would keep all 3 disks in RAID 5.\n> \n> Don't know the answer to the Hyperthreading question. \n> Why don't you run a test to find out?\n> \n> Regards,\n> Nikolaus\n> \n> On Thu, 10 Jul 2003 14:43:25 -0300 (BRT), \"alexandre\n> arruda paes :: aldeia digital\" wrote:\n> \n> > \n> > Hi,\n> > \n> > I have this machine with a 10 million records:\n> > * Dual Xeon 2.0 (HyperThreading enabled), 3 7200 SCSI\n> ,\n> > Adaptec 2110S,\n> > RAID 5 - 32k chunk size, 1 GB Ram DDR 266 ECC, RH 8.0\n> -\n> > 2.4.18\n> > \n> > The database is mirrored with contrib/dbmirror in a P4\n> > 1 Gb Ram + IDE\n> > \n> > If a disk failure occurs, I can use the server in the\n> > mirror.\n> > \n> > I will format the main server in this weekend and I\n> > have seen in the list\n> > some people that recomends a Software RAID instead HW.\n> > \n> > I think too remove the RAID 5 and turn a RAID 1 for\n> > data in 2 HDs.\n> > SO, WAL and swap in the thrid HD.\n> > \n> > My questions:\n> > \n> > 1) I will see best disk performance changing the disk\n> > layout like above\n> > 2) HyperThreading really improve a procces basead\n> > program, like postgres\n\n-- \n+-----------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| 4 degrees from Vladimir Putin\n+-----------------------------------------------------------+\n\n",
"msg_date": "12 Jul 2003 16:09:03 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "[OT] Such incredible h/w (was Re: Dual Xeon + HW RAID\n\tquestion)"
},
{
"msg_contents": "On Sat, Jul 12, 2003 at 11:25:14AM -0700, Nikolaus Dilger wrote:\n> Alexandre,\n> \n> Since you want the fastest speed I would do the 2 data\n> disks in RAID 0 (striping) not RAID 1 (mirroring).\n\nNote that RAID 0 buys you nothing at all in redundancy. So if the\npoint is to be able to recover from a disk failure, you need 1 (or\nsome combination of 0 and 1, or 5).\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 14 Jul 2003 07:19:25 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "Alexandre,\n\nI missed your orig. post, but AFAIK multiprocessing kernels will handle HT\nCPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4\nCPUs.\n\nThis way, I don't think HT would improve any single query (afaik no postgres\nprocess uses more than one cpu), but overall multi-query performance has to\nimprove.\n\n----- Original Message ----- \nFrom: \"Nikolaus Dilger\" <[email protected]>\nSent: Saturday, July 12, 2003 8:25 PM\n\n\nAlexandre,\n\nSince you want the fastest speed I would do the 2 data\ndisks in RAID 0 (striping) not RAID 1 (mirroring).\n\nIf you would care about not loosing any transactions\nyou would keep all 3 disks in RAID 5.\n\nDon't know the answer to the Hyperthreading question.\nWhy don't you run a test to find out?\n\nRegards,\nNikolaus\n\nOn Thu, 10 Jul 2003 14:43:25 -0300 (BRT), \"alexandre\narruda paes :: aldeia digital\" wrote:\n\n>\n> Hi,\n>\n> I have this machine with a 10 million records:\n> * Dual Xeon 2.0 (HyperThreading enabled), 3 7200 SCSI\n,\n> Adaptec 2110S,\n> RAID 5 - 32k chunk size, 1 GB Ram DDR 266 ECC, RH 8.0\n-\n> 2.4.18\n>\n> The database is mirrored with contrib/dbmirror in a P4\n> 1 Gb Ram + IDE\n>\n> If a disk failure occurs, I can use the server in the\n> mirror.\n>\n> I will format the main server in this weekend and I\n> have seen in the list\n> some people that recomends a Software RAID instead HW.\n>\n> I think too remove the RAID 5 and turn a RAID 1 for\n> data in 2 HDs.\n> SO, WAL and swap in the thrid HD.\n>\n> My questions:\n>\n> 1) I will see best disk performance changing the disk\n> layout like above\n> 2) HyperThreading really improve a procces basead\n> program, like postgres\n>\n> Thank�s for all\n>\n> Alexandre\n\n",
"msg_date": "Mon, 21 Jul 2003 07:09:48 +0200",
"msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "SZUCS,\n\nIn my tests, I don�t a great performance enhacement with HT.\n\nI suspect that my problem resides on I/O performance. I will\nwait for a best moment to resinstall the system with other\ndisk configurations and then I will report here.\n\n\nThanks for all replys!\n\nAlexandre\n\n> Alexandre,\n>\n> I missed your orig. post, but AFAIK multiprocessing kernels will handle HT\n> CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4\n> CPUs.\n>\n> This way, I don't think HT would improve any single query (afaik no\n> postgres\n> process uses more than one cpu), but overall multi-query performance has\n> to\n> improve.\n>\n> ----- Original Message -----\n> From: \"Nikolaus Dilger\" <[email protected]>\n> Sent: Saturday, July 12, 2003 8:25 PM\n>\n>\n> Alexandre,\n>\n> Since you want the fastest speed I would do the 2 data\n> disks in RAID 0 (striping) not RAID 1 (mirroring).\n>\n> If you would care about not loosing any transactions\n> you would keep all 3 disks in RAID 5.\n>\n> Don't know the answer to the Hyperthreading question.\n> Why don't you run a test to find out?\n>\n> Regards,\n> Nikolaus\n>\n> On Thu, 10 Jul 2003 14:43:25 -0300 (BRT), \"alexandre\n> arruda paes :: aldeia digital\" wrote:\n>\n>>\n>> Hi,\n>>\n>> I have this machine with a 10 million records:\n>> * Dual Xeon 2.0 (HyperThreading enabled), 3 7200 SCSI\n> ,\n>> Adaptec 2110S,\n>> RAID 5 - 32k chunk size, 1 GB Ram DDR 266 ECC, RH 8.0\n> -\n>> 2.4.18\n>>\n>> The database is mirrored with contrib/dbmirror in a P4\n>> 1 Gb Ram + IDE\n>>\n>> If a disk failure occurs, I can use the server in the\n>> mirror.\n>>\n>> I will format the main server in this weekend and I\n>> have seen in the list\n>> some people that recomends a Software RAID instead HW.\n>>\n>> I think too remove the RAID 5 and turn a RAID 1 for\n>> data in 2 HDs.\n>> SO, WAL and swap in the thrid HD.\n>>\n>> My questions:\n>>\n>> 1) I will see best disk performance changing the disk\n>> layout like above\n>> 2) HyperThreading really improve a procces basead\n>> program, like postgres\n>>\n>> Thank�s for all\n>>\n>> Alexandre\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n",
"msg_date": "Mon, 21 Jul 2003 15:07:06 -0300 (BRT)",
"msg_from": "\"alexandre paes :: aldeia digital\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "SZUCS G�bor wrote:\n> Alexandre,\n> \n> I missed your orig. post, but AFAIK multiprocessing kernels will handle HT\n> CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4\n> CPUs.\n> \n> This way, I don't think HT would improve any single query (afaik no postgres\n> process uses more than one cpu), but overall multi-query performance has to\n> improve.\n\nWhen you use hyperthreading, each virtual cpu runs at 70% of a full CPU,\nso hyperthreading could be slower than non-hyperthreading. On a fully\nloaded dual cpu system, you are looking at 2.8 cpu's (0.70 * 4), while\nif it isn't loaded, you are looking at slowing down if you are only\nusing 1 or 2 cpu's.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 21 Jul 2003 15:07:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "\n> > I missed your orig. post, but AFAIK multiprocessing kernels will handle\nHT\n> > CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4\n> > CPUs.\n> >\n> > This way, I don't think HT would improve any single query (afaik no\npostgres\n> > process uses more than one cpu), but overall multi-query performance has\nto\n> > improve.\n>\n> When you use hyperthreading, each virtual cpu runs at 70% of a full CPU,\n> so hyperthreading could be slower than non-hyperthreading. On a fully\n> loaded dual cpu system, you are looking at 2.8 cpu's (0.70 * 4), while\n> if it isn't loaded, you are looking at slowing down if you are only\n> using 1 or 2 cpu's.\n\n Virtual cpus are not running at 70% of real cpus :). Slowdown will happen\nif\nscheduler will run 2 processes on the same real cpu. And I read that there\nare\npatches for Linux kernel to fix that. Sooner rather than later they will\nappear\nin Linus kernel.\n\n Mindaugas\n\n",
"msg_date": "Tue, 22 Jul 2003 10:23:14 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "Mindaugas Riauba wrote:\n> \n> > > I missed your orig. post, but AFAIK multiprocessing kernels will handle\n> HT\n> > > CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4\n> > > CPUs.\n> > >\n> > > This way, I don't think HT would improve any single query (afaik no\n> postgres\n> > > process uses more than one cpu), but overall multi-query performance has\n> to\n> > > improve.\n> >\n> > When you use hyperthreading, each virtual cpu runs at 70% of a full CPU,\n> > so hyperthreading could be slower than non-hyperthreading. On a fully\n> > loaded dual cpu system, you are looking at 2.8 cpu's (0.70 * 4), while\n> > if it isn't loaded, you are looking at slowing down if you are only\n> > using 1 or 2 cpu's.\n> \n> Virtual cpus are not running at 70% of real cpus :). Slowdown will happen\n> if\n> scheduler will run 2 processes on the same real cpu. And I read that there\n> are\n> patches for Linux kernel to fix that. Sooner rather than later they will\n> appear\n> in Linus kernel.\n\nRight, I simplified it. The big deal is whether the OS favors the\nsecond real CPU over one of the virtual CPU's on the same die --- by\ndefault, it doesn't. Ever if it did work perfectly, you are talking\nabout going from 1 to 1.4 or 2 to 2.8, which doesn't seem like much.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Jul 2003 12:26:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "\"by default\" -- do you mean there is a way to tell Linux to favor the second\nreal cpu over the HT one? how?\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Bruce Momjian\" <[email protected]>\nSent: Tuesday, July 22, 2003 6:26 PM\nSubject: Re: [PERFORM] Dual Xeon + HW RAID question\n\n\n> Right, I simplified it. The big deal is whether the OS favors the\n> second real CPU over one of the virtual CPU's on the same die --- by\n> default, it doesn't. Ever if it did work perfectly, you are talking\n> about going from 1 to 1.4 or 2 to 2.8, which doesn't seem like much.\n\n",
"msg_date": "Tue, 22 Jul 2003 19:10:52 +0200",
"msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "The Linux 2.6 kernel will have the ability to set CPU affinity for\nspecific processes. There is a patch for the 2.4 kernel at\nhttp://www.kernel.org/pub/linux/kernel/people/rml/cpu-affinity\n\nRedHat 9 already has support for CPU affinity build in.\n\nThe July 2003 issue of Linux Journal includes a little C program (on\npage 20) that gives you a shell level interface to the CPU affinity\nsystem calls, so you can dynamically assign processes to specific CPUs.\nI haven't tried it, but it looks very cool (my only SMP machine is in\nproduction, and I don't want to mess with it). If you try it out, please\nshare your experiences with the list.\n\n\nJord Tanner\nIndependent Gecko Consultants\n\nOn Tue, 2003-07-22 at 10:10, SZUCS Gábor wrote:\n> \"by default\" -- do you mean there is a way to tell Linux to favor the second\n> real cpu over the HT one? how?\n> \n> G.\n> ------------------------------- cut here -------------------------------\n> ----- Original Message ----- \n> From: \"Bruce Momjian\" <[email protected]>\n> Sent: Tuesday, July 22, 2003 6:26 PM\n> Subject: Re: [PERFORM] Dual Xeon + HW RAID question\n> \n> \n> > Right, I simplified it. The big deal is whether the OS favors the\n> > second real CPU over one of the virtual CPU's on the same die --- by\n> > default, it doesn't. Ever if it did work perfectly, you are talking\n> > about going from 1 to 1.4 or 2 to 2.8, which doesn't seem like much.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n-- \nJord Tanner <[email protected]>\n\n",
"msg_date": "22 Jul 2003 10:35:52 -0700",
"msg_from": "Jord Tanner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "SZUCS G�bor wrote:\n> \"by default\" -- do you mean there is a way to tell Linux to favor the second\n> real cpu over the HT one? how?\n\nRight now there is no way the kernel can tell which virtual cpu's are on\neach physical cpu's, and that is the problem. Once there is a way,\nhyperthreading will be more useful, but even then, it doesn't double\nyour CPU throughput, just increases by 40%.\n\n\n> > Right, I simplified it. The big deal is whether the OS favors the\n> > second real CPU over one of the virtual CPU's on the same die --- by\n> > default, it doesn't. Ever if it did work perfectly, you are talking\n> > about going from 1 to 1.4 or 2 to 2.8, which doesn't seem like much.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Jul 2003 13:38:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "\nBut CPU affinity isn't realated to hyperthreading, as far as I know. \nCPU affinity tries to keep processes on the same cpu in case there is\nstill valuable info in the cpu cache.\n\n---------------------------------------------------------------------------\n\nJord Tanner wrote:\n> The Linux 2.6 kernel will have the ability to set CPU affinity for\n> specific processes. There is a patch for the 2.4 kernel at\n> http://www.kernel.org/pub/linux/kernel/people/rml/cpu-affinity\n> \n> RedHat 9 already has support for CPU affinity build in.\n> \n> The July 2003 issue of Linux Journal includes a little C program (on\n> page 20) that gives you a shell level interface to the CPU affinity\n> system calls, so you can dynamically assign processes to specific CPUs.\n> I haven't tried it, but it looks very cool (my only SMP machine is in\n> production, and I don't want to mess with it). If you try it out, please\n> share your experiences with the list.\n> \n> \n> Jord Tanner\n> Independent Gecko Consultants\n> \n> On Tue, 2003-07-22 at 10:10, SZUCS G?bor wrote:\n> > \"by default\" -- do you mean there is a way to tell Linux to favor the second\n> > real cpu over the HT one? how?\n> > \n> > G.\n> > ------------------------------- cut here -------------------------------\n> > ----- Original Message ----- \n> > From: \"Bruce Momjian\" <[email protected]>\n> > Sent: Tuesday, July 22, 2003 6:26 PM\n> > Subject: Re: [PERFORM] Dual Xeon + HW RAID question\n> > \n> > \n> > > Right, I simplified it. The big deal is whether the OS favors the\n> > > second real CPU over one of the virtual CPU's on the same die --- by\n> > > default, it doesn't. Ever if it did work perfectly, you are talking\n> > > about going from 1 to 1.4 or 2 to 2.8, which doesn't seem like much.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> -- \n> Jord Tanner <[email protected]>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Jul 2003 13:39:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "On Tue, 2003-07-22 at 10:39, Bruce Momjian wrote:\n> But CPU affinity isn't realated to hyperthreading, as far as I know. \n> CPU affinity tries to keep processes on the same cpu in case there is\n> still valuable info in the cpu cache.\n> \n\nIt is true that CPU affinity is designed to prevent the dump of valuable\nCPU cache. My thought is that if you are trying to prevent CPU\ncontention, you could use CPU affinity to prevent 2 postmaster processes\nfrom running simultaneously on the same die. Am I out to lunch here?\nI've not worked with CPU affinity before, so I'm not familiar with the\nintimate details.\n \n\n-- \nJord Tanner <[email protected]>\n\n",
"msg_date": "22 Jul 2003 10:54:24 -0700",
"msg_from": "Jord Tanner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "Jord Tanner wrote:\n> On Tue, 2003-07-22 at 10:39, Bruce Momjian wrote:\n> > But CPU affinity isn't realated to hyperthreading, as far as I know. \n> > CPU affinity tries to keep processes on the same cpu in case there is\n> > still valuable info in the cpu cache.\n> > \n> \n> It is true that CPU affinity is designed to prevent the dump of valuable\n> CPU cache. My thought is that if you are trying to prevent CPU\n> contention, you could use CPU affinity to prevent 2 postmaster processes\n> from running simultaneously on the same die. Am I out to lunch here?\n> I've not worked with CPU affinity before, so I'm not familiar with the\n> intimate details.\n\nI guess you could but it is the backends that use the cpu. I don't\nthink manually specifying affinity will work for most applications.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Jul 2003 14:50:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
},
{
"msg_contents": "On Tue, 2003-07-22 at 11:50, Bruce Momjian wrote:\n> Jord Tanner wrote:\n> > On Tue, 2003-07-22 at 10:39, Bruce Momjian wrote:\n> > > But CPU affinity isn't realated to hyperthreading, as far as I know. \n> > > CPU affinity tries to keep processes on the same cpu in case there is\n> > > still valuable info in the cpu cache.\n> > > \n> > \n> > It is true that CPU affinity is designed to prevent the dump of valuable\n> > CPU cache. My thought is that if you are trying to prevent CPU\n> > contention, you could use CPU affinity to prevent 2 postmaster processes\n> > from running simultaneously on the same die. Am I out to lunch here?\n> > I've not worked with CPU affinity before, so I'm not familiar with the\n> > intimate details.\n> \n> I guess you could but it is the backends that use the cpu. I don't\n> think manually specifying affinity will work for most applications.\n\nThis is beating a dead horse, but I'll take one more kick at it.\n\nCPU affinity is defined by a bit mask, so multiple processors can be\nselected. It is also inherited by child processes, so assigning CPU 0\nand CPU 2 (which I assume would be on different dies in a dual processor\nhyper-threading system) to the parent postmaster should prevent CPU\ncontention with respect to the postgres backend. \n\nI would be very interested to see if any advantage could be gained by a\ncombination of multiple HT processors and cpu affinity over multiple\nnon-HT processors. Yet Another Performance Testing To Do (YAPTTD)!\n\n-- \nJord Tanner <[email protected]>\n\n",
"msg_date": "22 Jul 2003 12:18:52 -0700",
"msg_from": "Jord Tanner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual Xeon + HW RAID question"
}
] |
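A footnote on the thread above: whether extra (real or hyperthreaded) CPUs help PostgreSQL at all depends on how many backends are actually busy at the same time. A minimal way to get a feel for that, assuming the statistics collector is running (stats_start_collector = true), is to sample pg_stat_activity during peak load:

    -- one row per connected backend; run from psql at a busy moment
    SELECT count(*) AS connected_backends FROM pg_stat_activity;

If that count rarely climbs past one or two, a second CPU (virtual or otherwise) will buy very little for the database itself.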
[
{
"msg_contents": "Hi all,\n \nI'm in the process of initiating a movement in our company to move\ntowards open source software use. As part of this movement I will be\nrecommending PostgreSQL as an alternative to the currently used MSSQL.\nI'm going with PostgreSQL over MySQL because of the much more complete\nfeature set it provides. (After having used MSSQL for quite some time\nnot having triggers, foreign keys, sub selects, etc. is not an option.)\n \nHowever, to be able to justify the move I will have to demonstrate that\nPostgreSQL is up to par with MSSQL and MySQL when it comes to\nperformance. After having read through the docs and the lists it seems\nobvious that PostgreSQL is not configured for high performance out of\nthe box. I don't have months to learn the ins and outs of PostgreSQL\nperformance tuning so I looked around to see if there are any\npreconfigured solutions out there.\n \nI found that Red Hat Database 2.1 comes with PostgreSQL installed.\nHowever, as far as I can tell it comes with postgreSQL 7.2 and it\nrequires Red Hat 8.0 or Red Hat Advanced Server which is based on Red\nHat 7.2. Would I be better off installing Red Hat 9.0 and PostgreSQL 7.3\nand try to performance tune the installation myself, or should I buy Red\nHat Advanced Server and install Red Hat Database 2.1? (Let's say money\nis no object)\n \nSo, does anyone here have any experience using RH AS and DB 2.1?\n \nAny advice would be much appreciated.\n \nTIA\n \nBalazs\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nI’m in the process of initiating a movement in our\ncompany to move towards open source software use. As part of this movement I\nwill be recommending PostgreSQL as an alternative to\nthe currently used MSSQL. I’m going with PostgreSQL\nover MySQL because of the much more complete feature\nset it provides. (After having used MSSQL for quite some time not having\ntriggers, foreign keys, sub selects, etc. is not an option.)\n \nHowever, to be able to justify the move I will have to\ndemonstrate that PostgreSQL is up to par with MSSQL\nand MySQL when it comes to performance. After having\nread through the docs and the lists it seems obvious that PostgreSQL\nis not configured for high performance out of the box. I don’t have\nmonths to learn the ins and outs of PostgreSQL\nperformance tuning so I looked around to see if there are any preconfigured\nsolutions out there.\n \nI found that Red Hat Database 2.1 comes with PostgreSQL installed. However, as far as I can tell it\ncomes with postgreSQL 7.2 and it requires Red Hat 8.0\nor Red Hat Advanced Server which is based on Red Hat 7.2. Would I be better off\ninstalling Red Hat 9.0 and PostgreSQL 7.3 and try to\nperformance tune the installation myself, or should I\nbuy Red Hat Advanced Server and install Red Hat Database 2.1? (Let’s say\nmoney is no object)\n \nSo, does anyone here have any experience using RH AS and DB 2.1?\n \nAny advice would be much appreciated.\n \nTIA\n \nBalazs",
"msg_date": "Sat, 12 Jul 2003 23:35:38 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "On Sunday 13 July 2003 12:05, Balazs Wellisch wrote:\n> Hi all,\n> However, to be able to justify the move I will have to demonstrate that\n> PostgreSQL is up to par with MSSQL and MySQL when it comes to\n> performance. After having read through the docs and the lists it seems\n> obvious that PostgreSQL is not configured for high performance out of\n> the box. I don't have months to learn the ins and outs of PostgreSQL\n> performance tuning so I looked around to see if there are any\n> preconfigured solutions out there.\n\n If postgresql performance is going to be a concern, concurrency \nconsiderations with mysql will be even bigger concern. Postgresql can be \ntuned. For achieving good concurrency with mysql, you might have to redesign \nyour app.\n\nIn general, this list can help you to tune the things. Shouldn't be that big \nconcern.\n\n>\n> I found that Red Hat Database 2.1 comes with PostgreSQL installed.\n> However, as far as I can tell it comes with postgreSQL 7.2 and it\n> requires Red Hat 8.0 or Red Hat Advanced Server which is based on Red\n> Hat 7.2. Would I be better off installing Red Hat 9.0 and PostgreSQL 7.3\n> and try to performance tune the installation myself, or should I buy Red\n> Hat Advanced Server and install Red Hat Database 2.1? (Let's say money\n> is no object)\n\nI would rather vote for RH-AS with postgresql 7.4 devel. Former for it's \nbig-app tunings out of the box and later for it's performance.\n\nOf course best way is to try it out yourself. Even vanilaa distro. on good \nhardware should be plenty good..\n\n Shridhar\n\n",
"msg_date": "Sun, 13 Jul 2003 15:51:57 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "Balazs Wellisch wrote:\n> I don't have months to learn the ins and outs of PostgreSQL\n> performance tuning so I looked around to see if there are any\n> preconfigured solutions out there.\n\nI don't know of a preconfigured solution. Generally speaking, the best \nconfiguration will be highly dependent on your hardware, data, and \napplication.\n\n> Hat Advanced Server and install Red Hat Database 2.1? (Let's say money\n> is no object)\n\nThere are many Linux and other OS distributions that will work just \nfine. You may need to tweak a few kernel configuration parameters, but \nthat's not too difficult; see:\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=kernel-resources.html\n\nI would *not* use the default version of Postgres shipped with any \nparticular distribution. Use 7.3.3 because that is the latest released \nversion. Or, as Shridhar mentioned in his post, the are a number of \npretty significant performance improvements in 7.4 (which is in feature \nfreeze and scheduled to go into beta on 21 July). If you are in an \nexploratory/test phase rather than production right now, I'd say use the \n7.4 beta for your comparisons.\n\nIf money is truly not a problem, but time is, my advice is to hire a \nconsultant. There are probably several people on this list that can fill \nthat role for you. Otherwise read the archives and ask lots of specific \nquestions.\n\nJoe\n\n",
"msg_date": "Sun, 13 Jul 2003 10:30:43 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "\n> On Sunday 13 July 2003 12:05, Balazs Wellisch wrote:\n> > Hi all,\n> > However, to be able to justify the move I will have to demonstrate that\n> > PostgreSQL is up to par with MSSQL and MySQL when it comes to\n> > performance. After having read through the docs and the lists it seems\n> > obvious that PostgreSQL is not configured for high performance out of\n> > the box. I don't have months to learn the ins and outs of PostgreSQL\n> > performance tuning so I looked around to see if there are any\n> > preconfigured solutions out there.\n>\n> If postgresql performance is going to be a concern, concurrency\n> considerations with mysql will be even bigger concern. Postgresql can be\n> tuned. For achieving good concurrency with mysql, you might have to\nredesign\n> your app.\n>\n\nYes, we still may use MySQL in certain situations, but we are looking at\nPostgreSQL for concurrency and other reasons such as the much more complete\nset of features it provides. And now that we found PostgreSQL Manager\n(http://www.ems-hitech.com/pgmanager) it's even up to par with MSSQL in ease\nof use!\n\n\n> In general, this list can help you to tune the things. Shouldn't be that\nbig\n> concern.\n>\n\nThat's good to hear!\n\n\n> >\n> > I found that Red Hat Database 2.1 comes with PostgreSQL installed.\n> > However, as far as I can tell it comes with postgreSQL 7.2 and it\n> > requires Red Hat 8.0 or Red Hat Advanced Server which is based on Red\n> > Hat 7.2. Would I be better off installing Red Hat 9.0 and PostgreSQL 7.3\n> > and try to performance tune the installation myself, or should I buy Red\n> > Hat Advanced Server and install Red Hat Database 2.1? (Let's say money\n> > is no object)\n>\n> I would rather vote for RH-AS with postgresql 7.4 devel. Former for it's\n> big-app tunings out of the box and later for it's performance.\n>\n\nCould you enumerate what those settings are? What should I be looking at as\nfar as kernel, file system, etc. goes?\n\n\n> Of course best way is to try it out yourself. Even vanilaa distro. on\ngood\n> hardware should be plenty good..\n>\n> Shridhar\n>\n\nThank you for your advice. It's much appriciated.\n\nBalazs\n\n\n",
"msg_date": "Sun, 13 Jul 2003 12:30:44 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "> On Sun, 2003-07-13 at 01:35, Balazs Wellisch wrote:\n> > Hi all,\n> >\n> >\n> >\n> > Iļæ½m in the process of initiating a movement in our company to move\n> > towards open source software use. As part of this movement I will be\n> > recommending PostgreSQL as an alternative to the currently used MSSQL.\n> > Iļæ½m going with PostgreSQL over MySQL because of the much more complete\n> > feature set it provides. (After having used MSSQL for quite some time\n> > not having triggers, foreign keys, sub selects, etc. is not an\n> > option.)\n>\n> Note that I've read a couple of times from Tom Lane (one of the\n> core team) that FKs are a serous performance drag, so I'd drop\n> them after the s/w has been in production long enough to work\n> out the kinks.\n>\n\nThat's interesting, I didn't know that. Any idea how much of a performance\ndrag we're talking about?\n\n\n> > However, to be able to justify the move I will have to demonstrate\n> > that PostgreSQL is up to par with MSSQL and MySQL when it comes to\n> > performance. After having read through the docs and the lists it seems\n> > obvious that PostgreSQL is not configured for high performance out of\n> > the box. I donļæ½t have months to learn the ins and outs of PostgreSQL\n> > performance tuning so I looked around to see if there are any\n> > preconfigured solutions out there.\n>\n> http://www.varlena.com/GeneralBits/\n> http://www.varlena.com/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n>\n\nThose links are great!!! Thank you for bringing them to my attantion. And a\nBIG thank you to the authors (Josh Berkus & Shridhar Daithankar) for making\nthis available. I've been looking for an authoritative and comprehensive\nsource for performance tuning tips but haven't found much except for little\ntidbits here and there. This is very nice.\n\n\n> Me, I'd install Debian, but I understand the comfort level created\n> by RH.\n>\n\nDon't know much about Debian, but we've been working with RH for years. I've\nhad nothing but good experiences with them. (Except maybe for RH8) The new\nEnterprise direction they're going in is exectly what we need. Longer\ntesting cycles and better tuned distributions are good for businesses like\nus. We don't necessarily need the latest and greates we need the latest and\nmost stable to guarantee the highest return on our investment. But, this\ndiscussion is for another list... :)\n\nThanks for your advice. This list has proved to be a great asset so far.\n\nBalazs\n\n\n",
"msg_date": "Sun, 13 Jul 2003 12:42:29 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "\n> There are many Linux and other OS distributions that will work just\n> fine. You may need to tweak a few kernel configuration parameters, but\n> that's not too difficult; see:\n>\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=kernel-resources.html\n>\n\nYes, I looked at the online documentation but found it a little too generic.\nAlthough it gives me good idea of where to look to adjust performance\nrelated parameters I need a little more specific advise. I just don't have\nthe time to tweak and test different configurations for months to see what\nworks and what doesn't. Ideally, I'd love to run my own benchmarks and\nbecome an expert at postgresql, but unfortunately in the real world I have\ndeadlines to meet and clients to appease. So, I was hoping someone would\nhave some real world experiences to share running postgresql on RH in an\nenterprise environment.\n\n\n> I would *not* use the default version of Postgres shipped with any\n> particular distribution. Use 7.3.3 because that is the latest released\n> version. Or, as Shridhar mentioned in his post, the are a number of\n> pretty significant performance improvements in 7.4 (which is in feature\n> freeze and scheduled to go into beta on 21 July). If you are in an\n> exploratory/test phase rather than production right now, I'd say use the\n> 7.4 beta for your comparisons.\n>\n\nWell, I could start by testing 7.4, however I'd have to go back to the\nstable version once we're ready to use it a production environment. So, I\nmight as well stick with eveluating the production version.\n\n\n> If money is truly not a problem, but time is, my advice is to hire a\n> consultant. There are probably several people on this list that can fill\n> that role for you. Otherwise read the archives and ask lots of specific\n> questions.\n>\n\nOnce we're ready to go with postgresql in a production environment we may\nindeed need to hire a consultant. Any suggestions whom I should contact?\n(We're in the San Diego area)\n\nThank you for your advice.\n\nBalazs\n\n\n",
"msg_date": "Sun, 13 Jul 2003 13:04:48 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "Balazs Wellisch wrote:\n>>I would *not* use the default version of Postgres shipped with any\n>>particular distribution. Use 7.3.3 because that is the latest released\n>>version. Or, as Shridhar mentioned in his post, the are a number of\n>>pretty significant performance improvements in 7.4 (which is in feature\n>>freeze and scheduled to go into beta on 21 July). If you are in an\n>>exploratory/test phase rather than production right now, I'd say use the\n>>7.4 beta for your comparisons.\n> \n> Well, I could start by testing 7.4, however I'd have to go back to the\n> stable version once we're ready to use it a production environment. So, I\n> might as well stick with eveluating the production version.\n\nHow soon do you think you'll be in production? PostgreSQL beta testing \nusually seems to run about 2 months or so -- if you won't be in \nproduction before October, it is a good bet that Postgres 7.4 will be \nout or at least in release candidate by then.\n\nBut it really depends on your specific application. If you use lots of \n\"WHERE foo IN (SELECT ...)\" type queries, you'll need to rewrite them in \n7.3.3 or earlier, but in 7.4 they will probably work fine. Also, if you \ndo much in the way of aggregate queries for reporting, 7.4 will likely \ngive you a significant performance boost.\n\n>>If money is truly not a problem, but time is, my advice is to hire a\n>>consultant. There are probably several people on this list that can fill\n>>that role for you. Otherwise read the archives and ask lots of specific\n>>questions.\n> \n> Once we're ready to go with postgresql in a production environment we may\n> indeed need to hire a consultant. Any suggestions whom I should contact?\n> (We're in the San Diego area)\n> \n\nUm, actually, I live in the San Diego area ;-)\n\nJoe\n\n\n",
"msg_date": "Sun, 13 Jul 2003 13:25:27 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "On Sun, Jul 13, 2003 at 12:42:29PM -0700, Balazs Wellisch wrote:\n> > On Sun, 2003-07-13 at 01:35, Balazs Wellisch wrote:\n\n> > Note that I've read a couple of times from Tom Lane (one of the\n> > core team) that FKs are a serous performance drag, so I'd drop\n> > them after the s/w has been in production long enough to work\n> > out the kinks.\n> >\n> \n> That's interesting, I didn't know that. Any idea how much of a performance\n> drag we're talking about?\n\nForeign keys in any database are going to cost you something, because\nthey require a lookup in other tables.\n\nThe big hit from FKs in PostgreSQL used to be that they caused\ndeadlocks in older versions. I _think_ this is fixed by default in\n7.3.3; if not, there's a patch floating around for the problem. The\nrepair is definitely in 7.4.\n\nThat said, if speed is your goal, FKs are always going to be a cost\nfor you. OTOH, people who try to handle this sort of thing in the\napplication come to regret it. You probably want to look somewhere\nelse to solve your performance difficulties from FKs.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 14 Jul 2003 07:44:24 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
}
] |
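To make the foreign-key cost discussed above concrete, here is a hypothetical two-table sketch; the table and column names are invented for illustration and do not come from the thread. Every insert into the referencing table fires a trigger that looks up the matching row in the referenced table, and in the older releases mentioned above that lookup also took a row lock (SELECT ... FOR UPDATE), which is where the deadlock reports came from:

    -- hypothetical schema, for illustration only
    CREATE TABLE customers (
        id   integer PRIMARY KEY,
        name text
    );

    CREATE TABLE orders (
        id          serial PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers (id),
        total       numeric
    );

    -- this insert pays for an extra lookup on customers
    -- to verify that customer_id 42 actually exists
    INSERT INTO orders (customer_id, total) VALUES (42, 19.95);

Dropping the constraint later, as suggested in the thread, removes that per-row check but also removes the integrity guarantee, so it is a trade-off rather than a free speedup.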
[
{
"msg_contents": "\nHi all! I'm new to Postgresql and I'm trying solve a problem: is there a way to know how many disk-pages are read during a query? Because I found out only how many disk-pages a relation has and I'd like to know if there is a system\ncatalog or something else that stores this information\n\nthanks,\nAndrea Lazzarotto\n\n-----------------------------------------------------\n\nSalve, il messaggio che hai ricevuto\n� stato inviato per mezzo del sistema\ndi web mail interfree. Se anche tu vuoi \nuna casella di posta free visita il\nsito http://club.interfree.it\nTi aspettiamo!\n\n-----------------------------------------------------\n\n\n",
"msg_date": "13 Jul 2003 12:18:59 -0000",
"msg_from": "[email protected] ()",
"msg_from_op": true,
"msg_subject": "Help disk-pages"
},
{
"msg_contents": "\nSee postgres -t and the statistics tables to see block read, and the\nchapter on Disk Space Monitor to find disk sizes.\n\n---------------------------------------------------------------------------\n\[email protected] wrote:\n> \n> Hi all! I'm new to Postgresql and I'm trying solve a problem: is there a way to know how many disk-pages are read during a query? Because I found out only how many disk-pages a relation has and I'd like to know if there is a system\n> catalog or something else that stores this information\n> \n> thanks,\n> Andrea Lazzarotto\n> \n> -----------------------------------------------------\n> \n> Salve, il messaggio che hai ricevuto\n> � stato inviato per mezzo del sistema\n> di web mail interfree. Se anche tu vuoi \n> una casella di posta free visita il\n> sito http://club.interfree.it\n> Ti aspettiamo!\n> \n> -----------------------------------------------------\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 21 Jul 2003 15:46:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help disk-pages"
}
] |
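Bruce's pointer to the statistics tables can be made a little more concrete. The block-level statistics views keep cumulative per-table counts of blocks read from disk versus blocks found in the shared buffer cache, so taking a snapshot before and after the query of interest and subtracting gives an approximate per-query figure. A sketch, assuming stats_block_level is enabled in postgresql.conf:

    -- cumulative disk reads vs. buffer hits, per user table
    SELECT relname,
           heap_blks_read, heap_blks_hit,
           idx_blks_read,  idx_blks_hit
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;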
[
{
"msg_contents": "I'm not an SQL or PostgreSQL expert.\n\nI'm getting abysmal performance on a nested query and\nneed some help on finding ways to improve the performance:\n\nBackground:\n RH 8.0 dual-CPU machine (1.2GHz athlon)\n Postgresql 7.2\n 1GB ram\n (Machine is dedicated to postgres, so there's\n not much else running.)\n\nThe table has ~500K rows.\n\nTable definition:\n\n lab.devel.configdb=# \\d attributes_table\n Table \"attributes_table\"\n Column | Type | Modifiers \n --------+--------------------------+---------------\n id | character varying(64) | not null\n name | character varying(64) | not null\n units | character varying(32) | \n value | text | \n time | timestamp with time zone | default now()\n Indexes: id_index,\n name_index\n Primary key: attributes_table_pkey\n Triggers: trigger_insert\n\nView definition:\n lab.devel.configdb=# \\d attributes;\n View \"attributes\"\n Column | Type | Modifiers \n --------+-----------------------+-----------\n id | character varying(64) | \n name | character varying(64) | \n units | character varying(32) | \n value | text | \n View definition: SELECT attributes_table.id,\n attributes_table.name, attributes_table.units,\n attributes_table.value FROM attributes_table;\n\nQuery:\n\n select * from attributes_table where id in (select id from\n attributes where (name='obsid') and (value='oid00066'));\n\nNow, the inner SELECT is fast:\n lab.devel.configdb=# explain analyze select id from attributes\n where (name='obsid') and (value='oid00066');\n NOTICE: QUERY PLAN:\n\n Index Scan using name_index on attributes_table\n (cost=0.00..18187.48 rows=15 width=25)\n (actual time=0.33..238.06 rows=2049 loops=1)\n Total runtime: 239.28 msec\n\n EXPLAIN\n\nBut the outer SELECT insists on using a sequential scan [it should\npick up about 20K-40K rows (normally, access is through a\nscript].\n\nHow slow? Slow enough that:\n\n explain analyze select * from attributes_table where id in\n (select id from attributes where (name='obsid') and\n (value='oid00066'));\n\nhasn't completed in the last 15 minutes.\n\nRemoving the analyze gives:\n\nlab.devel.configdb=# explain select * from attributes_table where\n id in (select id from attributes where (name='obsid') and\n (value='oid00066'));\n NOTICE: QUERY PLAN:\n\n Seq Scan on attributes_table \n (cost=100000000.00..8873688920.07 rows=241201 width=59)\n SubPlan\n -> Materialize (cost=18187.48..18187.48 rows=15 width=25)\n -> Index Scan using name_index on attributes_table\n (cost=0.00..18187.48 rows=15 width=25)\n\n EXPLAIN\n\nObviously, something is forcing the outer select into a\nsequential scan, which is what I assume is the bottleneck\n(see above about lack of expert-ness...).\n\nI've played with the settings in postgresql.conf, using\nthe on-line performance tuning guide:\n\n shared_buffers = 8192 # 2*max_connections, min 16\n max_fsm_relations = 1000 # min 10, fsm is free space map\n max_fsm_pages = 10000 # min 1000, fsm is free space map\n max_locks_per_transaction = 128 # min 10\n wal_buffers = 64 # min 4\n sort_mem = 128 # min 32\n vacuum_mem = 4096 # min 1024\n wal_files = 32 # range 0-64 (default was 0)\n effective_cache_size = 96000 # default in 8k pages\n random_page_cost = 3\n\nbut haven't noticed an significant change with these settings\nover more conservative settings.\n\nAny suggestions? 
Is there a better way to phrase the query\nthat would provide order-of-magnitude improvement?\n\nThanks!\nSteve\n\n-- \nSteve Wampler -- [email protected]\nQuantum materiae materietur marmota monax si marmota\n monax materiam possit materiari?\n",
"msg_date": "13 Jul 2003 11:05:15 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving a simple query?"
},
{
"msg_contents": "> I'm not an SQL or PostgreSQL expert.\n>\n> I'm getting abysmal performance on a nested query and\n> need some help on finding ways to improve the performance:\n[snip]\n> select * from attributes_table where id in (select id from\n> attributes where (name='obsid') and (value='oid00066'));\n\nThis is the classic IN problem (much improved in 7.4 dev I believe). The\nrecommended approach is to rewrite the query as an EXISTS form if\npossible. See the mailing list archives for plenty of examples.\n\nCould you not rewrite this as a simple join though?\n\n- Richard\n",
"msg_date": "Sun, 13 Jul 2003 20:09:17 +0100 (BST)",
"msg_from": "\"Richard Huxton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "> select * from attributes_table where id in (select id from\n> attributes where (name='obsid') and (value='oid00066'));\n\nCan you convert it into a join? 'where in' clauses tend to slow pgsql\ndown. \n--\nMike Nolan\n",
"msg_date": "Sun, 13 Jul 2003 14:54:40 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "On Sun, Jul 13, 2003 at 08:09:17PM +0100, Richard Huxton wrote:\n> > I'm not an SQL or PostgreSQL expert.\n> >\n> > I'm getting abysmal performance on a nested query and\n> > need some help on finding ways to improve the performance:\n> [snip]\n> > select * from attributes_table where id in (select id from\n> > attributes where (name='obsid') and (value='oid00066'));\n> \n> This is the classic IN problem (much improved in 7.4 dev I believe). The\n> recommended approach is to rewrite the query as an EXISTS form if\n> possible. See the mailing list archives for plenty of examples.\n> \n> Could you not rewrite this as a simple join though?\n\nHmmm, I don't see how. Then again, I'm pretty much the village\nidiot w.r.t. SQL...\n\nThe inner select is locating a set of (2049) ids (actually from\nthe same table, since 'attributes' is just a view into\n'attributes_table'). The outer select is then locating all\nrecords (~30-40K) that have any of those ids. Is that really\nsomething a JOIN could be used for?\n\n-Steve\n-- \nSteve Wampler -- [email protected]\nQuantum materiae materietur marmota monax si marmota\n monax materiam possit materiari?\n",
"msg_date": "Sun, 13 Jul 2003 13:46:10 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "Steve Wampler kirjutas P, 13.07.2003 kell 23:46:\n> On Sun, Jul 13, 2003 at 08:09:17PM +0100, Richard Huxton wrote:\n> > > I'm not an SQL or PostgreSQL expert.\n> > >\n> > > I'm getting abysmal performance on a nested query and\n> > > need some help on finding ways to improve the performance:\n> > [snip]\n> > > select * from attributes_table where id in (select id from\n> > > attributes where (name='obsid') and (value='oid00066'));\n> > \n> > This is the classic IN problem (much improved in 7.4 dev I believe). The\n> > recommended approach is to rewrite the query as an EXISTS form if\n> > possible. See the mailing list archives for plenty of examples.\n> > \n> > Could you not rewrite this as a simple join though?\n> \n> Hmmm, I don't see how. Then again, I'm pretty much the village\n> idiot w.r.t. SQL...\n> \n> The inner select is locating a set of (2049) ids (actually from\n> the same table, since 'attributes' is just a view into\n> 'attributes_table'). The outer select is then locating all\n> records (~30-40K) that have any of those ids. Is that really\n> something a JOIN could be used for?\n\nThere may be some subtle differences, but most likely the 'join' form\nwis like this:\n\nselect at.*\n from attributes_table at,\n attributes a\n where at.id = a.id\n and a.name='obsid'\n and a.value='oid00066'\n\n--------------\nHannu\n\n",
"msg_date": "14 Jul 2003 02:58:28 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "> > Could you not rewrite this as a simple join though?\n> \n> Hmmm, I don't see how. Then again, I'm pretty much the village\n> idiot w.r.t. SQL...\n> \n> The inner select is locating a set of (2049) ids (actually from\n> the same table, since 'attributes' is just a view into\n> 'attributes_table'). The outer select is then locating all\n> records (~30-40K) that have any of those ids. Is that really\n> something a JOIN could be used for?\n\nThis may be a question for SQL theoretists, but I don't think I've ever\nrun across a query with a 'where in' clause that couldn't be written\nas a join. I think linguistically 'where in' may even be a special \ncase of 'join'.\n\nYet another question for the theoretists: Would it be possible to optimize\na 'where in' query by rewriting it as a join?\n--\nMike Nolan\n",
"msg_date": "Sun, 13 Jul 2003 20:23:47 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "At 01:46 PM 7/13/03 -0700, Steve Wampler wrote:\n\n The following left join should work if I've done my select right, you \nmight want to play with a left versus right to see which will give you a \nbetter result, but this query should help:\n\n SELECT * FROM attributes_table att LEFT JOIN attributes at ON (at.name = \n'obsid' AND at.value = 'oid00066') WHERE att.id = at.id;\n\n>On Sun, Jul 13, 2003 at 08:09:17PM +0100, Richard Huxton wrote:\n> > > I'm not an SQL or PostgreSQL expert.\n> > >\n> > > I'm getting abysmal performance on a nested query and\n> > > need some help on finding ways to improve the performance:\n> > [snip]\n> > > select * from attributes_table where id in (select id from\n> > > attributes where (name='obsid') and (value='oid00066'));\n> >\n> > This is the classic IN problem (much improved in 7.4 dev I believe). The\n> > recommended approach is to rewrite the query as an EXISTS form if\n> > possible. See the mailing list archives for plenty of examples.\n> >\n> > Could you not rewrite this as a simple join though?\n>\n>Hmmm, I don't see how. Then again, I'm pretty much the village\n>idiot w.r.t. SQL...\n>\n>The inner select is locating a set of (2049) ids (actually from\n>the same table, since 'attributes' is just a view into\n>'attributes_table'). The outer select is then locating all\n>records (~30-40K) that have any of those ids. Is that really\n>something a JOIN could be used for?\n>\n>-Steve\n>--\n>Steve Wampler -- [email protected]\n>Quantum materiae materietur marmota monax si marmota\n> monax materiam possit materiari?\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sun, 13 Jul 2003 23:31:01 -0300",
"msg_from": "Chris Bowlby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
},
{
"msg_contents": "At 11:31 PM 7/13/03 -0300, Chris Bowlby wrote:\n\n Woops, this might not go through via the address I used :> (not \nsubscribed with that address)..\n\n>At 01:46 PM 7/13/03 -0700, Steve Wampler wrote:\n>\n> The following left join should work if I've done my select right, you \n> might want to play with a left versus right to see which will give you a \n> better result, but this query should help:\n>\n> SELECT * FROM attributes_table att LEFT JOIN attributes at ON (at.name = \n> 'obsid' AND at.value = 'oid00066') WHERE att.id = at.id;\n>\n>>On Sun, Jul 13, 2003 at 08:09:17PM +0100, Richard Huxton wrote:\n>> > > I'm not an SQL or PostgreSQL expert.\n>> > >\n>> > > I'm getting abysmal performance on a nested query and\n>> > > need some help on finding ways to improve the performance:\n>> > [snip]\n>> > > select * from attributes_table where id in (select id from\n>> > > attributes where (name='obsid') and (value='oid00066'));\n>> >\n>> > This is the classic IN problem (much improved in 7.4 dev I believe). The\n>> > recommended approach is to rewrite the query as an EXISTS form if\n>> > possible. See the mailing list archives for plenty of examples.\n>> >\n>> > Could you not rewrite this as a simple join though?\n>>\n>>Hmmm, I don't see how. Then again, I'm pretty much the village\n>>idiot w.r.t. SQL...\n>>\n>>The inner select is locating a set of (2049) ids (actually from\n>>the same table, since 'attributes' is just a view into\n>>'attributes_table'). The outer select is then locating all\n>>records (~30-40K) that have any of those ids. Is that really\n>>something a JOIN could be used for?\n>>\n>>-Steve\n>>--\n>>Steve Wampler -- [email protected]\n>>Quantum materiae materietur marmota monax si marmota\n>> monax materiam possit materiari?\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sun, 13 Jul 2003 23:33:26 -0300",
"msg_from": "Chris Bowlby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving a simple query?"
}
] |
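For reference, the EXISTS form that Richard recommends is never actually spelled out in the thread. A sketch against the same tables, intended to return the same rows as the original IN query:

    -- correlated EXISTS version of "WHERE id IN (SELECT ...)";
    -- in 7.3 and earlier this generally plans much better than IN (subselect)
    SELECT at.*
    FROM attributes_table at
    WHERE EXISTS (
        SELECT 1
        FROM attributes a
        WHERE a.id    = at.id
          AND a.name  = 'obsid'
          AND a.value = 'oid00066'
    );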
[
{
"msg_contents": "\n> The most important thing seems to be to increase shared_buffers. On my\n> RH7.3 machine here, Linux is configured with shmmax = 32MB which allows me\n> a value of just under 4000 for shared_buffers (3900 works, 3950 doesn't).\n> If your selects return large amounts of data, you'll probably also need to\n> increase sort_mem (I use a value of 1024 so a query would have to return\n> more that 1MB of data before the sort (assuming there is a order by clause\n> to cause a sort) starts paging stuff out disk.\n> >\n> > I found that Red Hat Database 2.1 comes with PostgreSQL installed.\n> > However, as far as I can tell it comes with postgreSQL 7.2 and it\n> > requires Red Hat 8.0 or Red Hat Advanced Server which is based on Red\n> > Hat 7.2. Would I be better off installing Red Hat 9.0 and PostgreSQL 7.3\n> > and try to performance tune the installation myself, or should I buy Red\n> > Hat Advanced Server and install Red Hat Database 2.1? (Let's say money\n> > is no object)\n>\n>\n> Alternatively, you simply compile 7.3.3 from source. I've upgraded most my\n> machines that way.\n>\n\nUnfortunatelly, compiling from source is not really an option for us. We use\nRPMs only to ease the installation and upgrade process. We have over a\nhundred servers to maintaine and having to compile and recompile software\neverytime a new release comes out would be waaaaay too much work.\n\n\n> >\n> > So, does anyone here have any experience using RH AS and DB 2.1?\n>\n> Are RH still selling DB 2.1? I can't find it listed on their web site.\n> --\n\nYes, it's available for free download. The documentation is here:\nhttp://www.redhat.com/docs/manuals/database/. I'd welcome your oppinions on\nthis product.\n\nThank you for your comments.\n\nBalazs\n\n\n",
"msg_date": "Sun, 13 Jul 2003 12:51:02 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "On Sun, Jul 13, 2003 at 12:51:02PM -0700, Balazs Wellisch wrote:\n> > Alternatively, you simply compile 7.3.3 from source. I've upgraded most my\n> > machines that way.\n> >\n> \n> Unfortunatelly, compiling from source is not really an option for us. We use\n> RPMs only to ease the installation and upgrade process. We have over a\n> hundred servers to maintaine and having to compile and recompile software\n> everytime a new release comes out would be waaaaay too much work.\n \nIf you aren't settled on OS yet, take a look at FreeBSD, or one of the\nlinuxes that have better app management. Keeping pgsql up-to-date using\nports on FreeBSD is pretty painless (for that matter, so is keeping the\nOS itself up-to-date).\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sun, 13 Jul 2003 17:32:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "On Monday 14 July 2003 01:21, Balazs Wellisch wrote:\n> Unfortunatelly, compiling from source is not really an option for us. We\n> use RPMs only to ease the installation and upgrade process. We have over a\n> hundred servers to maintaine and having to compile and recompile software\n> everytime a new release comes out would be waaaaay too much work.\n\nUse checkinstall. Simple. Google for more information.\n\nMaking your own rpms isn't that big deal..:-)\n\n",
"msg_date": "Mon, 14 Jul 2003 12:07:32 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
},
{
"msg_contents": "On Sun, Jul 13, 2003 at 12:51:02PM -0700, Balazs Wellisch wrote:\n> \n> Unfortunatelly, compiling from source is not really an option for us. We use\n> RPMs only to ease the installation and upgrade process. We have over a\n> hundred servers to maintaine and having to compile and recompile software\n> everytime a new release comes out would be waaaaay too much work.\n\nIt's not clear that the RPMs will help you in ease of upgrade. More\nprecisely, be real sure you dump your database before upgrading major\nversions (e.g. 7.3.x to 7.4.x).\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 14 Jul 2003 07:45:31 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
}
] |
[
{
"msg_contents": "\nI've got a simple nested query:\n\n select * from attributes where id in (select id from\n attributes where (name='obsid') and (value='oid00066'));\n\nthat performs abysmally. I've heard this described as the\n'classic WHERE IN' problem.\n\nIs there a better way to obtain the same results? The inner\nselect identifies a set of ids (2049 of them, to be exact)\nthat are then used to locate records that have the same id\n(about 30-40K of those, including the aforementioned 2049).\n\nThanks!\n-Steve\n\n-- \nSteve Wampler -- [email protected]\nQuantum materiae materietur marmota monax si marmota\n monax materiam possit materiari?\n",
"msg_date": "Sun, 13 Jul 2003 14:50:42 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replacing a simple nested query?"
},
{
"msg_contents": "Steve Wampler wrote:\n> I've got a simple nested query:\n> \n> select * from attributes where id in (select id from\n> attributes where (name='obsid') and (value='oid00066'));\n> \n> that performs abysmally. I've heard this described as the\n> 'classic WHERE IN' problem.\n\nI may be missing something, but why can't you just do:\n select * from attributes where name='obsid' and value='oid00066';\n?\n\nJoe\n\n",
"msg_date": "Sun, 13 Jul 2003 15:01:13 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replacing a simple nested query?"
},
{
"msg_contents": "On Sun, 2003-07-13 at 14:50, Steve Wampler wrote:\n> I've got a simple nested query:\n> \n> select * from attributes where id in (select id from\n> attributes where (name='obsid') and (value='oid00066'));\n> \n> that performs abysmally. I've heard this described as the\n> 'classic WHERE IN' problem.\n> \n> Is there a better way to obtain the same results? The inner\n> select identifies a set of ids (2049 of them, to be exact)\n> that are then used to locate records that have the same id\n> (about 30-40K of those, including the aforementioned 2049).\n\nFor the record, Joe Conway and Hannu Krosing both provided\nthe same solution:\n\n select at.* from attributes_table at, attributes a\n where at.id = a.id and a.name='obsid' and a.value='oid00066';\n\nwhich is several orders of infinity faster than than my naive\napproach above:\n-------------------------------------------------------------\nlab.devel.configdb=# explain analyze select * from\n attributes_table where id in (select id from attributes\n where (name='obsid') and (value='oid00066')) order by id;\nNOTICE: QUERY PLAN:\n\nIndex Scan using id_index on attributes_table (cost=0.00..8773703316.10\nrows=241201 width=59) (actual time=136297.91..3418016.04 rows=32799\nloops=1)\n SubPlan\n -> Materialize (cost=18187.48..18187.48 rows=15 width=25) (actual\ntime=0.01..1.68 rows=1979 loops=482402)\n -> Index Scan using name_index on attributes_table \n(cost=0.00..18187.48 rows=15 width=25) (actual time=0.27..251.95\nrows=2049 loops=1)\nTotal runtime: 3418035.38 msec\n--------------------------------------------------------------\nlab.devel.configdb=# explain analyze select at.* from\n attributes_table at, attributes a\n where at.id = a.id and a.name='obsid' and a.value='oid00066';\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..18739.44 rows=217 width=84) (actual\ntime=0.76..1220.65 rows=32799 loops=1)\n -> Index Scan using name_index on attributes_table \n(cost=0.00..18187.48 rows=15 width=25) (actual time=0.47..507.31\nrows=2049 loops=1)\n -> Index Scan using id_index on attributes_table at \n(cost=0.00..35.80 rows=12 width=59) (actual time=0.11..0.31 rows=16\nloops=2049)\nTotal runtime: 1235.42 msec\n-------------------------------------------------------------------\n\nMy thanks to both Joe and Hannu!\nSteve\n-- \nSteve Wampler -- [email protected]\nQuantum materiae materietur marmota monax si marmota\n monax materiam possit materiari?\n",
"msg_date": "14 Jul 2003 06:38:18 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Replacing a simple nested query?"
}
] |
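One caveat on the join rewrite above: unlike the IN form, a plain join returns an attributes_table row once for every matching attribute row that shares its id. In this data set the counts happened to agree (32799 rows from both plans), but where several matching rows per id are possible, adding DISTINCT (or using an EXISTS subquery) restores the original semantics:

    -- duplicate-safe variant of the join form
    SELECT DISTINCT at.*
    FROM attributes_table at, attributes a
    WHERE at.id   = a.id
      AND a.name  = 'obsid'
      AND a.value = 'oid00066';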
[
{
"msg_contents": "Hello all!\n I'm a new to Postgresql , I have never used it before.\n I am having an issue with configure the postgresql.conf file.\n The machine itself is a\n CPU= 2.66GHz P4 w/\n Memory= 2G\n Maybe you can tell me how to configure these parameters.\n shared_buffers=\n max_fsm_relations=\n max_fsm_pages=\n max_locks_per_transaction=\n wal_buffers=\n sort_mem=\n vacuum_mem=\n wal_files=\n wal_sync_method=\n wal_debug =\n commit_delay =\n commit_siblings =\n checkpoint_segments =\n checkpoint_timeout =\n fsync = true\n enable_seqscan =\n enable_indexscan =\n enable_tidscan =\n enable_sort =\n enable_nestloop =\n enable_mergejoin =\n enable_hashjoin =\n ksqo =\n effective_cache_size =\n random_page_cost =\n cpu_tuple_cost =\n cpu_index_tuple_cost =\n cpu_operator_cost =\n\n Would you mind to send me a copy of examples .(postgresql.conf)\n Thanks\n Sincerely,\n\nChris.Wu\n\n\n",
"msg_date": "Mon, 14 Jul 2003 11:26:37 +0800",
"msg_from": "\"Chris_Wu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to configure the postgresql.conf files"
},
{
"msg_contents": "Hi Chris,\n\nI suggest you read this tech. document:\n\nhttp://www.varlena.com/GeneralBits/\n\nI think you'll it's the best place to start.\n\nCheers\nRudi.\n\nChris_Wu wrote:\n\n>Hello all!\n> I'm a new to Postgresql , I have never used it before.\n> I am having an issue with configure the postgresql.conf file.\n> The machine itself is a\n> CPU= 2.66GHz P4 w/\n> Memory= 2G\n> Maybe you can tell me how to configure these parameters.\n> shared_buffers=\n> max_fsm_relations=\n> max_fsm_pages=\n> max_locks_per_transaction=\n> wal_buffers=\n> sort_mem=\n> vacuum_mem=\n> wal_files=\n> wal_sync_method=\n> wal_debug =\n> commit_delay =\n> commit_siblings =\n> checkpoint_segments =\n> checkpoint_timeout =\n> fsync = true\n> enable_seqscan =\n> enable_indexscan =\n> enable_tidscan =\n> enable_sort =\n> enable_nestloop =\n> enable_mergejoin =\n> enable_hashjoin =\n> ksqo =\n> effective_cache_size =\n> random_page_cost =\n> cpu_tuple_cost =\n> cpu_index_tuple_cost =\n> cpu_operator_cost =\n>\n> Would you mind to send me a copy of examples .(postgresql.conf)\n> Thanks\n> Sincerely,\n>\n>Chris.Wu\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n\n\n",
"msg_date": "Mon, 14 Jul 2003 13:35:01 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to configure the postgresql.conf files"
},
{
"msg_contents": "Chris,\n\nOops - it's changed !\n\nHere's the link's you need:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nCheers\nRudi.\n\nChris_Wu wrote:\n\n>Hello all!\n> I'm a new to Postgresql , I have never used it before.\n> I am having an issue with configure the postgresql.conf file.\n> The machine itself is a\n> CPU= 2.66GHz P4 w/\n> Memory= 2G\n> Maybe you can tell me how to configure these parameters.\n> shared_buffers=\n> max_fsm_relations=\n> max_fsm_pages=\n> max_locks_per_transaction=\n> wal_buffers=\n> sort_mem=\n> vacuum_mem=\n> wal_files=\n> wal_sync_method=\n> wal_debug =\n> commit_delay =\n> commit_siblings =\n> checkpoint_segments =\n> checkpoint_timeout =\n> fsync = true\n> enable_seqscan =\n> enable_indexscan =\n> enable_tidscan =\n> enable_sort =\n> enable_nestloop =\n> enable_mergejoin =\n> enable_hashjoin =\n> ksqo =\n> effective_cache_size =\n> random_page_cost =\n> cpu_tuple_cost =\n> cpu_index_tuple_cost =\n> cpu_operator_cost =\n>\n> Would you mind to send me a copy of examples .(postgresql.conf)\n> Thanks\n> Sincerely,\n>\n>Chris.Wu\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n\n\n",
"msg_date": "Mon, 14 Jul 2003 13:46:51 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to configure the postgresql.conf files"
}
] |
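The tuning articles linked above are the right starting point; before changing anything it also helps to confirm what the server is actually running with, since postgresql.conf edits only take effect after a reload or restart. A quick sketch from psql (the pg_settings view is available as of 7.3):

    -- spot-check the settings that matter most on a 2 GB machine
    SHOW shared_buffers;
    SHOW sort_mem;
    SHOW effective_cache_size;

    -- or list every runtime parameter at once
    SELECT name, setting FROM pg_settings ORDER BY name;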
[
{
"msg_contents": "\nOn 13/07/2003 20:51 Balazs Wellisch wrote:\n\n> [snip]\n> > > So, does anyone here have any experience using RH AS and DB 2.1?\n> >\n> > Are RH still selling DB 2.1? I can't find it listed on their web site.\n> > --\n> \n> Yes, it's available for free download. The documentation is here:\n> http://www.redhat.com/docs/manuals/database/. I'd welcome your oppinions\n> on\n> this product.\n> \n> Thank you for your comments.\n\nIt looks like they just wrote a number of GUI versions of the command line\nutilities. From what I can tell, its still a standard postgresql database\nbehind the scenes.\n\n\n--\nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n\n",
"msg_date": "Mon, 14 Jul 2003 15:06:39 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pgsql - Red Hat Linux - VS MySQL VS MSSQL"
}
] |
[
{
"msg_contents": "Tried to search the list but the search wasn't working.\n\nI have a server running strictly PostgreSQL that I'm trying to tune for performance. The specs are\n\n2 X 2.4 Athlon MP processors\n2G Reg DDR\nFreeBSD 4.8 SMP kernel complied\nPostgreSQL 7.3.3\n4 X 80G IDE Raid 5\n\nMy problem is that I have not totally put my head around the concepts of the shmmax, shmmaxpgs, etc.... As it pertains to my current setup and the shared mem values in postgresql.conf. I'm looking for a good rule of thumb when approaching this. Any help or direction would be greatly appreciated.\n\nThanks\nStephen Howie\n\n\n\n\n\n\nTried to search the list but the search wasn't \nworking.\n \nI have a server running strictly PostgreSQL that \nI'm trying to tune for performance. The specs are\n \n2 X 2.4 Athlon MP processors\n2G Reg DDR\nFreeBSD 4.8 SMP kernel complied\nPostgreSQL 7.3.3\n4 X 80G IDE Raid 5\n \nMy problem is that I have not totally put my head \naround the concepts of the shmmax, shmmaxpgs, etc.... As it pertains to my \ncurrent setup and the shared mem values in postgresql.conf. I'm looking \nfor a good rule of thumb when approaching this. Any help or direction \nwould be greatly appreciated.\n \nThanks\nStephen Howie",
"msg_date": "Mon, 14 Jul 2003 09:31:26 -0500",
"msg_from": "\"Stephen Howie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "On Monday 14 Jul 2003 3:31 pm, Stephen Howie wrote:\n[snip]\n> My problem is that I have not totally put my head around the concepts of\n> the shmmax, shmmaxpgs, etc.... As it pertains to my current setup and the\n> shared mem values in postgresql.conf. I'm looking for a good rule of thumb\n> when approaching this. Any help or direction would be greatly appreciated.\n\nThere are two articles recently posted here:\n\nhttp://www.varlena.com/GeneralBits/\n\nThey should provide a good start.\n-- \n Richard Huxton\n",
"msg_date": "Mon, 14 Jul 2003 17:02:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "Richard-\n\nThat was very helpfull Thanks!\nI still would like some guidance on tunning FreeBSD (shmmax and shmmaxpgs).\nDo I need to even touch these settings?\n\nStephen Howie\n\n>There are two articles recently posted here:\n>\n>http://www.varlena.com/GeneralBits/\n>\n>They should provide a good start.\n>-- \n> Richard Huxton\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 14 Jul 2003 12:23:30 -0500",
"msg_from": "\"Stephen Howie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "\n> I still would like some guidance on tunning FreeBSD (shmmax and\n> shmmaxpgs).\n> Do I need to even touch these settings?\n\nStephen- I have no idea what these are set to by default in FreeBSD, but\nhere's the page that covers changing it in the postgresql docs:\n\nhttp://www.postgresql.org/docs/7.3/static/kernel-resources.html\n\n-Nick\n\n",
"msg_date": "Mon, 14 Jul 2003 12:57:55 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": ">>>>> \"SH\" == Stephen Howie <[email protected]> writes:\n\nSH> Richard-\nSH> That was very helpfull Thanks!\nSH> I still would like some guidance on tunning FreeBSD (shmmax and shmmaxpgs).\nSH> Do I need to even touch these settings?\n\nHere's what I use on FreeBSD 4.7/4.8. The kernel settings don't hurt\nanything being too large for the SHM values, since they are limits,\nnot anything pre-allocated (from my understanding). These settings\nallow for up to 100,000 shared buffers (I currently only use 30,000\nbuffers)\n\n\noptions SYSVMSG #SYSV-style message queues\n\n# only purpose of this box is to run PostgreSQL, which needs tons of shared\n# memory, and some semaphores.\n# Postgres allocates buffers in 8k chunks, so tell Postgres to use about\n# 150 fewer than SHMMAXPGS/2 buffers to leave some room for other Postgres\n# shared memory needs.\noptions SYSVSHM #SYSV-style shared memory\n# Maximum number of shared memory pages system wide.\noptions SHMALL=262144\n# Maximum size, in pages (4k), of a single System V shared memory region.\noptions SHMMAXPGS=262144\n\n# only need semaphores for PostgreSQL\noptions SYSVSEM #SYSV-style semaphores\n# Maximum number of System V semaphores that can be used on the system at\n# one time.\noptions SEMMNI=32\n# Total number of semaphores system wide\noptions SEMMNS=512\n# Maximum number of entries in a semaphore map.\noptions SEMMAP=256\n\n\nAlso, in /etc/sysctl.conf I put\n\n# need lots of files for database\nkern.maxfiles=8000\n# tuning for PostgreSQL\nkern.ipc.shm_use_phys=1\n\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 15 Jul 2003 12:44:37 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "Vivek,\nThanks, for your reply. May I ask what you system setup is like (i.e.\nmemory and such)?\n\n----- Original Message ----- \nFrom: \"Vivek Khera\" <[email protected]>\nNewsgroups: ml.postgres.performance\nTo: <[email protected]>\nSent: Tuesday, July 15, 2003 11:44 AM\nSubject: Re: [PERFORM] Tunning FreeeBSD and PostgreSQL\n\n\n> >>>>> \"SH\" == Stephen Howie <[email protected]> writes:\n>\n> SH> Richard-\n> SH> That was very helpfull Thanks!\n> SH> I still would like some guidance on tunning FreeBSD (shmmax and\nshmmaxpgs).\n> SH> Do I need to even touch these settings?\n>\n> Here's what I use on FreeBSD 4.7/4.8. The kernel settings don't hurt\n> anything being too large for the SHM values, since they are limits,\n> not anything pre-allocated (from my understanding). These settings\n> allow for up to 100,000 shared buffers (I currently only use 30,000\n> buffers)\n>\n>\n> options SYSVMSG #SYSV-style message queues\n>\n> # only purpose of this box is to run PostgreSQL, which needs tons of\nshared\n> # memory, and some semaphores.\n> # Postgres allocates buffers in 8k chunks, so tell Postgres to use about\n> # 150 fewer than SHMMAXPGS/2 buffers to leave some room for other Postgres\n> # shared memory needs.\n> options SYSVSHM #SYSV-style shared memory\n> # Maximum number of shared memory pages system wide.\n> options SHMALL=262144\n> # Maximum size, in pages (4k), of a single System V shared memory region.\n> options SHMMAXPGS=262144\n>\n> # only need semaphores for PostgreSQL\n> options SYSVSEM #SYSV-style semaphores\n> # Maximum number of System V semaphores that can be used on the system at\n> # one time.\n> options SEMMNI=32\n> # Total number of semaphores system wide\n> options SEMMNS=512\n> # Maximum number of entries in a semaphore map.\n> options SEMMAP=256\n>\n>\n> Also, in /etc/sysctl.conf I put\n>\n> # need lots of files for database\n> kern.maxfiles=8000\n> # tuning for PostgreSQL\n> kern.ipc.shm_use_phys=1\n>\n>\n>\n> -- \n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-240-453-8497\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n",
"msg_date": "Tue, 15 Jul 2003 12:14:43 -0500",
"msg_from": "\"Stephen Howie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": ">>>>> \"SH\" == Stephen Howie <[email protected]> writes:\n\nSH> Vivek,\nSH> Thanks, for your reply. May I ask what you system setup is like (i.e.\nSH> memory and such)?\n\nCurrent box is dual P3 1GHz and 2GB RAM. RAID0+1 on 4 disks. I'm\nabout to order a bigger box, since I'm saturating the disk bandwidth\nas far as I can measure it. I'm thinking along the lines of 8 disks\non RAID0+1 and 4GB RAM. The CPUs twiddle their thumbs a lot, so no\npoint in really beefing that up.\n\n",
"msg_date": "Tue, 15 Jul 2003 14:04:54 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "Vivek Khera wrote:\n> >>>>> \"SH\" == Stephen Howie <[email protected]> writes:\n> \n> SH> Richard-\n> SH> That was very helpfull Thanks!\n> SH> I still would like some guidance on tunning FreeBSD (shmmax and shmmaxpgs).\n> SH> Do I need to even touch these settings?\n> \n> Here's what I use on FreeBSD 4.7/4.8. The kernel settings don't hurt\n> anything being too large for the SHM values, since they are limits,\n> not anything pre-allocated (from my understanding). These settings\n> allow for up to 100,000 shared buffers (I currently only use 30,000\n> buffers)\n\nI think the only downside to making them too big is that you allocate\npage tables and prevent that address range from being used by other\nprocesses. Of course, if you have much less than 4 gigs of RAM in the\nmachine, it probably isn't an issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 21 Jul 2003 16:41:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\n>> not anything pre-allocated (from my understanding). These settings\n>> allow for up to 100,000 shared buffers (I currently only use 30,000\n>> buffers)\n\nBM> I think the only downside to making them too big is that you allocate\nBM> page tables and prevent that address range from being used by other\n\nDoes this apply in general or just on FreeBSD?\n\nBM> processes. Of course, if you have much less than 4 gigs of RAM in the\nBM> machine, it probably isn't an issue.\n\nProbably, but wasting page table entries is never a good idea...\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 22 Jul 2003 10:20:19 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": "Vivek Khera wrote:\n> >>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n> \n> >> not anything pre-allocated (from my understanding). These settings\n> >> allow for up to 100,000 shared buffers (I currently only use 30,000\n> >> buffers)\n> \n> BM> I think the only downside to making them too big is that you allocate\n> BM> page tables and prevent that address range from being used by other\n> \n> Does this apply in general or just on FreeBSD?\n\nLet me tell you how it traditionally worked --- each process has the\nkernel address space accessible at a fixed address --- it has to so the\nprocess can make kernel calls and run those kernel calls in its own\naddress space, though with a kernel stack and data space.\n\nWhat they did with shared memory was to put shared memory in the same\naddress space with the kernel, because everyone had that address range\nmapped into their address space already. If each process had its own\nprivate copy of the kernel page tables, there is bloat in having the\nkernel address space be larger than required. However, if the kernel\npage tables are shared by all processes, then there isn't much bloat,\njust less addressable user memory, and if you don't have anything near 4\ngigs of RAM, it isn't a problem.\n\nI know Linux has pagable shared memory, and you can resize the maximum\nin a running kernel, so it seems they must have abandonded the linkage\nbetween shared page tables and the kernel. This looks interesting:\n\n\thttp://www.linux-tutorial.info/cgi-bin/display.pl?312&0&0&0&3\n\nand the Contents on the left show additional info like the i386 virtual\ndirectory/page tables:\n\n\thttp://www.linux-tutorial.info/cgi-bin/display.pl?261&0&0&0&3\n\nSo it seems Linux has moved in the direction of making shared memory act\njust like ordinary allocated memory, except it is shared, meaning I\nthink each process has its own pages tables for the shared memory. Once\nyou do that, you get the ability to size it however you want, but you\nlose shared page tables, and it can now be swapped out, which can be bad\nfor performance.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Jul 2003 13:29:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\nBM> I know Linux has pagable shared memory, and you can resize the maximum\nBM> in a running kernel, so it seems they must have abandonded the linkage\nBM> between shared page tables and the kernel. This looks interesting:\n\nThanks for the info. You can resize it in FreeBSD as well, using the\nsysctl command to set the various kern.ipc.shm* values.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 22 Jul 2003 14:18:15 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tunning FreeeBSD and PostgreSQL"
}
] |
[
{
"msg_contents": "Greetings,\nWe have several tables (in a PG 7.3.3 database on RH Linux 7.3) with 2M+ \nrows (each row 300-400 bytes in length) that we SELECT into a JDBC \nResultSet for display to the user. We expected that the driver would not \nactually transmit data from the database until the application began \nissuing getXXX() calls. (IIRC, this is the way the Oracle driver works, \nand we had created a buffering mechanism to use it.) Instead, the driver \nappears to be attempting to create the whole rowset in Java memory \nbefore returning, and the application runs out of memory. (Java has been \nconfigured to use up to 1.5G on the machine this occurs on.)\n\nNow the SELECT is preceded by a COUNT of the rows that the same query \nwould return, so perhaps that's what's causing the problem. But the \nquestion is, is this the way a ResultSet is supposed to work? Are there \nany configuration options available that modify this behavior? Are there \ncommercial implementations of PG JDBC that don't have this problem? \n(Shame on me, but I have to ask. :)\n\nAny help will be greatly appreciated!\n\n Rich Cullingford\n [email protected]\n\n",
"msg_date": "Mon, 14 Jul 2003 12:53:18 -0400",
"msg_from": "Rich Cullingford <[email protected]>",
"msg_from_op": true,
"msg_subject": "Java Out-of-memory errors on attempts to read tables with millions\n\tof rows"
},
{
"msg_contents": "I think you want to use a Cursor for browsing the data.\n\nChristoph Nelles\n\n\nAm Montag, 14. Juli 2003 um 18:53 schrieben Sie:\n\nRC> Greetings,\nRC> We have several tables (in a PG 7.3.3 database on RH Linux 7.3) with 2M+ \nRC> rows (each row 300-400 bytes in length) that we SELECT into a JDBC \nRC> ResultSet for display to the user. We expected that the driver would not \nRC> actually transmit data from the database until the application began \nRC> issuing getXXX() calls. (IIRC, this is the way the Oracle driver works, \nRC> and we had created a buffering mechanism to use it.) Instead, the driver \nRC> appears to be attempting to create the whole rowset in Java memory \nRC> before returning, and the application runs out of memory. (Java has been \nRC> configured to use up to 1.5G on the machine this occurs on.)\n\nRC> Now the SELECT is preceded by a COUNT of the rows that the same query \nRC> would return, so perhaps that's what's causing the problem. But the \nRC> question is, is this the way a ResultSet is supposed to work? Are there \nRC> any configuration options available that modify this behavior? Are there \nRC> commercial implementations of PG JDBC that don't have this problem? \nRC> (Shame on me, but I have to ask. :)\n\nRC> Any help will be greatly appreciated!\n\nRC> Rich Cullingford\nRC> [email protected]\n\n\nRC> ---------------------------(end of broadcast)---------------------------\nRC> TIP 9: the planner will ignore your desire to choose an index scan if your\nRC> joining column's datatypes do not match\n\n\n\n-- \nMit freundlichen Grᅵssen\nEvil Azrael mailto:[email protected]\n\n",
"msg_date": "Mon, 14 Jul 2003 19:07:44 +0200",
"msg_from": "Evil Azrael <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Java Out-of-memory errors on attempts to read tables with\n\tmillions of rows"
}
] |
[
{
"msg_contents": "Hi folks-\n\nFor some time, we've been running Postgres with the default configuration &\ngetting adequate performance, but the time has come to tune a bit, so I've\nbeen lurking on this list & gathering notes. Now I'm about ready to make a\nchange & would appreciate it if a few more experienced folks could comment\non whether I appear to be heading in the right direction-\n\nHere's what I'm planning:\n\nIncrease SHMMAX and SHMALL in my kernel to 134217728 (128MB)\n\nIncrease shared_buffers to 8192 (64MB)\n\nIncrease sort_mem to 16384 (16MB)\n\nIncrease effective_cache_size to 65536 (1/2 GB)\n\n\nHere's the environment:\n\nThe Hardware is a dual-processor Athlon 1.2 Ghz box with 1 GB of RAM and the\nDB on SCSI RAID drives.\n\nThe server runs only PostgreSQL\n\nThe database size is about 8GB, with the largest table 2.5 GB, and the two\nmost commonly queried tables at 1 GB each.\n\nThe two most commonly queried tables are usually queried based on a\nnon-unique indexed varchar field typically 20 chars long. The query is a\n\"like\" on people's names with trailing %, so this often gets pushed to seq\nscan or returns several thousand records. (As when someone searches on\n'Jones%'.\n\nRecords from the largest table are always accessed via unique index in\ngroups of 20 or less.\n\nThe OS is Debian Linux kernel 2.4.x (recompiled custom kernel for dual\nprocessor support)\nThe PostgreSQL version is 7.3.2\n\nWe typically have about 30 interactive users on the DB, but they're using a\nshared connection pool of 16. Our main problem appears to be when one of the\nusers fires up a large query and creates a log-jam with resources.\n\n\nMy reasoning is that I'll increase shared_buffers based on anecdotal\nrecommendations I've seen on this list to 64MB. I'll boost the OS SHMMAX to\ntwice that value to allow adequate room for other shared memory needs, thus\nreserving 128MB. Of the remaining memory, 256MB goes to 16 connections *\n16MB sort space, if I leave about 128 MB for headroom, then 1/2 GB should be\nleft available for the effective cache size.\n\nAny thoughts? Is this a sane plan? Are there other parameters I should\nconsider changing first?\n\n\nThanks!\n -Nick\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n",
"msg_date": "Mon, 14 Jul 2003 12:51:44 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sanity check requested"
},
{
"msg_contents": "On 14 Jul 2003 at 12:51, Nick Fankhauser wrote:\n> Any thoughts? Is this a sane plan? Are there other parameters I should\n> consider changing first?\n\nWell, everything seems to be in order and nothing much to suggest I guess. But \nstill..\n\n1. 30 users does not seem to be much of a oevrhead. If possible try doing away \nwith connection pooling. Buta test benchmark run is highly recommended.\n\n2. While increasing sort memory, try 4/8/16 in that order. That way you will \nget a better picture of load behaviour. Though whatever you put appears \nreasonable, having more data always help.\n\n3. I don't know how this affects on SCSI drives, but what file system you are \nusing? Can you try diferent ones? Like reiserfs/ext3 and XFS? See what fits \nyour bill.\n\n4. OK, this is too much but linux kernel 2.6 is in test and has vastly improved \nIO scheduler. May be you should look at it if you are up to experimentation.\n\nHTH\n\nBye\n Shridhar\n\n--\nYou're dead, Jim.\t\t-- McCoy, \"The Tholian Web\", stardate unknown\n\n",
"msg_date": "Tue, 15 Jul 2003 11:12:10 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "Shridhar-\n\nI appreciate your thoughts- I'll be running some before & after tests on\nthis using one of our development/hot-swap boxes, so I'll report the results\nback to the list.\n\nA few more thoughts/questions:\n\n> 1. 30 users does not seem to be much of a oevrhead. If possible\n> try doing away with connection pooling.\n\nThe application needs to scale up gracefully. We actually have about 200\nusers that could decide to log on at the same time- 30 is just a typical\nload. We'd also prefer to have 20,000 subscribers so we can start making a\nliving with this business <g>.\n\n> 2. While increasing sort memory, try 4/8/16 in that order. That\n> way you will get a better picture of load behaviour. Though whatever you\nput appears\n> reasonable, having more data always help.\n\nI'll try that approach while testing. Is it the case that the sort memory is\nallocated for each connection and becomes unavailable to other processes\nwhile the connection exists? If so, since I'm using a connection pool, I\nshould be able to control total usage precisely. Without a connection pool,\nI could start starving the rest of the system for resources if the number of\nusers spiked unexpectedly. Correct?\n\n\n\n> 3. I don't know how this affects on SCSI drives, but what file\n> system you are using? Can you try diferent ones?\n\n> 4. OK, this is too much but linux kernel 2.6 is in test and has\n> vastly improved IO...\n\nI'm using ext2. For now, I'll leave this and the OS version alone. If I\nchange too many variables, I won't be able to discern which one is causing a\nchange. Although I understand that there's an element of art to tuning, I'm\nenough of a neophyte that I don't have a \"feeling\" for the tuning parameters\nyet and hence I have to take a scientific approach of just tweaking a few\nvariables in an otherwise controlled and unchanged environment. If I can't\nreach my goals with the simple approach, I'll consider some of the more\nradical ideas.\n\nAgain, thanks for the ideas- I'll feed the results back after I've done some\ntests\n\n-Nick\n\n\n\n\n\n",
"msg_date": "Thu, 17 Jul 2003 10:41:35 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "Nick,\n\n> I'll try that approach while testing. Is it the case that the sort memory\n> is allocated for each connection and becomes unavailable to other processes\n> while the connection exists? If so, since I'm using a connection pool, I\n> should be able to control total usage precisely. Without a connection pool,\n> I could start starving the rest of the system for resources if the number\n> of users spiked unexpectedly. Correct?\n\nWrong, actually. Sort memory is allocated *per sort*, not per connnection or \nper query. So a single complex query could easily use 4xsort_mem if it has \nseveral merge joins ... and a pooled connection could use many times sort_mem \ndepending on activity. Thus connection pooling does not help you with \nsort_mem usage at all, unless your pooling mechanism can control the rate at \nwhich queries are fed to the planner.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 17 Jul 2003 08:57:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "\n> Wrong, actually. Sort memory is allocated *per sort*, not per\nconnnection or\n> per query. So a single complex query could easily use 4xsort_mem if it\nhas\n> several merge joins ...\n\nThanks for the correction- it sounds like this is one where usage can't be\nprecisely controlled in a dynamic user environment & I just need to get a\nfeel for what works under a load that approximates my production system.\n\n-Nick\n\n",
"msg_date": "Thu, 17 Jul 2003 11:11:41 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "Nick Fankhauser wrote:\n> Thanks for the correction- it sounds like this is one where usage\n> can't be precisely controlled in a dynamic user environment & I just\n> need to get a feel for what works under a load that approximates my\n> production system.\n> \n\nI think the most important point here is that if you set sort_mem too \nhigh, and you have a lot of simultaneous sorts, you can drive the server \ninto swapping, which obviously is a very bad thing. You want it set as \nhigh as possible, but not so high given your usage patterns that you \nwind up swapping.\n\nJoe\n\n\n",
"msg_date": "Thu, 17 Jul 2003 09:40:21 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "On 17 Jul 2003 at 10:41, Nick Fankhauser wrote:\n> I'm using ext2. For now, I'll leave this and the OS version alone. If I\n\nI appreciate your approach but it almost proven that ext2 is not the best and \nfastest out there.\n\nIMO, you can safely change that to reiserfs or XFS. Or course, testing is \nalways recommended.\n\nHTH\n\nBye\n Shridhar\n\n--\nNewton's Little-Known Seventh Law:\tA bird in the hand is safer than one \noverhead.\n\n",
"msg_date": "Fri, 18 Jul 2003 12:05:54 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "I have two tables in the database which are used almost every time\nsomeone makes use of the webpages themselves. The first, is some sort\nof database side parameter list which stores parameters from session to\nsession. While the other, is a table that handles the posting activity\nof all the rooms and chatters.\n\nThe first is required in all authentication with the system and when\nentries are missing you are challenged by the system to prove your\nidentity. This table is based on a randomized order, as in the unique\nnumber changes pseudo randomly and this table sees a reduction in\nentries every hour on the hour as to keep it's information fresh and\nmanageable.\n\nThe other table follows a sequential order and carries more columns of\ninformation. However, this table clears it's entry nightly and with\ncurrent settings will delete roughly a days traffic sitting at 50K rows\nof information.\n\nThe difference is as follows: Without making the use of vacuum every\nhour the parameter table performs very well, showing no loss in service\nor degradation. Since people authenticate more then post, it is safe\nto assume that it removes more rows daily then the posting table.\n\nThe posting table often drags the system down in performance when a day\nhas been skipped, which includes the use of VACUUM ANALYZE EXPLAIN.\nThis seems to be an indication that the process of a daily delete is\nactually a very wise step to take, even if the information itself is not\nneeded for very long.\n\nA VACUUM FULL will correct the issue, but put the site out of commission\nfor roughly 20 minutes as the drive crunches the information.\n\nMy question is, should the purging of rows be done more often then once\na day for both tables. Is this why performance seems to take a hit\nspecifically? As there were too many rows purged for vacuum to\naccurately keep track of?\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Fri, 18 Jul 2003 00:55:12 -0600",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": false,
"msg_subject": "Clearing rows periodically"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> On 17 Jul 2003 at 10:41, Nick Fankhauser wrote:\n> \n>>I'm using ext2. For now, I'll leave this and the OS version alone. If I\n> \n> \n> I appreciate your approach but it almost proven that ext2 is not the best and \n> fastest out there.\n\nAgreed.\n\n> IMO, you can safely change that to reiserfs or XFS. Or course, testing is \n> always recommended.\n\nWe've been using ext3fs for our production systems. (Red Hat Advanced \nServer 2.1)\n\nAnd since your (Nick) system is based on Debian, I have done some rough \ntesting on Debian sarge (testing) (with custom 2.4.20) with ext3fs, \nreiserfs and jfs. Can't get XFS going easily on Debian, though.\n\nI used a single partition mkfs'd with ext3fs, reiserfs and jfs one after \nthe other on an IDE disk. Ran pgbench and osdb-x0.15-0 on it.\n\njfs's has been underperforming for me. Somehow the CPU usage is higher \nthan the other two. As for ext3fs and reiserfs, I can't detect any \nsignificant difference. So if you're in a hurry, it'll be easier to \nconvert your ext2 to ext3 (using tune2fs) and use that. Otherwise, it'd \nbe nice if you could do your own testing, and post it to the list.\n\n-- \nLinux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 \nGNU/Linux\n 2:30pm up 204 days, 5:35, 5 users, load average: 5.50, 5.18, 5.13",
"msg_date": "Fri, 18 Jul 2003 15:08:38 +0800",
"msg_from": "Ang Chin Han <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "\nOn 18/07/2003 07:55 Martin Foster wrote:\n> [snip]\n> A VACUUM FULL will correct the issue, but put the site out of commission\n> for roughly 20 minutes as the drive crunches the information.\n> \n> My question is, should the purging of rows be done more often then once\n> a day for both tables. Is this why performance seems to take a hit\n> specifically? As there were too many rows purged for vacuum to\n> accurately keep track of?\n\nISTR that there are setting in postgresql.conf which affect how many \ntables/rows vacuum can reclaim. The docs say that the default setting of \nmax_fsm_pages is 10000. Maybe this should be increased for your situation?\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Fri, 18 Jul 2003 08:42:15 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clearing rows periodically"
},
{
"msg_contents": "On Fri, Jul 18, 2003 at 12:55:12AM -0600, Martin Foster wrote:\n> The other table follows a sequential order and carries more columns of\n> information. However, this table clears it's entry nightly and with\n> current settings will delete roughly a days traffic sitting at 50K rows\n> of information.\n\n> has been skipped, which includes the use of VACUUM ANALYZE EXPLAIN.\n> This seems to be an indication that the process of a daily delete is\n> actually a very wise step to take, even if the information itself is not\n> needed for very long.\n> \n> A VACUUM FULL will correct the issue, but put the site out of commission\n> for roughly 20 minutes as the drive crunches the information.\n\nDuring your \"clearing period\", why not do the deletes in batches, and\nVACUUM the table periodically. That will allow you to reclaim the\nspace gradually, and ensure that you don't end up with a big \"bald\nspot\". But you probably want to increase your FSM settings. See the\ndocs.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 18 Jul 2003 07:34:40 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clearing rows periodically"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> My question is, should the purging of rows be done more often then once\n> a day for both tables. Is this why performance seems to take a hit\n> specifically?\n\nGiven that the hourly purge seems to work well for you, I'd suggest\ntrying it on both tables.\n\nNon-FULL vacuum is intended to be run *frequently*, say as often as\nyou've updated or deleted 10% to 50% of the rows in a table. Delaying\nit until you've had multiple complete turnovers of the table contents\nwill cost you.\n\n> As there were too many rows purged for vacuum to\n> accurately keep track of?\n\nOnly possible if you don't have the FSM parameters set high enough.\nInfrequent vacuuming means you need more FSM space, btw.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jul 2003 09:31:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clearing rows periodically "
},
{
"msg_contents": "\nThanks for the suggestions in the FS types- especially the Debian oriented\ninfo. I'll start by playing with the memory allocation parameters that I\noriginally listed (seems like they should provide results in a way that is\nunaffected by the disk IO). Then once I have them at optimal values, move on\nto trying different file systems.\n\nI assume that as I make changes that affect the disk IO performance, I'll\nthen need to do some testing to find new values for the IO cost for the\nplanner- Do you folks have some ballpark numbers to start with for this\nbased on your experience? I'm departing in three ways from the simple IDE\nmodel that (I presume) the default random page cost of 4 is based on- The\ndisks are SCSI & RAID and the FS would be different.\n\nAt this point, I can't think of any better way to test this than simply\nrunning my local test suite with various values and recording the wall-clock\nresults. Is there a different approach that might make more sense? (This\nmeans that my results will be skewed to my environment, but I'll post them\nanyway.)\n\nI'll post results back to the list as I get to it- It might be a slow\nprocess Since I spend about 18 hours of each day keeping the business\nrunning, I'll have to cut back on sleep & do this in the other 10 hours. <g>\n\n-NF\n\n\n> Shridhar Daithankar wrote:\n> I appreciate your approach but it almost proven that ext2 is\n> not the best and fastest out there.\n>\n> Agreed.\n\n> Ang Chin Han wrote:\n> We've been using ext3fs for our production systems. (Red Hat Advanced\n> Server 2.1)\n>\n> And since your (Nick) system is based on Debian, I have done some rough\n> testing on Debian sarge (testing) (with custom 2.4.20) with ext3fs,\n> reiserfs and jfs. Can't get XFS going easily on Debian, though.\n>\n> I used a single partition mkfs'd with ext3fs, reiserfs and jfs one after\n> the other on an IDE disk. Ran pgbench and osdb-x0.15-0 on it.\n>\n> jfs's has been underperforming for me. Somehow the CPU usage is higher\n> than the other two. As for ext3fs and reiserfs, I can't detect any\n> significant difference. So if you're in a hurry, it'll be easier to\n> convert your ext2 to ext3 (using tune2fs) and use that. Otherwise, it'd\n> be nice if you could do your own testing, and post it to the list.\n>\n> --\n> Linux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386\n> GNU/Linux\n> 2:30pm up 204 days, 5:35, 5 users, load average: 5.50, 5.18, 5.13\n>\n\n",
"msg_date": "Fri, 18 Jul 2003 09:31:46 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "File systems (RE: Sanity check requested)"
},
{
"msg_contents": "On Fri, 18 Jul 2003, Ang Chin Han wrote:\n\n> Shridhar Daithankar wrote:\n> > On 17 Jul 2003 at 10:41, Nick Fankhauser wrote:\n> > \n> >>I'm using ext2. For now, I'll leave this and the OS version alone. If I\n> > \n> > \n> > I appreciate your approach but it almost proven that ext2 is not the best and \n> > fastest out there.\n> \n> Agreed.\n\nHuh? How can journaled file systems hope to outrun a simple unjournaled \nfile system? There's just less overhead for ext2 so it's quicker, it's \njust not as reliable.\n\nI point you to this link from IBM:\n\nhttp://www-124.ibm.com/developerworks/opensource/linuxperf/iozone/iozone.php\n\nWhile ext3 is a clear loser to jfs and rfs, ext2 wins most of the contests \nagainst both reiser and jfs. Note that xfs wasn't tested here. But in \ngeneral, ext2 is quite fast nowadays.\n\n> \n> > IMO, you can safely change that to reiserfs or XFS. Or course, testing is \n> > always recommended.\n> \n> We've been using ext3fs for our production systems. (Red Hat Advanced \n> Server 2.1)\n> \n> And since your (Nick) system is based on Debian, I have done some rough \n> testing on Debian sarge (testing) (with custom 2.4.20) with ext3fs, \n> reiserfs and jfs. Can't get XFS going easily on Debian, though.\n> \n> I used a single partition mkfs'd with ext3fs, reiserfs and jfs one after \n> the other on an IDE disk. Ran pgbench and osdb-x0.15-0 on it.\n> \n> jfs's has been underperforming for me. Somehow the CPU usage is higher \n> than the other two. As for ext3fs and reiserfs, I can't detect any \n> significant difference. So if you're in a hurry, it'll be easier to \n> convert your ext2 to ext3 (using tune2fs) and use that. Otherwise, it'd \n> be nice if you could do your own testing, and post it to the list.\n\nI would like to see some tests on how they behave on top of large fast \nRAID arrays, like a 10 disk RAID5 or something. It's likely that on a \nsingle IDE drive the most limiting factor is the bandwidth of the drive, \nwhereas on a large array, the limiting factor would likely be the file \nsystem code.\n\n",
"msg_date": "Fri, 18 Jul 2003 09:40:15 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "On 2003-07-17 10:41:35 -0500, Nick Fankhauser wrote:\n> I'm using ext2. For now, I'll leave this and the OS version alone. If I\n> \n\nI'd upgrade to a journaling filesystem as soon as possible for reliability.\nTesting in our own environment has shown that PostgreSQL performs best on ext3\n(yes, better than XFS, JFS or ReiserFS) with a linux 2.4.21 kernel. Be sure to\nmount noatime and to create the ext3 partition with the correct stripe size of\nyour RAID array using the '-R stride=foo' option (see man mke2fs).\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Fri, 18 Jul 2003 18:00:43 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "I'm confused:\n\nAng Chin Han wrote:\n\n> We've been using ext3fs for our production systems. (Red Hat Advanced\n> Server 2.1)\n\nVincent van Leeuwen wrote:\n\n> I'd upgrade to a journaling filesystem as soon as possible for\n> reliability.\n\n...About one year ago I considered moving to a journaling file system, but\nopted not to because it seems like that's what WAL does for us already. How\ndoes putting a journaling file system under it add more reliability?\n\nI also guessed that a journaling file system would add overhead because now\na write to the WAL file could itself be deferred and logged elsewhere.\n\n...So now I'm really puzzled because folks are weighing in with solid\nanecdotal evidence saying that I'll get both better reliability and\nperformance. Can someone explain what I'm missing about the concept?\n\n-A puzzled Nick\n\n",
"msg_date": "Fri, 18 Jul 2003 14:01:57 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "Nick,\n\n> ...About one year ago I considered moving to a journaling file system, but\n> opted not to because it seems like that's what WAL does for us already. How\n> does putting a journaling file system under it add more reliability?\n\nIt lets you restart your server quickly after an unexpected power-out. Ext2 \nis notoriously bad about this.\n\nAlso, WAL cannot necessarily recover properly if the underlying filesystem is \ncorrupted.\n\n> I also guessed that a journaling file system would add overhead because now\n> a write to the WAL file could itself be deferred and logged elsewhere.\n\nYou are correct.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 18 Jul 2003 12:07:00 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "> ...About one year ago I considered moving to a journaling file system, but\n> opted not to because it seems like that's what WAL does for us already. How\n> does putting a journaling file system under it add more reliability?\n\nWAL only works if the WAL files are actually written to disk and can be\nread off it again. Ext2 has a number of deficiencies which can cause\nproblems with this basic operation (inode corruptions, etc). Journaling\ndoes not directly help.",
"msg_date": "18 Jul 2003 19:12:47 +0000",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "\"Nick Fankhauser\" <[email protected]> writes:\n> I'm departing in three ways from the simple IDE\n> model that (I presume) the default random page cost of 4 is based on- The\n> disks are SCSI & RAID and the FS would be different.\n\nActually, the default 4 is based on experiments I did quite awhile back\non HPUX (with a SCSI disk) and Linux (with an IDE disk, and a different\nfilesystem). I didn't see too much difference between 'em. RAID might\nalter the equation, or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jul 2003 17:49:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File systems (RE: Sanity check requested) "
}
] |
[
{
"msg_contents": "Hi, I have the following query - is there anything i've missed or is it \njust slow?!\n\nI have an index on meta.date that i thought might have been used but \nisn't (I know it would only be a small performance increase in the \ncurrent plan).\n\nmeta.date is between 1999 and 2003. I think generally the most \nefficient order to do things would be to extract all the messages \nwithin the date range and then search over just them.\n\nI am currently in the process of setting up full text indexing as \ndescribed in the techdocs.postgresql.org i guess this is the main way \nof speeding up searches through ~40GB of bulk text?\n\nThanks!...\nm\n\nEXPLAIN ANALYZE SELECT meta.msg_id, date, from_line, subject FROM \nmessage ,meta WHERE meta.date >= '15-06-2003 00:00:00' AND meta.date <= \n'26-06-2003 00:00:00' AND message.header||message.body ILIKE '%chicken%'\nAND meta.sys_id = message.sys_id ORDER BY meta.date DESC;\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------\nSort (cost=50.16..50.17 rows=1 width=120) (actual time=412333.65..\n412333.76 rows=166 loops=1)\nSort Key: meta.date\n-> Nested Loop (cost=0.00..50.15 rows=1 width=120) (actual time=\n400713.41..412332.53 rows=166 loops=1)\n-> Seq Scan on message (cost=0.00..25.00 rows=5 width=8) (actual time=\n58.18..410588.49 rows=20839 loops=1)\nFilter: ((header || body) ~~* '%chicken%'::text)\n-> Index Scan using meta_pkey on meta (cost=0.00..5.02 rows=1 width=\n112) (actual time=0.07..0.07 rows=0 loops=20839)\nIndex Cond: (meta.sys_id = \"outer\".sys_id)\nFilter: ((date >= '2003-06-15 00:00:00'::timestamp without time zone) \nAND (date <= '2003-06-26 00:00:00'::timestamp without time zone))\nTotal runtime: 412334.08 msec\n(9 rows)\n",
"msg_date": "Tue, 15 Jul 2003 16:59:31 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Query Optimisation"
},
{
"msg_contents": "\n[replying to -performance]\n\nOn Tue, 15 Jul 2003 [email protected] wrote:\n\n> Hi, I have the following query - is there anything i've missed or is it\n> just slow?!\n\nThe fact that it underestimates the number of matching message rows by a\nfactor of about 4000 doesn't help. I'm not sure you're going to be able to\nget a better estimate using message.header||message.body ILIKE '%chicken%'\n(possibly using two ilikes with or might help but probably not enough).\nHave you vacuum analyzed the two tables recently? The seq scan cost on\nmessage seems fairly low given what I would expect to be the size of that\ntable.\n\n> I am currently in the process of setting up full text indexing as\n> described in the techdocs.postgresql.org i guess this is the main way\n> of speeding up searches through ~40GB of bulk text?\n\nThat's still probably the best way.\n\n> EXPLAIN ANALYZE SELECT meta.msg_id, date, from_line, subject FROM\n> message ,meta WHERE meta.date >= '15-06-2003 00:00:00' AND meta.date <=\n> '26-06-2003 00:00:00' AND message.header||message.body ILIKE '%chicken%'\n> AND meta.sys_id = message.sys_id ORDER BY meta.date DESC;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------\n> Sort (cost=50.16..50.17 rows=1 width=120) (actual time=412333.65..\n> 412333.76 rows=166 loops=1)\n> Sort Key: meta.date\n> -> Nested Loop (cost=0.00..50.15 rows=1 width=120) (actual time=\n> 400713.41..412332.53 rows=166 loops=1)\n> -> Seq Scan on message (cost=0.00..25.00 rows=5 width=8) (actual time=\n> 58.18..410588.49 rows=20839 loops=1)\n> Filter: ((header || body) ~~* '%chicken%'::text)\n> -> Index Scan using meta_pkey on meta (cost=0.00..5.02 rows=1 width=\n> 112) (actual time=0.07..0.07 rows=0 loops=20839)\n> Index Cond: (meta.sys_id = \"outer\".sys_id)\n> Filter: ((date >= '2003-06-15 00:00:00'::timestamp without time zone)\n> AND (date <= '2003-06-26 00:00:00'::timestamp without time zone))\n> Total runtime: 412334.08 msec\n> (9 rows)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 15 Jul 2003 09:09:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Query Optimisation"
},
{
"msg_contents": "Try explicitly casting the values into the appropriate date type.\n\nJon\n\nOn Tue, 15 Jul 2003 [email protected] wrote:\n\n> Hi, I have the following query - is there anything i've missed or is it\n> just slow?!\n>\n> I have an index on meta.date that i thought might have been used but\n> isn't (I know it would only be a small performance increase in the\n> current plan).\n>\n> meta.date is between 1999 and 2003. I think generally the most\n> efficient order to do things would be to extract all the messages\n> within the date range and then search over just them.\n>\n> I am currently in the process of setting up full text indexing as\n> described in the techdocs.postgresql.org i guess this is the main way\n> of speeding up searches through ~40GB of bulk text?\n>\n> Thanks!...\n> m\n>\n> EXPLAIN ANALYZE SELECT meta.msg_id, date, from_line, subject FROM\n> message ,meta WHERE meta.date >= '15-06-2003 00:00:00' AND meta.date <=\n> '26-06-2003 00:00:00' AND message.header||message.body ILIKE '%chicken%'\n> AND meta.sys_id = message.sys_id ORDER BY meta.date DESC;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------\n> Sort (cost=50.16..50.17 rows=1 width=120) (actual time=412333.65..\n> 412333.76 rows=166 loops=1)\n> Sort Key: meta.date\n> -> Nested Loop (cost=0.00..50.15 rows=1 width=120) (actual time=\n> 400713.41..412332.53 rows=166 loops=1)\n> -> Seq Scan on message (cost=0.00..25.00 rows=5 width=8) (actual time=\n> 58.18..410588.49 rows=20839 loops=1)\n> Filter: ((header || body) ~~* '%chicken%'::text)\n> -> Index Scan using meta_pkey on meta (cost=0.00..5.02 rows=1 width=\n> 112) (actual time=0.07..0.07 rows=0 loops=20839)\n> Index Cond: (meta.sys_id = \"outer\".sys_id)\n> Filter: ((date >= '2003-06-15 00:00:00'::timestamp without time zone)\n> AND (date <= '2003-06-26 00:00:00'::timestamp without time zone))\n> Total runtime: 412334.08 msec\n> (9 rows)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 15 Jul 2003 09:21:26 -0700 (PDT)",
"msg_from": "Jonathan Bartlett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimisation"
},
{
"msg_contents": "[email protected] writes:\n> -> Seq Scan on message (cost=0.00..25.00 rows=5 width=8) (actual time=\n> 58.18..410588.49 rows=20839 loops=1)\n> Filter: ((header || body) ~~* '%chicken%'::text)\n\nEstimated cost of a seqscan only 25? Have you ever vacuumed or analyzed\nthat table? The planner evidently thinks it is tiny ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jul 2003 12:35:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimisation "
}
] |
[
{
"msg_contents": "Joe,\n\n> Regarding the document at\n> http://www.varlena.com/GeneralBits/Tidbits/perf.html#maxfsmp\n>\n> In section 3.3 you say max_fsm_pages should be set to the number of pages\n> that vacuum reports. Does that apply to table pages only or both table and\n> index pages? Because I'm finding my index pages are a lot more than the\n> table pages.\n\nDepends on which version you're using. As of 7.4 (now in beta), both ... this \nis a major feature for 7.4, which should reduce the need to REINDEX \nsignificantly.\n\nFor 7.3.3 and earlier, table pages only. \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 15 Jul 2003 15:09:54 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max_fsm_pages question"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n I have a performance problem using postgresql when the connection is made \nvia ODBC with a windows machine using the latests ODBC drivers (Windows) and \nPostgreSQL 7.3.3 (Linux).\n\n The queries made by my Visual Basic program are very very simple. It queries \nwith as Select if a record exists and if so, it reduces stock with an Update. \nFor the benchmarks I do it 200 times.\n\n If I test it against an Access database (located in a SMB server) it spends \n3 seconds but against PostgreSQL 17 !! Exactly the same test programmed in C \n(with pgsql libraries) and run within the same machine or another Linux \nspends less than a second!!\n\n So the problem seems to be whether with the ODBC drivers or with Windows \nODBC itself. Are there any parameters in the ODBC drivers that might help \nreducing that big overhead added or do you have any suggestions to speed it \nup?\n\nThanks in advance!\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/FV3pTK7ZP4pDOHcRAmVWAJ9KF/YyKmuBZcidV3FK2gESaX25NwCgjABx\n6WhA0HgC7oxF7VFJeczIrgE=\n=3H+u\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 16 Jul 2003 16:15:01 +0200",
"msg_from": "Albert Cervera Areny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad performance using ODBC"
},
{
"msg_contents": "Albert Cervera Areny <[email protected]> writes:\n> I have a performance problem using postgresql when the connection is made \n> via ODBC with a windows machine using the latests ODBC drivers (Windows) and \n> PostgreSQL 7.3.3 (Linux).\n\nDo you have logging turned on in the ODBC driver? I recall hearing that\nthat adds a heck of a lot of overhead...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jul 2003 10:38:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad performance using ODBC "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nA Dimecres 16 Juliol 2003 16:38, Tom Lane va escriure:\n> Albert Cervera Areny <[email protected]> writes:\n> > I have a performance problem using postgresql when the connection is\n> > made via ODBC with a windows machine using the latests ODBC drivers\n> > (Windows) and PostgreSQL 7.3.3 (Linux).\n>\n> Do you have logging turned on in the ODBC driver? I recall hearing that\n> that adds a heck of a lot of overhead...\n\nAfter trying too many things I've finally been able to make it run in just 1 \nor 2 seconds. I simply had to change the recordset type and set it to \ndbOpenSnapshot (This one doesn't show changes made to the database once it's \nbeen open) instead of the default dbDynaset (much more powerful but \nunnecessary in this application).\n\nTake note that though it might seem obvious the performance loss against \nAccess isn't that much and thus VB users aren't probably used to change the \nrecordset type. I think It would be nice a note with this performance \nbenchmarks (2 seconds against 15) in the Mini-Howto on Accessing PostgreSQL \nfrom Visual Basic. I'll contact Dave Page directly in case he finds it \ninteresting.\n\nI haven't seen any speed improvements desabling logging but thanks for your \nsuggestion anyway!\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.2 (GNU/Linux)\n\niD8DBQE/FstfTK7ZP4pDOHcRArepAJ9rIhOKtztuPORbGkrVTOfC4UmUOQCeJ00u\nUxJegkvrs4TL3QVXNun3iFs=\n=itG7\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 17 Jul 2003 18:14:16 +0200",
"msg_from": "Albert Cervera Areny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Bad performance using ODBC"
}
] |
[
{
"msg_contents": "Hi all,\n\nfirst of all I'd like to thank everyone who responded to my earlier post. I have a much better understanding of postgres performance tuning now. In case anyone's interested we've decided to go with RH9 and PostgreSQL 7.3 and we'll do the OS and DB tuning ourselves. (should be a good learning experience)\n\nWe are now getting ready to purchase the hardware that will be used to run the database server. We're spending quite a bit of money on it because this will eventually, if things go well within two months, become a production server. We're getting all RH certified hardware from Dell. (Dell 2650)\n\nWe're now stuck on the question of what type of RAID configuration to use for this server. RAID 5 offers the best fault tolerance but doesn't perform all that well. RAID 10 offers much better performance, but no hot swap. Or should we not use RAID at all. I know that ideally the log (WAL) files should reside on a separate disk from the rest of the DB. Should we use 4 separate drives instead? One for the OS, one for data, one for WAL, one for swap? Or RAID 10 for everything plus 1 drive for WAL? Or RAID 5 for everything?\n\nWe have the budget for 5 drives. Does anyone have any real world experience with what hard drive configuration works best for postgres? This is going to be a dedicated DB server. There are going to be a large number of transactions being written to the database. (Information is logged from a separate app through ODBC to postgres) And there will be some moderately complex queries run concurrently to present this information in the form of various reports on the web. (The app server is a separate machine and will connect to the DB through JDBC to create the HTML reports)\n\nAny thoughts, ideas, comments would be appreciated.\n\nThank you,\n\nBalazs Wellisch\nNeu Solutions\[email protected]\n\n\n\n\n\n\n\nHi all,\n \nfirst of all I'd like to thank everyone who \nresponded to my earlier post. I have a much better understanding of postgres \nperformance tuning now. In case anyone's interested we've decided to go with RH9 \nand PostgreSQL 7.3 and we'll do the OS and DB tuning ourselves. (should be a \ngood learning experience)\n \nWe are now getting ready to purchase the hardware \nthat will be used to run the database server. We're spending quite a bit of \nmoney on it because this will eventually, if things go well within two \nmonths, become a production server. We're getting all RH certified hardware from \nDell. (Dell 2650)\n \nWe're now stuck on the question of what type of \nRAID configuration to use for this server. RAID 5 offers the best fault \ntolerance but doesn't perform all that well. RAID 10 offers much better \nperformance, but no hot swap. Or should we not use RAID at all. I know that \nideally the log (WAL) files should reside on a separate disk from the rest of \nthe DB. Should we use 4 separate drives instead? One for the OS, one for data, \none for WAL, one for swap? Or RAID 10 for everything plus 1 drive for WAL? Or \nRAID 5 for everything?\n \nWe have the budget for 5 drives. Does anyone have \nany real world experience with what hard drive configuration works best for \npostgres? This is going to be a dedicated DB server. There are going to be a \nlarge number of transactions being written to the database. (Information is \nlogged from a separate app through ODBC to postgres) And there will be some \nmoderately complex queries run concurrently to present this information in \nthe form of various reports on the web. 
(The app server is a separate machine \nand will connect to the DB through JDBC to create the HTML reports)\n \nAny thoughts, ideas, comments would be \nappreciated.\n \nThank you,\n \nBalazs Wellisch\nNeu Solutions\[email protected]",
"msg_date": "Wed, 16 Jul 2003 19:57:22 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware performance"
},
{
"msg_contents": "Balazs Wellisch wrote:\n> first of all I'd like to thank everyone who responded to my earlier\n> post. I have a much better understanding of postgres performance\n> tuning now. In case anyone's interested we've decided to go with RH9\n> and PostgreSQL 7.3 and we'll do the OS and DB tuning ourselves.\n> (should be a good learning experience)\n\nGood choice! I think you'll find that this list will be a great resource \nas you learn. One point here is that you should use 7.3.3 (latest \nrelease version) instead of the version of Postgres in the distribution. \nAlso, you might want to rebuild the RPMs from source using\n\"--target i686\".\n\n> We have the budget for 5 drives. Does anyone have any real world\n> experience with what hard drive configuration works best for\n> postgres? This is going to be a dedicated DB server. There are going\n> to be a large number of transactions being written to the database.\n\nTo an extent it depends on how big the drives are and how large you \nexpect the database to get. For maximal performance you want RAID 1+0 \nfor data and WAL; and you want OS, data, and WAL each on their own \ndrives. So with 5 drives one possible configuration is:\n\n1 drive OS: OS on it's own drive makes it easy to upgrade, or restore \nthe OS from CD if needed\n2 drives, RAID 1+0: WAL\n2 drives, RAID 1+0: data\n\nBut I've seem reports that with fast I/O subsystems, there was no \nmeasurable difference with WAL separated from data. And to be honest, \nI've never personally found it necessary to separate WAL from data. You \nmay want to test with WAL on the same volume as the data to see if there \nis enough difference to warrant separating it or not given your load and \nyour actual hardware. If not, use 1 OS drive and 4 RAID 1+0 drives as \none volume.\n\nYou never want find any significant use of hard disk based swap space -- \nif you see that, you are probably misconfigured, and performance will be \npoor no matter how you've set up the drives.\n\n> And there will be some moderately complex queries run concurrently to\n> present this information in the form of various reports on the web.\n\nOnce you have some data on your test server, and you have complex \nqueries to tune, there will be a few details you'll get asked every time \nif you don't provide them when posting a question to the list:\n\n1) Have you been running VACUUM and ANALYZE (or VACUUM ANALYZE) at\n appropriate intervals?\n2) What are the table definitions and indexes for all tables involved?\n3) What is the output of EXPLAIN ANALYZE?\n\nHTH,\n\nJoe\n\n\n",
"msg_date": "Wed, 16 Jul 2003 21:52:13 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Joe Conway kirjutas N, 17.07.2003 kell 07:52:\n> To an extent it depends on how big the drives are and how large you \n> expect the database to get. For maximal performance you want RAID 1+0 \n> for data and WAL; and you want OS, data, and WAL each on their own \n> drives. So with 5 drives one possible configuration is:\n> \n> 1 drive OS: OS on it's own drive makes it easy to upgrade, or restore \n> the OS from CD if needed\n> 2 drives, RAID 1+0: WAL\n> 2 drives, RAID 1+0: data\n\nHow do you do RAID 1+0 with just two drives ?\n\n--------------\nHannu\n\n\n",
"msg_date": "17 Jul 2003 10:56:49 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On 2003-07-16 19:57:22 -0700, Balazs Wellisch wrote:\n> We're now stuck on the question of what type of RAID configuration to use\n> for this server. RAID 5 offers the best fault tolerance but doesn't perform\n> all that well. RAID 10 offers much better performance, but no hot swap. Or\n> should we not use RAID at all. I know that ideally the log (WAL) files\n> should reside on a separate disk from the rest of the DB. Should we use 4\n> separate drives instead? One for the OS, one for data, one for WAL, one for\n> swap? Or RAID 10 for everything plus 1 drive for WAL? Or RAID 5 for\n> everything?\n> \n\nWe have recently run our own test (simulating our own database load) on a new\nserver which contained 7 15K rpm disks. Since we always want to have a\nhot-spare drive (servers are located in a hard-to-reach datacenter) and we\nalways want redundancy, we tested two different configurations:\n- 6 disk RAID 10 array, holding everything\n- 4 disk RAID 5 array holding postgresql data and 2 disk RAID 1 array holding\n OS, swap and WAL logs\n\nOur database is used for a very busy community website, so our load contains a\nlot of inserts/updates for a website, but much more selects than there are\nupdates.\n\nOur findings were that the 6 disk RAID 10 set was significantly faster than\nthe other setup.\n\nSo I'd recommend a 4-disk RAID 10 array. I'd use the 5th drive for a hot-spare\ndrive, but that's your own call. However, it would be best if you tested some\ndifferent setups under your own database load to see what works best for you.\n\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Thu, 17 Jul 2003 12:14:16 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Hannu Krosing wrote:\n> How do you do RAID 1+0 with just two drives ?\n> \n\nHmm, good point -- I must have been tired last night ;-). With two \ndrives you can do mirroring or striping, but not both.\n\nUsually I've seen a pair of mirrored drives for the OS, and a RAID 1+0 \narray for data. But that requires 6 drives, not 5. On non-database \nservers usually the data array is RAID 5, and you could get away with 5 \ndrives (as someone else pointed out).\n\nAs I said, I've never personally found it necessary to move WAL off to a \ndifferent physical drive. What do you think is the best configuration \ngiven the constraint of 5 drives? 1 drive for OS, and 4 for RAID 1+0 for \ndata-plus-WAL? I guess the ideal would be to find enough money for that \n6th drive, use the mirrored pair for both OS and WAL.\n\nJoe\n\n\n",
"msg_date": "Thu, 17 Jul 2003 07:57:53 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "\n> As I said, I've never personally found it necessary to move WAL off to a\n> different physical drive. What do you think is the best configuration\n> given the constraint of 5 drives? 1 drive for OS, and 4 for RAID 1+0 for\n> data-plus-WAL? I guess the ideal would be to find enough money for that\n> 6th drive, use the mirrored pair for both OS and WAL.\n\nI think the issue from the original posters point of view is that the Dell\nPE2650 can only hold a maximum of 5 internal drives\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Thu, 17 Jul 2003 16:04:38 +0100",
"msg_from": "Adam Witney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Adam Witney wrote:\n> I think the issue from the original posters point of view is that the Dell\n> PE2650 can only hold a maximum of 5 internal drives\n> \n\nTrue enough, but maybe that's a reason to be looking at other \nalternatives. I think he said the hardware hasn't been bought yet.\n\nJoe\n\n\n\n",
"msg_date": "Thu, 17 Jul 2003 08:09:44 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On 17/7/03 4:09 pm, \"Joe Conway\" <[email protected]> wrote:\n\n> Adam Witney wrote:\n>> I think the issue from the original posters point of view is that the Dell\n>> PE2650 can only hold a maximum of 5 internal drives\n>> \n> \n> True enough, but maybe that's a reason to be looking at other\n> alternatives. I think he said the hardware hasn't been bought yet.\n\nActually I am going through the same questions myself at the moment.... I\nwould like to have a 2 disk RAID1 and a 4 disk RAID5, so need at least 6\ndisks....\n\nAnybody have any suggestions or experience with other hardware manufacturers\nfor this size of setup? (2U rack, up to 6 disks, 2 processors, ~2GB RAM, if\npossible)\n\nThanks\n\nadam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Thu, 17 Jul 2003 16:20:42 +0100",
"msg_from": "Adam Witney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "I am currious. How can you have RAID 1+0 with only 2 drives?\nIf you are thinking about partitioning the drives, wont this defeate the\npurpose?\n\nJLL\n\nJoe Conway wrote:\n> \n> [...]\n> 2 drives, RAID 1+0: WAL\n> 2 drives, RAID 1+0: data\n> [...]\n",
"msg_date": "Thu, 17 Jul 2003 12:13:27 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Jean-Luc Lachance wrote:\n> I am currious. How can you have RAID 1+0 with only 2 drives?\n> If you are thinking about partitioning the drives, wont this defeate the\n> purpose?\n\nYeah -- Hannu already pointed out that my mind was fuzzy when I made \nthat statement :-(. See subsequent posts.\n\nJoe\n\n\n\n",
"msg_date": "Thu, 17 Jul 2003 09:21:32 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Adam Witney wrote:\n> Actually I am going through the same questions myself at the moment.... I\n> would like to have a 2 disk RAID1 and a 4 disk RAID5, so need at least 6\n> disks....\n> \n> Anybody have any suggestions or experience with other hardware manufacturers\n> for this size of setup? (2U rack, up to 6 disks, 2 processors, ~2GB RAM, if\n> possible)\n\nI tend to use either 1U or 4U servers, depending on the application. But \nI've had good experiences with IBM recently, and a quick look on their \nsite shows the x345 with these specs:\n\n� 2U, 2-way server delivers extreme performance and availability for \ndemanding applications\n� Up to 2 Intel Xeon processors up to 3.06GHz with 533MHz front-side \nbus speed for outstanding performance\n� Features up to 8GB of DDR memory, 5 PCI (4 PCI-X) slots and up to 6 \nhard disk drives for robust expansion\n� Hot-swap redundant cooling, power and hard disk drives for high \navailability\n� Integrated dual Ultra320 SCSI with RAID-1 for data protection\n\nThis may not wrap well, but here is the url:\nhttp://www-132.ibm.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=-840&storeId=1&categoryId=2559454&langId=-1&dualCurrId=73\n\nHandles 6 drives; maybe that fits the bill?\n\nJoe\n\n",
"msg_date": "Thu, 17 Jul 2003 09:29:18 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "Sorry for the redundant duplication of the repetition.\nI should have read the follow-up messages.\n\n\nJoe Conway wrote:\n> \n> Jean-Luc Lachance wrote:\n> > I am currious. How can you have RAID 1+0 with only 2 drives?\n> > If you are thinking about partitioning the drives, wont this defeate the\n> > purpose?\n> \n> Yeah -- Hannu already pointed out that my mind was fuzzy when I made\n> that statement :-(. See subsequent posts.\n> \n> Joe\n",
"msg_date": "Thu, 17 Jul 2003 12:44:01 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "\nOn Thu, 2003-07-17 at 08:20, Adam Witney wrote:\n\n\n> Anybody have any suggestions or experience with other hardware manufacturers\n> for this size of setup? (2U rack, up to 6 disks, 2 processors, ~2GB RAM, if\n> possible)\n> \n> Thanks\n> \n> adam\n\nCheck out http://www.amaxit.com It is all white box stuff, but they have\nsome really cool gear.\n\n-- \nJord Tanner <[email protected]>\n\n",
"msg_date": "17 Jul 2003 09:54:26 -0700",
"msg_from": "Jord Tanner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On Thu, Jul 17, 2003 at 07:57:53AM -0700, Joe Conway wrote:\n> \n> As I said, I've never personally found it necessary to move WAL off to a \n> different physical drive. What do you think is the best configuration \n\nOn our Solaris test boxes (where, alas, we do not have the luxury of\n1/2 TB external RAID boxes :-( ), putting WAL on a disk of its own\nyielded something like 30% improvement in throughput on high\ntransaciton volumes. So it's definitely important in some cases.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 17 Jul 2003 14:41:52 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On Thu, 17 Jul 2003 16:20:42 +0100\nAdam Witney <[email protected]> said something like:\n\n> \n> Actually I am going through the same questions myself at the\n> moment.... I would like to have a 2 disk RAID1 and a 4 disk RAID5, so\n> need at least 6 disks....\n> \n> Anybody have any suggestions or experience with other hardware\n> manufacturers for this size of setup? (2U rack, up to 6 disks, 2\n> processors, ~2GB RAM, if possible)\n> \n\nWe recently bought a couple of Compaq Proliant DL380 units. They are\n2u, and support 6 disks, 2 CPU's, 12Gb max.\n\nWe purchased 2 units of 1CPU, 4x72Gb RAID 0+1, 1Gb mem, redundant fans\nand power supplies for around $11,000 total. Unfortunately they are\nrunning Win2K with SQLAnywhere (ClearQuest/Web server) ;-) So far (5\nmonths), they're real board...\n\nCheers,\nRob\n\n-- \n 21:16:04 up 1:19, 1 user, load average: 2.04, 1.99, 1.38",
"msg_date": "Thu, 17 Jul 2003 21:23:08 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
}
] |
[
{
"msg_contents": "I've got a Dell 2650 set up with 5 drives and a separate app server connecting with JDBC. Since you've only got 5 drives, my conclusion regarding the best balance of performance and redundancy was:\r\n \r\n2 drives have the OS, swap, and WAL in RAID-1\r\n3 drives have the data in RAID-5\r\n \r\nIf you can afford it, get the 2+3 split backplane and make the 3 data drives the biggest, fastest you can afford. Currently that means the 15k 73GB drives, which would give you 146GB for data. Make the OS drives smaller and slower if you need to save cash. \r\n \r\nIf only it had six drive bays....you could use 4 drives for the data and do RAID-10. If you've got the additional rackspace available, you could get the 5U Dell 2600 instead for the same ballpark cost. If you order it with rack rails, it comes all set up for rack installation...a special sideways faceplate and everything.\r\n \r\nBy the way, RAID-5 is not the best fault tolerance, RAID-1 or RAID-10 is. And you can certainly hot-swap RAID-10 arrays. I've actually done it....recently! I am of the mind that single drives are not an option for production servers - I just don't need the pain of the server going down at all. Although they DO go down despite redundancy...I just had a SCSI backplane go out in a Dell 6600 that has every bit of redundancy you can order. While uncommon, the backplane is one one of the many single points of failure! \r\n \r\nRoman Fail\r\nPOS Portal, Inc.\r\n \r\n \r\n\r\n\t-----Original Message----- \r\n\tFrom: Balazs Wellisch [mailto:[email protected]] \r\n\tSent: Wed 7/16/2003 7:57 PM \r\n\tTo: [email protected] \r\n\tCc: \r\n\tSubject: [PERFORM] Hardware performance\r\n\t\r\n\t\r\n\tHi all,\r\n\t \r\n\tfirst of all I'd like to thank everyone who responded to my earlier post. I have a much better understanding of postgres performance tuning now. In case anyone's interested we've decided to go with RH9 and PostgreSQL 7.3 and we'll do the OS and DB tuning ourselves. (should be a good learning experience)\r\n\t \r\n\tWe are now getting ready to purchase the hardware that will be used to run the database server. We're spending quite a bit of money on it because this will eventually, if things go well within two months, become a production server. We're getting all RH certified hardware from Dell. (Dell 2650)\r\n\t \r\n\tWe're now stuck on the question of what type of RAID configuration to use for this server. RAID 5 offers the best fault tolerance but doesn't perform all that well. RAID 10 offers much better performance, but no hot swap. Or should we not use RAID at all. I know that ideally the log (WAL) files should reside on a separate disk from the rest of the DB. Should we use 4 separate drives instead? One for the OS, one for data, one for WAL, one for swap? Or RAID 10 for everything plus 1 drive for WAL? Or RAID 5 for everything?\r\n\t \r\n\tWe have the budget for 5 drives. Does anyone have any real world experience with what hard drive configuration works best for postgres? This is going to be a dedicated DB server. There are going to be a large number of transactions being written to the database. (Information is logged from a separate app through ODBC to postgres) And there will be some moderately complex queries run concurrently to present this information in the form of various reports on the web. 
(The app server is a separate machine and will connect to the DB through JDBC to create the HTML reports)\r\n\t \r\n\tAny thoughts, ideas, comments would be appreciated.\r\n\t \r\n\tThank you,\r\n\t \r\n\tBalazs Wellisch\r\n\tNeu Solutions\r\n\[email protected]\r\n\t \r\n\r\n",
"msg_date": "Wed, 16 Jul 2003 21:25:43 -0700",
"msg_from": "\"Roman Fail\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On Wed, 2003-07-16 at 23:25, Roman Fail wrote:\n[snip] \n> has every bit of redundancy you can order. While uncommon, the\n> backplane is one one of the many single points of failure!\n\nUnless you go with a shared-disk cluster (Oracle 9iRAC or OpenVMS)\nor replication.\n\nFace it, if your pockets are deep enough, you can make everything\nredundant and burden-sharing (i.e., not just waiting for the master\nsystem to die). (And with some enterprise FC controllers, you can\nmirror the disks many kilometers away.)\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "18 Jul 2003 05:14:48 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
}
] |
[
{
"msg_contents": "\nHi all,\n\nIm currently taking my first steps with db optimizations and am wondering \nwhats happening here and if/how i can help pg choose the better plan.\n\nThanks,\n Fabian\n\n >>>\n\npsql (PostgreSQL) 7.2.2\n\nperg_1097=# VACUUM ANALYZE ;\nVACUUM\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n(actual time=0.28..2305.52 rows=31122 loops=1)\n SubPlan\n -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n(actual time=0.07..0.07 rows=0 loops=31122)\nTotal runtime: 2334.42 msec\n\nEXPLAIN\nperg_1097=# SET enable_seqscan to false;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=100000000.00..100111719.36 rows=15561 \nwidth=12) (actual time=0.24..538.86 rows=31122 loops=1)\n SubPlan\n -> Index Scan using idx_notiz_gelesen_2 on notiz_gelesen \nb (cost=0.00..3.57 rows=1 width=0) (actual time=0.01..0.01 rows=0 loops=31122)\nTotal runtime: 570.75 msec\n\nEXPLAIN\nperg_1097=#\n\nperg_1097=# \\d notiz_objekt;\n Table \"notiz_objekt\"\n Column | Type | Modifiers\n----------+---------+-----------\n notiz_id | integer |\n obj_id | integer |\n obj_typ | integer |\nIndexes: idx_notiz_objekt_1,\n idx_notiz_objekt_2\n\nperg_1097=# \\d notiz_gelesen;\n Table \"notiz_gelesen\"\n Column | Type | Modifiers\n----------+--------------------------+----------------------------------------------------\n notiz_id | integer |\n ma_id | integer |\n ma_pid | integer |\n stamp | timestamp with time zone | default ('now'::text)::timestamp(6) \nwith time zone\n anzeigen | character varying |\nIndexes: idx_notiz_gelesen_1,\n idx_notiz_gelesen_2\n\nperg_1097=#\n\nperg_1097=# select count(*) from notiz_objekt;\n count\n-------\n 31122\n(1 row)\n\nperg_1097=# select count(*) from notiz_gelesen;\n count\n-------\n 45\n(1 row)\n\nperg_1097=#\n\nidx_notiz_gelesen_1 (ma_id,ma_pid)\nidx_notiz_gelesen_2 (notiz_id)\n\n",
"msg_date": "Thu, 17 Jul 2003 11:01:18 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "index / sequential scan problem"
},
{
"msg_contents": "On 17 Jul 2003 at 11:01, Fabian Kreitner wrote:\n> psql (PostgreSQL) 7.2.2\n> \n> perg_1097=# VACUUM ANALYZE ;\n> VACUUM\n> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> perg_1097-# from notiz_objekt a\n> perg_1097-# where not exists\n> perg_1097-# (\n> perg_1097(# select 1\n> perg_1097(# from notiz_gelesen b\n> perg_1097(# where ma_id = 2001\n> perg_1097(# and ma_pid = 1097\n> perg_1097(# and a.notiz_id = b.notiz_id\n> perg_1097(# )\n> perg_1097-# ;\n\nFor 31K records, seq. scan does not sound like a bad plan to me but anyway..\n\nHow about \n\n where ma_id = 2001::integer\nand ma_pid = 1097::integer \n\nin above query?\n\nBye\n Shridhar\n\n--\nNo one can guarantee the actions of another.\t\t-- Spock, \"Day of the Dove\", \nstardate unknown\n\n",
"msg_date": "Thu, 17 Jul 2003 14:47:46 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "\nOn 17/07/2003 10:01 Fabian Kreitner wrote:\n\nHi Fabian,\n\nWhen you are doing these kinds of tests, you need to be aware that the \nkernel may have most of your data cached after the first query and this \nmay be why the second query appears to run faster.\n\nAlso don't be worried if the planner chooses a seq scan for small tables \nas the whole table can often be bought into memory with one IO whereas \nreading the index then the table would be 2 IOs. \nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 17 Jul 2003 11:12:48 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "At 11:17 17.07.2003, Shridhar Daithankar wrote:\n>On 17 Jul 2003 at 11:01, Fabian Kreitner wrote:\n> > psql (PostgreSQL) 7.2.2\n> >\n> > perg_1097=# VACUUM ANALYZE ;\n> > VACUUM\n> > perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> > perg_1097-# from notiz_objekt a\n> > perg_1097-# where not exists\n> > perg_1097-# (\n> > perg_1097(# select 1\n> > perg_1097(# from notiz_gelesen b\n> > perg_1097(# where ma_id = 2001\n> > perg_1097(# and ma_pid = 1097\n> > perg_1097(# and a.notiz_id = b.notiz_id\n> > perg_1097(# )\n> > perg_1097-# ;\n>\n>For 31K records, seq. scan does not sound like a bad plan to me but anyway..\n\nIm not generally worried that it uses a seq scan but that the second \nexample (where an index on the sub select is used on a table with only 45 \nentries) executes more than 4 times faster. Its not a cache thing either, \nsince i can enable seqscan again and it will run with 2300ms again.\n\n>How about\n>\n> where ma_id = 2001::integer\n>and ma_pid = 1097::integer\n>\n>in above query?\n\nI dont really understand in what way this will help the planner but ill try.\n\nThanks,\n Fabian\n\n",
"msg_date": "Thu, 17 Jul 2003 13:12:10 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "At 12:12 17.07.2003, you wrote:\n\n>On 17/07/2003 10:01 Fabian Kreitner wrote:\n>\n>Hi Fabian,\n>\n>When you are doing these kinds of tests, you need to be aware that the \n>kernel may have most of your data cached after the first query and this \n>may be why the second query appears to run faster.\n\nI thought of this too, but executions times wont change with repeating / \nalternating these two tests.\n\n>Also don't be worried if the planner chooses a seq scan for small tables \n>as the whole table can often be bought into memory with one IO whereas \n>reading the index then the table would be 2 IOs. HTH\n\nThat is what I read too and is why Im confused that the index is indeed \nexecuting faster. Can this be a problem with the hardware and/or postgress \ninstallation?\n\nThanks,\n Fabian\n\n",
"msg_date": "Thu, 17 Jul 2003 13:13:06 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "On 17 Jul 2003 at 13:12, Fabian Kreitner wrote:\n\n> At 11:17 17.07.2003, Shridhar Daithankar wrote:\n> >How about\n> >\n> > where ma_id = 2001::integer\n> >and ma_pid = 1097::integer\n> >\n> >in above query?\n> \n> I dont really understand in what way this will help the planner but ill try.\n\nThat is typecasting. It helps planner understand query in more correct fashion.\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I may not be able to walk, but I drive from the sitting posistion.\"\n\n",
"msg_date": "Thu, 17 Jul 2003 16:48:42 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "\nOn 17/07/2003 12:13 Fabian Kreitner wrote:\n> That is what I read too and is why Im confused that the index is indeed \n> executing faster. Can this be a problem with the hardware and/or \n> postgress installation?\n\n\nIt's more likely that the OS has most of the data cached after the first \nquery and so doesn't need to re-read that data from disk when you retry \nthe query with seq scan disabled. Try something like this:\n\nset enable_seqscan to true;\nexplain analyze ......\nset enable_seqscan to false;\nexplain analyze ......\nset enable_seqscan to true;\nexplain analyze ......\n\nI expect you will find that the third query is also a lot faster that the \nfirst query.\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 17 Jul 2003 13:34:45 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "At 14:34 17.07.2003, you wrote:\n\n>On 17/07/2003 12:13 Fabian Kreitner wrote:\n>>That is what I read too and is why Im confused that the index is indeed \n>>executing faster. Can this be a problem with the hardware and/or \n>>postgress installation?\n>\n>\n>It's more likely that the OS has most of the data cached after the first \n>query and so doesn't need to re-read that data from disk when you retry \n>the query with seq scan disabled. Try something like this:\n>\n>set enable_seqscan to true;\n>explain analyze ......\n>set enable_seqscan to false;\n>explain analyze ......\n>set enable_seqscan to true;\n>explain analyze ......\n>\n>I expect you will find that the third query is also a lot faster that the \n>first query.\n\nIm afraid, no.\nDatabase has been stopped / started right before this.\n\nperg_1097=# set enable_seqscan to true;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n(actual time=0.28..2298.71 rows=31122 loops=1)\n SubPlan\n -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n(actual time=0.07..0.07 rows=0 loops=31122)\nTotal runtime: 2327.37 msec\n\nEXPLAIN\nperg_1097=# set enable_seqscan to false;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=100000000.00..100111719.36 rows=15561 \nwidth=12) (actual time=0.25..535.75 rows=31122 loops=1)\n SubPlan\n -> Index Scan using idx_notiz_gelesen_2 on notiz_gelesen \nb (cost=0.00..3.57 rows=1 width=0) (actual time=0.01..0.01 rows=0 loops=31122)\nTotal runtime: 567.94 msec\n\nEXPLAIN\nperg_1097=# set enable_seqscan to true;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n(actual time=0.13..2300.74 rows=31122 loops=1)\n SubPlan\n -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n(actual time=0.07..0.07 rows=0 loops=31122)\nTotal runtime: 2330.25 msec\n\nEXPLAIN\nperg_1097=#\n\n",
"msg_date": "Thu, 17 Jul 2003 14:50:30 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "On 17 Jul 2003 at 14:50, Fabian Kreitner wrote:\n\n> At 14:34 17.07.2003, you wrote:\n> >I expect you will find that the third query is also a lot faster that the \n> >first query.\n> \n> Im afraid, no.\n> Database has been stopped / started right before this.\n> \n> perg_1097=# set enable_seqscan to true;\n> SET VARIABLE\n> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> perg_1097-# from notiz_objekt a\n> perg_1097-# where not exists\n\nWell, he said query and not the query plan...:-) \n\nWhile explain analyze is great for judging what is happening, it's not always a \ngood idea to trust the numbers produced by it. It will probably produce same \nnumber for a SCSI disk machine and for a IDE disk machine, everything else \nbeing equal. Obviously that is not correct.\n\nOnly thing you can trust in explain analyze is it's plan. Numbers are based on \nheuristic and should be taken as hint only.\n\nBye\n Shridhar\n\n--\nHarrisberger's Fourth Law of the Lab:\tExperience is directly proportional to \nthe amount of equipment ruined.\n\n",
"msg_date": "Thu, 17 Jul 2003 18:24:27 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "\nI've seen similar behavior in my own queries. I found that reducing\nrandom_page_cost from the default of 4 down to 2 caused the query to\nchoose the index, and resulted in an order of magnitude improvement on\nsome queries.\n\nOn Thu, 2003-07-17 at 05:50, Fabian Kreitner wrote:\n> At 14:34 17.07.2003, you wrote:\n> \n> >On 17/07/2003 12:13 Fabian Kreitner wrote:\n> >>That is what I read too and is why Im confused that the index is indeed \n> >>executing faster. Can this be a problem with the hardware and/or \n> >>postgress installation?\n> >\n> >\n> >It's more likely that the OS has most of the data cached after the first \n> >query and so doesn't need to re-read that data from disk when you retry \n> >the query with seq scan disabled. Try something like this:\n> >\n> >set enable_seqscan to true;\n> >explain analyze ......\n> >set enable_seqscan to false;\n> >explain analyze ......\n> >set enable_seqscan to true;\n> >explain analyze ......\n> >\n> >I expect you will find that the third query is also a lot faster that the \n> >first query.\n> \n> Im afraid, no.\n> Database has been stopped / started right before this.\n> \n> perg_1097=# set enable_seqscan to true;\n> SET VARIABLE\n> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> perg_1097-# from notiz_objekt a\n> perg_1097-# where not exists\n> perg_1097-# (\n> perg_1097(# select 1\n> perg_1097(# from notiz_gelesen b\n> perg_1097(# where ma_id = 2001\n> perg_1097(# and ma_pid = 1097\n> perg_1097(# and a.notiz_id = b.notiz_id\n> perg_1097(# )\n> perg_1097-# ;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n> (actual time=0.28..2298.71 rows=31122 loops=1)\n> SubPlan\n> -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n> (actual time=0.07..0.07 rows=0 loops=31122)\n> Total runtime: 2327.37 msec\n> \n> EXPLAIN\n> perg_1097=# set enable_seqscan to false;\n> SET VARIABLE\n> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> perg_1097-# from notiz_objekt a\n> perg_1097-# where not exists\n> perg_1097-# (\n> perg_1097(# select 1\n> perg_1097(# from notiz_gelesen b\n> perg_1097(# where ma_id = 2001\n> perg_1097(# and ma_pid = 1097\n> perg_1097(# and a.notiz_id = b.notiz_id\n> perg_1097(# )\n> perg_1097-# ;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on notiz_objekt a (cost=100000000.00..100111719.36 rows=15561 \n> width=12) (actual time=0.25..535.75 rows=31122 loops=1)\n> SubPlan\n> -> Index Scan using idx_notiz_gelesen_2 on notiz_gelesen \n> b (cost=0.00..3.57 rows=1 width=0) (actual time=0.01..0.01 rows=0 loops=31122)\n> Total runtime: 567.94 msec\n> \n> EXPLAIN\n> perg_1097=# set enable_seqscan to true;\n> SET VARIABLE\n> perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\n> perg_1097-# from notiz_objekt a\n> perg_1097-# where not exists\n> perg_1097-# (\n> perg_1097(# select 1\n> perg_1097(# from notiz_gelesen b\n> perg_1097(# where ma_id = 2001\n> perg_1097(# and ma_pid = 1097\n> perg_1097(# and a.notiz_id = b.notiz_id\n> perg_1097(# )\n> perg_1097-# ;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n> (actual time=0.13..2300.74 rows=31122 loops=1)\n> SubPlan\n> -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n> (actual time=0.07..0.07 rows=0 loops=31122)\n> Total runtime: 2330.25 msec\n> \n> EXPLAIN\n> perg_1097=#\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send 
an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-- \nJord Tanner <[email protected]>\n\n",
"msg_date": "17 Jul 2003 07:10:35 -0700",
"msg_from": "Jord Tanner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "\nOn 17/07/2003 13:50 Fabian Kreitner wrote:\n> [snip]\n> Im afraid, no.\n> Database has been stopped / started right before this.\n> [snip]\n\n1) enable_seqscan = true\n> Seq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n> (actual time=0.28..2298.71 rows=31122 loops=1)\n> [snip]\n\n2) enable_seqscan = false\n> Seq Scan on notiz_objekt a (cost=100000000.00..100111719.36 rows=15561 \n> width=12) (actual time=0.25..535.75 rows=31122 loops=1)\n\nI've just noticed this. Something is not right here. Look at the crazy \ncost estimation for the second query. It looks to me like \nenable_indexscan, enable_tidscan, enable_sort, enable_nestloop, \nenable_mergejoin or enable_hashjoin have been set to false. Looking at the \nsource, thats the only way I can see that such large numbers can be \nproduced.\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 17 Jul 2003 15:38:25 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem"
},
{
"msg_contents": "Paul Thomas <[email protected]> writes:\n> 2) enable_seqscan = false\n>> Seq Scan on notiz_objekt a (cost=100000000.00..100111719.36 rows=15561 \n>> width=12) (actual time=0.25..535.75 rows=31122 loops=1)\n\n> I've just noticed this. Something is not right here. Look at the crazy \n> cost estimation for the second query.\n\nNo, that's exactly what it's supposed to do. enable_seqscan cannot\nsimply suppress generation of a seqscan plan (because that might be\nthe only way to do the query, if there's no applicable index). So it\ngenerates the plan, but sticks a large penalty into the cost estimate\nto keep the planner from choosing that alternative if there is any\nother. The \"100000000.00\" is that artificial penalty.\n\nWe could probably hide this implementation detail from you if we tried\nhard enough, but it hasn't bothered anyone enough to try.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jul 2003 14:03:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "Fabian Kreitner <[email protected]> writes:\n> That is what I read too and is why Im confused that the index is indeed \n> executing faster. Can this be a problem with the hardware and/or postgress \n> installation?\n\nI think the actual issue here is that you are executing the EXISTS\nsubplan over and over, once for each outer row. The planner's cost\nestimate for EXISTS is based on the assumption that you do it once\n... in which scenario the seqscan very possibly is cheaper. However,\nwhen you do the EXISTS subplan over and over for many outer rows, you\nget a savings from the fact that the index and table pages soon get\ncached in memory. The seqscan plan gets a savings too, since the table\nis small enough to fit in memory, but once everything is in memory the\nindexscan plan is faster.\n\nThere's been some discussion on pghackers about how to teach the planner\nto account for repeated executions of subplans, but we have not come up\nwith a good solution yet.\n\nFor the moment, what people tend to do if they know their database is\nsmall enough to mostly stay in memory is to reduce random_page_cost to\nmake the planner favor indexscans. If you know the database is entirely\ncached then the theoretically correct value of random_page_cost is 1.0\n(since fetching any page will cost the same, if it's all in RAM). I'd\nrecommend against adopting that as a default, but a lot of people find\nthat setting it to 2.0 or so seems to model their situation better than\nthe out-of-the-box 4.0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jul 2003 14:12:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "At 20:12 17.07.2003, Tom Lane wrote:\n>Fabian Kreitner <[email protected]> writes:\n> > That is what I read too and is why Im confused that the index is indeed\n> > executing faster. Can this be a problem with the hardware and/or postgress\n> > installation?\n>\n>I think the actual issue here is that you are executing the EXISTS\n>subplan over and over, once for each outer row. The planner's cost\n>estimate for EXISTS is based on the assumption that you do it once\n>... in which scenario the seqscan very possibly is cheaper. However,\n>when you do the EXISTS subplan over and over for many outer rows, you\n>get a savings from the fact that the index and table pages soon get\n>cached in memory. The seqscan plan gets a savings too, since the table\n>is small enough to fit in memory, but once everything is in memory the\n>indexscan plan is faster.\n>\n>There's been some discussion on pghackers about how to teach the planner\n>to account for repeated executions of subplans, but we have not come up\n>with a good solution yet.\n>\n>For the moment, what people tend to do if they know their database is\n>small enough to mostly stay in memory is to reduce random_page_cost to\n>make the planner favor indexscans. If you know the database is entirely\n>cached then the theoretically correct value of random_page_cost is 1.0\n>(since fetching any page will cost the same, if it's all in RAM). I'd\n>recommend against adopting that as a default, but a lot of people find\n>that setting it to 2.0 or so seems to model their situation better than\n>the out-of-the-box 4.0.\n\nThanks for the explanation :)\n\n\nHowever .... :(\n\nperg_1097=# vacuum analyze;\nVACUUM\nperg_1097=# set random_page_cost to 1.0;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=0.00..56125.80 rows=15561 width=12) \n(actual time=0.27..2299.09 rows=31122 loops=1)\n SubPlan\n -> Seq Scan on notiz_gelesen b (cost=0.00..1.79 rows=1 width=0) \n(actual time=0.07..0.07 rows=0 loops=31122)\nTotal runtime: 2328.05 msec\n\nEXPLAIN\nperg_1097=#\n\n...\n\nperg_1097=# set enable_seqscan to false;\nSET VARIABLE\nperg_1097=# set random_page_cost to 1.0;\nSET VARIABLE\nperg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ\nperg_1097-# from notiz_objekt a\nperg_1097-# where not exists\nperg_1097-# (\nperg_1097(# select 1\nperg_1097(# from notiz_gelesen b\nperg_1097(# where ma_id = 2001\nperg_1097(# and ma_pid = 1097\nperg_1097(# and a.notiz_id = b.notiz_id\nperg_1097(# )\nperg_1097-# ;\nNOTICE: QUERY PLAN:\n\nSeq Scan on notiz_objekt a (cost=100000000.00..100093380.36 rows=15561 \nwidth=12) (actual time=0.07..550.07 rows=31122 loops=1)\n SubPlan\n -> Index Scan using idx_notiz_gelesen_2 on notiz_gelesen \nb (cost=0.00..2.98 rows=1 width=0) (actual time=0.01..0.01 rows=0 loops=31122)\nTotal runtime: 582.90 msec\n\nEXPLAIN\nperg_1097=#\n\n\nEven with a random page cost of 1 it thinks using the index should/could \ntake significantly longer which it doesnt for some reason :-/\n\n",
"msg_date": "Fri, 18 Jul 2003 06:47:05 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "\nHi all,\n\nAdjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.\n\nAnything I need to consider when raising it to such \"high\" values?\n\nThanks for the help,\n Fabian\n\n",
"msg_date": "Fri, 18 Jul 2003 07:18:27 +0200",
"msg_from": "Fabian Kreitner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "On Fri, 18 Jul 2003, Fabian Kreitner wrote:\n\n> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.\n\nDoesn't sound very good and it will most likely make other queries slower.\nYou could always turn off sequential scan before that query and turn it on\nafter.\n\n> Anything I need to consider when raising it to such \"high\" values?\n\nYou could fill the table with more data and it will probably come to a \npoint where it will stop using the seq. scan.\n\nYou could of course also change pg itself so it calculates a better\nestimate.\n\n-- \n/Dennis\n\n",
"msg_date": "Fri, 18 Jul 2003 07:25:54 +0200 (CEST)",
"msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]> writes:\n> On Fri, 18 Jul 2003, Fabian Kreitner wrote:\n>> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.\n\n> Doesn't sound very good and it will most likely make other queries slower.\n\nSeems like a reasonable approach to me --- certainly better than setting\nrandom_page_cost to physically nonsensical values.\n\nIn a fully-cached situation it's entirely reasonable to inflate the\nvarious cpu_xxx costs, since by assumption you are not paying the normal\nprice of physical disk I/O. Fetching a page from kernel buffer cache\nis certainly cheaper than getting it off the disk. But the CPU costs\ninvolved in processing the page contents don't change. Since our cost\nunit is defined as 1.0 = one sequential page fetch, you have to increase\nthe cpu_xxx numbers instead of reducing the I/O cost estimate.\n\nI would recommend inflating all the cpu_xxx costs by the same factor,\nunless you have evidence that they are wrong in relation to each other.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jul 2003 09:24:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "On Fri, 18 Jul 2003, Tom Lane wrote:\n\n> =?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]> writes:\n> > On Fri, 18 Jul 2003, Fabian Kreitner wrote:\n> >> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.\n> \n> > Doesn't sound very good and it will most likely make other queries slower.\n> \n> Seems like a reasonable approach to me --- certainly better than setting\n> random_page_cost to physically nonsensical values.\n> \n> In a fully-cached situation it's entirely reasonable to inflate the\n> various cpu_xxx costs, since by assumption you are not paying the normal\n> price of physical disk I/O. Fetching a page from kernel buffer cache\n> is certainly cheaper than getting it off the disk. But the CPU costs\n> involved in processing the page contents don't change. Since our cost\n> unit is defined as 1.0 = one sequential page fetch, you have to increase\n> the cpu_xxx numbers instead of reducing the I/O cost estimate.\n> \n> I would recommend inflating all the cpu_xxx costs by the same factor,\n> unless you have evidence that they are wrong in relation to each other.\n\nAnd don't forget to set effective_cache_size. It's the one I missed for \nthe longest when I started.\n\n",
"msg_date": "Fri, 18 Jul 2003 08:09:22 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
},
{
"msg_contents": "On Fri, 18 Jul 2003, Tom Lane wrote:\n\n> >> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.\n> \n> > Doesn't sound very good and it will most likely make other queries slower.\n> \n> Seems like a reasonable approach to me --- certainly better than setting\n> random_page_cost to physically nonsensical values.\n\nHehe, just before this letter there was talk about changing\nrandom_page_cost. I kind of responed that 0.042 is not a good random page\ncost. But now of course I can see that it says cpu_tuple_cost :-)\n\nSorry for adding confusion.\n\n-- \n/Dennis\n\n",
"msg_date": "Fri, 18 Jul 2003 20:43:41 +0200 (CEST)",
"msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index / sequential scan problem "
}
] |
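A minimal sketch, using the thread's own query, of the session-level tuning discussed above; the setting values shown are illustrative assumptions rather than recommendations, and should be checked against the actual workload:

-- Compare the two plans the planner can produce (session-local settings).
SET enable_seqscan TO off;        -- temporarily forbid the seq-scan plan
EXPLAIN ANALYZE
SELECT notiz_id, obj_id, obj_typ
FROM notiz_objekt a
WHERE NOT EXISTS (SELECT 1
                  FROM notiz_gelesen b
                  WHERE b.ma_id = 2001
                    AND b.ma_pid = 1097
                    AND a.notiz_id = b.notiz_id);
SET enable_seqscan TO on;

-- Instead of forcing plans, adjust the cost model for a mostly-cached
-- database: raise the CPU costs relative to a sequential page fetch and
-- tell the planner how much OS cache is available (in 8KB pages).
SET effective_cache_size = 65536; -- roughly 512MB of cache; illustrative
SET cpu_tuple_cost = 0.02;        -- inflated from the 0.01 default
SET cpu_operator_cost = 0.005;    -- inflated from the 0.0025 default
EXPLAIN ANALYZE
SELECT notiz_id, obj_id, obj_typ
FROM notiz_objekt a
WHERE NOT EXISTS (SELECT 1
                  FROM notiz_gelesen b
                  WHERE b.ma_id = 2001
                    AND b.ma_pid = 1097
                    AND a.notiz_id = b.notiz_id);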
[
{
"msg_contents": "Hello all,\n\nI'm putting together a database that has me wondering about the interaction of ANALYZE\nwith indices. I guess the basic question is: are indices affected by the results of\nANALYZE.\n\nThe particular application I've got is doing a batch insert of lots of records. For\nperformance, I'm dropping the indexes on the table, doing the inserts, then recreating\nthe indexes a then doing a VACUUM ANALYZE. Specifically, I'm wondering if I should do\nthe ANALYZE before or after I recreate the indexes, or whether it matters.\n\nAny feedbackis welcome.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 17 Jul 2003 10:45:35 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Relation of indices to ANALYZE"
},
{
"msg_contents": "On Thursday 17 Jul 2003 3:45 pm, Bill Moran wrote:\n> Hello all,\n>\n> I'm putting together a database that has me wondering about the interaction\n> of ANALYZE with indices. I guess the basic question is: are indices\n> affected by the results of ANALYZE.\n>\n> The particular application I've got is doing a batch insert of lots of\n> records. For performance, I'm dropping the indexes on the table, doing the\n> inserts, then recreating the indexes a then doing a VACUUM ANALYZE. \n> Specifically, I'm wondering if I should do the ANALYZE before or after I\n> recreate the indexes, or whether it matters.\n\nI don't think it matters - the analyse looks at the data, and then when you \nrun a query the planner estimates how many rows each clause will require and \nchecks if there is an index that will help.\n-- \n Richard Huxton\n",
"msg_date": "Thu, 17 Jul 2003 17:14:06 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation of indices to ANALYZE"
},
{
"msg_contents": "Bill Moran <[email protected]> writes:\n> Specifically, I'm wondering if I should do\n> the ANALYZE before or after I recreate the indexes, or whether it matters.\n\nAt the moment it does not matter --- ANALYZE computes statistics for\neach column of a table regardless of what indexes exist.\n\nThere has been some talk of trying to compute statistics for the\ncontents of functional indexes. Also, if we ever do anything about\ncomputing multicolumn correlation statistics, we'd likely choose which\nones are worth computing based on the presence of multicolumn indexes.\nSo if you want to future-proof your code I'd recommend recreating the\nindexes before you ANALYZE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jul 2003 14:27:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation of indices to ANALYZE "
},
{
"msg_contents": "Tom Lane wrote:\n> Bill Moran <[email protected]> writes:\n> \n>>Specifically, I'm wondering if I should do\n>>the ANALYZE before or after I recreate the indexes, or whether it matters.\n> \n> \n> At the moment it does not matter --- ANALYZE computes statistics for\n> each column of a table regardless of what indexes exist.\n> \n> There has been some talk of trying to compute statistics for the\n> contents of functional indexes. Also, if we ever do anything about\n> computing multicolumn correlation statistics, we'd likely choose which\n> ones are worth computing based on the presence of multicolumn indexes.\n> So if you want to future-proof your code I'd recommend recreating the\n> indexes before you ANALYZE.\n\nThanks, Tom (and everyone else who replied).\n\nI'm already recreating the indices prior to the VACUUM ANALYZE, since this\nputs the database back in a more usable state faster than doing the\nVACUUM first. It's good to know that it will probably be the proper way\nto do things in the future as well.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 17 Jul 2003 14:51:10 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation of indices to ANALYZE"
}
] |
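A minimal sketch of the bulk-load sequence discussed in this thread, with the indexes recreated before the ANALYZE as suggested; the table, column, index, and file names here are hypothetical:

-- Drop indexes, load, then rebuild before gathering statistics.
DROP INDEX big_table_val_idx;               -- hypothetical index
COPY big_table FROM '/tmp/big_table.load';  -- or a long series of INSERTs
CREATE INDEX big_table_val_idx ON big_table (val);
VACUUM ANALYZE big_table;                   -- run after the indexes exist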
[
{
"msg_contents": ">Adam Witney wrote:\n>> Actually I am going through the same questions myself at the \n>moment.... I\n>> would like to have a 2 disk RAID1 and a 4 disk RAID5, so \n>need at least 6\n>> disks....\n>> \n>> Anybody have any suggestions or experience with other \n>hardware manufacturers\n>> for this size of setup? (2U rack, up to 6 disks, 2 \n>processors, ~2GB RAM, if\n>> possible)\n>\n>I tend to use either 1U or 4U servers, depending on the \n>application. But \n>I've had good experiences with IBM recently, and a quick look on their \n>site shows the x345 with these specs:\n>\n>* 2U, 2-way server delivers extreme performance and availability for \n>demanding applications\n>* Up to 2 Intel Xeon processors up to 3.06GHz with 533MHz front-side \n>bus speed for outstanding performance\n>* Features up to 8GB of DDR memory, 5 PCI (4 PCI-X) slots and up to 6 \n>hard disk drives for robust expansion\n>* Hot-swap redundant cooling, power and hard disk drives for high \n>availability\n>* Integrated dual Ultra320 SCSI with RAID-1 for data protection\n>\n>This may not wrap well, but here is the url:\n>http://www-132.ibm.com/webapp/wcs/stores/servlet/CategoryDispla\n> y?catalogId=-840&storeId=1&categoryId=2559454&langId=-1&dualCurrId=73\n>\n> Handles 6 drives; maybe that fits the bill?\n\n[naturally, there should be one for each of the major server vendors,\neh?]\n\nI've used mainly HP (as in former Compaq) machines here, with nothing\nbut good experience. HPs machine in the scame class is the DL380G3.\nAlmost identical specs to the IBM (I'd expect all major vendors have\nfairly similar machines). Holds 12Gb RAM. Only 3 PCI-X slots (2 of them\nhotplug). RPS. 6 disk slots (Ultra-320) that can be put on one or two\nSCSI chains (builtin RAID controller only handles a single channel,\nthough, so you'd need an extra SmartArray controller if you want to\nsplit them). RAID0/1/1+0/5.\n\nIf you would go with that one, make sure to get the optional BBWC\n(Battery Backed Write Cache). Without it the controller won't enable the\nwrite-back cache (which it really shouldn't, since it wouldn't be safe\nwithout the batteries). WB cache can really speed things on in many db\nsituations - it's sort of like \"speed of fsync off, security of fsync\non\". I've seen huge speedups with both postgresql and other databases on\nthat.\n\n\nIf you want to be \"ready for more storage\", I'd suggest looking at a 1U\nserver with a 3U external disk rack. That'll give you 16 disks in 4U (2\nin the server + 14 in the rack on 2 channels), which is hard to beat. If\nyou have no need to go there, then sure, the 2U machine will be better.\nBut I've found the \"small machine with external rack\" a lot more\nflexible than the \"big machine with disks inside it\". (For example, you\ncan put two 1U servers to it, and have 7 disks assigned to each server)\nIn HP world that would mean DL360G3 and the StorageWorks 4354.\n\nThe mandatory link:\nhttp://h18004.www1.hp.com/products/servers/platforms/index-dl-ml.html\n\n\n\nThough if you are already equipped with servers from one vendor, I'd\nsuggest sticking to it as long as the specs are fairly close. Then you\nonly need one set of management software etc.\n\n\n//Magnus\n",
"msg_date": "Thu, 17 Jul 2003 20:55:29 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware performance"
},
{
"msg_contents": "On Thu, 2003-07-17 at 13:55, Magnus Hagander wrote:\n> >Adam Witney wrote:\n[snip]\n> If you would go with that one, make sure to get the optional BBWC\n> (Battery Backed Write Cache). Without it the controller won't enable the\n> write-back cache (which it really shouldn't, since it wouldn't be safe\n> without the batteries). WB cache can really speed things on in many db\n> situations - it's sort of like \"speed of fsync off, security of fsync\n> on\". I've seen huge speedups with both postgresql and other databases on\n> that.\n\nDon't forget to check the batteries!!! And if you have an HPaq service\ncontract, don't rely on them to do it...\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "18 Jul 2003 05:18:57 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware performance"
}
] |
[
{
"msg_contents": "Folks:\n\nOn my projects, I haven't found PostgreSQL's implementation of clustered \nindexes to be particularly useful ... gains of only a few percent in query \nefficiency in exchange for a substantial management task. Obviously, not \neveryone has had the same experience, or we wouldn't still have the feature.\n\nWhen I re-vamp my articles on indexing, I would like to include something \nabout clustered indexes. Can people give me some examples of cases where \nthey have found clustered indexes to be useful, preferably with some \nstatistics?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 17 Jul 2003 12:10:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table clustering -- useful, or not?"
}
] |
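For anyone wanting to experiment, a hedged sketch of clustering a table and inspecting the resulting ordering statistics; the table and index names are hypothetical, and the CLUSTER syntax shown is the 7.x-era form (later releases spell it CLUSTER tablename USING indexname):

-- Physically reorder the table by the index (7.x-era syntax).
CLUSTER orders_date_idx ON orders;
ANALYZE orders;   -- refresh statistics after clustering

-- The planner's view of how well the heap order tracks each column;
-- values near 1.0 or -1.0 mean index scans hit mostly-sequential pages.
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'orders';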
[
{
"msg_contents": "\n\nHi All,\n\ndata_bank.updated_profiles and public.city_master are small tables\nwith 21790 and 49303 records repectively. both have indexes on the join \ncolumn. in first one on (city,source) and in second one on (city)\n\nThe query below does not return for long durations > 10 mins.\n\nexplain analyze select b.state,a.city from data_bank.updated_profiles a join \npublic.city_master b using(city) where source='BRANDING' and a.state is NULL \nand b.country='India' ;\n\n\nsimple explain returns below.\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNested Loop (cost=0.00..83506.31 rows=14 width=35)\n Join Filter: (\"outer\".city = (\"inner\".city)::text)\n -> Seq Scan on updated_profiles a (cost=0.00..1376.39 rows=89 width=11)\n Filter: ((source = 'BRANDING'::character varying) AND (state IS NULL))\n -> Index Scan using city_master_temp1 on city_master b (cost=0.00..854.87 \nrows=5603 width=24)\n Filter: (country = 'India'::character varying)\n(6 rows)\n\n-----------------------------------------\n\n\nAny help is appreciated.\n\n\nRegds\nmallah.\n\n\n",
"msg_date": "Fri, 18 Jul 2003 18:21:21 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Yet another slow join query.."
},
{
"msg_contents": "On Fri, 18 Jul 2003, Rajesh Kumar Mallah wrote:\n\n> Hi All,\n>\n> data_bank.updated_profiles and public.city_master are small tables\n> with 21790 and 49303 records repectively. both have indexes on the join\n> column. in first one on (city,source) and in second one on (city)\n>\n> The query below does not return for long durations > 10 mins.\n>\n> explain analyze select b.state,a.city from data_bank.updated_profiles a join\n> public.city_master b using(city) where source='BRANDING' and a.state is NULL\n> and b.country='India' ;\n>\n>\n> simple explain returns below.\n>\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> Nested Loop (cost=0.00..83506.31 rows=14 width=35)\n> Join Filter: (\"outer\".city = (\"inner\".city)::text)\n> -> Seq Scan on updated_profiles a (cost=0.00..1376.39 rows=89 width=11)\n> Filter: ((source = 'BRANDING'::character varying) AND (state IS NULL))\n> -> Index Scan using city_master_temp1 on city_master b (cost=0.00..854.87\n> rows=5603 width=24)\n> Filter: (country = 'India'::character varying)\n> (6 rows)\n\nHow many rows actually meet the filter conditions on updated_profiles and\ncity_master? Are the two city columns of the same type?\n\n\n\n\n",
"msg_date": "Fri, 18 Jul 2003 09:09:31 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another slow join query.."
},
{
"msg_contents": "\n\nThe Types of the join columns were different text vs varchar(100),\nnow its working fine and using a Hash Join\n\nThanks once again.\nregds\nmallah.\n\n\n\n explain analyze select b.state,a.city from data_bank.updated_profiles a\n join public.city_master b using(city) where source='BRANDING' and\n a.state is NULL and b.country='India' ; QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=2806.09..3949.37 rows=28 width=92) (actual\n time=183.05..326.52 rows=18285 loops=1) Hash Cond: (\"outer\".city = \"inner\".city)\n -> Index Scan using city_master_temp1 on city_master b \n (cost=0.00..854.87 rows=5603 width=24) (actual time=0.17..45.70\n rows=5603 loops=1) Filter: (country = 'India'::character varying)\n -> Hash (cost=2805.65..2805.65 rows=178 width=68) (actual\n time=181.74..181.74 rows=0 loops=1) -> Seq Scan on updated_profiles a (cost=0.00..2805.65 rows=178\n width=68) (actual time=20.53..149.66 rows=17537 loops=1) Filter: ((source = 'BRANDING'::character varying) AND\n (state IS NULL)) Total runtime: 348.50 msec\n(8 rows)\n\n\n\n\n\n\n> On Fri, 18 Jul 2003, Rajesh Kumar Mallah wrote:\n>\n>> Hi All,\n>>\n>> data_bank.updated_profiles and public.city_master are small tables\n>> with 21790 and 49303 records repectively. both have indexes on the\n>> join column. in first one on (city,source) and in second one on (city)\n>>\n>> The query below does not return for long durations > 10 mins.\n>>\n>> explain analyze select b.state,a.city from data_bank.updated_profiles\n>> a join public.city_master b using(city) where source='BRANDING' and\n>> a.state is NULL and b.country='India' ;\n>>\n>>\n>> simple explain returns below.\n>>\n>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>>\n>> Nested Loop (cost=0.00..83506.31 rows=14 width=35)\n>> Join Filter: (\"outer\".city = (\"inner\".city)::text)\n>> -> Seq Scan on updated_profiles a (cost=0.00..1376.39 rows=89\n>> width=11)\n>> Filter: ((source = 'BRANDING'::character varying) AND (state\n>> IS NULL))\n>> -> Index Scan using city_master_temp1 on city_master b\n>> (cost=0.00..854.87\n>> rows=5603 width=24)\n>> Filter: (country = 'India'::character varying)\n>> (6 rows)\n>\n> How many rows actually meet the filter conditions on updated_profiles\n> and city_master? Are the two city columns of the same type?\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n",
"msg_date": "Fri, 18 Jul 2003 23:11:10 +0530 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another slow join query.. [ SOLVED ]"
}
] |
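A minimal sketch of the fix this thread converges on: when the two city columns have different declared types (text vs varchar(100)), old releases fall back to a nested loop, and making the types match — or casting one side explicitly — lets the planner consider a hash join. Table and column names below follow the thread; whether the cast alone is enough on 7.3 is an assumption, and unifying the declared column types, as the poster did, is the more reliable fix.

    -- cast one side so both join keys are text
    explain analyze
    select b.state, a.city
    from data_bank.updated_profiles a
    join public.city_master b on a.city = b.city::text
    where a.source = 'BRANDING'
      and a.state is NULL
      and b.country = 'India';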
[
{
"msg_contents": "> > >Adam Witney wrote:\n> [snip]\n> > If you would go with that one, make sure to get the optional BBWC \n> > (Battery Backed Write Cache). Without it the controller \n> won't enable \n> > the write-back cache (which it really shouldn't, since it \n> wouldn't be \n> > safe without the batteries). WB cache can really speed things on in \n> > many db situations - it's sort of like \"speed of fsync off, \n> security \n> > of fsync on\". I've seen huge speedups with both postgresql \n> and other \n> > databases on that.\n> \n> Don't forget to check the batteries!!! And if you have an \n> HPaq service contract, don't rely on them to do it...\n\nThat's what management software is for.. :-) (Yes, it does check the\nbatteries. They are also reported on reboot, but you don't want to do\nthat often, of course)\n\nUnder the service contract, HP will *replace* the batteries for free,\nthough - but you have to know when to replace them.\n\n//Magnus\n",
"msg_date": "Fri, 18 Jul 2003 15:25:54 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware performance"
}
] |
[
{
"msg_contents": "> Be sure to mount noatime \n\nI did \"chattr -R +A /var/lib/pgsql/data\"\nthat should do the trick as well or am I wrong?\n\nregards,\nOli\n",
"msg_date": "Fri, 18 Jul 2003 18:20:55 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sanity check requested"
},
{
"msg_contents": "On 2003-07-18 18:20:55 +0200, Oliver Scheit wrote:\n> > Be sure to mount noatime \n> \n> I did \"chattr -R +A /var/lib/pgsql/data\"\n> that should do the trick as well or am I wrong?\n> \n\nAccording to the man page it gives the same effect. There are a few things you\nshould consider though:\n- new files won't be created with the same options (I think), so you'll have\nto run this command as a daily cronjob or something to that effect\n- chattr is probably more filesystem-specific than a noatime mount, although\nthis isn't a problem on ext[23] ofcourse\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Fri, 18 Jul 2003 18:28:55 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity check requested"
}
] |
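A sketch of the two approaches compared in this thread (the device name and mount point are examples, not taken from the thread): the noatime mount option is set per filesystem in /etc/fstab, while chattr +A is applied per file or directory and, as noted above, may need to be re-applied so files created later are covered.

    # /etc/fstab: disable atime updates for the whole database volume
    /dev/sda3   /var/lib/pgsql   ext3   defaults,noatime   1 2

    # per-directory alternative; re-run periodically to catch new files
    chattr -R +A /var/lib/pgsql/data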
[
{
"msg_contents": ">> > Be sure to mount noatime \n>> \n>> I did \"chattr -R +A /var/lib/pgsql/data\"\n>> that should do the trick as well or am I wrong?\n>> \n>\n> According to the man page it gives the same effect.\n> There are a few things you should consider though:\n> - new files won't be created with the same options (I think),\n> so you'll have to run this command as a daily cronjob or\n> something to that effect\n\nThis would be a really interesting point to know.\nI will look into this.\n\nI think the advantage of \"chattr\" is that the last access time\nis still available for the rest of the filesystem.\n(Of course you could have your own filesystem just for the\ndatabase stuff, in this case the advantage would be obsolete)\n\nregards,\nOli\n",
"msg_date": "Fri, 18 Jul 2003 18:42:26 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sanity check requested"
}
] |
[
{
"msg_contents": "I currently have a J2EE app that allows our users to upload files. The \nactual file gets stored on the same disk as the webserver is running \non, while the information they entered about the file gets stored in \nthe database. We now need to move the database to a different machine \nand I'm wondering if we should just start storing the files as BLOBs \nwhile we're at it so that the files and their data all stay together. \nThis would make moving the database and backing up the data a lot \neasier.\n So, I'm wondering how many people out there are using Postgres to \nstore binary data. Our new database server is Linux running on a dual \nXeon 2.6Ghz with 1GB of RAM and two 36GB 10K RPM Ultra 320 SCSI hard \ndrives in RAID 0. The files we're storing are small images and \nringtones for cell phones. The average file size is about 40KB.\n I had originally chosen to store the files outside the database \nbecause I thought there was a need to be able to browse those files \noutside of the J2EE app. That turned out to not be the case. I also \ndidn't realize how small these files were going to be.\n The total size of our file directory is 775MB. Should I have any \nconcern that Postgres is going to have problems with handling that many \nfiles or that much data for the machine I described above?\n Thanks,\n -M@\n\n",
"msg_date": "Sat, 19 Jul 2003 14:21:42 -0700",
"msg_from": "Matthew Hixson <[email protected]>",
"msg_from_op": true,
"msg_subject": "storing files in Postgres"
}
] |
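A minimal sketch of keeping the files and their metadata together in the database, assuming a bytea column is used rather than large objects; the table and column names are illustrative, not from the original application. At an average of ~40KB per file and ~775MB total, this stays modest for the hardware described.

    CREATE TABLE media_file (
        id         serial PRIMARY KEY,
        filename   text NOT NULL,
        mime_type  text,
        uploaded   timestamp DEFAULT now(),
        content    bytea NOT NULL   -- small images/ringtones, ~40KB each
    );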
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello,\n\n\tI have an application I wrote that uses two tables heavily. The first table \nis a queue table, the second table is an archive table. The queue table is \nconstantly taking on new records at the rate of 10/second. My software pulls \nthe logs from the queue table, processes them, moves them to the archive \ntable, and then deletes the processed entries from the queue table.\n\n\tSymptoms:\n\tMy system will run great after a full vacuum(as I would expect). It will run \nall day long taking only 3-5 seconds to run and deal with approximately \n100megs of new data each day. However, the instant the system finishes only \na 'vacuum analyze', the whole thing slows WAY down to where each run can take \n10-15 minutes.\n\n\tie. After a full analyze vacuum, my software will load, process, move, and \ndelete in under 3-4 seconds. After a analyze vacuum(notice: not full), it \ncan load, process and move data in 3-5 seconds but the delete takes 10-15 \nminutes! I submit the delete as one transaction to clear out the records \nprocessed. Trunactae won't work because other records are coming in while I \nprocess data. Mind you the archive table is 15 million records while the \ntemporary table is maybe 10-20,000 records. \n\n\tNow I just rewrote a portion of my application to change its behavior. What \nit did before was that it would pile through a 10 gig archive table, \nprocessed logs, etc... in about 3 minutes but I did not delete in the same \nway because everything is already in one table. My software has to run every \nfive minutes so the three minute runtime is getting close for process \noverlap(yuck). \n\n\tRecap\n\tThe old system didn't delete records but plowed through the 10 gig db and \ntakes 3 1/2 minutes to do its job. The new system flies through the smaller \nqueue table(100-200k) but it dies after conducting a non-full vacuum.\n\n\tIs the planner just that much better at analyzing a full then an regular \nanalyze or is there something else I'm missing?\n\nThe Box:\nThe DB is a dual P4 2.4ghz Xeon w/ 1.5 gig of RAM. IBM 335 w/ 36gig mirrored.\n\nkernel.shmmax = 1342177280\n\nshared_buffers = 115200 # 2*max_connections, min 16\nsort_mem = 65536 # min 32\nvacuum_mem = 196608 # min 1024\nfsync = false\n\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQE/GbajqtjaBHGZBeURAkKiAJ9zaqQISD47XycRcSgDKbNeuqqaKQCfcgim\nyCdaycBg4+99Epd7EuAAxsE=\n=9xlS\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 19 Jul 2003 16:22:40 -0500",
"msg_from": "\"Jeremy M. Guthrie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor delete performance AFTER vacuum analyze"
},
{
"msg_contents": "\"Jeremy M. Guthrie\" <[email protected]> writes:\n> \tMy system will run great after a full vacuum(as I would expect). It will run \n> all day long taking only 3-5 seconds to run and deal with approximately \n> 100megs of new data each day. However, the instant the system finishes only \n> a 'vacuum analyze', the whole thing slows WAY down to where each run can take \n> 10-15 minutes.\n\nCould we see EXPLAIN ANALYZE for the deletion query in both the fast and\nslow states?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jul 2003 00:44:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor delete performance AFTER vacuum analyze "
},
{
"msg_contents": "On Sat, 19 Jul 2003, Jeremy M. Guthrie wrote:\n\n> 100megs of new data each day. However, the instant the system finishes only \n> a 'vacuum analyze', the whole thing slows WAY down to where each run can take \n> 10-15 minutes.\n\nHave you run EXPLAIN ANALYZE on the delete query before and after the \nvacuum? Does it explain why it goes slower?\n\n-- \n/Dennis\n\n",
"msg_date": "Sun, 20 Jul 2003 07:42:07 +0200 (CEST)",
"msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor delete performance AFTER vacuum analyze"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI looked back at my code and I also need to reclarify something. The delete \nat the end is multiple delete statements within a transaction.\n\nAfter full vacuum with 160,000 records in Table: (takes a bit the first time \nthrough)\nTlog=# explain analyze delete from Tlog where Tlog_ID <= 47766002 and \nhost='tbp-pp';\n QUERY PLAN\n- -----------------------------------------------------------------------------------------------------------------------------\n Index Scan using shost_idx on tlog (cost=0.00..6281.45 rows=136 width=6) \n(actual time=64529.43..64529.43 rows=0 loops=1)\n Index Cond: (host = 'tbp-pp'::character varying)\n Filter: (tlog_id <= 47766002)\n Total runtime: 64529.52 msec\n\nAfter zero records in table: (\nTlog=# explain analyze delete from Tlog where Tlog_ID <= 47766002 and \nhost='tbp-pp'; \n QUERY PLAN\n- -----------------------------------------------------------------------------------------------------------------------\n Index Scan using shost_idx on tlog (cost=0.00..6281.45 rows=136 width=6) \n(actual time=84.87..84.87 rows=0 loops=1)\n Index Cond: (host = 'tbp-pp'::character varying)\n Filter: (tlog_id <= 47766002)\n Total runtime: 84.96 msec\n\nSlow Explain after vacuum analyze: (this is when it gets bad)\nTLog=# explain analyze delete from Tlog where Tlog_ID <= 47766002 and \nshost='tbp-pp';\n QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------\n Index Scan using shost_idx on tlog (cost=0.00..6128.52 rows=82 width=6) \n(actual time=262178.82..262178.82 rows=0 loops=1)\n Index Cond: (host = 'tbp-pp'::character varying)\n Filter: (tlog_id <= 47766002)\n Total runtime: 262178.96 msec\n\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQE/GysLqtjaBHGZBeURAhNTAJ0QA2/eZM/DhSyxmXi89i6kXFQFwgCfacZY\nUIMUdK95O3N0UpOTxedM6Pw=\n=laUO\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sun, 20 Jul 2003 18:51:39 -0500",
"msg_from": "\"Jeremy M. Guthrie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor delete performance AFTER vacuum analyze"
},
{
"msg_contents": "\"Jeremy M. Guthrie\" <[email protected]> writes:\n> I looked back at my code and I also need to reclarify something. The delete \n> at the end is multiple delete statements within a transaction.\n\nI think you are going to have to show us all the details of what you're\ndoing in between these queries. I was expecting to see a difference in\nquery plans, but you've got the exact same plan in all three runs ---\nso it's not the planner making the difference here, nor the ANALYZE\nstatistics. My best conjecture now is something odd about the way you\nare deleting the old data or loading the new.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jul 2003 01:09:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor delete performance AFTER vacuum analyze "
}
] |
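The queue/archive pattern described in this thread, as a hedged sketch with illustrative table and column names (the poster's actual schema and cutoff logic are not shown): move processed rows and delete them in one transaction, then vacuum the small queue table frequently so dead tuples from the constant deletes do not accumulate between full vacuums.

    BEGIN;
    INSERT INTO archive SELECT * FROM queue WHERE id <= 47766002;
    DELETE FROM queue WHERE id <= 47766002;
    COMMIT;
    VACUUM ANALYZE queue;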
[
{
"msg_contents": "Dear Gurus,\n\nI have a query discussed here earlier that suffers heavily from \"lack of\nview flattening\" in v7.3. Following Tom's guidance, I made a conclusion to\nthat thread\n(http://archives.postgresql.org/pgsql-performance/2003-05/msg00215.php)\nand asked it to be confirmed or fixed, but I didn't get any responses.\n\nHere are some times, for which I'd like to get some response.\n\nOld machine is New machine is\n* PIII 800, * Dual Xeon 2.4,\n* IDE 7200, * 5xSCSI 10000 HW RAID 5,\n* psql 7.2.1, * psql 7.3.3,\n* orig conf * orig and crude conf, as below.\n\n* old: 18 sec * new: 24 sec\n * new w/ vacuum full verbose analyze: 30-31 sec (!!!)\n\n1. Are these times (18 vs 24) believable with such heavy HW change or is\nthere something fishy about it?\n* I know multiprocessing doesn't come in view with a single query\n* but cpu and hw speed should\n* I know 7.3 is slower because of unflattened views\n\n2. What may be the cause of VACUUM slowing the query?\n\n3. Disabling any one of mergejoin, hashjoin, seqscan did no good. Disabling\nsort prevented query from finishing in several minutes.\n\n4. I have tried to crudely carve optimizer settings as below, but it changed\nnothing according to this query. Any further ideas? Note that time tests\nwere taken in close succession (test; killall -HUP postmaster; test; ...)\n\nIf needed, I can attach query, exp-ana outputs before and after vacuum\n(carved and uncarved conf file), and the vacuum log itself.\n\nTIA,\nG.\n------------------------------- cut here -------------------------------\nshared_bufers = 4096\nsort_mem = 4096\neffective_cache_size = 20000\nrandom_page_cost = 1.5\n------------------------------- cut here -------------------------------\n\n",
"msg_date": "Mon, 21 Jul 2003 07:58:56 +0200",
"msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ugly query slower in 7.3, even slower after vacuum full analyze"
}
] |
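The per-session planner experiments mentioned above ("disabling any one of mergejoin, hashjoin, seqscan") can be done without touching postgresql.conf; the SET values last only for the session and can be restored with RESET. This is a generic sketch, not the poster's exact test script.

    -- check what the planner is currently allowed to do in this session
    SHOW enable_mergejoin;
    SET enable_mergejoin = off;
    SET enable_hashjoin = off;
    -- run EXPLAIN ANALYZE on the query under test here, then restore:
    RESET enable_mergejoin;
    RESET enable_hashjoin;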
[
{
"msg_contents": "Hi guys,\n\nI am new to PostgreSQL and have done some \"extensive\" research already. If you could give me some advice/confirmation, I would be really grateful.\n\nI am going to build a PostgreSQL database server for a client. This database will contain many tables (over 100, maybe more), with some of them containing over 1 million records pretty soon. This database will be accessed via a desktop application (Windows) and a web application (PHP/Apache). There may be over 100 people accessing the database at any given time (selecting, inserting and updating), but I don't think it will be a lot more than 100 at any given time. Most of the time, it will be less.\n\nWhat I am thinking about is buying a server with the following specifications:\n\n* 1 or 2 Intel Xeon processors (2.4 GHz).\n* 2 Gigabytes of RAM (DDR/ECC).\n* Three 36Gb SCSI160 disks (10.000rpm) in a RAID-5 config, giving 72Gb storage space (right?). The RAID-5 controller has a (hardware) cache of 128Mb.\n* 100Mbit ethernet.\n\nI will run RedHat Linux 9 (kernel 2.40) with PostgreSQL 7.3.3 on this server.\n\nWhat would you think of this hardware config? Would it do? Of would 4Gb RAM be a lot better? What do you think about the need for two Xeon procs?\n\nFinally, I have some questions about postgresql.conf (who doesnt?). After some research, I think I will go for the following settings initially. Maybe benchmarking will lead to other values, but I think these settings will be a fine starting point :\n\nshared_buffers = 6000 (kernel.shmmax = 60000000)\nsort_mem = 4096\nmax_connections = 150\nvacuum_mem = 65536\n\nWhat do you think of these settings? Do you have any other hints for optimizing PostgreSQL\n\nMany many thanks in advance :)\n\n\nKind regards,\n\nAlexander Priem\nCICT Solutions\nEmail: [email protected]\nInternet: www.cict.nl\n\n\n\n\n\n\n\nHi guys,\n \nI am new to PostgreSQL and have done some \n\"extensive\" research already. If you could give me some advice/confirmation, I \nwould be really grateful.\n \nI am going to build a PostgreSQL database server \nfor a client. This database will contain many tables (over 100, \nmaybe more), with some of them containing over 1 million records pretty \nsoon. This database will be accessed via a desktop application (Windows) and a \nweb application (PHP/Apache). There may be over 100 people accessing the \ndatabase at any given time (selecting, inserting and updating), but I \ndon't think it will be a lot more than 100 at any given time. Most of the time, \nit will be less.\n \nWhat I am thinking about is buying a server with \nthe following specifications:\n \n* 1 or 2 Intel Xeon processors (2.4 \nGHz).\n* 2 Gigabytes of RAM (DDR/ECC).\n* Three 36Gb SCSI160 disks (10.000rpm) in a RAID-5 \nconfig, giving 72Gb storage space (right?). The RAID-5 controller has \na (hardware) cache of 128Mb.\n* 100Mbit ethernet.\n \nI will run RedHat Linux 9 (kernel 2.40) with \nPostgreSQL 7.3.3 on this server.\n \nWhat would you think of this hardware config? Would \nit do? Of would 4Gb RAM be a lot better? What do you think about the need for \ntwo Xeon procs?\n \nFinally, I have some questions about \npostgresql.conf (who doesnt?). After some research, I think I will go for the \nfollowing settings initially. Maybe benchmarking will lead to other values, but \nI think these settings will be a fine starting point :\n \nshared_buffers = 6000 (kernel.shmmax = \n60000000)\nsort_mem = 4096\nmax_connections = 150\nvacuum_mem = 65536\n \nWhat do you think of these settings? 
Do you have \nany other hints for optimizing PostgreSQL\n \nMany many thanks in advance \n:)\n \nKind regards,Alexander PriemCICT SolutionsEmail: [email protected]: www.cict.nl",
"msg_date": "Mon, 21 Jul 2003 10:31:19 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning PostgreSQL"
},
{
"msg_contents": "On 21 Jul 2003 at 10:31, Alexander Priem wrote:\n> What I am thinking about is buying a server with the following specifications:\n> \n> * 1 or 2 Intel Xeon processors (2.4 GHz).\n> * 2 Gigabytes of RAM (DDR/ECC).\n> * Three 36Gb SCSI160 disks (10.000rpm) in a RAID-5 config, giving 72Gb storage \n> space (right?). The RAID-5 controller has a(hardware) cache of 128Mb.\n> * 100Mbit ethernet.\n> \n> I will run RedHat Linux 9 (kernel 2.40) with PostgreSQL 7.3.3 on this server.\n\nYou might scale down a little on hardware front if required. Of course, if you \ncan get it, get it.\n\n> What would you think of this hardware config? Would it do? Of would 4Gb RAM be \n> a lot better? What do you think about the need for two Xeon procs?\n\nI would say get an SMP board with one processor in it. If requierd you can \nupgrade. I suppose that would make hefty difference in price.\n\n> shared_buffers = 6000(kernel.shmmax = 60000000)\n> sort_mem = 4096\n> max_connections = 150\n> vacuum_mem = 65536\n\neffective_cache_size\nnoatime for data partition\nA good filesystem.\nWAL on separate drive.\n\nNow that is a good start..\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I'm on a seafood diet -- I see food and I eat it.\"\n\n",
"msg_date": "Mon, 21 Jul 2003 14:21:53 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Hi all,\n\nI am wondering about something: I have a table (let's call it \"Table\") which will contain a lot of records. Every record has a field named \"deleted\" which can be either NULL or a date value. When this field is NULL, the record in question may be used by a program. If the field contains a date, this field must be considered as \"deleted\" and cannot be used anymore.\n\nThe reason why I don't actually delete such records is that I want to be able to reference them from other locations (for history purposes).\n\nWhat I am thinking about is creating two views for this table: Table_View and Table_History. Table_View would contain all records where \"Deleted is null\". Table_History would just contain all records (Select * From Table).\n\nIn my program most queries would need to view only the records where deleted is null.\n\nWould \" Select * from Table_View Where Name='xxx' \" perform worse than \" Select * from Table Where deleted is null and Name='xxx' \" ?\n\nI ask this because I would like it if I wouldn't have to type \"where deleted is null\" for about every query in my program. But I will not use this strategy if this would mean serious performance loss...\n\nThanks in Advance,\nAlexander Priem.\n\n\n\n\n\n\nHi all,\n \nI am wondering about something: I have a table \n(let's call it \"Table\") which will contain a lot of records. Every record \nhas a field named \"deleted\" which can be either NULL or a date value. When this \nfield is NULL, the record in question may be used by a program. If the field \ncontains a date, this field must be considered as \"deleted\" and cannot be used \nanymore.\n \nThe reason why I don't actually delete such records \nis that I want to be able to reference them from other locations (for history \npurposes).\n \nWhat I am thinking about is creating two views for \nthis table: Table_View and Table_History. Table_View would contain all records \nwhere \"Deleted is null\". Table_History would just contain all records (Select * \nFrom Table).\n \nIn my program most queries would need to view only \nthe records where deleted is null.\n \nWould \" Select * from Table_View Where Name='xxx' \" \nperform worse than \" Select * from Table Where deleted is null \nand Name='xxx' \" ?\n \nI ask this because I would like it if I wouldn't \nhave to type \"where deleted is null\" for about every query in my program. \nBut I will not use this strategy if this would mean serious \nperformance loss...\n \nThanks in Advance,\nAlexander Priem.",
"msg_date": "Thu, 14 Aug 2003 13:42:50 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "'View'-performance"
},
{
"msg_contents": "\"Alexander Priem\" <[email protected]> writes:\n> What I am thinking about is creating two views for this table: Table_View a=\n> nd Table_History. Table_View would contain all records where \"Deleted is nu=\n> ll\". Table_History would just contain all records (Select * From Table).\n\n> Would \" Select * from Table_View Where Name=3D'xxx' \" perform worse than \" =\n> Select * from Table Where deleted is null and Name=3D'xxx' \" ?\n\nThey'd be exactly the same (modulo a few extra microseconds/milliseconds\nfor the query planner to expand the view definition).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Aug 2003 08:00:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'View'-performance "
}
] |
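A minimal sketch of the two-view setup discussed above, using generic names in place of the question's \"Table\"; since the planner expands view definitions inline, querying the view with an extra WHERE clause produces essentially the same plan as writing the "deleted is null" condition by hand.

    CREATE VIEW table_view AS
        SELECT * FROM some_table WHERE deleted IS NULL;

    CREATE VIEW table_history AS
        SELECT * FROM some_table;

    -- behaves like: SELECT * FROM some_table WHERE deleted IS NULL AND name = 'xxx'
    SELECT * FROM table_view WHERE name = 'xxx';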
[
{
"msg_contents": "Hi Alexander ,\n\nOn 21 Jul 2003 at 11:23, Alexander Priem wrote:\n> So the memory settings I specified are pretty much OK?\n\nAs of now yes, You need to test with these settings and make sure that they \nperform as per your requirement. That tweaking will always be there...\n\n> What would be good guidelines for setting effective_cache_size, noatime ?\n\nI suggest you look at \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html. \n\nThat should help you.\n\n> I use ext3 filesystem, which probably is not the best performer, is it?\n\nNo. You also need to check ext2, reiser and XFS. There is no agreement between \nusers as in what works best. You need to benchmark and decide.\n\n> I will set the WAL on a separate drive. What do I need to change in the conf\n> files to achive this?\n\nNo. You need to shutdown postgresql server process and symlink WAL and clog \ndirectories in postgresql database cluster to another place. That should do it.\n\nHTH\n\nBye\n Shridhar\n\n--\nMeade's Maxim:\tAlways remember that you are absolutely unique, just like everyone else.\n\n",
"msg_date": "Mon, 21 Jul 2003 15:03:14 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning PostgreSQL"
},
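A sketch of the symlink step described in the preceding message; the target path is only an example, and the server must be stopped while the directory is moved.

    pg_ctl stop -D /var/lib/pgsql/data
    mv /var/lib/pgsql/data/pg_xlog /mnt/fastdisk/pg_xlog
    ln -s /mnt/fastdisk/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl start -D /var/lib/pgsql/data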
{
"msg_contents": "Thanks, I will look at the site you sent me and purchase some hardware. Then\nI will run some benchmarks.\n\nKind regards,\nAlexander.\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, July 21, 2003 11:33 AM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> Hi Alexander ,\n>\n> On 21 Jul 2003 at 11:23, Alexander Priem wrote:\n> > So the memory settings I specified are pretty much OK?\n>\n> As of now yes, You need to test with these settings and make sure that\nthey\n> perform as per your requirement. That tweaking will always be there...\n>\n> > What would be good guidelines for setting effective_cache_size, noatime\n?\n>\n> I suggest you look at\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html.\n>\n> That should help you.\n>\n> > I use ext3 filesystem, which probably is not the best performer, is it?\n>\n> No. You also need to check ext2, reiser and XFS. There is no agreement\nbetween\n> users as in what works best. You need to benchmark and decide.\n>\n> > I will set the WAL on a separate drive. What do I need to change in the\nconf\n> > files to achive this?\n>\n> No. You need to shutdown postgresql server process and symlink WAL and\nclog\n> directories in postgresql database cluster to another place. That should\ndo it.\n>\n> HTH\n>\n> Bye\n> Shridhar\n>\n> --\n> Meade's Maxim: Always remember that you are absolutely unique, just like\neveryone else.\n>\n\n",
"msg_date": "Mon, 21 Jul 2003 11:40:42 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> On 21 Jul 2003 at 11:23, Alexander Priem wrote:\n\n>>I use ext3 filesystem, which probably is not the best performer, is it?\n> \n> No. You also need to check ext2, reiser and XFS. There is no agreement between \n> users as in what works best. You need to benchmark and decide.\n\nNeed? Maybe I'm a bit disillusioned, but are the performances between \nthe filesystems differ so much as to warrant the additional effort? \n(e.g. XFS doesn't come with Red Hat 9 -- you'll have to patch the \nsource, and compile it yourself).\n\nBenchmarking it properly before deployment is tough: are the test load \non the db/fs representative of actual load? Is 0.5% reduction in CPU \nusage worth it? Did you test for catastrophic failure by pulling the \nplug during write operations (ext2) to test if the fs can handle it? Is \nthe code base for the particular fs stable enough? Obscure bugs in the fs?\n\nFor the record, we tried several filesystems, but stuck with 2.4.9's \next3 (Red Hat Advanced Server). Didn't hit a load high enough for the \nfilesystem choices to matter after all. :(\n\n-- \nLinux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 \nGNU/Linux\n 5:30pm up 207 days, 8:35, 5 users, load average: 5.33, 5.16, 5.21",
"msg_date": "Mon, 21 Jul 2003 18:09:23 +0800",
"msg_from": "Ang Chin Han <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 21 Jul 2003 at 18:09, Ang Chin Han wrote:\n\n> Shridhar Daithankar wrote:\n> > On 21 Jul 2003 at 11:23, Alexander Priem wrote:\n> \n> >>I use ext3 filesystem, which probably is not the best performer, is it?\n> > \n> > No. You also need to check ext2, reiser and XFS. There is no agreement between \n> > users as in what works best. You need to benchmark and decide.\n> \n> Need? Maybe I'm a bit disillusioned, but are the performances between \n> the filesystems differ so much as to warrant the additional effort? \n> (e.g. XFS doesn't come with Red Hat 9 -- you'll have to patch the \n> source, and compile it yourself).\n\nWell, the benchmarking is not to prove which filesystem is fastest and feature \nrich but to find out which one suits your needs best.\n \n> Benchmarking it properly before deployment is tough: are the test load \n> on the db/fs representative of actual load? Is 0.5% reduction in CPU \n> usage worth it? Did you test for catastrophic failure by pulling the \n> plug during write operations (ext2) to test if the fs can handle it? Is \n> the code base for the particular fs stable enough? Obscure bugs in the fs?\n\nWell, that is what that 'benchmark' is supposed to find out. Call it pre-\ndeployment testing or whatever other fancy name one sees fit. But it is a must \nin almost all serious usage.\n \n> For the record, we tried several filesystems, but stuck with 2.4.9's \n> ext3 (Red Hat Advanced Server). Didn't hit a load high enough for the \n> filesystem choices to matter after all. :(\n\nGood for you. You have time at hand to find out which one suits you best. Do \nthe testing before you have load that needs another FS..:-)\n\nBye\n Shridhar\n\n--\nIt would be illogical to assume that all conditions remain stable.\t\t-- Spock, \"The Enterprise\" Incident\", stardate 5027.3\n\n",
"msg_date": "Mon, 21 Jul 2003 16:01:41 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n\n> Good for you. You have time at hand to find out which one suits you best. Do \n> the testing before you have load that needs another FS..:-)\n\nKinda my point is that when we've more load, we'd be using RAID-0 over \nRAID-5, or getting faster SCSI drives, or even turn fsync off if that's \na bottleneck, because the different filesystems do not have that much \nperformance difference[1] -- the filesystem is not a bottleneck. Just \nneed to tweak most of them a bit, like noatime,data=writeback.\n\n[1] That is, AFAIK, from our testing. Please, please correct me if I'm \nwrong: has anyone found that different filesystems produces wildly \ndifferent performance for postgresql, FreeBSD's filesystems not included?\n\n-- \nLinux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 \nGNU/Linux\n 7:00pm up 207 days, 10:05, 5 users, load average: 5.00, 5.03, 5.06",
"msg_date": "Mon, 21 Jul 2003 19:27:54 +0800",
"msg_from": "Ang Chin Han <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 21 Jul 2003 at 19:27, Ang Chin Han wrote:\n> [1] That is, AFAIK, from our testing. Please, please correct me if I'm \n> wrong: has anyone found that different filesystems produces wildly \n> different performance for postgresql, FreeBSD's filesystems not included?\n\nwell, when postgresql starts splitting table files after a gig, filesystem sure \nmakes difference. IIRC, frommy last test XFS was at least 10-15% faster than \nreiserfs for such databases. That was around an year back, with mandrake 8.0.\n\n\nBye\n Shridhar\n\n--\nmodesty, n.:\tBeing comfortable that others will discover your greatness.\n\n",
"msg_date": "Mon, 21 Jul 2003 17:09:30 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "So where can I set the noatime & data=writeback variables? They are not\nPostgreSQL settings, but rather Linux settings, right? Where can I find\nthese?\n\nKind regards,\nAlexander Priem.\n\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, July 21, 2003 12:31 PM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> On 21 Jul 2003 at 18:09, Ang Chin Han wrote:\n>\n> > Shridhar Daithankar wrote:\n> > > On 21 Jul 2003 at 11:23, Alexander Priem wrote:\n> >\n> > >>I use ext3 filesystem, which probably is not the best performer, is\nit?\n> > >\n> > > No. You also need to check ext2, reiser and XFS. There is no agreement\nbetween\n> > > users as in what works best. You need to benchmark and decide.\n> >\n> > Need? Maybe I'm a bit disillusioned, but are the performances between\n> > the filesystems differ so much as to warrant the additional effort?\n> > (e.g. XFS doesn't come with Red Hat 9 -- you'll have to patch the\n> > source, and compile it yourself).\n>\n> Well, the benchmarking is not to prove which filesystem is fastest and\nfeature\n> rich but to find out which one suits your needs best.\n>\n> > Benchmarking it properly before deployment is tough: are the test load\n> > on the db/fs representative of actual load? Is 0.5% reduction in CPU\n> > usage worth it? Did you test for catastrophic failure by pulling the\n> > plug during write operations (ext2) to test if the fs can handle it? Is\n> > the code base for the particular fs stable enough? Obscure bugs in the\nfs?\n>\n> Well, that is what that 'benchmark' is supposed to find out. Call it pre-\n> deployment testing or whatever other fancy name one sees fit. But it is a\nmust\n> in almost all serious usage.\n>\n> > For the record, we tried several filesystems, but stuck with 2.4.9's\n> > ext3 (Red Hat Advanced Server). Didn't hit a load high enough for the\n> > filesystem choices to matter after all. :(\n>\n> Good for you. You have time at hand to find out which one suits you best.\nDo\n> the testing before you have load that needs another FS..:-)\n>\n> Bye\n> Shridhar\n>\n> --\n> It would be illogical to assume that all conditions remain stable. --\nSpock, \"The Enterprise\" Incident\", stardate 5027.3\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Mon, 21 Jul 2003 13:45:06 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 21 Jul 2003 at 13:45, Alexander Priem wrote:\n\n> So where can I set the noatime & data=writeback variables? They are not\n> PostgreSQL settings, but rather Linux settings, right? Where can I find\n> these?\n\nThese are typicaly set in /etc/fstab.conf. These are mount settings. man mount \nfor more details. \n\nThe second setting data=writeback is ext3 specific, IIRC.\n\nHTH\n\nBye\n Shridhar\n\n--\nHistory tends to exaggerate.\t\t-- Col. Green, \"The Savage Curtain\", stardate \n5906.4\n\n",
"msg_date": "Mon, 21 Jul 2003 17:35:03 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Thanks, i'll look further into these mount setting.\n\nI was just thinking, the server will have a (RAID) controller containing\n128Mb of battery-backed cache memory. This would really speed up inserts to\nthe disk and would prevent data loss in case of a power-down also.\n\nWhat would you guys think of not using RAID5 in that case, but just a really\nfast 15.000 rpm SCSI-320 disk?\n\nKind regards,\nAlexander.\n\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, July 21, 2003 2:05 PM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> On 21 Jul 2003 at 13:45, Alexander Priem wrote:\n>\n> > So where can I set the noatime & data=writeback variables? They are not\n> > PostgreSQL settings, but rather Linux settings, right? Where can I find\n> > these?\n>\n> These are typicaly set in /etc/fstab.conf. These are mount settings. man\nmount\n> for more details.\n>\n> The second setting data=writeback is ext3 specific, IIRC.\n>\n> HTH\n>\n> Bye\n> Shridhar\n>\n> --\n> History tends to exaggerate. -- Col. Green, \"The Savage Curtain\", stardate\n> 5906.4\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Mon, 21 Jul 2003 14:43:22 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Mon, 2003-07-21 at 04:33, Shridhar Daithankar wrote:\n> Hi Alexander ,\n> \n> On 21 Jul 2003 at 11:23, Alexander Priem wrote:\n[snip]\n> > I use ext3 filesystem, which probably is not the best performer, is it?\n> \n> No. You also need to check ext2, reiser and XFS. There is no agreement between \n> users as in what works best. You need to benchmark and decide.\n\nAccording to Jeremy Allison of SAMBA, \"\"They used ext3, which is one\nof the slowest filesystems on Linux,\" Allison said. \"In a real\ncomparative test, you would use XFS\".\nhttp://www.linuxworld.com/story/32673.htm\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "22 Jul 2003 04:52:56 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Wow, I never figured how many different RAID configurations one could think\nof :)\n\nAfter reading lots of material, forums and of course, this mailing-list, I\nthink I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\neach), one of those six disks will be a 'hot spare'. I will just put the OS,\nthe WAL and the data one one volume. RAID10 is way to expensive :)\n\nIf I understand correctly, this will give great read-performance, but less\nwrite-performance. But since this server will be equipped with an embedded\nRAID controller featuring 128Mb of battery-backed cache, I figure that this\ncontroller will negate that (at least somewhat). I will need to find out\nwhether this cache can be configured so that it will ONLY cache WRITES, not\nREADS....\n\nAlso because of this battery backed cache controller, I will go for the ext2\nfile system, mounted with 'noatime'. I will use a UPS, so I don't think I\nneed the journaling of ext3. XFS is not natively supported by RedHat and I\nwill go for the easy way here :)\n\n1 Gb of RAM should be enough, I think. That is about the only point that\nalmost everyone agrees on :) Do you think ECC is very important? The\nserver I have in mind does not support it. Another one does, but is is about\n1.000 euros more expensive :(\n\nOne CPU should also be enough.\n\nAs for postgresql.conf settings, I think I will start with the following :\n\nmax_connections = 128\nsuperuser_reserved_connections = 1\nshared_buffers = 8192\nmax_fsm_relations = 1000\nmax_fsm_pages = 100000\nwal_buffers = 32\nsort_mem = 2048\nvacuum_mem = 32768\neffective_cache_size = 28672 (this one I'm not sure about, maybe this one\nneeds to be higher)\nrandom_page_cost = 2\ngeq0_threshold = 20\n\nThis pretty much sums it up. What do you think about this config? It may not\nbe the fastest, but a server like this will cost about 4750 euros, and that\nis including an Intel Xeon 2.4GHz cpu, redundant power supply, WITHOUT the\nUPS. Seems very reasonable to me...\n\nKind regards,\nAlexander Priem.\n\n",
"msg_date": "Tue, 22 Jul 2003 14:53:58 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
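Back-of-envelope arithmetic behind the settings above, assuming the default 8KB page size: shared_buffers = 8192 buffers is 64MB of shared memory, so kernel.shmmax has to sit comfortably above that; effective_cache_size = 28672 pages is roughly 224MB, an estimate of how much of the 1GB of RAM the kernel will keep as disk cache. The exact shmmax value below is only an example.

    # 128MB leaves plenty of headroom for 64MB of shared buffers plus overhead
    sysctl -w kernel.shmmax=134217728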
{
"msg_contents": "On Tue, 2003-07-22 at 07:53, Alexander Priem wrote:\n> Wow, I never figured how many different RAID configurations one could think\n[snip]\n> Also because of this battery backed cache controller, I will go for the ext2\n> file system, mounted with 'noatime'. I will use a UPS, so I don't think I\n> need the journaling of ext3.\n\nOooooo, I don't think I'd do that!!!!! It's akin to saying, \"I\ndon't need to make backups, because I have RAID[1,5,10,1+0]\n\nIf the power is out for 26 minutes and your UPS only lasts for 25\nminutes, you could be in be in for a long, painful boot process if\nthe box crashes. (For example, the UPS auto-shutdown daemon doesn't\nwork properly, and no one can get to the console to shut it down\nproperly before the batteries die.)\n\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "22 Jul 2003 09:37:45 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Wed, 2003-07-23 at 00:53, Alexander Priem wrote:\n> Wow, I never figured how many different RAID configurations one could think\n> of :)\n> \n> After reading lots of material, forums and of course, this mailing-list, I\n> think I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\n> each), one of those six disks will be a 'hot spare'. I will just put the OS,\n> the WAL and the data one one volume. RAID10 is way to expensive :)\n\nThe general heuristic is that RAID-5 is not the way to deal with\ndatabases. Now surely someone will disagree with me, but as I\nunderstand it RAID-5 has a bottleneck on a single disk for the\n(checksum) information. Bottleneck is not the word you want to hear in\nthe context of \"database server\".\n\nRAID-1 (mirroring) or RAID-10 (sort-of-mirrored-RAID-5) is the best\nchoice.\n\nAs far as FS performance goes, a year or two ago I remember someone\ndoing an evaluation of FS performance for PostgreSQL and they found that\nthe best performance was...\n\nFAT\n\nYep: FAT\n\nThe reason is that a lot of what the database is doing, especially\nguaranteeing writes (WAL) and so forth is best handled through a\nfilesystem that does not get in the way. The fundamentals will not have\nchanged.\n\nIt is for this reason that ext2 is very much likely to be better than\next3. XFS is possibly (maybe, perhaps) OK, because there are\noptimisations in there for databases, but the best optimisation is to\nnot be there at all. That's why Oracle want direct IO to disk\npartitions so they can implement their own \"filesystem\" (i.e. record\nsystem... table system...) on a raw partition.\n\nPersonally I don't plan to reboot my DB server more than once a year (if\nthat (even my_laptop currently has 37 days uptime, not including\nsuspend). On our DB servers I use ext2 (rather than ext3) mounted with\nnoatime, and I bite the 15 minutes to fsck (once a year) rather than\nscrew general performance with journalling database on top of\njournalling FS. I split pg_xlog onto a separate physical disk, if\nperformance requirements are extreme. \n\nCatalyst's last significant project was to write the Domain Name\nregistration system for .nz (using PostgreSQL). Currently we are\ndeveloping the electoral roll for the same country (2.8 million electors\nliving at 1.4 million addresses). We use Oracle (or Progress, or MySQL)\nif a client demands them, but we use PostgreSQL if we get to choose.\nIncreasingly we get to choose. Good.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n---------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Survey for nothing with http://survey.net.nz/ \n---------------------------------------------------------------------\n\n",
"msg_date": "25 Jul 2003 22:45:38 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "> Andrew McMillan wrote:\n>\n> The general heuristic is that RAID-5 is not the way to deal \n> with databases. Now surely someone will disagree with me, \n> but as I understand it RAID-5 has a bottleneck on a single \n> disk for the\n> (checksum) information. Bottleneck is not the word you want \n> to hear in the context of \"database server\".\nThat's indeed incorrect. There is no single disk \"special\" in a Raid-5,\nyou might be mistaking it for Raid-3/4 (where a single disk holds the\nchecksum). In raid-5 the checksums are scattered around on all the\nharddisks.\nRaid-5's problem is the write-performance, but with a decent\nraid-controller it outperforms a less-decent raid-controller (with the\nsame harddisks) on both read- and writeperformance which is running a\nraid-10.\nWith a decent raid-controller you end up with \"about the same\" write\nperformance as with raid-1, but slightly lower read performance.\n\nAt least, that's what I was able to gather from some tests of a\ncolleague of mine with different raid-setups.\n\n> RAID-1 (mirroring) or RAID-10 (sort-of-mirrored-RAID-5) is \n> the best choice.\nRaid-10 is _not_ similar to raid-5, it is raid1+0 i.e. a mirroring set\nof stripes (raid-0 is more-or-less a stripe).\n\nFor databases, raid-10 is supposed to be the fastest, since you have the\nadvantage of the striping for both reading and writing. While you also\nhave the advantage of the mirroring for reading.\n\nThe main disadvantage of raid-1 (and also of raid-10) is the heavy waste\nof harddisk space. Another advantage of raid-5 over raid-10 is that when\nyou don't care about space, raid-5 is more save with four harddrives\nthan raid-10 (i.e. set it up with a 3-disk+1spare).\n\n> As far as FS performance goes, a year or two ago I remember \n> someone doing an evaluation of FS performance for PostgreSQL \n> and they found that the best performance was...\n> \n> FAT\n> \n> Yep: FAT\nFAT has a few disadvantages afaik, I wouldn't use it for my database at\nleast.\n\n> Personally I don't plan to reboot my DB server more than once \n> a year (if that (even my_laptop currently has 37 days uptime, \n> not including suspend). On our DB servers I use ext2 (rather \n> than ext3) mounted with noatime, and I bite the 15 minutes to \n> fsck (once a year) rather than screw general performance with \n> journalling database on top of journalling FS. I split \n> pg_xlog onto a separate physical disk, if performance \n> requirements are extreme. \nWell, reboting is not a problem with ext2, but crashing might be... And\nnormally you don't plan a systemcrash ;)\nExt3 and xfs handle that much better.\n\nRegards,\n\nArjen\n\n\n\n",
"msg_date": "Sat, 26 Jul 2003 12:18:37 +0200",
"msg_from": "\"Arjen van der Meijden\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "\"Arjen van der Meijden\" <[email protected]> writes:\n> Well, reboting is not a problem with ext2, but crashing might be... And\n> normally you don't plan a systemcrash ;)\n> Ext3 and xfs handle that much better.\n\nA journaling filesystem is good to use if you can set it to journal\nmetadata but not file contents. PG's WAL logic can recover lost file\ncontents, but we have no way to help out the filesystem if it's lost\nmetadata.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Jul 2003 10:12:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL "
},
{
"msg_contents": "Since there seem to be a lot of different opinions regarding the various\ndifferent RAID configurations I thought I'd post this link to the list:\nhttp://www.storagereview.com/guide2000/ref/hdd/perf/raid/index.html\n\nThis is the best resource for information on RAID and hard drive\nperformance I found online.\n\nI hope this helps.\n\nBalazs\n\n\n\n\n",
"msg_date": "Sat, 26 Jul 2003 14:08:34 -0700",
"msg_from": "\"Balazs Wellisch\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Balasz,\n\n> Since there seem to be a lot of different opinions regarding the various\n> different RAID configurations I thought I'd post this link to the list:\n> http://www.storagereview.com/guide2000/ref/hdd/perf/raid/index.html\n\nYeah ... this is a really good article. Made me realize why \"stripey\" RAID \nsucks for OLTP databases, unless you throw a lot of platters at them.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 28 Jul 2003 10:10:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
}
] |
[
{
"msg_contents": "> What would you guys think of not using RAID5 in that case, but just a really\r\n> fast 15.000 rpm SCSI-320 disk?\r\n\r\n \r\nI'd say you must be able to tolerate losing all the data since your last database backup. Your battery backed cache, rotational speed, and transfer rate aren't going to help at all when the drive itself degrades and corrupts data. If you can really only afford 3 drives, I'd have a single drive with the OS & WAL on it, and the data on a RAID-1 mirror set using the other 2 drives. If you need more space for data, or want your OS drives to be mirrored - it's going to cost more. See if you can get 2x18GB drives for the OS and 2x73GB drives for the data. \r\n \r\nYou have to consider how much headache that small amount of additional money is going to save you (and your users) down the road.\r\n \r\nRoman\r\n\r\n\t-----Original Message----- \r\n\tFrom: Alexander Priem [mailto:[email protected]] \r\n\tSent: Mon 7/21/2003 5:43 AM \r\n\tTo: [email protected]; [email protected] \r\n\tCc: \r\n\tSubject: Re: [PERFORM] Tuning PostgreSQL\r\n\t\r\n\t\r\n\r\n\tThanks, i'll look further into these mount setting.\r\n\t\r\n\tI was just thinking, the server will have a (RAID) controller containing\r\n\t128Mb of battery-backed cache memory. This would really speed up inserts to\r\n\tthe disk and would prevent data loss in case of a power-down also.\r\n\t\r\n\tWhat would you guys think of not using RAID5 in that case, but just a really\r\n\tfast 15.000 rpm SCSI-320 disk?\r\n\t\r\n\tKind regards,\r\n\tAlexander.\r\n\t\r\n\t\r\n\t----- Original Message -----\r\n\tFrom: \"Shridhar Daithankar\" <[email protected]>\r\n\tTo: <[email protected]>\r\n\tSent: Monday, July 21, 2003 2:05 PM\r\n\tSubject: Re: [PERFORM] Tuning PostgreSQL\r\n\t\r\n\t\r\n\t> On 21 Jul 2003 at 13:45, Alexander Priem wrote:\r\n\t>\r\n\t> > So where can I set the noatime & data=writeback variables? They are not\r\n\t> > PostgreSQL settings, but rather Linux settings, right? Where can I find\r\n\t> > these?\r\n\t>\r\n\t> These are typicaly set in /etc/fstab.conf. These are mount settings. man\r\n\tmount\r\n\t> for more details.\r\n\t>\r\n\t> The second setting data=writeback is ext3 specific, IIRC.\r\n\t>\r\n\t> HTH\r\n\t>\r\n\t> Bye\r\n\t> Shridhar\r\n\t>\r\n\t> --\r\n\t> History tends to exaggerate. -- Col. Green, \"The Savage Curtain\", stardate\r\n\t> 5906.4\r\n\t>\r\n\t>\r\n\t> ---------------------------(end of broadcast)---------------------------\r\n\t> TIP 1: subscribe and unsubscribe commands go to [email protected]\r\n\t\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 4: Don't 'kill -9' the postmaster\r\n\t\r\n\r\n",
"msg_date": "Mon, 21 Jul 2003 06:45:32 -0700",
"msg_from": "\"Roman Fail\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "That's true, certainly, and with four disks (2x18 and 2x72 or 36), I would\nbe able to (a) be safe and (b) split the data and WAL.\n\nHmmm. Seems to me that this setup would be better than one RAID5 with three\n36Gb disks, wouldn't you think so? With one RAID5 array, I would still have\nthe data and the WAL on one volume...\n\nThanks for all your help so far.\n\nKind regards,\nAlexander Priem.\n\n\n\n\n----- Original Message -----\nFrom: \"Roman Fail\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>; <[email protected]>;\n<[email protected]>\nSent: Monday, July 21, 2003 3:45 PM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> > What would you guys think of not using RAID5 in that case, but just a\nreally\n> > fast 15.000 rpm SCSI-320 disk?\n>\n>\n> I'd say you must be able to tolerate losing all the data since your last\ndatabase backup. Your battery backed cache, rotational speed, and transfer\nrate aren't going to help at all when the drive itself degrades and corrupts\ndata. If you can really only afford 3 drives, I'd have a single drive with\nthe OS & WAL on it, and the data on a RAID-1 mirror set using the other 2\ndrives. If you need more space for data, or want your OS drives to be\nmirrored - it's going to cost more. See if you can get 2x18GB drives for\nthe OS and 2x73GB drives for the data.\n>\n> You have to consider how much headache that small amount of additional\nmoney is going to save you (and your users) down the road.\n>\n> Roman\n>\n> -----Original Message-----\n> From: Alexander Priem [mailto:[email protected]]\n> Sent: Mon 7/21/2003 5:43 AM\n> To: [email protected]; [email protected]\n> Cc:\n> Subject: Re: [PERFORM] Tuning PostgreSQL\n>\n>\n>\n> Thanks, i'll look further into these mount setting.\n>\n> I was just thinking, the server will have a (RAID) controller containing\n> 128Mb of battery-backed cache memory. This would really speed up inserts\nto\n> the disk and would prevent data loss in case of a power-down also.\n>\n> What would you guys think of not using RAID5 in that case, but just a\nreally\n> fast 15.000 rpm SCSI-320 disk?\n>\n> Kind regards,\n> Alexander.\n>\n>\n> ----- Original Message -----\n> From: \"Shridhar Daithankar\" <[email protected]>\n> To: <[email protected]>\n> Sent: Monday, July 21, 2003 2:05 PM\n> Subject: Re: [PERFORM] Tuning PostgreSQL\n>\n>\n> > On 21 Jul 2003 at 13:45, Alexander Priem wrote:\n> >\n> > > So where can I set the noatime & data=writeback variables? They are\nnot\n> > > PostgreSQL settings, but rather Linux settings, right? Where can I\nfind\n> > > these?\n> >\n> > These are typicaly set in /etc/fstab.conf. These are mount settings. man\n> mount\n> > for more details.\n> >\n> > The second setting data=writeback is ext3 specific, IIRC.\n> >\n> > HTH\n> >\n> > Bye\n> > Shridhar\n> >\n> > --\n> > History tends to exaggerate. -- Col. Green, \"The Savage Curtain\",\nstardate\n> > 5906.4\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n",
"msg_date": "Mon, 21 Jul 2003 16:00:34 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Alexander,\n\n> Hmmm. Seems to me that this setup would be better than one RAID5 with three\n> 36Gb disks, wouldn't you think so? With one RAID5 array, I would still have\n> the data and the WAL on one volume...\n\nDefinitely. As I've said, my experience with RAID5 is that with less than 5 \ndisks, it performs around 40% of a single scsi disk for large read-write \noperation on Postgres. \n\nIf you have only 3 disks, I'd advocate one disk for WAL and one RAID 1 array \nfor the database.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 21 Jul 2003 09:06:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 2003-07-21 09:06:10 -0700, Josh Berkus wrote:\n> Alexander,\n> \n> > Hmmm. Seems to me that this setup would be better than one RAID5 with three\n> > 36Gb disks, wouldn't you think so? With one RAID5 array, I would still have\n> > the data and the WAL on one volume...\n> \n> Definitely. As I've said, my experience with RAID5 is that with less than 5 \n> disks, it performs around 40% of a single scsi disk for large read-write \n> operation on Postgres. \n> \n> If you have only 3 disks, I'd advocate one disk for WAL and one RAID 1 array \n> for the database.\n> \n\nIn this setup your database is still screwed if a single disk (the WAL disk)\nstops working. You'll have to revert to your last backup if this happens. The\nRAID-1 redundancy on your data disks buys you almost nothing: marginally\nbetter performance and no real redundancy should a single disk fail.\n\nI'd use RAID-5 if you absolutely cannot use more disks, but I would use\nRAID-10 or two RAID-1 partitions if you can afford to use 4 disks.\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Mon, 21 Jul 2003 18:28:52 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "Hi all,\n\nVincent, You said that using RAID1, you don't have real redundancy. But\nRAID1 is mirroring, right? So if one of the two disks should fail, there\nshould be no data lost, right?\n\nI have been thinking some more. 18Gb drives are cheaper than 36 or 72Gb\ndrives. I don't know if I can get the money for this, but how would the\nfollowing setup sound?\n\nTwo 18Gb (15.000rpm) disks in RAID1 array for Operating System + WAL.\nFour 18Gb (15.000rpm) disks in RAID5 array for data.\n\nFor the same amount of money, I could also get:\n\nTwo 36Gb (10.000rpm) disks in RAID1 array for Operating System + WAL.\nFive/Six 36Gb (10.000rpm) disks in RAID5 array for data.\n\nWhich would be the best of the above? The one with four 15k-rpm disks or the\none with five/six 10k-rpm disks?\nWould these configs be better than all disks in one huge RAID5 array? There\nare so many possible configs with RAID.......\n\nKind regards,\nAlexander Priem.\n\n\n\n----- Original Message -----\nFrom: \"Vincent van Leeuwen\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, July 21, 2003 6:28 PM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> On 2003-07-21 09:06:10 -0700, Josh Berkus wrote:\n> > Alexander,\n> >\n> > > Hmmm. Seems to me that this setup would be better than one RAID5 with\nthree\n> > > 36Gb disks, wouldn't you think so? With one RAID5 array, I would still\nhave\n> > > the data and the WAL on one volume...\n> >\n> > Definitely. As I've said, my experience with RAID5 is that with less\nthan 5\n> > disks, it performs around 40% of a single scsi disk for large read-write\n> > operation on Postgres.\n> >\n> > If you have only 3 disks, I'd advocate one disk for WAL and one RAID 1\narray\n> > for the database.\n> >\n>\n> In this setup your database is still screwed if a single disk (the WAL\ndisk)\n> stops working. You'll have to revert to your last backup if this happens.\nThe\n> RAID-1 redundancy on your data disks buys you almost nothing: marginally\n> better performance and no real redundancy should a single disk fail.\n>\n> I'd use RAID-5 if you absolutely cannot use more disks, but I would use\n> RAID-10 or two RAID-1 partitions if you can afford to use 4 disks.\n>\n> Vincent van Leeuwen\n> Media Design - http://www.mediadesign.nl/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Tue, 22 Jul 2003 09:04:42 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 2003-07-22 09:04:42 +0200, Alexander Priem wrote:\n> Hi all,\n> \n> Vincent, You said that using RAID1, you don't have real redundancy. But\n> RAID1 is mirroring, right? So if one of the two disks should fail, there\n> should be no data lost, right?\n> \n\nRight. But the proposal was a single disk for WAL, without redundancy, and I\nargued that wasn't really safe. RAID1 by itself is extremely safe, possibly\neven the safest RAID type there is.\n\n> I have been thinking some more. 18Gb drives are cheaper than 36 or 72Gb\n> drives. I don't know if I can get the money for this, but how would the\n> following setup sound?\n> \n> Two 18Gb (15.000rpm) disks in RAID1 array for Operating System + WAL.\n> Four 18Gb (15.000rpm) disks in RAID5 array for data.\n> \n\nOur own testing has shown that a 6 disk RAID-10 array is faster than what you\ndescribe. Of course, this is very much dependant on how much INSERT/UPDATES\nyou generate (which taxes your WAL more), so your mileage may vary.\n\n> For the same amount of money, I could also get:\n> \n> Two 36Gb (10.000rpm) disks in RAID1 array for Operating System + WAL.\n> Five/Six 36Gb (10.000rpm) disks in RAID5 array for data.\n> \n\nIt is said that a higher RPM is particularly useful for a WAL disk. So you\nmight consider using two 18GB 15K rpm drives for a RAID-1 WAL disk (+OS and\nswap), and using 36GB 10K rpm disks in a RAID-5 array if you need that\ndiskspace.\n\n> Which would be the best of the above? The one with four 15k-rpm disks or the\n> one with five/six 10k-rpm disks?\n> Would these configs be better than all disks in one huge RAID5 array? There\n> are so many possible configs with RAID.......\n> \n\n15K rpm disks are significantly faster than 10K rpm disks. If your only\nconcern is performance, buy 15K rpm disks. If you want more diskspace for your\nmoney, fall back to larger 10K rpm disks.\n\nI personally think seperate WAL disks are vastly overrated, since they haven't\nshown a big performance gain in our own tests. But as I have said, this is\nextremely dependant on the type of load you generate, so only your own tests\ncan tell you what you should do in this respect.\n\nAbout RAID types: the fastest RAID type by far is RAID-10. However, this will\ncost you a lot of useable diskspace, so it isn't for everyone. You need at\nleast 4 disks for a RAID-10 array. RAID-5 is a nice compromise if you want as\nmuch useable diskspace as possible and still want to be redundant. RAID-1 is\nvery useful for small (2-disk) arrays.\n\nIf you have the time and are settled on buying 6 disks, I'd test the following\nscenarios:\n- 6-disk RAID-10 array (should perform best)\n- 4-disk RAID-10 array containing data, 2-disk RAID-1 array for WAL, OS, etc\n- 4-disk RAID-5 array containing data, 2-disk RAID-1 array for WAL, OS, etc\n- 6-disk RAID-5 array (will probably perform worst)\n\n\nHope this helps.\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n",
"msg_date": "Tue, 22 Jul 2003 11:40:35 +0200",
"msg_from": "Vincent van Leeuwen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
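As an aside on the configurations being weighed here, the usable capacity follows directly from the RAID level: mirrored layouts (RAID-1, RAID-10) keep half the raw space, while RAID-5 gives up one disk's worth to parity. A minimal sketch of that arithmetic, using the disk counts and sizes proposed in the thread (Python used purely for illustration):

    # Usable capacity of the layouts discussed above (illustrative only).
    def usable_gb(level, disks, size_gb):
        if level in ("raid1", "raid10"):      # mirrored: half the raw space
            return disks * size_gb // 2
        if level == "raid5":                  # one disk's worth goes to parity
            return (disks - 1) * size_gb
        raise ValueError(level)

    print(usable_gb("raid1", 2, 18), usable_gb("raid5", 4, 18))   # 18 GB OS/WAL + 54 GB data
    print(usable_gb("raid1", 2, 36), usable_gb("raid5", 6, 36))   # 36 GB OS/WAL + 180 GB data (six-disk case)
    print(usable_gb("raid10", 6, 18))                             # 54 GB in a single array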
{
"msg_contents": "Wow, I never figured how many different RAID configurations one could think\nof :)\n\nAfter reading lots of material, forums and of course, this mailing-list, I\nthink I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\neach), one of those six disks will be a 'hot spare'. I will just put the OS,\nthe WAL and the data one one volume. RAID10 is way to expensive :)\n\nIf I understand correctly, this will give great read-performance, but less\nwrite-performance. But since this server will be equipped with an embedded\nRAID controller featuring 128Mb of battery-backed cache, I figure that this\ncontroller will negate that (at least somewhat). I will need to find out\nwhether this cache can be configured so that it will ONLY cache WRITES, not\nREADS....\n\nAlso because of this battery backed cache controller, I will go for the ext2\nfile system, mounted with 'noatime'. I will use a UPS, so I don't think I\nneed the journaling of ext3. XFS is not natively supported by RedHat and I\nwill go for the easy way here :)\n\n1 Gb of RAM should be enough, I think. That is about the only point that\nalmost everyone agrees on :) Do you think ECC is very important? The\nserver I have in mind does not support it. Another one does, but is is about\n1.000 euros more expensive :(\n\nOne CPU should also be enough.\n\nAs for postgresql.conf settings, I think I will start with the following :\n\nmax_connections = 128\nsuperuser_reserved_connections = 1\nshared_buffers = 8192\nmax_fsm_relations = 1000\nmax_fsm_pages = 100000\nwal_buffers = 32\nsort_mem = 2048\nvacuum_mem = 32768\neffective_cache_size = 28672 (this one I'm not sure about, maybe this one\nneeds to be higher)\nrandom_page_cost = 2\ngeq0_threshold = 20\n\nThis pretty much sums it up. What do you think about this config? It may not\nbe the fastest, but a server like this will cost about 4750 euros, and that\nis including an Intel Xeon 2.4GHz cpu, redundant power supply, WITHOUT the\nUPS. Seems very reasonable to me...\n\nKind regards,\nAlexander Priem.\n\n\n\n----- Original Message -----\nFrom: \"Vincent van Leeuwen\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, July 22, 2003 11:40 AM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> On 2003-07-22 09:04:42 +0200, Alexander Priem wrote:\n> > Hi all,\n> >\n> > Vincent, You said that using RAID1, you don't have real redundancy. But\n> > RAID1 is mirroring, right? So if one of the two disks should fail, there\n> > should be no data lost, right?\n> >\n>\n> Right. But the proposal was a single disk for WAL, without redundancy, and\nI\n> argued that wasn't really safe. RAID1 by itself is extremely safe,\npossibly\n> even the safest RAID type there is.\n>\n> > I have been thinking some more. 18Gb drives are cheaper than 36 or 72Gb\n> > drives. I don't know if I can get the money for this, but how would the\n> > following setup sound?\n> >\n> > Two 18Gb (15.000rpm) disks in RAID1 array for Operating System + WAL.\n> > Four 18Gb (15.000rpm) disks in RAID5 array for data.\n> >\n>\n> Our own testing has shown that a 6 disk RAID-10 array is faster than what\nyou\n> describe. 
Of course, this is very much dependant on how much\nINSERT/UPDATES\n> you generate (which taxes your WAL more), so your mileage may vary.\n>\n> > For the same amount of money, I could also get:\n> >\n> > Two 36Gb (10.000rpm) disks in RAID1 array for Operating System + WAL.\n> > Five/Six 36Gb (10.000rpm) disks in RAID5 array for data.\n> >\n>\n> It is said that a higher RPM is particularly useful for a WAL disk. So you\n> might consider using two 18GB 15K rpm drives for a RAID-1 WAL disk (+OS\nand\n> swap), and using 36GB 10K rpm disks in a RAID-5 array if you need that\n> diskspace.\n>\n> > Which would be the best of the above? The one with four 15k-rpm disks or\nthe\n> > one with five/six 10k-rpm disks?\n> > Would these configs be better than all disks in one huge RAID5 array?\nThere\n> > are so many possible configs with RAID.......\n> >\n>\n> 15K rpm disks are significantly faster than 10K rpm disks. If your only\n> concern is performance, buy 15K rpm disks. If you want more diskspace for\nyour\n> money, fall back to larger 10K rpm disks.\n>\n> I personally think seperate WAL disks are vastly overrated, since they\nhaven't\n> shown a big performance gain in our own tests. But as I have said, this is\n> extremely dependant on the type of load you generate, so only your own\ntests\n> can tell you what you should do in this respect.\n>\n> About RAID types: the fastest RAID type by far is RAID-10. However, this\nwill\n> cost you a lot of useable diskspace, so it isn't for everyone. You need at\n> least 4 disks for a RAID-10 array. RAID-5 is a nice compromise if you want\nas\n> much useable diskspace as possible and still want to be redundant. RAID-1\nis\n> very useful for small (2-disk) arrays.\n>\n> If you have the time and are settled on buying 6 disks, I'd test the\nfollowing\n> scenarios:\n> - 6-disk RAID-10 array (should perform best)\n> - 4-disk RAID-10 array containing data, 2-disk RAID-1 array for WAL, OS,\netc\n> - 4-disk RAID-5 array containing data, 2-disk RAID-1 array for WAL, OS,\netc\n> - 6-disk RAID-5 array (will probably perform worst)\n>\n>\n> Hope this helps.\n>\n> Vincent van Leeuwen\n> Media Design - http://www.mediadesign.nl/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Tue, 22 Jul 2003 15:27:20 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
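One note on the memory settings listed above: in 7.3, both shared_buffers and effective_cache_size are counted in 8 KB pages on a default build, so the proposed values translate roughly as follows. This is only back-of-the-envelope arithmetic, not a recommendation:

    # Both settings are expressed in 8 KB pages on a default build (BLCKSZ = 8192).
    page_kb = 8
    shared_buffers = 8192           # pages -> 64 MB of PostgreSQL shared buffers
    effective_cache_size = 28672    # pages -> 224 MB assumed OS file cache

    print(shared_buffers * page_kb // 1024, "MB shared buffers")
    print(effective_cache_size * page_kb // 1024, "MB effective cache size")
    # On a 1 GB machine mostly dedicated to the database, the kernel file cache
    # will usually exceed 224 MB, which is why a higher value was floated above.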
{
"msg_contents": "On Tue, Jul 22, 2003 at 03:27:20PM +0200, Alexander Priem wrote:\n\n> file system, mounted with 'noatime'. I will use a UPS, so I don't think I\n> need the journaling of ext3. XFS is not natively supported by RedHat and I\n\nJust in case you're still thinking, why do you suppose that only\npower failures lead to system crashes? Surprise kernel panics due to\nbad hardware or OS upgrades with bugs in them, sudden failures\nbecause of bad memory, &c: all these things also can lead to crashes,\nand though super-redundant hardware can mitigate that risk, they\ncan't eliminate them completely. This is not advice, of course, but\nfor my money, its a bad idea not to use a journalled filesystem (or\nsomething similar) for production systems.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 22 Jul 2003 10:12:20 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, Jul 22, 2003 at 03:27:20PM +0200, Alexander Priem wrote:\n> Wow, I never figured how many different RAID configurations one could think\n> of :)\n> \n> After reading lots of material, forums and of course, this mailing-list, I\n> think I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\n> each), one of those six disks will be a 'hot spare'. I will just put the OS,\n> the WAL and the data one one volume. RAID10 is way to expensive :)\n> \n> If I understand correctly, this will give great read-performance, but less\n> write-performance. But since this server will be equipped with an embedded\n> RAID controller featuring 128Mb of battery-backed cache, I figure that this\n> controller will negate that (at least somewhat). I will need to find out\n> whether this cache can be configured so that it will ONLY cache WRITES, not\n> READS....\n \nI think the bigger isssue with RAID5 write performance in a database is\nthat it hits every spindle. The real performance bottleneck you run into\nis latency, especially the latency of positioning the heads. I don't\nhave any proof to this theory, but I believe this is why moving WAL\nand/or temp_db to seperate drives from the main database files can be a\nbig benefit for some applications; not because of disk bandwidth but\nbecause it drastically cuts down the amount of time the heads have to\nspend flying around the disk.\n\nOf course, this is also highly dependant on how the filesystem operates,\ntoo. If it puts your WALs, temp_db, and database files very close to\neach other on the drive, splitting them out to seperate spindles won't\nhelp as much.\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 22 Jul 2003 09:33:35 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "OK, another change of plans :)\n\next2 seems to be a bad idea. So i'll stick with ext3. Better safe than\nsorry...\n\nAbout the RAID-config: Maybe RAID-10 with six disks is affordable after all.\nI would have to take the smallest disks in this case, 18Gb per disk. So six\n18Gb disks (15000rpm) would result in a total capacity of 54 Gb, right? This\nvolume would hold OS, WAL and data, but since RAID10 appears to deliver such\ngreat performance (according to several people), in combination with the\n128Mb of battery backed cache, this would be a good solution?\n\nHmmm. I keep changing my mind about this. My Db would be mostly 'selecting',\nbut there would also be pretty much inserting and updating done. But most of\nthe work would be selects. So would this config be OK?\n\nKind regards,\nAlexander.\n\n\n----- Original Message -----\nFrom: \"Jim C. Nasby\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: \"Vincent van Leeuwen\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, July 22, 2003 4:33 PM\nSubject: Re: [PERFORM] Tuning PostgreSQL\n\n\n> On Tue, Jul 22, 2003 at 03:27:20PM +0200, Alexander Priem wrote:\n> > Wow, I never figured how many different RAID configurations one could\nthink\n> > of :)\n> >\n> > After reading lots of material, forums and of course, this mailing-list,\nI\n> > think I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\n> > each), one of those six disks will be a 'hot spare'. I will just put the\nOS,\n> > the WAL and the data one one volume. RAID10 is way to expensive :)\n> >\n> > If I understand correctly, this will give great read-performance, but\nless\n> > write-performance. But since this server will be equipped with an\nembedded\n> > RAID controller featuring 128Mb of battery-backed cache, I figure that\nthis\n> > controller will negate that (at least somewhat). I will need to find out\n> > whether this cache can be configured so that it will ONLY cache WRITES,\nnot\n> > READS....\n>\n> I think the bigger isssue with RAID5 write performance in a database is\n> that it hits every spindle. The real performance bottleneck you run into\n> is latency, especially the latency of positioning the heads. I don't\n> have any proof to this theory, but I believe this is why moving WAL\n> and/or temp_db to seperate drives from the main database files can be a\n> big benefit for some applications; not because of disk bandwidth but\n> because it drastically cuts down the amount of time the heads have to\n> spend flying around the disk.\n>\n> Of course, this is also highly dependant on how the filesystem operates,\n> too. If it puts your WALs, temp_db, and database files very close to\n> each other on the drive, splitting them out to seperate spindles won't\n> help as much.\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Member: Triangle Fraternity, Sports Car Club of America\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Tue, 22 Jul 2003 17:01:36 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-22 at 10:01, Alexander Priem wrote:\n> OK, another change of plans :)\n> \n> ext2 seems to be a bad idea. So i'll stick with ext3. Better safe than\n> sorry...\n\nDon't forget noatime!\n\n> About the RAID-config: Maybe RAID-10 with six disks is affordable after all.\n> I would have to take the smallest disks in this case, 18Gb per disk. So six\n> 18Gb disks (15000rpm) would result in a total capacity of 54 Gb, right? This\n> volume would hold OS, WAL and data, but since RAID10 appears to deliver such\n> great performance (according to several people), in combination with the\n> 128Mb of battery backed cache, this would be a good solution?\n> \n> Hmmm. I keep changing my mind about this. My Db would be mostly 'selecting',\n> but there would also be pretty much inserting and updating done. But most of\n> the work would be selects. So would this config be OK?\n\nOthers may disagree, but I'd put the OS and executables on a separate\ndisk from the db and WAL, and make it an IDE drive, since it's so \nmuch less expensive than SCSI disks. (Make a copy of the disk, and\nif it craps out, pop out the old disk, stick in the new disk, and\nfire the box right back up...)\n\nThus, you'll have an OS/executables disk, and a separate DB disk,\nand never the twain shall meet. Theoretically, you could pick up\nthose 6 drives and controller, move them to another machine, and\nthe data should be just as it was on the other box.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "22 Jul 2003 10:36:30 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
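The mount options that keep coming up (noatime, and data=writeback for ext3) both end up in /etc/fstab. A hypothetical entry would look something like the following, with the device and mount point as placeholders:

    /dev/sda5   /var/lib/pgsql   ext3   defaults,noatime,data=writeback   1 2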
{
"msg_contents": ">>>>> \"AP\" == Alexander Priem <[email protected]> writes:\n\nAP> Hmmm. I keep changing my mind about this. My Db would be mostly\nAP> 'selecting', but there would also be pretty much inserting and\nAP> updating done. But most of the work would be selects. So would\nAP> this config be OK?\n\nI'm about to order a new server. I haven't decided exactly how many\ndisks I will get, but my plan is to get an 8-disk RAID10 with 15k RPM\ndrives. I don't need the volume, just the speed and number of\nspindles, so I'm buying the smallest drives that meet my speed\nprobably 18Gb each (sheesh! I remember getting my first 5Mb disk for\nmy 8088 PC in college and thinking that was too much space).\n\nMy mix is nearly even read/write, but probably a little biased towards\nthe reading.\n\nThis machine is replacing a 5-disk box that was switched from RAID5 to\n4-disk RAID10 for data plus one system disk in January (what a pain\nthat was to re-index, but that's another story). The switch from\nRAID5 to RAID10 made an enormous improvement in performance. The\nspeedup wasn't from recreating the database: It was restored from a\nfile-level backup so the actual files were not compacted or secretly\n\"improved\" in any way, other than my occasional reindexing.\n\nSo I think your 6-disk RAID10 will be good.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 22 Jul 2003 12:12:54 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, Jul 22, 2003 at 11:40:35 +0200,\n Vincent van Leeuwen <[email protected]> wrote:\n> \n> About RAID types: the fastest RAID type by far is RAID-10. However, this will\n> cost you a lot of useable diskspace, so it isn't for everyone. You need at\n> least 4 disks for a RAID-10 array. RAID-5 is a nice compromise if you want as\n> much useable diskspace as possible and still want to be redundant. RAID-1 is\n> very useful for small (2-disk) arrays.\n\nNote that while raid 10 requires 4 disks, you get the space of 2 disks.\nThis is the same ratio as for raid 1.\n",
"msg_date": "Tue, 22 Jul 2003 12:10:33 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 22 Jul 2003, Jim C. Nasby wrote:\n\n> On Tue, Jul 22, 2003 at 03:27:20PM +0200, Alexander Priem wrote:\n> > Wow, I never figured how many different RAID configurations one could think\n> > of :)\n> > \n> > After reading lots of material, forums and of course, this mailing-list, I\n> > think I am going for a RAID5 configuration of 6 disks (18Gb, 15.000 rpm\n> > each), one of those six disks will be a 'hot spare'. I will just put the OS,\n> > the WAL and the data one one volume. RAID10 is way to expensive :)\n> > \n> > If I understand correctly, this will give great read-performance, but less\n> > write-performance. But since this server will be equipped with an embedded\n> > RAID controller featuring 128Mb of battery-backed cache, I figure that this\n> > controller will negate that (at least somewhat). I will need to find out\n> > whether this cache can be configured so that it will ONLY cache WRITES, not\n> > READS....\n> \n> I think the bigger isssue with RAID5 write performance in a database is\n> that it hits every spindle.\n\nThis is a common, and wrong misconception.\n\nIf you are writing 4k out to a RAID5 of 10 disks, this is what happens:\n\n(assumiung 64k stipes...)\nREAD data stripe (64k read)\nREAD parity stripe (64k read)\nmake changes to data stripe\nXOR new data stripe with old parity stripe to get a new parity stripe\nwrite new parity stripe (64k)\nwrite new data stripe (64k)\n\nSo it's not as bad as you might think. No modern controller (or sw raid \nfor linux) hits all the spindles anymore for writes. As you add more \ndrives to a RAID5 writes actually get faster on average, because there's \nless chance of having contention for the same drives (remember, parity \nmoves about in RAID5 so the parity disk isn't a choke point in RAID5 like \nit is in RAID4.)\n\n> The real performance bottleneck you run into\n> is latency, especially the latency of positioning the heads. I don't\n> have any proof to this theory, but I believe this is why moving WAL\n> and/or temp_db to seperate drives from the main database files can be a\n> big benefit for some applications; not because of disk bandwidth but\n> because it drastically cuts down the amount of time the heads have to\n> spend flying around the disk.\n\nThis is absolutely true. moving the heads costs hugely. while most \nmodern drives have SEEK times <10 ms, the SETTLE times tend to be about \nthat as well, followed by the average of about 3 ms for rotational latency \nto allow the proper sector to be under the head (10krpm drives rotate once \nabout every 6 ms.)\n\n\n",
"msg_date": "Tue, 22 Jul 2003 11:19:51 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
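The read-modify-write cycle sketched above can be made concrete: the new parity is the old parity with the old data XORed back out and the new data XORed in, so only two reads and two writes are needed no matter how wide the array is. A small illustration with short byte strings standing in for 64k stripes (purely illustrative):

    # RAID-5 small-write cycle: read old data + old parity, write new data + new parity.
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    stripes = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]       # data stripes on three disks
    parity = xor(xor(stripes[0], stripes[1]), stripes[2])   # parity disk contents

    old, new = stripes[1], b"\x7f" * 4                      # small update to one stripe
    parity = xor(xor(parity, old), new)                     # XOR old data out, new data in
    stripes[1] = new

    assert parity == xor(xor(stripes[0], stripes[1]), stripes[2])
    # Two reads and two writes total, regardless of how many disks are in the set.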
{
"msg_contents": "\n\"scott.marlowe\" <[email protected]> writes:\n\n> If you are writing 4k out to a RAID5 of 10 disks, this is what happens:\n> \n> (assumiung 64k stipes...)\n> READ data stripe (64k read)\n> READ parity stripe (64k read)\n> make changes to data stripe\n> XOR new data stripe with old parity stripe to get a new parity stripe\n> write new parity stripe (64k)\n> write new data stripe (64k)\n> \n> So it's not as bad as you might think. \n\nThe main negative for RAID5 is that it had to do that extra READ. If you're\ndoing lots of tiny updates then the extra latency to have to go read the\nparity block before it can write the parity block out is a real killer. For\nthat reason people prefer 0+1 for OLTP systems.\n\nBut you have to actually test your setup in practice to see if it hurts. A big\ndata warehousing system will be faster under RAID5 than under RAID1+0 because\nof the extra disks in the stripeset. The more disks in the stripeset the more\nbandwidth you get.\n\nEven for OLTP systems I've had success with RAID5 or not depending largely on\nthe quality of the implementation. The Hitachi systems were amazing. They had\nenough battery backed cache that the extra latency for the parity read/write\ncycle really never showed up at all. But it had a lot more than 128M. I think\nit had 1G and could be expanded.\n\n-- \ngreg\n\n",
"msg_date": "24 Jul 2003 14:29:33 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Thu, 2003-07-24 at 13:29, Greg Stark wrote:\n> \"scott.marlowe\" <[email protected]> writes:\n> \n> > If you are writing 4k out to a RAID5 of 10 disks, this is what happens:\n> > \n> > (assumiung 64k stipes...)\n> > READ data stripe (64k read)\n> > READ parity stripe (64k read)\n> > make changes to data stripe\n> > XOR new data stripe with old parity stripe to get a new parity stripe\n> > write new parity stripe (64k)\n> > write new data stripe (64k)\n> > \n> > So it's not as bad as you might think. \n> \n> The main negative for RAID5 is that it had to do that extra READ. If you're\n> doing lots of tiny updates then the extra latency to have to go read the\n> parity block before it can write the parity block out is a real killer. For\n> that reason people prefer 0+1 for OLTP systems.\n> \n> But you have to actually test your setup in practice to see if it hurts. A big\n> data warehousing system will be faster under RAID5 than under RAID1+0 because\n> of the extra disks in the stripeset. The more disks in the stripeset the more\n> bandwidth you get.\n> \n> Even for OLTP systems I've had success with RAID5 or not depending largely on\n> the quality of the implementation. The Hitachi systems were amazing. They had\n> enough battery backed cache that the extra latency for the parity read/write\n> cycle really never showed up at all. But it had a lot more than 128M. I think\n> it had 1G and could be expanded.\n\nYour last paragraph just stole the objection to the 1st paragraph \nright out of my mouth, since enough cache will allow it to \"batch\"\nall those tiny updates into big updates. But those Hitachi controllers\nweren't plugged into x86-type boxen, were they?\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "24 Jul 2003 15:13:47 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": ">>>>> \"GS\" == Greg Stark <[email protected]> writes:\n\nGS> \"scott.marlowe\" <[email protected]> writes:\n\nGS> But you have to actually test your setup in practice to see if it\nGS> hurts. A big data warehousing system will be faster under RAID5\nGS> than under RAID1+0 because of the extra disks in the\nGS> stripeset. The more disks in the stripeset the more bandwidth you\nGS> get.\n\nAnyone have ideas on 14 spindles? I just ordered a disk subsystem\nwith 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\nPERC3/DC controller (only 128MB cache, though).\n\nMy plan was to do RAID10, but I think I'll play with RAID5 again and\nsee which gives me the best performance. Unfortunatly, it is\ndifficult to recreate the highly fragmented tables I have now (vacuum\nfull takes over 14 hours on one of the tables) so I'm not sure how to\nbest compare them.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 29 Jul 2003 11:14:54 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> \n> GS> \"scott.marlowe\" <[email protected]> writes:\n> \n> GS> But you have to actually test your setup in practice to see if it\n> GS> hurts. A big data warehousing system will be faster under RAID5\n> GS> than under RAID1+0 because of the extra disks in the\n> GS> stripeset. The more disks in the stripeset the more bandwidth you\n> GS> get.\n> \n> Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> PERC3/DC controller (only 128MB cache, though).\n\n14 drives on one SCSI card, eh? I'd be worried about saturating\nthe bus.\n\nMaybe it's an old rule of thumb, but I would fill a SCSI chain\nmore than half full.\n\n> My plan was to do RAID10, but I think I'll play with RAID5 again and\n> see which gives me the best performance. Unfortunatly, it is\n> difficult to recreate the highly fragmented tables I have now (vacuum\n> full takes over 14 hours on one of the tables) so I'm not sure how to\n> best compare them.\n\nAlso IMO: if I needed something *really* high performance, I'd\nstart with a mobo that has dual PCI buses (133MB/s can get swamped\nquickly by U320 devices) or PCI-X (but they're so new...).\n\nThen:\n- get dual U320 SCSI cards (one for each PCI bus) \n- plug them into dual redundant fast path external storage controllers\n that have, oh, 512MB RAM cache *each*)\n- dual port the drives, so they plug into both storage controllers\n\nSince I wouldn't put 14 drives on one SCSI chain, double what I\njust said, and only plug 7 drives in each controller.\n\nIf your app needs One Big Honkin' Device, use the Linux Volume\nManager (LVM) to merge the 2 RAID logical devices into one \"super-\nlogical\" device.\n\nYes, that's lot's of money, but is the data, and speed, important\nenough?\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "29 Jul 2003 10:46:15 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-29 at 08:14, Vivek Khera wrote:\n> >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> \n> GS> \"scott.marlowe\" <[email protected]> writes:\n> \n> GS> But you have to actually test your setup in practice to see if it\n> GS> hurts. A big data warehousing system will be faster under RAID5\n> GS> than under RAID1+0 because of the extra disks in the\n> GS> stripeset. The more disks in the stripeset the more bandwidth you\n> GS> get.\n> \n> Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> PERC3/DC controller (only 128MB cache, though).\n<SNIP>\n\nHey one comment on this. With dell Perc3/DC you should check the\nmegaraid-devel list to find the best BIOS settings for maximum\nperformance. There have been many comments on it and trials to get it\ngoing really well. All told though I totally love the LSI Megaraid (\nwhich is what the perc3/dc is ) controllers. We use the Elite 1650 with\nseagate cheetah drives for a nice little array.\n\n--Will",
"msg_date": "29 Jul 2003 09:16:36 -0700",
"msg_from": "Will LaShell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 29 Jul 2003, Ron Johnson wrote:\n\n> On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > \n> > GS> \"scott.marlowe\" <[email protected]> writes:\n> > \n> > GS> But you have to actually test your setup in practice to see if it\n> > GS> hurts. A big data warehousing system will be faster under RAID5\n> > GS> than under RAID1+0 because of the extra disks in the\n> > GS> stripeset. The more disks in the stripeset the more bandwidth you\n> > GS> get.\n> > \n> > Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> > PERC3/DC controller (only 128MB cache, though).\n> \n> 14 drives on one SCSI card, eh? I'd be worried about saturating\n> the bus.\n\nI'm pretty sure those PERCs are based on the megaraid cards, which can \nhandle 3 or 4 channels each...\n\n> Maybe it's an old rule of thumb, but I would fill a SCSI chain\n> more than half full.\n\nIt's an old rule of thumb, but it still applies, it just takes more drives \nto saturate the channel. Figure ~ 30 to 50 MBytes a second per drive, on \na U320 port it would take 10 drives to saturate it, and considering random \naccesses will be much slower than the max ~30 megs a second off the \nplatter rate, it might take more than the max 14 drives to saturate U320.\n\n",
"msg_date": "Tue, 29 Jul 2003 10:18:13 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> On 29 Jul 2003, Ron Johnson wrote:\n> \n> > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > \n> > > GS> \"scott.marlowe\" <[email protected]> writes:\n> > > \n> > > GS> But you have to actually test your setup in practice to see if it\n> > > GS> hurts. A big data warehousing system will be faster under RAID5\n> > > GS> than under RAID1+0 because of the extra disks in the\n> > > GS> stripeset. The more disks in the stripeset the more bandwidth you\n> > > GS> get.\n> > > \n> > > Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> > > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> > > PERC3/DC controller (only 128MB cache, though).\n> > \n> > 14 drives on one SCSI card, eh? I'd be worried about saturating\n> > the bus.\n> \n> I'm pretty sure those PERCs are based on the megaraid cards, which can \n> handle 3 or 4 channels each...\n\nEach with 14 devices? If so, isn't that a concentrated point of\nfailure, even if the channels are 1/2 full?\n\n> > Maybe it's an old rule of thumb, but I would fill a SCSI chain\n> > more than half full.\n> \n> It's an old rule of thumb, but it still applies, it just takes more drives \n> to saturate the channel. Figure ~ 30 to 50 MBytes a second per drive, on \n> a U320 port it would take 10 drives to saturate it, and considering random \n> accesses will be much slower than the max ~30 megs a second off the \n> platter rate, it might take more than the max 14 drives to saturate U320.\n\nOk. You'd still saturate the 133MB/s PCI bus at 133/30 = 4.4 drives.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "29 Jul 2003 13:08:22 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 29 Jul 2003, Ron Johnson wrote:\n\n> On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> > On 29 Jul 2003, Ron Johnson wrote:\n> > \n> > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > > \n> > > > GS> \"scott.marlowe\" <[email protected]> writes:\n> > > > \n> > > > GS> But you have to actually test your setup in practice to see if it\n> > > > GS> hurts. A big data warehousing system will be faster under RAID5\n> > > > GS> than under RAID1+0 because of the extra disks in the\n> > > > GS> stripeset. The more disks in the stripeset the more bandwidth you\n> > > > GS> get.\n> > > > \n> > > > Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> > > > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> > > > PERC3/DC controller (only 128MB cache, though).\n> > > \n> > > 14 drives on one SCSI card, eh? I'd be worried about saturating\n> > > the bus.\n> > \n> > I'm pretty sure those PERCs are based on the megaraid cards, which can \n> > handle 3 or 4 channels each...\n> \n> Each with 14 devices? If so, isn't that a concentrated point of\n> failure, even if the channels are 1/2 full?\n\nYep. I've built one once before when BIG hard drives were 9 gigs. :-)\n\nAnd it is a point of concentrated failure, which brings me to my favorite \npart about the LSI megaraid cards (which most / all perc3s are \napparently.)\n\nIf you build a RAID1+0 or 0+1, you can seperate it out so each sub part is \non it's own card, and the other cards keep acting like one big card. \nAssuming the bad card isn't killing your PCI bus or draining the 12V rail \nor something.\n\n> > > Maybe it's an old rule of thumb, but I would fill a SCSI chain\n> > > more than half full.\n> > \n> > It's an old rule of thumb, but it still applies, it just takes more drives \n> > to saturate the channel. Figure ~ 30 to 50 MBytes a second per drive, on \n> > a U320 port it would take 10 drives to saturate it, and considering random \n> > accesses will be much slower than the max ~30 megs a second off the \n> > platter rate, it might take more than the max 14 drives to saturate U320.\n> \n> Ok. You'd still saturate the 133MB/s PCI bus at 133/30 = 4.4 drives.\n\nBut that's seq scan. For many database applications, random access \nperformance is much more important. Imagine 200 people entering \nreservations of 8k or less each into a transaction processing engine. \nEach transactions chance to hit an unoccupied spindle is what really \ncounts. If there's 30 spindles, each doing a stripe's worth of access all \nthe time, it's likely to never flood the channel. \n\nIf random access is 1/4th the speed of seq scan, then you need to multiply \nit by 4 to get the number of drives that'd saturate the PCI bus.\n\n\n",
"msg_date": "Tue, 29 Jul 2003 13:00:30 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
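The bandwidth figures being traded in the last few messages fit into one back-of-the-envelope calculation. All inputs are the thread's own rough assumptions (about 30 MB/s sequential per drive, random access around a quarter of that), not measurements:

    # Rough channel/bus saturation arithmetic using the figures quoted above.
    drive_seq_mb_s = 30        # conservative sequential throughput per drive
    u320_mb_s = 320            # one Ultra320 SCSI channel
    pci_mb_s = 133             # the 133 MB/s PCI bus mentioned above

    print(u320_mb_s / drive_seq_mb_s)    # ~10.7 drives to fill one U320 channel
    print(pci_mb_s / drive_seq_mb_s)     # ~4.4 drives to fill the PCI bus sequentially

    random_factor = 4                    # assume random I/O runs at 1/4 sequential speed
    print(pci_mb_s / (drive_seq_mb_s / random_factor))   # ~17.7 drives for random workloads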
{
"msg_contents": "On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:\n> On 29 Jul 2003, Ron Johnson wrote:\n> \n> > On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> > > On 29 Jul 2003, Ron Johnson wrote:\n> > > \n> > > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > > > \n> > > > > GS> \"scott.marlowe\" <[email protected]> writes:\n> > > > > \n> > > > > GS> But you have to actually test your setup in practice to see if it\n> > > > > GS> hurts. A big data warehousing system will be faster under RAID5\n> > > > > GS> than under RAID1+0 because of the extra disks in the\n> > > > > GS> stripeset. The more disks in the stripeset the more bandwidth you\n> > > > > GS> get.\n> > > > > \n> > > > > Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> > > > > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> > > > > PERC3/DC controller (only 128MB cache, though).\n> > > > \n> > > > 14 drives on one SCSI card, eh? I'd be worried about saturating\n> > > > the bus.\n> > > \n> > > I'm pretty sure those PERCs are based on the megaraid cards, which can \n> > > handle 3 or 4 channels each...\n> > \n> > Each with 14 devices? If so, isn't that a concentrated point of\n> > failure, even if the channels are 1/2 full?\n> \n> Yep. I've built one once before when BIG hard drives were 9 gigs. :-)\n> \n> And it is a point of concentrated failure, which brings me to my favorite \n> part about the LSI megaraid cards (which most / all perc3s are \n> apparently.)\n> \n> If you build a RAID1+0 or 0+1, you can seperate it out so each sub part is \n> on it's own card, and the other cards keep acting like one big card. \n> Assuming the bad card isn't killing your PCI bus or draining the 12V rail \n> or something.\n\nSounds like my kinda card!\n\nIs the cache battery-backed up? \n\nHow much cache can you stuff in them?\n\n\n\n> > > > Maybe it's an old rule of thumb, but I would fill a SCSI chain\n> > > > more than half full.\n> > > \n> > > It's an old rule of thumb, but it still applies, it just takes more drives \n> > > to saturate the channel. Figure ~ 30 to 50 MBytes a second per drive, on \n> > > a U320 port it would take 10 drives to saturate it, and considering random \n> > > accesses will be much slower than the max ~30 megs a second off the \n> > > platter rate, it might take more than the max 14 drives to saturate U320.\n> > \n> > Ok. You'd still saturate the 133MB/s PCI bus at 133/30 = 4.4 drives.\n> \n> But that's seq scan. For many database applications, random access \n> performance is much more important. Imagine 200 people entering \n> reservations of 8k or less each into a transaction processing engine. \n> Each transactions chance to hit an unoccupied spindle is what really \n> counts. If there's 30 spindles, each doing a stripe's worth of access all \n> the time, it's likely to never flood the channel. \n> \n> If random access is 1/4th the speed of seq scan, then you need to multiply \n> it by 4 to get the number of drives that'd saturate the PCI bus.\n\nMaybe it's just me, but I've never seen a purely TP system.\n\nEven if roll off the daily updates to a \"reporting database\" each\nnight, some yahoo manager with enough juice to have his way still\nwants up-to-the-minute reports...\n\nBetter yet, the Access Jockey, who thinks s/he's an SQL whiz but\ncouldn't JOIN himself out of a paper bag...\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "29 Jul 2003 14:50:32 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On 29 Jul 2003, Ron Johnson wrote:\n\n> On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:\n> > On 29 Jul 2003, Ron Johnson wrote:\n> > \n> > > On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> > > > On 29 Jul 2003, Ron Johnson wrote:\n> > > > \n> > > > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > > > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > > > > \n> > > > > > GS> \"scott.marlowe\" <[email protected]> writes:\n> > > > > > \n> > > > > > GS> But you have to actually test your setup in practice to see if it\n> > > > > > GS> hurts. A big data warehousing system will be faster under RAID5\n> > > > > > GS> than under RAID1+0 because of the extra disks in the\n> > > > > > GS> stripeset. The more disks in the stripeset the more bandwidth you\n> > > > > > GS> get.\n> > > > > > \n> > > > > > Anyone have ideas on 14 spindles? I just ordered a disk subsystem\n> > > > > > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell\n> > > > > > PERC3/DC controller (only 128MB cache, though).\n> > > > > \n> > > > > 14 drives on one SCSI card, eh? I'd be worried about saturating\n> > > > > the bus.\n> > > > \n> > > > I'm pretty sure those PERCs are based on the megaraid cards, which can \n> > > > handle 3 or 4 channels each...\n> > > \n> > > Each with 14 devices? If so, isn't that a concentrated point of\n> > > failure, even if the channels are 1/2 full?\n> > \n> > Yep. I've built one once before when BIG hard drives were 9 gigs. :-)\n> > \n> > And it is a point of concentrated failure, which brings me to my favorite \n> > part about the LSI megaraid cards (which most / all perc3s are \n> > apparently.)\n> > \n> > If you build a RAID1+0 or 0+1, you can seperate it out so each sub part is \n> > on it's own card, and the other cards keep acting like one big card. \n> > Assuming the bad card isn't killing your PCI bus or draining the 12V rail \n> > or something.\n> \n> Sounds like my kinda card!\n> \n> Is the cache battery-backed up? \n\nYep\n\n> How much cache can you stuff in them?\n\nthe old old old school MegaRAID428 could hold up to 128 Meg. I'm sure the \nnew ones can handle 512Meg or more.\n\n> > > > > Maybe it's an old rule of thumb, but I would fill a SCSI chain\n> > > > > more than half full.\n> > > > \n> > > > It's an old rule of thumb, but it still applies, it just takes more drives \n> > > > to saturate the channel. Figure ~ 30 to 50 MBytes a second per drive, on \n> > > > a U320 port it would take 10 drives to saturate it, and considering random \n> > > > accesses will be much slower than the max ~30 megs a second off the \n> > > > platter rate, it might take more than the max 14 drives to saturate U320.\n> > > \n> > > Ok. You'd still saturate the 133MB/s PCI bus at 133/30 = 4.4 drives.\n> > \n> > But that's seq scan. For many database applications, random access \n> > performance is much more important. Imagine 200 people entering \n> > reservations of 8k or less each into a transaction processing engine. \n> > Each transactions chance to hit an unoccupied spindle is what really \n> > counts. If there's 30 spindles, each doing a stripe's worth of access all \n> > the time, it's likely to never flood the channel. 
\n> > \n> > If random access is 1/4th the speed of seq scan, then you need to multiply \n> > it by 4 to get the number of drives that'd saturate the PCI bus.\n> \n> Maybe it's just me, but I've never seen a purely TP system.\n\nI think most of them are running under TPF on a mainframe in a basement \nsomewhere, like for airline reservations. I've never worked on one, but \nmet one of the guys who runs one, and they use 12 mainframes for 6 live \nmachines and each live machine has a failover machine behind it in sysplex \nmode. I kept thinking of the giant dinosaurs in Jurassic park... \n\n> Even if roll off the daily updates to a \"reporting database\" each\n> night, some yahoo manager with enough juice to have his way still\n> wants up-to-the-minute reports...\n\nJust because it's TP doesn't mean it doesn't have real time reporting. \nBut expensive reports probably do get run at night.\n\n> Better yet, the Access Jockey, who thinks s/he's an SQL whiz but\n> couldn't JOIN himself out of a paper bag...\n\nI've seen a few who got joins and unions and what not, but explaining fks \nor transactions got me a glazed look... :-)\n\n",
"msg_date": "Tue, 29 Jul 2003 14:09:56 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-29 at 15:09, scott.marlowe wrote:\n> On 29 Jul 2003, Ron Johnson wrote:\n> \n> > On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:\n> > > On 29 Jul 2003, Ron Johnson wrote:\n> > > \n> > > > On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> > > > > On 29 Jul 2003, Ron Johnson wrote:\n> > > > > \n> > > > > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > > > > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > > > > > \n> > > > > > > GS> \"scott.marlowe\" <[email protected]> writes:\n[snip]\n> > > But that's seq scan. For many database applications, random access \n> > > performance is much more important. Imagine 200 people entering \n> > > reservations of 8k or less each into a transaction processing engine. \n> > > Each transactions chance to hit an unoccupied spindle is what really \n> > > counts. If there's 30 spindles, each doing a stripe's worth of access all \n> > > the time, it's likely to never flood the channel. \n> > > \n> > > If random access is 1/4th the speed of seq scan, then you need to multiply \n> > > it by 4 to get the number of drives that'd saturate the PCI bus.\n> > \n> > Maybe it's just me, but I've never seen a purely TP system.\n> \n> I think most of them are running under TPF on a mainframe in a basement \n> somewhere, like for airline reservations. I've never worked on one, but \n> met one of the guys who runs one, and they use 12 mainframes for 6 live \n> machines and each live machine has a failover machine behind it in sysplex \n> mode. I kept thinking of the giant dinosaurs in Jurassic park... \n\nWe have something similar running on Alphas and VMS; does about\n8M Txn/day. Anyone who uses E-ZPass in the northeast eventually\ngets stuck in our systems.\n\n(Made me fear Big Brother...)\n\n> > Even if roll off the daily updates to a \"reporting database\" each\n> > night, some yahoo manager with enough juice to have his way still\n> > wants up-to-the-minute reports...\n> \n> Just because it's TP doesn't mean it doesn't have real time reporting. \n> But expensive reports probably do get run at night.\n\nYes, but... There's always the exception.\n\n> > Better yet, the Access Jockey, who thinks s/he's an SQL whiz but\n> > couldn't JOIN himself out of a paper bag...\n> \n> I've seen a few who got joins and unions and what not, but explaining fks \n> or transactions got me a glazed look... :-)\n\nWow! They understood joins? You lucky dog!!!\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "29 Jul 2003 15:38:35 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
},
{
"msg_contents": "On Tue, 2003-07-29 at 15:38, Ron Johnson wrote:\n> On Tue, 2003-07-29 at 15:09, scott.marlowe wrote:\n> > On 29 Jul 2003, Ron Johnson wrote:\n> > \n> > > On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:\n> > > > On 29 Jul 2003, Ron Johnson wrote:\n> > > > \n> > > > > On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:\n> > > > > > On 29 Jul 2003, Ron Johnson wrote:\n> > > > > > \n> > > > > > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:\n> > > > > > > > >>>>> \"GS\" == Greg Stark <[email protected]> writes:\n> > > > > > > > \n> > > > > > > > GS> \"scott.marlowe\" <[email protected]> writes:\n> [snip]\n[snip]\n> > I think most of them are running under TPF on a mainframe in a basement \n> > somewhere, like for airline reservations. I've never worked on one, but \n> > met one of the guys who runs one, and they use 12 mainframes for 6 live \n> > machines and each live machine has a failover machine behind it in sysplex \n> > mode. I kept thinking of the giant dinosaurs in Jurassic park... \n> \n> We have something similar running on Alphas and VMS; does about\n> 8M Txn/day. Anyone who uses E-ZPass in the northeast eventually\n> gets stuck in our systems.\n\nOh, forget to mention:\nyes, they are in a 2-deep basement.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "30 Jul 2003 09:41:52 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL, pt 2"
},
{
"msg_contents": ">>>>> \"RJ\" == Ron Johnson <[email protected]> writes:\n\nRJ> On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:\n\nRJ> Sounds like my kinda card!\n\nRJ> Is the cache battery-backed up? \n\nyep\n\n\nRJ> How much cache can you stuff in them?\n\nas per dell, the max is 128Mb, which was a bummer.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Thu, 31 Jul 2003 14:57:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning PostgreSQL"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm hunting for some advice on loading 50,000+ files all less than\n32KB to a 7.3.2 database. The table is simple.\n\ncreate table files (\n id int8 not null primary key,\n file text not null,\n size int8 not null,\n uid int not null,\n raw oid\n);\n\nThe script (currently bash) pulls a TAR file out of a queue, unpacks it\nto a large ramdisk mounted with noatime and performs a battery of tests\non the files included in the TAR file. For each file in the TAR is will\nadd the following to a SQL file...\n\nupdate files set raw=lo_import('/path/to/file/from/tar') where \nfile='/path/to/file/from/tar';\n\nThis file begins with BEGIN; and ends with END; and is fed to Postgres\nvia a \"psql -f sqlfile\" command. This part of the process can take\nanywhere from 30 to over 90 minutes depending on the number of files\nincluded in the TAR file.\n\nSystem is a RedHat 7.3 running a current 2.4.20 RedHat kernel and\n dual PIII 1.4GHz\n 2GB of memory\n 512MB ramdisk (mounted noatime)\n mirrored internal SCSI160 10k rpm drives for OS and swap\n 1 PCI 66MHz 64bit QLA2300\n 1 Gbit SAN with several RAID5 LUN's on a Hitachi 9910\n\nAll filesystems are ext3.\n\nAny thoughts?\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Mon, 21 Jul 2003 14:51:36 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mass file imports"
}
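A minimal sketch of the batch-generation step described above, with hypothetical paths; it only reproduces the per-file lo_import update inside a single transaction, which is what the existing bash script already does:

    # Generate one SQL file of lo_import updates for every file unpacked from the TAR.
    import os

    def write_batch(unpack_dir, out_path="/tmp/import.sql"):
        with open(out_path, "w") as out:
            out.write("BEGIN;\n")
            for root, _dirs, names in os.walk(unpack_dir):
                for name in names:
                    path = os.path.join(root, name)
                    # Naive quoting; real file names may need proper escaping.
                    out.write("update files set raw=lo_import('%s') where file='%s';\n"
                              % (path, path))
            out.write("END;\n")

    write_batch("/ramdisk/unpacked_tar")    # then feed it to: psql -f /tmp/import.sql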
] |
[
{
"msg_contents": "Folks,\n\nThere was a general consensus (I think) on this list that we want more verbose \ncomments in postgresql.conf for 7.4. Is anyone available to do the work? \nWe'll need the patch this week ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 21 Jul 2003 12:51:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commenting postgresql.conf"
},
{
"msg_contents": "On Mon, 21 Jul 2003, Josh Berkus wrote:\n\n> Folks,\n> \n> There was a general consensus (I think) on this list that we want more verbose \n> comments in postgresql.conf for 7.4. Is anyone available to do the work? \n> We'll need the patch this week ...\n\nI'll help. this is probably the kind of thing that needs a bit of round \nrobin to get it right...\n\n",
"msg_date": "Mon, 21 Jul 2003 14:16:31 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commenting postgresql.conf"
}
] |
[
{
"msg_contents": "Folks,\n\nIs the auto-vacuum daemon a new feature for 7.4, or is there a version for \n7.3.3? It's a bit unclear from the PGAvd page ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 21 Jul 2003 13:49:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGAvd"
},
{
"msg_contents": "\nI think it is new for 7.4. I don't see it in 7.3.X CVS.\n\n---------------------------------------------------------------------------\n\nJosh Berkus wrote:\n> Folks,\n> \n> Is the auto-vacuum daemon a new feature for 7.4, or is there a version for \n> 7.3.3? It's a bit unclear from the PGAvd page ...\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 21 Jul 2003 16:54:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGAvd"
}
] |
[
{
"msg_contents": " >Is the auto-vacuum daemon a new feature for 7.4, or is there a version \nfor \n >7.3.3? It's a bit unclear from the PGAvd page ...\n \nIt was added to CVS as of 7.4. It works perfectly well with 7.3.x\n\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n\n",
"msg_date": "Mon, 21 Jul 2003 18:00:30 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGAvd "
}
] |
[
{
"msg_contents": "Hi all,\nI'm running Postgres7.3.3 and I'm performing this simple select:\n\nselect *\nfrom user_logs ul,\n user_data ud,\n class_default cd\nwhere\n ul.id_user = ud.id_user and\n ud.id_class = cd.id_class and\n cd.id_provider = 39;\n\nthese are the number of rows for each table:\n\nuser_logs: 1258955\nclass_default: 31 ( only one with id_provider = 39 )\nuser_data: 10274;\n\n\nthis is the explain analyze for that query:\n\nQUERY PLAN\n Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual\ntime=11074.21..11134.28 rows=10 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965 width=48)\n(actual time=0.02..8530.21 rows=1258966 loops=1)\n -> Hash (cost=264.81..264.81 rows=331 width=215) (actual\ntime=30.22..30.22 rows=0 loops=1)\n -> Nested Loop (cost=0.00..264.81 rows=331 width=215) (actual\ntime=29.95..30.20 rows=6 loops=1)\n -> Seq Scan on class_default cd (cost=0.00..1.39 rows=1\nwidth=55) (actual time=0.08..0.10 rows=1 loops=1)\n Filter: (id_provider = 39)\n -> Index Scan using idx_user_data_class on user_data ud\n(cost=0.00..258.49 rows=395 width=160) (actual time=29.82..29.96 rows=6\nloops=1)\n Index Cond: (ud.id_class = \"outer\".id_class)\n Total runtime: 11135.65 msec\n(10 rows)\n\n\nI'm able to performe that select with these 3 steps:\n\nSELECT id_class from class_default where id_provider = 39;\n id_class\n----------\n 48\n(1 row)\n\nSELECT id_user from user_data where id_class in ( 48 );\n id_user\n---------\n 10943\n 10942\n 10934\n 10927\n 10910\n 10909\n(6 rows)\n\n\nSELECT * from user_logs where id_user in (\n 10943, 10942, 10934, 10927, 10910, 10909\n);\n[SNIPPED]\n\nand the time ammount is a couple of milliseconds.\n\nWhy the planner or the executor ( I don't know ) do not follow\nthe same strategy ?\n\n\n\nThank you\nGaetano Mendola\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 22 Jul 2003 19:10:14 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong plan or what ?"
},
{
"msg_contents": "Gaetano,\n\n> SELECT * from user_logs where id_user in (\n> 10943, 10942, 10934, 10927, 10910, 10909\n> );\n> [SNIPPED]\n\n> Why the planner or the executor ( I don't know ) do not follow\n> the same strategy ?\n\nIt is, actually, according to the query plan. \n\nCan you post the EXPLAIN ANALYZE for the above query?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 22 Jul 2003 10:40:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Wrong plan or what ?"
},
{
"msg_contents": "\"Josh Berkus\" <[email protected]>\n> Gaetano,\n>\n> > SELECT * from user_logs where id_user in (\n> > 10943, 10942, 10934, 10927, 10910, 10909\n> > );\n> > [SNIPPED]\n>\n> > Why the planner or the executor ( I don't know ) do not follow\n> > the same strategy ?\n>\n> It is, actually, according to the query plan.\n>\n> Can you post the EXPLAIN ANALYZE for the above query?\n\nIndex Scan using idx_user_user_logs, idx_user_user_logs, idx_user_user_logs,\nidx_user_user_logs, idx_user_user_logs, idx_user_user_logs on user_logs\n(cost=0.00..5454.21 rows=2498 width=48) (actual time=0.09..0.28 rows=10\nloops=1)\n Index Cond: ((id_user = 10943) OR (id_user = 10942) OR (id_user = 10934)\nOR (id_user = 10927) OR (id_user = 10910) OR (id_user = 10909))\n Total runtime: 0.41 msec\n(3 rows)\n\n\nThank you\nGaetano\n\n\nPS: if I execute the query I obtain 10 rows instead of 3 that say the\nexplain analyze.\n\n\n\n",
"msg_date": "Tue, 22 Jul 2003 19:48:20 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Wrong plan or what ?"
},
{
"msg_contents": "Gaetano,\n\n> QUERY PLAN\n> Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual\n> time=11074.21..11134.28 rows=10 loops=1)\n> Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n> -> Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965 width=48)\n> (actual time=0.02..8530.21 rows=1258966 loops=1)\n\nOK, here's your problem\n\nThe planner thinks that you're going to get 40162 rows out of the final join, \nnot 10. If the row estimate was correct, then the Seq Scan would be a \nreasonable plan. But it's not. Here's some steps you can take to clear \nthings up for the planner:\n\n1) Make sure you've VACUUM ANALYZED\n2) Adjust the following postgresql.conf statistics:\n\ta) effective_cache_size: increase to 70% of available (not used by other \nprocesses) RAM.\n\tb) random_page_cost: decrease, maybe to 2.\n\tc) default_statistics_target: try increasing to 100\n\t\t(warning: this will significantly increase the time required to do ANALYZE)\n\nThen test again!\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 22 Jul 2003 10:56:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong plan or what ?"
},
{
"msg_contents": "Forget my PS to last message.\n\n",
"msg_date": "Tue, 22 Jul 2003 19:56:30 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Wrong plan or what ?"
},
{
"msg_contents": "\"Josh Berkus\" <[email protected]>\n> Gaetano,\n>\n> > QUERY PLAN\n> > Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual\n> > time=11074.21..11134.28 rows=10 loops=1)\n> > Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n> > -> Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965\nwidth=48)\n> > (actual time=0.02..8530.21 rows=1258966 loops=1)\n>\n> OK, here's your problem\n>\n> The planner thinks that you're going to get 40162 rows out of the final\njoin,\n> not 10. If the row estimate was correct, then the Seq Scan would be a\n> reasonable plan. But it's not. Here's some steps you can take to clear\n> things up for the planner:\n>\n> 1) Make sure you've VACUUM ANALYZED\n> 2) Adjust the following postgresql.conf statistics:\n> a) effective_cache_size: increase to 70% of available (not used by other\n> processes) RAM.\n> b) random_page_cost: decrease, maybe to 2.\n> c) default_statistics_target: try increasing to 100\n> (warning: this will significantly increase the time required to do\nANALYZE)\n>\n> Then test again!\n\nNo improvement at all,\nI pushed default_statistics_target to 1000\nbut the rows expected are still 40612 :-(\nOf course I restarted the postmaster and I vacuumed analyze the DB\n\n\nThank you\nGaetano\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 22 Jul 2003 20:35:45 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Wrong plan or what ?"
},
{
"msg_contents": "In response to \"Mendola Gaetano\":\n> I'm running Postgres7.3.3 and I'm performing this simple select:\n\nLooking at your fast three step plan\n> SELECT id_class from class_default where id_provider = 39;\n> SELECT id_user from user_data where id_class in ( 48 );\n> SELECT * from user_logs where id_user in (\n> 10943, 10942, 10934, 10927, 10910, 10909 );\nI'ld stem for reordering the from and where clauses alike:\n select *\n from\n class_default cd,\n user_data ud,\n user_logs ul\n where\n cd.id_provider = 39 and\n ud.id_class = cd.id_class and\n ul.id_user = ud.id_user;\n\nPersonally I dislike implied joins and rather go for _about_ this:\n select *\n from\n ( class_default cd\n LEFT JOIN user_data ud ON ud.id_class = cd.id_class )\n LEFT JOIN user_logs ul ON ul.id_user = ud.id_user,\n where\n cd.id_provider = 39;\n\nGood luck,\n\nHansH\n\n\n\n\n\n",
"msg_date": "Sat, 26 Jul 2003 12:53:23 +0200",
"msg_from": "\"HansH\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong plan or what ?"
},
{
"msg_contents": "\"\"HansH\"\" <[email protected]>\n> In response to \"Mendola Gaetano\":\n> > I'm running Postgres7.3.3 and I'm performing this simple select:\n>\n> Looking at your fast three step plan\n> > SELECT id_class from class_default where id_provider = 39;\n> > SELECT id_user from user_data where id_class in ( 48 );\n> > SELECT * from user_logs where id_user in (\n> > 10943, 10942, 10934, 10927, 10910, 10909 );\n> I'ld stem for reordering the from and where clauses alike:\n> select *\n> from\n> class_default cd,\n> user_data ud,\n> user_logs ul\n> where\n> cd.id_provider = 39 and\n> ud.id_class = cd.id_class and\n> ul.id_user = ud.id_user;\n\n\nstill wrong:\n\nHash Join (cost=267.10..32994.34 rows=41881 width=264) (actual\ntime=6620.17..6847.20 rows=94 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Seq Scan on user_logs ul (cost=0.00..25712.15 rows=1298315 width=48)\n(actual time=0.01..5381.69 rows=1298351 loops=1)\n -> Hash (cost=266.25..266.25 rows=339 width=216) (actual\ntime=0.89..0.89 rows=0 loops=1)\n -> Nested Loop (cost=0.00..266.25 rows=339 width=216) (actual\ntime=0.16..0.83 rows=21 loops=1)\n -> Seq Scan on class_default cd (cost=0.00..1.39 rows=1\nwidth=55) (actual time=0.08..0.09 rows=1 loops=1)\n Filter: (id_provider = 39)\n -> Index Scan using idx_user_data_class on user_data ud\n(cost=0.00..260.00 rows=389 width=161) (actual time=0.06..0.40 rows=21\nloops=1)\n Index Cond: (ud.id_class = \"outer\".id_class)\n Total runtime: 6847.60 msec\n(10 rows)\n\n\nthe returned are 94.\n\n\n> Personally I dislike implied joins and rather go for _about_ this:\n> select *\n> from\n> ( class_default cd\n> LEFT JOIN user_data ud ON ud.id_class = cd.id_class )\n> LEFT JOIN user_logs ul ON ul.id_user = ud.id_user,\n> where\n> cd.id_provider = 39;\n\nworst:\n\n Merge Join (cost=280.48..55717.14 rows=41881 width=264) (actual\ntime=18113.64..18182.94 rows=105 loops=1)\n Merge Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Index Scan using idx_user_user_logs on user_logs ul\n(cost=0.00..51665.66 rows=1298315 width=48) (actual time=10.78..15459.37\nrows=1298354 loops=1)\n -> Sort (cost=280.48..281.33 rows=339 width=216) (actual\ntime=1.11..1.20 rows=105 loops=1)\n Sort Key: ud.id_user\n -> Nested Loop (cost=0.00..266.25 rows=339 width=216) (actual\ntime=0.14..0.82 rows=21 loops=1)\n -> Seq Scan on class_default cd (cost=0.00..1.39 rows=1\nwidth=55) (actual time=0.07..0.07 rows=1 loops=1)\n Filter: (id_provider = 39)\n -> Index Scan using idx_user_data_class on user_data ud\n(cost=0.00..260.00 rows=389 width=161) (actual time=0.05..0.39 rows=21\nloops=1)\n Index Cond: (ud.id_class = \"outer\".id_class)\n Total runtime: 18185.61 msec\n\n:-(\n\n\n\nthank you anyway.\n\nGaetano\n\n\n",
"msg_date": "Fri, 1 Aug 2003 17:49:11 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong plan or what ?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm working on a project that has a data set of approximately 6million rows\nwith about 12,000 different elements, each element has 7 columns of data.\n\nI'm wondering what would be faster from a scanning perspective (SELECT\nstatements with some calculations) for this type of set up;\n\tone table for all the data\n\tone table for each data element (12,000 tables)\n\tone table per subset of elements (eg all elements that start with\n\"a\" in a table)\n\nThe data is static once its in the database, only new records are added on a\nregular basis.\n\nI'd like to run quite a few different formulated scans in the longer term so\nhaving efficient scans is a high priority.\n\nCan I do anything with Indexing to help with performance? I suspect for the\nmajority of scans I will need to evaluate an outcome based on 4 or 5 of the\n7 columns of data.\n\nThanks in advance :-)\n\nLinz\n",
"msg_date": "Wed, 23 Jul 2003 10:34:41 +1000",
"msg_from": "\"Castle, Lindsay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "One table or many tables for data set"
},
{
"msg_contents": "On Tue, 2003-07-22 at 20:34, Castle, Lindsay wrote:\n> Hi all,\n> \n> I'm working on a project that has a data set of approximately 6million rows\n> with about 12,000 different elements, each element has 7 columns of data.\n\nAre these 7 columns the same for each element?",
"msg_date": "22 Jul 2003 20:53:56 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: One table or many tables for data set"
},
{
"msg_contents": "Castle, Lindsay wrote:\n> I'm working on a project that has a data set of approximately 6million rows\n> with about 12,000 different elements, each element has 7 columns of data.\n> \n> I'm wondering what would be faster from a scanning perspective (SELECT\n> statements with some calculations) for this type of set up;\n> \tone table for all the data\n> \tone table for each data element (12,000 tables)\n> \tone table per subset of elements (eg all elements that start with\n> \"a\" in a table)\n> \n\nI, for one, am having difficulty understanding exactly what your data \nlooks like, so it's hard to give advice. Maybe some concrete examples of \nwhat you are calling \"rows\", \"elements\", and \"columns\" would help.\n\nDoes each of 6 million rows have 12000 elements, each with 7 columns? Or \ndo you mean that out of 6 million rows, there are 12000 distinct kinds \nof elements?\n\n> Can I do anything with Indexing to help with performance? I suspect for the\n> majority of scans I will need to evaluate an outcome based on 4 or 5 of the\n> 7 columns of data.\n> \n\nAgain, this isn't clear to me -- but maybe I'm just being dense ;-)\nDoes this mean you expect 4 or 5 items in your WHERE clause?\n\nJoe\n\n",
"msg_date": "Tue, 22 Jul 2003 18:02:21 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: One table or many tables for data set"
}
] |
[
{
"msg_contents": "I'm trying to update a table but it's taking a very long time. I would\nappreciate any tips folks may have about ways to speed it up. \n\nThe table is paprospect2, as below:\n\n \\d paprospect2\n Column | Type | Modifiers\n -------------+---------+-------------------------------------------------------------------\n pfeature_id | integer | not null default nextval('unison.pfeature_pfeature_id_seq'::text)\n pseq_id | integer | not null\n pftype_id | integer | not null\n start | integer |\n stop | integer |\n confidence | real |\n run_id | integer | not null\n [snip 13 integer and real columns]\n run_id_new | integer |\n \n Indexes: paprospect2_redundant_alignment unique btree (pseq_id, \"start\", stop, run_id, pmodel_id),\n p2thread_p2params_id btree (run_id),\n p2thread_pmodel_id btree (pmodel_id)\n Foreign Key constraints: pftype_id_exists FOREIGN KEY (pftype_id) REFERENCES pftype(pftype_id) ON UPDATE CASCADE ON DELETE CASCADE,\n p2thread_pmodel_id_exists FOREIGN KEY (pmodel_id) REFERENCES pmprospect2(pmodel_id) ON UPDATE CASCADE ON DELETE CASCADE,\n pseq_id_exists FOREIGN KEY (pseq_id) REFERENCES pseq(pseq_id) ON UPDATE CASCADE ON DELETE CASCADE\n Triggers: p2thread_i_trigger\n \n\nThe columns pfeature_id..confidence and run_id_new (in red) are from an\ninherited table. Although the inheritance itself is probably not\nrelevant here (correction welcome), I suspect it may be relevant that\nall existing rows were written before the table definition included\nrun_id_new. p2thread_i_trigger is defined fires on insert only (not\nupdate).\n\npaprospect2 contains ~40M rows. The goal now is to migrate the data to\nthe supertable-inherited column with\n\n update paprospect2 set run_id_new=run_id;\n\n\nThe update's been running for 5 hours (unloaded dual 2.4 GHz Xeon w/2GB\nRAM, SCSI160 10K drive). There are no other jobs running. Load is ~1.2\nand the update's using ~3-5% of the CPU.\n\n $ ps -ostime,time,pcpu,cmd 28701\n STIME TIME %CPU CMD\n 12:18 00:07:19 2.3 postgres: admin csb 128.137.116.213 UPDATE\n\nThis suggests that the update is I/O bound (duh) and vmstat supports\nthis:\n\n $ vmstat 1\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 0 1 0 0 11288 94632 3558960 0 0 14 6 12 21 1 0 6\n 0 1 0 0 12044 94632 3558956 0 0 0 972 332 16 0 1 99\n 0 1 0 0 11092 94632 3558932 0 0 16 4420 309 25 0 2 97\n 0 1 0 0 11456 94636 3558928 0 0 0 980 326 23 0 1 99\n 1 0 0 0 12340 94636 3558924 0 0 16 532 329 14 0 0 100\n 0 1 0 0 12300 94636 3558916 0 0 0 1376 324 16 1 0 99\n 0 1 0 0 12252 94636 3558904 0 0 16 1888 325 18 0 0 99\n 0 1 0 0 11452 94636 3558888 0 0 16 2864 324 23 1 1 98\n 0 1 0 0 12172 94636 3558884 0 0 0 940 320 12 0 1 99\n 0 1 0 0 12180 94636 3558872 0 0 16 1840 318 22 0 1 99\n 0 1 0 0 11588 94636 3558856 0 0 0 2752 312 16 1 2 97\n\n\nPresumably the large number of blocks written (bo) versus blocks read\n(bi) reflects an enormous amount of bookkeeping that has to be done for\nMVCC, logging, perhaps rewriting a row for the new definition (a guess\n-- I don't know how this is handled), indicies, etc. There's no swapping\nand no processes are waiting. In short, it seems that this is ENTIRELY\nan I/O issue. Obviously, faster drives will help (but probably only by\nsmall factor).\n\nAny ideas how I might speed this up? 
Presumably this is all getting\nwrapped in a transaction -- does that hurt me for such a large update?\n \nThanks,\nReece\n\n\nBonus diversionary topic: In case it's not obvious, the motivation for\nthis is that the subtable (paprospect2) contains a column (run_id) whose\ndefinition I would like to migrate to the inherited table (i.e., the\n'super-table'). Although postgresql permits adding a column to a\nsupertable with the same name as an extant column in a subtable, it\nappears that such \"merged definition\" columns do not have the same\nproperties as a typical inherited column. In particular, dropping the\ncolumn from the supertable does not drop it from the subtable (but\nrenaming it does change both names). Hmm.\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0",
"msg_date": "22 Jul 2003 17:40:01 -0700",
"msg_from": "Reece Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow table updates"
},
{
"msg_contents": "On Wednesday 23 July 2003 01:40, Reece Hart wrote:\n> I'm trying to update a table but it's taking a very long time. I would\n> appreciate any tips folks may have about ways to speed it up.\n[snip]\n> paprospect2 contains ~40M rows. The goal now is to migrate the data to\n> the supertable-inherited column with\n>\n> update paprospect2 set run_id_new=run_id;\n>\n>\n> The update's been running for 5 hours (unloaded dual 2.4 GHz Xeon w/2GB\n> RAM, SCSI160 10K drive). There are no other jobs running. Load is ~1.2\n> and the update's using ~3-5% of the CPU.\n[snip]\n> This suggests that the update is I/O bound (duh) and vmstat supports\n> this:\n[snip]\n> Presumably the large number of blocks written (bo) versus blocks read\n> (bi) reflects an enormous amount of bookkeeping that has to be done for\n> MVCC, logging, perhaps rewriting a row for the new definition (a guess\n> -- I don't know how this is handled), indicies, etc. There's no swapping\n> and no processes are waiting. In short, it seems that this is ENTIRELY\n> an I/O issue. Obviously, faster drives will help (but probably only by\n> small factor).\n>\n> Any ideas how I might speed this up? Presumably this is all getting\n> wrapped in a transaction -- does that hurt me for such a large update?\n\nWell, it needs to keep enought bookkeeping to be able to rollback the whole \ntransaction if it encounters a problem, or 40M rows in your case. Looks like \nyou're right and it's an I/O issue. I must admit, I'm a bit puzzled that your \nCPU is quite so low, but I suppose you've got two fast CPUs so it shouldn't \nbe high.\n\n[note the following is more speculation than experience]\nWhat might be happening is that the drive is spending all its time seeking \nbetween the WAL, index and table as it updates. I would also tend to be \nsuspicious of the foreign keys - PG might be re-checking these, and obviously \nthat would take time too.\n\nWhat you might want to try in future:\n1. begin transaction\n2. drop indexes, foreign keys\n3. update table\n4. vacuum it\n5. recreate indexes, foreign keys etc\n6. commit\n\nNow that's just moving the index updating/fk stuff to the end of the task, but \nit does seem to help sometimes.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 23 Jul 2003 09:49:09 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow table updates"
},
{
"msg_contents": "Richard-\n\nThanks for the suggestions. I too had thought about the FK checks, even\nthough the columns aren't getting updated.\n\nI'm flabbergasted that the update is still running (~22 hours elapsed).\nBy comparison, the database takes only 4 hours to recreate from backup!\nSomething funny is happening here. I just interrupted the update and\nwill find another way.\n\nBut hang on, what's this:\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n30164 compbio 25 0 6056 6056 3568 R 98.8 0.1 29:55 postgres: admin csb [local] COPY\n\n\nI am the only user and I'm not doing a copy... this must be part of the\nupdate process. Does anyone out there know whether updates do a table\ncopy instead of in-table udpating (perhaps as a special case for\nwhole-table updates)?\n\nOf course, I can't help but wonder whether I just killed it when it was\nnearly done...\n\nThanks,\nReece\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n\n\n\n\n\n\nRichard-\n\nThanks for the suggestions. I too had thought about the FK checks, even though the columns aren't getting updated.\n\nI'm flabbergasted that the update is still running (~22 hours elapsed). By comparison, the database takes only 4 hours to recreate from backup! Something funny is happening here. I just interrupted the update and will find another way.\n\nBut hang on, what's this:\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n30164 compbio 25 0 6056 6056 3568 R 98.8 0.1 29:55 postgres: admin csb [local] COPY\n\nI am the only user and I'm not doing a copy... this must be part of the update process. Does anyone out there know whether updates do a table copy instead of in-table udpating (perhaps as a special case for whole-table updates)?\n\nOf course, I can't help but wonder whether I just killed it when it was nearly done...\n\nThanks,\nReece\n\n\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0",
"msg_date": "23 Jul 2003 10:44:36 -0700",
"msg_from": "Reece Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] slow table updates"
}
] |
[
{
"msg_contents": "Apologies, let me clear this up a bit (hopefully) :-)\n\nThe data structure looks like this:\n\telement\n\tdate\n\tnum1\n\tnum2\n\tnum3\n\tnum4\n\tunits\n\nThere are approx 12,000 distinct elements for a total of about 6 million\nrows of data.\n\nThe scanning technology I want to use may need a different number of rows\nand different columns depending on the scan formula;\n\teg scan1 may need num1, num2 and num3 from the last 200 rows for\nelement \"x\"\n\t scan2 may need num1, units from the last 10 rows for element \"y\"\n\nI can either do the scans and calculate what i need within SQL or drag the\ndata out and process it outside of SQL, my preference is to go inside SQL as\nI've assumed that would be faster and less development work.\n\nIf I went with the many tables design I would not expect to need to join\nbetween tables, there is no relationship between the different elements that\nI need to cater for.\n\nCheers,\n\nLinz\n\n\nCastle, Lindsay wrote and <snipped>:\n> I'm working on a project that has a data set of approximately 6million\nrows\n> with about 12,000 different elements, each element has 7 columns of data.\n> \n> I'm wondering what would be faster from a scanning perspective (SELECT\n> statements with some calculations) for this type of set up;\n> \tone table for all the data\n> \tone table for each data element (12,000 tables)\n> \tone table per subset of elements (eg all elements that start with\n> \"a\" in a table)\n> \n\nI, for one, am having difficulty understanding exactly what your data \nlooks like, so it's hard to give advice. Maybe some concrete examples of \nwhat you are calling \"rows\", \"elements\", and \"columns\" would help.\n\nDoes each of 6 million rows have 12000 elements, each with 7 columns? Or \ndo you mean that out of 6 million rows, there are 12000 distinct kinds \nof elements?\n\n> Can I do anything with Indexing to help with performance? I suspect for\nthe\n> majority of scans I will need to evaluate an outcome based on 4 or 5 of\nthe\n> 7 columns of data.\n> \n\nAgain, this isn't clear to me -- but maybe I'm just being dense ;-)\nDoes this mean you expect 4 or 5 items in your WHERE clause?\n",
"msg_date": "Wed, 23 Jul 2003 11:25:07 +1000",
"msg_from": "\"Castle, Lindsay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table or many tables for data set"
},
{
"msg_contents": "Castle, Lindsay wrote:\n> The data structure looks like this:\n> \telement\n> \tdate\n> \tnum1\n> \tnum2\n> \tnum3\n> \tnum4\n> \tunits\n> \n> There are approx 12,000 distinct elements for a total of about 6 million\n> rows of data.\n\nAhh, that helps! So are the elements evenly distributed, i.e. are there \napprox 500 rows of each element? If so, it should be plenty quick to put \nall the data in one table with an index on \"element\" (and maybe a \nmulticolumn key, depending on other factors).\n\n> The scanning technology I want to use may need a different number of rows\n> and different columns depending on the scan formula;\n> \teg scan1 may need num1, num2 and num3 from the last 200 rows for\n> element \"x\"\n> \t scan2 may need num1, units from the last 10 rows for element \"y\"\n\nWhen you say \"last X rows\", do you mean sorted by \"date\"? If so, you \nmight want that index to be on (element, date). Then do:\n\nSELECT num1, num2, num3 FROM mytable WHERE element = 'an_element' order \nby date DESC LIMIT 20;\n\nReplace num1, num2, num3 by whatever columns you want, and \"LIMIT X\" as \nthe number of rows you want.\n\nHTH,\n\nJoe\n\n",
"msg_date": "Tue, 22 Jul 2003 18:36:20 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: One table or many tables for data set"
}
] |
[
{
"msg_contents": "Thanks Joe,\n\nThis certainly helps me get going on the right path.\n\n\nLindsay Castle\nEDS Australia\nMidrange & Distributed Tools\nInfrastructure Tools AP\nPh: +61 (0)8 8464 7101\nFax: +61 (0)8 8464 2135\n\n\n-----Original Message-----\nFrom: Joe Conway [mailto:[email protected]]\nSent: Wednesday, 23 July 2003 11:06 AM\nTo: Castle, Lindsay\nCc: [email protected]\nSubject: Re: [PERFORM] One table or many tables for data set\n\n\nCastle, Lindsay wrote:\n> The data structure looks like this:\n> \telement\n> \tdate\n> \tnum1\n> \tnum2\n> \tnum3\n> \tnum4\n> \tunits\n> \n> There are approx 12,000 distinct elements for a total of about 6 million\n> rows of data.\n\nAhh, that helps! So are the elements evenly distributed, i.e. are there \napprox 500 rows of each element? If so, it should be plenty quick to put \nall the data in one table with an index on \"element\" (and maybe a \nmulticolumn key, depending on other factors).\n\n> The scanning technology I want to use may need a different number of rows\n> and different columns depending on the scan formula;\n> \teg scan1 may need num1, num2 and num3 from the last 200 rows for\n> element \"x\"\n> \t scan2 may need num1, units from the last 10 rows for element \"y\"\n\nWhen you say \"last X rows\", do you mean sorted by \"date\"? If so, you \nmight want that index to be on (element, date). Then do:\n\nSELECT num1, num2, num3 FROM mytable WHERE element = 'an_element' order \nby date DESC LIMIT 20;\n\nReplace num1, num2, num3 by whatever columns you want, and \"LIMIT X\" as \nthe number of rows you want.\n\nHTH,\n\nJoe\n",
"msg_date": "Wed, 23 Jul 2003 11:47:29 +1000",
"msg_from": "\"Castle, Lindsay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table or many tables for data set"
}
] |
[
{
"msg_contents": "Ok.. Unless I'm missing something, the data will be static (or near\nstatic). It also sounds as if the structure is common for elements, so\nyou probably only want 2 tables.\n\nOne with 6 million rows and any row information. The other with 6\nmillion * 12000 rows with the element data linking to the row\ninformation line with an identifier, and have an 'element type' (I\nassume there are 12000 types of elements -- or something of that\nnature).\n\nUnique constraint on (row_identifier, element_type)\n\nThe speed you achieve will be based on what indexes you create.\n\nIf you spend most of your time with one or a few (5% or less of the\nstructure) element types, create a partial index for those element types\nonly, and a partial index for all of the others.\n\nIf you have a standard mathematical operation on num1, num2, etc. you\nmay want to make use of functional indexes to index the result of the\ncalculation.\n\nBe sure to create the tables WITHOUT OIDS and be prepared for the\ndataload to take a while, and CLUSTER the table based on your most\ncommonly used index (once they've been setup).\n\nTo help with speed, we would need to see EXPLAIN ANALYZE results and the\nquery being performed.\n\nOn Tue, 2003-07-22 at 21:00, Castle, Lindsay wrote:\n> All rows have the same structure, the data itself will be different for each\n> row, the structure is something like this:\n> \n> \telement\n> \tdate\n> \tnum1\n> \tnum2\n> \tnum3\n> \tnum4\n> \tunits\n> \n> Thanks,\n> \n> \n> Lindsay Castle\n> EDS Australia\n> Midrange & Distributed Tools\n> Infrastructure Tools AP\n> Ph: +61 (0)8 8464 7101\n> Fax: +61 (0)8 8464 2135\n> \n> \n> -----Original Message-----\n> From: Rod Taylor [mailto:[email protected]]\n> Sent: Wednesday, 23 July 2003 10:24 AM\n> To: Castle, Lindsay\n> Cc: Postgresql Performance\n> Subject: Re: One table or many tables for data set\n> \n> \n> On Tue, 2003-07-22 at 20:34, Castle, Lindsay wrote:\n> > Hi all,\n> > \n> > I'm working on a project that has a data set of approximately 6million\n> rows\n> > with about 12,000 different elements, each element has 7 columns of data.\n> \n> Are these 7 columns the same for each element?\n>",
"msg_date": "22 Jul 2003 21:50:03 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table or many tables for data set"
},
{
"msg_contents": "On Tue, 2003-07-22 at 21:50, Rod Taylor wrote:\n> Ok.. Unless I'm missing something, the data will be static (or near\n> static). It also sounds as if the structure is common for elements, so\n> you probably only want 2 tables.\n\nI misunderstood. Do what Joe suggested.",
"msg_date": "22 Jul 2003 22:10:59 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table or many tables for data set"
}
] |
[
{
"msg_contents": "Thanks Rod \n\nMy explanations will be better next time. :-)\n\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]]\nSent: Wednesday, 23 July 2003 11:41 AM\nTo: Castle, Lindsay\nCc: Postgresql Performance\nSubject: Re: One table or many tables for data set\n\n\nOn Tue, 2003-07-22 at 21:50, Rod Taylor wrote:\n> Ok.. Unless I'm missing something, the data will be static (or near\n> static). It also sounds as if the structure is common for elements, so\n> you probably only want 2 tables.\n\nI misunderstood. Do what Joe suggested.\n",
"msg_date": "Wed, 23 Jul 2003 12:14:04 +1000",
"msg_from": "\"Castle, Lindsay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table or many tables for data set"
}
] |
[
{
"msg_contents": "I have a database which is constantly being written to. A web server's \nlog file (and extras) is being written to it. There are no deletions or \nupdates (at least I think so :).\n\nAs the web traffic increases so will the write intensity.\n\nRight now the database tables have no foreign keys defined even though \nthere are foreign keys. The code that inserts into the DB is simple \nenough (now) that we can make sure that nothing is inserted if the \ncorresponding fk does not exist and that all fk checks pass.\n\nI want to add foreign key constraints to the table definitions but I am \nworried that it might be a big performance hit. Can anyone tell me how \nmuch of a performance hit adding one foreign key constraint to one field \nin a table will roughly be?\n\nAlso, for a DB that is write-intensive and rarely read, what are some \nthings I can do to increase performance? (Keeping in mind that there is \nmore than on DB on the same pg server).\n\nThanks,\n\nJean-Christian Imbeault\n\n",
"msg_date": "Wed, 23 Jul 2003 16:05:00 +0900",
"msg_from": "Jean-Christian Imbeault <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance hit of foreign key constraints?"
},
{
"msg_contents": "On 23 Jul 2003 at 16:05, Jean-Christian Imbeault wrote:\n\n> I have a database which is constantly being written to. A web server's \n> log file (and extras) is being written to it. There are no deletions or \n> updates (at least I think so :).\n> \n> As the web traffic increases so will the write intensity.\n> \n> Right now the database tables have no foreign keys defined even though \n> there are foreign keys. The code that inserts into the DB is simple \n> enough (now) that we can make sure that nothing is inserted if the \n> corresponding fk does not exist and that all fk checks pass.\n> \n> I want to add foreign key constraints to the table definitions but I am \n> worried that it might be a big performance hit. Can anyone tell me how \n> much of a performance hit adding one foreign key constraint to one field \n> in a table will roughly be?\n> \n> Also, for a DB that is write-intensive and rarely read, what are some \n> things I can do to increase performance? (Keeping in mind that there is \n> more than on DB on the same pg server).\n\n1. Insert them in batches. Proper size of transactions can speed the write \nperformance heavily.\n2. What kind of foreign keys you have? It might be possible to reduce FK \noverhead if you are checking against small number of records.\n3. Tune your hardware for write performance like getting a good-for-write RAID. \nI forgot which performs which for read and write.\n4. Tune WAL and move it to separate drive. That should win you some \nperformance.\n\nHTH\n\nBye\n Shridhar\n\n--\nBeauty:\tWhat's in your eye when you have a bee in your hand.\n\n",
"msg_date": "Wed, 23 Jul 2003 15:24:01 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance hit of foreign key constraints?"
},
{
"msg_contents": "\nOn Wed, 23 Jul 2003, Jean-Christian Imbeault wrote:\n\n> I have a database which is constantly being written to. A web server's\n> log file (and extras) is being written to it. There are no deletions or\n> updates (at least I think so :).\n>\n> As the web traffic increases so will the write intensity.\n>\n> Right now the database tables have no foreign keys defined even though\n> there are foreign keys. The code that inserts into the DB is simple\n> enough (now) that we can make sure that nothing is inserted if the\n> corresponding fk does not exist and that all fk checks pass.\n>\n> I want to add foreign key constraints to the table definitions but I am\n> worried that it might be a big performance hit. Can anyone tell me how\n> much of a performance hit adding one foreign key constraint to one field\n> in a table will roughly be?\n\nWell, generally speaking it'll be (assuming no ref actions - and covering\nactions you aren't doing):\n one select for each insert to the table with the constraint\n one select for each update to the table with the constraint, in current\n releases unpatched\n one select for each update to the table with the constraint if the\n key is changed in patched 7.3 or 7.4beta.\n one select for each delete to the referenced table\n one select for each update to the referenced table if the key is changed\n plus management of the trigger queue (this can be an issue in long\n transactions since the queue can get big)\n and some misc. work in the triggers.\n\nYou really want the foregin key on the table with the constraint to be\nindexed and using the index if you expect eitherof the referenced table\nconditions to happen.\n\n\n",
"msg_date": "Wed, 23 Jul 2003 07:49:12 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance hit of foreign key constraints?"
},
{
"msg_contents": "\nOn Wed, 23 Jul 2003, Stephan Szabo wrote:\n\n>\n> On Wed, 23 Jul 2003, Jean-Christian Imbeault wrote:\n>\n> > I have a database which is constantly being written to. A web server's\n> > log file (and extras) is being written to it. There are no deletions or\n> > updates (at least I think so :).\n> >\n> > As the web traffic increases so will the write intensity.\n> >\n> > Right now the database tables have no foreign keys defined even though\n> > there are foreign keys. The code that inserts into the DB is simple\n> > enough (now) that we can make sure that nothing is inserted if the\n> > corresponding fk does not exist and that all fk checks pass.\n> >\n> > I want to add foreign key constraints to the table definitions but I am\n> > worried that it might be a big performance hit. Can anyone tell me how\n> > much of a performance hit adding one foreign key constraint to one field\n> > in a table will roughly be?\n>\n> Well, generally speaking it'll be (assuming no ref actions - and covering\n> actions you aren't doing):\n> one select for each insert to the table with the constraint\n> one select for each update to the table with the constraint, in current\n> releases unpatched\n> one select for each update to the table with the constraint if the\n> key is changed in patched 7.3 or 7.4beta.\n> one select for each delete to the referenced table\n> one select for each update to the referenced table if the key is changed\n\nSo much for answering questions before I take my shower and wake up.\nMake those last two be two selects, and in 7.3 and earlier, one of those\nselects on update to referenced happens even if the key isn't changed\n(there's a patch that should work to change that on -patches archive).\n\n",
"msg_date": "Wed, 23 Jul 2003 08:02:49 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance hit of foreign key constraints?"
}
] |
[
{
"msg_contents": "\nHi ,\n\nI have a view which is a union of select of certain feilds from\nindentical tables. The problem is when we query a column on\nwhich index exists exists foreach of the tables does not use the\nindexes.\n\n\nBut when we query individual tables it uses indexes.\n\n\nRegds\nMallah.\n\ntradein_clients=# create view sent_enquiry_eyp_iid_ip_cat1 as \n\nselect rfi_id,sender_uid,receiver_uid,subject,generated from eyp_rfi UNION \nselect rfi_id,sender_uid,receiver_uid,subject,generated from iid_rfi UNION \nselect rfi_id,sender_uid,receiver_uid,subject,generated from ip_rfi UNION \nselect rfi_id,sender_uid,receiver_uid,subject,generated from catalog_rfi ;\n\nCREATE VIEW\ntradein_clients=#\ntradein_clients=# explain analyze select rfi_id from \nsent_enquiry_eyp_iid_ip_cat1 where sender_uid = 34866;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan sent_enquiry_eyp_iid_ip_cat1 (cost=173347.05..182139.66 \nrows=58617 width=55) (actual time=57514.58..62462.15 rows=73 loops=1)\n Filter: (sender_uid = 34866)\n -> Unique (cost=173347.05..182139.66 rows=58617 width=55) (actual \ntime=57514.54..61598.82 rows=586230 loops=1)\n -> Sort (cost=173347.05..174812.49 rows=586174 width=55) (actual \ntime=57514.54..58472.01 rows=586231 loops=1)\n Sort Key: rfi_id, sender_uid, receiver_uid, subject, generated\n -> Append (cost=0.00..90563.74 rows=586174 width=55) (actual \ntime=13.17..50500.95 rows=586231 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..57800.63 \nrows=369463 width=42) (actual time=13.17..30405.33 rows=369536 loops=1)\n -> Seq Scan on eyp_rfi (cost=0.00..57800.63 \nrows=369463 width=42) (actual time=13.14..28230.00 rows=369536 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..7317.11 \nrows=45811 width=47) (actual time=0.04..534.89 rows=45811 loops=1)\n -> Seq Scan on iid_rfi (cost=0.00..7317.11 \nrows=45811 width=47) (actual time=0.03..359.88 rows=45811 loops=1)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..22335.44 \nrows=145244 width=42) (actual time=0.08..17815.66 rows=145251 loops=1)\n -> Seq Scan on ip_rfi (cost=0.00..22335.44 \nrows=145244 width=42) (actual time=0.05..16949.03 rows=145251 loops=1)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..3110.56 \nrows=25656 width=55) (actual time=0.07..469.60 rows=25633 loops=1)\n -> Seq Scan on catalog_rfi (cost=0.00..3110.56 \nrows=25656 width=55) (actual time=0.06..380.64 rows=25633 loops=1)\n Total runtime: 62504.24 msec\n(15 rows)\n\ntradein_clients=# explain analyze select rfi_id from eyp_rfi where \nsender_uid = 34866;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Index Scan using eyp_sender_uid_idx on eyp_rfi (cost=0.00..376.11 rows=117 \nwidth=4) (actual time=9.88..69.10 rows=12 loops=1)\n Index Cond: (sender_uid = 34866)\n Total runtime: 69.17 msec\n(3 rows)\n\ntradein_clients=#\n\n\n\n",
"msg_date": "Wed, 23 Jul 2003 15:51:48 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "factoring problem with view in 7.3.3"
},
{
"msg_contents": "On Wednesday 23 July 2003 11:21, Rajesh Kumar Mallah wrote:\n> Hi ,\n>\n> I have a view which is a union of select of certain feilds from\n> indentical tables. The problem is when we query a column on\n> which index exists exists foreach of the tables does not use the\n> indexes.\n>\n>\n> But when we query individual tables it uses indexes.\n>\n> tradein_clients=# create view sent_enquiry_eyp_iid_ip_cat1 as\n> select rfi_id,sender_uid,receiver_uid,subject,generated from eyp_rfi UNION\n> select rfi_id,sender_uid,receiver_uid,subject,generated from iid_rfi UNION\n> select rfi_id,sender_uid,receiver_uid,subject,generated from ip_rfi UNION\n> select rfi_id,sender_uid,receiver_uid,subject,generated from catalog_rfi ;\n>\n> CREATE VIEW\n> tradein_clients=#\n> tradein_clients=# explain analyze select rfi_id from\n> sent_enquiry_eyp_iid_ip_cat1 where sender_uid = 34866;\n\n[snip query plan showing full selects being done and then filtering on the \noutputs]\n\nI do remember some talk about issues with pushing where clauses down into \nunions on a view (sorry - can't remember when - maybe check the archives). \nActually, I thought work had been done on that for 7.3.3, but it might have \nbeen 7.4\n\nIf you generally do that particular query (checking agains sender_uid) then \nthe simplest solution is to build an SQL query to push the comparison down \nfor you:\n\nCREATE my_function(int4) RETURNS SETOF my_type AS '\n SELECT ... FROM eyp_rfi WHERE sender_uid = $1 UNION\n ...etc...\n' LANGUAGE 'SQL';\n\nNote that you may get an error about an operator \"=$\" if you miss the spaces \naround the \"=\".\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 23 Jul 2003 15:13:14 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: factoring problem with view in 7.3.3"
},
{
"msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> I have a view which is a union of select of certain feilds from\n> indentical tables. The problem is when we query a column on\n> which index exists exists foreach of the tables does not use the\n> indexes.\n\nHard to be certain since you didn't show us the table definitions,\nbut I suspect the culprit is a datatype mismatch. Here are the\ncomments for 7.3's subquery_is_pushdown_safe, which determines whether\nit's okay to push down a qualifier:\n\n * Conditions checked here:\n *\n * 1. If the subquery has a LIMIT clause or a DISTINCT ON clause, we must\n * not push down any quals, since that could change the set of rows\n * returned. (Actually, we could push down quals into a DISTINCT ON\n * subquery if they refer only to DISTINCT-ed output columns, but\n * checking that seems more work than it's worth. In any case, a\n * plain DISTINCT is safe to push down past.)\n *\n * 2. If the subquery has any functions returning sets in its target list,\n * we do not push down any quals, since the quals\n * might refer to those tlist items, which would mean we'd introduce\n * functions-returning-sets into the subquery's WHERE/HAVING quals.\n * (It'd be sufficient to not push down quals that refer to those\n * particular tlist items, but that's much clumsier to check.)\n *\n * 3. If the subquery contains EXCEPT or EXCEPT ALL set ops we cannot push\n * quals into it, because that would change the results. For subqueries\n * using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can push the quals\n * into each component query, so long as all the component queries share\n * identical output types. (That restriction could probably be relaxed,\n * but it would take much more code to include type coercion code into\n * the quals, and I'm also concerned about possible semantic gotchas.)\n\n1 and 2 don't seem to apply to your problem, which leaves 3 ...\n\n(BTW, 7.4 has addressed all of the possible improvements noted in the\nparenthetical remarks here.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jul 2003 11:43:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: factoring problem with view in 7.3.3 "
},
{
"msg_contents": "> Rajesh Kumar Mallah <[email protected]> writes:\n>> I have a view which is a union of select of certain feilds from\n>> indentical tables. The problem is when we query a column on\n>> which index exists exists foreach of the tables does not use the\n>> indexes.\n>\n> Hard to be certain since you didn't show us the table definitions, but\n> I suspect the culprit is a datatype mismatch.\n\n\nRightly guessed , one of the columns in the view was having a diffrent type\n(date vs timestamp ). The column was removed from the view it worked.\n\nthe column 'generated' was timestamp in 2 place and date in 2 place,\ni wanted it in my and did a typecasting in the view below\nbut it suffers from the same problem .\n\nI could use Richards suggestion then ?\n\n\nregds\nmallah.\n\n\n\n CREATE VIEW sent_enquiry_eyp_iid_ip_cat2 as ((((((SELECT eyp_rfi.rfi_id,\n eyp_rfi.sender_uid, eyp_rfi.receiver_uid, eyp_rfi.subject,\n eyp_rfi.generated::timestamp FROM ONLY eyp_rfi) UNION (SELECT\n iid_rfi.rfi_id, iid_rfi.sender_uid, iid_rfi.receiver_uid,\n iid_rfi.subject, iid_rfi.generated FROM ONLY iid_rfi))) UNION (SELECT\n ip_rfi.rfi_id, ip_rfi.sender_uid, ip_rfi.receiver_uid, ip_rfi.subject,\n ip_rfi.generated::timestamp FROM ONLY ip_rfi))) UNION (SELECT\n catalog_rfi.rfi_id, catalog_rfi.sender_uid, catalog_rfi.receiver_uid,\n catalog_rfi.subject, catalog_rfi.generated FROM ONLY catalog_rfi));\n\n Here are the\n> comments for 7.3's subquery_is_pushdown_safe, which determines whether\n> it's okay to push down a qualifier:\n>\n> * Conditions checked here:\n> *\n> * 1. If the subquery has a LIMIT clause or a DISTINCT ON clause, we\n> must * not push down any quals, since that could change the set of rows\n> * returned. (Actually, we could push down quals into a DISTINCT ON *\n> subquery if they refer only to DISTINCT-ed output columns, but\n> * checking that seems more work than it's worth. In any case, a\n> * plain DISTINCT is safe to push down past.)\n> *\n> * 2. If the subquery has any functions returning sets in its target\n> list, * we do not push down any quals, since the quals\n> * might refer to those tlist items, which would mean we'd introduce *\n> functions-returning-sets into the subquery's WHERE/HAVING quals. *\n> (It'd be sufficient to not push down quals that refer to those\n> * particular tlist items, but that's much clumsier to check.)\n> *\n> * 3. If the subquery contains EXCEPT or EXCEPT ALL set ops we cannot\n> push * quals into it, because that would change the results. For\n> subqueries * using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can push\n> the quals * into each component query, so long as all the component\n> queries share * identical output types. (That restriction could\n> probably be relaxed, * but it would take much more code to include type\n> coercion code into * the quals, and I'm also concerned about possible\n> semantic gotchas.)\n>\n> 1 and 2 don't seem to apply to your problem, which leaves 3 ...\n>\n> (BTW, 7.4 has addressed all of the possible improvements noted in the\n> parenthetical remarks here.)\n>\n> \t\t\tregards, tom lane\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n",
"msg_date": "Wed, 23 Jul 2003 22:07:09 +0530 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: factoring problem with view in 7.3.3 [ PARTIALLY SOLVED ]"
},
{
"msg_contents": "<[email protected]> writes:\n> the column 'generated' was timestamp in 2 place and date in 2 place,\n> i wanted it in my and did a typecasting in the view below\n> but it suffers from the same problem .\n\nAFAIR it should work if you insert casts into the UNION's member selects.\nMaybe you didn't get the casting quite right? (For instance,\n\"timestamp\" isn't \"timestamp with time zone\" ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jul 2003 12:47:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: factoring problem with view in 7.3.3 [ PARTIALLY SOLVED ] "
},
{
"msg_contents": "\nYep!\n\nit works perfectly now.\n\n\nCREATE or replace VIEW sent_enquiry_eyp_iid_ip_cat2 as ((((((SELECT\neyp_rfi.rfi_id, eyp_rfi.sender_uid, eyp_rfi.receiver_uid, eyp_rfi.subject,\ncast(eyp_rfi.generated as timestamp with time zone ) FROM ONLY eyp_rfi)\nUNION (SELECT iid_rfi.rfi_id, iid_rfi.sender_uid, iid_rfi.receiver_uid,\niid_rfi.subject, iid_rfi.generated FROM ONLY iid_rfi))) UNION (SELECT\nip_rfi.rfi_id, ip_rfi.sender_uid, ip_rfi.receiver_uid, ip_rfi.subject, \ncast(ip_rfi.generated as timestamp with time zone ) FROM ONLY ip_rfi)))\nUNION (SELECT catalog_rfi.rfi_id, catalog_rfi.sender_uid,\ncatalog_rfi.receiver_uid, catalog_rfi.subject, catalog_rfi.generated FROM\nONLY catalog_rfi));\ntradein_clients=# explain analyze SELECT rfi_id from\nsent_enquiry_eyp_iid_ip_cat2 where sender_uid=38466; QUERY\n PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------ Subquery Scan sent_enquiry_eyp_iid_ip_cat2 (cost=641.62..644.67 rows=20\n width=55) (actual time=0.17..0.17 rows=0 loops=1) -> Unique (cost=641.62..644.67 rows=20 width=55) (actual\n time=0.17..0.17 rows=0 loops=1) -> Sort (cost=641.62..642.12 rows=204 width=55) (actual\n time=0.17..0.17 rows=0 loops=1) Sort Key: rfi_id, sender_uid, receiver_uid, subject, generated\n -> Append (cost=0.00..633.80 rows=204 width=55) (actual\n time=0.08..0.08 rows=0 loops=1) -> Subquery Scan \"*SELECT* 1\" (cost=0.00..376.11\n rows=117 width=42) (actual time=0.03..0.03 rows=0\n loops=1) -> Index Scan using eyp_sender_uid_idx on\n eyp_rfi (cost=0.00..376.11 rows=117 width=42)\n (actual time=0.03..0.03 rows=0 loops=1) Index Cond: (sender_uid = 38466)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..81.33\n rows=21 width=47) (actual time=0.02..0.02 rows=0\n loops=1) -> Index Scan using iid_sender_uid_idx on\n iid_rfi (cost=0.00..81.33 rows=21 width=47)\n (actual time=0.02..0.02 rows=0 loops=1) Index Cond: (sender_uid = 38466)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..160.18\n rows=57 width=42) (actual time=0.02..0.02 rows=0\n loops=1) -> Index Scan using ip_sender_uid_idx on\n ip_rfi (cost=0.00..160.18 rows=57 width=42)\n (actual time=0.02..0.02 rows=0 loops=1) Index Cond: (sender_uid = 38466)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..16.19\n rows=8 width=55) (actual time=0.01..0.01 rows=0\n loops=1) -> Index Scan using catalog_sender_uid_idx on\n catalog_rfi (cost=0.00..16.19 rows=8 width=55)\n (actual time=0.01..0.01 rows=0 loops=1) Index Cond: (sender_uid = 38466)\n Total runtime: 0.41 msec\n(18 rows)\n\n\n\nregds\nmallah.\n\n\n\n> <[email protected]> writes:\n>> the column 'generated' was timestamp in 2 place and date in 2 place, i\n>> wanted it in my and did a typecasting in the view below\n>> but it suffers from the same problem .\n>\n> AFAIR it should work if you insert casts into the UNION's member\n> selects. Maybe you didn't get the casting quite right? (For instance,\n> \"timestamp\" isn't \"timestamp with time zone\" ...)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of\n> broadcast)--------------------------- TIP 6: Have you searched our list\n> archives?\n>\n> http://archives.postgresql.org\n\n\n\n-----------------------------------------\nOver 1,00,000 exporters are waiting for your order! Click below to get\nin touch with leading Indian exporters listed in the premier\ntrade directory Exporters Yellow Pages.\nhttp://www.trade-india.com/dyn/gdh/eyp/\n\n\n",
"msg_date": "Wed, 23 Jul 2003 22:51:16 +0530 (IST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: factoring problem with view in 7.3.3 [ SOLVED ]"
}
] |
[
{
"msg_contents": "I have these two tables with the same data on two different\nmachines (SuSE 8.2 and Gentoo both with 7.3.2):\n\n\nschulz=> \\d rechnung\n Table \"jschulz.rechnung\"\n Column | Type | Modifiers \n----------------+----------+-----------\n system | smallint | not null\n jahr | smallint | not null\n monat | smallint | not null\n rechnungsnr | integer | not null\n rechnungsdatum | date | not null\n kundennr | integer | not null\n seiten | smallint | not null\n formularnr | smallint | \n text | text | not null\nIndexes: rechnung_pkey primary key btree (system, jahr, rechnungsnr),\n rechnung_kundennr btree (kundennr),\n rechnung_rechnungsdatum btree (rechnungsdatum),\n rechnung_rechnungsnr btree (rechnungsnr)\n\nschulz=> \\d rechnung_zusatz\n Table \"jschulz.rechnung_zusatz\"\n Column | Type | Modifiers \n-------------+----------+-----------\n system | smallint | not null\n jahr | smallint | not null\n rechnungsnr | integer | not null\n objektnr | integer | \nIndexes: rechnung_zusatz_uniq_objektnr unique btree (system, jahr, \nrechnungsnr, objektnr),\n rechnung_zusatz_objektnr btree (objektnr)\nForeign Key constraints: $1 FOREIGN KEY (system, jahr, rechnungsnr) REFERENCES \nrechnung(system, jahr, rechnungsnr) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nschulz=> \n\n\nOn the SuSE machine an explain gives the following:\n\n\nschulz=> explain select system, jahr, rechnungsnr from (rechnung natural left \njoin rechnung_zusatz) where objektnr=1;\n QUERY PLAN \n--------------------------------------------------------------------------------------\n Hash Join (cost=0.00..25.04 rows=1000 width=20)\n Hash Cond: (\"outer\".rechnungsnr = \"inner\".rechnungsnr)\n Join Filter: ((\"outer\".system = \"inner\".system) AND (\"outer\".jahr = \n\"inner\".jahr))\n Filter: (\"inner\".objektnr = 1)\n -> Seq Scan on rechnung (cost=0.00..20.00 rows=1000 width=8)\n -> Hash (cost=0.00..0.00 rows=1 width=12)\n -> Seq Scan on rechnung_zusatz (cost=0.00..0.00 rows=1 width=12)\n(7 rows)\n\nschulz=>\n\n\nOn the Gentoo machine the same explain gives:\n\n\nschulz=> explain select system, jahr, rechnungsnr from (rechnung natural left \njoin rechnung_zusatz) where objektnr=1;\n QUERY PLAN \n---------------------------------------------------------------------------------\n Merge Join (cost=0.00..109.00 rows=1000 width=20)\n Merge Cond: ((\"outer\".system = \"inner\".system) AND (\"outer\".jahr = \n\"inner\".jahr) AND (\"outer\".rechnungsnr = \"inner\".rechnungsnr))\n Filter: (\"inner\".objektnr = 1)\n -> Index Scan using rechnung_pkey on rechnung (cost=0.00..52.00 rows=1000 \nwidth=8)\n -> Index Scan using rechnung_zusatz_uni_objektnr on rechnung_zusatz \n(cost=0.00..52.00 rows=1000 width=12)\n(5 Zeilen)\n\nschulz=>\n\n\nThe select on the SuSE machine finishes in about 3 seconds and on the\nGentoo machine it doesn't seem to come to an end at all. Each table has\nabout 80.000 rows.\n\nI'm not very familar with the output of the explain command but can you\ntell me why I get two different query plans?\n\n\nJörg\n",
"msg_date": "Wed, 23 Jul 2003 16:28:54 +0200",
"msg_from": "=?iso-8859-1?q?J=F6rg=20Schulz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "different query plan for same select"
},
{
"msg_contents": "=?iso-8859-1?q?J=F6rg=20Schulz?= <[email protected]> writes:\n> I'm not very familar with the output of the explain command but can you\n> tell me why I get two different query plans?\n\nJudging from the suspiciously round numbers in the cost estimates,\nyou've never done a VACUUM ANALYZE on any of these tables. Try that and\nthen see what plans you get...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jul 2003 11:21:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: different query plan for same select "
},
{
"msg_contents": "> > I'm not very familar with the output of the explain command but can you\n> > tell me why I get two different query plans?\n>\n> Judging from the suspiciously round numbers in the cost estimates,\n> you've never done a VACUUM ANALYZE on any of these tables. Try that and\n> then see what plans you get...\n\nOops.. You were right!\n\nThank you anyway.\n\nJörg\n",
"msg_date": "Thu, 24 Jul 2003 08:06:22 +0200",
"msg_from": "=?iso-8859-15?q?J=F6rg=20Schulz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: different query plan for same select"
}
] |
[
{
"msg_contents": "On Wed, 2003-07-23 at 10:47, Guthrie, Jeremy wrote:\n\n> Have you checked the sizes of your indexes? You may need to rebuild them...\n> \n> Multiply the relpages colum by 8192.\n\n\nSo, what does this tell me? I'm guessing that you're implying that I\nshould expect 8192 keys per page, and that this therefore indicates the\nsparseness of the key pages. Guessing that, I did:\n\n\nrkh@csb=> SELECT c2.relname, c2.relpages, c2.relpages*8192 as \"*8192\",\n 43413476::real/(c2.relpages*8192) FROM pg_class c, pg_class c2, pg_index i\n where c.oid = i.indrelid AND c2.oid = i.indexrelid and c2.relname~'^p2th|^papro'\n ORDER BY c2.relname;\n\n relname | relpages | *8192 | ?column?\n---------------------------------+----------+------------+--------------------\n p2thread_p2params_id | 122912 | 1006895104 | 0.0431161854174633\n p2thread_pmodel_id | 123243 | 1009606656 | 0.0430003860830331\n paprospect2_redundant_alignment | 229934 | 1883619328 | 0.0230479032332376\n\n\nWhat do you make of 'em apples?\n\nThanks,\nReece\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n\n\n\n\n\n\nOn Wed, 2003-07-23 at 10:47, Guthrie, Jeremy wrote:\n\nHave you checked the sizes of your indexes? You may need to rebuild them...\n\nMultiply the relpages colum by 8192.\n\n\nSo, what does this tell me? I'm guessing that you're implying that I should expect 8192 keys per page, and that this therefore indicates the sparseness of the key pages. Guessing that, I did:\n\nrkh@csb=> SELECT c2.relname, c2.relpages, c2.relpages*8192 as \"*8192\",\n 43413476::real/(c2.relpages*8192) FROM pg_class c, pg_class c2, pg_index i\n where c.oid = i.indrelid AND c2.oid = i.indexrelid and c2.relname~'^p2th|^papro'\n ORDER BY c2.relname;\n\n relname | relpages | *8192 | ?column?\n---------------------------------+----------+------------+--------------------\n p2thread_p2params_id | 122912 | 1006895104 | 0.0431161854174633\n p2thread_pmodel_id | 123243 | 1009606656 | 0.0430003860830331\n paprospect2_redundant_alignment | 229934 | 1883619328 | 0.0230479032332376\n\nWhat do you make of 'em apples?\n\nThanks,\nReece\n\n\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0",
"msg_date": "23 Jul 2003 11:07:10 -0700",
"msg_from": "Reece Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] slow table updates"
}
] |
[
{
"msg_contents": "Look at it like this(this is how this affected me):\nI had a table that use to be the primary home for my data(6 gigs worth). I copied out and copied to another table. I purged and then I 'vacuum full''d the database. After a day things really started going to hell. SLOOOW.. like 30 minutes to run my software versus the 1-5 seconds it normally takes.\n\nThe old table is still used but I use it to queue up data. After the data is processed, it is deleted. Mind you that the repurposed 'queue' table usually has no more than 3000-10000 entries in it. Guess what the index size was..... all told I had 7 gigs of indexes. Why? Because vacuum doesn't reoptimize the indexes. If postgresql can't use a deleted row's index entry, it creates a new one. The docs make it sound that if the difference between the values of the deleted rows vs the new row aren't close, it can't use the old index space. Look in the docs about reindexing to see their explanation. So back to my example, my table should maybe be 100K w/ indexes but it was more like 7 gigs. I re-indexed and BAM! My times were sub-second. \n\nBased on the information you have below, you have 3 gigs worth of indexes. Do you have that much data(in terms of rows)?\n\n\n-----Original Message-----\nFrom:\tReece Hart [mailto:[email protected]]\nSent:\tWed 7/23/2003 1:07 PM\nTo:\tGuthrie, Jeremy\nCc:\[email protected]; [email protected]; SF PostgreSQL\nSubject:\tRE: [PERFORM] slow table updates\nOn Wed, 2003-07-23 at 10:47, Guthrie, Jeremy wrote:\n\n> Have you checked the sizes of your indexes? You may need to rebuild them...\n> \n> Multiply the relpages colum by 8192.\n\n\nSo, what does this tell me? I'm guessing that you're implying that I\nshould expect 8192 keys per page, and that this therefore indicates the\nsparseness of the key pages. Guessing that, I did:\n\n\nrkh@csb=> SELECT c2.relname, c2.relpages, c2.relpages*8192 as \"*8192\",\n 43413476::real/(c2.relpages*8192) FROM pg_class c, pg_class c2, pg_index i\n where c.oid = i.indrelid AND c2.oid = i.indexrelid and c2.relname~'^p2th|^papro'\n ORDER BY c2.relname;\n\n relname | relpages | *8192 | ?column?\n---------------------------------+----------+------------+--------------------\n p2thread_p2params_id | 122912 | 1006895104 | 0.0431161854174633\n p2thread_pmodel_id | 123243 | 1009606656 | 0.0430003860830331\n paprospect2_redundant_alignment | 229934 | 1883619328 | 0.0230479032332376\n\n\nWhat do you make of 'em apples?\n\nThanks,\nReece\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n\n\n",
"msg_date": "Wed, 23 Jul 2003 13:38:03 -0500",
"msg_from": "\"Guthrie, Jeremy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow table updates"
}
] |
[
{
"msg_contents": "On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:\n\n> The questions for this explanation are:\n> 1 - Can we use postgresql within clustered environment?\n> 2 - if the answer is yes, in which method can we use postgresql within a cluster?\n> active - passive or active - active?\n\nCoupled with linux-HA( See http://linux-ha.org) heartbeat service, it *should* \nbe possible to run postgresql in active-passive clustering.\n\nIf postgresql supported read-only database so that several nodes could read off \na single disk but only one could update that, a sort of active-active should be \npossible as well. But postgresql can not have a read only database. That would \nbe a handy addition in such cases..\n \n> Now, the second question is related to the performance of the database. Assuming we have a \n> dell's poweredge 6650 with 4 x 2.8 Ghz Xeon processors having 2 MB of cache for each, with the \n> main memory of lets say 32 GB. We can either use a small SAN from EMC or we can put all disks \n> into the machines with the required raid confiuration.\n> \n> We will install RedHat Advanced Server 2.1 to the machine as the operating system and postgresql as \n> the database server. We have a database having 25 millions records having the length of 250 bytes \n> on average for each record. And there are 1000 operators accessing the database concurrently. The main \n> operation on the database (about 95%) is select rather than insert, so do you have any idea about \n> the performance of the system? \n\nAssumig 325 bytes per tuple(250 bytes field+24-28 byte header+varchar fields) \ngives 25 tuples per 8K page, there would be 8GB of data. This configuration \ncould fly with 12-16GB of RAM. After all data is read that is. You can cut down \non other requirements as well. May be a 2x opteron with 16GB RAMmight be a \nbetter fit but check out how much CPU cache it has.\n\nA grep -rwn across data directory would fill the disk cache pretty well..:-)\n\nHTH\n\nBye\n Shridhar\n\n--\nEgotism, n:\tDoing the New York Times crossword puzzle with a pen.Egotist, n:\tA \nperson of low taste, more interested in himself than me.\t\t-- Ambrose Bierce, \n\"The Devil's Dictionary\"\n\n",
"msg_date": "Thu, 24 Jul 2003 18:39:26 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "hello,\n\nsome of my questions may not be related to this group however, I know that some \nof them are directly related to this list. \n\nfirst of all I would like to learn that, any of you use the postgresql within the \nclustered environment? Or, let me ask you the question, in different manner,\ncan we use postgresql in a cluster environment? If we can do what is the support\nmethod of the postgresql for clusters? \n\nI would like to know two main clustering methods. (let us assume we use 2 machines \nin the clustering system) in the first case we have two machines running in a cluster\nhowever, the second one does not run the database server untill the observation of the \nfailure of the first machine, the oracle guys call this situation as active-passive \nconfiguration. There is only one machine running the database server at the same time. \nHence, in the case of failure there are some time to be waited untill the second \nmachine comes up.\n\nIn the second option both machines run the database server at the same time. Again oracle \nsupports this method using some additional applications called Real Application Cluster (RAC).\nAgain oracle guys call this method as active-active configuration.\n\nThe questions for this explanation are:\n 1 - Can we use postgresql within clustered environment?\n 2 - if the answer is yes, in which method can we use postgresql within a cluster?\n active - passive or active - active?\n\nNow, the second question is related to the performance of the database. Assuming we have a \ndell's poweredge 6650 with 4 x 2.8 Ghz Xeon processors having 2 MB of cache for each, with the \nmain memory of lets say 32 GB. We can either use a small SAN from EMC or we can put all disks \ninto the machines with the required raid confiuration.\n\nWe will install RedHat Advanced Server 2.1 to the machine as the operating system and postgresql as \nthe database server. We have a database having 25 millions records having the length of 250 bytes \non average for each record. And there are 1000 operators accessing the database concurrently. The main \noperation on the database (about 95%) is select rather than insert, so do you have any idea about \nthe performance of the system? \n\nbest regards,\n\n-kas�m\n\n\n",
"msg_date": "Thu, 24 Jul 2003 15:54:52 EEST",
"msg_from": "Kasim Oztoprak <[email protected]>",
"msg_from_op": false,
"msg_subject": "hardware performance and some more"
}
] |
[
{
"msg_contents": "> Now, the second question is related to the performance of the database. Assuming we have a\r\n> dell's poweredge 6650 with 4 x 2.8 Ghz Xeon processors having 2 MB of cache for each, with the\r\n> main memory of lets say 32 GB. We can either use a small SAN from EMC or we can put all disks\r\n> into the machines with the required raid confiuration.\r\n>\r\n> We will install RedHat Advanced Server 2.1 to the machine as the operating system and postgresql as\r\n> the database server. We have a database having 25 millions records having the length of 250 bytes\r\n> on average for each record. And there are 1000 operators accessing the database concurrently. The main\r\n> operation on the database (about 95%) is select rather than insert, so do you have any idea about\r\n> the performance of the system?\r\n\r\nI have a very similar installation: Dell PE6600 with dual 2.0 Xeons/2MB cache, 4 GB memory, 6-disk RAID-10 for data, 2-disk RAID-1 for RH Linux 8. My database has over 60 million records averaging 200 bytes per tuple. I have a large nightly data load, then very complex multi-table join queries all day with a few INSERT transactions. While I do not have 1000 concurrent users (more like 30 for me), my processors and disks seem to be idle the vast majority of the time - this machine is overkill. So I think you will have no problem with your hardware, and could probably easily get away with only two processors. Someday, if you can determine with certainty that the CPU is a bottleneck, drop in the 3rd and 4th processors (and $10,000). And save yourself money on the RAM as well - it's incredibly easy to put in more if you need it. If you really want to spend money, set up the fastest disk arrays you can imagine.\r\n \r\nI cannot emphasize enough: allocate a big chunk of time for tuning your database and learning from this list. I migrated from Microsoft SQL Server. Out of the box PostgreSQL was horrible for me, and even after significant tuning it crawled on certain queries (compared to MSSQL). The list helped me find a data type mismatch in a JOIN clause, and since then the performance of PostgreSQL has blown the doors off of MSSQL. Since I only gave myself a couple days to do tuning before the db had to go in production, I almost had to abandon PostgreSQL and revert to MS. My problems were solved in the nick of time, but I really wish I had made more time for tuning. \r\n \r\nRunning strong in production for 7 months now with PostgreSQL 7.3, and eagerly awaiting 7.4!\r\n \r\nRoman Fail\r\nPOS Portal, Inc.\r\n \r\n \r\n \r\n \r\n \r\n",
"msg_date": "Thu, 24 Jul 2003 08:27:31 -0700",
"msg_from": "\"Roman Fail\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "| first of all I would like to learn that, any of you use the postgresql\n| within the clustered environment? Or, let me ask you the question, in\n| different manner, can we use postgresql in a cluster environment? If\n| we can do what is the support method of the postgresql for clusters?\n\nYou could do active-active but it would require work on your end. I did \na recent check on all the Postgres replication packages and they all \nseem to be single master -> single/many slaves. Updating on more than 1 \nserver looks to be problematic. I run an active-active now but I had to \ndevelop my own custom replication strategy.\n\nAs a background, we develop & host web-based apps that use Postgres as \nthe DB engine. Since our clients access our server over the internet, \nuptime is a big issue. Hence, we have two server farms: one colocated in \nSan Francisco and the other in Sterling, VA. In addition to redudancy, \nwe also wanted to spread the load across the servers. To do this, we \nwent with the expedient method of 1-minute DNS zonemaps where if both \nservers are up, 70% traffic is sent to the faster farm and 30% to the \nother. Both servers are constantly monitored and if one goes down, a new \nzonemap is pushed out listing only the servers that are up.\n\nThe first step in making this work was converting all integer keys to \ncharacter keys. By making keys into characters, we could prepend a \nserver location code so ID 100 generated at SF would not conflict with \nID 100 generated in Sterling. Instead, they would be marked as S00000100 \nand V00000100. Another benefit is the increase of possible key \ncombinations by being able to use alpha characters. (36^(n-1) versus 10^n)\n\nAt this time, the method we use is a periodic sweep of all updated \nrecords. In every table, we add extra fields to mark the date/time the \nrecord was last inserted/updated/deleted. All records touched as of the \nlast resync are extracted, zipped up, pgp-encrypted and then posted on \nan ftp server. Files are then transfered between servers, records \nunpacked and inserted/updated. Some checks are needed to determine what \ntakes precedence if users updated the same record on both servers but \notherwise it's a straightforward process.\n\nAs far as I can tell, the performance impact seems to be minimal. \nThere's a periodic storm of replication updates in cases where there's \nmass updates sync last resync. But if you have mostly reads and few \nwrites, you shouldn't see this situation. The biggest performance impact \nseems to be the CPU power needed to zip/unzip/encrypt/decrypt files.\n\nI'm thinking over strats to get more \"real-time\" replication working. I \nsuppose I could just make the resync program run more often but that's a \nbit inelegant. Perhaps I could capture every update/delete/insert/alter \nstatement from the postgres logs, parsing them out to commands and then \nzipping/encrypting every command as a separate item to be processed. Or \nadd triggers to every table where updated records are pushed to a custom \n\"updated log\".\n\nThe biggest problem is of course locks -- especially at the application \nlevel. I'm still thinking over what to do here.\n\n",
"msg_date": "Thu, 24 Jul 2003 09:42:56 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On 24 Jul 2003 at 9:42, William Yu wrote:\n\n> As far as I can tell, the performance impact seems to be minimal. \n> There's a periodic storm of replication updates in cases where there's \n> mass updates sync last resync. But if you have mostly reads and few \n> writes, you shouldn't see this situation. The biggest performance impact \n> seems to be the CPU power needed to zip/unzip/encrypt/decrypt files.\n\nCan you use WAL based replication? I don't have a URL handy but there are \nreplication projects which transmit WAL files to another server when they fill \nin.\n\nOTOH, I was thinking of a simple replication theme. If postgresql provides a \nhook where it calls an external library routine for each heapinsert in WAL, \nthere could be a simple multi-slave replication system. One doesn't have to \nwait till WAL file fills up.\n\nOf course, it's upto the library to make sure that it does not hold postgresql \ncommits for too long that would hamper the performance.\n\nAlso there would need a receiving hook which would directly heapinsert the data \non another node.\n\nBut if the external library is threaded, will that work well with postgresql?\n\nJust a thought. If it works, load-balancing could be lot easy and near-\nrealtime..\n\n\nBye\n Shridhar\n\n--\nWe fight only when there is no other choice. We prefer the ways ofpeaceful contact.\t\t-- Kirk, \"Spectre of the Gun\", stardate 4385.3\n\n",
"msg_date": "Fri, 25 Jul 2003 19:33:26 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "On 24 Jul 2003 17:08 EEST you wrote:\n\n> On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:\n> \n> > The questions for this explanation are:\n> > 1 - Can we use postgresql within clustered environment?\n> > 2 - if the answer is yes, in which method can we use postgresql within a cluster?\n> > active - passive or active - active?\n> \n> Coupled with linux-HA( See http://linux-ha.org) heartbeat service, it *should* \n> be possible to run postgresql in active-passive clustering.\n> \n> If postgresql supported read-only database so that several nodes could read off \n> a single disk but only one could update that, a sort of active-active should be \n> possible as well. But postgresql can not have a read only database. That would \n> be a handy addition in such cases..\n> \n\nso in the master and slave configuration we can use the system within clustering environment. \n\n> > Now, the second question is related to the performance of the database. Assuming we have a \n> > dell's poweredge 6650 with 4 x 2.8 Ghz Xeon processors having 2 MB of cache for each, with the \n> > main memory of lets say 32 GB. We can either use a small SAN from EMC or we can put all disks \n> > into the machines with the required raid confiuration.\n> > \n> > We will install RedHat Advanced Server 2.1 to the machine as the operating system and postgresql as \n> > the database server. We have a database having 25 millions records having the length of 250 bytes \n> > on average for each record. And there are 1000 operators accessing the database concurrently. The main \n> > operation on the database (about 95%) is select rather than insert, so do you have any idea about \n> > the performance of the system? \n> \n> Assumig 325 bytes per tuple(250 bytes field 24-28 byte header varchar fields) \n> gives 25 tuples per 8K page, there would be 8GB of data. This configuration \n> could fly with 12-16GB of RAM. After all data is read that is. You can cut down \n> on other requirements as well. May be a 2x opteron with 16GB RAMmight be a \n> better fit but check out how much CPU cache it has.\n\nwe do not have memory problem or disk problems. as I have seen in the list the best way to \nuse disks are using raid 10 for data and raid 1 for os. we can put as much memory as \nwe require. \n\nnow the question, if we have 100 searches per second and in each search if we need 30 sql\ninstruction, what will be the performance of the system in the order of time. Let us say\nwe have two machines described aove in a cluster.\n\n\n\n> \n> A grep -rwn across data directory would fill the disk cache pretty well..:-)\n> \n> HTH\n> \n> Bye\n> Shridhar\n> \n> --\n> Egotism, n:\tDoing the New York Times crossword puzzle with a pen.Egotist, n:\tA \n> person of low taste, more interested in himself than me.\t\t-- Ambrose Bierce, \n> \"The Devil's Dictionary\"\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Thu, 24 Jul 2003 18:25:38 EEST",
"msg_from": "Kasim Oztoprak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On Thu, 2003-07-24 at 13:25, Kasim Oztoprak wrote:\n> On 24 Jul 2003 17:08 EEST you wrote:\n> \n> > On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:\n[snip]\n> \n> we do not have memory problem or disk problems. as I have seen in the list the best way to \n> use disks are using raid 10 for data and raid 1 for os. we can put as much memory as \n> we require. \n> \n> now the question, if we have 100 searches per second and in each search if we need 30 sql\n> instruction, what will be the performance of the system in the order of time. Let us say\n> we have two machines described aove in a cluster.\n\nThat's 3000 sql statements per second, 180 thousand per minute!!!!\nWhat the heck is this database doing!!!!!\n\nA quad-CPU Opteron sure is looking useful right about now... Or\nan quad-CPU AlphaServer ES45 running Linux, if 4x Opterons aren't\navailable.\n\nHow complicated are each of these SELECT statements?\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "24 Jul 2003 15:09:14 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "On 24 Jul 2003 18:44 EEST you wrote:\n\n> > Now, the second question is related to the performance of the database. Assuming we have a\n> > dell's poweredge 6650 with 4 x 2.8 Ghz Xeon processors having 2 MB of cache for each, with the\n> > main memory of lets say 32 GB. We can either use a small SAN from EMC or we can put all disks\n> > into the machines with the required raid confiuration.\n> >\n> > We will install RedHat Advanced Server 2.1 to the machine as the operating system and postgresql as\n> > the database server. We have a database having 25 millions records having the length of 250 bytes\n> > on average for each record. And there are 1000 operators accessing the database concurrently. The main\n> > operation on the database (about 95%) is select rather than insert, so do you have any idea about\n> > the performance of the system?\n> \n> I have a very similar installation: Dell PE6600 with dual 2.0 Xeons/2MB cache, 4 GB memory, 6-disk RAID-10 for data, 2-disk RAID-1 for RH Linux 8. My database has over 60 million records averaging 200 bytes per tuple. I have a large nightly data load, then very complex multi-table join queries all day with a few INSERT transactions. While I do not have 1000 concurrent users (more like 30 for me), my processors and disks seem to be idle the vast majority of the time - this machine is overkill. So I think you will have no problem with your hardware, and could probably easily get away with only two processors. Someday, if you can determine with certainty that the CPU is a bottleneck, drop in the 3rd and 4th processors (and $10,000). And save yourself money on the RAM as well - it's incredibly easy to put in more if you need it. If you really want to spend money, set up the fastest disk arrays you can imagine.\n> \n\ni have some time for the production, therefore, i can wait for the beta and production of version 7.4.\nas i have seeen from your comments, you have 30 clients reaching to the database. assuming the maximum number \nof search for each client is 5 then, search per second will be atmost 3. in my case, there will be around \n100 search per second. so the main bothleneck comes from there. \n\nand finally, the rate for the insert operation is about %0.1 (1 in every thousand). I've started to learn\nabout my limitations a few days ago, i would like to learn whether i can solve my problem with postgresql \nor not. \n\n> I cannot emphasize enough: allocate a big chunk of time for tuning your database and learning from this list. I migrated from Microsoft SQL Server. Out of the box PostgreSQL was horrible for me, and even after significant tuning it crawled on certain queries (compared to MSSQL). The list helped me find a data type mismatch in a JOIN clause, and since then the performance of PostgreSQL has blown the doors off of MSSQL. Since I only gave myself a couple days to do tuning before the db had to go in production, I almost had to abandon PostgreSQL and revert to MS. My problems were solved in the nick of time, but I really wish I had made more time for tuning. \n> \n> Running strong in production for 7 months now with PostgreSQL 7.3, and eagerly awaiting 7.4!\n> \n> Roman Fail\n> POS Portal, Inc.\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Thu, 24 Jul 2003 19:06:01 EEST",
"msg_from": "Kasim Oztoprak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "On 25 Jul 2003 at 16:38, Kasim Oztoprak wrote:\n> this is kind of directory assistance application. actually the select statements are not\n> very complex. the database contain 25 million subscriber records and the operators searches \n> for the subscriber numbers or addresses. there are not much update operations actually the \n> update ratio is approximately %0.1 . \n> \n> i will use at least 4 machines each having 4 cpu with the speed of 2.8 ghz xeon processors.\n> and suitable memory capacity with it. \n\nAre you going to duplicate the data?\n\nIf you are going to have 3000 sql statements per second, I would suggest,\n\n1. Get quad CPU. You probably need that horsepower\n2. Use prepared statements and stored procedures to avoid parsing overhead.\n\nI doubt you would need cluster of machines though. If you run it thr. a pilot \nprogram, that would give you an idea whether or not you need a cluster..\n\nBye\n Shridhar\n\n--\nDefault, n.:\tThe hardware's, of course.\n\n",
"msg_date": "Fri, 25 Jul 2003 19:29:08 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On Fri, 2003-07-25 at 11:38, Kasim Oztoprak wrote:\n> On 24 Jul 2003 23:25 EEST you wrote:\n> \n> > On Thu, 2003-07-24 at 13:25, Kasim Oztoprak wrote:\n> > > On 24 Jul 2003 17:08 EEST you wrote:\n> > > \n> > > > On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:\n> > [snip]\n> > > \n> > > we do not have memory problem or disk problems. as I have seen in the list the best way to \n> > > use disks are using raid 10 for data and raid 1 for os. we can put as much memory as \n> > > we require. \n> > > \n> > > now the question, if we have 100 searches per second and in each search if we need 30 sql\n> > > instruction, what will be the performance of the system in the order of time. Let us say\n> > > we have two machines described aove in a cluster.\n> > \n> > That's 3000 sql statements per second, 180 thousand per minute!!!!\n> > What the heck is this database doing!!!!!\n> > \n> > A quad-CPU Opteron sure is looking useful right about now... Or\n> > an quad-CPU AlphaServer ES45 running Linux, if 4x Opterons aren't\n> > available.\n> > \n> > How complicated are each of these SELECT statements?\n> \n> this is kind of directory assistance application. actually the select statements are not\n> very complex. the database contain 25 million subscriber records and the operators searches \n> for the subscriber numbers or addresses. there are not much update operations actually the \n> update ratio is approximately %0.1 . \n> \n> i will use at least 4 machines each having 4 cpu with the speed of 2.8 ghz xeon processors.\n> and suitable memory capacity with it. \n> \n> i hope it will overcome with this problem. any similar implementation?\n\nSince PG doesn't have active-active clustering, that's out, but since\nthe database will be very static, why not have, say 8 machines, each\nwith it's own copy of the database? (Since there are so few updates,\nyou feed the updates to a litle Perl app that then makes the changes\non each machine.) (A round-robin load balancer would do the trick\nin utilizing them all.)\n\nAlso, with lots of machines, you could get away with less expensive\nmachines, say 2GHz CPU, 1GB RAM and a 40GB IDE drive. Then, if one\ngoes down for some reason, you've only lost a small portion of your\ncapacity, and replacing a part will be very inexpensive.\n\nAnd if volume increases, just add more USD1000 machines...\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "25 Jul 2003 09:19:56 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "Folks,\n\n> Since PG doesn't have active-active clustering, that's out, but since\n> the database will be very static, why not have, say 8 machines, each\n> with it's own copy of the database? (Since there are so few updates,\n> you feed the updates to a litle Perl app that then makes the changes\n> on each machine.) (A round-robin load balancer would do the trick\n> in utilizing them all.)\n\nAnother approach I've seen work is to have several servers connect to one SAN \nor NAS where the data lives. Only one server is enabled to handle \"write\" \nrequests; all the rest are read-only. This does mean having dispacting \nmiddleware that parcels out requests among the servers, but works very well \nfor the java-based company that's using it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 25 Jul 2003 09:13:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On 24 Jul 2003 23:25 EEST you wrote:\n\n> On Thu, 2003-07-24 at 13:25, Kasim Oztoprak wrote:\n> > On 24 Jul 2003 17:08 EEST you wrote:\n> > \n> > > On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:\n> [snip]\n> > \n> > we do not have memory problem or disk problems. as I have seen in the list the best way to \n> > use disks are using raid 10 for data and raid 1 for os. we can put as much memory as \n> > we require. \n> > \n> > now the question, if we have 100 searches per second and in each search if we need 30 sql\n> > instruction, what will be the performance of the system in the order of time. Let us say\n> > we have two machines described aove in a cluster.\n> \n> That's 3000 sql statements per second, 180 thousand per minute!!!!\n> What the heck is this database doing!!!!!\n> \n> A quad-CPU Opteron sure is looking useful right about now... Or\n> an quad-CPU AlphaServer ES45 running Linux, if 4x Opterons aren't\n> available.\n> \n> How complicated are each of these SELECT statements?\n\nthis is kind of directory assistance application. actually the select statements are not\nvery complex. the database contain 25 million subscriber records and the operators searches \nfor the subscriber numbers or addresses. there are not much update operations actually the \nupdate ratio is approximately %0.1 . \n\ni will use at least 4 machines each having 4 cpu with the speed of 2.8 ghz xeon processors.\nand suitable memory capacity with it. \n\ni hope it will overcome with this problem. any similar implementation?\n\n> \n> -- \n> ----------------------------------------------------------------- \n> | Ron Johnson, Jr. Home: [email protected] |\n> | Jefferson, LA USA |\n> | |\n> | \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n> | because I hate vegetables!\" |\n> | unknown |\n> ----------------------------------------------------------------- \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Fri, 25 Jul 2003 16:38:31 EEST",
"msg_from": "Kasim Oztoprak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On Fri, 2003-07-25 at 11:13, Josh Berkus wrote:\n> Folks,\n> \n> > Since PG doesn't have active-active clustering, that's out, but since\n> > the database will be very static, why not have, say 8 machines, each\n> > with it's own copy of the database? (Since there are so few updates,\n> > you feed the updates to a litle Perl app that then makes the changes\n> > on each machine.) (A round-robin load balancer would do the trick\n> > in utilizing them all.)\n> \n> Another approach I've seen work is to have several servers connect to one SAN \n> or NAS where the data lives. Only one server is enabled to handle \"write\" \n> requests; all the rest are read-only. This does mean having dispacting \n> middleware that parcels out requests among the servers, but works very well \n> for the java-based company that's using it.\n\nWouldn't the cache on the read-only databases get out of sync with\nthe true on-disk data?\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "25 Jul 2003 12:12:56 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "On 25 Jul 2003 at 18:41, Kasim Oztoprak wrote:\n> what exactly do you mean from a pilot program?\n\nLike get a quad CPU box, load the data and ask only 10 operators to test the \nsystem..\n\nBeta testing basically..\n\nBye\n Shridhar\n\n--\nThe man on tops walks a lonely street; the \"chain\" of command is often a noose.\n\n",
"msg_date": "Fri, 25 Jul 2003 21:01:44 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware performance and some more"
},
{
"msg_contents": "On 25 Jul 2003 17:13 EEST you wrote:\n\n> On 25 Jul 2003 at 16:38, Kasim Oztoprak wrote:\n> > this is kind of directory assistance application. actually the select statements are not\n> > very complex. the database contain 25 million subscriber records and the operators searches \n> > for the subscriber numbers or addresses. there are not much update operations actually the \n> > update ratio is approximately %0.1 . \n> > \n> > i will use at least 4 machines each having 4 cpu with the speed of 2.8 ghz xeon processors.\n> > and suitable memory capacity with it. \n> \n> Are you going to duplicate the data?\n> \n> If you are going to have 3000 sql statements per second, I would suggest,\n> \n> 1. Get quad CPU. You probably need that horsepower\n> 2. Use prepared statements and stored procedures to avoid parsing overhead.\n> \n> I doubt you would need cluster of machines though. If you run it thr. a pilot \n> program, that would give you an idea whether or not you need a cluster..\n> \n> Bye\n> Shridhar\n>\n\ni will try to cluster them. i can duplicate the data if i need. in the case of \nupdate, then, i will fix them through. \n\nwhat exactly do you mean from a pilot program?\n\n-kas�m \n> --\n> Default, n.:\tThe hardware's, of course.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Fri, 25 Jul 2003 18:41:55 EEST",
"msg_from": "Kasim Oztoprak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware performance and some more"
}
] |
[
{
"msg_contents": "Hi everyone.\n\nI have created a simplified example of a real case, to show you what I'm\ntryng to do. I have\na table, like this:\n\nCREATE TABLE sales (\n saleId SERIAL,\n clientId INTEGER,\n branchId INTEGER,\n productId INTEGER,\n employeeId INTEGER,\n saleDate DATE,\n price NUMERIC(12, 2),\n qty INTEGER,\n PRIMARY KEY(saleId)\n);\nCREATE INDEX sales_k1 ON sales(clientId, branchId, productId,\nemployeeId, saleDate, price, qty);\n\nThis table will grow to *many* rows in the future.\n\nI want to make a function that returns the FIRS saleId of the sale that\nmatches some conditions. I will\nalways receive the Client Id, but not always the other arguments (sent\nas NULLs).\n\nThe fetched resultset shoud prioritize the passed arguments, and after\nthat, the saleDate, price\nand quantity.\n\n\n/**\n * Finds the first sale that matches the conditions received.\n * @param $1 Client Id.\n * @param $2 Preferred Branch Id.\n * @param $3 Preferred Product Id.\n * @param $4 Preferred Employee Id.\n * @return Sale Id if found, NULL if not.\n */\nCREATE OR REPLACE FUNCTION findSale(INTEGER, INTEGER, INTEGER, INTEGER)\nRETURNS INTEGER AS '\nDECLARE\n a_clientId ALIAS FOR $1;\n a_branchId ALIAS FOR $1;\n a_productId ALIAS FOR $1;\n a_employeeId ALIAS FOR $1;\n r_result INTEGER;\nBEGIN\n SELECT\n INTO r_result employeeId\n FROM\n sales\n WHERE\n clientId=a_clientId AND\n branchId=coalesce(a_branchId, branchId) AND /*branchId is null?\nanything will be ok*/\n productId=coalesce(a_productId, productId) AND /*productId is\nnull? anything will be ok*/\n employeeId=coalesce(a_employeeId, employeeId) /*employeeId is\nnull? anything will be ok*/\n ORDER BY\n clientId, branchId, productId, employeeId, saleDate, price, qty\n LIMIT 1;\n\n RETURN r_result;\nEND;\n' LANGUAGE 'plpgsql';\n\n\nWill findSale() in the future, when I have *many* rows still use the\nindex when only the first couple of\narguments are passed to the function?\nIf not, should I create more indexes (and functions) for each possible\nargument combination? (of course, with\nthe given order)\n\nThe thing here is that I don't understand how postgreSQL solves the\nquery when the COALESCEs are used... it uses\nthe index now, with a few thowsand records, but what will happen in a\nfew months?\n\nThanks in advance.",
"msg_date": "25 Jul 2003 13:52:43 -0300",
"msg_from": "Franco Bruno Borghesi <[email protected]>",
"msg_from_op": true,
"msg_subject": "index questions"
},
{
"msg_contents": "On Fri, 2003-07-25 at 11:52, Franco Bruno Borghesi wrote:\n[snip]\n> \n> \n> Will findSale() in the future, when I have *many* rows still use the\n> index when only the first couple of\n> arguments are passed to the function?\n> If not, should I create more indexes (and functions) for each possible\n> argument combination? (of course, with\n> the given order)\n> \n> The thing here is that I don't understand how postgreSQL solves the\n> query when the COALESCEs are used... it uses\n> the index now, with a few thowsand records, but what will happen in a\n> few months?\n\nWhen faced with cases like this, I cobble together a script/program\nthat generates a few million rows of random data (within the confines\nof FKs, of course) to populate these tables like \"sales\", and then I\nsee how things perform.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "25 Jul 2003 12:21:25 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index questions"
},
{
"msg_contents": "Franco,\n\n> CREATE INDEX sales_k1 ON sales(clientId, branchId, productId,\n> employeeId, saleDate, price, qty);\n\nA 7-column index is unlikely to be effective -- the index will be almost as \nlarge as the table. Try indexing only the first 3-4 columns instead. \n\n> I want to make a function that returns the FIRS saleId of the sale that\n> matches some conditions. I will\n> always receive the Client Id, but not always the other arguments (sent\n> as NULLs).\n\nWell, keep in mind that your multi-column index will only be useful if all \ncolumns are queried starting from the left. That is, the index will be \nignored if you have a \"where productId = x\" without a \"where branchid = y\".\n\n> CREATE OR REPLACE FUNCTION findSale(INTEGER, INTEGER, INTEGER, INTEGER)\n> RETURNS INTEGER AS '\n> DECLARE\n> a_clientId ALIAS FOR $1;\n> a_branchId ALIAS FOR $1;\n> a_productId ALIAS FOR $1;\n> a_employeeId ALIAS FOR $1;\n\nYour aliases are wrong here.\n\n> branchId=coalesce(a_branchId, branchId) AND /*branchId is null?\n> anything will be ok*/\n> productId=coalesce(a_productId, productId) AND /*productId is\n> null? anything will be ok*/\n\nOn a very large table this will be very inefficient. you'll be comparing the \nproductid, for example, even if no productid is passed ... and the index \nwon't do you any good because the planner should figure out that 100% of rows \nmatch the condition. \n\nInstead, I recommend that you build up a dynamic query as a string and then \npass only the conditions sent by the user. You can then EXECUTE the query \nand loop through it for a result.\n\nOf course, YMMV. My approach will require you to create more indexes which \ncould be a problem if you have limited disk space.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 25 Jul 2003 10:28:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index questions"
},
{
"msg_contents": "what you say is that the index is not effective because of its size, but\nit would still be used *if* the conditions are right... In this case, I\ncare about performance, not space. \n\nBut what you say about the index not being good because 100% of rows\nmatch the condition confirms what I suspected. \n\nThanks for your help.\n\n\nOn Fri, 2003-07-25 at 14:28, Josh Berkus wrote:\n\n> Franco,\n> \n> > CREATE INDEX sales_k1 ON sales(clientId, branchId, productId,\n> > employeeId, saleDate, price, qty);\n> \n> A 7-column index is unlikely to be effective -- the index will be almost as \n> large as the table. Try indexing only the first 3-4 columns instead. \n> \n> > I want to make a function that returns the FIRS saleId of the sale that\n> > matches some conditions. I will\n> > always receive the Client Id, but not always the other arguments (sent\n> > as NULLs).\n> \n> Well, keep in mind that your multi-column index will only be useful if all \n> columns are queried starting from the left. That is, the index will be \n> ignored if you have a \"where productId = x\" without a \"where branchid = y\".\n> \n> > CREATE OR REPLACE FUNCTION findSale(INTEGER, INTEGER, INTEGER, INTEGER)\n> > RETURNS INTEGER AS '\n> > DECLARE\n> > a_clientId ALIAS FOR $1;\n> > a_branchId ALIAS FOR $1;\n> > a_productId ALIAS FOR $1;\n> > a_employeeId ALIAS FOR $1;\n> \n> Your aliases are wrong here.\n> \n> > branchId=coalesce(a_branchId, branchId) AND /*branchId is null?\n> > anything will be ok*/\n> > productId=coalesce(a_productId, productId) AND /*productId is\n> > null? anything will be ok*/\n> \n> On a very large table this will be very inefficient. you'll be comparing the \n> productid, for example, even if no productid is passed ... and the index \n> won't do you any good because the planner should figure out that 100% of rows \n> match the condition. \n> \n> Instead, I recommend that you build up a dynamic query as a string and then \n> pass only the conditions sent by the user. You can then EXECUTE the query \n> and loop through it for a result.\n> \n> Of course, YMMV. My approach will require you to create more indexes which \n> could be a problem if you have limited disk space.",
"msg_date": "25 Jul 2003 15:33:54 -0300",
"msg_from": "Franco Bruno Borghesi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index questions"
}
] |
[
{
"msg_contents": "Hi all,\nlast week Josh Berkus point my attenction\n( see post Wrong plan or what )\nto the\nfact that in this select:\n\nselect *\nfrom user_logs ul,\n user_data ud,\n class_default cd\nwhere\n ul.id_user = ud.id_user and\n ud.id_class = cd.id_class and\n cd.id_provider = 39;\n\n\nThe planner say:\nQUERY PLAN\n Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual\ntime=11074.21..11134.28 rows=10 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965 width=48)\n(actual time=0.02..8530.21 rows=1258966 loops=1)\n -> Hash (cost=264.81..264.81 rows=331 width=215) (actual\ntime=30.22..30.22 rows=0 loops=1)\n -> Nested Loop (cost=0.00..264.81 rows=331 width=215) (actual\ntime=29.95..30.20 rows=6 loops=1)\n -> Seq Scan on class_default cd (cost=0.00..1.39 rows=1\nwidth=55) (actual time=0.08..0.10 rows=1 loops=1)\n Filter: (id_provider = 39)\n -> Index Scan using idx_user_data_class on user_data ud\n(cost=0.00..258.49 rows=395 width=160) (actual time=29.82..29.96 rows=6\nloops=1)\n Index Cond: (ud.id_class = \"outer\".id_class)\n Total runtime: 11135.65 msec\n(10 rows)\n\n\nand the quantity reported in:\nHash Join (cost=265.64..32000.76 rows=40612 width=263) (actual\ntime=11074.21..11134.28 rows=10 loops=1)\n\nis wrong about the rows returned,\nI did what Josh Berkus suggeted me:\n\n1) Make sure you've VACUUM ANALYZED\n2) Adjust the following postgresql.conf statistics:\na) effective_cache_size: increase to 70% of available (not used by other\nprocesses) RAM.\nb) random_page_cost: decrease, maybe to 2.\nc) default_statistics_target: try increasing to 100\n(warning: this will significantly increase the time required to do ANALYZE)\n\n\nI pushed also default_statistics_target to 1000 but the plan remain the same\nwith an execution\nof 11 secs but If I do the followin 3 equivalent query I obatin the same\nresult in olny fews ms:\n\n\nSELECT id_class from class_default where id_provider = 39;\n id_class\n----------\n 48\n(1 row)\n\nSELECT id_user from user_data where id_class in ( 48 );\n id_user\n---------\n 10943\n 10942\n 10934\n 10927\n 10910\n 10909\n(6 rows)\n\n\nSELECT * from user_logs where id_user in (\n 10943, 10942, 10934, 10927, 10910, 10909\n);\n\n\n\nMay I do something else ?\n\n\nThank you in advance\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 26 Jul 2003 02:44:13 +0200",
"msg_from": "\"Mendola Gaetano\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong rows number expected"
}
] |
[
{
"msg_contents": "SUBSCRIBE\n\n",
"msg_date": "Sat, 26 Jul 2003 14:52:37 +0500",
"msg_from": "Rauf Kuliyev <[email protected]>",
"msg_from_op": true,
"msg_subject": "SUBSCRIBE"
}
] |
[
{
"msg_contents": "Hallo pgsql-performance,\n\nI just wondered if there is a possibility to map my database running\non a linux system completly into memory and to only use disk\naccesses for writes.\n\nI got a nice machine around with 2 gigs of ram, and my database at\nthe moment uses about 30MB on the disks.\n\nOr does Postgresql do this automtatically, with some cache adjusting\nparameters, and after doing a select * from <everything> on my\ndatabase?\n\nThank you and ciao,\n Mig-O\n\n\n",
"msg_date": "Sun, 27 Jul 2003 10:40:09 +0200",
"msg_from": "Daniel Migowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mapping Database completly into memory"
},
{
"msg_contents": "On Sun, 27 Jul 2003, Daniel Migowski wrote:\n\n> Hallo pgsql-performance,\n> \n> I just wondered if there is a possibility to map my database running\n> on a linux system completly into memory and to only use disk\n> accesses for writes.\n> \n> I got a nice machine around with 2 gigs of ram, and my database at\n> the moment uses about 30MB on the disks.\n> \n> Or does Postgresql do this automtatically, with some cache adjusting\n> parameters, and after doing a select * from <everything> on my\n> database?\n\nAre you looking at a read only type database thing here? It's generally \nconsidered bad practice to run databases from memory only, since a loss of \npower results in a loss of all data.\n\nPostgresql and whatever OS it runs on can usually cache an entire 30 meg \ndata set in memory easily. You'll need to crank up shared buffers a bit \n(1000 shared buffers is 8 megs, so 5000 should be enough to cache the \nwhole thing (~40 megs). Also, be sure and crank up your \neffective_cache_size so the planner knows the kernel has lots of space for \ncaching data and favors index scans.\n\n",
"msg_date": "Mon, 28 Jul 2003 11:58:11 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping Database completly into memory"
}
] |
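
Written out as postgresql.conf entries, Scott's suggestion comes to roughly the following; both settings are counted in 8 kB pages, and the 1.5 GB figure is only an illustration for a 2 GB machine that is doing little else:

    shared_buffers = 5000          # ~40 MB, enough to hold the whole ~30 MB database
    effective_cache_size = 196608  # ~1.5 GB assumed free for the kernel's disk cache
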
[
{
"msg_contents": "Hallo pgsql-performance,\n\nI just wondered if there is a possibility to map my database running\non a linux system completly into memory and to only use disk\naccesses for writes.\n\nI got a nice machine around with 2 gigs of ram, and my database at\nthe moment uses about 30MB on the disks.\n\nOr does Postgresql do this automtatically, with some cache adjusting\nparameters, and after doing a select * from <everything> on my\ndatabase?\n\nThank you and ciao,\n Mig-O\n\n\n",
"msg_date": "Sun, 27 Jul 2003 10:49:01 +0200",
"msg_from": "Daniel Migowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mapping a database completly into Memory"
},
{
"msg_contents": "Daniel Migowski <[email protected]> writes:\n> I just wondered if there is a possibility to map my database running\n> on a linux system completly into memory and to only use disk\n> accesses for writes.\n\nThat happens for free, if you have enough RAM. The kernel will use\nspare RAM to hold copies of every disk block it's ever read.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Jul 2003 12:18:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory "
},
{
"msg_contents": "Daniel,\n\n> > I just wondered if there is a possibility to map my database running\n> > on a linux system completly into memory and to only use disk\n> > accesses for writes.\n>\n> That happens for free, if you have enough RAM. The kernel will use\n> spare RAM to hold copies of every disk block it's ever read.\n\nAlso, don't forget to raise your effective_cache_size so that PostgreSQL \n*knows* that you have lots of RAM.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 27 Jul 2003 21:14:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "wouldn't also increasing shared_buffers to 64 or 128 MB be a good\nperformance improvement? This way, pages belonging to heavily used\nindexes would be already cached by the database itself.\n\nPlease, correct me if I'm wrong.\n\nOn Mon, 2003-07-28 at 01:14, Josh Berkus wrote:\n\n> Daniel,\n> \n> > > I just wondered if there is a possibility to map my database running\n> > > on a linux system completly into memory and to only use disk\n> > > accesses for writes.\n> >\n> > That happens for free, if you have enough RAM. The kernel will use\n> > spare RAM to hold copies of every disk block it's ever read.\n> \n> Also, don't forget to raise your effective_cache_size so that PostgreSQL \n> *knows* that you have lots of RAM.",
"msg_date": "28 Jul 2003 11:16:55 -0300",
"msg_from": "Franco Bruno Borghesi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "Franco Bruno Borghesi <[email protected]> writes:\n> wouldn't also increasing shared_buffers to 64 or 128 MB be a good\n> performance improvement? This way, pages belonging to heavily used\n> indexes would be already cached by the database itself.\n\nNot necessarily. The trouble with large shared_buffers settings is you\nend up with lots of pages being doubly cached (both in PG's buffers and\nin the kernel's disk cache), thus wasting RAM. If we had a portable way\nof preventing the kernel from caching the same page, it would make more\nsense to run with large shared_buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jul 2003 12:25:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory "
},
{
"msg_contents": "Tom,\n\n> If we had a portable way\n> of preventing the kernel from caching the same page, it would make more\n> sense to run with large shared_buffers.\n\nReally? I thought we wanted to move the other way ... that is, if we could \nget over the portability issues, eliminate shared_buffers entirely and rely \ncompletely on the OS cache.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 28 Jul 2003 09:50:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> If we had a portable way\n>> of preventing the kernel from caching the same page, it would make more\n>> sense to run with large shared_buffers.\n\n> Really? I thought we wanted to move the other way ... that is, if we could \n> get over the portability issues, eliminate shared_buffers entirely and rely \n> completely on the OS cache.\n\nThat seems unlikely to happen: there are cache-coherency problems if you\ndon't do your page-level access through shared buffers. Some have\nsuggested using mmap access to the data files in place of shared memory,\nbut that introduces a slew of issues of its own. It might happen but\nI'm not holding my breath.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jul 2003 12:58:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory "
},
{
"msg_contents": "But I think it's still a good option. \n\nFor example, in servers where there are other applications running (a\nweb server, for example) that are constantly accesing the disk and\nreplacing cached postgresql pages in the kernel, having shared buffers\ncould reduce this efect and assure the precense of our pages in\nmemory... I gues :)\n\nOn Mon, 2003-07-28 at 13:50, Josh Berkus wrote:\n\n> Tom,\n> \n> > If we had a portable way\n> > of preventing the kernel from caching the same page, it would make more\n> > sense to run with large shared_buffers.\n> \n> Really? I thought we wanted to move the other way ... that is, if we could \n> get over the portability issues, eliminate shared_buffers entirely and rely \n> completely on the OS cache.",
"msg_date": "28 Jul 2003 14:48:02 -0300",
"msg_from": "Franco Bruno Borghesi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "On Mon, Jul 28, 2003 at 12:25:57PM -0400, Tom Lane wrote:\n> in the kernel's disk cache), thus wasting RAM. If we had a portable way\n> of preventing the kernel from caching the same page, it would make more\n> sense to run with large shared_buffers.\n\nPlus, Postgres seems not to be very good at managing very large\nbuffer sets.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 28 Jul 2003 14:20:33 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\nTL> Franco Bruno Borghesi <[email protected]> writes:\n>> wouldn't also increasing shared_buffers to 64 or 128 MB be a good\n>> performance improvement? This way, pages belonging to heavily used\n>> indexes would be already cached by the database itself.\n\nTL> Not necessarily. The trouble with large shared_buffers settings is you\nTL> end up with lots of pages being doubly cached (both in PG's buffers and\n\nI think if you do a lot of inserting/updating to your table, then more\nSHM is better (and very high fsm settings), since you defer pushing\nout the dirty pages to the disk. For read-mostly, I agree that\nletting the OS do the caching is a better way.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 29 Jul 2003 11:22:40 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "\nI think it all depends on your working set. Having shared memory be\nsmaller than you working set causes pages to have to be copied in from\nthe kernel buffers (not a huge problem, but a small penalty), while\nhaving shared memory larger than the working set causes overhead of\nsearching through all those buffers.\n\n---------------------------------------------------------------------------\n\nVivek Khera wrote:\n> >>>>> \"TL\" == Tom Lane <[email protected]> writes:\n> \n> TL> Franco Bruno Borghesi <[email protected]> writes:\n> >> wouldn't also increasing shared_buffers to 64 or 128 MB be a good\n> >> performance improvement? This way, pages belonging to heavily used\n> >> indexes would be already cached by the database itself.\n> \n> TL> Not necessarily. The trouble with large shared_buffers settings is you\n> TL> end up with lots of pages being doubly cached (both in PG's buffers and\n> \n> I think if you do a lot of inserting/updating to your table, then more\n> SHM is better (and very high fsm settings), since you defer pushing\n> out the dirty pages to the disk. For read-mostly, I agree that\n> letting the OS do the caching is a better way.\n> \n> \n> -- \n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-240-453-8497\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 30 Jul 2003 18:10:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": "\nYou make an interesting distinction that read/write needs more shared\nmemory. I think this is because if you want to reused a read-only\nshared buffer, you can just throw away the contents, while a dirty\nbuffer requires you to write it into the kernel before you can use it.\n\n---------------------------------------------------------------------------\n\nVivek Khera wrote:\n> >>>>> \"TL\" == Tom Lane <[email protected]> writes:\n> \n> TL> Franco Bruno Borghesi <[email protected]> writes:\n> >> wouldn't also increasing shared_buffers to 64 or 128 MB be a good\n> >> performance improvement? This way, pages belonging to heavily used\n> >> indexes would be already cached by the database itself.\n> \n> TL> Not necessarily. The trouble with large shared_buffers settings is you\n> TL> end up with lots of pages being doubly cached (both in PG's buffers and\n> \n> I think if you do a lot of inserting/updating to your table, then more\n> SHM is better (and very high fsm settings), since you defer pushing\n> out the dirty pages to the disk. For read-mostly, I agree that\n> letting the OS do the caching is a better way.\n> \n> \n> -- \n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-240-453-8497\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 30 Jul 2003 18:12:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\nBM> I think it all depends on your working set. Having shared memory be\nBM> smaller than you working set causes pages to have to be copied in from\nBM> the kernel buffers (not a huge problem, but a small penalty), while\nBM> having shared memory larger than the working set causes overhead of\nBM> searching through all those buffers.\n\ni.e., It is a black art, and no single piece of advice can be taken in\nisolation ;-(\n\n",
"msg_date": "Thu, 31 Jul 2003 10:15:04 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping a database completly into Memory"
}
] |
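
One rough way to write down the trade-off Vivek and Bruce describe, with purely illustrative numbers for a 2 GB machine (nothing in the thread prescribes these exact values):

    # read-mostly: keep PostgreSQL's own pool small and let the kernel cache do the work
    shared_buffers = 4096            # 32 MB
    effective_cache_size = 163840    # ~1.25 GB, in 8 kB pages

    # update-heavy: more room for dirty pages and free-space tracking
    shared_buffers = 16384           # 128 MB
    max_fsm_pages = 100000
    effective_cache_size = 131072    # ~1 GB
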
[
{
"msg_contents": "Hi ,\n\nI am working to migrate a oracle application to pg.\nI am using pg 7.3.3 on Dual PIII, 2 GB RAM,linux RedHat 7.3.\n\nOne of my selects is working much slower as in oracle.\nIn this example I am using not many rows in tables.\nFor all the joins I have indexes.\nAll IDS or IDS_xxx are name.\n\nPls if it is possible poit me how to fix this problem.\n\nI send the query and the explai analyze.\nI have ran vacuum analyze full on db.\n\nMany thanks,\nivan.\n\n explain analyze select O.IDS as oids,O.IDS_MDL_MDF_VOL as\nids_mmv,M.MNAME AS MODDELNAME,M.KOD AS MODELKOD,O.IDS_COLOR,COL.MNAME\nAS COLORNAME,COL.KOD AS COLORKOD, TT.IDS AS LOCIDS,TT.MNAME AS LOCNAME,\nTT.KOD AS LOC_KOD ,O.IDS_DOSTAV,DST.MNAME AS DOSTAVNAME,\nO.IDS_PROIZV,PRZ.MNAME as\nPROIZVNAME,O.CHASSI,O.CHASSI_ACC,O.DVIGATEL,O.ORDER_NUM,O.ORDER_DATE,O.DOG_OR_FREE,\nO.NALICHEN,O.DATE_PROIZV, O.DATE_IN,O.ALI,O.DATE_ALI,\nO.PRICE_PAY,O.PRICE_PAY_VAL,\nO.START_DATE,O.DAYS,O.END_DATE,O.COMENTAR,O.IDS_AUTOVOZ,AWT.MNAME AS\nAUTOVNAME,\nO.SVERKA,O.NEW_OLD,O.KM,O.START_DATE_REZ,O.END_DATE_REZ,O.IDS_SLUJITEL,SLU.KOD,NULL\nAS CT_IDS, NULL AS C_NUM, O.DATE_ALI2, NULL AS C_STATE, 0 AS DAMAGE,\nO.REG_NUMBER AS CARREGNUMBER,O.DATE_REG AS CARREGDATE,O.GARTYPE,2002 AS\nGODINA,O.COMENTAR1, O.IDS_COMBOPT,CB.KOD AS\nIDS_COMBOPT_KOD,O.REF_BG,O.DAM,O.OBEM, O.IDS_TAPICERII,TAP.KOD AS\nIDS_TAPICERII_KOD,TAP.MNAME AS\nIDS_TAPICERII_NAME,O.PAPKA_N,O.CEDMICAPR, O.RADIO_KOD AS\nRADIO_KOD,O.KEY_KOD AS KEY_KOD,O.ALARM_KOD AS ALARM_KOD,O.BOLT_KOD AS\nBOLT_KOD,M.MOST_PS, NULL AS IDS_KLIENT , NULL AS KlientName ,O.TALON_N\nAS talonN,O.STATEMODIFY AS STATEMOD,O.MESTA AS MESTA,O.CENA_COLOR AS\nCENA_COL,O.CENA_TAP AS CENA_TAP,M.CENA_PROD AS\nMCENA_PROD,M.CENA_PROD_VAL AS\nMCENA_PROD_VAL,O.CENA_MDL,O.MESTA_MDL,O.CENA_COLOR_VAL,O.CENA_TAP_VAL,O.CENA_MDL_VAL,O.VIRTUALEN,M.IDS_GRUPA,COL.MNAME_1\nAS COLMNAME1,O.DATE_PLAN_P,O.KM_PLAN_P from A_COLORS COL, A_MDL_MDF_VOL\nM ,A_LOCATIONS TT, A_CARS O left outer join A_SLUJITELI SLU\nON(O.IDS_SLUJITEL=SLU.IDS) left outer join A_AUTOVOZ AWT\nON(O.IDS_AUTOVOZ=AWT.IDS) left outer join A_COMBOPT CB\nON(O.IDS_COMBOPT=CB.IDS) left outer join A_TAPICERII TAP\nON(O.IDS_TAPICERII=TAP.IDS) left outer join A_KLIENTI DST ON(\nO.IDS_DOSTAV=DST.IDS) left outer join A_KLIENTI PRZ ON( O.IDS_PROIZV =\nPRZ.IDS) ,A_CH_CAR CHT WHERE O.IDS_LOCATION=TT.IDS AND\nO.IDS_MDL_MDF_VOL=M.IDS AND O.IDS_COLOR=COL.IDS AND CHT.IDS=O.IDS AND\nCHT.INSTIME=1059300812726 AND CHT.SES=1059300377005 and O.DOG_OR_FREE\nIN(0,2,3) ;\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Hash Join (cost=138.54..142.57 rows=2 width=2051) (actual\ntime=286.17..286.29 rows=2 loops=1)\n Hash Cond: (\"outer\".ids_location = \"inner\".ids)\n -> Hash Join (cost=137.42..141.40 rows=2 width=1971) (actual\ntime=285.95..286.02 rows=2 loops=1)\n Hash Cond: (\"outer\".ids = \"inner\".ids_color)\n -> Seq Scan on a_colors col (cost=0.00..3.12 rows=112\nwidth=101) (actual time=0.01..0.30 rows=112 loops=1)\n -> Hash (cost=137.41..137.41 rows=2 width=1870) (actual\ntime=285.43..285.43 rows=0 loops=1)\n -> Hash Join (cost=134.88..137.41 rows=2 width=1870)\n(actual time=285.12..285.42 rows=2 loops=1)\n Hash Cond: (\"outer\".ids = \"inner\".ids_mdl_mdf_vol)\n -> Seq Scan on a_mdl_mdf_vol m (cost=0.00..2.34\nrows=34 width=189) (actual time=0.03..0.21 rows=34 loops=1)\n -> Hash (cost=134.88..134.88 rows=2 
width=1681)\n(actual time=284.98..284.98 rows=0 loops=1)\n -> Hash Join (cost=10.76..134.88 rows=2\nwidth=1681) (actual time=189.62..284.97 rows=2 loops=1)\n Hash Cond: (\"outer\".ids = \"inner\".ids)\n -> Hash Join (cost=9.73..128.72\nrows=1019 width=1617) (actual time=1.58..283.39 rows=1023 loops=1)\n Hash Cond: (\"outer\".ids_proizv =\n\"inner\".ids)\n -> Hash Join (cost=7.50..108.66\nrows=1019 width=1545) (actual time=1.34..234.05 rows=1023 loops=1)\n Hash Cond:\n(\"outer\".ids_dostav = \"inner\".ids)\n -> Hash Join\n(cost=5.28..88.60 rows=1019 width=1473) (actual time=1.12..188.41\nrows=1023 loops=1)\n Hash Cond:\n(\"outer\".ids_tapicerii = \"inner\".ids)\n -> Hash Join\n(cost=2.40..67.89 rows=1019 width=1372) (actual time=0.68..145.58\nrows=1023 loops=1)\n Hash Cond:\n(\"outer\".ids_combopt = \"inner\".ids)\n -> Hash Join\n(cost=1.09..46.19 rows=1019 width=1301) (actual time=0.45..106.88\nrows=1023 loops=1)\n Hash\nCond: (\"outer\".ids_autovoz = \"inner\".ids)\n -> Hash\nJoin (cost=1.09..41.03 rows=1019 width=1189) (actual time=0.31..72.28\nrows=1023 loops=1)\n\nHash Cond: (\"outer\".ids_slujitel = \"inner\".ids)\n ->\nIndex Scan using i_cars_dog_or_free on a_cars o (cost=0.00..22.11\nrows=1019 width=1119) (actual time=0.12..37.41 rows=1023 loops=1)\n\nFilter: ((dog_or_free = 0) OR (dog_or_free = 2) OR (dog_or_free = 3))\n ->\nHash (cost=1.07..1.07 rows=7 width=70) (actual time=0.04..0.04 rows=0\nloops=1)\n\n-> Seq Scan on a_slujiteli slu (cost=0.00..1.07 rows=7 width=70)\n(actual time=0.01..0.03 rows=7 loops=1)\n -> Hash\n(cost=0.00..0.00 rows=1 width=112) (actual time=0.00..0.00 rows=0\nloops=1)\n ->\nSeq Scan on a_autovoz awt (cost=0.00..0.00 rows=1 width=112) (actual\ntime=0.00..0.00 rows=0 loops=1)\n -> Hash\n(cost=1.25..1.25 rows=25 width=71) (actual time=0.09..0.09 rows=0\nloops=1)\n -> Seq\nScan on a_combopt cb (cost=0.00..1.25 rows=25 width=71) (actual\ntime=0.01..0.06 rows=25 loops=1)\n -> Hash\n(cost=2.70..2.70 rows=70 width=101) (actual time=0.29..0.29 rows=0\nloops=1)\n -> Seq Scan on\na_tapicerii tap (cost=0.00..2.70 rows=70 width=101) (actual\ntime=0.01..0.17 rows=70 loops=1)\n -> Hash (cost=2.18..2.18\nrows=18 width=72) (actual time=0.06..0.06 rows=0 loops=1)\n -> Seq Scan on\na_klienti dst (cost=0.00..2.18 rows=18 width=72) (actual\ntime=0.01..0.03 rows=18 loops=1)\n -> Hash (cost=2.18..2.18\nrows=18 width=72) (actual time=0.07..0.07 rows=0 loops=1)\n -> Seq Scan on a_klienti\nprz (cost=0.00..2.18 rows=18 width=72) (actual time=0.01..0.05 rows=18\nloops=1)\n -> Hash (cost=1.03..1.03 rows=2\nwidth=64) (actual time=0.03..0.03 rows=0 loops=1)\n -> Seq Scan on a_ch_car cht\n(cost=0.00..1.03 rows=2 width=64) (actual time=0.02..0.03 rows=2\nloops=1)\n Filter: ((instime =\n1059300812726::bigint) AND (ses = 1059300377005::bigint))\n -> Hash (cost=1.10..1.10 rows=10 width=80) (actual time=0.07..0.07\nrows=0 loops=1)\n -> Seq Scan on a_locations tt (cost=0.00..1.10 rows=10\nwidth=80) (actual time=0.03..0.05 rows=10 loops=1)\n Total runtime: 287.61 msec\n(44 rows)\n\nTime: 301.36 ms\n\n\n",
"msg_date": "Sun, 27 Jul 2003 13:49:45 +0200",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query problem"
},
{
"msg_contents": "Hi Bruno,\nI think I have tunet it.\nPg is working for most of my selects, but I have problem with this one.\n\nregards,\nivan\n\nBruno BAGUETTE wrote:\n\n> Hello,\n>\n> > One of my selects is working much slower as in oracle.\n> > In this example I am using not many rows in tables.\n> > For all the joins I have indexes.\n> > All IDS or IDS_xxx are name.\n> >\n> > Pls if it is possible poit me how to fix this problem.\n> >\n> > I send the query and the explai analyze.\n> > I have ran vacuum analyze full on db.\n>\n> Have you tuned your postgresql.conf settings ?\n>\n> The PostgreSQL default settings are very low in order to allow\n> PostgreSQL to RUN on old machines and new machines. If you need\n> PERFORMANCE (which is quite logic), you must setup the postgresql.conf\n> file.\n>\n> Here's a nice article about the postgresql.conf file tuning :\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> Hope this help ! :-)\n>\n> Cheers,\n>\n> ---------------------------------------\n> Bruno BAGUETTE - [email protected]\n\n\n\n",
"msg_date": "Sun, 27 Jul 2003 14:22:48 +0200",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RE : Query problem"
},
{
"msg_contents": "Hello,\n\n> One of my selects is working much slower as in oracle.\n> In this example I am using not many rows in tables.\n> For all the joins I have indexes.\n> All IDS or IDS_xxx are name.\n> \n> Pls if it is possible poit me how to fix this problem.\n> \n> I send the query and the explai analyze.\n> I have ran vacuum analyze full on db.\n\nHave you tuned your postgresql.conf settings ?\n\nThe PostgreSQL default settings are very low in order to allow\nPostgreSQL to RUN on old machines and new machines. If you need\nPERFORMANCE (which is quite logic), you must setup the postgresql.conf\nfile.\n\nHere's a nice article about the postgresql.conf file tuning :\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nHope this help ! :-)\n\nCheers,\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n",
"msg_date": "Sun, 27 Jul 2003 15:26:03 +0200",
"msg_from": "\"Bruno BAGUETTE\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE : Query problem"
},
{
"msg_contents": "Try re-arranging your join structure:\n\n , A_CARS O\n JOIN A_CH_CAR CHT ON (CHT.IDS=O.IDS)\n left outer join A_SLUJITELI SLU ON(O.IDS_SLUJITEL=SLU.IDS)\n left outer join A_AUTOVOZ AWT ON(O.IDS_AUTOVOZ=AWT.IDS)\n left outer join A_COMBOPT CB ON(O.IDS_COMBOPT=CB.IDS)\n left outer join A_TAPICERII TAP ON(O.IDS_TAPICERII=TAP.IDS)\n left outer join A_KLIENTI DST ON(O.IDS_DOSTAV=DST.IDS)\n left outer join A_KLIENTI PRZ ON(O.IDS_PROIZV = PRZ.IDS)\n WHERE O.IDS_LOCATION=TT.IDS\n AND O.IDS_MDL_MDF_VOL=M.IDS\n AND CHT.INSTIME=1059300812726\n AND CHT.SES=1059300377005\n and O.DOG_OR_FREE IN(0,2,3);\n\nI believe this will cause fewer rows to be used when hashing for the\nleft outer joins.\n\nOn Sun, 2003-07-27 at 07:49, pginfo wrote:\n> Hi ,\n> \n> I am working to migrate a oracle application to pg.\n> I am using pg 7.3.3 on Dual PIII, 2 GB RAM,linux RedHat 7.3.\n> \n> One of my selects is working much slower as in oracle.\n> In this example I am using not many rows in tables.\n> For all the joins I have indexes.\n> All IDS or IDS_xxx are name.\n> \n> Pls if it is possible poit me how to fix this problem.\n> \n> I send the query and the explai analyze.\n> I have ran vacuum analyze full on db.\n> \n> Many thanks,\n> ivan.\n> \n> explain analyze select O.IDS as oids,O.IDS_MDL_MDF_VOL as\n> ids_mmv,M.MNAME AS MODDELNAME,M.KOD AS MODELKOD,O.IDS_COLOR,COL.MNAME\n> AS COLORNAME,COL.KOD AS COLORKOD, TT.IDS AS LOCIDS,TT.MNAME AS LOCNAME,\n> TT.KOD AS LOC_KOD ,O.IDS_DOSTAV,DST.MNAME AS DOSTAVNAME,\n> O.IDS_PROIZV,PRZ.MNAME as\n> PROIZVNAME,O.CHASSI,O.CHASSI_ACC,O.DVIGATEL,O.ORDER_NUM,O.ORDER_DATE,O.DOG_OR_FREE,\n> O.NALICHEN,O.DATE_PROIZV, O.DATE_IN,O.ALI,O.DATE_ALI,\n> O.PRICE_PAY,O.PRICE_PAY_VAL,\n> O.START_DATE,O.DAYS,O.END_DATE,O.COMENTAR,O.IDS_AUTOVOZ,AWT.MNAME AS\n> AUTOVNAME,\n> O.SVERKA,O.NEW_OLD,O.KM,O.START_DATE_REZ,O.END_DATE_REZ,O.IDS_SLUJITEL,SLU.KOD,NULL\n> AS CT_IDS, NULL AS C_NUM, O.DATE_ALI2, NULL AS C_STATE, 0 AS DAMAGE,\n> O.REG_NUMBER AS CARREGNUMBER,O.DATE_REG AS CARREGDATE,O.GARTYPE,2002 AS\n> GODINA,O.COMENTAR1, O.IDS_COMBOPT,CB.KOD AS\n> IDS_COMBOPT_KOD,O.REF_BG,O.DAM,O.OBEM, O.IDS_TAPICERII,TAP.KOD AS\n> IDS_TAPICERII_KOD,TAP.MNAME AS\n> IDS_TAPICERII_NAME,O.PAPKA_N,O.CEDMICAPR, O.RADIO_KOD AS\n> RADIO_KOD,O.KEY_KOD AS KEY_KOD,O.ALARM_KOD AS ALARM_KOD,O.BOLT_KOD AS\n> BOLT_KOD,M.MOST_PS, NULL AS IDS_KLIENT , NULL AS KlientName ,O.TALON_N\n> AS talonN,O.STATEMODIFY AS STATEMOD,O.MESTA AS MESTA,O.CENA_COLOR AS\n> CENA_COL,O.CENA_TAP AS CENA_TAP,M.CENA_PROD AS\n> MCENA_PROD,M.CENA_PROD_VAL AS\n> MCENA_PROD_VAL,O.CENA_MDL,O.MESTA_MDL,O.CENA_COLOR_VAL,O.CENA_TAP_VAL,O.CENA_MDL_VAL,O.VIRTUALEN,M.IDS_GRUPA,COL.MNAME_1\n> AS COLMNAME1,O.DATE_PLAN_P,O.KM_PLAN_P from A_COLORS COL, A_MDL_MDF_VOL\n> M ,A_LOCATIONS TT, A_CARS O left outer join A_SLUJITELI SLU\n> ON(O.IDS_SLUJITEL=SLU.IDS) left outer join A_AUTOVOZ AWT\n> ON(O.IDS_AUTOVOZ=AWT.IDS) left outer join A_COMBOPT CB\n> ON(O.IDS_COMBOPT=CB.IDS) left outer join A_TAPICERII TAP\n> ON(O.IDS_TAPICERII=TAP.IDS) left outer join A_KLIENTI DST ON(\n> O.IDS_DOSTAV=DST.IDS) left outer join A_KLIENTI PRZ ON( O.IDS_PROIZV =\n> PRZ.IDS) ,A_CH_CAR CHT WHERE O.IDS_LOCATION=TT.IDS AND\n> O.IDS_MDL_MDF_VOL=M.IDS AND O.IDS_COLOR=COL.IDS AND CHT.IDS=O.IDS AND\n> CHT.INSTIME=1059300812726 AND CHT.SES=1059300377005 and O.DOG_OR_FREE\n> IN(0,2,3) ;\n> \n> QUERY PLAN\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> \n> Hash Join (cost=138.54..142.57 rows=2 width=2051) (actual\n> time=286.17..286.29 rows=2 loops=1)\n> Hash Cond: (\"outer\".ids_location = \"inner\".ids)\n> -> Hash Join (cost=137.42..141.40 rows=2 width=1971) (actual\n> time=285.95..286.02 rows=2 loops=1)\n> Hash Cond: (\"outer\".ids = \"inner\".ids_color)\n> -> Seq Scan on a_colors col (cost=0.00..3.12 rows=112\n> width=101) (actual time=0.01..0.30 rows=112 loops=1)\n> -> Hash (cost=137.41..137.41 rows=2 width=1870) (actual\n> time=285.43..285.43 rows=0 loops=1)\n> -> Hash Join (cost=134.88..137.41 rows=2 width=1870)\n> (actual time=285.12..285.42 rows=2 loops=1)\n> Hash Cond: (\"outer\".ids = \"inner\".ids_mdl_mdf_vol)\n> -> Seq Scan on a_mdl_mdf_vol m (cost=0.00..2.34\n> rows=34 width=189) (actual time=0.03..0.21 rows=34 loops=1)\n> -> Hash (cost=134.88..134.88 rows=2 width=1681)\n> (actual time=284.98..284.98 rows=0 loops=1)\n> -> Hash Join (cost=10.76..134.88 rows=2\n> width=1681) (actual time=189.62..284.97 rows=2 loops=1)\n> Hash Cond: (\"outer\".ids = \"inner\".ids)\n> -> Hash Join (cost=9.73..128.72\n> rows=1019 width=1617) (actual time=1.58..283.39 rows=1023 loops=1)\n> Hash Cond: (\"outer\".ids_proizv =\n> \"inner\".ids)\n> -> Hash Join (cost=7.50..108.66\n> rows=1019 width=1545) (actual time=1.34..234.05 rows=1023 loops=1)\n> Hash Cond:\n> (\"outer\".ids_dostav = \"inner\".ids)\n> -> Hash Join\n> (cost=5.28..88.60 rows=1019 width=1473) (actual time=1.12..188.41\n> rows=1023 loops=1)\n> Hash Cond:\n> (\"outer\".ids_tapicerii = \"inner\".ids)\n> -> Hash Join\n> (cost=2.40..67.89 rows=1019 width=1372) (actual time=0.68..145.58\n> rows=1023 loops=1)\n> Hash Cond:\n> (\"outer\".ids_combopt = \"inner\".ids)\n> -> Hash Join\n> (cost=1.09..46.19 rows=1019 width=1301) (actual time=0.45..106.88\n> rows=1023 loops=1)\n> Hash\n> Cond: (\"outer\".ids_autovoz = \"inner\".ids)\n> -> Hash\n> Join (cost=1.09..41.03 rows=1019 width=1189) (actual time=0.31..72.28\n> rows=1023 loops=1)\n> \n> Hash Cond: (\"outer\".ids_slujitel = \"inner\".ids)\n> ->\n> Index Scan using i_cars_dog_or_free on a_cars o (cost=0.00..22.11\n> rows=1019 width=1119) (actual time=0.12..37.41 rows=1023 loops=1)\n> \n> Filter: ((dog_or_free = 0) OR (dog_or_free = 2) OR (dog_or_free = 3))\n> ->\n> Hash (cost=1.07..1.07 rows=7 width=70) (actual time=0.04..0.04 rows=0\n> loops=1)\n> \n> -> Seq Scan on a_slujiteli slu (cost=0.00..1.07 rows=7 width=70)\n> (actual time=0.01..0.03 rows=7 loops=1)\n> -> Hash\n> (cost=0.00..0.00 rows=1 width=112) (actual time=0.00..0.00 rows=0\n> loops=1)\n> ->\n> Seq Scan on a_autovoz awt (cost=0.00..0.00 rows=1 width=112) (actual\n> time=0.00..0.00 rows=0 loops=1)\n> -> Hash\n> (cost=1.25..1.25 rows=25 width=71) (actual time=0.09..0.09 rows=0\n> loops=1)\n> -> Seq\n> Scan on a_combopt cb (cost=0.00..1.25 rows=25 width=71) (actual\n> time=0.01..0.06 rows=25 loops=1)\n> -> Hash\n> (cost=2.70..2.70 rows=70 width=101) (actual time=0.29..0.29 rows=0\n> loops=1)\n> -> Seq Scan on\n> a_tapicerii tap (cost=0.00..2.70 rows=70 width=101) (actual\n> time=0.01..0.17 rows=70 loops=1)\n> -> Hash (cost=2.18..2.18\n> rows=18 width=72) (actual time=0.06..0.06 rows=0 loops=1)\n> -> Seq Scan on\n> a_klienti dst (cost=0.00..2.18 rows=18 width=72) (actual\n> time=0.01..0.03 rows=18 loops=1)\n> -> Hash (cost=2.18..2.18\n> rows=18 width=72) (actual 
time=0.07..0.07 rows=0 loops=1)\n> -> Seq Scan on a_klienti\n> prz (cost=0.00..2.18 rows=18 width=72) (actual time=0.01..0.05 rows=18\n> loops=1)\n> -> Hash (cost=1.03..1.03 rows=2\n> width=64) (actual time=0.03..0.03 rows=0 loops=1)\n> -> Seq Scan on a_ch_car cht\n> (cost=0.00..1.03 rows=2 width=64) (actual time=0.02..0.03 rows=2\n> loops=1)\n> Filter: ((instime =\n> 1059300812726::bigint) AND (ses = 1059300377005::bigint))\n> -> Hash (cost=1.10..1.10 rows=10 width=80) (actual time=0.07..0.07\n> rows=0 loops=1)\n> -> Seq Scan on a_locations tt (cost=0.00..1.10 rows=10\n> width=80) (actual time=0.03..0.05 rows=10 loops=1)\n> Total runtime: 287.61 msec\n> (44 rows)\n> \n> Time: 301.36 ms\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>",
"msg_date": "27 Jul 2003 10:25:40 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query problem"
}
] |
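
Spelled out in full (and with the O.IDS_COLOR = COL.IDS condition from the original query restored), Rod's rearranged FROM clause would look something like the sketch below; the long select list is elided. In 7.3 the explicit JOIN syntax also pins the join order, so the two forms are worth comparing with EXPLAIN ANALYZE:

    SELECT ...
      FROM A_COLORS COL,
           A_MDL_MDF_VOL M,
           A_LOCATIONS TT,
           A_CARS O
           JOIN A_CH_CAR CHT ON (CHT.IDS = O.IDS)
           LEFT OUTER JOIN A_SLUJITELI SLU ON (O.IDS_SLUJITEL = SLU.IDS)
           LEFT OUTER JOIN A_AUTOVOZ AWT   ON (O.IDS_AUTOVOZ = AWT.IDS)
           LEFT OUTER JOIN A_COMBOPT CB    ON (O.IDS_COMBOPT = CB.IDS)
           LEFT OUTER JOIN A_TAPICERII TAP ON (O.IDS_TAPICERII = TAP.IDS)
           LEFT OUTER JOIN A_KLIENTI DST   ON (O.IDS_DOSTAV = DST.IDS)
           LEFT OUTER JOIN A_KLIENTI PRZ   ON (O.IDS_PROIZV = PRZ.IDS)
     WHERE O.IDS_LOCATION    = TT.IDS
       AND O.IDS_MDL_MDF_VOL = M.IDS
       AND O.IDS_COLOR       = COL.IDS
       AND CHT.INSTIME = 1059300812726
       AND CHT.SES     = 1059300377005
       AND O.DOG_OR_FREE IN (0, 2, 3);
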
[
{
"msg_contents": "Greetings,\n\nI am trying to understand the various factors used by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of memory. I've done the following:\n\nset /proc/sys/kernel/shmmax to 512000000\nshared_buffers = 32000\nsort_mem = 32000\nmax_connections=64\nfsync=false\n\nCan someone tell me what effective_cache_size should be set to? what kind of formula to use for this? (I got the other figures from phpbuilder.com, and modified for 512k memory). \n\nThe databases I'm using have about 200,000+ news headlines with full-text indexes (which range upwards of a few million records). They are updated about every 5 to 10 minutes, which means I also have to run a vacuum about once every 2 to 3 hours at least. As I get more updates obviously the efficiency goes down. I'm trying to make the most of this system but don't fully understand PG's optimization stuff.\n\nThanks in advance,\nJustin Long\n\n\n\n\n\n\n\nGreetings,\n \nI am trying to understand the various factors used \nby Postgres to optimize. I presently have a dual-866 Dell server with 1GB of \nmemory. I've done the following:\n \nset /proc/sys/kernel/shmmax to \n512000000\nshared_buffers = 32000sort_mem = \n32000max_connections=64fsync=false\nCan someone tell me what effective_cache_size \nshould be set to? what kind of formula to use for this? (I got the other figures \nfrom phpbuilder.com, and modified for 512k memory). \n \nThe databases I'm using have about 200,000+ news \nheadlines with full-text indexes (which range upwards of a few million records). \nThey are updated about every 5 to 10 minutes, which means I also have to run a \nvacuum about once every 2 to 3 hours at least. As I get more updates obviously \nthe efficiency goes down. I'm trying to make the most of this system but don't \nfully understand PG's optimization stuff.\n \nThanks in advance,\nJustin Long",
"msg_date": "Mon, 28 Jul 2003 12:18:55 -0400",
"msg_from": "\"Justin Long\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization"
},
{
"msg_contents": "Justin,\n\n> I am trying to understand the various factors used by Postgres to optimize. \nI presently have a dual-866 Dell server with 1GB of memory. I've done the \nfollowing:\n\nPlease set the performance articles at: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 28 Jul 2003 11:00:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "Justin-\n\nIt sounds like you're on a system similar to ours, so I'll pass along the\nchanges that I made, which seem to have increased performance, and most\nimportantly, haven't hurt anything. The main difference in our environment\nis that we are less Update/Insert intensive than you are- in our\napplication, 90% of our information (court cases) is static (Closed) and 10%\nare frequently being updated (Pending/Active). This means I only vacuum once\na week. I haven't had chance to churn out objective tests yet, but my\nsubjective judgment is that this set of params works well:\n\nSet SHMMAX and SHMALL in the kernel to 134217728 (128MB)\nSet shared_buffers to 8192 (64MB)\nSet sort_mem to 16384 (16MB)\nSet effective_cache_size to 65536 (1/2 GB)\n\n\nThe Hardware is a dual-processor Athlon 1.2 Ghz box with 1 GB of RAM and the\nDB on SCSI RAID drives.\n\nThe database size is about 8GB, with the largest table 2.5 GB, and the two\nmost commonly queried tables at 1 GB each.\n\nThe OS is Debian Linux kernel 2.4.x (recompiled custom kernel for dual\nprocessor support)\nThe PostgreSQL version is 7.3.2\n\nMy reasoning was to increase shared_buffers based on anecdotal\nrecommendations I've seen on this list to 64MB and boost the OS SHMMAX to\ntwice that value to allow adequate room for other shared memory needs, thus\nreserving 128MB. Of the remaining memory, 256MB goes to 16MB sort space\ntimes\na guesstimate of 16 simultaneous sorts at any given time. If I leave about\n128 MB for headroom, then 1/2 GB should be left available for the effective\ncache size.\n\nI've never been tempted to turn fsync off. That seems like a risky move.\n\nRegards,\n -Nick\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n",
"msg_date": "Mon, 28 Jul 2003 13:25:55 -0500",
"msg_from": "\"Nick Fankhauser - Doxpop\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "Justin-\n\nIt sounds like you're on a system similar to ours, so I'll pass along the\nchanges that I made, which seem to have increased performance, and most\nimportantly, haven't hurt anything. The main difference in our environment\nis that we are less Update/Insert intensive than you are- in our\napplication, 90% of our information (court cases) is static (Closed) and 10%\nare frequently being updated (Pending/Active). This means I only vacuum once\na week. I haven't had chance to churn out objective tests yet, but my\nsubjective judgment is that this set of params works well:\n\nSet SHMMAX and SHMALL in the kernel to 134217728 (128MB)\nSet shared_buffers to 8192 (64MB)\nSet sort_mem to 16384 (16MB)\nSet effective_cache_size to 65536 (1/2 GB)\n\nThe Hardware is a dual-processor Athlon 1.2 Ghz box with 1 GB of RAM and the\nDB on SCSI RAID drives.\n\nThe database size is about 8GB, with the largest table 2.5 GB, and the two\nmost commonly queried tables at 1 GB each.\n\nThe OS is Debian Linux kernel 2.4.x (recompiled custom kernel for dual\nprocessor support)\n\nThe PostgreSQL version is 7.3.2\n\nMy reasoning was to increase shared_buffers based on anecdotal\nrecommendations I've seen on this list to 64MB and boost the OS SHMMAX to\ntwice that value to allow adequate room for other shared memory needs, thus\nreserving 128MB off the top. Of the remaining memory, 256MB goes to 16MB\nsort space times a guesstimate of 16 simultaneous sorts at any given time.\nIf I leave about 128 MB for headroom, then 1/2 GB should be left available\nfor the effective cache size.\n\nI've never been tempted to turn fsync off. That seems like a risky move.\n\nRegards,\n -Nick\n---------------------------------------------------------------------\nNick Fankhauser\[email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n",
"msg_date": "Mon, 28 Jul 2003 13:28:34 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "\n\n>Can someone tell me what effective_cache_size should be set to?\n\nYou may be able to intuit this from my last post, but if I understand\ncorrectly, what you should be doing is estimating how much memory is likely\nto be \"left over\" for the OS to do disk caching with after all of the basic\nneeds of the OS, PostgreSQL & any other applications are taken care of. You\nthen tell postgresql what to expect in terms of caching resources by putting\nthis number into effective_cache_size, and this allows the query planner\ncome up with a strategy that is optimized for the expected cache size.\n\nSo the \"formula\" would be: Figure out how much memory is normally in use\nallowing adequate margins, subtract this from your total RAM, and make the\nremainder your effective_cache size.\n\n\n-Nick\n\n",
"msg_date": "Mon, 28 Jul 2003 13:39:09 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "Justin,\n\n> I am trying to understand the various factors used by Postgres to optimize. \nI presently have a dual-866 Dell server with 1GB of memory. I've done the \nfollowing:\n\nsee: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nwhich has articles on .conf files.\n(feel free to link these articles at PHPbuilder.com and elsewhere!)\n\n> The databases I'm using have about 200,000+ news headlines with full-text \nindexes (which range upwards of a few million records). They are updated \nabout every 5 to 10 minutes, which means I also have to run a vacuum about \nonce every 2 to 3 hours at least. As I get more updates obviously the \nefficiency goes down. I'm trying to make the most of this system but don't \nfully understand PG's optimization stuff.\n\nUnless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every \n10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your \ndatabase. You will also want to increase your FSM_relations so that VACUUM \nis more effective/efficient; again, see the articles.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 28 Jul 2003 12:27:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "On 28 Jul 2003 at 12:27, Josh Berkus wrote:\n> Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every \n> 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your \n> database. You will also want to increase your FSM_relations so that VACUUM \n> is more effective/efficient; again, see the articles.\n\nThere is an auto-vacuum daemon in contrib and if I understand it correctly, it \nis not getting much of a field testing. How about you guys installing it and \ntrying it?\n\nBye\n Shridhar\n\n--\nO'Reilly's Law of the Kitchen:\tCleanliness is next to impossible\n\n",
"msg_date": "Tue, 29 Jul 2003 12:21:59 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "On Tue, 29 Jul 2003, Shridhar Daithankar wrote:\n\n> On 28 Jul 2003 at 12:27, Josh Berkus wrote:\n> > Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every \n> > 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your \n> > database. You will also want to increase your FSM_relations so that VACUUM \n> > is more effective/efficient; again, see the articles.\n> \n> There is an auto-vacuum daemon in contrib and if I understand it correctly, it \n> is not getting much of a field testing. How about you guys installing it and \n> trying it?\n\n\tIf there is such a daemon, what is it called? As I can't see it. \nIs it part of gborg?\n\nPeter Childs\n\n",
"msg_date": "Tue, 29 Jul 2003 08:14:59 +0100 (BST)",
"msg_from": "Peter Childs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
},
{
"msg_contents": "On 29 Jul 2003 at 8:14, Peter Childs wrote:\n\n> On Tue, 29 Jul 2003, Shridhar Daithankar wrote:\n> \n> > On 28 Jul 2003 at 12:27, Josh Berkus wrote:\n> > > Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every \n> > > 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your \n> > > database. You will also want to increase your FSM_relations so that VACUUM \n> > > is more effective/efficient; again, see the articles.\n> > \n> > There is an auto-vacuum daemon in contrib and if I understand it correctly, it \n> > is not getting much of a field testing. How about you guys installing it and \n> > trying it?\n> \n> \tIf there is such a daemon, what is it called? As I can't see it. \n> Is it part of gborg?\n\nIt is in sources. See contrib module in postgresql CVS, 7.4 beta if you prefer \nto wait till announement.\n\nIt is called as pgavd..\n\nBye\n Shridhar\n\n--\nsquatcho, n.:\tThe button at the top of a baseball cap.\t\t-- \"Sniglets\", Rich \nHall & Friends\n\n",
"msg_date": "Tue, 29 Jul 2003 12:51:30 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization"
}
] |
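
Nick's formula, worked through for the 1 GB machine in the original post (illustrative arithmetic only): if roughly half the RAM is expected to be left over for the kernel's disk cache, that is 512 MB, and 512 * 1024 kB / 8 kB per page = 65536 pages:

    effective_cache_size = 65536   # ~512 MB of expected kernel cache, in 8 kB pages
    sort_mem = 16384               # 16 MB per sort, as in Nick's settings
    shared_buffers = 8192          # 64 MB

Josh's every-10-to-15-minutes VACUUM could then be driven from cron, e.g. running "vacuumdb -z -d news" every 15 minutes (the database name here is made up), at least until the pgavd daemon mentioned later in the thread is an option.
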
[
{
"msg_contents": "Hi Everyone,\n\nI've a kind of less inserts/mostly updates table,\nwhich we vacuum every half-hour. \n\nhere is the output of vacuum analyze \n\nINFO: --Relation public.accounts--\nINFO: Index accounts_u1: Pages 1498; Tuples 515:\nDeleted 179.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Index accounts_u2: Pages 2227; Tuples 515:\nDeleted 179.\n\tCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: Index accounts_u3: Pages 246; Tuples 515:\nDeleted 179.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nhowever its indexes keeps growing on and on. After\nsurfing the manuals for a while, i came to know that\nvacuum doesn't clears up dead tuples caused by\nupdates. so i then decided to do reindex online, but\nthat makes exclusive lock on table which would prevent\n\nwriting on to tables.\n\nfinally i'm at a point where i decided to do index\nswapping.\n\nfor e.g.\n\n1. create index accounts_u1_swap,accounts_u2_swap and\naccounts_u3_swap in addition to the original indexes\n\n2. analyze table to update stats, so that the table\nknows about new indexes.\n\n3. drop original indexes\n\n4. i wish i had a rename index command to rename _swap\nto its original index name. now create indexes with\noriginal name\n\n5. follow #2 and #3 (now drop _swap indexes)\n\nIs there a better way to do this. comments are\nappreciated.\n\nthanks\n-Shankar\n\n__________________________________\nDo you Yahoo!?\nYahoo! SiteBuilder - Free, easy-to-use web site design software\nhttp://sitebuilder.yahoo.com\n",
"msg_date": "Mon, 28 Jul 2003 14:29:05 -0700 (PDT)",
"msg_from": "Shankar K <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rebuild indexes"
},
{
"msg_contents": "Shankar,\n\n> Is there a better way to do this. comments are\n> appreciated.\n\nNo. This is one of the major features in 7.4; FSM and VACUUM will manage \nindexes as well. Until then, we all suffer ....\n\nBTW, the REINDEX command is transaction-safe. So if your database has \"lull\" \nperiods, you can run it without worrying that any updates will get turned \nback.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 28 Jul 2003 17:18:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Rebuild indexes"
},
{
"msg_contents": "Shankar K <[email protected]> writes:\n> ... so i then decided to do reindex online, but\n> that makes exclusive lock on table which would prevent\n> writing on to tables.\n\nSo does CREATE INDEX, so it's not clear what you're buying with\nall these pushups.\n\n> 2. analyze table to update stats, so that the table\n> knows about new indexes.\n\nYou do not need to ANALYZE to get the system to notice new indexes.\n\n> 4. i wish i had a rename index command to rename _swap\n> to its original index name.\n\nYou can rename indexes as if they were tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jul 2003 22:21:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rebuild indexes "
},
{
"msg_contents": "thanks tom. i wasn't sure about create index taking\nexclusive locks on tables too. so i could as well\nreindex than doing the whole _swap mess during\noff-peak hrs.\n\n\n--- Tom Lane <[email protected]> wrote:\n> Shankar K <[email protected]> writes:\n> > ... so i then decided to do reindex online, but\n> > that makes exclusive lock on table which would\n> prevent\n> > writing on to tables.\n> \n> So does CREATE INDEX, so it's not clear what you're\n> buying with\n> all these pushups.\n> \n> > 2. analyze table to update stats, so that the\n> table\n> > knows about new indexes.\n> \n> You do not need to ANALYZE to get the system to\n> notice new indexes.\n> \n> > 4. i wish i had a rename index command to rename\n> _swap\n> > to its original index name.\n> \n> You can rename indexes as if they were tables.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please\n> send an appropriate\n> subscribe-nomail command to\n> [email protected] so that your\n> message can get through to the mailing list\ncleanly\n\n\n__________________________________\nDo you Yahoo!?\nYahoo! SiteBuilder - Free, easy-to-use web site design software\nhttp://sitebuilder.yahoo.com\n",
"msg_date": "Mon, 28 Jul 2003 20:00:39 -0700 (PDT)",
"msg_from": "Shankar K <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Rebuild indexes "
}
] |
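
For reference, the swap Shankar describes, combined with Tom's rename tip, reduces to plain SQL along these lines. The column list is a placeholder since the real index definition isn't shown in the thread, and per Tom's note both CREATE INDEX and REINDEX block writes while they run, so the swap is mainly a convenience:

    CREATE INDEX accounts_u1_swap ON accounts (acct_no);  -- same definition as accounts_u1; use CREATE UNIQUE INDEX if the original is unique
    BEGIN;
    DROP INDEX accounts_u1;
    ALTER TABLE accounts_u1_swap RENAME TO accounts_u1;   -- indexes rename just like tables
    COMMIT;
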
[
{
"msg_contents": "\nHi,\n\nFor each company_id in certain table i have to search the same table\nget certain rows sort them and pick up the top one , i tried using this\nsubselect:\n\nexplain analyze SELECT company_id , (SELECT edition FROM ONLY \npublic.branding_master b WHERE old_company_id = a.company_id OR company_id = \na.company_id ORDER BY b.company_id DESC LIMIT 1) from public.branding_master \na limit 50;\n\n\n \nQUERY PLAN\n\nLimit (cost=0.00..3.52 rows=50 width=4) (actual time=463.97..19429.54 rows=50 \nloops=1)\n -> Seq Scan on branding_master a (cost=0.00..6530.79 rows=92679 width=4) \n(actual time=463.97..19429.28 rows=51 loops=1)\n SubPlan\n -> Limit (cost=0.00..168.36 rows=1 width=6) (actual \ntime=66.96..380.94 rows=1 loops=51)\n -> Index Scan Backward using branding_master_pkey on \nbranding_master b (cost=0.00..23990.26 rows=142 width=6) (actual \ntime=66.95..380.93 rows=1 loops=51)\n Filter: ((old_company_id = $0) OR (company_id = $0))\nTotal runtime: 19429.76 msec\n(7 rows)\n\nVery Slow 20 secs.\n\n\nCREATE FUNCTION most_recent_edition (integer) returns integer AS 'SELECT \nedition::integer FROM ONLY public.branding_master b WHERE old_company_id = $1 \nOR company_id = $1 ORDER BY b.company_id DESC LIMIT 1 ' language 'sql';\n\ntradein_clients=# explain analyze SELECT company_id , \nmost_recent_edition(company_id) from public.branding_master limit 50;\n\nQUERY PLAN\n\nLimit (cost=0.00..3.52 rows=50 width=4) (actual time=208.23..3969.39 rows=50 \nloops=1)\n -> Seq Scan on branding_master (cost=0.00..6530.79 rows=92679 width=4) \n(actual time=208.22..3969.15 rows=51 loops=1)\nTotal runtime: 3969.52 msec\n(3 rows)\n\nTime: 4568.33 ms\n\n 4 times faster.\n\n\nBut i feel it can be lot more faster , can anyone suggest me something\nto try.\n\nIndexes exists on company_id(pkey) and old_company_id Most of the chores \nare already done [ vacuum full analyze , reindex ]\n\n\nRegds\nmallah.\n\n\n\n\n\n\n",
"msg_date": "Tue, 29 Jul 2003 11:14:29 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why performance improvement on converting subselect to a function ?"
},
{
"msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> explain analyze SELECT company_id , (SELECT edition FROM ONLY \n> public.branding_master b WHERE old_company_id = a.company_id OR company_id = \n> a.company_id ORDER BY b.company_id DESC LIMIT 1) from public.branding_master\n> a limit 50;\n> Total runtime: 19429.76 msec\n\n> CREATE FUNCTION most_recent_edition (integer) returns integer AS 'SELECT \n> edition::integer FROM ONLY public.branding_master b WHERE old_company_id = $1\n> OR company_id = $1 ORDER BY b.company_id DESC LIMIT 1 ' language 'sql';\n\n> tradein_clients=# explain analyze SELECT company_id , \n> most_recent_edition(company_id) from public.branding_master limit 50;\n> Total runtime: 3969.52 msec\n\nOdd. Apparently the planner is picking a better plan in the function\ncontext than in the subselect context --- which is strange since it\nought to have less information.\n\nAFAIK the only way to see the plan generated for a SQL function's query\nis like this:\n\nregression=# create function foo(int) returns int as\nregression-# 'select unique1 from tenk1 where unique1 = $1' language sql;\nCREATE FUNCTION\nregression=# set debug_print_plan TO 1;\nSET\nregression=# set client_min_messages TO debug;\nSET\nregression=# select foo(55);\nDEBUG: plan:\nDETAIL: {RESULT :startup_cost 0.00 :total_cost 0.01 :plan_rows 1 :plan_width 0\n:targetlist ({TARGETENTRY :resdom {RESDOM :resno 1 :restype 23 :restypmod -1\n:resname foo :ressortgroupref 0 :resorigtbl 0 :resorigcol 0 :resjunk false}\n:expr {FUNCEXPR :funcid 706101 :funcresulttype 23 :funcretset false\n ... (etc etc)\n\nWould you do that and send it along? I'm curious ...\n\n> But i feel it can be lot more faster , can anyone suggest me something\n> to try.\n\nCreate an index on old_company_id, perhaps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jul 2003 10:09:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why performance improvement on converting subselect to a function\n\t?"
},
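
One more rewrite that may be worth trying for the inner lookup (not something suggested in the thread): the OR across company_id and old_company_id is being handled as a backward scan over the primary key with a filter, as the plans above show, so splitting it into a UNION ALL of two indexed probes and sorting the handful of survivors is a common alternative. A sketch of the SQL-function body, assuming the same two indexes mentioned in the thread:

    SELECT edition
      FROM (SELECT edition, company_id FROM ONLY public.branding_master WHERE company_id = $1
            UNION ALL
            SELECT edition, company_id FROM ONLY public.branding_master WHERE old_company_id = $1) AS hits
     ORDER BY company_id DESC
     LIMIT 1;

Whether it wins depends on how many rows match each condition, but with a few hundred matches it avoids walking the whole primary key backward for every outer row.
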
{
"msg_contents": "Tom Lane wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n> \n>\n>>explain analyze SELECT company_id , (SELECT edition FROM ONLY \n>>public.branding_master b WHERE old_company_id = a.company_id OR company_id = \n>>a.company_id ORDER BY b.company_id DESC LIMIT 1) from public.branding_master\n>>a limit 50;\n>>Total runtime: 19429.76 msec\n>> \n>>\n>\n> \n>\n>>CREATE FUNCTION most_recent_edition (integer) returns integer AS 'SELECT \n>>edition::integer FROM ONLY public.branding_master b WHERE old_company_id = $1\n>>OR company_id = $1 ORDER BY b.company_id DESC LIMIT 1 ' language 'sql';\n>> \n>>\n>\n> \n>\n>>tradein_clients=# explain analyze SELECT company_id , \n>>most_recent_edition(company_id) from public.branding_master limit 50;\n>>Total runtime: 3969.52 msec\n>> \n>>\n>\n>Odd. Apparently the planner is picking a better plan in the function\n>context than in the subselect context --- which is strange since it\n>ought to have less information.\n>\n>AFAIK the only way to see the plan generated for a SQL function's query\n>is like this:\n>\n>regression=# create function foo(int) returns int as\n>regression-# 'select unique1 from tenk1 where unique1 = $1' language sql;\n>CREATE FUNCTION\n>regression=# set debug_print_plan TO 1;\n>SET\n>regression=# set client_min_messages TO debug;\n>SET\n>regression=# select foo(55);\n>DEBUG: plan:\n>DETAIL: {RESULT :startup_cost 0.00 :total_cost 0.01 :plan_rows 1 :plan_width 0\n>:targetlist ({TARGETENTRY :resdom {RESDOM :resno 1 :restype 23 :restypmod -1\n>:resname foo :ressortgroupref 0 :resorigtbl 0 :resorigcol 0 :resjunk false}\n>:expr {FUNCEXPR :funcid 706101 :funcresulttype 23 :funcretset false\n> ... (etc etc)\n>\n>Would you do that and send it along? I'm curious ...\n>\n\nSorry for the delayed response.\n\n tradein_clients=# explain analyze SELECT company_id , \n data_bank.most_recent_edition(company_id) from\n public.branding_master limit 50;\n\n --------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.57 rows=50 width=4) (actual\n time=149.52..2179.49 rows=50 loops=1)\n -> Seq Scan on branding_master (cost=0.00..6626.52 rows=92752\n width=4) (actual time=149.51..2179.30 rows=51 loops=1)\n tradein_clients=#\n tradein_clients=#\n tradein_clients=# explain analyze SELECT company_id , \n data_bank.most_recent_edition(company_id) from\n public.branding_master limit 50;\n DEBUG: StartTransactionCommand\n LOG: plan:\n { LIMIT :startup_cost 0.00 :total_cost 185.65 :rows 1 :width 6\n :qptargetlist\n ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1\n :resname\n edition :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n :expr { EXPR\n :typeOid 23 :opType func :oper { FUNC :funcid 313 :funcresulttype 23\n :funcretset false :funcformat 1 } :args ({ VAR :varno 1 :varattno 31\n :vartype\n 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 31})}} {\n TARGETENTRY\n :resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname company_id\n :reskey 0 :reskeyop 0 :ressortgroupref 1 :resjunk true } :expr { VAR\n :varno 1\n :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n :varoattno\n 1}}) :qpqual <> :lefttree { INDEXSCAN :startup_cost 0.00 :total_cost\n 24302.69\n :rows 131 :width 6 :qptargetlist ({ TARGETENTRY :resdom { RESDOM\n :resno 1\n :restype 23 :restypmod -1 :resname edition :reskey 0 :reskeyop 0\n :ressortgroupref 0 :resjunk false } :expr { EXPR :typeOid 23 \n :opType func\n :oper { FUNC :funcid 313 
:funcresulttype 23 :funcretset false\n :funcformat 1 }\n :args ({ VAR :varno 1 :varattno 31 :vartype 21 :vartypmod -1 \n :varlevelsup 0\n :varnoold 1 :varoattno 31})}} { TARGETENTRY :resdom { RESDOM :resno\n 2 :restype\n 23 :restypmod -1 :resname company_id :reskey 0 :reskeyop 0\n :ressortgroupref 1\n :resjunk true } :expr { VAR :varno 1 :varattno 1 :vartype 23\n :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1}}) :qpqual ({ EXPR :typeOid 16\n :opType or :oper <> :args ({ EXPR :typeOid 16 :opType op :oper {\n OPER :opno\n 96 :opid 65 :opresulttype 16 :opretset false } :args ({ VAR :varno 1\n :varattno\n 19 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno\n 19} {\n PARAM :paramkind 12 :paramid 1 :paramname \\<unnamed> :paramtype 23\n })} { EXPR\n :typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16\n :opretset false } :args ({ VAR :varno 1 :varattno 1 :vartype 23\n :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 1} { PARAM :paramkind 12\n :paramid 1\n :paramname \\<unnamed> :paramtype 23 })})}) :lefttree <> :righttree\n <> :extprm\n () :locprm () :initplan <> :nprm 0 :scanrelid 1 :indxid ( 310742439)\n :indxqual (<>) :indxqualorig (<>) :indxorderdir -1 } :righttree <>\n :extprm ()\n :locprm () :initplan <> :nprm 0 :limitOffset <> :limitCount { CONST\n :consttype 23 :constlen 4 :constbyval true :constisnull false\n :constvalue 4 [\n 1 0 0 0 ] }}\n\n DEBUG: CommitTransactionCommand\n\n\n\n>\n> \n>\n>>But i feel it can be lot more faster , can anyone suggest me something\n>>to try.\n>> \n>>\n>Create an index on old_company_id, perhaps.\n>\nIts there already..\n branding_master_old_comapany_id btree (old_company_id),\n\nregds , mallah.\n\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nRajesh Kumar Mallah <[email protected]> writes:\n \n\nexplain analyze SELECT company_id , (SELECT edition FROM ONLY \npublic.branding_master b WHERE old_company_id = a.company_id OR company_id = \na.company_id ORDER BY b.company_id DESC LIMIT 1) from public.branding_master\na limit 50;\nTotal runtime: 19429.76 msec\n \n\n\n \n\nCREATE FUNCTION most_recent_edition (integer) returns integer AS 'SELECT \nedition::integer FROM ONLY public.branding_master b WHERE old_company_id = $1\nOR company_id = $1 ORDER BY b.company_id DESC LIMIT 1 ' language 'sql';\n \n\n\n \n\ntradein_clients=# explain analyze SELECT company_id , \nmost_recent_edition(company_id) from public.branding_master limit 50;\nTotal runtime: 3969.52 msec\n \n\n\nOdd. 
Apparently the planner is picking a better plan in the function\ncontext than in the subselect context --- which is strange since it\nought to have less information.\n\nAFAIK the only way to see the plan generated for a SQL function's query\nis like this:\n\nregression=# create function foo(int) returns int as\nregression-# 'select unique1 from tenk1 where unique1 = $1' language sql;\nCREATE FUNCTION\nregression=# set debug_print_plan TO 1;\nSET\nregression=# set client_min_messages TO debug;\nSET\nregression=# select foo(55);\nDEBUG: plan:\nDETAIL: {RESULT :startup_cost 0.00 :total_cost 0.01 :plan_rows 1 :plan_width 0\n:targetlist ({TARGETENTRY :resdom {RESDOM :resno 1 :restype 23 :restypmod -1\n:resname foo :ressortgroupref 0 :resorigtbl 0 :resorigcol 0 :resjunk false}\n:expr {FUNCEXPR :funcid 706101 :funcresulttype 23 :funcretset false\n ... (etc etc)\n\nWould you do that and send it along? I'm curious ...\n\n\nSorry for the delayed response.\ntradein_clients=# explain analyze SELECT company_id , \ndata_bank.most_recent_edition(company_id) from public.branding_master\nlimit 50;\n\n--------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.57 rows=50 width=4) (actual time=149.52..2179.49\nrows=50 loops=1)\n -> Seq Scan on branding_master (cost=0.00..6626.52 rows=92752\nwidth=4) (actual time=149.51..2179.30 rows=51 loops=1)\ntradein_clients=# \ntradein_clients=# \ntradein_clients=# explain analyze SELECT company_id , \ndata_bank.most_recent_edition(company_id) from public.branding_master\nlimit 50;\nDEBUG: StartTransactionCommand\nLOG: plan:\n{ LIMIT :startup_cost 0.00 :total_cost 185.65 :rows 1 :width 6\n:qptargetlist\n({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1\n:resname\nedition :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr\n{ EXPR\n:typeOid 23 :opType func :oper { FUNC :funcid 313 :funcresulttype 23\n:funcretset false :funcformat 1 } :args ({ VAR :varno 1 :varattno 31\n:vartype\n21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 31})}} {\nTARGETENTRY\n:resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname company_id\n:reskey 0 :reskeyop 0 :ressortgroupref 1 :resjunk true } :expr { VAR\n:varno 1\n:varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno\n1}}) :qpqual <> :lefttree { INDEXSCAN :startup_cost 0.00\n:total_cost 24302.69\n:rows 131 :width 6 :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno\n1\n:restype 23 :restypmod -1 :resname edition :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { EXPR :typeOid 23 :opType\nfunc\n:oper { FUNC :funcid 313 :funcresulttype 23 :funcretset false\n:funcformat 1 }\n:args ({ VAR :varno 1 :varattno 31 :vartype 21 :vartypmod -1 \n:varlevelsup 0\n:varnoold 1 :varoattno 31})}} { TARGETENTRY :resdom { RESDOM :resno 2\n:restype\n23 :restypmod -1 :resname company_id :reskey 0 :reskeyop 0\n:ressortgroupref 1\n:resjunk true } :expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod\n-1 \n:varlevelsup 0 :varnoold 1 :varoattno 1}}) :qpqual ({ EXPR :typeOid 16 \n:opType or :oper <> :args ({ EXPR :typeOid 16 :opType op :oper {\nOPER :opno\n96 :opid 65 :opresulttype 16 :opretset false } :args ({ VAR :varno 1\n:varattno\n19 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 19}\n{\nPARAM :paramkind 12 :paramid 1 :paramname \\<unnamed> :paramtype\n23 })} { EXPR\n:typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16\n:opretset false } :args ({ VAR :varno 1 
:varattno 1 :vartype 23\n:vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 1} { PARAM :paramkind 12 :paramid\n1\n:paramname \\<unnamed> :paramtype 23 })})}) :lefttree <>\n:righttree <> :extprm\n() :locprm () :initplan <> :nprm 0 :scanrelid 1 :indxid (\n310742439)\n:indxqual (<>) :indxqualorig (<>) :indxorderdir -1 }\n:righttree <> :extprm ()\n:locprm () :initplan <> :nprm 0 :limitOffset <>\n:limitCount { CONST\n:consttype 23 :constlen 4 :constbyval true :constisnull false\n:constvalue 4 [\n1 0 0 0 ] }}\n\nDEBUG: CommitTransactionCommand\n\n\n\n\n\n\n \n\nBut i feel it can be lot more faster , can anyone suggest me something\nto try.\n \n\n\n\nCreate an index on old_company_id, perhaps.\n\nIts there already..\n branding_master_old_comapany_id btree (old_company_id),\n\nregds , mallah.\n\n\n\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])",
"msg_date": "Tue, 29 Jul 2003 22:16:59 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why performance improvement on converting subselect"
},
{
"msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> Tom Lane wrote:\n>> Odd. Apparently the planner is picking a better plan in the function\n>> context than in the subselect context --- which is strange since it\n>> ought to have less information.\n\n> [ verbose plan snipped ]\n\nWell, that sure seems to be the same plan. Curious that the runtime\nwasn't about the same. Perhaps the slow execution of the first query\nwas a caching effect? If you alternate trying the query both ways,\ndoes the speed difference persist?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jul 2003 17:32:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why performance improvement on converting subselect to a function\n\t?"
},
{
"msg_contents": "\nDear Tom,\n\nthe problem was repeatble in the sense repeated \nexecution of queries made no difference on \nperformance. \n\n\nWhat lead to degradation was the bumping off of\neffective_cache_size parameter from 1000 to 64K\n\nCan any one point me the recent guide done by\nSridhar and Josh i want to see what i mis(read|understood)\n from there ;-) [ it was on GeneralBits' Home Page ]\n\nAnyway the performance gain was from 32 secs to less\nthan a sec what i restored cache size from 64K to 1000.\n\nI will post again with more details but at the moment\ni got to load my data_bank :)\n\n\n\nRegds\nMallah.\n\n\n\n\n\nOn Wednesday 30 Jul 2003 3:02 am, Tom Lane wrote:\n> Rajesh Kumar Mallah <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Odd. Apparently the planner is picking a better plan in the function\n> >> context than in the subselect context --- which is strange since it\n> >> ought to have less information.\n> >\n> > [ verbose plan snipped ]\n>\n> Well, that sure seems to be the same plan. Curious that the runtime\n> wasn't about the same. Perhaps the slow execution of the first query\n> was a caching effect? If you alternate trying the query both ways,\n> does the speed difference persist?\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 30 Jul 2003 12:54:49 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why performance improvement on converting subselect to a function\n\t?"
},
{
"msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> What lead to degradation was the bumping off of\n> effective_cache_size parameter from 1000 to 64K\n\nCheck the plan then; AFAIR the only possible effect of changing\neffective_cache_size is to influence which plan the planner picks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jul 2003 09:51:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why performance improvement on converting subselect to a function\n\t?"
},
{
"msg_contents": "On 30 Jul 2003 at 12:54, Rajesh Kumar Mallah wrote:\n\n> Can any one point me the recent guide done by\n> Sridhar and Josh i want to see what i mis(read|understood)\n> from there ;-) [ it was on GeneralBits' Home Page ]\n\nhttp://www.varlena.com/GeneralBits/Tidbits/perf.html\n\nHTH\n\nBye\n Shridhar\n\n--\nprogram, n.:\tA magic spell cast over a computer allowing it to turn one's input\t\ninto error messages. tr.v. To engage in a pastime similar to banging\tone's \nhead against a wall, but with fewer opportunities for reward.\n\n",
"msg_date": "Wed, 30 Jul 2003 19:26:48 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why performance improvement on converting subselect to a function\n\t?"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Rajesh Kumar Mallah <[email protected]> writes:\n> \n>\n>>What lead to degradation was the bumping off of\n>>effective_cache_size parameter from 1000 to 64K\n>> \n>>\n>\n>Check the plan then; AFAIR the only possible effect of changing\n>effective_cache_size is to influence which plan the planner picks.\n>\nDear Tom,\n\nBelow are the plans for two cases. I dont know how to read them accurately\ncan u please explain them. Also can anyone point to some documentation\noriented towards understanding explain analyze output?\n\nRegds\nMallah.\n\ntradein_clients=# SET effective_cache_size = 1000;\nSET\ntradein_clients=# explain analyze SELECT \npri_key,most_recent_edition(pri_key) from profiles where \nsource='BRANDING' limit 100;\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..25.67 rows=100 width=4) (actual time=141.11..154.71 \nrows=100 loops=1)\n -> Seq Scan on profiles (cost=0.00..15754.83 rows=61385 width=4) \n(actual time=141.11..154.51 rows=101 loops=1)\n Filter: (source = 'BRANDING'::character varying)\n Total runtime: 154.84 msec\n(4 rows)\n\ntradein_clients=# SET effective_cache_size = 64000;\nSET\ntradein_clients=# explain analyze SELECT \npri_key,most_recent_edition(pri_key) from profiles where \nsource='BRANDING' limit 100;\n QUERY \nPLAN \n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..25.67 rows=100 width=4) (actual \ntime=587.61..22884.75 rows=100 loops=1)\n -> Seq Scan on profiles (cost=0.00..15754.83 rows=61385 width=4) \n(actual time=587.60..22884.25 rows=101 loops=1)\n Filter: (source = 'BRANDING'::character varying)\n Total runtime: 22884.97 msec\n(4 rows)\n\ntradein_clients=#\n\n\n\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n",
"msg_date": "Thu, 31 Jul 2003 01:08:31 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why performance improvement on converting subselect"
},
{
"msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> Below are the plans for two cases. I dont know how to read them accurately\n> can u please explain them.\n\nWell, they're the same plan, as far as they go. I suppose that the\nruntime difference must come from choosing a different plan inside the\nmost_recent_edition() function, which we cannot see in the explain\noutput. As before, turning on logging of verbose query plans is the\nonly way to look at what the function is doing.\n\n> Also can anyone point to some documentation\n> oriented towards understanding explain analyze output?\n\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=performance-tips.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jul 2003 23:21:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why performance improvement on converting subselect "
}
] |
[
{
"msg_contents": "Shridhar wrote:\n>There is an auto-vacuum daemon in contrib and if I understand it correctly,\n>it is not getting much of a field testing. How about you guys installing it\n>and trying it.\n\nI'm one of those that has been running it; there are numerous test systems\naround where it has been running off and on over the last few months.\n\nThe thing I keep reconstructing is the script to automatically start it\nup. It should get started shortly after the postmaster starts up; based\non variations on the systems I work with, there's a lot of variation on\nhow that gets handled, which hasn't let me see anything stabilize as the\nRight Way to start it up :-(.\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.info/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n\n",
"msg_date": "Tue, 29 Jul 2003 07:56:56 -0400 (EDT)",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum"
}
] |
[
{
"msg_contents": "\"Shridhar Daithankar\" <[email protected]> wrote:\n>It is called as pgavd..\n\nNo, it is called pg_autovacuum\n\n\"pgavd\" was a previous attempt at this that was being distributed on\ngborg. Its parser ussage (I don't recall if it was just lex or whether it\nalso included yacc) made it troublesome to get to run on all platforms.\n\nThe code for pg_autovacuum is in 7.4, in contrib; it also works perfectly\nwell in 7.3, so you should be able to grab the directory and drop the code\ninto the 7.3 contrib area.\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.info/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n\n",
"msg_date": "Tue, 29 Jul 2003 08:03:44 -0400 (EDT)",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autovacuum"
},
{
"msg_contents": "On 29 Jul 2003 at 8:03, Christopher Browne wrote:\n\n> \"Shridhar Daithankar\" <[email protected]> wrote:\n> >It is called as pgavd..\n> \n> No, it is called pg_autovacuum\n> \n> \"pgavd\" was a previous attempt at this that was being distributed on\n> gborg. Its parser ussage (I don't recall if it was just lex or whether it\n> also included yacc) made it troublesome to get to run on all platforms.\n\nYeah.. I wrote that and didn't quite maintain that after. Mathew finished \npg_autovacuum shortly after that.\n\nI recall reading some bug reports and I fixed couple of problems in CVS but \ndidn't bother to make a release after there was a contrib module..\n\n \n> The code for pg_autovacuum is in 7.4, in contrib; it also works perfectly\n> well in 7.3, so you should be able to grab the directory and drop the code\n> into the 7.3 contrib area.\n\nGood to know that..\n\n\nBye\n Shridhar\n\n--\nFlon's Law:\tThere is not now, and never will be, a language in\twhich it is the least bit difficult to write bad programs.\n\n",
"msg_date": "Tue, 29 Jul 2003 18:01:36 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum"
}
] |
[
{
"msg_contents": "\n I have a question regarding the performance of a function returning a \nset of a view as opposed to just selecting the view with the same \nwhere clause. Please, if this should go to the performance list instead, \nlet me know. I'm just wondering about this from the sql end of things. \n\n Here's the environment:\n\n I'm working from PHP, calling on the query. \n\n I have a view that joins 12 tables and orders the results. \n\n From PHP, I do a select on that view with a where clause. \n\n I created a function that queries the view with the where clause \nincluded in the function. The function is returning a setof that \nview taking one variable for the where clause (there are several \nother static wheres in there).\n\n I have found that querying the view with the where clause is \ngiving me quicker results than if I call the function. \n\n The performance hit is tiny, we're talking less than 1/2 a second, \nbut when I've done this sort of thing in Oracle I've seen a performance \nincrease, not a decrease. \n\n Any ideas? \n\n Thanks folks... I'm new to the list. \n\n\n-- \n\nMark Bronnimann\[email protected]\n \n-- Let's organize this thing and take all the fun out of it. --\n",
"msg_date": "Tue, 29 Jul 2003 22:08:59 -0400",
"msg_from": "Mark Bronnimann <[email protected]>",
"msg_from_op": true,
"msg_subject": "function returning setof performance question"
},
{
"msg_contents": "> The performance hit is tiny, we're talking less than 1/2 a second, \n> but when I've done this sort of thing in Oracle I've seen a performance \n> increase, not a decrease. \n\nThats just plain strange (never tried on Oracle). Why in the world\nwould adding the overhead of a function call (with no other changes)\nincrease performance?\n\nThe function has additional overhead in the form of the plpgsql\ninterpreter. You may find a c function will give close to identical\nperformance as with the standard view so long as the query is the same.\n\n\nOne thing to keep in mind is that the view can be rearranged to give a\nbetter query overall. The exact work completed for the view may be\ndifferent when called from within a different SQL statement. Most\nfunctions -- some SQL language based functions are strange this way --\ncannot do this",
"msg_date": "29 Jul 2003 22:28:19 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function returning setof performance question"
},
{
"msg_contents": "\n Thanks for the reply. \n\n I was hoping to eliminate the parse call on the view because I was doing \nthe where clause on the view instead of putting the where in the view. \nIn all, I was hoping to keep a single view called from multiple functions \nwith different where clauses. Yep... I shoulda known better...\n\n Thanks again!\n\n\nAnd Rod Taylor ([email protected]) said...:\n\n> > The performance hit is tiny, we're talking less than 1/2 a second, \n> > but when I've done this sort of thing in Oracle I've seen a performance \n> > increase, not a decrease. \n> \n> Thats just plain strange (never tried on Oracle). Why in the world\n> would adding the overhead of a function call (with no other changes)\n> increase performance?\n> \n> The function has additional overhead in the form of the plpgsql\n> interpreter. You may find a c function will give close to identical\n> performance as with the standard view so long as the query is the same.\n> \n> \n> One thing to keep in mind is that the view can be rearranged to give a\n> better query overall. The exact work completed for the view may be\n> different when called from within a different SQL statement. Most\n> functions -- some SQL language based functions are strange this way --\n> cannot do this\n> \n\n\n\n-- \n\nMark Bronnimann\[email protected]\n \n-- Let's organize this thing and take all the fun out of it. --\n",
"msg_date": "Tue, 29 Jul 2003 22:57:27 -0400",
"msg_from": "Mark Bronnimann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function returning setof performance question"
},
{
"msg_contents": "Mark Bronnimann wrote:\n> I was hoping to eliminate the parse call on the view because I was doing \n> the where clause on the view instead of putting the where in the view. \n> In all, I was hoping to keep a single view called from multiple functions \n> with different where clauses. Yep... I shoulda known better...\n> \n\nIt sounds like you're using a sql function, not a plpgsql function \n(although I don't think you said either way). If you write the function \nin plpgsql it will get parsed and cached on the first call in a \nparticular backend session, which *might* give you improved performance \non subsequent calls, if there are any; are you using persistent connections?\n\nAlternatively, it might work to use a prepared query.\n\nJoe\n\n",
"msg_date": "Tue, 29 Jul 2003 20:51:04 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function returning setof performance question"
},
{
"msg_contents": "Mark,\n\nI'm crossing this over to the performance list; it's really appropriate on \nboth lists. So I'm quoting you in full as well.\n\n> I have a question regarding the performance of a function returning a \n> set of a view as opposed to just selecting the view with the same \n> where clause. Please, if this should go to the performance list instead, \n> let me know. I'm just wondering about this from the sql end of things. \n> \n> Here's the environment:\n> \n> I'm working from PHP, calling on the query. \n> \n> I have a view that joins 12 tables and orders the results. \n> \n> From PHP, I do a select on that view with a where clause. \n> \n> I created a function that queries the view with the where clause \n> included in the function. The function is returning a setof that \n> view taking one variable for the where clause (there are several \n> other static wheres in there).\n> \n> I have found that querying the view with the where clause is \n> giving me quicker results than if I call the function. \n> \n> The performance hit is tiny, we're talking less than 1/2 a second, \n> but when I've done this sort of thing in Oracle I've seen a performance \n> increase, not a decrease. \n> \n> Any ideas? \n\nActually, this is exactly what I'd expect in your situation. The SRF returns \nthe records in a very inefficient fashion: by materializing the result set \nand looping through it to return it to the calling cursor, whereas the View \ndoes set-based operations to grab blocks of data. Also PL/pgSQL as a \nlanguage is not nearly as optimized as Oracle's PL/SQL.\n\nIt's also possible that PostgreSQL handles criteria-filtered views better than \nOracle does. I wouldn't be surprised.\n\nThe only times I can imagine an SRF being faster than a view with a where \nclause are:\n\n1) When you're only returning a small part of a complex result set, e.g. 10 \nrows out of 32,718.\n2) When the view is too complex (e.g. UNION with subselects) for the Postgres \nplanner to \"push down\" the WHERE criteria into the view execution.\n\nI've been planning on testing the performance of SRFs vs. views myself for \npaginated result sets in a web application, but haven't gotten around to it \nsince I can't get my www clients to upgrade to 7.3 ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 30 Jul 2003 10:28:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] function returning setof performance question"
}
] |
[
{
"msg_contents": "Folks,\n\nSorry for the cross-posting!\n\nSomebody approached me with the skeleton of a \"Gettting started with \nPostgreSQL\" page, and now I can't find the e-mail. Who was it? Please send \nagain!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 30 Jul 2003 09:00:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting Started Guide?"
}
] |
[
{
"msg_contents": "I'm looking at doing the example postgresql.conf files for the 7.4 \nrelease. So far, the catagories we have would be a matrix of:\n\n-------------- Large Machine -- Small Machine\nWebserver\nOLAP\nOLTP\nWorkstation\n\nBut likely only one entry for workstation.\n\nanyone have any advice on what they use in which situations and what we \nshould include in the examples? \n\nI'm guessing OLTP needs things like FSM cranked up, \nOLAP (a for analytical) needs more shared buffers and sort memory\nWebserver might be better served just slightly higher values than default \nbut well under those of either OLTP or OLAP...\n\n\n\n\n",
"msg_date": "Wed, 30 Jul 2003 10:59:23 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql.conf"
},
{
"msg_contents": "Scott,\n\n> I'm guessing OLTP needs things like FSM cranked up, \n> OLAP (a for analytical) needs more shared buffers and sort memory\n> Webserver might be better served just slightly higher values than default \n> but well under those of either OLTP or OLAP...\n\nYes. Take sort_mem for example:\nOLTP_SM\t1024\nOLTP_LM\t2048\nOLAP_SM\t4096\nOLAP_LM\t16384\nWWW_SM\t512\nWWW_LM\t1024\nWorkstation\t1024\n\nThe basic idea is:\nMore RAM => more sort_mem\nMore concurrent queries => less sort_mem\nLarger data sets => more sort_mem\nLots of grouped aggregates => more sort_mem\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 30 Jul 2003 10:19:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "On Wed, 2003-07-30 at 11:59, scott.marlowe wrote:\n> I'm looking at doing the example postgresql.conf files for the 7.4 \n> release. So far, the catagories we have would be a matrix of:\n> \n> -------------- Large Machine -- Small Machine\n> Webserver\n> OLAP\n> OLTP\n> Workstation\n> \n> But likely only one entry for workstation.\n\nHow about \"General Purpose\", for DBs that don't fit into any one\ncategory?\n\n> anyone have any advice on what they use in which situations and what we \n> should include in the examples? \n> \n> I'm guessing OLTP needs things like FSM cranked up, \n> OLAP (a for analytical) needs more shared buffers and sort memory\n> Webserver might be better served just slightly higher values than default \n> but well under those of either OLTP or OLAP...\n\n\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "30 Jul 2003 13:32:43 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
}
] |
[
{
"msg_contents": "On Wed, 30 Jul 2003 10:59:23 -0600 (MDT), \"scott.marlowe\" wrote:\n> \n> I'm looking at doing the example postgresql.conf files for the 7.4 \n> release. So far, the catagories we have would be a matrix of:\n> \n> -------------- Large Machine -- Small Machine\n> Webserver\n> OLAP\n> OLTP\n> Workstation\n> \n> But likely only one entry for workstation.\n> \n> anyone have any advice on what they use in which situations and what we \n> should include in the examples? \n> \n> I'm guessing OLTP needs things like FSM cranked up, \n> OLAP (a for analytical) needs more shared buffers and sort memory\n> Webserver might be better served just slightly higher values than default \n> but well under those of either OLTP or OLAP...\n> \n\nAre you planning on differentiating between dedicated machines and multi-server\nmachines? For example, a dedicated database for a webserver would be tuned\ndifferently from a server that was running both the webserver and the database on\nthe same machine. \n\nRobert Treat\n\n--\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Wed, 30 Jul 2003 14:34:13 -0400 (EDT)",
"msg_from": "\"Robert Treat\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "Robert,\n\n> Are you planning on differentiating between dedicated machines and \nmulti-server\n> machines? For example, a dedicated database for a webserver would be tuned\n> differently from a server that was running both the webserver and the \ndatabase on\n> the same machine. \n\nMy thought is when we define \"Small Machine\" in at the top of the file we \ndefine it as \"Small Machine or Multi-Purpose machine\". The settings should \nbe nearly the same for a machine that has only 64MB of *available* RAM as one \nthat has only 96MB of RAM at all.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 30 Jul 2003 11:43:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>Robert,\n>\n> \n>\n>>Are you planning on differentiating between dedicated machines and \n>> \n>>\n>multi-server\n> \n>\n>>machines? For example, a dedicated database for a webserver would be tuned\n>>differently from a server that was running both the webserver and the \n>> \n>>\n>database on\n> \n>\n>>the same machine. \n>> \n>>\n>\n>My thought is when we define \"Small Machine\" in at the top of the file we \n>define it as \"Small Machine or Multi-Purpose machine\". The settings should \n>be nearly the same for a machine that has only 64MB of *available* RAM as one \n>that has only 96MB of RAM at all.\n>\n> \n>\nWe are using postgres 7.3.2 for one of our clients with a smallish db, \non a p4 with 4G ram box which also servers as a web server and a \nweb-based file repository (~ 80G). We are starting another project for \nanother customer, on a p4 with 2G ram, and the db will be larger, approx \n7-15G when finished. This box will also be used as a web server, and as \nit won't go 'live' until the fall, will run 7.4.\n\nI don't know if this is representative of other postgresql installs, but \nI would also put in my vote for the differentiation added, as these are \nnot small machines but are multi-server boxes.\n\nmy 2 cents worth\nRon\n\nPS the new postgresql.conf performance tuning docs are extremely \nhelpful, thanks\n\n",
"msg_date": "Wed, 30 Jul 2003 11:59:42 -0700",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "Ron,\n\n> I don't know if this is representative of other postgresql installs, but \n> I would also put in my vote for the differentiation added, as these are \n> not small machines but are multi-server boxes.\n\nBut how is the Multi-purpose configuration different from the Small Machine \nconfiguration? If the actual settings are the same, we just need to \nexplain somewhere what it means.\n\nI'll argue pretty strongly against including a seperate MP configuration \nbecause it would raise our number of suggested sets to 10 from 7.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 30 Jul 2003 12:10:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "On Wed, 30 Jul 2003, Josh Berkus wrote:\n\n> Ron,\n> \n> > I don't know if this is representative of other postgresql installs, but \n> > I would also put in my vote for the differentiation added, as these are \n> > not small machines but are multi-server boxes.\n> \n> But how is the Multi-purpose configuration different from the Small Machine \n> configuration? If the actual settings are the same, we just need to \n> explain somewhere what it means.\n> \n> I'll argue pretty strongly against including a seperate MP configuration \n> because it would raise our number of suggested sets to 10 from 7.\n\nMaybe we should look at it more from the point of view of how much \nhorsepower (I/O bandwidth, memory, memory bandwidth, cpu bandwidth) is \nleft over for postgresql. After all, a Dual 2.8GHz Opteron with 32 gigs \nof ram is gonna be faster, even if it has apache/LDAP/etc on it than a \ndedicated P100 with 64 meg of ram.\n\nI think the default postgresql.conf should be the one for the 64 Meg free \nPII-300 and below class, and our first step up should assume say, 256 Meg \nram and simple RAID1, approximately 1GHz CPU or less. The high end should \nassume Dual CPUs of 1Ghz or better, 1Gig of ram (or more).\n\nOnce someone is getting into the 8 way Itanium II with 32 Gigs of RAM, \nthe fact that they are doing something that big means that by looking at \nthe default, the workgroup, and the large server configs, they can \nextrapolate and experiment to determine the best settings, and are going \nto need to anyway to get it right.\n\nSo, maybe just a note on which parameters to increase if you have more \nRAM/CPU/I/O bandwidth in the big server example?\n\n",
"msg_date": "Wed, 30 Jul 2003 14:09:32 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "Scott,\n\n> Once someone is getting into the 8 way Itanium II with 32 Gigs of RAM, \n> the fact that they are doing something that big means that by looking at \n> the default, the workgroup, and the large server configs, they can \n> extrapolate and experiment to determine the best settings, and are going \n> to need to anyway to get it right.\n> \n> So, maybe just a note on which parameters to increase if you have more \n> RAM/CPU/I/O bandwidth in the big server example?\n\nAlso, lets not get away from our goal here, which is NOT to provide \ncomprehensive documenation (which is available elsewhere) but to give new \nDBAs a *sample* config that will perform better than the default.\n\nPlus, I would assume that anybody who spent $50,000 on hardware would be smart \nenough to pay a consultant $1000 to tune their database correctly.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 30 Jul 2003 13:42:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": ">So, maybe just a note on which parameters to increase if you have more \n>RAM/CPU/I/O bandwidth in the big server example?\n> \n>\nYes, that would be great. Actually I prefer rules of thumb and examples \nfor each extreme. If possible a little note WHY the parameter should be \ntweaked, and what effects it will have. For example, more note's like \nJosh's below would be a big help...\n\n>Yes. Take sort_mem for example:\n>OLTP_SM\t1024\n>OLTP_LM\t2048\n>OLAP_SM\t4096\n>OLAP_LM\t16384\n>WWW_SM\t512\n>WWW_LM\t1024\n>Workstation\t1024\n>\n>The basic idea is:\n>More RAM => more sort_mem\n>More concurrent queries => less sort_mem\n>Larger data sets => more sort_mem\n>Lots of grouped aggregates => more sort_mem\n>\n> \n>\n\n-- \n[ Christian Fowler\n[ [email protected]\n[ http://www.lulu.com\n\n\n\n\n\n\n\n\n\n\n\n\nSo, maybe just a note on which parameters to increase if you have more \nRAM/CPU/I/O bandwidth in the big server example?\n \n\nYes, that would be great. Actually I prefer rules of thumb and examples\nfor each extreme. If possible a little note WHY the parameter should be\ntweaked, and what effects it will have. For example, more note's like\nJosh's below would be a big help...\n\n\nYes. Take sort_mem for example:\nOLTP_SM\t1024\nOLTP_LM\t2048\nOLAP_SM\t4096\nOLAP_LM\t16384\nWWW_SM\t512\nWWW_LM\t1024\nWorkstation\t1024\n\nThe basic idea is:\nMore RAM => more sort_mem\nMore concurrent queries => less sort_mem\nLarger data sets => more sort_mem\nLots of grouped aggregates => more sort_mem\n\n \n\n\n-- \n[ Christian Fowler\n[ [email protected]\n[ http://www.lulu.com",
"msg_date": "Thu, 31 Jul 2003 10:43:00 -0400",
"msg_from": "cafweb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
}
] |
[
{
"msg_contents": "I have the following schema which I have set up, and I have inserted a\nbunch of entries into it:\n\ncreate domain contact_id as integer;\ncreate sequence contact_seq;\ncreate domain street_address as character varying(64);\ncreate domain name as character varying(64);\ncreate domain country as character(2);\ncreate domain telno as character varying(17);\ncreate domain extension as character varying(12);\ncreate domain public_key as character varying(64);\n\ncreate table contact (\n id contact_id unique not null default nextval('contact_seq'),\n public_key public_key unique not null,\n name character varying (64),\n org character varying (64),\n street1 street_address,\n street2 street_address,\n city character varying(30),\n region character(2),\n country country,\n postcode character varying(15),\n voice telno,\n voice_ext extension,\n fax telno,\n fax_ext extension,\n cell telno,\n email character varying(64),\n created_on timestamp with time zone default now(),\n updated_on timestamp with time zone default now()\n);\n\n\nperformance=# explain analyze select * from contact where country::country\n= 'AD'::character(2);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using contact_country_idx on contact (cost=0.00..1405.24\nrows=360 width=302) (actual time=0.07..5.30 rows=344 loops=1)\n Index Cond: ((country)::bpchar = 'AD'::bpchar)\n Total runtime: 5.72 msec\n(3 rows)\n\nI do the same query using the actual name of the domain, and get the\nfollowing:\n\nperformance=# explain analyze select * from contact where country::country\n= 'AD'::country;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Seq Scan on contact (cost=0.00..8181.81 rows=360 width=302) (actual\ntime=0.04..825.48 rows=344 loops=1)\n Filter: ((country)::text = 'AD'::text)\n Total runtime: 825.85 msec\n(3 rows)\n\nApparently the filter transforms ::country into ::text, essentially losing\nthe domain information, and destroying the ability to detect the index.\n\nI was a little disappointed that\n explain analyze select * from contact where country = 'AD';\ndidn't do well; the value of DOMAINS is seriously injured if their\nmetadata gets lost, for optimization purposes.\n\nVersion 7.3.3, FYI... It's not inconceivable that this might have changed\nin 7.4, which would strengthen the argument that DOMAINs didn't become\nuseful 'til 7.4...\n\n[Rummaging around for 7.4 instance...]\n\nperformance=# explain analyze select * from contact where country =\n'AD'::country;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using contact_country_idx on contact (cost=0.00..547.57\nrows=291 width=516) (actual time=0.24..33.88 rows=143 loops=1)\n Index Cond: ((country)::bpchar = (('AD'::bpchar)::country)::bpchar)\n Total runtime: 34.43 msec\n(3 rows)\n\nLooks like that IS the case; in fact, it gets that same plan even if I\ndon't specify ::country on the country string...\n\nThis is obviously something that has changed _big time_ betwixt 7.3 and\n7.4...\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.info/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n\n",
"msg_date": "Thu, 31 Jul 2003 01:39:44 -0400 (EDT)",
"msg_from": "\"Christopher Browne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible problem with DOMAIN evaluation?"
},
{
"msg_contents": "> Looks like that IS the case; in fact, it gets that same plan even if I\n> don't specify ::country on the country string...\n> \n> This is obviously something that has changed _big time_ betwixt 7.3 and\n> 7.4...\n\nSeveral issues of this type have been fixed during 7.4, though there are\na few left with the pl languages.",
"msg_date": "31 Jul 2003 08:11:58 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible problem with DOMAIN evaluation?"
}
] |
[
{
"msg_contents": "Hi,\n\nHas anyone done any benchmarks to see if PG compiled for the target\nCPU (Athlon XP, P4 Xeon, Pentium III, etc) is significantly more\nefficient than when compiled for the i386? This should also apply\nto Sparc, Alpha, etc.\n\nYes, more RAM and faster disks are extremely important, but when the\nfast disks have sucked the commonly shared data into all that RAM,\nthe RAM cache will be exercise heavily, and that means lots of CPU.\nBusiness logic embedded in SQL and stored procedures, and CRC (or \nMD5) calculations would also be sped up.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "31 Jul 2003 00:45:17 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Targeted CPU compilations"
}
] |
[
{
"msg_contents": "Hi! There:\n\nI ran the same explain analyze on two similar tables. However, the table\nwith less data took much more time than the one with more data. Could anyone\ntell me what happened?\nHere is the explain analyze:\nexplain analyze select productid from tfd_catalog;\nNOTICE: QUERY PLAN:\n\nSeq Scan on tfd_catalog (cost=0.00..43769.82 rows=161282 width=10) (actual\ntime\n=3928.64..12905.76 rows=161282 loops=1)\nTotal runtime: 13240.21 msec\n\n\nexplain analyze select productid from hm_catalog;\nNOTICE: QUERY PLAN:\n\nSeq Scan on hm_catalog (cost=0.00..22181.18 rows=277518 width=9) (actual\ntime=2\n1.32..6420.76 rows=277518 loops=1)\nTotal runtime: 6772.95 msec\n\nThank you for your help\n\nJosh\n\n\n\n",
"msg_date": "Thu, 31 Jul 2003 11:06:09 -0400",
"msg_from": "\"Jianshuo Niu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help on my database performance"
},
{
"msg_contents": "On Thu, 31 Jul 2003 11:06:09 -0400, \"Jianshuo Niu\" <[email protected]>\nwrote:\n>I ran the same explain analyze on two similar tables. However, the table\n>with less data took much more time than the one with more data. Could anyone\n>tell me what happened?\n\n>Seq Scan on tfd_catalog (cost=0.00..43769.82 rows=161282 width=10) (actual\n>time=3928.64..12905.76 rows=161282 loops=1)\n>Total runtime: 13240.21 msec\n>\n>Seq Scan on hm_catalog (cost=0.00..22181.18 rows=277518 width=9) (actual\n>time=21.32..6420.76 rows=277518 loops=1)\n>Total runtime: 6772.95 msec\n\nThe first SELECT takes almost twice the time because tfd_catalog has\nalmost twice as many pages than hm_catalog. This may be due to having\nwider tuples or more dead tuples in tfd_catalog.\n\nIn the former case theres not much you can do.\n\nBut the high startup cost of the first SELECT is a hint for lots of\ndead tuples. So VACUUM FULL ANALYSE might help.\n\nServus\n Manfred\n",
"msg_date": "Thu, 31 Jul 2003 21:13:07 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help on my database performance"
},
{
"msg_contents": "Dear Manfred:\n\nThank you so much for your response. vacuum full anaylze works!\nexplain analyze select count(*) from tfd_catalog ;\nNOTICE: QUERY PLAN:\n\nexplain analyze select count(*) from tfd_catalog ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=15986.02..15986.02 rows=1 width=0) (actual\ntime=1089.99..1089.9\n9 rows=1 loops=1)\n -> Seq Scan on tfd_catalog (cost=0.00..15582.82 rows=161282 width=0)\n(actual\n time=0.11..833.41 rows=161282 loops=1)\nTotal runtime: 1090.51 msec\n\nEXPLAIN -> Seq Scan on tfd_catalog (cost=0.00..15582.82 rows=161282\nwidth=0) (actual\n time=0.11..833.41 rows=161282 loops=1)\nTotal runtime: 1090.51 msec\n\nCould you tell me what does \"Aggregate (cost=15986.02..15986.02 rows=1\nwidth=0) (actual time=1089.99..1089.99 rows=1 loops=1)\" mean? It does not\nshow in my previous report.\n\nI appreicate it.\n\n\nJosh\n\n\n\n",
"msg_date": "Thu, 31 Jul 2003 16:08:11 -0400",
"msg_from": "\"Jianshuo Niu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help on my database performance"
},
{
"msg_contents": "On Thu, 31 Jul 2003 16:08:11 -0400, \"Jianshuo Niu\" <[email protected]>\nwrote:\n>explain analyze select count(*) from tfd_catalog ;\n>NOTICE: QUERY PLAN:\n>\n>Aggregate (cost=15986.02..15986.02 rows=1 width=0)\n> (actual time=1089.99..1089.99 rows=1 loops=1)\n> -> Seq Scan on tfd_catalog (cost=0.00..15582.82 rows=161282 width=0)\n> (actual time=0.11..833.41 rows=161282 loops=1)\n>Total runtime: 1090.51 msec\n\n>Could you tell me what does \"Aggregate (cost=15986.02..15986.02 rows=1\n>width=0) (actual time=1089.99..1089.99 rows=1 loops=1)\" mean? It does not\n>show in my previous report.\n\nIn your first post you did \n\tSELECT productid FROM tfd_catalog;\n\nnow you did\n\tSELECT count(*) FROM tfd_catalog;\n\ncount() is an aggregate function which in your case takes 161282 rows\nas input and produces a single row as output. The \"actual\" part of\nthe \"Aggregate\" line tells you that the first resulting row is\ngenerated 1089.99 milliseconds after query start and the last row (not\nsurprisingly) at the same time. The \"cost\" part contains the\nplanner's estimations for these values.\n\nServus\n Manfred\n",
"msg_date": "Thu, 31 Jul 2003 23:32:04 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help on my database performance"
}
] |
[
{
"msg_contents": "Hello,\n\nA few days ago, I asked for advice on speeding up substring queries on\nthe GENERAL mailing list. Joe Conway helpfully pointed out the ALTER\nTABLE STORAGE EXTERNAL documentation. After doing the alter,\nthe queries got slower! Here is the background:\n\nA freshly loaded database is VACUUM ANALYZEd and I run this query:\n\nexplain analyze select substring(residues from 1000000 for 20000)\nfrom feature where feature_id=1;\n\nwhere feature is a table with ~3 million rows, and residues is a text\ncolumn, where for the majority of the rows of feature, it is null, for a\nlarge minority, it is shortish strings (a few thousand characters), and\nfor 6 rows, residues contains very long strings (~20 million characters\n(it's chromosome DNA sequence from fruit flies)).\n\nHere's the result from the ANALYZE:\n Index Scan using feature_pkey on feature (cost=0.00..3.01 rows=1\nwidth=152) (actual time=388.88..388.89 rows=1 loops=1)\n Index Cond: (feature_id = 1)\n Total runtime: 389.00 msec\n(3 rows)\n\nNow, I'll change the storage:\n\nalter table feature alter column residues set storage external;\n\nTo make sure that really happens, I run an update on feature:\n\nupdate feature set residues = residues where feature_id<8;\n\nand then VACUUM ANALYZE again. I run the same EXPLAIN ANALYZE query as\nabove and get this output:\n\n Index Scan using feature_pkey on feature (cost=0.00..3.01 rows=1\nwidth=153) (actual time=954.13..954.14 rows=1 loops=1)\n Index Cond: (feature_id = 1)\n Total runtime: 954.26 msec\n(3 rows)\n\nWhoa! That's not what I expected, the time to do the query got more\nthat twice as long. So I think, maybe it was just an unlucky section,\nand overall performance will be much better. So I write a perl script\nto do substring queries over all of my chromosomes at various positions\nand lengths (20,000 queries total). For comparison, I also ran the same\nscript, extracting the chromosomes via sql and doing the substring in\nperl. Here's what happened:\n\nsubstr in perl 0.014sec/query\nEXTENDED storage 0.0052sec/query\ndefault storage 0.0040sec/query\n\nSo, what am I missing? Why doesn't EXTENDED storage improve substring\nperformance as it says it should in\nhttp://www.postgresql.org/docs/7.3/interactive/sql-altertable.html ?\n\nI am using an IDE drive on a laptop, running Postgresql 7.3.2 on RedHat\nLinux 7.3 with 512M RAM.\n\nThanks,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "31 Jul 2003 15:26:40 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> explain analyze select substring(residues from 1000000 for 20000)\n> from feature where feature_id=1;\n\n> where feature is a table with ~3 million rows, and residues is a text\n> column, where for the majority of the rows of feature, it is null, for a\n> large minority, it is shortish strings (a few thousand characters), and\n> for 6 rows, residues contains very long strings (~20 million characters\n> (it's chromosome DNA sequence from fruit flies)).\n\nI think the reason uncompressed storage loses here is that the runtime\nis dominated by the shortish strings, and you have to do more I/O to get\nat those if they're uncompressed, negating any advantage from not having\nto fetch all of the longish strings.\n\nOr it could be that there's a bug preventing John Gray's substring-slice\noptimization from getting used. The only good way to tell that I can\nthink of is to rebuild PG with profiling enabled and try to profile the\nexecution both ways. Are you up for that?\n\n(BTW, if you are using a multibyte database encoding, then that's your\nproblem right there --- the optimization is practically useless unless\ncharacter and byte indexes are the same.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Jul 2003 15:44:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings "
},
{
"msg_contents": "On Thu, 2003-07-31 at 15:44, Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n> > explain analyze select substring(residues from 1000000 for 20000)\n> > from feature where feature_id=1;\n> \n> > where feature is a table with ~3 million rows, and residues is a text\n> > column, where for the majority of the rows of feature, it is null, for a\n> > large minority, it is shortish strings (a few thousand characters), and\n> > for 6 rows, residues contains very long strings (~20 million characters\n> > (it's chromosome DNA sequence from fruit flies)).\n> \n> I think the reason uncompressed storage loses here is that the runtime\n> is dominated by the shortish strings, and you have to do more I/O to get\n> at those if they're uncompressed, negating any advantage from not having\n> to fetch all of the longish strings.\n\nI'm not sure I understand what that paragraph means, but it sounds like,\nif PG is working the way it is supposed to, tough for me, right?\n> \n> Or it could be that there's a bug preventing John Gray's substring-slice\n> optimization from getting used. The only good way to tell that I can\n> think of is to rebuild PG with profiling enabled and try to profile the\n> execution both ways. Are you up for that?\n\nI am not against recompiling. I am currently using an RPM version, but\nI could probably recompile; the compilation is probably straight forward\n(adding something like `--with_profiling` to ./configure), but how\nstraight forward is actually doing the profiling? Is there a document\nsomewhere that lays it out?\n> \n> (BTW, if you are using a multibyte database encoding, then that's your\n> problem right there --- the optimization is practically useless unless\n> character and byte indexes are the same.)\n\nI shouldn't be, but since it is an RPM, I can't be sure. It sure would\nbe silly since the strings consist only of [ATGCN].\n\nThanks,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "31 Jul 2003 16:20:39 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain wrote:\n> Index Scan using feature_pkey on feature (cost=0.00..3.01 rows=1\n> width=153) (actual time=954.13..954.14 rows=1 loops=1)\n> Index Cond: (feature_id = 1)\n> Total runtime: 954.26 msec\n> (3 rows)\n> \n> Whoa! That's not what I expected, the time to do the query got more\n> that twice as long. So I think, maybe it was just an unlucky section,\n> and overall performance will be much better. So I write a perl script\n> to do substring queries over all of my chromosomes at various positions\n> and lengths (20,000 queries total). For comparison, I also ran the same\n> script, extracting the chromosomes via sql and doing the substring in\n> perl. Here's what happened:\n\nHmmm, what happens if you compare with a shorter substring, e.g.:\n\nexplain analyze select substring(residues from 1000000 for 2000)\nfrom feature where feature_id=1;\n\nI'm just guessing, but it might be that the extra I/O time to read 20K \nof uncompressed text versus the smaller compressed text is enough to \nswamp the time saved from not needing to uncompress.\n\nAny other ideas out there?\n\nJoe\n\n",
"msg_date": "Thu, 31 Jul 2003 13:31:54 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n>> (BTW, if you are using a multibyte database encoding, then that's your\n>> problem right there --- the optimization is practically useless unless\n>> character and byte indexes are the same.)\n\n> I shouldn't be, but since it is an RPM, I can't be sure.\n\nLook at \"psql -l\" to see what encoding it reports for your database.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Jul 2003 16:32:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings "
},
{
"msg_contents": "On Thu, 2003-07-31 at 16:32, Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n> >> (BTW, if you are using a multibyte database encoding, then that's your\n> >> problem right there --- the optimization is practically useless unless\n> >> character and byte indexes are the same.)\n> \n> > I shouldn't be, but since it is an RPM, I can't be sure.\n> \n> Look at \"psql -l\" to see what encoding it reports for your database.\n> \nI see, encoding is a per database option. Since I've never set it, all\nmy databases use sql_ascii.\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "31 Jul 2003 16:39:47 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "So it is possible that if I had a fast scsi drive, the performance might\nbe better?\n\nOn Thu, 2003-07-31 at 16:31, Joe Conway wrote:\n> Scott Cain wrote:\n> > Index Scan using feature_pkey on feature (cost=0.00..3.01 rows=1\n> > width=153) (actual time=954.13..954.14 rows=1 loops=1)\n> > Index Cond: (feature_id = 1)\n> > Total runtime: 954.26 msec\n> > (3 rows)\n> > \n> > Whoa! That's not what I expected, the time to do the query got more\n> > that twice as long. So I think, maybe it was just an unlucky section,\n> > and overall performance will be much better. So I write a perl script\n> > to do substring queries over all of my chromosomes at various positions\n> > and lengths (20,000 queries total). For comparison, I also ran the same\n> > script, extracting the chromosomes via sql and doing the substring in\n> > perl. Here's what happened:\n> \n> Hmmm, what happens if you compare with a shorter substring, e.g.:\n> \n> explain analyze select substring(residues from 1000000 for 2000)\n> from feature where feature_id=1;\n> \n> I'm just guessing, but it might be that the extra I/O time to read 20K \n> of uncompressed text versus the smaller compressed text is enough to \n> swamp the time saved from not needing to uncompress.\n> \n> Any other ideas out there?\n> \n> Joe\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "31 Jul 2003 16:41:37 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain wrote:\n> So it is possible that if I had a fast scsi drive, the performance might\n> be better?\n\nFaster drives are always better ;-)\n\nDid you try the comparison with shorter substrings? Also, maybe not \nrelated to your specific question, but have you tuned any other \npostgresql.conf settings?\n\nJoe\n\n\n\n",
"msg_date": "Thu, 31 Jul 2003 13:49:33 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> I see, encoding is a per database option. Since I've never set it, all\n> my databases use sql_ascii.\n\nOkay, then you've dodged the obvious bullet; time to try profiling I\nguess. The way I usually do it is (given a clean, configured source\ntree):\n\n\tcd src/backend\n\tgmake PROFILE=\"-pg -DLINUX_PROFILE\" all\n\tinstall resulting postgres executable\n\n(The -DLINUX_PROFILE is unnecessary on non-Linux machines, but AFAIK it\nwon't hurt anything either.) Once you have this installed, each session\nwill end by dumping a gmon.out profile file into the $PGDATA/base/nnn\ndirectory for its database. After you've done a test run, you do\n\n\tgprof path/to/postgres/executable path/to/gmon.out >outputfile\n\nand voila, you have a profile.\n\nIt's a good idea to make sure that you accumulate a fair amount of CPU\ntime in a test session, since the profile depends on statistical\nsampling. I like to have about a minute of accumulated runtime before\ntrusting the results. Repeat the same query multiple times if needed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 Jul 2003 16:58:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings "
},
{
"msg_contents": "Scott Cain wrote:\n> I am not against recompiling. I am currently using an RPM version, but\n> I could probably recompile; the compilation is probably straight forward\n> (adding something like `--with_profiling` to ./configure), but how\n> straight forward is actually doing the profiling? Is there a document\n> somewhere that lays it out?\n> \n\nTry:\nrpm --rebuild --define 'beta 1' postgresql-7.3.4-1PGDG.src.rpm\n\nThis will get you Postgres with --enable-cassert and --enable-debug, and \nit will leave the binaries unstripped. Install the new RPMs.\n\nThen start up psql in one terminal, followed by gdb in another. Attach \nto the postgres backend pid and set a breakpoint at \ntoast_fetch_datum_slice. Then continue the gdb session, and run your sql \nstatement in the psql session. Something like:\n\nsession 1:\n psql mydatabase\n\nsession 2:\n ps -ef | grep postgres\n (note the pid on the postgres backend, *not* the psql session)\n gdb /usr/bin/postgres\n attach <pid-of-backend>\n break toast_fetch_datum_slice\n continue\n\nsession 1:\n select substring(residues from 1000000 for 20000) from feature where\n feature_id=1;\n\nsession 2:\n did we hit the breakpoint in toast_fetch_datum_slice?\n\nHTH,\n\nJoe\n\n",
"msg_date": "Thu, 31 Jul 2003 14:03:05 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Joe,\n\nI'm working on the comparison--I think the best way to do it is to\nreload the original data into a new database and compare them, so it\nwill take a while.\n\nI have tuned postgresql.conf according to the page that everybody around\nhere seems to cite. I'll probably post back tomorrow with another set of\nresults.\n\nAlso, the perl script that did several queries used lengths of 5000,\n10,000 and 40,000 because those are the typical lengths I would use\n(occasionally shorter).\n\nThanks,\nScott\n\nOn Thu, 2003-07-31 at 16:49, Joe Conway wrote:\n> Scott Cain wrote:\n> > So it is possible that if I had a fast scsi drive, the performance might\n> > be better?\n> \n> Faster drives are always better ;-)\n> \n> Did you try the comparison with shorter substrings? Also, maybe not \n> related to your specific question, but have you tuned any other \n> postgresql.conf settings?\n> \n> Joe\n> \n> \n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "31 Jul 2003 17:11:15 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "On Thu, 2003-07-31 at 15:31, Joe Conway wrote:\n> Scott Cain wrote:\n> > Index Scan using feature_pkey on feature (cost=0.00..3.01 rows=1\n> > width=153) (actual time=954.13..954.14 rows=1 loops=1)\n> > Index Cond: (feature_id = 1)\n> > Total runtime: 954.26 msec\n> > (3 rows)\n> > \n> > Whoa! That's not what I expected, the time to do the query got more\n> > that twice as long. So I think, maybe it was just an unlucky section,\n> > and overall performance will be much better. So I write a perl script\n> > to do substring queries over all of my chromosomes at various positions\n> > and lengths (20,000 queries total). For comparison, I also ran the same\n> > script, extracting the chromosomes via sql and doing the substring in\n> > perl. Here's what happened:\n> \n> Hmmm, what happens if you compare with a shorter substring, e.g.:\n> \n> explain analyze select substring(residues from 1000000 for 2000)\n> from feature where feature_id=1;\n> \n> I'm just guessing, but it might be that the extra I/O time to read 20K \n> of uncompressed text versus the smaller compressed text is enough to \n> swamp the time saved from not needing to uncompress.\n\nAre you asking, \"Can his CPU decompress faster than his disks can\nread?\"\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n",
"msg_date": "31 Jul 2003 16:21:38 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Hello,\n\nNote: there is a SQL question way at the bottom of this narrative :-)\n\nLast week I asked about doing substring operations on very long strings\n(>10 million characters). I was given a suggestion to use EXTERNAL\nstorage on the column via the ALTER TABLE ... SET STORAGE command. In\none test case, the performance of substring actually got worse using\nEXTERNAL storage. \n\nIn an effort to find the best way to do this operation, I decided to\nlook at what is my \"worst case\" scenario: the DNA sequence for human\nchromosome 1, which is about 250 million characters long (previous\nstrings where about 20 million characters long). I wrote a perl script\nto do several substring operations over this very long string, with\nsubstring lengths varying between 1000 and 40,000 characters spread out\nover various locations along the string. While EXTENDED storage won in\nthis case, it was a hollow victory: 38 seconds per operation versus 40\nseconds, both of which are way too long to for an interactive\napplication.\n\nTime for a new method. A suggestion from my boss was to \"shred\" the DNA\ninto smallish chunks and a column giving offsets from the beginning of\nthe string, so that it can be reassembled when needed. Here is the test\ntable:\n\nstring=> \\d dna\n Table \"public.dna\"\n Column | Type | Modifiers\n---------+---------+-----------\n foffset | integer |\n pdna | text |\nIndexes: foffset_idx btree (foffset)\n\nIn practice, there would also be a foreign key column to give the\nidentifier of the dna. Then I wrote the following function (here's the\nSQL part promised above):\n\nCREATE OR REPLACE FUNCTION dna_string (integer, integer) RETURNS TEXT AS '\nDECLARE\n smin ALIAS FOR $1;\n smax ALIAS FOR $2;\n longdna TEXT := '''';\n dna_row dna%ROWTYPE;\n dnastring TEXT;\n firstchunk INTEGER;\n lastchunk INTEGER;\n in_longdnastart INTEGER;\n in_longdnalen INTEGER;\n chunksize INTEGER;\nBEGIN\n SELECT INTO chunksize min(foffset) FROM dna WHERE foffset>0;\n firstchunk := chunksize*(smin/chunksize);\n lastchunk := chunksize*(smax/chunksize);\n \n in_longdnastart := smin % chunksize;\n in_longdnalen := smax - smin + 1;\n \n FOR dna_row IN\n SELECT * FROM dna\n WHERE foffset >= firstchunk AND foffset <= lastchunk\n ORDER BY foffset\n LOOP\n\n longdna := longdna || dna_row.pdna;\n END LOOP;\n \n dnastring := substring(longdna FROM in_longdnastart FOR in_longdnalen);\n \n RETURN dnastring;\nEND;\n' LANGUAGE 'plpgsql';\n\nSo here's the question: I've never written a plpgsql function before, so\nI don't have much experience with it; is there anything obviously wrong\nwith this function, or are there things that could be done better? At\nleast this appears to work and is much faster, completing substring\noperations like above in about 0.27 secs (that's about two orders of\nmagnitude improvement!)\n\nThanks,\nScott\n\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "04 Aug 2003 11:25:36 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
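For readers following along, a hedged sketch of how the chunked table above could be loaded and used. The chunk size of 2000 and the single-row source table longdna(dna) (which appears later in this thread's test data) are illustrative assumptions, and generate_series()/LATERAL require a reasonably modern PostgreSQL; on the 7.3-era servers discussed here the chunking would more likely be done from the perl loader:

    -- shred one long string into fixed-size chunks with 0-based offsets
    INSERT INTO dna (foffset, pdna)
    SELECT (n - 1) * 2000,
           substring(l.dna from (n - 1) * 2000 + 1 for 2000)
    FROM longdna l
    CROSS JOIN LATERAL generate_series(1, (length(l.dna) + 1999) / 2000) AS s(n);

    -- a 20,000-character slice then comes back via the plpgsql helper
    SELECT dna_string(6002000, 6021999);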
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> At least this appears to work and is much faster, completing substring\n> operations like above in about 0.27 secs (that's about two orders of\n> magnitude improvement!)\n\nI find it really, really hard to believe that a crude reimplementation\nin plpgsql of the TOAST concept could beat the built-in implementation\nat all, let alone beat it by two orders of magnitude.\n\nEither there's something unrealistic about your testing of the\ndna_string function, or your original tests are not causing TOAST to be\ninvoked in the expected way, or there's a bug we need to fix. I'd\nreally like to see some profiling of the poor-performing\nexternal-storage case, so we can figure out what's going on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Aug 2003 11:53:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings "
},
{
"msg_contents": "On Monday 04 August 2003 16:25, Scott Cain wrote:\n[snip]\n> In an effort to find the best way to do this operation, I decided to\n> look at what is my \"worst case\" scenario: the DNA sequence for human\n> chromosome 1, which is about 250 million characters long (previous\n> strings where about 20 million characters long). I wrote a perl script\n> to do several substring operations over this very long string, with\n> substring lengths varying between 1000 and 40,000 characters spread out\n> over various locations along the string. While EXTENDED storage won in\n> this case, it was a hollow victory: 38 seconds per operation versus 40\n> seconds, both of which are way too long to for an interactive\n> application.\n>\n> Time for a new method. A suggestion from my boss was to \"shred\" the DNA\n> into smallish chunks and a column giving offsets from the beginning of\n> the string, so that it can be reassembled when needed. Here is the test\n> table:\n>\n> string=> \\d dna\n> Table \"public.dna\"\n> Column | Type | Modifiers\n> ---------+---------+-----------\n> foffset | integer |\n> pdna | text |\n> Indexes: foffset_idx btree (foffset)\n\n[snipped plpgsql function which stitches chunks together and then substrings]\n\n> So here's the question: I've never written a plpgsql function before, so\n> I don't have much experience with it; is there anything obviously wrong\n> with this function, or are there things that could be done better? At\n> least this appears to work and is much faster, completing substring\n> operations like above in about 0.27 secs (that's about two orders of\n> magnitude improvement!)\n\nYou might want some checks to make sure that smin < smax, otherwise looks like \nit does the job in a good clean fashion.\n\nGlad to hear it's going to solve your problems. Two things you might want to \nbear in mind:\n1. There's probably a \"sweet spot\" where the chunk size interacts well with \nyour data, usage patterns and PGs backend to give you peak performance. \nYou'll have to test.\n2. If you want to search for a sequence you'll need to deal with the case \nwhere it starts in one chunk and ends in another.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 4 Aug 2003 16:55:48 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "On Mon, 2003-08-04 at 11:55, Richard Huxton wrote:\n> On Monday 04 August 2003 16:25, Scott Cain wrote:\n> [snip]\n> > [snip]\n> \n> You might want some checks to make sure that smin < smax, otherwise looks like \n> it does the job in a good clean fashion.\n\nGood point--smin < smax generally by virtue of the application using the\ndatabase, but I shouldn't assume that will always be the case.\n> \n> Glad to hear it's going to solve your problems. Two things you might want to \n> bear in mind:\n> 1. There's probably a \"sweet spot\" where the chunk size interacts well with \n> your data, usage patterns and PGs backend to give you peak performance. \n> You'll have to test.\n\nYes, I had a feeling that was probably the case-- since this is an open\nsource project, I will need to write directions for installers on\npicking a reasonable chunk size.\n\n> 2. If you want to search for a sequence you'll need to deal with the case \n> where it starts in one chunk and ends in another.\n\nI forgot about searching--I suspect that application is why I faced\nopposition for shredding in my schema development group. Maybe I should\npush that off to the file system and use grep (or BLAST). Otherwise, I\ncould write a function that would search the chunks first, then after\nfailing to find the substring in those, I could start sewing the chunks\ntogether to look for the query string. That could get ugly (and\nslow--but if the user knows that and expects it to be slow, I'm ok with\nthat).\n\nThanks,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "04 Aug 2003 12:14:06 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "On 4 Aug 2003 at 12:14, Scott Cain wrote:\n> I forgot about searching--I suspect that application is why I faced\n> opposition for shredding in my schema development group. Maybe I should\n> push that off to the file system and use grep (or BLAST). Otherwise, I\n> could write a function that would search the chunks first, then after\n> failing to find the substring in those, I could start sewing the chunks\n> together to look for the query string. That could get ugly (and\n> slow--but if the user knows that and expects it to be slow, I'm ok with\n> that).\n\nI assume your DNA sequence is compacted. Your best bet would be to fetch them \nfrom database and run blast on them in client memory. No point duplicating \nblast functionality. Last I tried it beat every technique of text searching \nwhen heuristics are involved.\n\nBye\n Shridhar\n\n--\nThere are two types of Linux developers - those who can spell, andthose who \ncan't. There is a constant pitched battle between the two.(From one of the post-\n1.1.54 kernel update messages posted to c.o.l.a)\n\n",
"msg_date": "Mon, 04 Aug 2003 21:49:15 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "On Mon, 2003-08-04 at 11:53, Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n> > At least this appears to work and is much faster, completing substring\n> > operations like above in about 0.27 secs (that's about two orders of\n> > magnitude improvement!)\n> \n> I find it really, really hard to believe that a crude reimplementation\n> in plpgsql of the TOAST concept could beat the built-in implementation\n> at all, let alone beat it by two orders of magnitude.\n> \n> Either there's something unrealistic about your testing of the\n> dna_string function, or your original tests are not causing TOAST to be\n> invoked in the expected way, or there's a bug we need to fix. I'd\n> really like to see some profiling of the poor-performing\n> external-storage case, so we can figure out what's going on.\n> \nI was really hoping for a \"Good job and glad to hear it\" from you :-)\n\nI don't think there is anything unrealistic about my function or its\ntesting, as it is very much along the lines of the types of things we do\nnow. I will really try to do some profiling this week to help figure\nout what is going on.\n\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "04 Aug 2003 12:25:41 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "> > 2. If you want to search for a sequence you'll need to deal with the case\n> > where it starts in one chunk and ends in another.\n>\n> I forgot about searching--I suspect that application is why I faced\n> opposition for shredding in my schema development group. Maybe I should\n> push that off to the file system and use grep (or BLAST). Otherwise, I\n> could write a function that would search the chunks first, then after\n> failing to find the substring in those, I could start sewing the chunks\n> together to look for the query string. That could get ugly (and\n> slow--but if the user knows that and expects it to be slow, I'm ok with\n> that).\n\nIf you know the max length of the sequences being searched for, and this is much less than the chunk size, then you could simply\nhave the chunks overlap by that much, thus guaranteeing every substring will be found in its entirety in at least one chunk.\n\n\n",
"msg_date": "Mon, 4 Aug 2003 17:56:00 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
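A hedged sketch of Matt's overlap idea. The chunk size of 2000, the 100-character maximum pattern length, and the sample pattern are all illustrative assumptions, and the loading query again leans on generate_series()/LATERAL from newer PostgreSQL releases:

    -- each chunk also carries the first 100 characters of the next one, so
    -- any pattern of length <= 100 is wholly contained in at least one chunk
    INSERT INTO dna (foffset, pdna)
    SELECT (n - 1) * 2000,
           substring(l.dna from (n - 1) * 2000 + 1 for 2000 + 100)
    FROM longdna l
    CROSS JOIN LATERAL generate_series(1, (length(l.dna) + 1999) / 2000) AS s(n);

    -- searching then never needs to stitch chunks back together
    SELECT foffset + position('ACGTACGT' in pdna) AS hit_position
    FROM dna
    WHERE pdna LIKE '%ACGTACGT%';

Reassembly code such as dna_string() would, of course, have to trim the 100-character overlap when concatenating consecutive chunks.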
{
"msg_contents": "Scott Cain wrote:\n> On Mon, 2003-08-04 at 11:53, Tom Lane wrote:\n>>I find it really, really hard to believe that a crude reimplementation\n>>in plpgsql of the TOAST concept could beat the built-in implementation\n>>at all, let alone beat it by two orders of magnitude.\n>>\n>>Either there's something unrealistic about your testing of the\n>>dna_string function, or your original tests are not causing TOAST to be\n>>invoked in the expected way, or there's a bug we need to fix. I'd\n>>really like to see some profiling of the poor-performing\n>>external-storage case, so we can figure out what's going on.\n> \n> I was really hoping for a \"Good job and glad to hear it\" from you :-)\n> \n> I don't think there is anything unrealistic about my function or its\n> testing, as it is very much along the lines of the types of things we do\n> now. I will really try to do some profiling this week to help figure\n> out what is going on.\n\nIs there a sample table schema and dataset available (external-storage \ncase) that we can play with?\n\nJoe\n\n",
"msg_date": "Mon, 04 Aug 2003 13:29:56 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Joe,\n\nGood idea, since I may not get around to profiling it this week. I\ncreated a dump of the data set I was working with. It is available at\nhttp://www.gmod.org/string_dump.bz2\n\nThanks,\nScott\n\n\nOn Mon, 2003-08-04 at 16:29, Joe Conway wrote:\n> Is there a sample table schema and dataset available (external-storage \n> case) that we can play with?\n> \n> Joe\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "05 Aug 2003 11:01:19 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is\n109M.\n\nScott\n\nOn Tue, 2003-08-05 at 11:01, Scott Cain wrote:\n> Joe,\n> \n> Good idea, since I may not get around to profiling it this week. I\n> created a dump of the data set I was working with. It is available at\n> http://www.gmod.org/string_dump.bz2\n> \n> Thanks,\n> Scott\n> \n> \n> On Mon, 2003-08-04 at 16:29, Joe Conway wrote:\n> > Is there a sample table schema and dataset available (external-storage \n> > case) that we can play with?\n> > \n> > Joe\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "05 Aug 2003 11:05:08 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain wrote:\n> Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is\n> 109M.\n\nThanks. I'll grab a copy from home later today and see if I can find \nsome time to poke at it.\n\nJoe\n\n\n\n",
"msg_date": "Tue, 05 Aug 2003 08:19:51 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Scott Cain <[email protected]> writes:\n> A few days ago, I asked for advice on speeding up substring queries on\n> the GENERAL mailing list. Joe Conway helpfully pointed out the ALTER\n> TABLE STORAGE EXTERNAL documentation. After doing the alter,\n> the queries got slower! Here is the background:\n\nAh-hah, I've sussed it ... you didn't actually change the storage\nrepresentation. You wrote:\n\n> Now, I'll change the storage:\n>\talter table feature alter column residues set storage external;\n> To make sure that really happens, I run an update on feature:\n>\tupdate feature set residues = residues where feature_id<8;\n> and then VACUUM ANALYZE again.\n\nThis sounds good --- in fact, I think we all just accepted it when we\nread it --- but in fact *that update didn't decompress the toasted data*.\nThe tuple toaster sees that the same toasted value is being stored back\ninto the row, and so it just re-uses the existing toasted data; it does\nnot stop to notice that the column storage preference has changed.\n\nTo actually get the storage to change, you need to feed the value\nthrough some function or operator that will decompress it. Then it\nwon't get recompressed when it's stored. One easy way (since this\nis a text column) is\n\n\tupdate feature set residues = residues || '' where feature_id<8;\n\nTo verify that something really happened, try doing VACUUM VERBOSE on\nthe table before and after. The quoted number of tuples in the toast\ntable should rise substantially.\n\nI did the following comparisons on the test data you made available,\nusing two tables in which one has default storage and one has \"external\"\n(not compressed) storage:\n\nscott=# \\timing\nTiming is on.\nscott=# select length (dna) from edna;\n length\n-----------\n 245203899\n(1 row)\n\nTime: 1.05 ms\nscott=# select length (dna) from ddna;\n length\n-----------\n 245203899\n(1 row)\n\nTime: 1.11 ms\nscott=# select length(substring(dna from 1000000 for 20000)) from edna;\n length\n--------\n 20000\n(1 row)\n\nTime: 30.43 ms\nscott=# select length(substring(dna from 1000000 for 20000)) from ddna;\n length\n--------\n 20000\n(1 row)\n\nTime: 37383.02 ms\nscott=#\n\nSo it looks like the external-storage optimization for substring() does\nwork as expected, once you get the data into the right format ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Aug 2003 16:36:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings "
},
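Putting Tom's explanation into one runnable sequence (a sketch using the feature.residues column discussed in this thread; the WHERE clause is whatever limits the update to the rows being converted):

    ALTER TABLE feature ALTER COLUMN residues SET STORAGE EXTERNAL;

    -- force each value back through the toaster so it is re-stored
    -- uncompressed; a plain SET residues = residues re-uses the old
    -- compressed toast data and changes nothing
    UPDATE feature SET residues = residues || '' WHERE feature_id < 8;

    -- the toast table's tuple count should rise substantially
    VACUUM VERBOSE feature;
    ANALYZE feature;

Only rows rewritten this way get the fast substring path; rows inserted after the ALTER pick up EXTERNAL storage automatically.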
{
"msg_contents": "Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n> \n>>A few days ago, I asked for advice on speeding up substring queries on\n>>the GENERAL mailing list. Joe Conway helpfully pointed out the ALTER\n>>TABLE STORAGE EXTERNAL documentation. After doing the alter,\n>>the queries got slower! Here is the background:\n> \n> Ah-hah, I've sussed it ... you didn't actually change the storage\n> representation. You wrote:\n\nYeah, I came to the same conclusion this morning (update longdna set dna \n= dna || '';), but it still seems that the chunked table is very \nslightly faster than the substring on the externally stored column:\n\ndna=# explain analyze select pdna from dna where foffset > 6000000 and \nfoffset < 6024000;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Index Scan using foffset_idx on dna (cost=0.00..4.22 rows=14 \nwidth=32) (actual time=0.06..0.16 rows=11 loops=1)\n Index Cond: ((foffset > 6000000) AND (foffset < 6024000))\n Total runtime: 0.27 msec\n(3 rows)\n\ndna=# explain analyze select pdna from dna where foffset > 6000000 and \nfoffset < 6024000;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Index Scan using foffset_idx on dna (cost=0.00..4.22 rows=14 \nwidth=32) (actual time=0.07..0.16 rows=11 loops=1)\n Index Cond: ((foffset > 6000000) AND (foffset < 6024000))\n Total runtime: 0.25 msec\n(3 rows)\n\ndna=# explain analyze select substr(dna,6002000,20000) from longdna;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Seq Scan on longdna (cost=0.00..1.01 rows=1 width=32) (actual \ntime=0.46..0.47 rows=1 loops=1)\n Total runtime: 0.58 msec\n(2 rows)\n\ndna=# explain analyze select substr(dna,6002000,20000) from longdna;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Seq Scan on longdna (cost=0.00..1.01 rows=1 width=32) (actual \ntime=0.23..0.24 rows=1 loops=1)\n Total runtime: 0.29 msec\n(2 rows)\n\nI ran each command twice after starting psql to observe the effects of \ncaching.\n\nHowever with the provided sample data, longdna has only one row, and dna \nhas 122,540 rows, all of which are chunks of the one longdna row. I \nwould tend to think that if you had 1000 or so longdna records indexed \non some id column, versus 122,540,000 dna chunks indexed on both an id \nand segment column, the substring from longdna would win.\n\nJoe\n\n",
"msg_date": "Wed, 06 Aug 2003 13:51:13 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> Tom Lane wrote:\n>> Ah-hah, I've sussed it ... you didn't actually change the storage\n>> representation. You wrote:\n\n> Yeah, I came to the same conclusion this morning (update longdna set dna \n> = dna || '';), but it still seems that the chunked table is very \n> slightly faster than the substring on the externally stored column:\n\n> dna=# explain analyze select pdna from dna where foffset > 6000000 and \n> foffset < 6024000;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------\n> Index Scan using foffset_idx on dna (cost=0.00..4.22 rows=14 \n> width=32) (actual time=0.07..0.16 rows=11 loops=1)\n> Index Cond: ((foffset > 6000000) AND (foffset < 6024000))\n> Total runtime: 0.25 msec\n> (3 rows)\n\n> dna=# explain analyze select substr(dna,6002000,20000) from longdna;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------\n> Seq Scan on longdna (cost=0.00..1.01 rows=1 width=32) (actual \n> time=0.23..0.24 rows=1 loops=1)\n> Total runtime: 0.29 msec\n> (2 rows)\n\nThis isn't a totally fair comparison, though, since the second case is\nactually doing the work of assembling the chunks into a single string,\nwhile the first is not. Data-copying alone would probably account for\nthe difference.\n\nI would expect that the two would come out to essentially the same cost\nwhen fairly compared, since the dna table is nothing more nor less than\na hand implementation of the TOAST concept. The toaster's internal\nfetching of toasted data segments ought to be equivalent to the above\nindexscan. The toaster would have a considerable edge on Scott's\nimplementation when it came to assembling the chunks, since it's working\nin C and not in plpgsql, but the table access costs ought to be just\nabout the same.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Aug 2003 17:08:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL storage and substring on long strings "
},
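In other words, a fairer benchmark would charge both sides for assembling the same 20,000 characters, something like the following sketch (dna_string() is Scott's helper from earlier in the thread, and the bounds assume its arguments are inclusive positions):

    EXPLAIN ANALYZE SELECT dna_string(6002000, 6021999);
    EXPLAIN ANALYZE SELECT substr(dna, 6002000, 20000) FROM longdna;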
{
"msg_contents": "> snipped much discussion about EXTERNAL storage and substring speed\n\nJoe and Tom,\n\nThanks for all of your help; I went back to my (nearly) production\ndatabase, and executed the `update feature set residues = residues\n||'';` and then redid my benchmark. Here's a summary of the results:\n\nsubstr in perl 0.83sec/op\nsubstring on default text column 0.24sec/op\nsubstring on EXTERNAL column 0.0069sec/op\n\nThere's that 2 orders of magnitude improvement I was looking for!\n\nThanks again,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "07 Aug 2003 17:15:51 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Tom Lane wrote:\n> Scott Cain <[email protected]> writes:\n>> At least this appears to work and is much faster, completing substring\n>> operations like above in about 0.27 secs (that's about two orders of\n>> magnitude improvement!)\n> \n> I find it really, really hard to believe that a crude reimplementation\n> in plpgsql of the TOAST concept could beat the built-in implementation\n> at all, let alone beat it by two orders of magnitude.\n> \n> Either there's something unrealistic about your testing of the\n> dna_string function, or your original tests are not causing TOAST to be\n> invoked in the expected way, or there's a bug we need to fix. I'd\n> really like to see some profiling of the poor-performing\n> external-storage case, so we can figure out what's going on.\n\nDoesn't look that unrealistic to me. A plain text based substring \nfunction will reassemble the whole beast first before cutting out the \nwanted part. His manually chunked version will read only those chunks \nneeded. Considering that he's talking about retrieving a few thousand \nchars from a hundreds of MB size string ...\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Thu, 14 Aug 2003 11:02:53 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n> Doesn't look that unrealistic to me. A plain text based substring \n> function will reassemble the whole beast first before cutting out the \n> wanted part. His manually chunked version will read only those chunks \n> needed.\n\nSo does substr(), if applied to EXTERNAL (non-compressed) toasted text.\nSee John Gray's improvements a release or two back.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Aug 2003 14:00:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings "
},
{
"msg_contents": "Agreed. When I actually Did It Right (tm), EXTERNAL storage gave\nsimilar (probably better) performance as my shredding method, without\nall the hoops to breakup and reassemble the string.\n\nScott\n\nOn Thu, 2003-08-14 at 14:00, Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n> > Doesn't look that unrealistic to me. A plain text based substring \n> > function will reassemble the whole beast first before cutting out the \n> > wanted part. His manually chunked version will read only those chunks \n> > needed.\n> \n> So does substr(), if applied to EXTERNAL (non-compressed) toasted text.\n> See John Gray's improvements a release or two back.\n> \n> \t\t\tregards, tom lane\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. [email protected]\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n",
"msg_date": "14 Aug 2003 14:11:07 -0400",
"msg_from": "Scott Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Jan Wieck <[email protected]> writes:\n>> Doesn't look that unrealistic to me. A plain text based substring \n>> function will reassemble the whole beast first before cutting out the \n>> wanted part. His manually chunked version will read only those chunks \n>> needed.\n> \n> So does substr(), if applied to EXTERNAL (non-compressed) toasted text.\n> See John Gray's improvements a release or two back.\n\nDuh ... of course, EXTERNAL is uncompressed ... where do I have my head?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Thu, 14 Aug 2003 14:55:38 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] EXTERNAL storage and substring on long strings"
}
]
[
{
"msg_contents": "Why is pgsql estimating a cost of 100000000 for retire_today in this\nquery? I analyzed it, and there's nothing very odd about it, other than\nit's a temp table.\n\nBTW, I had to set enable_seqscan=false to get this, otherwise it wants\nto seqscan ogr_results, which is rather painful since it occupies 350k\npages.\n\nogr=# explain analyze select distinct stub_id, nodecount, id from (select distinct stub_id, nodecount, o.id, r.stats_id from retire_today r, ogr_results o where o.id=r.id) o where exists (select * from ogr_results o2 where o2.stub_id=o.stub_id and o2.nodecount=o.nodecount and o2.id=o.stats_id);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=101349702.99..101350940.01 rows=12370 width=24) (actual time=422568.80..422568.82 rows=1 loops=1)\n -> Sort (cost=101349702.99..101350012.25 rows=123702 width=24) (actual time=422568.79..422568.80 rows=1 loops=1)\n Sort Key: stub_id, nodecount, id\n -> Subquery Scan o (cost=101323777.30..101339240.00 rows=123702 width=24) (actual time=388142.51..422568.59 rows=1 loops=1)\n Filter: (subplan)\n -> Unique (cost=101323777.30..101339240.00 rows=123702 width=24) (actual time=12456.49..13570.23 rows=56546 loops=1)\n -> Sort (cost=101323777.30..101326869.84 rows=1237016 width=24) (actual time=12456.47..12758.86 rows=56546 loops=1)\n Sort Key: o.stub_id, o.nodecount, o.id, r.stats_id\n -> Nested Loop (cost=100000000.00..101198600.98 rows=1237016 width=24) (actual time=93.57..11747.10 rows=56546 loops=1)\n -> Seq Scan on retire_today r (cost=100000000.00..100000001.93 rows=93 width=8) (actual time=0.03..1.78 rows=93 loops=1)\n -> Index Scan using ogr_results__id on ogr_results o (cost=0.00..12721.90 rows=13301 width=16) (actual time=18.03..118.43 rows=608 loops=93)\n Index Cond: (o.id = \"outer\".id)\n SubPlan\n -> Index Scan using results_id_count on ogr_results o2 (cost=0.00..3.03 rows=1 width=24) (actual time=7.21..7.21 rows=0 loops=56546)\n Index Cond: ((stub_id = $0) AND (nodecount = $1))\n Filter: (id = $2)\n Total runtime: 422591.48 msec\n(17 rows)\n\n Table \"pg_temp_2.retire_today\"\n Column | Type | Modifiers \n----------+-----------------------+-----------\n email | character varying(64) | not null\n id | integer | not null\n stats_id | integer | not null\n\nogr=# select * from pg_stats where tablename='retire_today';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n------------+--------------+----------+-----------+-----------+------------+--------------------------------------+-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n pg_temp_1 | retire_today | email | 0 | 23 | -1 | | | {[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected]} | 0.894781\n pg_temp_1 | retire_today | id | 0 | 4 | -1 | | | {17193,153860,220570,315863,351077,382582,405976,413303,418589,423335,424575} | 0.17536\n pg_temp_1 | retire_today | stats_id | 0 | 4 | -0.946237 | {142167,391154,402835,422577,423809} | 
{0.0215054,0.0215054,0.0215054,0.0215054,0.0215054} | {136669,373730,415341,421924,423416,423553,423959,424089,424354,424609,424976} | -0.132419\n pg_temp_2 | retire_today | email | 0 | 23 | -1 | | | {[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected]} | 0.894781\n pg_temp_2 | retire_today | id | 0 | 4 | -1 | | | {17193,153860,220570,315863,351077,382582,405976,413303,418589,423335,424575} | 0.17536\n pg_temp_2 | retire_today | stats_id | 0 | 4 | -0.946237 | {142167,391154,402835,422577,423809} | {0.0215054,0.0215054,0.0215054,0.0215054,0.0215054} | {136669,373730,415341,421924,423416,423553,423959,424089,424354,424609,424976} | -0.132419\n(6 rows)\n\nogr=# select * from pg_class where relname='retire_today';\n relname | relnamespace | reltype | relowner | relam | relfilenode | relpages | reltuples | reltoastrelid | reltoastidxid | relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules | relhassubclass | relacl \n--------------+--------------+-----------+----------+-------+-------------+----------+-----------+---------------+---------------+-------------+-------------+---------+----------+-----------+-------------+----------+----------+---------+------------+------------+-------------+----------------+--------\n retire_today | 16765 | 636609103 | 101 | 0 | 636609102 | 1 | 93 | 0 | 0 | f | f | r | 3 | 0 | 0 | 0 | 0 | 0 | f | f | f | f | \n retire_today | 411964549 | 636609142 | 110 | 0 | 636609141 | 1 | 93 | 0 | 0 | f | f | r | 3 | 0 | 0 | 0 | 0 | 0 | f | f | f | f | \n retire_today | 478929703 | 632973603 | 101 | 0 | 632973602 | 0 | 0 | 0 | 0 | f | f | r | 3 | 0 | 0 | 0 | 0 | 0 | f | f | f | f | \n(3 rows)\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 31 Jul 2003 14:51:45 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd explain estimate"
},
{
"msg_contents": "On Thu, Jul 31, 2003 at 02:51:45PM -0500, Jim C. Nasby wrote:\n> Why is pgsql estimating a cost of 100000000 for retire_today in this\n> query? I analyzed it, and there's nothing very odd about it, other than\n> it's a temp table.\n> \n> BTW, I had to set enable_seqscan=false to get this, otherwise it wants\n\nThat's why. When you do that, it just automatically adds 100000000\nto the cost of a seqscan. It can't really disable it, because there\nmight be no other way to pull the result.\n\nIf you really needed to set enable_seqscan=false (did you really? \nAre you sure that's not the cheapest way?), you might want to\ninvestigate expainding the statistics on the indexed column,\nincreasing the correlation through clustering, and other such tricks.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 31 Jul 2003 16:59:21 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd explain estimate"
},
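A sketch of the tuning Andrew is pointing at, using the table and index names from Jim's plan (the statistics target of 1000 is illustrative, though it matches what Jim reports doing later in the thread; CLUSTER rewrites the table under an exclusive lock, so it is not a free experiment on a 350k-page table):

    -- keep a much larger sample for the join column's histogram
    ALTER TABLE ogr_results ALTER COLUMN id SET STATISTICS 1000;
    ANALYZE ogr_results;

    -- optionally raise the column's physical correlation by clustering
    -- on the index the planner should be using (7.3/7.4-era syntax)
    CLUSTER ogr_results__id ON ogr_results;

    -- then re-run the problem query under EXPLAIN ANALYZE with
    -- seqscans allowed again
    SET enable_seqscan = on;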
{
"msg_contents": "On Thu, Jul 31, 2003 at 04:59:21PM -0400, Andrew Sullivan wrote:\n> On Thu, Jul 31, 2003 at 02:51:45PM -0500, Jim C. Nasby wrote:\n> If you really needed to set enable_seqscan=false (did you really? \n> Are you sure that's not the cheapest way?), you might want to\n> investigate expainding the statistics on the indexed column,\n> increasing the correlation through clustering, and other such tricks.\n \nWell, if I don't do this it wants to seqscan a table that occupies 350k\npages, instead of pulling a couple thousand rows. I started running it\nwith the seqscan and it's already taken way longer than it does if I\ndisable seqscan.\n\nI guess I'll try expanding the statistics.\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 31 Jul 2003 17:59:59 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd explain estimate"
},
{
"msg_contents": "On Thu, Jul 31, 2003 at 05:59:59PM -0500, Jim C. Nasby wrote:\n> \n> Well, if I don't do this it wants to seqscan a table that occupies 350k\n> pages, instead of pulling a couple thousand rows. I started running it\n> with the seqscan and it's already taken way longer than it does if I\n> disable seqscan.\n\nThat was indeed the question. \n\nIf it uses a seqscan when it ought not to do, then there's something\nwrong with the statistics, or you haven't vacuum analysed correctly,\nor your table needs vacuum full (is it really 350k pages, or is that\nmostly dead space?), &c. -- all the usual bad-seqscan candidates.\n\nenable_seqscan=off is probably not a good strategy for any moderately\ncomplicated query. If the planner were perfect, of course, you'd\nnever need it at all.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 1 Aug 2003 08:16:12 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd explain estimate"
},
{
"msg_contents": "On Fri, Aug 01, 2003 at 08:16:12AM -0400, Andrew Sullivan wrote:\n> On Thu, Jul 31, 2003 at 05:59:59PM -0500, Jim C. Nasby wrote:\n> > \n> > Well, if I don't do this it wants to seqscan a table that occupies 350k\n> > pages, instead of pulling a couple thousand rows. I started running it\n> > with the seqscan and it's already taken way longer than it does if I\n> > disable seqscan.\n> \n> That was indeed the question. \n> \n> If it uses a seqscan when it ought not to do, then there's something\n> wrong with the statistics, or you haven't vacuum analysed correctly,\n> or your table needs vacuum full (is it really 350k pages, or is that\n> mostly dead space?), &c. -- all the usual bad-seqscan candidates.\n> \n> enable_seqscan=off is probably not a good strategy for any moderately\n> complicated query. If the planner were perfect, of course, you'd\n> never need it at all.\n \nSet statistics on the ID colum to 1000, vacuum analyze, and it's good to\ngo now. Thanks for your help!\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sat, 2 Aug 2003 10:59:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd explain estimate"
}
] |