threads (list, lengths 1–275) |
---|
[
{
"msg_contents": "Hi List,\n\n\n Could anyone tell me a documentation that explains the \" explain \" result\nand how to analyze it ?\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n",
"msg_date": "Mon, 8 Sep 2003 11:29:21 -0300",
"msg_from": "Rhaoni Chiu Pereira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Explain Doc"
},
{
"msg_contents": "On Mon, 08-sep-2003 at 16:29, Rhaoni Chiu Pereira wrote:\n\n> Could anyone tell me a documentation that explains the \" explain \" result\n> and how to analyze it ?\n> \n\nhttp://archives.postgresql.org/pgsql-performance/2003-09/msg00000.php\n\nRegards,\n\n-- \nAlberto Caso Palomino\nAdaptia Soluciones Integrales\nhttp://www.adaptia.net\[email protected]",
"msg_date": "Tue, 09 Sep 2003 00:04:40 +0200",
"msg_from": "Alberto Caso <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Explain Doc"
}
] |
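[Editor's note] The thread above only points at an archive link, so here is a minimal, hypothetical sketch of the kind of output the original poster is asking how to read. The table, index and numbers below are invented for illustration and are not taken from the thread.

```sql
-- EXPLAIN shows the planner's cost estimates; EXPLAIN ANALYZE also runs
-- the query and adds the measured times (hypothetical table and values).
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

--  Index Scan using orders_customer_idx on orders
--        (cost=0.00..8.27 rows=1 width=64)        <- planner's estimate
--        (actual time=0.05..0.09 rows=3 loops=1)  <- measured when run
--  Total runtime: 0.21 msec

-- Reading it: compare the estimated "rows" with the actual "rows"; a large
-- mismatch usually means the statistics are stale (run ANALYZE).
```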
[
{
"msg_contents": "\n\n",
"msg_date": "Mon, 8 Sep 2003 19:38:23 -0500",
"msg_from": "=?iso-8859-1?Q?Odiel_Le=F3n?= <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Tom,\n\nBack in the 7.0 days, \n\nWHERE EXISTS (SELECT * FROM a WHERE condition)\n\nwas significantly slower on broad tables than\n\nWHERE EXISTS (SELECT small_col FROM a WHERE condition)\n\nIs this still true, or something that's been fixed in the last 3 versions? \nJoe Celko is making fun of me because Oracle doesn't have this performance \nissue.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 8 Sep 2003 20:02:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Quick question"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Back in the 7.0 days, \n> WHERE EXISTS (SELECT * FROM a WHERE condition)\n> was significantly slower on broad tables than\n> WHERE EXISTS (SELECT small_col FROM a WHERE condition)\n> Is this still true, or something that's been fixed in the last 3 versions? \n\nIt's still true that all the sub-select's output columns will be\nevaluated. Given that this happens for at most one row, I'm not sure\nhow significant the hit really is. But it's annoying, seeing that the\nouter EXISTS doesn't care what the column values are.\n\n> Joe Celko is making fun of me because Oracle doesn't have this performance \n> issue.\n\nPerhaps Joe can tell us exactly which part of SQL92 says it's okay not\nto evaluate side-effect-producing functions in the targetlist of an\nEXISTS subselect.\n\nI would like to make the system change the targetlist to just \"SELECT 1\"\nin an EXISTS subquery. But I'm slightly concerned about changing the\nsemantics of existing queries. If someone can produce proof that this\nis allowed (or even better, required) by the SQL spec, it'd be easier...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Sep 2003 23:14:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick question "
}
] |
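[Editor's note] A short illustration of the workaround implied above: in the releases being discussed, every column in the EXISTS subselect's target list is evaluated for the matching row, so writing a constant target list avoids that work. Table and column names here are hypothetical.

```sql
-- Evaluates every output column of the wide table for the matched row:
SELECT o.id FROM orders o
WHERE EXISTS (SELECT * FROM customers c WHERE c.id = o.customer_id);

-- Same result, but the target list is a constant, so nothing extra is
-- computed -- effectively the "SELECT 1" rewrite Tom Lane mentions:
SELECT o.id FROM orders o
WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id);
```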
[
{
"msg_contents": "Hi All,\n Is it usual that the following query to take 22 secs with the machine I have?\nAny other reason?\nHope I have provided all the details need.\n\nThanks,\nWaruna\n\nTables:\n/* -------------------------------------------------------- \n Table structure for table \"tvDiary\" \n-------------------------------------------------------- */\nCREATE TABLE \"tvDiary\" (\n \"member\" int4 NOT NULL,\n \"timeSlot\" int2 NOT NULL references \"timeSlot\"(\"code\"),\n \"channel\" varchar(4) NOT NULL references \"tvChannel\"(\"code\"),\n \"date\" date NOT NULL,\n CONSTRAINT \"tvDiary_pkey\" PRIMARY KEY (\"date\", \"member\", \"timeSlot\")\n);\nIndexed on \"date\"\n\n/* -------------------------------------------------------- \n Table structure for table \"mDiary\" \n-------------------------------------------------------- */\nCREATE TABLE \"mDiary\" (\n \"member\" int4 NOT NULL,\n \"area\" char(1) NOT NULL,\n \"district\" int2 references \"district\"(\"code\"),\n \"date\" date NOT NULL,\n CONSTRAINT \"mDiary_pkey\" PRIMARY KEY (\"date\", \"member\")\n);\nIndexed on \"date\"\n\n# Records\ntvDiary : 7 300 000\nmDiary : 850 000\n\nmachine : \nCeleron 1.0GHz RAM - 390MB , 40 GB IDE HDD\nRedHat Linux 9\n\nkernel.shmmni = 4096\nkernel.shmall = 33554432\nkernel.shmmax = 134217728\n\npostgres 7.3.4\n\nshared_buffers = 8192\nsort_mem = 65536\n\nQuery:\n\nSELECT COUNT(td.member) AS count, td.date AS date, td.\"timeSlot\" AS \"timeSlot\", td.channel AS channel, \n tg.district AS district,tg.area AS area \nFROM \"tvDiary\" td ,(SELECT DISTINCT(md.member) AS member, md.area AS area, md.district as district \n FROM \"mDiary\" md \n WHERE (md.date BETWEEN '20020301' AND '20020330') ) AS tg \nWHERE(td.date BETWEEN '20020301' AND '20020330') AND (td.member=tg.member) \nGROUP BY td.date,td.\"timeSlot\", td.channel,tg.district,tg.area;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n-----------------------\n Aggregate (cost=91790.44..100942.65 rows=52298 width=28) (actual time=18396.42..21764.44 rows=57478 loops=1)\n -> Group (cost=91790.44..99635.19 rows=522983 width=28) (actual time=18396.34..21158.23 rows=281733 loops=1)\n -> Sort (cost=91790.44..93097.90 rows=522983 width=28) (actual time=18396.30..18588.91 rows=281733 loops=1)\n Sort Key: td.date, td.\"timeSlot\", td.channel, tg.district, tg.area\n -> Merge Join (cost=34290.10..42116.42 rows=522983 width=28) (actual time=8159.30..10513.62 rows=281733 ops=1)\n Merge Cond: (\"outer\".member = \"inner\".member)\n -> Sort (cost=29121.48..29755.35 rows=253551 width=17) (actual time=6752.36..6933.38 rows=282552 loops=1)\n Sort Key: td.member\n -> Index Scan using d_tvdiary_key on \"tvDiary\" td (cost=0.00..6362.82 rows=253551 width=17) (actual time=95.80..4766.25 rows=282587\n loops=1)\n Index Cond: ((date >= '2002-03-01'::date) AND (date <= '2002-03-30'::date))\n -> Sort (cost=5168.63..5179.26 rows=4251 width=11) (actual time=1406.88..1590.72 rows=281955 loops=1)\n Sort Key: tg.member\n -> Subquery Scan tg (cost=4487.31..4912.42 rows=4251 width=11) (actual time=1228.55..1397.20 rows=2348 loops=1)\n -> Unique (cost=4487.31..4912.42 rows=4251 width=11) (actual time=1228.52..1390.12 rows=2348 loops=1)\n -> Sort (cost=4487.31..4593.59 rows=42511 width=11) (actual time=1228.51..1257.87 rows=46206 loops=1)\n Sort Key: member, area, district\n -> Index Scan using d_mdiary_key on \"mDiary\" md (cost=0.00..1219.17 rows=42511 
width=11) (actual time=60.20..750.\n67 rows=46206 loops=1)\n Index Cond: ((date >= '2002-03-01'::date) AND (date <= '2002-03-30'::date))\n Total runtime: 21992.24 msec\n(19 rows)\n\n\n\n\n\n\n\n\nHi All,\n Is it usual that the following query to take \n22 secs with the machine I have?\nAny other reason?\nHope I have provided all the details \nneed.\n\n \nThanks,\nWaruna\n \nTables:\n/* \n-------------------------------------------------------- Table \nstructure for table \"tvDiary\" \n-------------------------------------------------------- */CREATE TABLE \n\"tvDiary\" ( \"member\" int4 NOT NULL, \n\"timeSlot\" int2 NOT NULL references \"timeSlot\"(\"code\"), \n\"channel\" varchar(4) NOT NULL references \"tvChannel\"(\"code\"), \n\"date\" date NOT NULL, CONSTRAINT \"tvDiary_pkey\" PRIMARY KEY \n(\"date\", \"member\", \"timeSlot\"));Indexed on \"date\"\n \n/* \n-------------------------------------------------------- Table \nstructure for table \"mDiary\" \n-------------------------------------------------------- */CREATE TABLE \n\"mDiary\" ( \"member\" int4 NOT NULL, \"area\" \nchar(1) NOT NULL, \"district\" int2 references \n\"district\"(\"code\"), \"date\" date NOT NULL, \nCONSTRAINT \"mDiary_pkey\" PRIMARY KEY (\"date\", \"member\"));Indexed on \n\"date\"\n \n# RecordstvDiary : 7 300 000mDiary : 850 \n000\n \nmachine : Celeron 1.0GHz RAM - 390MB , 40 GB \nIDE HDDRedHat Linux 9\n \nkernel.shmmni = 4096kernel.shmall = \n33554432kernel.shmmax = 134217728\n \npostgres 7.3.4\n \nshared_buffers = 8192sort_mem = \n65536\n \nQuery:\n \nSELECT COUNT(td.member) AS count, td.date AS date, \ntd.\"timeSlot\" AS \"timeSlot\", td.channel AS \nchannel, \n tg.district AS district,tg.area \nAS area FROM \"tvDiary\" td ,(SELECT DISTINCT(md.member) AS member, md.area AS \narea, md.district as district \n \n FROM \"mDiary\" md \n \n WHERE (md.date BETWEEN '20020301' AND \n'20020330') ) AS tg WHERE(td.date BETWEEN '20020301' AND '20020330') AND \n(td.member=tg.member) GROUP BY td.date,td.\"timeSlot\", \ntd.channel,tg.district,tg.area;\n \n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate \n(cost=91790.44..100942.65 rows=52298 width=28) (actual time=18396.42..21764.44 \nrows=57478 loops=1) -> Group \n(cost=91790.44..99635.19 rows=522983 width=28) (actual time=18396.34..21158.23 \nrows=281733 loops=1) \n-> Sort (cost=91790.44..93097.90 rows=522983 width=28) (actual \ntime=18396.30..18588.91 rows=281733 \nloops=1) \nSort Key: td.date, td.\"timeSlot\", td.channel, tg.district, \ntg.area \n-> Merge Join (cost=34290.10..42116.42 rows=522983 width=28) \n(actual time=8159.30..10513.62 rows=281733 \nops=1) \nMerge Cond: (\"outer\".member = \n\"inner\".member) \n-> Sort (cost=29121.48..29755.35 rows=253551 width=17) (actual \ntime=6752.36..6933.38 rows=282552 \nloops=1) \nSort Key: \ntd.member \n-> Index Scan using d_tvdiary_key on \"tvDiary\" td \n(cost=0.00..6362.82 rows=253551 width=17) (actual time=95.80..4766.25 \nrows=282587 loops=1) \nIndex Cond: ((date >= '2002-03-01'::date) AND (date <= \n'2002-03-30'::date)) \n-> Sort (cost=5168.63..5179.26 rows=4251 width=11) (actual \ntime=1406.88..1590.72 rows=281955 \nloops=1) \nSort Key: \ntg.member \n-> Subquery Scan tg (cost=4487.31..4912.42 rows=4251 width=11) \n(actual time=1228.55..1397.20 rows=2348 \nloops=1) \n-> Unique (cost=4487.31..4912.42 rows=4251 width=11) (actual \ntime=1228.52..1390.12 rows=2348 
\nloops=1) \n-> Sort (cost=4487.31..4593.59 rows=42511 width=11) (actual \ntime=1228.51..1257.87 rows=46206 \nloops=1) \nSort Key: member, area, \ndistrict \n-> Index Scan using d_mdiary_key on \"mDiary\" md \n(cost=0.00..1219.17 rows=42511 width=11) (actual time=60.20..750.67 \nrows=46206 \nloops=1) \nIndex Cond: ((date >= '2002-03-01'::date) AND (date <= \n'2002-03-30'::date)) Total runtime: 21992.24 msec(19 \nrows)",
"msg_date": "Tue, 9 Sep 2003 12:27:27 +0600",
"msg_from": "\"Waruna Geekiyanage\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query?"
}
] |
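[Editor's note] This post received no reply in the archive segment above, so the following is only a sketch of the kind of experiment commonly suggested for a plan dominated by large sorts: raise sort_mem for the session, optionally steer the planner, and compare EXPLAIN ANALYZE output. The query is the poster's own; the SET values are assumptions, not recommendations.

```sql
-- sort_mem is per-sort and settable per session in 7.3, so it can be
-- raised just for this test (the value is an assumption):
SET sort_mem = 131072;          -- 128 MB, session-local
SET enable_mergejoin = off;     -- optionally force a hash-join plan to compare

EXPLAIN ANALYZE
SELECT COUNT(td.member) AS count, td.date, td."timeSlot", td.channel,
       tg.district, tg.area
FROM "tvDiary" td
JOIN (SELECT DISTINCT md.member, md.area, md.district
      FROM "mDiary" md
      WHERE md.date BETWEEN '20020301' AND '20020330') AS tg
  ON td.member = tg.member
WHERE td.date BETWEEN '20020301' AND '20020330'
GROUP BY td.date, td."timeSlot", td.channel, tg.district, tg.area;
```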
[
{
"msg_contents": "would it cause problem in postgres DB if /var/lib/psql partition is mounted \nwith \"noatime\"?\n\nTIA \nJM\n",
"msg_date": "Tue, 9 Sep 2003 16:12:48 +0800",
"msg_from": "JM <[email protected]>",
"msg_from_op": true,
"msg_subject": "increase performancr with \"noatime\"?"
},
{
"msg_contents": "On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:\n> would it cause problem in postgres DB if /var/lib/psql partition is mounted \n> with \"noatime\"?\n\nNo; in fact, that's been suggested by many people. I don't know\nwhether anyone's done any tests to prove that it helps.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 9 Sep 2003 08:37:57 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase performancr with \"noatime\"?"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:\n> \n>>would it cause problem in postgres DB if /var/lib/psql partition is mounted \n>>with \"noatime\"?\n> \n> \n> No; in fact, that's been suggested by many people. I don't know\n> whether anyone's done any tests to prove that it helps.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php#results\n\nYou can see, from my _limited_ testing, that it doesn't seem to help enough\nto be worth worrying about. In this test, it actually seems to hurt\nperformance. Read the whole page, though. These tests are heavy on the\nwriting, it's quite possible that it could improve things if your database\nis a heavy read scenerio.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Tue, 09 Sep 2003 08:44:37 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase performancr with \"noatime\"?"
},
{
"msg_contents": ">>>>> \"AS\" == Andrew Sullivan <[email protected]> writes:\n\nAS> On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:\n>> would it cause problem in postgres DB if /var/lib/psql partition is mounted \n>> with \"noatime\"?\n\nAS> No; in fact, that's been suggested by many people. I don't know\nAS> whether anyone's done any tests to prove that it helps.\n\nI honestly can't expect it to be much of an improvement since the\nnumber of files involved compared with the size of the files is\nminimal. However, if you're opening/closing the files often it might\ncause you problems. I think in the normal case where it does matter\nyou have pooled connections so the open/close happens rarely.\n\nOf course, if I had a good synthetic workload to pound on my DB, I'd\nrun a test... Sean?\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 09 Sep 2003 12:08:55 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase performancr with \"noatime\"?"
}
] |
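[Editor's note] For readers unfamiliar with the option being discussed, this is what a noatime mount looks like; the device, mount point and filesystem type below are assumptions, not taken from the thread.

```
# Hypothetical /etc/fstab entry -- device and filesystem type are assumptions
/dev/sda3   /var/lib/pgsql   ext3   defaults,noatime   1 2

# Or applied to an already-mounted filesystem:
#   mount -o remount,noatime /var/lib/pgsql
```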
[
{
"msg_contents": "\n Hello,\n\n I have small table (up to 10000 rows) and every row will be updated\nonce per minute. Table also has \"before update on each row\" trigger\nwritten in plpgsql. But trigger 99.99% of the time will do nothing\nto the database. It will just compare old and new values in the row\nand those values almost always will be identical.\n\n Now I tried simple test and was able to do 10000 updates on 1000\nrows table in ~30s. That's practically enough but I'd like to have\nmore room to slow down.\n Also best result I achieved by doing commit+vacuum every ~500\nupdates.\n\n How can I improve performance and will version 7.4 bring something\nvaluable for my task? Rewrite to some other scripting language is not\na problem. Trigger is simple enough.\n\n Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up\n10 times.\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Tue, 9 Sep 2003 15:40:31 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need advice about triggers"
},
{
"msg_contents": "On Tuesday 09 September 2003 13:40, Mindaugas Riauba wrote:\n> Hello,\n>\n> I have small table (up to 10000 rows) and every row will be updated\n> once per minute. Table also has \"before update on each row\" trigger\n> written in plpgsql. But trigger 99.99% of the time will do nothing\n> to the database. It will just compare old and new values in the row\n> and those values almost always will be identical.\n>\n> Now I tried simple test and was able to do 10000 updates on 1000\n> rows table in ~30s. That's practically enough but I'd like to have\n> more room to slow down.\n> Also best result I achieved by doing commit+vacuum every ~500\n> updates.\n>\n> How can I improve performance and will version 7.4 bring something\n> valuable for my task? Rewrite to some other scripting language is not\n> a problem. Trigger is simple enough.\n\nWell, try it without the trigger. If performance improves markedly, it might \nbe worth rewriting in C.\n\nIf not, you're probably saturating the disk I/O - using iostat/vmstat will let \nyou see what's happening. If it is your disks, you might see if moving the \nWAL onto a separate drive would help, or check the archives for plenty of \ndiscussion about raid setups.\n\n> Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up\n> 10 times.\n\nWell effective_cache_size is useful for reads, but won't help with writing. \nYou might want to look at wal_buffers and see if increasing that helps, but I \ncouldn't say for sure.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 9 Sep 2003 14:14:23 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "> How can I improve performance and will version 7.4 bring something\n> valuable for my task? Rewrite to some other scripting language is not\n> a problem. Trigger is simple enough.\n\nYour best bet is to have additional clients connected to the database\nrequesting work. Approx NUMCPUs * 2 + 1 seems to be ideal. (+1 to ensure\nthere is something waiting when the others complete. *2 to ensure that\nyou can have 50% reading from disk, 50% doing calculations)\n\nYou may simply want to put vacuum into a loop of it's own so it executes\n~1 second after the previous run finished. Work should still be going\non even though vacuum is running.",
"msg_date": "Tue, 09 Sep 2003 09:28:07 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "\n> > How can I improve performance and will version 7.4 bring something\n> > valuable for my task? Rewrite to some other scripting language is not\n> > a problem. Trigger is simple enough.\n>\n> Well, try it without the trigger. If performance improves markedly, it\nmight\n> be worth rewriting in C.\n\n Nope. Execution time is practically the same without trigger.\n\n> If not, you're probably saturating the disk I/O - using iostat/vmstat will\nlet\n> you see what's happening. If it is your disks, you might see if moving the\n> WAL onto a separate drive would help, or check the archives for plenty of\n> discussion about raid setups.\n\n Bottleneck in this case is CPU. postmaster process uses almost 100% of\nCPU.\n\n> > Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up\n> > 10 times.\n> Well effective_cache_size is useful for reads, but won't help with\nwriting.\n> You might want to look at wal_buffers and see if increasing that helps,\nbut I\n> couldn't say for sure.\n\n Disk I/O should not be a problem in this case. vmstat shows ~300kb/s write\nactivity.\n\n Mindaugas\n\n",
"msg_date": "Tue, 9 Sep 2003 16:33:32 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "On Tue, 9 Sep 2003, Mindaugas Riauba wrote:\n\n> \n> Hello,\n> \n> I have small table (up to 10000 rows) and every row will be updated\n> once per minute. Table also has \"before update on each row\" trigger\n> written in plpgsql. But trigger 99.99% of the time will do nothing\n> to the database. It will just compare old and new values in the row\n> and those values almost always will be identical.\n\nIf the rows aren't going to actually change all that often, perhaps you \ncould program your trigger to just silently drop the update, i.e. only \nchange the rows that need updating and ignore the rest? That should speed \nthings up. Unless I'm misunderstanding your needs here.\n\n",
"msg_date": "Tue, 9 Sep 2003 07:39:06 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n>> Well, try it without the trigger. If performance improves markedly, it\n>> might be worth rewriting in C.\n\n> Nope. Execution time is practically the same without trigger.\n\n>> If not, you're probably saturating the disk I/O -\n\n> Bottleneck in this case is CPU. postmaster process uses almost 100% of\n> CPU.\n\nThat seems very odd. Updates should be I/O intensive, not CPU\nintensive. I wouldn't have been surprised to hear of a plpgsql trigger\nconsuming lots of CPU, but without it, I'm not sure where the time is\ngoing. Can you show us an EXPLAIN ANALYZE result for a typical update\ncommand?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Sep 2003 10:40:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers "
},
{
"msg_contents": "On Tuesday 09 September 2003 14:33, Mindaugas Riauba wrote:\n> > Well, try it without the trigger. If performance improves markedly, it\n> might\n> > be worth rewriting in C.\n>\n> Nope. Execution time is practically the same without trigger.\n\nOK - no point in rewriting it then.\n\n> > If not, you're probably saturating the disk I/O - using iostat/vmstat\n> > will\n>\n> let\n>\n> > you see what's happening. If it is your disks, you might see if moving\n> > the WAL onto a separate drive would help, or check the archives for\n> > plenty of discussion about raid setups.\n>\n> Bottleneck in this case is CPU. postmaster process uses almost 100% of\n> CPU.\n\n> Disk I/O should not be a problem in this case. vmstat shows ~300kb/s\n> write activity.\n\nHmm - I must admit I wasn't expecting that. Closest I can get on my test \nmachine here: AMD 400MHz / 256MB / IDE disk / other stuff running is about 20 \nsecs.\n\nI've attached the perl script I used - what sort of timings does it give you?\n\n-- \n Richard Huxton\n Archonet Ltd",
"msg_date": "Tue, 9 Sep 2003 15:40:51 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "Mindaugas Riauba kirjutas T, 09.09.2003 kell 15:40:\n> Hello,\n> \n> I have small table (up to 10000 rows) and every row will be updated\n> once per minute. Table also has \"before update on each row\" trigger\n> written in plpgsql. But trigger 99.99% of the time will do nothing\n> to the database. It will just compare old and new values in the row\n> and those values almost always will be identical.\n> \n> Now I tried simple test and was able to do 10000 updates on 1000\n> rows table in ~30s. That's practically enough but I'd like to have\n> more room to slow down.\n\nIs it 10000 *rows* or 10000*1000 = 10 000 000 *rows* updated ?\n\nWhen I run a simple update 10 times on 1000 rows (with no trigger, which\nyou claim to take about the same time) it took 0.25 sec.\n\n> Also best result I achieved by doing commit+vacuum every ~500\n> updates.\n\nIt seems like you are updating more than one row at each update ?\n\n---------------\nHannu\n\n",
"msg_date": "Tue, 09 Sep 2003 18:18:03 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
},
{
"msg_contents": "\n> >> Well, try it without the trigger. If performance improves markedly, it\n> >> might be worth rewriting in C.\n>\n> > Nope. Execution time is practically the same without trigger.\n>\n> >> If not, you're probably saturating the disk I/O -\n>\n> > Bottleneck in this case is CPU. postmaster process uses almost 100% of\n> > CPU.\n>\n> That seems very odd. Updates should be I/O intensive, not CPU\n> intensive. I wouldn't have been surprised to hear of a plpgsql trigger\n> consuming lots of CPU, but without it, I'm not sure where the time is\n> going. Can you show us an EXPLAIN ANALYZE result for a typical update\n> command?\n\n Two EXPLAIN ANALYZE below. One is before another is after REINDEX. It\nseems\nthat REINDEX before updates helps. Time went down to ~17s. Also CPU is not\nat\n100%. vmstat output is below (machine is 2xCPU so 40% load means 80% on one\nCPU).\n\n So the solution would be REINDEX before updates and VACUUM at the same\ntime?\nWithout REINDEX performance slowly degrades.\n\n Mindaugas\n\n\nrouter_db=# explain analyze update ifdata set ifspeed=256000,\nifreason='12121', iflastupdate=CURRENT_TIMESTAMP WHERE clientid='#0003904#';\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------\n Index Scan using ifdata_clientid_key on ifdata (cost=0.00..5.64 rows=1\nwidth=116) (actual time=0.17..0.36 rows=1 loops=1)\n Index Cond: (clientid = '#0003904#'::character varying)\n Total runtime: 1.70 msec\n(3 rows)\n\nrouter_db=# reindex table ifdata;\nREINDEX\nrouter_db=# explain analyze update ifdata set ifspeed=256000,\nifreason='12121', iflastupdate=CURRENT_TIMESTAMP WHERE clientid='#0003904#';\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------\n Index Scan using ifdata_clientid_key on ifdata (cost=0.00..5.65 rows=1\nwidth=116) (actual time=0.06..0.07 rows=1 loops=1)\n Index Cond: (clientid = '#0003904#'::character varying)\n Total runtime: 0.47 msec\n(3 rows)\n\n----------------------------------------------------------------------------\n---\n\n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us sy\nid\n 0 0 0 5048 20616 273556 1614692 0 0 4 3 2 0 0 1\n3\n 0 0 0 5048 20612 273556 1614692 0 0 0 0 109 8 0 0\n100\n 0 0 0 5048 20612 273556 1614692 0 0 0 168 144 20 0 0\n100\n 1 0 0 5048 19420 273556 1614612 0 0 0 192 123 4120 35 2\n63\n 0 1 1 5048 19420 273572 1614652 0 0 0 672 144 4139 32 2\n66\n 1 0 0 5048 19420 273580 1614660 0 0 0 360 125 4279 33 12\n55\n 1 0 0 5048 19420 273580 1614724 0 0 0 272 119 5887 41 2\n57\n 1 0 0 5048 19420 273580 1614716 0 0 0 488 124 4871 40 1\n59\n\n",
"msg_date": "Wed, 10 Sep 2003 13:21:55 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need advice about triggers "
},
{
"msg_contents": "Mindaugas Riauba kirjutas K, 10.09.2003 kell 13:21:\n\n> \n> router_db=# explain analyze update ifdata set ifspeed=256000,\n> ifreason='12121', iflastupdate=CURRENT_TIMESTAMP WHERE clientid='#0003904#';\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ------------------------------------------------\n> Index Scan using ifdata_clientid_key on ifdata (cost=0.00..5.64 rows=1\n> width=116) (actual time=0.17..0.36 rows=1 loops=1)\n> Index Cond: (clientid = '#0003904#'::character varying)\n> Total runtime: 1.70 msec\n> (3 rows)\n\ncould you try the same query on similar table, where clientid is int4 ?\n\nis it faster ?\n\ndoes the performance degrade at a slower rate?\n\n---------------\nHannu\n\n",
"msg_date": "Wed, 10 Sep 2003 18:05:35 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need advice about triggers"
}
] |
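[Editor's note] A minimal sketch of the "silently drop the update" idea suggested by scott.marlowe, written against 7.3-era plpgsql and using the ifdata columns that appear later in the thread. Whether skipping the row entirely (and therefore also skipping the iflastupdate refresh) matches the poster's needs is an assumption, and the compared columns are assumed NOT NULL, since plain = comparisons do not treat NULLs as equal.

```sql
-- In a BEFORE UPDATE row trigger, RETURN NULL suppresses the update for
-- that row (no new tuple, nothing extra to vacuum); RETURN NEW lets it run.
CREATE OR REPLACE FUNCTION ifdata_skip_noop() RETURNS trigger AS '
BEGIN
    IF NEW.ifspeed = OLD.ifspeed AND NEW.ifreason = OLD.ifreason THEN
        RETURN NULL;    -- identical values: suppress the update entirely
    END IF;
    RETURN NEW;         -- something really changed: let the update proceed
END;
' LANGUAGE plpgsql;

CREATE TRIGGER ifdata_skip_noop_trg
    BEFORE UPDATE ON ifdata
    FOR EACH ROW EXECUTE PROCEDURE ifdata_skip_noop();
```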
[
{
"msg_contents": "I've got an application that needs to chunk through ~2GB of data. The \ndata is ~7000 different sets of 300 records each. I put all of the data \ninto a postgres database but that doesn't look like its going to work \nbecause of how the data lives on the disk.\n\nWhen the app runs on a 500 Mhz G4 the CPU is 30% idle... the processing \napplication eating about 50%, postgres taking about 10%. I don't know \nhow to tell for sure but it looks like postgres is blocking on disk i/o.\n\nFor a serial scan of the postgres table (e.g. \"select * from \ndatatable\"), \"iostat\" reports 128K per transfer, ~140 tps and between \n14 and 20 MB/s from disk0 - with postgres taking more than 90% CPU.\n\nIf I then run a loop asking for only the 300 records at a time (e.g. \n\"select from datatable where group_id='123'\"), iostat reports 8k per \ntransfer, ~200 tps, less than 1MB/s throughput and postgres taking ~10% \nCPU. (There is an index defined for group_id and EXPLAIN says it's \nbeing used.)\n\nSo I'm guessing that postgres is jumping all over the disk and my app \nis just waiting on data. Is there a way to fix this? Or should I move \nto a scientific data file format like NCSA's HDF?\n\nI need to push new values into each of the 7000 datasets once or twice \na day and then read-process the entire data set as many times as I can \nin a 12 hour period - nearly every day of the year. Currently there is \nonly single table but I had planned to add several others.\n\nThanks,\n- Chris\n\n",
"msg_date": "Tue, 9 Sep 2003 17:49:02 -0600",
"msg_from": "Chris Huston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reading data in bulk - help?"
},
{
"msg_contents": "Chris,\n\n> I've got an application that needs to chunk through ~2GB of data. The \n> data is ~7000 different sets of 300 records each. I put all of the data \n> into a postgres database but that doesn't look like its going to work \n> because of how the data lives on the disk.\n\nYour problem is curable through 4 steps:\n\n1) adjust your postgresql.conf to appropriate levels for memory usage.\n\n2) if those sets of 300 are blocks in some contiguous order, then cluster them \nto force their physical ordering on disk to be the same order you want to \nread them in. This will require you to re-cluster whenever you change a \nsignificant number of records, but from the sound of it that happens in \nbatches.\n\n3) Get better disks, preferrably a RAID array, or just very fast scsi if the \ndatabase is small. If you're budget-constrained, Linux software raid (or \nBSD raid) on IDE disks is cheap. What kind of RAID depends on what else \nyou'll be doing with the app; RAID 5 is better for read-only access, RAID 1+0 \nis better for read-write.\n\n4) Make sure that you aren't dumping the data to the same disk postgreSQL \nlives on! Preferably, make sure that your swap partition is on a different \ndisk/array from postgresql. If the computing app is complex and requires \ndisk reads aside from postgres data, you should make sure that it lives on \nyet another disk. Or you can simplify this with a good, really large \nmulti-channel RAID array.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 9 Sep 2003 17:11:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "Thanks Josh that helped. I had gone looking for some kind of cluster \noption but was looking under create database, create index and \ninitlocation - didn't see the CLUSTER index ON table.\n\nI ran the CLUSTER which took about 2 1/2 hours to complete. That \nimproved the query performance about 6x - which is great - but is still \ntaking 26 minutes to do what a serial read does in about 2 1/2 minutes.\n\nAt this point I'm ok because each fetch is taking around 200 \nmilliseconds from call to the time the data is ready. The processing \ntakes 300-600ms per batch. I've got the fetch and the processing \nrunning in separate threads so even if postgres was running faster it \nwouldn't help this implementation.\n\nHowever, \"iostat\" is still reporting average size per transfer of about \n10kB and total thru-put of about 1MB/s. The transfers per second went \nfrom >200/s to about 80/s. It still seams like it ought to be a faster.\n\nThe system is currently running on a single processor 500Mhz G4. We're \nlikely to move to a two processor 2Ghz G5 in the next few months. Then \neach block may take only a 30-60 milliseconds to complete and their can \nbe two concurrent blocks processing at once.\n\nSometime before then I need to figure out how to cut the fetch times \nfrom the now 200ms to something like 10ms. There are currently \n1,628,800 records in the single data table representing 6817 groups. \nEach group has 2 to 284 records - with 79% having the max 284 (max \ngrows by 1 every day - although the value may change throughout the \nday). Each record is maybe 1 or 2k so ideally each batch/group should \nrequire 284-568k - at 10MB/s - that'd be\n\nRELATED QUESTION: How now do I speed up the following query: \"select \ndistinct group_id from datatable\"? Which results in a sequential scan \nof the db. Why doesn't it use the group_id index? I only do this once \nper run so it's not as critical as the fetch speed which is done 6817 \ntimes.\n\nThanks for the help!\n- Chris\n\nOn Tuesday, Sep 9, 2003, at 18:11 America/Denver, Josh Berkus wrote:\n\n> Chris,\n>\n>> I've got an application that needs to chunk through ~2GB of data. The\n>> data is ~7000 different sets of 300 records each. I put all of the \n>> data\n>> into a postgres database but that doesn't look like its going to work\n>> because of how the data lives on the disk.\n>\n> Your problem is curable through 4 steps:\n>\n> 1) adjust your postgresql.conf to appropriate levels for memory usage.\n>\n> 2) if those sets of 300 are blocks in some contiguous order, then \n> cluster them\n> to force their physical ordering on disk to be the same order you want \n> to\n> read them in. This will require you to re-cluster whenever you \n> change a\n> significant number of records, but from the sound of it that happens in\n> batches.\n>\n> 3) Get better disks, preferrably a RAID array, or just very fast scsi \n> if the\n> database is small. If you're budget-constrained, Linux software \n> raid (or\n> BSD raid) on IDE disks is cheap. What kind of RAID depends on what \n> else\n> you'll be doing with the app; RAID 5 is better for read-only access, \n> RAID 1+0\n> is better for read-write.\n>\n> 4) Make sure that you aren't dumping the data to the same disk \n> postgreSQL\n> lives on! Preferably, make sure that your swap partition is on a \n> different\n> disk/array from postgresql. If the computing app is complex and \n> requires\n> disk reads aside from postgres data, you should make sure that it \n> lives on\n> yet another disk. 
Or you can simplify this with a good, really large\n> multi-channel RAID array.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 10 Sep 2003 01:37:02 -0600",
"msg_from": "Chris Huston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "On Wed, 10 Sep 2003, Chris Huston wrote:\n\n> Sometime before then I need to figure out how to cut the fetch times \n> from the now 200ms to something like 10ms.\n\nYou didn't say anything about Joshs first point of adjusting\npostgresql.conf to match your machine. Settings like effective_cache_size\nyou almost always want to increase from the default setting, also shared \nmemory.\n\n-- \n/Dennis\n\n",
"msg_date": "Wed, 10 Sep 2003 11:01:07 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "Chris Huston said:\n> Thanks Josh that helped. I had gone looking for some kind of cluster\n> option but was looking under create database, create index and\n> initlocation - didn't see the CLUSTER index ON table.\n>\n> I ran the CLUSTER which took about 2 1/2 hours to complete. That\n> improved the query performance about 6x - which is great - but is still\n> taking 26 minutes to do what a serial read does in about 2 1/2 minutes.\n>\n> At this point I'm ok because each fetch is taking around 200\n> milliseconds from call to the time the data is ready. The processing\n> takes 300-600ms per batch. I've got the fetch and the processing\n> running in separate threads so even if postgres was running faster it\n> wouldn't help this implementation.\n>\n> However, \"iostat\" is still reporting average size per transfer of about\n> 10kB and total thru-put of about 1MB/s. The transfers per second went\n> from >200/s to about 80/s. It still seams like it ought to be a faster.\n>\n> The system is currently running on a single processor 500Mhz G4. We're\n> likely to move to a two processor 2Ghz G5 in the next few months. Then\n> each block may take only a 30-60 milliseconds to complete and their can\n> be two concurrent blocks processing at once.\n>\n> Sometime before then I need to figure out how to cut the fetch times\n> from the now 200ms to something like 10ms. There are currently\n> 1,628,800 records in the single data table representing 6817 groups.\n> Each group has 2 to 284 records - with 79% having the max 284 (max\n> grows by 1 every day - although the value may change throughout the\n> day). Each record is maybe 1 or 2k so ideally each batch/group should\n> require 284-568k - at 10MB/s - that'd be\n>\n> RELATED QUESTION: How now do I speed up the following query: \"select\n> distinct group_id from datatable\"? Which results in a sequential scan\n> of the db. Why doesn't it use the group_id index? I only do this once\n> per run so it's not as critical as the fetch speed which is done 6817\n> times.\n>\n> Thanks for the help!\n> - Chris\n>\n\nHow are you fetching the data?\nIf you are using cursors, be sure to fetch a substatial bit at a time so\nthat youre not punished by latency.\nI got a big speedup when i changed my original clueless code to fetch 64\nrows in a go instead of only one.\n\nMagnus\n\n\n",
"msg_date": "Wed, 10 Sep 2003 13:47:22 +0200 (CEST)",
"msg_from": "\"Magnus Naeslund(w)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "Chris,\n\n> The system is currently running on a single processor 500Mhz G4. We're\n> likely to move to a two processor 2Ghz G5 in the next few months. Then\n> each block may take only a 30-60 milliseconds to complete and their can\n> be two concurrent blocks processing at once.\n\nWhat about explaining your disk setup? Or mentioning postgresql.conf? For \nsomebody who wants help, you're ignoring a lot of advice and questions.\n\nPersonally, I'm not going to be of any further help until you report back on \nthe other 3 of 4 options.\n\n> RELATED QUESTION: How now do I speed up the following query: \"select\n> distinct group_id from datatable\"? Which results in a sequential scan\n> of the db. Why doesn't it use the group_id index? I only do this once\n> per run so it's not as critical as the fetch speed which is done 6817\n> times.\n\nBecause it can't until PostgreSQL 7.4, which has hash aggregates. Up to \n7.3, we have to use seq scans for all group bys. I'd suggest that you keep a \ntable of group_ids, instead.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 10 Sep 2003 10:16:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "\nOn Wednesday, Sep 10, 2003, at 11:16 America/Denver, Josh Berkus wrote:\n\n> What about explaining your disk setup? Or mentioning \n> postgresql.conf? For\n> somebody who wants help, you're ignoring a lot of advice and questions.\n>\n> Personally, I'm not going to be of any further help until you report \n> back on\n> the other 3 of 4 options.\n\nEEEK! Peace. Sorry I didn't include that info in the response.\n\n1) Memory - clumsily adjusted shared_buffer - tried three values: 64, \n128, 256 with no discernible change in performance. Also adjusted, \nclumsily, effective_cache_size to 1000, 2000, 4000 - with no \ndiscernible change in performance. I looked at the Admin manual and \ngoogled around for how to set these values and I confess I'm clueless \nhere. I have no idea how many kernel disk page buffers are used nor do \nI understand what the \"shared memory buffers\" are used for (although \nthe postgresql.conf file hints that it's for communication between \nmultiple connections). Any advice or pointers to articles/docs is \nappreciated.\n\n2) Clustering - tried it - definite improvement - thanks for the tip\n\n3) RAID - haven't tried it - but I'm guessing that the speed \nimprovement from a RAID 5 may be on the order of 10x - which I can \nlikely get from using something like HDF. Since the data is unlikely to \ngrow beyond 10-20gig, a fast drive and firewire ought to give me the \nperformance I need. I know experimentally that the current machine can \nsustain a 20MB/s transfer rate which is 20-30x the speed of these \nqueries. (If there's any concern about my enthusiasm for postgres - no \nworries - I've been very happy with it on several projects - it might \nnot be the right tool for this kind of job - but I haven't come to that \nconclusion yet.)\n\n4) I'd previously commented out the output/writing steps from the app - \nto isolate read performance.\n\nOn Wednesday, Sep 10, 2003, at 05:47 America/Denver, Magnus Naeslund(w) \nwrote:\n>\n> How are you fetching the data?\n> If you are using cursors, be sure to fetch a substatial bit at a time \n> so\n> that youre not punished by latency.\n> I got a big speedup when i changed my original clueless code to fetch \n> 64\n> rows in a go instead of only one.\nThat's an excellent question... I hadn't thought about it. I'm using a \nJDBC connection... I have no idea (yet) how the results are moving \nbetween postgres and the client app. I'm testing once with the app and \nthe DB on the same machine (to remove network latency) and once with \ndb/app on separate machines. However, I wonder if postgres is blocking \non network io (even if it's the loopback interface) and not on disk?!\n\nI'll definitely look into it. Maybe I'll try a loop in psql and see \nwhat the performance looks like. Thanks Magnus.\n\nOn Wednesday, Sep 10, 2003, at 07:05 America/Denver, Sean McCorkle \nwrote:\n\n> I ended up solving the problem by going \"retro\" and using the\n> quasi-database functions of unix and flat files: grep, sort,\n> uniq and awk.\nThat's an cool KISS approach. If I end up moving out of postgres I'll \nspeed test this approach against HDF. Thanks.\n\n\nThis is a very helpful list,\n- Chris\n\n",
"msg_date": "Wed, 10 Sep 2003 14:59:50 -0600",
"msg_from": "Chris Huston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "> 1) Memory - clumsily adjusted shared_buffer - tried three values: 64, \n> 128, 256 with no discernible change in performance. Also adjusted, \n> clumsily, effective_cache_size to 1000, 2000, 4000 - with no discernible \n> change in performance. I looked at the Admin manual and googled around \n> for how to set these values and I confess I'm clueless here. I have no \n> idea how many kernel disk page buffers are used nor do I understand what \n> the \"shared memory buffers\" are used for (although the postgresql.conf \n> file hints that it's for communication between multiple connections). \n> Any advice or pointers to articles/docs is appreciated.\n\nThe standard procedure is 1/4 of your memory for shared_buffers. Easiest \nway to calculate would be ###MB / 32 * 1000. E.g. if you have 256MB of \nmemory, your shared_buffers should be 256 / 32 * 1000 = 8000.\n\nThe remaining memory you have leftover should be \"marked\" as OS cache \nvia the effective_cache_size setting. I usually just multiply the \nshared_buffers value by 3 on systems with a lot of memory. With less \nmemory, OS/Postgres/etc takes up a larger percentage of memory so values \nof 2 or 2.5 would be more accurate.\n\n",
"msg_date": "Wed, 10 Sep 2003 15:08:48 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "Chris,\n\n> 1) Memory - clumsily adjusted shared_buffer - tried three values: 64,\n> 128, 256 with no discernible change in performance. Also adjusted,\n> clumsily, effective_cache_size to 1000, 2000, 4000 - with no\n> discernible change in performance. I looked at the Admin manual and\n> googled around for how to set these values and I confess I'm clueless\n> here. I have no idea how many kernel disk page buffers are used nor do\n> I understand what the \"shared memory buffers\" are used for (although\n> the postgresql.conf file hints that it's for communication between\n> multiple connections). Any advice or pointers to articles/docs is\n> appreciated.\n\nYou want values *much* higher than that. How much RAM do you have? See:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nFor example, if you have 512mb RAM, I'd crank up the shared buffers to 8000. \nthe sort_mem to 8mb, and the effective_cache_size to 24,000.\n\n> 3) RAID - haven't tried it - but I'm guessing that the speed\n> improvement from a RAID 5 may be on the order of 10x\n\nProbably not ... more like 1.5x - 2.0x, but that's still a significant help, \nyes? Also, the advantage will get better the more your data grows.\n\n> - which I can\n> likely get from using something like HDF. \n\nHDF sucks for I/O speed. XServe will become a much more significant option \nin the market when Apple can bring themselves to abandon HDF, and adopt XFS \nor something. This is part of your problem.\n\n> Since the data is unlikely to\n> grow beyond 10-20gig, a fast drive and firewire ought to give me the\n> performance I need.\n\nNot sure about that. Is Firewire really faster for I/O than modern SCSI or \n233mhz ATA? I don't do much Mac anymore, but I'd the impression that \nFirewire was mainly for peripherals .... \n\nWhat is important for your app in terms of speed is to get the data coming \nfrom multiple drives over multiple channels. Were it a PC, I'd recommend a \nmotherboard with 4 IDE channels or Serial ATA, and spreading the data over 4 \ndrives via RAID 0 or RAID 5, and adding dual processors. Then you could use \nmultiple postgres connections to read different parts of the table \nsimultaneously.\n\n> I know experimentally that the current machine can\n> sustain a 20MB/s transfer rate which is 20-30x the speed of these\n> queries.\n\nThat is interesting. Adjust your PostgreSQL.conf and see what results you \nget. It's possible that PostgreSQL is convinced that you have little or no \nRAM because of your .conf settings, and is swapping stuff to temp file on \ndisk.\n\n> 4) I'd previously commented out the output/writing steps from the app -\n> to isolate read performance.\n\nOK.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 10 Sep 2003 19:50:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
},
{
"msg_contents": "> You want values *much* higher than that. How much RAM do you have? See:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nNow THAT is a remarkable document! I vote for putting that information into\nthe PostgreSQL documentation tree.\n\nChris\n\n",
"msg_date": "Thu, 11 Sep 2003 12:40:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reading data in bulk - help?"
}
] |
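[Editor's note] A sketch combining the suggestions in the thread above: keep the table clustered on the group index so each batch of ~300 rows is a contiguous read, and side-step the seq-scanning SELECT DISTINCT (no hash aggregates before 7.4) with a small side table of group ids, as Josh Berkus suggests. The index name, the group_id type and the side-table name are assumptions.

```sql
-- Re-cluster after each daily load so the per-group fetches stay contiguous
-- on disk (index name is hypothetical):
CLUSTER datatable_group_idx ON datatable;

-- Maintain the list of groups separately instead of running
-- "SELECT DISTINCT group_id FROM datatable", which seq-scans before 7.4:
CREATE TABLE group_ids (group_id integer PRIMARY KEY);          -- type assumed
INSERT INTO group_ids SELECT DISTINCT group_id FROM datatable;  -- seed once

-- The per-batch fetch is unchanged:
-- SELECT * FROM datatable WHERE group_id = '123';
```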
[
{
"msg_contents": "Nitpicking --\n\nPerhaps the 4th data line is meant to be:\n Inserts in separate transactions 2500 inserts/second\n ^^^^^^^^^^^^^^^^^^^^^^^\n??\n\n\nGreg Williamson\n\n-----Original Message-----\nFrom:\tBruce Momjian [mailto:[email protected]]\nSent:\tTue 9/9/2003 8:25 PM\nTo:\tMatt Clark\nCc:\tRon Johnson; PgSQL Performance ML\nSubject:\tRe: [PERFORM] Hardware recommendations to scale to silly load\n\nMatt Clark wrote:\n> > Just a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI\n> > drive I can do 4k inserts/second if I turn fsync off. If you have a\n> > battery-backed controller, you should be able to do the same. (You will\n> > not need to turn fsync off --- fsync will just be fast because of the\n> > disk drive RAM).\n> >\n> > Am I missing something?\n> \n> I think Ron asked this, but I will too, is that 4k inserts in\n> one transaction or 4k transactions each with one insert?\n> \n> fsync is very much faster (as are all random writes) with the\n> write-back cache, but I'd hazard a guess that it's still not\n> nearly as fast as turning fsync off altogether. I'll do a test\n> perhaps...\n\nSorry to be replying late. Here is what I found.\n\nfsync on\n Inserts all in one transaction 3700 inserts/second\n Inserts in separate transactions 870 inserts/second\n\nfsync off\n Inserts all in one transaction 3700 inserts/second\n Inserts all in one transaction 2500 inserts/second\n\nECPG test program attached.\n\n--\n\n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n\n\n",
"msg_date": "Wed, 10 Sep 2003 00:40:45 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware recommendations to scale to silly load"
},
{
"msg_contents": "Gregory S. Williamson wrote:\n> Nitpicking --\n> \n> Perhaps the 4th data line is meant to be:\n> Inserts in separate transactions 2500 inserts/second\n> ^^^^^^^^^^^^^^^^^^^^^^^\n\n\nOh, yes, sorry. It is:\n\n> Sorry to be replying late. Here is what I found.\n> \n> fsync on\n> Inserts all in one transaction 3700 inserts/second\n> Inserts in separate transactions 870 inserts/second\n> \n> fsync off\n> Inserts all in one transaction 3700 inserts/second\n> Inserts in separate transactions 2500 inserts/second\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 10 Sep 2003 12:06:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware recommendations to scale to silly load"
}
] |
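[Editor's note] The corrected figures above (870 vs. 3700 inserts/second with fsync on) show that grouping rows into one transaction matters more than fsync itself, since each COMMIT with fsync on waits for a WAL flush. A minimal illustration, with a hypothetical table t:

```sql
-- One transaction per row: every statement commits, and flushes WAL, on its own.
INSERT INTO t (x) VALUES (1);
INSERT INTO t (x) VALUES (2);
-- ... roughly 870/s in the figures above with fsync on

-- Many rows per transaction: one WAL flush covers the whole batch.
BEGIN;
INSERT INTO t (x) VALUES (1);
INSERT INTO t (x) VALUES (2);
-- ... thousands more ...
COMMIT;
-- ... roughly 3700/s in the same test
```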
[
{
"msg_contents": "Hi,\n\nMy name is Alex Turner and I work for a small Tech company in Pottstown PA. We run Postgresql on a number of systems for a variety of different applications, and it has been a joy to deal with all around, working fast and reliably for over 2 years.\n\nWe recently upgraded from RedHat 7.2 to RedHat 9.0, and we are running Postgres 7.3.2 on our Proliant ML370 (Raid 1 2x18 10k, and Raid 5 3x36 10k, 2x866 PIII, 2GB RAM).\n\nWe seem to have had a serious drop after the upgrade. The database is a database of properties that is updated on a daily basis, and when I say updated I mean that I insert/update the whole data download because the data provider doesn't tell us what changed, just gives us a complete dump. The integrity of the dumb isn't great so I can't process as a COPY or a block transaction because some of the data is often bad. Each and every row is a seperate insert or update. \nData insert performance used to degrade in a linear fasion as time progressed I'm guessing as the transaction logs filled up. About once every six weeks I would dump the database, destroy and recreate the db and reload the dump. This 'reset' the whole thing, and brought insert/vacuum times back down. Since the upgrade, performance has degraded very rapidly over the first week, and then more slowly later, but enough that we now have to reload the db every 2-3 weeks. The insert procedure triggers a stored procedure that updates a timestamp on the record so that we can figure out what records have been touched, and which have not so that we can determine which properties have been removed from the feed as the record was not touched in the last two days.\n\nI have noticed that whilst inserts seem to be slower than before, the vacuum full doesn't seem to take as long overall.\n\npostgresql.conf is pretty virgin, and we run postmaster with -B512 -N256 -i. /var/lib/pgsql/data is a symlink to /eda/data, /eda being the mount point for the Raid 5 array.\n\nthe database isn't huge, storing about 30000 properties, and the largest table is 2.1 Million rows for property features. The dump file is only 221MB. Alas, I did not design the schema, but I have made several 'tweaks' to it to greatly improve read performance allowing us to be the fastest provider in the Tristate area. Unfortunately the Job starts at 01:05 (thats the earliest the dump is available) and runs until completion finishing with a vacuum full. The vacuum full locks areas of the database long enough that our service is temporarily down. At the worst point, the vacuum full was starting after 09:00, which our customers didn't appreciate.\n\nI'm wondering if there is anything I can do with postgres to allieviate this problem. Either upgrading to 7.3.4 (although I tried 7.3.3 for another app, and we had to roll back to 7.3.2 because of performance problems), or working with the postgresql.conf to enhance performance. I really don't want to roll back the OS version if possible, but I'm not ruling it out at this point, as that seems to be the biggest thing that has changed. All the drive lights are showing green, so I don't believe the array is running in degraded mode. I keep logs of all the insert jobs, and plotting average insert times on a graph revealed that this started at the time of the upgrade.\n\nAny help/suggestions would be grealy appreciated,\n\nThanks,\n\nAlex Turner\nNetEconomist\n\nP.S. Sorry this is so long, but I wanted to include as much info as possible.\n",
"msg_date": "Wed, 10 Sep 2003 13:53:40 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Upgrade Woes"
},
{
"msg_contents": "[email protected] writes:\n> P.S. Sorry this is so long, but I wanted to include as much info as possible.\n\nThrow in the non-commented lines in postgresql.conf; that would more\nthan likely make numeric answers possible, for some of it. If the\nconfig is \"out-of-the-box,\" then it's pretty likely that some\nsignificant improvements can be gotten from modifying a few of the\nconfig parameters. Increasing buffers would probably help query\nspeed, and if you're getting too many dead tuples, increasing the free\nspace map would make it possible for more to vacuum out.\n\nBeyond that, you might want to grab the code for pg_autovacuum, and\ndrop that into place, as that would do periodic ANALYZEs that would\nprobably improve the quality of your selects somewhat. (It's in the\n7.4 code \"contrib\" base, but works fine with 7.3.)\n\nI think you might also get some significant improvements out of\nchanging the way you load the properties. If you set up a schema that\nis suitably \"permissive,\" and write a script that massages it a\nlittle, COPY should do the trick to load the data in, which should be\nhelpful to the load process. If the data comes in a little more\nintelligently (which might well involve some parts of the process\n\"dumbing down\" :-)), you might take advantage of COPY and perhaps\nother things (we see through the glass darkly).\n\nI would think it also begs the question of whether or not you _truly_\nneed the \"vacuum full.\" Are you _certain_ you need that? I would\nthink it likely that running \"vacuum analyze\" (and perhaps doing it a\nlittle bit, continuously, during the load, via pg_autovacuum) would\nlikely suffice. Have you special reason to think otherwise?\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Wed, 10 Sep 2003 14:25:19 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Upgrade Woes"
},
{
"msg_contents": "On Wednesday 10 September 2003 18:53, [email protected] wrote:\n> Hi,\n>\n> My name is Alex Turner and I work for a small Tech company in Pottstown PA.\n> We run Postgresql on a number of systems for a variety of different\n> applications, and it has been a joy to deal with all around, working fast\n> and reliably for over 2 years.\n>\n> We recently upgraded from RedHat 7.2 to RedHat 9.0, and we are running\n> Postgres 7.3.2 on our Proliant ML370 (Raid 1 2x18 10k, and Raid 5 3x36 10k,\n> 2x866 PIII, 2GB RAM).\n[snip]\n> I have noticed that whilst inserts seem to be slower than before, the\n> vacuum full doesn't seem to take as long overall.\n>\n> postgresql.conf is pretty virgin, and we run postmaster with -B512 -N256\n> -i. /var/lib/pgsql/data is a symlink to /eda/data, /eda being the mount\n> point for the Raid 5 array.\n\nFirst things first then, go to:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nand read the item on Performance Tuning and the commented postgresql.conf\n\n> the database isn't huge, storing about 30000 properties, and the largest\n> table is 2.1 Million rows for property features. The dump file is only\n> 221MB. Alas, I did not design the schema, but I have made several 'tweaks'\n> to it to greatly improve read performance allowing us to be the fastest\n> provider in the Tristate area. Unfortunately the Job starts at 01:05\n> (thats the earliest the dump is available) and runs until completion\n> finishing with a vacuum full. The vacuum full locks areas of the database\n> long enough that our service is temporarily down. At the worst point, the\n> vacuum full was starting after 09:00, which our customers didn't\n> appreciate.\n\nYou might be able to avoid a vacuum full by tweaking the *fsm* settings to be \nable to cope with activity.\n\n> I'm wondering if there is anything I can do with postgres to allieviate\n> this problem. Either upgrading to 7.3.4 (although I tried 7.3.3 for\n> another app, and we had to roll back to 7.3.2 because of performance\n> problems), \n\nHmm - can't think what would have changed radically between 7.3.2 and 7.3.3, \nupgrading to .4 is probably sensible.\n\n[snip]\n> Any help/suggestions would be grealy appreciated,\n\nYou say that each insert/update is a separate transaction. I don't know how \nmuch \"bad\" data you get in the dump, but you might be able to do something \nlike:\n\n1. Set batch size to 128 items\n2. Read batch-size rows from the dump\n3. Try to insert/update the batch. If it works, move along by the size of the \nbatch and back to #1\n4. If batch-size=1, record error, move along one row and back to #1\n5. If batch-size>1, halve batch-size and go back to #3\n\nYour initial batch-size will depend on how many errors there are (but \nobviously use a power of 2).\n\nYou could also run an ordinary vacuum every 1000 rows or so (number depends on \nyour *fsm* settings as mentioned above).\n\nYou might also want to try a REINDEX once a night/week too.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 10 Sep 2003 19:31:53 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Upgrade Woes"
},
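A rough example of the lighter-weight maintenance Richard mentions (the table name is only a placeholder); plain VACUUM ANALYZE does not take the exclusive lock that VACUUM FULL does:

    VACUUM ANALYZE property_features;   -- run periodically during or after the load
    REINDEX TABLE property_features;    -- occasionally, e.g. nightly or weekly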
{
"msg_contents": "Thanks for the URL, I went through postgresql.conf and made some modifications to the config based on information therein. I will have to wait and see how it affects things, as I won't know for a week or so.\n\nSelect time has never been a problem, the DB has always been very fast, it's the insert time that has been a problem. I'm not sure how much this is a function of the drive array sucking, the OS not doing a good job or the DB getting caught up in transaction logs.\n\nWhat does seem odd is that the performance degrades as time goes on, and the space that the DB files takes up increases as well.\n\nThe Vacuum full is performed once at the end of the whole job. We could probably get away with doing this once per week, but in the past I have noticed that if I don't run it regularlly, when I do run it, it seems to take much longer. This has lead me to run more regularly than not.\n\nAs for 7.3.3, the project in question suffered a 10x performance degredation on 7.3.3 which went away when we rolled back to 7.3.2. Almost all the inserts had triggers which updated stats tables, the database in question was very very write heavy, it was pretty much a datawarehouse for X10 sensor information which was then mined for analysis.\n\nI had certainly considered building the script to do binary seperation style inserts, split the job in half, insert, if it fails, split in half again until you get everything in. This would probably work okay considering only about two dozen out of 30,000 rows fail. The only reason not to do that it the time and effort required, particularly as we are looking at a substantial overhaul of the whole system in the next 6 months.\n\nAlex Turner\n\n\nOn Wed, Sep 10, 2003 at 07:31:53PM +0100, Richard Huxton wrote:\n> On Wednesday 10 September 2003 18:53, [email protected] wrote:\n> > Hi,\n> >\n> > My name is Alex Turner and I work for a small Tech company in Pottstown PA.\n> > We run Postgresql on a number of systems for a variety of different\n> > applications, and it has been a joy to deal with all around, working fast\n> > and reliably for over 2 years.\n> >\n> > We recently upgraded from RedHat 7.2 to RedHat 9.0, and we are running\n> > Postgres 7.3.2 on our Proliant ML370 (Raid 1 2x18 10k, and Raid 5 3x36 10k,\n> > 2x866 PIII, 2GB RAM).\n> [snip]\n> > I have noticed that whilst inserts seem to be slower than before, the\n> > vacuum full doesn't seem to take as long overall.\n> >\n> > postgresql.conf is pretty virgin, and we run postmaster with -B512 -N256\n> > -i. /var/lib/pgsql/data is a symlink to /eda/data, /eda being the mount\n> > point for the Raid 5 array.\n> \n> First things first then, go to:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> and read the item on Performance Tuning and the commented postgresql.conf\n> \n> > the database isn't huge, storing about 30000 properties, and the largest\n> > table is 2.1 Million rows for property features. The dump file is only\n> > 221MB. Alas, I did not design the schema, but I have made several 'tweaks'\n> > to it to greatly improve read performance allowing us to be the fastest\n> > provider in the Tristate area. Unfortunately the Job starts at 01:05\n> > (thats the earliest the dump is available) and runs until completion\n> > finishing with a vacuum full. The vacuum full locks areas of the database\n> > long enough that our service is temporarily down. 
At the worst point, the\n> > vacuum full was starting after 09:00, which our customers didn't\n> > appreciate.\n> \n> You might be able to avoid a vacuum full by tweaking the *fsm* settings to be \n> able to cope with activity.\n> \n> > I'm wondering if there is anything I can do with postgres to allieviate\n> > this problem. Either upgrading to 7.3.4 (although I tried 7.3.3 for\n> > another app, and we had to roll back to 7.3.2 because of performance\n> > problems), \n> \n> Hmm - can't think what would have changed radically between 7.3.2 and 7.3.3, \n> upgrading to .4 is probably sensible.\n> \n> [snip]\n> > Any help/suggestions would be grealy appreciated,\n> \n> You say that each insert/update is a separate transaction. I don't know how \n> much \"bad\" data you get in the dump, but you might be able to do something \n> like:\n> \n> 1. Set batch size to 128 items\n> 2. Read batch-size rows from the dump\n> 3. Try to insert/update the batch. If it works, move along by the size of the \n> batch and back to #1\n> 4. If batch-size=1, record error, move along one row and back to #1\n> 5. If batch-size>1, halve batch-size and go back to #3\n> \n> Your initial batch-size will depend on how many errors there are (but \n> obviously use a power of 2).\n> \n> You could also run an ordinary vacuum every 1000 rows or so (number depends on \n> your *fsm* settings as mentioned above).\n> \n> You might also want to try a REINDEX once a night/week too.\n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n",
"msg_date": "Thu, 11 Sep 2003 10:16:22 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Upgrade Woes"
},
{
"msg_contents": "[email protected] writes:\n> As for 7.3.3, the project in question suffered a 10x performance\n> degredation on 7.3.3 which went away when we rolled back to 7.3.2.\n\nI would like to pursue that report and find out why. I've just gone\nthrough the CVS logs between 7.3.2 and 7.3.3, and I don't see any change\nthat would explain a 10x slowdown. Can you provide more details about\nexactly what slowed down?\n\nAlso, what PG version were you using on the old RedHat 7.2 installation?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Sep 2003 11:47:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Upgrade Woes "
},
{
"msg_contents": "On Thu, 11 Sep 2003, [email protected] wrote:\n\n>\n> The Vacuum full is performed once at the end of the whole job.\n>\nhave you also tried vacuum analyze periodically - it does not lock the\ntable and can help quite a bit?\n\nstill odd why it would be that much slower between those versions.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 11 Sep 2003 11:54:55 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Upgrade Woes"
},
{
"msg_contents": "In the performance case the machine was running RedHat AS 2.1. I have posted the database schema at (obtained from pg_dump -s):\n\nhttp://serverbeach.plexq.com/~aturner/schema.sql\n\nThe time to run all the stats procedures dropped through the floor. refresh_hourly_iud, adl_hourly_iud, rebuild_daily_total etc. There is a python script that calls the proc once for each hour or day. When running the historical calc job for a 7 day period back, it would crawl on 7.3.3. We started benching the drive array and found other issues with the system in the mean time (like the drive array was giving us 10MB/sec write speed - the guy who set it up did not enable write to cache). Once it was reconfigured the DB performance did not improve much (bonnie++ was used to verify the RAID array speed).\n\nAlex Turner\n\nOn Thu, Sep 11, 2003 at 11:47:32AM -0400, Tom Lane wrote:\n> [email protected] writes:\n> > As for 7.3.3, the project in question suffered a 10x performance\n> > degredation on 7.3.3 which went away when we rolled back to 7.3.2.\n> \n> I would like to pursue that report and find out why. I've just gone\n> through the CVS logs between 7.3.2 and 7.3.3, and I don't see any change\n> that would explain a 10x slowdown. Can you provide more details about\n> exactly what slowed down?\n> \n> Also, what PG version were you using on the old RedHat 7.2 installation?\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Sep 2003 15:10:18 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Upgrade Woes"
}
] |
[
{
"msg_contents": "Hello,\n\n\tI'm trying a work-around on the \"index on int8 column gets ignored by \nplanner when queried by literal numbers lacking the explicit '::int8'\" \nissue, and had hoped that perhaps I could create a functional index on \nthe result of casting the pk field to int4, and mabye with a little \nluck the planner would consider the functional index instead. Here's \nwhat I'm playing with on 7.3.4:\n\nsocial=# create table foo (id int8 primary key, stuff text);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'foo_pkey' for table 'foo'\nCREATE TABLE\nsocial=# create index foo_pkey_int4 on foo(int4(id));\nCREATE INDEX\n\nsocial=# explain analyze select id from foo where id = 42;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------\n Seq Scan on foo (cost=0.00..22.50 rows=1 width=8) (actual \ntime=0.01..0.01 rows=0 loops=1)\n Filter: (id = 42)\n Total runtime: 0.15 msec\n(3 rows)\n\nsocial=# explain analyze select id from foo where id = 42::int8;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..4.82 rows=1 width=8) \n(actual time=0.02..0.02 rows=0 loops=1)\n Index Cond: (id = 42::bigint)\n Total runtime: 0.09 msec\n(3 rows)\n\nsocial=# explain analyze select id from foo where id = int4(33);\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------\n Seq Scan on foo (cost=0.00..22.50 rows=1 width=8) (actual \ntime=0.01..0.01 rows=0 loops=1)\n Filter: (id = 33)\n Total runtime: 0.07 msec\n(3 rows)\n\nIs this just a dead end, or is there some variation of this that might \npossibly work, so that ultimately an undoctored literal number, when \napplied to an int8 column, could find an index?\n\nThanks,\nJames\n\n",
"msg_date": "Wed, 10 Sep 2003 21:13:29 -0400",
"msg_from": "James Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Attempt at work around of int4 query won't touch int8 index ..."
},
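For what it's worth, the usual 7.x workarounds are to cast the literal explicitly or simply quote it, so the constant starts out untyped and resolves to the column's type; for example, against the foo table above:

    SELECT id FROM foo WHERE id = 42::int8;   -- explicit cast
    SELECT id FROM foo WHERE id = '42';       -- quoted literal resolves to int8

Either form should let the planner consider foo_pkey.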
{
"msg_contents": "James Robinson <[email protected]> writes:\n> Is this just a dead end, or is there some variation of this that might \n> possibly work, so that ultimately an undoctored literal number, when \n> applied to an int8 column, could find an index?\n\nI think it's a dead end. What I was playing with this afternoon was\nremoving the int8-and-int4 comparison operators from pg_operator.\nIt works as far as making \"int8col = 42\" do the right thing, but I'm\nnot sure yet about side-effects.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Sep 2003 22:44:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attempt at work around of int4 query won't touch int8 index ... "
},
{
"msg_contents": "On 10 Sep 2003 at 22:44, Tom Lane wrote:\n\n> James Robinson <[email protected]> writes:\n> > Is this just a dead end, or is there some variation of this that might \n> > possibly work, so that ultimately an undoctored literal number, when \n> > applied to an int8 column, could find an index?\n> \n> I think it's a dead end. What I was playing with this afternoon was\n> removing the int8-and-int4 comparison operators from pg_operator.\n> It works as far as making \"int8col = 42\" do the right thing, but I'm\n> not sure yet about side-effects.\n\nIs it possible to follow data type upgrade model in planner? Something like in \nC/C++ where data types are promoted upwards to find out better plan?\n\nint2->int4->int8->float4->float8 types.\n\n That could be a clean solution..\n\njust a thought..\n\nBye\n Shridhar\n\n--\nHlade's Law:\tIf you have a difficult task, give it to a lazy person --\tthey \nwill find an easier way to do it.\n\n",
"msg_date": "Mon, 15 Sep 2003 13:12:28 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attempt at work around of int4 query won't touch int8 index ... "
},
{
"msg_contents": "\nOn 15/09/2003 08:42 Shridhar Daithankar wrote:\n> \n> Is it possible to follow data type upgrade model in planner? Something\n> like in\n> C/C++ where data types are promoted upwards to find out better plan?\n> \n> int2->int4->int8->float4->float8 types.\n> \n> That could be a clean solution..\n> \n> just a thought..\n> \n\nInterestingly, float8 indexes do work OK (float8col = 99). I spend a large \npart of yesterday grepping through the sources to try and find out why \nthis should be so. No luck so far but I'm going to keep on trying!\n\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Tue, 16 Sep 2003 08:44:48 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attempt at work around of int4 query won't touch int8 index ..."
},
{
"msg_contents": "Paul Thomas <[email protected]> writes:\n> On 15/09/2003 08:42 Shridhar Daithankar wrote:\n>> Is it possible to follow data type upgrade model in planner?\n\nWe have one, more or less. It's not explicitly coded, it emerges from\nthe fact that certain casts are implicit and others are not. For\ninstance, int4->float8 is implicit but float8->int4 is not.\n\n> Interestingly, float8 indexes do work OK (float8col = 99). I spend a large \n> part of yesterday grepping through the sources to try and find out why \n> this should be so. No luck so far but I'm going to keep on trying!\n\nThe reason that case works is that there is no float8 = int4 operator.\nThe parser can find no other interpretation than promoting the int4 to\nfloat8 and using float8 = float8. (The dual possibility, coerce float8\nto int4 and use int4 = int4, is not considered because that coercion\ndirection is not implicit.) So you end up with an operator that matches\nthe float8 index, and all is well.\n\nThe int8 case fails because there is a cross-type operator int8 = int4,\nand the parser prefers that since it's an exact match to the initial\ndata types. But it doesn't match the int8 index.\n\nWe've floated various proposals for solving this, such as getting rid of\ncross-type operators, but none so far have passed the test of not having\nbad side-effects. See the pg_hackers archives for details (and *please*\ndon't waste this list's bandwidth with speculating about solutions until\nyou've absorbed some of the history. This topic has been heard of\nbefore ;-).)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Sep 2003 10:05:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Attempt at work around of int4 query won't touch int8 index ... "
}
] |
[
{
"msg_contents": "Hi all,\nI have some new hardware on the way and would like some advice on how to get \nthe most out of it..\n\nits a dual xeon 2.4, 4gb ram and 3x identical 15k rpm scsi disks\n\nshould i mirror 2 of the disks for postgres data, and use the 3rd disk for the \no/s and the pg logs or raid5 the 3 disks or even stripe 2 disks for pg and \nuse the 3rd for o/s,logs,backups ?\n\nthe machine will be dealing with lots of inserts, basically as many as we can \nthrow at it\n\nthanks,\nRichard\n",
"msg_date": "Fri, 12 Sep 2003 11:26:29 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "> the machine will be dealing with lots of inserts, basically as many as we can\n> throw at it\n\nIf you mean lots of _transactions_ with few inserts per transaction you should get a RAID controller w/ battery backed write-back\ncache. Nothing else will improve your write performance by nearly as much. You could sell the RAM and one of the CPU's to pay for\nit ;-)\n\nIf you have lots of inserts but all in a few transactions then it's not quite so critical.\n\nM\n\n\n",
"msg_date": "Fri, 12 Sep 2003 12:06:12 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "[email protected] (Richard Jones) writes:\n> I have some new hardware on the way and would like some advice on\n> how to get the most out of it..\n>\n> its a dual xeon 2.4, 4gb ram and 3x identical 15k rpm scsi disks\n>\n> should i mirror 2 of the disks for postgres data, and use the 3rd\n> disk for the o/s and the pg logs or raid5 the 3 disks or even stripe\n> 2 disks for pg and use the 3rd for o/s,logs,backups ?\n>\n> the machine will be dealing with lots of inserts, basically as many\n> as we can throw at it\n\nHaving WAL on a separate drive from the database would be something of\na win. I'd buy that 1 disk for OS+WAL and then RAID [something]\nacross the other two drives for the database would be pretty helpful.\n\nAfter doing some [loose] benchmarking, the VERY best way to improve\nperformance would involve a RAID controller with battery-backed cache.\n\nOn a box with similar configuration to yours, it took ~3h for a\nparticular set of data to load; on another one with battery-backed\ncache (and a dozen fast SCSI drives :-)), the same data took as little\nas 6 minutes to load. The BIG effect seemed to come from the\ncontroller.\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil\" \"@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 12 Sep 2003 11:24:41 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "RIchard,\n\n> its a dual xeon 2.4, 4gb ram and 3x identical 15k rpm scsi disks\n> \n> should i mirror 2 of the disks for postgres data, and use the 3rd disk for \nthe \n> o/s and the pg logs or raid5 the 3 disks or even stripe 2 disks for pg and \n> use the 3rd for o/s,logs,backups ?\n\nI'd mirror 2. Stripey RAID with few disks imposes a heavy performance \npenalty on data writes (particularly updates), sometimes as much as 50% for a \nRAID5-3disk config. \n\nI am a little curious why you've got a dual-xeon, but could only afford 3 \ndisks ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 12 Sep 2003 09:49:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "The machine is coming from dell, and i have the option of a \nPERC 3/SC RAID Controller (32MB)\nor software raid.\n\ndoes anyone have any experience of this controller? \nits an additional £345 for this controller, i'd be interested to know what \npeople think - my other option is to buy the raid controller separately, \nwhich appeals to me but i wouldnt know what to look for in a raid controller.\n\nthat raid controller review site sounds like a good idea :)\n\nRichard.\n\nOn Friday 12 September 2003 4:24 pm, Christopher Browne wrote:\n> [email protected] (Richard Jones) writes:\n> > I have some new hardware on the way and would like some advice on\n> > how to get the most out of it..\n> >\n> > its a dual xeon 2.4, 4gb ram and 3x identical 15k rpm scsi disks\n> >\n> > should i mirror 2 of the disks for postgres data, and use the 3rd\n> > disk for the o/s and the pg logs or raid5 the 3 disks or even stripe\n> > 2 disks for pg and use the 3rd for o/s,logs,backups ?\n> >\n> > the machine will be dealing with lots of inserts, basically as many\n> > as we can throw at it\n>\n> Having WAL on a separate drive from the database would be something of\n> a win. I'd buy that 1 disk for OS+WAL and then RAID [something]\n> across the other two drives for the database would be pretty helpful.\n>\n> After doing some [loose] benchmarking, the VERY best way to improve\n> performance would involve a RAID controller with battery-backed cache.\n>\n> On a box with similar configuration to yours, it took ~3h for a\n> particular set of data to load; on another one with battery-backed\n> cache (and a dozen fast SCSI drives :-)), the same data took as little\n> as 6 minutes to load. The BIG effect seemed to come from the\n> controller.\n\n",
"msg_date": "Fri, 12 Sep 2003 17:55:40 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "The dual xeon arrangement is because the machine will also have to do some \ncollaborative filtering which is very cpu intensive and very disk \nun-intensive, after loading the data into ram.\n\nOn Friday 12 September 2003 5:49 pm, you wrote:\n> RIchard,\n>\n> > its a dual xeon 2.4, 4gb ram and 3x identical 15k rpm scsi disks\n> >\n> > should i mirror 2 of the disks for postgres data, and use the 3rd disk\n> > for\n>\n> the\n>\n> > o/s and the pg logs or raid5 the 3 disks or even stripe 2 disks for pg\n> > and use the 3rd for o/s,logs,backups ?\n>\n> I'd mirror 2. Stripey RAID with few disks imposes a heavy performance\n> penalty on data writes (particularly updates), sometimes as much as 50% for\n> a RAID5-3disk config.\n>\n> I am a little curious why you've got a dual-xeon, but could only afford 3\n> disks ....\n\n",
"msg_date": "Fri, 12 Sep 2003 17:57:20 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "On Fri, 2003-09-12 at 12:55, Richard Jones wrote:\n> The machine is coming from dell, and i have the option of a \n> PERC 3/SC RAID Controller (32MB)\n> or software raid.\n> \n> does anyone have any experience of this controller? \n> its an additional £345 for this controller, i'd be interested to know what \n> people think - my other option is to buy the raid controller separately, \n> which appeals to me but i wouldnt know what to look for in a raid controller.\n\nHardware raid with the write cache, and sell a CPU if necessary to buy\nit (don't sell the ram though!).",
"msg_date": "Fri, 12 Sep 2003 18:11:15 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "> Having WAL on a separate drive from the database would be something of\n> a win. I'd buy that 1 disk for OS+WAL and then RAID [something]\n> across the other two drives for the database would be pretty helpful.\n\nJust my .02, \n\nI did a lot of testing before I deployed our ~50GB postgresql databases\nwith various combinations of 6 15k SCSI drives. I did custom benchmarks\nto simulate our applications, I downloaded several benchmarks, etc.\n\nIt might be a fluke, but I never got better performance with WALs on a\ndifferent disk than I did with all 6 disks in a 0+1 configuration.\nObviously that's not an option with 3 disks. =) \n\nI ended up going with that for easier space maintenance.\n\nObviously YMMV, benchmark for your own situation. :)\n\n\n",
"msg_date": "Sat, 13 Sep 2003 21:44:48 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "On Sat, 2003-09-13 at 23:44, Cott Lang wrote:\n> > Having WAL on a separate drive from the database would be something of\n> > a win. I'd buy that 1 disk for OS+WAL and then RAID [something]\n> > across the other two drives for the database would be pretty helpful.\n> \n> Just my .02, \n> \n> I did a lot of testing before I deployed our ~50GB postgresql databases\n> with various combinations of 6 15k SCSI drives. I did custom benchmarks\n> to simulate our applications, I downloaded several benchmarks, etc.\n> \n> It might be a fluke, but I never got better performance with WALs on a\n> different disk than I did with all 6 disks in a 0+1 configuration.\n> Obviously that's not an option with 3 disks. =) \n\nInteresting. Where did you put the OS, and what kind of, and how\nmany, SCSI controllers did you have? \n\nPCI 32bit/33MHz or 64bit/66MHz PCI? 32bit/33MHz PCI has a max\nthroughput of 132MB/s, which is 60% smaller than the theoretical\nbandwidth of U320 SCSI, so maybe you were saturating the PCI bus,\nand that's why a separate WAL didn't show any improvement?\n\n(If the WAL ever becomes the vehicle for PITR, then it will have \nto be on a separate disk [and preferably a separate controller], \neven if it slows performance.)\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Eternal vigilance is the price of liberty: power is ever \nstealing from the many to the few. The manna of popular liberty \nmust be gathered each day, or it is rotten... The hand entrusted \nwith power becomes, either from human depravity or esprit de \ncorps, the necessary enemy of the people. Only by continual \noversight can the democrat in office be prevented from hardening \ninto a despot: only by unintermitted agitation can a people be \nkept sufficiently awake to principle not to let liberty be \nsmothered in material prosperity... Never look, for an age when \nthe people can be quiet and safe. At such times despotism, like \na shrouding mist, steals over the mirror of Freedom\"\nWendell Phillips\n\n",
"msg_date": "Mon, 15 Sep 2003 03:05:10 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "On Mon, 2003-09-15 at 01:05, Ron Johnson wrote:\n\n> Interesting. Where did you put the OS, and what kind of, and how\n> many, SCSI controllers did you have? \n\nI ended up with the OS on the same volume, since it didn't seem to make\nany difference. I'm using a SuperMicro 6023P chassis with an Adaptec\n2010S ZCR controller (64bit/66mhz). \n\nI wouldn't recommend SuperMicro to anyone else at this point because\ninstead of hooking up both U320 channels to the 6 drive backplane, they\nonly hook up one. Half the bandwidth, no redundancy. I already had a\nburp on the SCSI channel during a single drive death take out one box.\n:(\n\n> (If the WAL ever becomes the vehicle for PITR, then it will have \n> to be on a separate disk [and preferably a separate controller], \n> even if it slows performance.)\n\nWell, it won't have to... but it's certainly a good idea. :)\n\nIf we ever get PITR, I'll be so happy I won't mind rebuilding my boxes,\nand hopefully I'll have a better budget at that point. ;^)\n\nBTW, I didn't get WORSE performance with the WALs on separate disks, it\njust wasn't any better. Unfortunately I lost the spreadsheet I had all\nmy results in, so I can't be any more specific.\n\n\n",
"msg_date": "Mon, 15 Sep 2003 07:41:11 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] best arrangement of 3 disks for (insert)"
}
] |
[
{
"msg_contents": "Due to various third party issues, and the fact PG rules, we're planning\non migrating our deplorable informix db to PG. It is a rather large DB\nwith a rather high amount of activity (mostly updates). So I'm going to\nbe aquiring a dual (or quad if they'll give me money) box. (In my testing\nmy glorious P2 with a 2 spindle raid0 is able to handle it fairly well)\n\nWhat I'm wondering about is what folks experience with software raid vs\nhardware raid on linux is. A friend of mine ran a set of benchmarks at\nwork and found sw raid was running obscenely faster than the mylex and\n(some other brand that isn't 3ware) raids..\n\nOn the pro-hw side you have ones with battery backed cache, chacnes are\nthey are less likely to fail..\n\nOn the pro-sw side you have lots of speed and less cost (unfortunately,\nthere is a pathetic budget so spending $15k on a raid card is out of the\nquestion really).\n\nany thoughts?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Fri, 12 Sep 2003 10:34:26 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "software vs hw hard on linux"
},
{
"msg_contents": "My personal experience with RAID cards is that you have to spend money to get good performance. You need battery backed cache because RAID 5 only works well with write to cache turned on, and you need a good size cache too. If you don't have it, RAID 5 performance will suck big time. If you need speed, RAID 10 seems to be the only way to go, but of course that means you are gonna spend $$s on drives and chasis. I wish someone would start a website like storagereview.com for RAID cards because I have had _vastly_ differing experience with different cards. We currently have a compaq ML370 with a Compaq Smart Array 5300, and quite frankly it sucks (8MB/sec write). I get better performance numbers off my new Tyan Thunder s2469UGN board with a single U320 10k RPM drive (50MB/sec) than we get off our RAID 5 array including seeks/sec. Definately shop around, and hopefully some other folks can give some suggestions of a good RAID card, and a good config.\n\nAlex Turner\n\nP.S. If there is movement for a RAID review site, I would be willing to start one, I'm pretty dissapointed at the lack of resources out there for this.\n\nOn Fri, Sep 12, 2003 at 10:34:26AM -0400, Jeff wrote:\n> Due to various third party issues, and the fact PG rules, we're planning\n> on migrating our deplorable informix db to PG. It is a rather large DB\n> with a rather high amount of activity (mostly updates). So I'm going to\n> be aquiring a dual (or quad if they'll give me money) box. (In my testing\n> my glorious P2 with a 2 spindle raid0 is able to handle it fairly well)\n> \n> What I'm wondering about is what folks experience with software raid vs\n> hardware raid on linux is. A friend of mine ran a set of benchmarks at\n> work and found sw raid was running obscenely faster than the mylex and\n> (some other brand that isn't 3ware) raids..\n> \n> On the pro-hw side you have ones with battery backed cache, chacnes are\n> they are less likely to fail..\n> \n> On the pro-sw side you have lots of speed and less cost (unfortunately,\n> there is a pathetic budget so spending $15k on a raid card is out of the\n> question really).\n> \n> any thoughts?\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Fri, 12 Sep 2003 10:49:24 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
},
{
"msg_contents": "[email protected] (Jeff) writes:\n> On the pro-sw side you have lots of speed and less cost (unfortunately,\n> there is a pathetic budget so spending $15k on a raid card is out of the\n> question really).\n\nI have been playing with a Perq3 QC card\n <http://www.scsi4me.com/?menu=menu_scsi&pid=143>\nwhich isn't anywhere near $15K, and which certainly seems to provide the\ncharacteristic improved performance.\n\nPriceWatch is showing several LSI Logic cards in the $300-$400 range\nwith battery backed cache, which doesn't seem too out of line. \n\nIt would seem a good tradeoff to buy one of these cards and drop a\nSCSI drive off the array.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 12 Sep 2003 11:32:18 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
},
{
"msg_contents": "Jeff,\n\n> What I'm wondering about is what folks experience with software raid vs\n> hardware raid on linux is. A friend of mine ran a set of benchmarks at\n> work and found sw raid was running obscenely faster than the mylex and\n> (some other brand that isn't 3ware) raids..\n\nOur company has stopped recommending hardware raid for all low-to-medium end \nsystems. Our experience is that Linux SW RAID does as good a job as any \n$700 to $1000 RAID card, and has the advantage of not having lots of driver \nissues (for example, we still have one system running Linux 2.2.19 because \nthe Mylex driver maintainer passed away in early 2002).\n\nThe exception to this is if you are expecting to frequently max out your CPU \nand/or RAM with your application, in which case the SW RAID might not be so \ngood because you would get query-vs.-RAID CPU contention.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 12 Sep 2003 09:55:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
},
{
"msg_contents": ">>>>> \"a\" == aturner <[email protected]> writes:\n\na> you need a good size cache too. If you don't have it, RAID 5\na> performance will suck big time. If you need speed, RAID 10 seems\na> to be the only way to go, but of course that means you are gonna\na> spend $$s on drives and chasis. I wish someone would start a\n\nI disagree on your RAID level assertions. Check back about 10 or 15\ndays on this list for some numbers I posted on restore times for a 20+\nGB database with different RAID levels. RAID5 came out fastest\ncompared with RAID10 and RAID50 across 14 disks. On my 5 disk system,\nI run RAID10 plus a spare in preference to RAID5 as it is faster for\nthat. So the answer is \"it depends\". ;-)\n\nBoth systems use SCSI hardware RAID controllers, one is LSI and the\nother Adaptec, all hardware from Dell.\n\nBut if you're budget limited, spend every last penny you have on the\nfastest disks you can get, and then boost memory. Any current CPU\nwill be more than enough for Postgres.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 12 Sep 2003 15:03:07 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
},
{
"msg_contents": ">>>>> \"J\" == Jeff <[email protected]> writes:\n\nJ> Due to various third party issues, and the fact PG rules, we're planning\nJ> on migrating our deplorable informix db to PG. It is a rather large DB\nJ> with a rather high amount of activity (mostly updates). So I'm going to\n\nIf at all possible, batch your updates within transactions containing\nas many of those updates as you can. You will get *much* better\nperformance.\n\nMore than 2 procs is probably overkill.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 12 Sep 2003 15:04:20 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
},
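A trivial sketch of the batching Vivek describes (table and values are placeholders); each COMMIT forces a WAL flush, so folding many rows into one transaction avoids paying that cost per row:

    BEGIN;
    INSERT INTO readings (sensor, val) VALUES ('a', 1);
    INSERT INTO readings (sensor, val) VALUES ('b', 2);
    -- ... several hundred or thousand more rows ...
    COMMIT;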
{
"msg_contents": "On Fri, 2003-09-12 at 07:34, Jeff wrote:\n\n> What I'm wondering about is what folks experience with software raid vs\n> hardware raid on linux is. A friend of mine ran a set of benchmarks at\n> work and found sw raid was running obscenely faster than the mylex and\n> (some other brand that isn't 3ware) raids..\n\nI ended up going with a hybrid: RAID-1 across sets of two disks in\nhardware on Adaptec ZCR cards, and RAID-0 across the RAID-1s with Linux\nsoftware RAID.\n\nAlthough the ZCR (2010 I believe) supports 0+1, using software striping\nturned in better performance for me.\n\nThis way, I get brain dead simple dead disk replacement handled by\nhardware with some speed from software RAID.\n\nAlso, I would think mirroring on the SCSI controller should take traffic\noff the PCI bus... <shrug>\n\nI have another machine that's stuck using a Compaq 5i plus controller\nwith no battery backed write cache, in RAID 5. It sucks. Really bad. I'd\nrather use an IDE drive. :)\n\n\n\n\n\n",
"msg_date": "Sat, 13 Sep 2003 22:40:20 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: software vs hw hard on linux"
}
] |
[
{
"msg_contents": "\nThe Dell PERC controllers have a very strong reputation for terrible\nperformance. If you search the archives of the Dell Linux Power Edge list\n(dell.com/linux), you will find many, many people who get better\nperformance from software RAID, rather than the hw RAID on the PERC.\nHaving said that, the 3/SC might be one of the better PERC controllers. I\nwould spend and hour or two and benchmark hw vs. sw before I committed to\neither one.\n\nThom Dyson\nDirector of Information Services\nSybex, Inc.\n\nOn 9/12/2003 9:55:40 AM, Richard Jones <[email protected]> wrote:\n> The machine is coming from dell, and i have the option of a\n> PERC 3/SC RAID Controller (32MB)\n> or software raid.\n>\n> does anyone have any experience of this controller?\n> its an additional £345 for this controller, i'd be interested to know\nwhat\n> people think - my other option is to buy the raid controller separately,\n> which appeals to me but i wouldnt know what to look for in a raid\n> controller.\n>\n> that raid controller review site sounds like a good idea :)\n>\n> Richard.\n\n\n",
"msg_date": "Fri, 12 Sep 2003 10:03:10 -0700",
"msg_from": "\"Thom Dyson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance - Dell"
},
{
"msg_contents": "I would like to point out though on the PERC controllers that are LSI\nbased ( Megaraid ) there -are- settings that can be changed to fix any\no the performance issues. Check the linux megaraid driver list archives\nto see the full description. I've seen it come up many times and\nbasically all the problems have turned up resolved.\n\nWill\n\n\nOn Fri, 2003-09-12 at 10:03, Thom Dyson wrote:\n> \n> The Dell PERC controllers have a very strong reputation for terrible\n> performance. If you search the archives of the Dell Linux Power Edge list\n> (dell.com/linux), you will find many, many people who get better\n> performance from software RAID, rather than the hw RAID on the PERC.\n> Having said that, the 3/SC might be one of the better PERC controllers. I\n> would spend and hour or two and benchmark hw vs. sw before I committed to\n> either one.\n> \n> Thom Dyson\n> Director of Information Services\n> Sybex, Inc.\n> \n> On 9/12/2003 9:55:40 AM, Richard Jones <[email protected]> wrote:\n> > The machine is coming from dell, and i have the option of a\n> > PERC 3/SC RAID Controller (32MB)\n> > or software raid.\n> >\n> > does anyone have any experience of this controller?\n> > its an additional £345 for this controller, i'd be interested to know\n> what\n> > people think - my other option is to buy the raid controller separately,\n> > which appeals to me but i wouldnt know what to look for in a raid\n> > controller.\n> >\n> > that raid controller review site sounds like a good idea :)\n> >\n> > Richard.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "12 Sep 2003 10:33:28 -0700",
"msg_from": "Will LaShell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": ">>>>> \"TD\" == Thom Dyson <[email protected]> writes:\n\nTD> The Dell PERC controllers have a very strong reputation for terrible\nTD> performance. If you search the archives of the Dell Linux Power Edge list\nTD> (dell.com/linux), you will find many, many people who get better\nTD> performance from software RAID, rather than the hw RAID on the PERC.\n\nThe PERC controllers are just a fancy name for a whole host of\ndifferent hardware. I have several, and some are made by LSI and some\nare made by Adaptec. My latest is PERC3/DC which is an LSI MegaRAID\nand is pretty darned fast.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 12 Sep 2003 15:06:59 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance - Dell"
},
{
"msg_contents": ">>>>> \"WL\" == Will LaShell <[email protected]> writes:\n\nWL> o the performance issues. Check the linux megaraid driver list archives\nWL> to see the full description. I've seen it come up many times and\nWL> basically all the problems have turned up resolved.\n\nI've seen this advice a couple of times, but perhaps I'm just not a\ngood archive searcher because I can't find such recommendations on the\nlinux-megaraid-devel list archives...\n\nAnyone have a direct pointer to right info?\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 12 Sep 2003 15:23:06 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance"
},
{
"msg_contents": "[email protected] (\"Thom Dyson\") writes:\n> The Dell PERC controllers have a very strong reputation for terrible\n> performance. If you search the archives of the Dell Linux Power\n> Edge list (dell.com/linux), you will find many, many people who get\n> better performance from software RAID, rather than the hw RAID on\n> the PERC. Having said that, the 3/SC might be one of the better\n> PERC controllers. I would spend and hour or two and benchmark hw\n> vs. sw before I committed to either one.\n\nI can't agree with that.\n\n1. If you search the archives for messages dated a couple of years\nago, you can find lots of messages indicating terrible performance.\n\nDrivers are not cast in concrete; there has been a LOT of change to\nthem since then.\n\n2. The second MAJOR merit to hardware RAID is the ability to hot-swap\ndrives. Software RAID doesn't help with that at all.\n\n3. The _immense_ performance improvement that can be gotten out of\nthese controllers comes from having fsync() turn into a near no-op\nsince changes can be committed to the 128K battery-backed cache REALLY\nQUICKLY.\n\nThat is something you should avoid doing with software RAID in any\ncase where you actually care about your data.\n\nThat third part is where Big Wins come. It is the very same sort of\n\"big win from cacheing\" that we saw, years ago, when we improved\nsystem performance _immensely_ by adding a mere 16 bytes of cache by\nbuying serial controller cards with cacheing UUARTs. It is akin to\nthe way SCSI controllers got pretty big performance improvements by\nadding 256 bytes of tagged command cache.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 12 Sep 2003 16:47:22 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best arrangement of 3 disks for (insert) performance - Dell"
}
] |
[
{
"msg_contents": "And the winner is... checkpoint_segments.\n\nRestore of a significanly big database (~19.8GB restored) shows nearly\nno time difference depending on sort_mem when checkpoint_segments is\nlarge. There are quite a number of tables and indexes. The restore\nwas done from a pg_dump -Fc dump of one database.\n\nAll tests with 16KB page size, 30k shared buffers, sort_mem=8192, PG\n7.4b2 on FreeBSD 4.8.\n\n3 checkpoint_segments restore time: 14983 seconds\n50 checkpoint_segments restore time: 11537 seconds\n50 checkpoint_segments, sort_mem 131702 restore time: 11262 seconds\n\nThere's an initdb between each test.\n\nFor reference, the restore with 8k page size, 60k buffers, 8192\nsort_mem and 3 checkpoint buffers was 14777 seconds.\n\nIt seems for restore that a larger number of checkpoint buffers is the\nkey, especially when dealing with large numbers of rows in a table.\n\nI notice during the restore that the disk throughput triples during\nthe checkpoint.\n\nThe postgres data partition is on a 14-spindle hardware RAID5 on U320\nSCSI disks.\n",
"msg_date": "Mon, 15 Sep 2003 15:15:46 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "restore time: sort_mem vs. checkpoing_segments"
},
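The corresponding postgresql.conf excerpt is short; each WAL segment is 16MB, so a larger setting mainly trades pg_xlog disk space for fewer checkpoints during the bulk load (the values below simply mirror the faster runs above):

    checkpoint_segments = 50      # default is 3
    #checkpoint_timeout = 300     # seconds, left at the default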
{
"msg_contents": "Vivek,\n\n> And the winner is... checkpoint_segments.\n> \n> Restore of a significanly big database (~19.8GB restored) shows nearly\n> no time difference depending on sort_mem when checkpoint_segments is\n> large. There are quite a number of tables and indexes. The restore\n> was done from a pg_dump -Fc dump of one database.\n\nCool! Thank you for posting this.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 15 Sep 2003 14:42:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": "Vivek Khera <[email protected]> writes:\n> Restore of a significanly big database (~19.8GB restored) shows nearly\n> no time difference depending on sort_mem when checkpoint_segments is\n> large. There are quite a number of tables and indexes. The restore\n> was done from a pg_dump -Fc dump of one database.\n\nI was just bugging Marc for some useful data, so I'll ask you too:\ncould you provide a trace of the pg_restore execution? log_statement\nplus log_duration output would do it. I am curious to understand\nexactly which steps in the restore are significant time sinks.\n\n> I notice during the restore that the disk throughput triples during\n> the checkpoint.\n\nHm, better make sure the log includes some indication of when\ncheckpoints happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Sep 2003 01:19:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments "
},
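For anyone wanting to capture the same kind of trace, the 7.3/7.4-era settings are booleans in postgresql.conf, roughly:

    log_statement = true     # log every statement
    log_duration  = true     # log how long each one took
    log_timestamp = true     # helps line statements up with checkpoint activity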
{
"msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\nTL> I was just bugging Marc for some useful data, so I'll ask you too:\nTL> could you provide a trace of the pg_restore execution? log_statement\nTL> plus log_duration output would do it. I am curious to understand\nTL> exactly which steps in the restore are significant time sinks.\n\nSure... machine isn't gonna do much of anything until 7.4 is released\n(or I hear a promise of no more dump/reload).\n\n>> I notice during the restore that the disk throughput triples during\n>> the checkpoint.\n\nTL> Hm, better make sure the log includes some indication of when\nTL> checkpoints happen.\n\nThat it does.\n\nI'll post the results in the next couple of days, as each run takes\nabout 4 hours ;-)\n",
"msg_date": "Tue, 16 Sep 2003 09:59:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments "
},
{
"msg_contents": "On Mon, 2003-09-15 at 15:15, Vivek Khera wrote:\n> And the winner is... checkpoint_segments.\n> \n> Restore of a significanly big database (~19.8GB restored) shows nearly\n> no time difference depending on sort_mem when checkpoint_segments is\n> large. There are quite a number of tables and indexes. The restore\n> was done from a pg_dump -Fc dump of one database.\n> \n> All tests with 16KB page size, 30k shared buffers, sort_mem=8192, PG\n> 7.4b2 on FreeBSD 4.8.\n\nhmm... i wonder what would happen if you pushed your sort_mem higher...\non some of our development boxes and upgrade scripts, i push the\nsort_mem to 102400 and sometimes even higher depending on the box. this\nreally speeds up my restores quit a bit (and is generally safe as i make\nsure there isn't any other activity going on at the time)\n\nanother thing i like to do is turn of fsync, as if the system crashes in\nthe middle of reload i'm pretty sure i'd be starting all over anyway...\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "17 Sep 2003 16:15:46 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": ">>>>> \"RT\" == Robert Treat <[email protected]> writes:\n\nRT> hmm... i wonder what would happen if you pushed your sort_mem higher...\nRT> on some of our development boxes and upgrade scripts, i push the\nRT> sort_mem to 102400 and sometimes even higher depending on the box. this\nRT> really speeds up my restores quit a bit (and is generally safe as i make\nRT> sure there isn't any other activity going on at the time)\n\nOk... just two more tests to run, no big deal ;-)\n\n\nRT> another thing i like to do is turn of fsync, as if the system crashes in\nRT> the middle of reload i'm pretty sure i'd be starting all over anyway...\n\nI'll test it and see what happens. I suspect not a big improvement on\na hardware RAID card with 128Mb backed up cache, though. But never\nsay never!\n",
"msg_date": "Wed, 17 Sep 2003 16:21:46 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": ">>>>> \"RT\" == Robert Treat <[email protected]> writes:\n\nRT> hmm... i wonder what would happen if you pushed your sort_mem higher...\nRT> on some of our development boxes and upgrade scripts, i push the\nRT> sort_mem to 102400 and sometimes even higher depending on the box. this\nRT> really speeds up my restores quit a bit (and is generally safe as i make\nRT> sure there isn't any other activity going on at the time)\n\nI was just checking, and I already ran test with larger sort_mem. the\ncheckpoint segments made more of a difference...\n",
"msg_date": "Mon, 22 Sep 2003 16:17:54 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": "Vivek Khera wrote:\n> And the winner is... checkpoint_segments.\n> \n> Restore of a significanly big database (~19.8GB restored) shows nearly\n> no time difference depending on sort_mem when checkpoint_segments is\n> large. There are quite a number of tables and indexes. The restore\n> was done from a pg_dump -Fc dump of one database.\n> \n> All tests with 16KB page size, 30k shared buffers, sort_mem=8192, PG\n> 7.4b2 on FreeBSD 4.8.\n> \n> 3 checkpoint_segments restore time: 14983 seconds\n> 50 checkpoint_segments restore time: 11537 seconds\n> 50 checkpoint_segments, sort_mem 131702 restore time: 11262 seconds\n\nWith the new warning about too-frequent checkpoints, people have actual\nfeedback to encourage them to increase checkpoint_segments. One issue\nis that it is likely to recommend increasing checkpoint_segments during\nrestore, even if there is no value to it being large during normal\nserver operation. Should that be decumented?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 23 Sep 2003 09:59:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\nBM> restore, even if there is no value to it being large during normal\nBM> server operation. Should that be decumented?\n\n\nYes, right alongside the recommendation to bump sort_mem, even though\nin my tests sort_mem made no significant difference in restore time\ngoing from 8m to 128m.\n",
"msg_date": "Tue, 23 Sep 2003 10:27:55 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
},
{
"msg_contents": "On Tue, 23 Sep 2003, Bruce Momjian wrote:\n\n> With the new warning about too-frequent checkpoints, people have actual\n> feedback to encourage them to increase checkpoint_segments. One issue\n> is that it is likely to recommend increasing checkpoint_segments during\n> restore, even if there is no value to it being large during normal\n> server operation. Should that be decumented?\n\nOne could have a variable that turns off that warning, and have pg_dump\ninsert a statement to turn it off. That is, if one never want these\nwarnings from a restore (from a new dump).\n\nIn any case, documentation is good and still needed.\n\n-- \n/Dennis\n\n",
"msg_date": "Wed, 24 Sep 2003 07:24:26 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: restore time: sort_mem vs. checkpoing_segments"
}
] |
[
{
"msg_contents": "To whoever can assist,\n\nI am working with a decent sized database on an extremely powerful machine. \nThe specs follow:\n\n\tOS:\t\t\tRedHat Linux 9.0\n\tPG Version\t\t7.3\n\tMemory\t\t1 gig\n\tCPU\t\t\tQuad Processor - Unsure of exact CPUs\n\tHard Drive\t\t80 gigs\n\tDatabase Size\t\t2 gigs\n\t\n\nAs you can see the server is built for overkill.\n\nThe problem that I see is as follows.\n\nI do a rather simple query: select count (*) from large-table where column \n= some value;\n\nAbout 80% of the time, the response time is sub-second. However, at 10% of \nthe time, the response time is 5 - 10 seconds.\n\nThis is nothing readily apparent at the system level that comes close to \nexplaining the performance hits. CPU and memory usage (as measured by top) \nappear to be fine.\n\nAlthough there are certain tuning issues within the database itself, no \ndocumentation I have seen seems to indicate that tuning issues would lead \nto such inconsistent response time.\n\nAny ideas?\n\nRegards,\n\nJoseph\n\n",
"msg_date": "Mon, 15 Sep 2003 17:34:12 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inconsistent performance"
},
{
"msg_contents": "On Mon, 15 Sep 2003, Joseph Bove wrote:\n\n> I am working with a decent sized database on an extremely powerful machine.\n> The specs follow:\n>\n> \tOS:\t\t\tRedHat Linux 9.0\n> \tPG Version\t\t7.3\n> \tMemory\t\t1 gig\n> \tCPU\t\t\tQuad Processor - Unsure of exact CPUs\n> \tHard Drive\t\t80 gigs\n> \tDatabase Size\t\t2 gigs\n>\n>\n> As you can see the server is built for overkill.\n>\n> The problem that I see is as follows.\n>\n> I do a rather simple query: select count (*) from large-table where column\n> = some value;\n>\n> About 80% of the time, the response time is sub-second. However, at 10% of\n> the time, the response time is 5 - 10 seconds.\n\nIs it consistant for various values of \"some value\"? If so, it's possible\nthat it's switching plans based on the apparent selectivity of the column\nfor that value.\n",
"msg_date": "Mon, 15 Sep 2003 14:34:49 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "On Mon, Sep 15, 2003 at 17:34:12 -0400,\n Joseph Bove <[email protected]> wrote:\n> \n> I do a rather simple query: select count (*) from large-table where column \n> = some value;\n> \n> About 80% of the time, the response time is sub-second. However, at 10% of \n> the time, the response time is 5 - 10 seconds.\n> \n> This is nothing readily apparent at the system level that comes close to \n> explaining the performance hits. CPU and memory usage (as measured by top) \n> appear to be fine.\n> \n> Although there are certain tuning issues within the database itself, no \n> documentation I have seen seems to indicate that tuning issues would lead \n> to such inconsistent response time.\n\nLooking at the output from explain analyze for the query would be useful.\nIt may be that there are a lot of rows that have the value in the problem\nqueries.\n",
"msg_date": "Mon, 15 Sep 2003 16:42:28 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "[email protected] (Joseph Bove) writes:\n> I do a rather simple query: select count (*) from large-table where\n> column = some value;\n>\n> About 80% of the time, the response time is sub-second. However, at\n> 10% of the time, the response time is 5 - 10 seconds.\n\nDoes it seem data-dependent?\n\nThat is, does the time vary for different values of \"some value?\"\n\nIf a particular value is particularly common, the system might well\nrevert to a sequential scan, making the assumption that it is quicker\nto look at every page in the table rather than to walk through\nEnormous Numbers of records.\n\nI had a case very similar to this where a table had _incredible_\nskewing of this sort where there were a small number of column values\nthat occurred hundreds of thousands of times, and other column values\nonly occurred a handful of times.\n\nI was able to get Excellent Performance back by setting up two partial\nindices:\n - One for WHERE THIS_COLUMN > VITAL_VALUE;\n - One for WHERE THIS_COLUMN < VITAL_VALUE;\n\nThe REALLY COMMON values were in the range < VITAL_VALUE.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Mon, 15 Sep 2003 18:18:50 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
}
] |
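A minimal sketch of the partial-index trick described in the last reply above, with a hypothetical table, column, and boundary value standing in for the ones the post does not name:

    CREATE INDEX big_table_col_lo ON big_table (the_column) WHERE the_column < 1000;
    CREATE INDEX big_table_col_hi ON big_table (the_column) WHERE the_column > 1000;
    ANALYZE big_table;

Queries whose WHERE clause pins the_column to one side of the boundary can then use the small partial index instead of one dominated by the handful of extremely common values.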
[
{
"msg_contents": "Joseph,\n\nPlease see this web page before posting anything else:\nhttp://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines\n\nCurrently, you are not posting enough data for anyone to be of meaningful \nhelp.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 15 Sep 2003 15:15:09 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "Stephan,\n\nActually, it's inconsistent with the exact same command. I've now \nreplicated the problem by doing the following command:\n\nselect count (*) from table;\n\nThe table in question has 88899 rows.\n\nThe response time is anywhere from 1 second to 12 seconds. Different \nresponse times can occur in the same minute of testing!\n\nRegards,\n\nJoseph\n\n\n\nAt 02:34 PM 9/15/2003 -0700, you wrote:\n>On Mon, 15 Sep 2003, Joseph Bove wrote:\n>\n> > I am working with a decent sized database on an extremely powerful machine.\n> > The specs follow:\n> >\n> > OS: RedHat Linux 9.0\n> > PG Version 7.3\n> > Memory 1 gig\n> > CPU Quad Processor - Unsure of exact CPUs\n> > Hard Drive 80 gigs\n> > Database Size 2 gigs\n> >\n> >\n> > As you can see the server is built for overkill.\n> >\n> > The problem that I see is as follows.\n> >\n> > I do a rather simple query: select count (*) from large-table where column\n> > = some value;\n> >\n> > About 80% of the time, the response time is sub-second. However, at 10% of\n> > the time, the response time is 5 - 10 seconds.\n>\n>Is it consistant for various values of \"some value\"? If so, it's possible\n>that it's switching plans based on the apparent selectivity of the column\n>for that value.\n\n\n",
"msg_date": "Mon, 15 Sep 2003 18:24:27 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "\nOn Mon, 15 Sep 2003, Joseph Bove wrote:\n\n> Stephan,\n>\n> Actually, it's inconsistent with the exact same command. I've now\n> replicated the problem by doing the following command:\n>\n> select count (*) from table;\n>\n> The table in question has 88899 rows.\n>\n> The response time is anywhere from 1 second to 12 seconds. Different\n> response times can occur in the same minute of testing!\n\nWell, that's really only got one valid plan right now (seqscan and\naggregate). It'd be mildly interesting to see what explain analyze says in\nslow and fast states, although I'd be willing to bet that it's just going\nto effectively show that the seqscan is taking more or less time.\n\nI think we're going to need to see the configuration settings for the\nserver and possibly some info on how big the table is (say relpages for\nthe pg_class row associated with the table after a vacuum full).\n",
"msg_date": "Mon, 15 Sep 2003 15:49:10 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
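The figures Stephan asks for can be pulled with something along these lines (the table name is the one that appears later in the thread; note that VACUUM FULL takes an exclusive lock on the table, so it is best run off-peak):

    VACUUM FULL VERBOSE vetapview;
    SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'vetapview';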
{
"msg_contents": "Stephan,\n\nI've run explain analyze a number of times and have gotten results between \n5.5 and 7.5 seconds\n\nAttached is a typical output\n\n QUERY PLAN\n-------------------------------------\n Aggregate (cost=9993.92..9993.92 rows=1 width=0)\n (actual time=7575.59..7575.59 rows=1 loops=1)\n-> Seq Scan on vetapview (cost=0.00..9771.34 rows=89034 width=0)\n (actual time=0.06..7472.20 \nrows=88910 loops=1)\n Total runtime: 7575.67 msec\n(3 rows)\n\nThe only things changing are the actual time. The costs are constant.\n\nThe relpages from pg_class for vetapview (the table in question) is 8881.\n\nAt the end of this message is the exhaustive contents of postgresql.conf. \nThe only settings I have attempted tuning are as follows:\n\ntcpip_socket = true\nmax_connections = 100\nshared_buffers = 5000\nsort_mem = 8192\nfsync = false\n\nI did have shared_buffers and sort_mem both set higher originally (15000, \n32168) but decreased them in case over-utilization of memory was the problem.\n\nThe kernel setting shmmax is set to 256,000,000 (out of 1 gig)\n\nRegards,\n\nJoseph\n\npostgresql.conf\n\n#\n# Connection Parameters\n#\ntcpip_socket = true\n#ssl = false\n\nmax_connections = 100\n#superuser_reserved_connections = 2\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n# Shared Memory Size\n#\n#shared_buffers = 15000 # min max_connections*2 or 16, 8KB each\nshared_buffers = 5000\n#max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n#max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\n#sort_mem = 32168 # min 64, size in KB\nsort_mem = 8192\n#vacuum_mem = 8192 # min 1024, size in KB\n# \n\n# Write-ahead log (WAL)\n#\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n#\nfsync = false\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0 # range 0-16\n\n\n#\n# Optimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n#default_statistics_target = 10 # range 1-1000\n\n#\n# GEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0#geqo_random_seed = -1 # auto-compute seed\n\n\n#\n# Message display\n#\n#server_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n#silent_mode = 
false\n\n#log_connections = false\n#log_pid = false\n#log_statement = false\n#log_duration = false\n#log_timestamp = false\n\n#log_min_error_statement = error # Values in order of increasing severity:\n\n#log_min_error_statement = error # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#explain_pretty_print = true\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n# Syslog\n#\n#syslog = 0 # range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# \n\n# Statistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_statement_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n# Access statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n\n# \n\n# Lock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n# Misc\n#\n#autocommit = true\n#dynamic_library_path = '$libdir'\n#search_path = '$user,public'\n#datestyle = 'iso, us'\n#timezone = unknown # actually, defaults to TZ environment setting\n#datestyle = 'iso, us'\n#timezone = unknown # actually, defaults to TZ environment setting\n#australian_timezones = false\n#client_encoding = sql_ascii # actually, defaults to database encoding\n#authentication_timeout = 60 # 1-600, in seconds\n#deadlock_timeout = 1000 # in milliseconds\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000 # min 10\n#max_files_per_process = 1000 # min 25\n#password_encryption = true\n#sql_inheritance = true\n#transform_null_equals = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n#db_user_namespace = false\n\n\n\n#\n# Locale settings\n#\n# (initialized by initdb -- may be changed)\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\nAt 03:49 PM 9/15/2003 -0700, Stephan Szabo wrote:\n\n>On Mon, 15 Sep 2003, Joseph Bove wrote:\n>\n> > Stephan,\n> >\n> > Actually, it's inconsistent with the exact same command. I've now\n> > replicated the problem by doing the following command:\n> >\n> > select count (*) from table;\n> >\n> > The table in question has 88899 rows.\n> >\n> > The response time is anywhere from 1 second to 12 seconds. Different\n> > response times can occur in the same minute of testing!\n>\n>Well, that's really only got one valid plan right now (seqscan and\n>aggregate). It'd be mildly interesting to see what explain analyze says in\n>slow and fast states, although I'd be willing to bet that it's just going\n>to effectively show that the seqscan is taking more or less time.\n>\n>I think we're going to need to see the configuration settings for the\n>server and possibly some info on how big the table is (say relpages for\n>the pg_class row associated with the table after a vacuum full).\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n",
"msg_date": "Mon, 15 Sep 2003 19:28:40 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "On Mon, 15 Sep 2003, Joseph Bove wrote:\n\n> Stephan,\n> \n> I've run explain analyze a number of times and have gotten results between \n> 5.5 and 7.5 seconds\n> \n> Attached is a typical output\n> \n> QUERY PLAN\n> -------------------------------------\n> Aggregate (cost=9993.92..9993.92 rows=1 width=0)\n> (actual time=7575.59..7575.59 rows=1 loops=1)\n> -> Seq Scan on vetapview (cost=0.00..9771.34 rows=89034 width=0)\n> (actual time=0.06..7472.20 \n> rows=88910 loops=1)\n> Total runtime: 7575.67 msec\n> (3 rows)\n> \n> The only things changing are the actual time. The costs are constant.\n> \n> The relpages from pg_class for vetapview (the table in question) is 8881.\n> \n> At the end of this message is the exhaustive contents of postgresql.conf. \n> The only settings I have attempted tuning are as follows:\n> \n> tcpip_socket = true\n> max_connections = 100\n> shared_buffers = 5000\n> sort_mem = 8192\n> fsync = false\n\nA couple of things.\n\n1: Is there an index on the parts of the query used for the where clause?\n2: What is your effect_cache_size set to? It needs to be set right for \nyour postgresql server to be able to take advantage of the kernel's cache \n(i.e. use an index scan when the kernel is likely to have that data in \nmemory.)\n\n",
"msg_date": "Mon, 15 Sep 2003 18:22:34 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "On Mon, 15 Sep 2003, scott.marlowe wrote:\n\n> On Mon, 15 Sep 2003, Joseph Bove wrote:\n> \n> > Stephan,\n> > \n> > I've run explain analyze a number of times and have gotten results between \n> > 5.5 and 7.5 seconds\n> > \n> > Attached is a typical output\n> > \n> > QUERY PLAN\n> > -------------------------------------\n> > Aggregate (cost=9993.92..9993.92 rows=1 width=0)\n> > (actual time=7575.59..7575.59 rows=1 loops=1)\n> > -> Seq Scan on vetapview (cost=0.00..9771.34 rows=89034 width=0)\n> > (actual time=0.06..7472.20 \n> > rows=88910 loops=1)\n> > Total runtime: 7575.67 msec\n> > (3 rows)\n> > \n> > The only things changing are the actual time. The costs are constant.\n> > \n> > The relpages from pg_class for vetapview (the table in question) is 8881.\n> > \n> > At the end of this message is the exhaustive contents of postgresql.conf. \n> > The only settings I have attempted tuning are as follows:\n> > \n> > tcpip_socket = true\n> > max_connections = 100\n> > shared_buffers = 5000\n> > sort_mem = 8192\n> > fsync = false\n> \n> A couple of things.\n> \n> 1: Is there an index on the parts of the query used for the where clause?\n> 2: What is your effect_cache_size set to? It needs to be set right for \n> your postgresql server to be able to take advantage of the kernel's cache \n> (i.e. use an index scan when the kernel is likely to have that data in \n> memory.)\n\nSorry, that should be effective_cache_size, not effect_cache_size. It's \nset in 8k blocks and is usually about how much buffer / cache you have \nleft over after the machines \"settles\" after being up and running for a \nwhile. Fer instance, on my server, I show 784992K cache, and 42976K buff \nunder top, so, that's 827968k/8k=103496 blocks. Note that if you've \nrecompiled you may have somehow set block size larger, but installations \nwith postgresql block sizes ~=8k are pretty uncommon, and you'd know if \nyou had done that, so it's probably 8k blocks.\n\n",
"msg_date": "Mon, 15 Sep 2003 18:39:08 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "it seems like the difference is probably related to caching. you say \nyou have 1gb of ram, and the database is 2gb. Obviously the entire \ndatabase isn't cached, but maybe your query runs fast when the table is \nin memory, and they it gets swapped out of cache because some other \npiece of information moves into memory. In that circumstance, it has \nto load the information from disk and is therefor slow.\n\nhow busy is the system? what other programs are running on the \nmachine? how big (on disk) is the table in question? what kind of load \ndoes the system have? is it a single 80gb ide drive? Even though \nyou have 4 CPU's a small amount of memory and bad IO system will kill \nthe database.\n\n\nOn Monday, September 15, 2003, at 05:28 PM, Joseph Bove wrote:\n\n> Stephan,\n>\n> I've run explain analyze a number of times and have gotten results \n> between 5.5 and 7.5 seconds\n>\n> Attached is a typical output\n>\n> QUERY PLAN\n> -------------------------------------\n> Aggregate (cost=9993.92..9993.92 rows=1 width=0)\n> (actual time=7575.59..7575.59 rows=1 loops=1)\n> -> Seq Scan on vetapview (cost=0.00..9771.34 rows=89034 width=0)\n> (actual time=0.06..7472.20 \n> rows=88910 loops=1)\n> Total runtime: 7575.67 msec\n> (3 rows)\n>\n> The only things changing are the actual time. The costs are constant.\n>\n> The relpages from pg_class for vetapview (the table in question) is \n> 8881.\n>\n> At the end of this message is the exhaustive contents of \n> postgresql.conf. The only settings I have attempted tuning are as \n> follows:\n>\n> tcpip_socket = true\n> max_connections = 100\n> shared_buffers = 5000\n> sort_mem = 8192\n> fsync = false\n>\n> I did have shared_buffers and sort_mem both set higher originally \n> (15000, 32168) but decreased them in case over-utilization of memory \n> was the problem.\n>\n> The kernel setting shmmax is set to 256,000,000 (out of 1 gig)\n>\n> Regards,\n>\n> Joseph\n>\n> postgresql.conf\n>\n> #\n> # Connection Parameters\n> #\n> tcpip_socket = true\n> #ssl = false\n>\n> max_connections = 100\n> #superuser_reserved_connections = 2\n>\n> #port = 5432\n> #hostname_lookup = false\n> #show_source_port = false\n>\n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n>\n> #virtual_host = ''\n>\n> #krb_server_keyfile = ''\n>\n>\n> #\n> # Shared Memory Size\n> #\n> #shared_buffers = 15000 # min max_connections*2 or 16, 8KB each\n> shared_buffers = 5000\n> #max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 \n> bytes\n> #max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 \n> bytes\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4, typically 8KB each\n>\n> #\n> # Non-shared Memory Sizes\n> #\n> #sort_mem = 32168 # min 64, size in KB\n> sort_mem = 8192\n> #vacuum_mem = 8192 # min 1024, size in KB\n> #\n> # Write-ahead log (WAL)\n> #\n> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n> #\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n> #\n> fsync = false\n> #wal_sync_method = fsync # the default varies across platforms:\n> # # fsync, fdatasync, open_sync, or \n> open_datasync\n> #wal_debug = 0 # range 0-16\n>\n>\n> #\n> # Optimizer Parameters\n> #\n> #enable_seqscan = true\n> #enable_indexscan = true\n> #enable_tidscan = true\n> #enable_sort = true#enable_tidscan = true\n> #enable_sort = true\n> #enable_nestloop = true\n> #enable_mergejoin = 
true\n> #enable_hashjoin = true\n>\n> #effective_cache_size = 1000 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch \n> cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n> #default_statistics_target = 10 # range 1-1000\n>\n> #\n> # GEQO Optimizer Parameters\n> #\n> #geqo = true\n> #geqo_selection_bias = 2.0 # range 1.5-2.0\n> #geqo_threshold = 11\n> #geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n> #geqo_effort = 1\n> #geqo_generations = 0#geqo_random_seed = -1 # auto-compute \n> seed\n>\n>\n> #\n> # Message display\n> #\n> #server_min_messages = notice # Values, in order of decreasing \n> detail:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # info, notice, warning, error, log, \n> fatal,\n> # panic\n> #client_min_messages = notice # Values, in order of decreasing \n> detail:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # log, info, notice, warning, error\n> #silent_mode = false\n>\n> #log_connections = false\n> #log_pid = false\n> #log_statement = false\n> #log_duration = false\n> #log_timestamp = false\n>\n> #log_min_error_statement = error # Values in order of increasing \n> severity:\n>\n> #log_min_error_statement = error # Values in order of increasing \n> severity:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # info, notice, warning, error, \n> panic(off)\n>\n> #debug_print_parse = false\n> #debug_print_rewritten = false\n> #debug_print_plan = false\n> #debug_pretty_print = false\n>\n> #explain_pretty_print = true\n>\n> # requires USE_ASSERT_CHECKING\n> #debug_assertions = true\n>\n>\n> #\n> # Syslog\n> #\n> #syslog = 0 # range 0-2\n> #syslog_facility = 'LOCAL0'\n> #syslog_ident = 'postgres'\n>\n> #\n> # Statistics\n> #\n> #show_parser_stats = false\n> #show_planner_stats = false\n> #show_executor_stats = false\n> #show_statement_stats = false\n>\n> # requires BTREE_BUILD_STATS\n> #show_btree_build_stats = false\n>\n>\n> #\n> # Access statistics collection\n> #\n> #stats_start_collector = true\n> #stats_reset_on_server_start = true\n> #stats_command_string = false\n> #stats_row_level = false\n> #stats_block_level = false\n>\n> #\n> # Lock Tracing\n> #\n> #trace_notify = false\n>\n> # requires LOCK_DEBUG\n> #trace_locks = false\n> #trace_userlocks = false\n> #trace_lwlocks = false\n> #debug_deadlocks = false\n> #trace_lock_oidmin = 16384\n> #trace_lock_table = 0\n>\n>\n> #\n> # Misc\n> #\n> #autocommit = true\n> #dynamic_library_path = '$libdir'\n> #search_path = '$user,public'\n> #datestyle = 'iso, us'\n> #timezone = unknown # actually, defaults to TZ environment \n> setting\n> #datestyle = 'iso, us'\n> #timezone = unknown # actually, defaults to TZ environment \n> setting\n> #australian_timezones = false\n> #client_encoding = sql_ascii # actually, defaults to database \n> encoding\n> #authentication_timeout = 60 # 1-600, in seconds\n> #deadlock_timeout = 1000 # in milliseconds\n> #default_transaction_isolation = 'read committed'\n> #max_expr_depth = 10000 # min 10\n> #max_files_per_process = 1000 # min 25\n> #password_encryption = true\n> #sql_inheritance = true\n> #transform_null_equals = false\n> #statement_timeout = 0 # 0 is disabled, in milliseconds\n> #db_user_namespace = false\n>\n>\n>\n> #\n> # Locale settings\n> #\n> # (initialized by initdb -- may be changed)\n> LC_MESSAGES = 'en_US.UTF-8'\n> LC_MONETARY = 'en_US.UTF-8'\n> LC_NUMERIC = 'en_US.UTF-8'\n> LC_TIME = 'en_US.UTF-8'\n>\n> At 03:49 PM 9/15/2003 -0700, 
Stephan Szabo wrote:\n>\n>> On Mon, 15 Sep 2003, Joseph Bove wrote:\n>>\n>> > Stephan,\n>> >\n>> > Actually, it's inconsistent with the exact same command. I've now\n>> > replicated the problem by doing the following command:\n>> >\n>> > select count (*) from table;\n>> >\n>> > The table in question has 88899 rows.\n>> >\n>> > The response time is anywhere from 1 second to 12 seconds. Different\n>> > response times can occur in the same minute of testing!\n>>\n>> Well, that's really only got one valid plan right now (seqscan and\n>> aggregate). It'd be mildly interesting to see what explain analyze \n>> says in\n>> slow and fast states, although I'd be willing to bet that it's just \n>> going\n>> to effectively show that the seqscan is taking more or less time.\n>>\n>> I think we're going to need to see the configuration settings for the\n>> server and possibly some info on how big the table is (say relpages \n>> for\n>> the pg_class row associated with the table after a vacuum full).\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Mon, 15 Sep 2003 18:39:53 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "The world rejoiced as [email protected] (Joseph Bove) wrote:\n> Actually, it's inconsistent with the exact same command. I've now\n> replicated the problem by doing the following command:\n>\n> select count (*) from table;\n>\n> The table in question has 88899 rows.\n>\n> The response time is anywhere from 1 second to 12 seconds. Different\n> response times can occur in the same minute of testing!\n\nThe only possible plan for THAT query will involve a seq scan of the\nwhole table. If the postmaster already has the data in cache, it\nmakes sense for it to run in 1 second. If it has to read it from\ndisk, 12 seconds makes a lot of sense.\n\nYou might want to increase the \"shared_buffers\" parameter in\npostgresql.conf; that should lead to increased stability of times as\nit should be more likely that the data in \"table\" will remain in\ncache.\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/x.html\nSigns of a Klingon Programmer - 8. \"Debugging? Klingons do not\ndebug. Our software does not coddle the weak. Bugs are good for\nbuilding character in the user.\"\n",
"msg_date": "Mon, 15 Sep 2003 22:26:45 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "On Mon, 15 Sep 2003 22:26:45 -0400, Christopher Browne\n<[email protected]> wrote:\n>> select count (*) from table;\n>The only possible plan for THAT query will involve a seq scan of the\n>whole table. If the postmaster already has the data in cache, it\n>makes sense for it to run in 1 second. If it has to read it from\n>disk, 12 seconds makes a lot of sense.\n\nYes. And note that the main difference is between having the data in\nmemory and having to fetch it from disk. I don't believe that this\ndifference can be explained by 9000 read calls hitting the operating\nsystem's cache.\n\n>You might want to increase the \"shared_buffers\" parameter in\n>postgresql.conf; that should lead to increased stability of times as\n>it should be more likely that the data in \"table\" will remain in\n>cache.\n\nLet's not jump to this conclusion before we know what's going on.\n\nJoseph Bove <[email protected]> wrote in another message above:\n| I did have shared_buffers and sort_mem both set higher originally (15000, \n| 32168)\n\nAs I read this I think he meant \"... and had the same performance\nproblem.\"\n\nJoseph, what do you get, if you run that\n\t EXPLAIN ANALYSE SELECT count(*) ...\nseveral times? What do vmstat and top show while the query is\nrunning? Are there other processes active during or between the runs?\nWhat kind of processes? Postgres backends? Web server? ...\n\nServus\n Manfred\n",
"msg_date": "Tue, 16 Sep 2003 09:09:05 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "...\n> #effective_cache_size = 1000 # typically 8KB each\n\nThat's horribly wrong. It's telling PG that your OS is only likely to cache\n8MB of the DB in RAM. If you've got 1GB of memory it should be between\n64000 and 96000\n\n\n\n",
"msg_date": "Tue, 16 Sep 2003 09:31:01 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
{
"msg_contents": "Dear list,\n\nFirst and foremost, thanks to the great number of people who have responded \nwith various tips and suggestions. I am now starting to fully appreciate \nthe various cache settings and what they can do for performance.\n\nI just want to redefine the problem based on the knowledge of it that I now \nhave.\n\nIn my example, I am purposefully forcing a full table scan - select count \n(*) from table. This table has only 90,000 rows. Each row is comprised of \nabout 300 bytes of data.\n\nIf the table has not been cached, I was seeing response times from 5 to 18 \nseconds to read the table. If it had been cached, then the response time \ndropped to sub-second response.\n\nObviously, I can tune the caching so as to make sure that as much data that \ncan be reasonably cached is cached. However, I don't think that a hit of \neven 5 seconds to read a table of 90,000 rows is acceptable.\n\nOne thing that has been tried with some success was to dump the table and \nrecreate it. After this exercise, selecting all rows from the table when it \nis not in cache takes about 3 seconds. (Of course, when in cache, the same \nsub-second response time is seen.)\n\nI still think that 3 seconds is not acceptable. However, I reserve the \nright to be wrong. Does it sound unrealistic to expect PostgreSQL to be \nable to read 90,000 rows with 300 bytes per row in under a second?\n\nBased on suggestions from the list, I am also thinking of making the \nfollowing tuning changes:\n\nshared_buffers = 15000\nsort_mem = 32168\neffective_cache_size = 64000\n\nThis is based on one gig of memory.\n\nDoes anyone have any feedback on these values? Also, realizing that no two \ndatabase are the same, etc., etc... does anyone have a good formula for \nsetting these values?\n\nThanks in advance,\n\nJoseph\n\nAt 09:09 AM 9/16/2003 +0200, Manfred Koizar wrote:\n>On Mon, 15 Sep 2003 22:26:45 -0400, Christopher Browne\n><[email protected]> wrote:\n> >> select count (*) from table;\n> >The only possible plan for THAT query will involve a seq scan of the\n> >whole table. If the postmaster already has the data in cache, it\n> >makes sense for it to run in 1 second. If it has to read it from\n> >disk, 12 seconds makes a lot of sense.\n>\n>Yes. And note that the main difference is between having the data in\n>memory and having to fetch it from disk. I don't believe that this\n>difference can be explained by 9000 read calls hitting the operating\n>system's cache.\n>\n> >You might want to increase the \"shared_buffers\" parameter in\n> >postgresql.conf; that should lead to increased stability of times as\n> >it should be more likely that the data in \"table\" will remain in\n> >cache.\n>\n>Let's not jump to this conclusion before we know what's going on.\n>\n>Joseph Bove <[email protected]> wrote in another message above:\n>| I did have shared_buffers and sort_mem both set higher originally (15000,\n>| 32168)\n>\n>As I read this I think he meant \"... and had the same performance\n>problem.\"\n>\n>Joseph, what do you get, if you run that\n> EXPLAIN ANALYSE SELECT count(*) ...\n>several times? What do vmstat and top show while the query is\n>running? Are there other processes active during or between the runs?\n>What kind of processes? Postgres backends? Web server? ...\n>\n>Servus\n> Manfred\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n",
"msg_date": "Tue, 16 Sep 2003 11:21:00 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
},
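As a rough sketch of trying those numbers before committing them to postgresql.conf: effective_cache_size and sort_mem can usually be changed per session, while shared_buffers only takes effect at postmaster start (the values below are simply the ones proposed above, not a recommendation):

    SHOW effective_cache_size;         -- the 7.3 default of 1000 assumes only ~8 MB of OS cache
    SET effective_cache_size = 64000;  -- ~500 MB, counted in 8 KB pages
    SET sort_mem = 32168;              -- per-sort memory, in KB
    -- shared_buffers = 15000 has to go in postgresql.conf and needs a restart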
{
"msg_contents": "On Tue, 16 Sep 2003, Joseph Bove wrote:\n\n> I still think that 3 seconds is not acceptable. However, I reserve the\n> right to be wrong. Does it sound unrealistic to expect PostgreSQL to be\n> able to read 90,000 rows with 300 bytes per row in under a second?\n>\nfirst, check to see what your max throughput on your disk is using a\nbenchmark such as Bonnie (Making sure to use a size LARGER than phsyical\nmemory. 2x physical is veyr optimial).\n\nnext, run your query again with a vmstat 1 running in another term.\n\nSee how close the vmstat \"bi\" numbers correspond to your max according to\nbonnie. You could have an IO bottleneck. (I once went running around\ntrying to figure it out and then discovered the issue was IO).\n\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Tue, 16 Sep 2003 11:38:12 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent performance"
}
] |
[
{
"msg_contents": "Yesterday Jenny and I started to look at plan changes with different\nseed and default_statistics_sample changes. \n\nSince we have 21 plans to check, it takes a long time to determine if\nthe plans were different. We had to do it visually with xxdiff. Diff\nwill always show a difference since the costs are almost always\ndifferent.\n\nIs there any option to remove the cost numbers from the plan so we can\njust use \"diff\" to automate the plan comparisons? Otherwise it will be\nvery tedious to do this experiment.\n\n\n\n\nOn Sun, 2003-09-07 at 17:32, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > Perhaps the default of 10 is simply way\n> > too small and should be raised? \n> \n> I've suspected since the default existed that it might be too small ;-).\n> No one's yet done any experiments to try to establish a better default,\n> though. I suppose the first hurdle is to find a representative dataset.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n",
"msg_date": "16 Sep 2003 17:21:04 -0700",
"msg_from": "Mary Edie Meredith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic"
},
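For the sampling side of the experiment, the statistics target can be varied per session or pinned per column without touching postgresql.conf; the table and column names below are placeholders:

    SET default_statistics_target = 100;   -- used by subsequent ANALYZE runs in this session
    ANALYZE some_table;
    ALTER TABLE some_table ALTER COLUMN some_col SET STATISTICS 200;  -- persists for that column
    ANALYZE some_table;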
{
"msg_contents": "Mary Edie Meredith <[email protected]> writes:\n> Is there any option to remove the cost numbers from the plan so we can\n> just use \"diff\" to automate the plan comparisons?\n\nNo, but a few moments with sed or perl should get the job done for you.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Sep 2003 00:58:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic "
}
] |
[
{
"msg_contents": "Hi,\n\nI have been following a thread on this list \"Inconsistent performance\"\nand had a few questions especially the bits about effective_cache_size.\nI have read some of the docs, and some other threads on this setting,\nand it seems to used by the planner to either choose a sequential or\nindex scan. So it will not necessarily increase performance I suppose\nbut instead choose the most optimal plan. Is this correct?\n\nWe are not that we are suffering massive performance issues at the\nmoment but it is expected that our database is going to grow\nconsiderably in the next couple of years, both in terms of load and\nsize.\n\nAlso what would an appropriate setting be? \n\n From what I read of Scott Marlowes email, and from the information below\nI reckon it should be somewhere in the region of 240,000. \n\nDanger maths ahead. Beware!!!!\n\n<maths>\n 141816K buff\n+ 1781764K cached\n-----------------\n 1923580K total\n\neffective_cache_size = 1923580 / 8 = 240447.5\n</maths>\n\nHere is some information on the server in question. If any more\ninformation is required then please say. It is a dedicated PG machine\nwith no other services being hosted off it. As you can see from the\nuptime, its load average is 0.00, and is currently so chilled its almost\nfrozen!!!!! That will change though :-(\n\n\nHardware\n========\nDual PIII 1.4GHz\n2Gb RAM\n1Tb SAN with hardware RAID 5 using 1Gbps Fibre channel.\n\n\nOS\n==\nLinux webbasedth5 2.4.18-18.7.xsmp #1 SMP Wed Nov 13 19:01:42 EST 2002\ni686\nRed Hat Linux release 7.3 (Valhalla)\n\n\nPG\n==\nPostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC 2.96\n\n\nDatabase\n========\nThis includes all indexes and tables. I can provide more information on\nhow this is chopped up if needed.\n\nSize : 1,141.305 Mb\nTuples : 13,416,397\n\n\nUptime\n======\n11:15am up 197 days, 16:50, 1 user, load average: 0.00, 0.00, 0.00\n\n\nTop\n===\nMem: 2064836K av, 2018648K used, 46188K free, 0K shrd, 141816K\nbuff\nSwap: 2096472K av, 4656K used, 2091816K free 1781764K\ncached\n\n\nPostgresql.conf (all defaults except)\n=====================================\nmax_connections = 1000\nshared_buffers = 16000 (128 Mb)\nmax_fsm_relations = 5000\nmax_fsm_pages = 500000\nvacuum_mem = 65535\n\n\n\nKind Regards,\n\nNick Barr\n\n\nThis email and any attachments are confidential to the intended\nrecipient and may also be privileged. If you are not the intended\nrecipient please delete it from your system and notify the sender. You\nshould not copy it or use it for any purpose nor disclose or distribute\nits contents to any other person.\n\n\n\n\n\n",
"msg_date": "Wed, 17 Sep 2003 11:48:57 +0100",
"msg_from": "\"Nick Barr\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Effective Cache Size"
},
{
"msg_contents": "On 17 Sep 2003 at 11:48, Nick Barr wrote:\n\n> Hi,\n> \n> I have been following a thread on this list \"Inconsistent performance\"\n> and had a few questions especially the bits about effective_cache_size.\n> I have read some of the docs, and some other threads on this setting,\n> and it seems to used by the planner to either choose a sequential or\n> index scan. So it will not necessarily increase performance I suppose\n> but instead choose the most optimal plan. Is this correct?\n\nThat is correct.\n\n> Danger maths ahead. Beware!!!!\n> \n> <maths>\n> 141816K buff\n> + 1781764K cached\n> -----------------\n> 1923580K total\n> \n> effective_cache_size = 1923580 / 8 = 240447.5\n> </maths>\n\nThat would be bit too aggressive. I would say set it around 200K to leave room \nfor odd stuff.\n\nRest seems fine with your configuration. Of course a latest version of \npostgresql is always good though..\n\nBye\n Shridhar\n\n--\nPower is danger.\t\t-- The Centurion, \"Balance of Terror\", stardate 1709.2\n\n",
"msg_date": "Wed, 17 Sep 2003 16:33:09 +0530",
"msg_from": "\"Shridhar Daithankar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effective Cache Size"
},
{
"msg_contents": "\nI have been experimenting with a new Seagate Cheetah 10k-RPM SCSI to\ncompare with a cheaper Seagate Barracuda 7200-RPM IDE (each in a\nsingle-drive configuration). The Cheetah definately dominates the generic\nIO tests such as bonnie++, but fares poorly with pgbench (and other\npostgresql operations).\n\nI don't understand why switching to a SCSI drive in an otherwise identical\nsetup would so seriously degrade performance. I would have expected the\nopposite. vmstat does not reveal (to me) any bottlenecks in the SCSI\nconfiguration.\n\nThe only difference between the two test scenarios is that I stopped the\npostmaster, copied the data dir to the other drive and put a symlink to\npoint to the new path. I ran the tests several times, so these are not\nflukes.\n\nCan anyone explain why this might be happening and how to better leverage\nthis 10k drive?\n\nthanks,\nMike Adler\n\n\nSystem info:\n\nBox is a Dell 600SC with Adaptec 39160 SCSI controller.\nLinux 2.4.18-bf2.4\nCPU: Intel(R) Celeron(R) CPU 2.00GHz stepping 09\nMemory: 512684k/524224k available (1783k kernel code, 11156k reserved,\n549k data, 280k init, 0k highmem)\n\npostgresql.conf settings:\nshared_buffers = 10000\nrandom_page_cost = 0.3\nsort_mem = 4096\n\n##################################################\nTEST 1:\n\nIDE Seagate Baracuda\nhde: ST340014A, ATA DISK drive\nhde: 78165360 sectors (40021 MB) w/2048KiB Cache, CHS=77545/16/63\n\nbonnie++ -f:\nVersion 1.02b ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ndellmar 1G 27001 10 11863 4 20867 3 161.7 0\n\nsample vmstat 1 output during bonnie++:\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 9332 4456 5056 467728 0 0 20864 0 429 698 0 5 95\n 0 1 1 9332 4380 5056 467728 0 0 5248 27056 361 207 1 4 95\n 0 1 1 9332 4376 5056 467728 0 0 384 26936 338 55 0 0 100\n 0 1 0 9332 4416 5064 468368 0 0 10112 9764 385 350 0 4 96\n 1 0 0 9332 4408 5056 468120 0 0 20608 0 427 684 1 7 92\n 1 0 0 9332 4392 5056 467864 0 0 20992 0 431 692 0 5 95\n\n\npgbench:\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 2\nnumber of transactions per client: 400\nnumber of transactions actually processed: 800/800\ntps = 110.213013(including connections establishing)\ntps = 110.563927(excluding connections establishing)\n\nsample \"vmstat 1\" output during pgbench:\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 2 0 0 160 4348 50032 419320 0 0 240 3432 514 3849 34 7 59\n 0 2 0 160 4392 50764 418544 0 0 224 3348 500 3701 33 6 61\n 2 0 0 160 4364 51652 417688 0 0 240 3908 573 4411 43 8 50\n 2 0 0 160 4364 52508 416832 0 0 160 3708 548 4273 44 8 49\n 1 1 1 160 4420 53332 415944 0 0 160 3604 541 4174 40 13 48\n 0 1 1 160 4420 54160 415120 0 0 104 3552 526 4048 42 14 45\n 1 0 0 160 4964 54720 414576 0 0 128 4328 645 5819 69 7 24\n\n\n\n########################################################\nTEST 2:\n\nSCSI Drive Seagate Cheetah 10k.6\n Vendor: SEAGATE Model: ST336607LW Rev: DS08\n\nbonnie++ -f:\nVersion 1.02b ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ndellmar 1G 40249 14 21177 7 46620 7 365.8 0\n\nsample vmstat 1 output during bonnie++:\n 0 1 1 8916 4400 
1844 467216 0 0 384 42348 475 80 0 0 100\n 0 1 1 8916 4392 1844 467216 0 0 512 46420 472 103 0 2 98\n 1 0 0 8916 4364 1852 469392 0 0 7168 26552 507 268 0 3 97\n 1 0 0 8916 4452 1868 469392 0 0 28544 12312 658 947 1 15 84\n 1 0 0 8916 4416 1860 468888 0 0 47744 4 850 1534 0 18 82\n 1 0 0 8916 4436 1796 468312 0 0 48384 0 859 1555 0 19 81\n 1 0 0 8916 4452 1744 467724 0 0 48640 0 863 1569 2 20 78\n\n\npgbench (sounds thrashy):\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 2\nnumber of transactions per client: 400\nnumber of transactions actually processed: 800/800\ntps = 33.274922(including connections establishing)\ntps = 33.307125(excluding connections establishing)\n\nsample \"vmstat 1\" output during pgbench:\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 0 1 1 160 4356 36568 432772 0 0 0 1120 232 1325 12 2 86\n 0 1 1 160 4452 36592 432652 0 0 0 1108 229 1295 14 2 84\n 0 1 1 160 4428 36616 432652 0 0 0 1168 234 1370 9 4 87\n 0 1 1 160 4392 36636 432668 0 0 0 1120 231 1303 12 3 85\n 0 1 0 160 4364 36664 432668 0 0 0 1084 230 1361 16 5 79\n 0 1 0 160 4456 36696 432548 0 0 0 1196 234 1300 13 2 85\n\n\nMike Adler\n",
"msg_date": "Wed, 17 Sep 2003 14:55:40 -0400 (EDT)",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "inferior SCSI performance"
},
{
"msg_contents": "Michael Adler <[email protected]> writes:\n> I have been experimenting with a new Seagate Cheetah 10k-RPM SCSI to\n> compare with a cheaper Seagate Barracuda 7200-RPM IDE (each in a\n> single-drive configuration). The Cheetah definately dominates the generic\n> IO tests such as bonnie++, but fares poorly with pgbench (and other\n> postgresql operations).\n\nIt's fairly common for ATA drives to be configured to lie about write\ncompletion (ie, claim write-complete as soon as data is accepted into\ntheir onboard RAM buffer), whereas SCSI drives usually report write\ncomplete only when the data is actually down to disk. The performance\ndifferential may thus be coming at the expense of reliability. If you\nrun Postgres with fsync off, does the differential go away?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Sep 2003 15:08:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance "
},
{
"msg_contents": "\n\nOn Wed, 17 Sep 2003, Tom Lane wrote:\n> Michael Adler <[email protected]> writes:\n> > I have been experimenting with a new Seagate Cheetah 10k-RPM SCSI to\n> > compare with a cheaper Seagate Barracuda 7200-RPM IDE (each in a\n> > single-drive configuration). The Cheetah definately dominates the generic\n> > IO tests such as bonnie++, but fares poorly with pgbench (and other\n> > postgresql operations).\n>\n> It's fairly common for ATA drives to be configured to lie about write\n> completion (ie, claim write-complete as soon as data is accepted into\n> their onboard RAM buffer), whereas SCSI drives usually report write\n> complete only when the data is actually down to disk. The performance\n> differential may thus be coming at the expense of reliability. If you\n> run Postgres with fsync off, does the differential go away?\n\nYes, they both perform equally at about 190 tps with fsync off.\n\nThe culprit turns out to be write-caching on the IDE drive. It is enabled\nby default, but can be disabled with \"hdparm -W0 /dev/hdx\". After it is\ndisabled, the tps are proportional to rpms.\n\nThere's an (2001) Linux thread on this if anyone is interested:\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0103.0/0331.html\n\nSo the quesiton is whether it is ever sensible to use write-caching and\nexpect comparable persistence.\n\nThanks,\n\nMichael Adler\n",
"msg_date": "Wed, 17 Sep 2003 16:46:00 -0400 (EDT)",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance "
},
{
"msg_contents": "On Wed, Sep 17, 2003 at 04:46:00PM -0400, Michael Adler wrote:\n> So the quesiton is whether it is ever sensible to use write-caching and\n> expect comparable persistence.\n\nYes. If and only if you have a battery-backed cache. I know of no\nIDE drives that have that, but there's nothing about the spec which\nmakes it impossible.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 1 Oct 2003 06:33:38 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "On Wed, 1 Oct 2003, Andrew Sullivan wrote:\n\n> On Wed, Sep 17, 2003 at 04:46:00PM -0400, Michael Adler wrote:\n> > So the quesiton is whether it is ever sensible to use write-caching and\n> > expect comparable persistence.\n> \n> Yes. If and only if you have a battery-backed cache. I know of no\n> IDE drives that have that, but there's nothing about the spec which\n> makes it impossible.\n\nFYI, on a Dual PIV2800 with 2 gig ram and a single UDMA 80 gig hard drive, \nI from 420 tps to 22 tps when I disable write caching. WOW. A factor of \nabout 20 times slower. (pgbench -c 4 -t 100)\n\n",
"msg_date": "Wed, 1 Oct 2003 07:14:32 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "On Wed, Oct 01, 2003 at 07:14:32AM -0600, scott.marlowe wrote:\n> FYI, on a Dual PIV2800 with 2 gig ram and a single UDMA 80 gig hard drive, \n> I from 420 tps to 22 tps when I disable write caching. WOW. A factor of \n> about 20 times slower. (pgbench -c 4 -t 100)\n\nThat's completely consistent with tests Chris Browne has done here on\ncache-enabled and cache-disabled boxes that we have.\n\nIt's a _really_ big difference. The combination of battery-backed\nwrite cache on your controller plus a real good UPS is quite possibly\nthe number one thing you can do to improve performance. For what\nit's worth, I can't see how this is something special about Postgres:\neven raw-filesystem type systems have to make sure the disk actually\nhas the data, and a write cache is bound to be a big help for that.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 1 Oct 2003 09:25:36 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "[email protected] (Andrew Sullivan) writes:\n> On Wed, Oct 01, 2003 at 07:14:32AM -0600, scott.marlowe wrote:\n>> FYI, on a Dual PIV2800 with 2 gig ram and a single UDMA 80 gig hard drive, \n>> I from 420 tps to 22 tps when I disable write caching. WOW. A factor of \n>> about 20 times slower. (pgbench -c 4 -t 100)\n>\n> That's completely consistent with tests Chris Browne has done here on\n> cache-enabled and cache-disabled boxes that we have.\n>\n> It's a _really_ big difference. The combination of battery-backed\n> write cache on your controller plus a real good UPS is quite possibly\n> the number one thing you can do to improve performance. For what\n> it's worth, I can't see how this is something special about Postgres:\n> even raw-filesystem type systems have to make sure the disk actually\n> has the data, and a write cache is bound to be a big help for that.\n\nIndeed.\n\nWhen I ran the tests, I found that JFS was preferable to XFS and ext3\non Linux on the machine with the big battery backed cache. (And the\nside-effect that it was getting yes, probably about 20x the\nperformance of systems without the cache.)\n\nThe FS-related result appeared surprising, as the \"stories\" I had\nheard suggested that JFS hadn't been particularly heavily tuned on\nLinux, whereas XFS was supposed to be the \"speed demon.\"\n\nIt is entirely possible that the result I saw was one that would\nreverse partially or even totally on a system LACKING that cache. XFS\nmight \"play better\" when we're cacheless; the (perhaps only fabled)\ndemerits of JFS being more than totally hidden if we add the cache.\n\nWhat I find disappointing is that it isn't possible to get SSD cards\nthat are relatively inexpensive. A similarly fabulous performance\nincrease _ought_ to be attainable if you could stick pg_xlog and\npg_clog on a 256MB (or bigger!) battery-backed SSD, ideally one that\nplugs into a PCI slot.\n\nThis should have the further benefit of diminishing the amount of\nmechanical activity going on, as WAL activity would no longer involve\nANY i/o operations.\n\nUnfortunately, while there are companies hawking SSDs, they are in the\n\"you'll have to talk to our salescritter for pricing\" category, which\nmeans that they must be ferociously expensive. :-(.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Wed, 01 Oct 2003 12:21:53 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "Christopher Browne kirjutas K, 01.10.2003 kell 19:21:\n\n> \n> The FS-related result appeared surprising, as the \"stories\" I had\n> heard suggested that JFS hadn't been particularly heavily tuned on\n> Linux, whereas XFS was supposed to be the \"speed demon.\"\n\nGentoo linux recommends XFS only for SAN+fibre channel + good ups for\nanything but database use ;)\n\n> It is entirely possible that the result I saw was one that would\n> reverse partially or even totally on a system LACKING that cache. XFS\n> might \"play better\" when we're cacheless; the (perhaps only fabled)\n> demerits of JFS being more than totally hidden if we add the cache.\n> \n> What I find disappointing is that it isn't possible to get SSD cards\n> that are relatively inexpensive. A similarly fabulous performance\n> increase _ought_ to be attainable if you could stick pg_xlog and\n> pg_clog on a 256MB (or bigger!) battery-backed SSD, ideally one that\n> plugs into a PCI slot.\n\nFor really cheap and for small-size transactions you could experiment\nwith USB2 memory sticks (some of them claim 34MB/s write speed), perhaps\nin striped/mirrored configuration. You would just need something\ncounting writes in the driver layer to alert you when you are reaching\nthe x00k \"writes\" limit and have to plug in new sticks :)\n\nOTOH, articles I found through quick googling suggest only 2.4MB/s write\nand 7MB/s read speeds for USB 2.0 memory sticks, so the 34MB is proably\njust sales pitch and refers to bus speed, not actual write speed ;(\n\n> Unfortunately, while there are companies hawking SSDs, they are in the\n> \"you'll have to talk to our salescritter for pricing\" category, which\n> means that they must be ferociously expensive. :-(.\n\nthe cheapest I found was the one with external backup power was ~1.8k$\nfor 2GB PCI device\n\nhttp://www.cdw.com/shop/search/Results.aspx?key=platypus&x=0&y=0\n\nAn external 16GB one with battery backup and\nwrite-t-small-ide-drives-on-power-failure was ~25k$\n\n-----------------\nHannu\n",
"msg_date": "Thu, 02 Oct 2003 10:13:53 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "\n> > Unfortunately, while there are companies hawking SSDs, they are in the\n> > \"you'll have to talk to our salescritter for pricing\" category, which\n> > means that they must be ferociously expensive. :-(.\n> \n> the cheapest I found was the one with external backup power was ~1.8k$\n> for 2GB PCI device\n>\n> http://www.cdw.com/shop/search/Results.aspx?key=platypus&x=0&y=0\n\nThat is pretty neat.\n\n> An external 16GB one with battery backup and\n> write-t-small-ide-drives-on-power-failure was ~25k$\n\nAnd they scale up from there. This company has one that goes up to 1TB RAM.\n4.5kW power consumption? I hate to see what kind of heat that thing generates.\n\nhttp://www.imperialtech.com/pdf/MRSpec_021803.pdf\n\nI have no idea what price the monster unit is, but someone described the price\nof one of the *lesser* units as \"made me physically ill\". So I can only imagine.\n\n-- \ngreg\n\n",
"msg_date": "02 Oct 2003 04:00:22 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": "Andrew Sullivan wrote:\n\n> Yes. If and only if you have a battery-backed cache. I know of no\n> IDE drives that have that, but there's nothing about the spec which\n> makes it impossible.\n\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0103.0/1084.html\n\nRelevant section:\n<quote>\nMaybe that is why there is a vender disk-cache dump zone on the edge of\nthe platters...just maybe you need to buy your drives from somebody that\ndoes this and has a predictive sector stretcher as the energy from the\ninertia by the DC three-phase motor executes the dump.\n\nEver wondered why modern drives have open collectors on the databuss?\nMaybe to disconnect the power draw so that the motor now generator\nprovides the needed power to complete the data dump...\n</quote>\n\nSEEMS to imply that some IDE drives at least have enough power left \nafter power's off to store the write-cached data to disk.\n\nThe rest of the email's not very reassuring, though, but note that this \nemail's two years old.\n\nAnyone want to test? :)\n\n-- \nLinux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 \nGNU/Linux\n 4:30pm up 280 days, 8:00, 8 users, load average: 6.05, 6.01, 6.02",
"msg_date": "Thu, 02 Oct 2003 16:54:09 +0800",
"msg_from": "Ang Chin Han <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
},
{
"msg_contents": ">>>>> \"CB\" == Christopher Browne <[email protected]> writes:\n\nCB> Unfortunately, while there are companies hawking SSDs, they are in the\nCB> \"you'll have to talk to our salescritter for pricing\" category, which\nCB> means that they must be ferociously expensive. :-(.\n\nYou ain't kidding. Unfortunately, one of the major vendors just went\nbelly up (Imperial Technology) so pricing probably won't get any\nbetter anytime soon.\n\nPerhaps one of these days I'll try that experiment on my SSD... ;-)\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Thu, 02 Oct 2003 16:16:58 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inferior SCSI performance"
}
] |
[
{
"msg_contents": "I'm running a load of stress scripts against my staging environment to\nsimulate user interactions, and watching the various boxen as time goes by.\n\nI noticed that the CPU utilisation on the DB server (PG 7.2.3, RH7.3, Dual\nPII 550MHz, 1GB RAM, 1GB database on disk, Single 10k SCSI drive) was\nincreasing over time, and manually launched a vacuum analyze verbose.\n\nA typical output from the VAV is:\n\nNOTICE: --Relation mobilepm--\nNOTICE: Index mobilepm_ownerid_idx: Pages 1103; Tuples 32052: Deleted\n46012.\n CPU 0.15s/0.66u sec elapsed 14.82 sec.\nNOTICE: Index mobilepm_id_idx: Pages 1113; Tuples 32143: Deleted 46012.\n CPU 0.33s/1.08u sec elapsed 45.89 sec.\nNOTICE: Index mobilepm_ownerid_status_idx: Pages 1423; Tuples 32319:\nDeleted 46\n012.\n CPU 0.52s/1.05u sec elapsed 54.59 sec.\nNOTICE: Index mobilepm_number_idx: Pages 1141; Tuples 32413: Deleted 46012.\n CPU 0.26s/0.61u sec elapsed 16.13 sec.\nNOTICE: Removed 46012 tuples in 2548 pages.\n CPU 0.88s/0.79u sec elapsed 75.57 sec.\nNOTICE: Pages 3188: Changed 10, Empty 0; Tup 32007: Vac 46012, Keep 11,\nUnUsed\n0.\n Total CPU 2.56s/4.25u sec elapsed 216.50 sec.\nNOTICE: --Relation pg_toast_112846940--\nNOTICE: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing mobilepm\n\nSo you can see that some tables are seeing a hell of a lot of updates.\nThat's life, and yes, I do need all those indexes :-)\n\nNow I see no drop in performance while the VAV is running, the CPU\nutilisation gradually drops from 80% to 30% on the DB server, and life in\ngeneral improves.\n\nOn the live server (PG 7.2.3, RH7.3, Quad Xeon 700Mhz 1MB cache, 4Gb RAM,\n256MB write-back RAID10 over 4 10K disks) I vacuum analyze daily, and vacuum\nanalyze a couple of key tables every 15 minutes, but my question is...\n\n*** THE QUESTION(S) ***\nIs there any reason for me not to run continuous sequential vacuum analyzes?\nAt least for the 6 tables that see a lot of updates?\nI hear 10% of tuples updated as a good time to vac-an, but does my typical\ncount of 3 indexes per table affect that?\n\nCheers\n\nMatt\n\n\nPostscript: I may have answered my own question while writing this mail.\nUnder the current stress test load about 10% of the key tables' tuples are\nupdated between sequential vacuum-analyzes, so the received wisdom on\nintervals suggests '0' in my case anyway...\n\n\n\n",
"msg_date": "Wed, 17 Sep 2003 20:40:16 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "On Wed, 17 Sep 2003, Matt Clark wrote:\n\n> *** THE QUESTION(S) ***\n> Is there any reason for me not to run continuous sequential vacuum analyzes?\n> At least for the 6 tables that see a lot of updates?\n> I hear 10% of tuples updated as a good time to vac-an, but does my typical\n> count of 3 indexes per table affect that?\n\nGenerally, the only time continuous vacuuming is a bad thing is when you \nare I/O bound. If you are CPU bound, then continuous vacuuming is usually \nacceptable.\n\n",
"msg_date": "Wed, 17 Sep 2003 13:54:42 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "Matt,\n\n> Is there any reason for me not to run continuous sequential vacuum analyzes?\n> At least for the 6 tables that see a lot of updates?\n\nNo. You've already proven that the performance gain on queries offsets the \nloss from the vacuuming. There is no other \"gotcha\". \n\nHowever: \n1) You may be able to decrease the required frequency of vacuums by adjusting \nyour FSM_relations parameter. Have you played with this at all? The default \nis very low.\n2) Are you sure that ANALYZE is needed? Vacuum is required whenever lots of \nrows are updated, but analyze is needed only when the *distribution* of \nvalues changes significantly.\n3) using PG 7.3 or less, you will also need to REINDEX these tables+indexes \noften (daily?). This issue will go away in 7.4, which should make you an \nearly adopter of 7.4.\n\n> I hear 10% of tuples updated as a good time to vac-an, but does my typical\n> count of 3 indexes per table affect that?\n\nNot until 7.4.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 17 Sep 2003 13:13:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
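For readers following along, here is a minimal sketch of what Josh's three suggestions can look like in practice. The FSM values are illustrative only, not tuned recommendations, and mobilepm is simply the hot table from the vacuum output earlier in the thread:

    -- postgresql.conf: give the free-space map room to track dead tuples
    -- between vacuums (the 7.2/7.3 defaults are much lower than this)
    --   max_fsm_relations = 1000
    --   max_fsm_pages     = 100000

    -- frequent plain VACUUM on the hot table; ANALYZE only when the
    -- distribution of values has actually shifted
    VACUUM mobilepm;
    ANALYZE mobilepm;

    -- pre-7.4 only: reclaim index bloat periodically (takes an exclusive lock)
    REINDEX TABLE mobilepm;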
{
"msg_contents": "Yes, that makes sense. My worry is really the analyzes. I gather/imagine\nthat:\n\n1)\tIndexes on fields that are essentially random gain little from being\nanalyzed.\n2)\tFields that increase monotonically with insertion order have a problem\nwith index growth in 7.2. There may be a performance issue connected with\nthis, although indexes on these fields also gain little from analysis. So\nif I can't vacuum full I'm SOL anyway and should upgrade to 7.4.1 when\navailable?\n\nFurther data: When I run a vacuum analyze my app servers do see an increase\nin response time from PG, even though the DB server is under no more\napparent load. I can only assume some kind of locking issue. Is that fair?\n\nM\n\n\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> scott.marlowe\n> Sent: 17 September 2003 20:55\n> To: Matt Clark\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Is there a reason _not_ to vacuum continuously?\n>\n>\n> On Wed, 17 Sep 2003, Matt Clark wrote:\n>\n> > *** THE QUESTION(S) ***\n> > Is there any reason for me not to run continuous sequential\n> vacuum analyzes?\n> > At least for the 6 tables that see a lot of updates?\n> > I hear 10% of tuples updated as a good time to vac-an, but does\n> my typical\n> > count of 3 indexes per table affect that?\n>\n> Generally, the only time continuous vacuuming is a bad thing is when you\n> are I/O bound. If you are CPU bound, then continuous vacuuming\n> is usually\n> acceptable.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n",
"msg_date": "Wed, 17 Sep 2003 21:20:02 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "> 2) Are you sure that ANALYZE is needed? Vacuum is required\n> whenever lots of\n> rows are updated, but analyze is needed only when the *distribution* of\n> values changes significantly.\n\nYou are right. I have a related qn in this thread about random vs. monotonic\nvalues in indexed fields.\n\n> 3) using PG 7.3 or less, you will also need to REINDEX these\n> tables+indexes\n> often (daily?). This issue will go away in 7.4, which should\n> make you an\n> early adopter of 7.4.\n\nI understand this needs an exclusive lock on the whole table, which is\nsimply not possible more than once a month, if that... Workarounds/hack\nsuggestions are more than welcome :-)\n\nTa\n\nM\n\n",
"msg_date": "Wed, 17 Sep 2003 21:24:37 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "Matt,\n\n> I understand this needs an exclusive lock on the whole table, which is\n> simply not possible more than once a month, if that... Workarounds/hack\n> suggestions are more than welcome :-)\n\nWould it be reasonable to use partial indexes on the table?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Wed, 17 Sep 2003 15:17:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
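For concreteness, a partial index along the lines Josh suggests could look like the following. The column names are guessed from the index names in Matt's vacuum output, and the predicate value is purely hypothetical:

    -- index only the slice of rows the hot queries actually touch
    CREATE INDEX mobilepm_owner_live_idx
        ON mobilepm (ownerid)
        WHERE status = 'LIVE';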
{
"msg_contents": "Oops! [email protected] (Josh Berkus) was seen spray-painting on a wall:\n>> I understand this needs an exclusive lock on the whole table, which is\n>> simply not possible more than once a month, if that... Workarounds/hack\n>> suggestions are more than welcome :-)\n>\n> Would it be reasonable to use partial indexes on the table?\n\nDumb question...\n\n... If you create a partial index, does this lock the whole table\nwhile it is being built, or only those records that are affected by\nthe index definition?\n\nI expect that the answer to that is \"Yes, it locks the whole table,\"\nwhich means that a partial index won't really help very much, except\ninsofar as you might, by having it be restrictive in range, lock the\ntable for a somewhat shorter period of time.\n\nAn alternative that may or may not be viable would be to have a series\nof tables:\n\n create table t1 ();\n create table t2 ();\n create table t3 ();\n create table t4 ();\n\nThen create a view: \n\n create view t as select * from t1 union all select * from t2 union\n all select * from t13 union all select * from t4;\n\nThen you set this view to be updatable, by having a function that\nrotates between the 4 tables based on a sequence. \n\nYou do SELECT NEXTVAL('t_controller') and the entries start flooding\ninto t2 rather than t1, or into t3, or into t4, and after t4, they go\nback into t1.\n\nWhen you need to reindex t1, you switch over to load entries into t2,\ndo maintenance on t1, and then maybe roll back to t1 so you can do the\nsame maintenance on t2.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\nLinux is like a Vorlon. It is incredibly powerful, gives terse,\ncryptic answers and has a lot of things going on in the background.\n",
"msg_date": "Wed, 17 Sep 2003 21:59:43 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
}
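A bare-bones sketch of the rotation scheme Christopher outlines. The insert-routing piece (a rule or application logic keyed off the controller sequence) is left out, and the column list is made up, so treat this as an illustration of the shape of the idea rather than a drop-in solution:

    CREATE TABLE t1 (id integer, payload text);
    CREATE TABLE t2 (id integer, payload text);
    CREATE TABLE t3 (id integer, payload text);
    CREATE TABLE t4 (id integer, payload text);

    CREATE VIEW t AS
        SELECT * FROM t1 UNION ALL SELECT * FROM t2
        UNION ALL SELECT * FROM t3 UNION ALL SELECT * FROM t4;

    CREATE SEQUENCE t_controller;

    -- maintenance cycle: advance the controller so new rows land in t2,
    -- then the now-quiet t1 can be reindexed without blocking inserts
    SELECT NEXTVAL('t_controller');
    REINDEX TABLE t1;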
] |
[
{
"msg_contents": "Hi list,\n\n I have a table like this:\n\nCREATE TABLE \"gsames00\" (\n \"ano_mes\" varchar(6) NOT NULL,\n \"descricao\" varchar(30),\n PRIMARY KEY (\"ano_mes\")\n);\n\nand an index like this:\n\nCREATE INDEX GSAMES01 ON GSAMES00 (ANO_MES);\n\n When I run a explain analyze with this where clause: \n\n ... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'YYYYMM') AND ...\n\n ftnfco00.data_emissao is a timestamp. When I run the explain analyze it says:\n\n...\n -> Seq Scan on gsames00 (cost=100000000.00..100000006.72 rows=372 width=10) \n(actual time=0.01..0.96 rows=372 loops=19923)\n...\n\n So it is not using the index, and it makes the query too slow to return the \nresult. If a run the same query without this clause it gets about 1 minute \nfaster. You you're wondering : If you can run this query without this clause, \nWhy don't you take it out ? \n I must use it because this query is created by a BI software and to \nchange it, I'll have to make a lot of changes in the BI software source. In the \nOracle DB it works fine 'cuz Oracle use the index and do it instantly. \n Any suggestion on how to force PostgreSQL to use this index ???\n I run Vaccum Full Analyze many time before posting this ...\n\nHere follow the whole query and the whole explain:\n\nQuery:\n\nSELECT /*+ */ \nftnfco00.estado_cliente , \nftcofi00.grupo_faturamento , \nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.qtde_duzias,0)), '+', NVL(ftnfpr00.qtde_duzias,0), 0) ) , \nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.vlr_liquido,0)), '+', \nNVL(ftnfpr00.vlr_liquido,0), 0) ) , \nftprod00.tipo_cadastro||ftprod00.codigo_produto , \nftprod00.descricao_produto , \nDIVIDE( SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.vlr_liquido,0)), '+', NVL(ftnfpr00.vlr_liquido,0), 0)\n*ftnfpr00.margem_comercial ),\n SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.vlr_liquido,0)), '+', NVL(ftnfpr00.vlr_liquido,0), 0)) ) , \nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.qtde_duzias,0), 0 ) ) , \nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.vlr_liquido,0), 0 ) ) \nFROM \nftprod00 , \nftnfco00 , \nftcgma00 , \nftcgca00 , \nftspro00 , \nftclcr00 , \ngsames00 , \nftcofi00 , \nftrepr00 , \ngsesta00 , \nftsupv00 , \nftgrep00 , \nftclgr00 , \nftband00 , \nfttcli00 , \nftredc00 , \nftnfpr00 \nWHERE \nftnfco00.emp = 909 AND \nftnfpr00.fil IN ('101') AND \nftnfco00.situacao_nf = 'N' AND \nftnfco00.data_emissao >= CAST('01-JAN-2003' AS DATE) AND \nftnfco00.data_emissao <= CAST('31-MAR-2003' AS DATE) AND \nftcofi00.grupo_faturamento >= '01' AND \n(ftcofi00.atual_fatura IN ('+','-') OR ftcofi00.nf_prodgratis = 'S') AND \nftcgma00.emp = ftprod00.emp AND \nftcgma00.fil = ftprod00.fil AND \nftcgma00.codigo = ftprod00.cla_marca AND \nftcgca00.emp = ftprod00.emp AND \nftcgca00.fil = ftprod00.fil AND \nftcgca00.codigo = ftprod00.cla_categoria AND \nftspro00.emp = ftprod00.emp AND \nftspro00.fil = ftprod00.fil AND \nftspro00.codigo = ftprod00.situacao AND \nftclcr00.emp = ftnfco00.emp AND \nftclcr00.fil = ftnfco00.empfil AND \nftclcr00.tipo_cadastro = ftnfco00.tipo_cad_clicre AND \nftclcr00.codigo = ftnfco00.cod_cliente AND \ngsames00.ano_mes = TO_CHAR(ftnfco00.data_emissao,'YYYYMM') AND \nftcofi00.emp = ftnfco00.emp AND \nftcofi00.fil = ftnfco00.empfil AND \nftcofi00.codigo_fiscal = ftnfco00.cod_fiscal AND \nftrepr00.emp = ftnfco00.emp AND \nftrepr00.fil = ftnfco00.empfil AND \nftrepr00.codigo_repr = ftnfco00.cod_repres AND \ngsesta00.estado_sigla = ftnfco00.estado_cliente AND 
\nftsupv00.emp = ftrepr00.emp AND \nftsupv00.fil = ftrepr00.fil AND \nftsupv00.codigo_supervisor = ftrepr00.codigo_supervisor AND \nftgrep00.emp = ftrepr00.emp AND \nftgrep00.fil = ftrepr00.fil AND \nftgrep00.codigo_grupo_rep = ftrepr00.codigo_grupo_rep AND \nftclgr00.emp = ftclcr00.emp AND \nftclgr00.fil = ftclcr00.fil AND \nftclgr00.codigo = ftclcr00.codigo_grupo_cliente AND \nftband00.emp = ftclcr00.emp AND \nftband00.fil = ftclcr00.fil AND \nftband00.codigo = ftclcr00.bandeira_cliente AND \nfttcli00.emp = ftclcr00.emp AND \nfttcli00.fil = ftclcr00.fil AND \nfttcli00.cod_tipocliente = ftclcr00.codigo_tipo_cliente AND \nftredc00.emp = ftclcr00.emp AND \nftredc00.fil = ftclcr00.fil AND \nftredc00.tipo_contribuinte = ftclcr00.tipo_contribuinte AND \nftredc00.codigo_rede = ftclcr00.codigo_rede AND \ngsesta00.estado_sigla = ftclcr00.emp_estado AND \nftnfco00.emp = ftnfpr00.emp AND \nftnfco00.fil = ftnfpr00.fil AND \nftnfco00.nota_fiscal = ftnfpr00.nota_fiscal AND \nftnfco00.serie = ftnfpr00.serie AND \nftnfco00.data_emissao = ftnfpr00.data_emissao AND \nftprod00.emp = ftnfpr00.emp AND \nftprod00.fil = ftnfpr00.empfil AND \nftprod00.tipo_cadastro = ftnfpr00.tipo_cad_promat AND \nftprod00.codigo_produto= ftnfpr00.cod_produto \nGROUP BY \nftnfco00.estado_cliente , \nftcofi00.grupo_faturamento , \nftprod00.tipo_cadastro||ftprod00.codigo_produto ,\nftprod00.descricao_produto\n\n\n\nExplain:\n\n \n \n QUERY \nPLAN \n \n \n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n---------------------------------------------------------\n Aggregate (cost=100027780.66..100027780.69 rows=1 width=818) (actual \ntime=101278.24..105839.69 rows=363 loops=1)\n -> Group (cost=100027780.66..100027780.68 rows=1 width=818) (actual \ntime=101272.08..101761.18 rows=19923 loops=1)\n -> Sort (cost=100027780.66..100027780.67 rows=1 width=818) (actual \ntime=101272.05..101299.09 rows=19923 loops=1)\n Sort Key: ftnfco00.estado_cliente, ftcofi00.grupo_faturamento, \n((ftprod00.tipo_cadastro)::text || (ftprod00.codigo_produto)::text), \nftprod00.descricao_produto\n -> Nested Loop (cost=100025960.94..100027780.65 rows=1 \nwidth=818) (actual time=3476.87..99606.77 rows=19923 loops=1)\n Join Filter: ((\"outer\".emp = \"inner\".emp) AND (\"outer\".fil \n= \"inner\".fil) AND (\"outer\".codigo_supervisor = \"inner\".codigo_supervisor) AND \n(\"outer\".codigo_grupo_rep = \"inner\".codigo_grupo_rep))\n -> Nested Loop (cost=100025960.94..100027775.22 rows=1 \nwidth=765) (actual time=3476.74..97802.69 rows=19923 loops=1)\n Join Filter: ((\"inner\".ano_mes)::text = to_char\n(\"outer\".data_emissao, 'YYYYMM'::text))\n -> Nested Loop (cost=25960.94..27762.92 rows=1 \nwidth=755) (actual time=3475.14..32090.12 rows=19923 loops=1)\n Join Filter: ((\"inner\".emp = \"outer\".emp) AND \n(\"outer\".fil = \"inner\".fil) AND (\"outer\".codigo = \"inner\".cla_categoria) AND \n(\"outer\".codigo = \"inner\".cla_marca) AND (\"outer\".codigo = \"inner\".situacao))\n -> Nested Loop (cost=25960.94..27705.22 \nrows=10 width=665) (actual time=3474.12..17734.21 rows=199230 loops=1)\n Join Filter: ((\"outer\".emp \n= \"inner\".emp) AND (\"inner\".fil = \"outer\".fil))\n -> Nested Loop \n(cost=25960.94..27699.30 rows=1 width=638) (actual time=3474.02..6030.09 \nrows=19923 loops=1)\n 
Join Filter: ((\"inner\".emp \n= \"outer\".emp) AND (\"inner\".empfil = \"outer\".fil))\n -> Merge Join \n(cost=25960.94..26128.25 rows=265 width=526) (actual time=3473.78..3841.18 \nrows=6358 loops=1)\n Merge Cond: ((\"outer\".emp \n= \"inner\".emp) AND (\"outer\".fil = \"inner\".fil) AND (\"outer\".codigo_fiscal \n= \"inner\".cod_fiscal))\n -> Index Scan using \nftcofi01 on ftcofi00 (cost=0.00..151.73 rows=72 width=52) (actual \ntime=0.15..6.40 rows=64 loops=1)\n Filter: \n((grupo_faturamento >= '01'::character varying) AND ((atual_fatura \n= '+'::character varying) OR (atual_fatura = '-'::character varying) OR \n(nf_prodgratis = 'S'::character varying)))\n -> Sort \n(cost=25960.94..25965.34 rows=1760 width=474) (actual time=3471.17..3486.98 \nrows=7666 loops=1)\n Sort Key: \nftnfco00.emp, ftredc00.fil, ftnfco00.cod_fiscal\n -> Nested Loop \n(cost=25687.75..25866.07 rows=1760 width=474) (actual time=2981.05..3241.15 \nrows=7666 loops=1)\n Join Filter: \n((\"inner\".emp = \"outer\".emp) AND (\"inner\".fil = \"outer\".fil) AND \n(\"outer\".codigo = \"inner\".codigo_grupo_cliente))\n -> Index Scan \nusing ftclgr01 on ftclgr00 (cost=0.00..4.68 rows=1 width=32) (actual \ntime=0.04..0.06 rows=1 loops=1)\n -> Materialize \n(cost=25830.59..25830.59 rows=1760 width=442) (actual time=2980.93..2990.31 \nrows=7666 loops=1)\n -> Hash \nJoin (cost=25687.75..25830.59 rows=1760 width=442) (actual \ntime=2507.55..2945.35 rows=7666 loops=1)\n Hash \nCond: (\"outer\".emp_estado = \"inner\".estado_sigla)\n -> \nNested Loop (cost=25683.33..25790.98 rows=1760 width=436) (actual \ntime=2507.09..2711.66 rows=7666 loops=1)\n \n Join Filter: ((\"inner\".emp = \"outer\".emp) AND (\"inner\".fil = \"outer\".fil))\n \n -> Index Scan using ftgrep01 on ftgrep00 (cost=0.00..4.68 rows=1 width=32) \n(actual time=0.05..0.07 rows=1 loops=1)\n \n -> Materialize (cost=25759.91..25759.91 rows=1760 width=404) (actual \ntime=2506.98..2516.14 rows=7666 loops=1)\n \n -> Nested Loop (cost=25683.33..25759.91 rows=1760 width=404) (actual \ntime=2288.68..2474.11 rows=7666 loops=1)\n \n Join Filter: ((\"inner\".emp = \"outer\".emp) AND (\"inner\".fil \n= \"outer\".fil))\n \n -> Index Scan using ftsupv01 on ftsupv00 (cost=0.00..4.68 rows=1 \nwidth=32) (actual time=0.04..0.05 rows=1 loops=1)\n \n -> Materialize (cost=25728.83..25728.83 rows=1760 width=372) \n(actual time=2288.58..2297.79 rows=7666 loops=1)\n \n -> Merge Join (cost=25683.33..25728.83 rows=1760 \nwidth=372) (actual time=2086.89..2265.03 rows=7666 loops=1)\n \n Merge Cond: ((\"outer\".emp = \"inner\".emp) AND \n(\"outer\".fil = \"inner\".fil) AND (\"outer\".cod_tipocliente \n= \"inner\".codigo_tipo_cliente))\n \n -> Index Scan using fttcli01 on fttcli00 \n(cost=0.00..5.85 rows=17 width=33) (actual time=0.03..0.25 rows=17 loops=1)\n \n -> Sort (cost=25683.33..25687.73 rows=1760 \nwidth=339) (actual time=2086.71..2095.86 rows=7666 loops=1)\n \n Sort Key: ftnfco00.emp, ftredc00.fil, \nftclcr00.codigo_tipo_cliente\n \n -> Nested Loop (cost=25389.10..25588.46 \nrows=1760 width=339) (actual time=1729.53..1897.73 rows=7666 loops=1)\n \n Join Filter: ((\"inner\".emp = \"outer\".emp) \nAND (\"inner\".fil = \"outer\".fil) AND (\"outer\".codigo = \"inner\".bandeira_cliente))\n \n -> Index Scan using ftband01 on ftband00 \n(cost=0.00..4.68 rows=1 width=32) (actual time=0.04..0.06 rows=1 loops=1)\n \n -> Materialize (cost=25552.99..25552.99 \nrows=1760 width=307) (actual time=1729.44..1738.69 rows=7666 loops=1)\n \n -> Nested Loop \n(cost=25389.10..25552.99 rows=1760 width=307) (actual 
time=1566.24..1705.51 \nrows=7666 loops=1)\n \n Join Filter: ((\"inner\".emp \n= \"outer\".emp) AND (\"inner\".fil = \"outer\".fil))\n \n -> Index Scan using ftcgma01 \non ftcgma00 (cost=0.00..4.68 rows=1 width=32) (actual time=0.03..0.05 rows=1 \nloops=1)\n \n -> Materialize \n(cost=25521.91..25521.91 rows=1760 width=275) (actual time=1566.16..1575.29 \nrows=7666 loops=1)\n \n -> Merge Join \n(cost=25389.10..25521.91 rows=1760 width=275) (actual time=1320.59..1542.54 \nrows=7666 loops=1)\n \n Merge Cond: \n((\"outer\".codigo = \"inner\".cod_cliente) AND (\"outer\".emp_estado \n= \"inner\".estado_cliente) AND (\"outer\".tipo_cadastro = \"inner\".tipo_cad_clicre) \nAND (\"outer\".fil = \"inner\".empfil) AND (\"outer\".emp = \"inner\".emp))\n \n -> Sort \n(cost=6241.05..6269.31 rows=11304 width=166) (actual time=1093.04..1105.44 \nrows=10478 loops=1)\n \n Sort Key: \nftclcr00.codigo, ftclcr00.emp_estado, ftclcr00.tipo_cadastro, ftredc00.fil, \nftredc00.emp\n \n -> Merge \nJoin (cost=3920.20..5480.05 rows=11304 width=166) (actual time=516.40..951.73 \nrows=10956 loops=1)\n \n Merge \nCond: ((\"outer\".emp = \"inner\".emp) AND (\"outer\".fil = \"inner\".fil) AND \n(\"outer\".tipo_contribuinte = \"inner\".tipo_contribuinte) AND \n(\"outer\".codigo_rede = \"inner\".codigo_rede))\n \n -> \nMerge Join (cost=0.00..1256.74 rows=8906 width=72) (actual time=0.13..180.25 \nrows=8906 loops=1)\n \n \nMerge Cond: (\"outer\".emp = \"inner\".emp)\n \n -\n> Index Scan using ftredc01 on ftredc00 (cost=0.00..1118.47 rows=8906 \nwidth=40) (actual time=0.05..72.02 rows=8906 loops=1)\n \n -\n> Index Scan using ftcgca01 on ftcgca00 (cost=0.00..4.68 rows=1 width=32) \n(actual time=0.04..19.14 rows=1 loops=1)\n \n -> \nSort (cost=3920.20..3947.59 rows=10956 width=94) (actual time=516.19..529.77 \nrows=10956 loops=1)\n \n \nSort Key: ftclcr00.emp, ftclcr00.fil, ftclcr00.tipo_contribuinte, \nftclcr00.codigo_rede\n \n -\n> Index Scan using ftclcr07 on ftclcr00 (cost=0.00..3185.08 rows=10956 \nwidth=94) (actual time=0.09..146.20 rows=10956 loops=1)\n \n -> Sort \n(cost=19148.05..19167.27 rows=7688 width=109) (actual time=227.46..237.00 \nrows=7668 loops=1)\n \n Sort Key: \nftnfco00.cod_cliente, ftnfco00.estado_cliente, ftnfco00.tipo_cad_clicre, \nftnfco00.empfil, ftnfco00.emp\n \n -> Index \nScan using ftnfco06 on ftnfco00 (cost=0.00..18651.88 rows=7688 width=109) \n(actual time=0.16..116.43 rows=7668 loops=1)\n \n Index \nCond: ((emp = 909::numeric) AND (situacao_nf = 'N'::character varying) AND \n(data_emissao >= '2002-10-01 00:00:00'::timestamp without time zone) AND \n(data_emissao <= '2003-03-31 00:00:00'::timestamp without time zone))\n -> \nHash (cost=4.33..4.33 rows=33 width=6) (actual time=0.23..0.23 rows=0 loops=1)\n \n -> Index Scan using gsesta01 on gsesta00 (cost=0.00..4.33 rows=33 width=6) \n(actual time=0.04..0.15 rows=33 loops=1)\n -> Index Scan using ftnfpr05 on \nftnfpr00 (cost=0.00..5.91 rows=1 width=112) (actual time=0.06..0.15 rows=3 \nloops=6358)\n Index Cond: ((\"outer\".emp = \nftnfpr00.emp) AND (\"outer\".fil = ftnfpr00.fil) AND (ftnfpr00.fil = \n101::numeric) AND (\"outer\".data_emissao = ftnfpr00.data_emissao) AND \n(\"outer\".nota_fiscal = ftnfpr00.nota_fiscal) AND (\"outer\".serie = \nftnfpr00.serie))\n -> Index Scan using ftspro01 on \nftspro00 (cost=0.00..5.78 rows=10 width=27) (actual time=0.01..0.07 rows=10 \nloops=19923)\n -> Index Scan using ftprod01 on ftprod00 \n(cost=0.00..5.74 rows=1 width=90) (actual time=0.04..0.05 rows=1 loops=199230)\n Index Cond: ((ftprod00.emp \n= 
\"outer\".emp) AND (ftprod00.fil = \"outer\".empfil) AND (ftprod00.tipo_cadastro \n= \"outer\".tipo_cad_promat) AND (ftprod00.codigo_produto = \"outer\".cod_produto))\n -> Seq Scan on gsames00 \n(cost=100000000.00..100000006.72 rows=372 width=10) (actual time=0.01..0.96 \nrows=372 loops=19923)\n -> Index Scan using ftrepr01 on ftrepr00 \n(cost=0.00..5.41 rows=1 width=53) (actual time=0.04..0.05 rows=1 loops=19923)\n Index Cond: ((ftrepr00.emp = \"outer\".emp) AND \n(ftrepr00.fil = \"outer\".empfil) AND (ftrepr00.codigo_repr = \"outer\".cod_repres))\n Total runtime: 105885.43 msec\n(75 rows)\n\n\n\nThe Oracle functions like NVL, DECODE, and others had been created in \nPostgreSQL.\n\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n",
"msg_date": "Wed, 17 Sep 2003 19:17:47 -0300",
"msg_from": "Rhaoni Chiu Pereira <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to force an Index ?"
},
{
"msg_contents": "Rhaoni,\n\nFirst off, thanks for posting such complete info.\n\n> ... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'YYYYMM') AND ...\n> \n> ftnfco00.data_emissao is a timestamp. When I run the explain analyze it \nsays:\n> \n> ...\n> -> Seq Scan on gsames00 (cost=100000000.00..100000006.72 rows=372 \nwidth=10) \n> (actual time=0.01..0.96 rows=372 loops=19923)\n> ...\n\nYour problem is that you're comparing against a calculated expression based on \nftnfco00, which is being filtered in about 18 other ways. As a result, the \nplanner doesn't know what to estimate (see the cost estimate of 100000000, \nwhich is a \"blind guess\" values) and goes for a seq scan.\n\n Can I ask you to try this workaround, to create an expressional index on \nftnfco00 (assuming that data_emmisao is of type DATE)\n\ncreate function date_to_yyyymm( date ) returns text as\n'select to_char($1, ''YYYYMM'');\n' language sql immutable strict;\n\ncreate index idx_data_yyyymm on ftnfco00(date_to_yyyymm(data_emmisao));\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 17 Sep 2003 15:38:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to force an Index ?"
},
{
"msg_contents": "I solve this problem doing this:\n\n create function date_to_yyyymm( timestamp ) returns gsames00.ano_mes%type as\n 'select to_char($1, ''YYYYMM'');\n ' language sql immutable strict;\n\nAnd changing the SQL where clause:\n\n ... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'YYYYMM') AND ...\n\nto:\n\n ... gsames00.ano_mes = date_to_yyyymm(ftnfco00.data_emissao) AND ...\n\n Then it uses the gsames00 index instead of a SeqScan 'cuz it is camparing\nsame data type, but .. I don't want to create this function 'cuz this aplication\nis used with Oracle too. \nI need to know if there is a way to set the to_char output to varchar instead of\ntext !\n Any Idea ? So, this way I wont have to change my aplication source.\n\n\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\nCitando Josh Berkus <[email protected]>:\n\n<> Rhaoni,\n<> \n<> First off, thanks for posting such complete info.\n<> \n<> > ... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'YYYYMM') AND ...\n<> > \n<> > ftnfco00.data_emissao is a timestamp. When I run the explain analyze it\n<> \n<> says:\n<> > \n<> > ...\n<> > -> Seq Scan on gsames00 (cost=100000000.00..100000006.72 rows=372 \n<> width=10) \n<> > (actual time=0.01..0.96 rows=372 loops=19923)\n<> > ...\n<> \n<> Your problem is that you're comparing against a calculated expression based\n<> on \n<> ftnfco00, which is being filtered in about 18 other ways. As a result, the\n<> \n<> planner doesn't know what to estimate (see the cost estimate of 100000000, \n<> which is a \"blind guess\" values) and goes for a seq scan.\n<> \n<> Can I ask you to try this workaround, to create an expressional index on \n<> ftnfco00 (assuming that data_emmisao is of type DATE)\n<> \n<> create function date_to_yyyymm( date ) returns text as\n<> 'select to_char($1, ''YYYYMM'');\n<> ' language sql immutable strict;\n<> \n<> create index idx_data_yyyymm on ftnfco00(date_to_yyyymm(data_emmisao));\n<> \n<> -- \n<> -Josh Berkus\n<> Aglio Database Solutions\n<> San Francisco\n<> \n<> \n<> ---------------------------(end of broadcast)---------------------------\n<> TIP 4: Don't 'kill -9' the postmaster\n<> \n\n",
"msg_date": "Thu, 18 Sep 2003 11:45:09 -0300",
"msg_from": "Rhaoni Chiu Pereira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] How to force an Index ?"
},
{
"msg_contents": "Rhaoni Chiu Pereira <[email protected]> writes:\n> I need to know if there is a way to set the to_char output to varchar instead of\n> text !\n\nWhy don't you change the datatype of ano_mes to text, instead? It's\nunlikely your application would notice the difference. (You could set\na CHECK constraint on the length if you really want to duplicate the\nbehavior of varchar(6).)\n\nAlternatively, try 7.4 beta. I believe this issue goes away in 7.4,\nbecause varchar no longer has separate comparison operators.\n\nOf course there's also the option of modifying to_char's result type\nin pg_proc, but I won't promise that doing so wouldn't break things.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Sep 2003 12:16:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] How to force an Index ? "
},
{
"msg_contents": "Rhaoni,\n\n> ... gsames00.ano_mes = to_char(ftnfco00.data_emissao,'YYYYMM') AND ...\n\n> Then it uses the gsames00 index instead of a SeqScan 'cuz it is\n> camparing same data type, but .. I don't want to create this function 'cuz\n> this aplication is used with Oracle too.\n\nYou should have said that earlier ....\n\n> I need to know if there is a way to set the to_char output to varchar\n> instead of text !\n\nDid you try: \n\n... gsames00.ano_mes = (to_char(ftnfco00.data_emissao,'YYYYMM')::VARCHAR) AND \n...\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 10:50:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] How to force an Index ?"
},
{
"msg_contents": "Rhaoni,\n\n> I could .. but this way I wont be used because Oracle doesn't accept such\n> sintax ! I changed gsames00.ano_mes from varchar to text ! But it still not\n> fast enough to take Oracle's place !!!\n> I still trying to do so ...\n\nWell, your basic problem is that performance tuning for *any* database often \nrequires use of database-specific syntax. You would be having the same \nproblem, in the opposite direction, if you were trying to port a PostgreSQL \napp to Oracle without changing any syntax.\n\nHere's syntax Oracle should accept:\n\n... gsames00.ano_mes = (CAST(to_char(ftnfco00.data_emissao,'YYYYMM') AS \nVARCHAR)) AND \n...\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 11:02:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to force an Index ?"
}
] |
[
{
"msg_contents": "\nI'm on 7.3.4 and this query gets horrible performance. Is there a way to rewrite it with an exists or some way to get better performance?\n\nselect code, id, name, date_of_service\n from tbl\nwhere date_of_service in\n (select date_of_service\n from tbl\n where xxx >= '29800'\n and xxx <= '29909'\n and code = 'XX')\n and client_code = 'XX'\norder by id, date_of_service;\n\nThanks!\n\n\n\n",
"msg_date": "Wed, 17 Sep 2003 21:59:29 -0700 (GMT-07:00)",
"msg_from": "LN Cisneros <[email protected]>",
"msg_from_op": true,
"msg_subject": "rewrite in to exists?"
},
{
"msg_contents": "> I'm on 7.3.4 and this query gets horrible performance. Is there a way to\nrewrite it with an exists or some way to get better performance?\n>\n> select code, id, name, date_of_service\n> from tbl\n> where date_of_service in\n> (select date_of_service\n> from tbl\n> where xxx >= '29800'\n> and xxx <= '29909'\n> and code = 'XX')\n> and client_code = 'XX'\n> order by id, date_of_service;\n\n????\n\nWhy can't you just go:\n\nselect code, id, name, date_of_service from tbl where xxx <= 29800 and xx >=\n29909 and code='XX' and client_code='XX' order by id, date_of_service;\n\nOr use a between clause is nice:\n\nselect code, id, name, date_of_service from tbl where xxx between 29800 and\n29909 and code='XX' and client_code='XX' order by id, date_of_service;\n\nBut seriously - your query above is referencing 'tbl' twice - is that\ncorrect, or is the tbl in the subselect supposed to be something different?\n\nChris\n\n",
"msg_date": "Thu, 18 Sep 2003 13:23:37 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rewrite in to exists?"
},
{
"msg_contents": "Hi guys,\n\nI am in the process of creating a database design in which LOTS of data need\nto be modelled.\n\nFor instance, I need to store data about products. Every product has LOTS of\nproperties, well over a hundred.\n\nSo I'm wondering. What's the best approach here, performance wise? Just\ncreate one Product table with well over a hundred columns? Or would it be\nbetter to divide this over more tables and link them together via ID's? I\ncould for instance create tables Product, PriceInfo, Logistics, StorageInfo,\nPackagingInfo and link them together via the same ID. This would be easier\nto document (try to visualize a 100+ column table in a document!), but would\nit impact performance? I tihnk maybe it would impact Select performance, but\nUpdating of products would maybe speed up a little...\n\nAll info about a product is unique for this product so records in PriceInfo,\nLogistics, StorageInfo, PackagingInfo tables would map one to one to records\nin the Product table.\n\nDo any of you know if and how PostgreSQL would prefer one approach over the\nother?\n\nThanks in advance,\nAlexander Priem.\n\n",
"msg_date": "Thu, 18 Sep 2003 10:13:11 +0200",
"msg_from": "\"Alexander Priem\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Many fields in one table or many tables?"
},
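A sketch of the split Alexander describes, using two of his proposed tables with made-up columns. The one-to-one link is just a shared primary key, so the full record can always be reassembled with a join:

    CREATE TABLE product (
        product_id integer PRIMARY KEY,
        name       varchar(100) NOT NULL
    );

    CREATE TABLE priceinfo (
        product_id integer PRIMARY KEY REFERENCES product (product_id),
        list_price numeric(12,2),
        currency   char(3)
    );

    -- reassembling a full row is a straight join on the shared key
    SELECT p.name, pi.list_price
      FROM product p JOIN priceinfo pi ON pi.product_id = p.product_id;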
{
"msg_contents": "On Thu, 18 Sep 2003 13:23:37 +0800, \"Christopher Kings-Lynne\"\n<[email protected]> wrote:\n>Why can't you just go:\n>\n>select code, id, name, date_of_service from tbl where xxx <= 29800 and xx >=\n>29909 and code='XX' and client_code='XX' order by id, date_of_service;\n\nBecause (ignoring conditions on code and client_code for a moment) if\nfor a given date there is at least one row satisfying the condition on\nxxx, the original query returns *all* rows having this date,\nregardless of their xxx value. For example:\n\n id | date | xxx\n----+------------+-------\n 1 | 2003-01-01 | 10000 *\n 2 | 2003-01-01 | 29800 * *\n 3 | 2003-01-01 | 30000 *\n 4 | 2003-02-02 | 20000\n 5 | 2003-03-03 | 29900 * *\n\n\n>> select code, id, name, date_of_service\n>> from tbl\n>> where date_of_service in\n>> (select date_of_service\n>> from tbl\n>> where xxx >= '29800'\n>> and xxx <= '29909'\n>> and code = 'XX')\n>> and client_code = 'XX'\n>> order by id, date_of_service;\n\nTo the original poster: You did not provide a lot of information, but\nthe following suggestions might give you an idea ...\n\nSELECT code, id, date_of_service\n FROM tbl\n WHERE EXISTS (SELECT *\n FROM tbl t2\n WHERE t2.xxx >= '29800' AND t2.xxx <= '29909'\n AND t2.code = 'XX'\n AND tbl.date_of_service = t2.date_of_service)\n AND client_code = 'XX'\n ORDER BY id, date_of_service;\n\nSELECT t1.code, t1.id, t1.date_of_service\n FROM tbl t1 INNER JOIN\n (SELECT DISTINCT date_of_service\n FROM tbl\n WHERE xxx >= '29800' AND xxx <= '29909'\n AND code = 'XX'\n ) AS t2 ON (t1.date_of_service = t2.date_of_service)\n WHERE t1.client_code = 'XX'\n ORDER BY id, date_of_service;\n\nSELECT DISTINCT t1.code, t1.id, t1.date_of_service\n FROM tbl AS t1 INNER JOIN tbl AS t2\n ON (t1.date_of_service = t2.date_of_service\n AND t2.xxx >= '29800' AND t2.xxx <= '29909'\n AND t2.code = 'XX')\n WHERE t1.client_code = 'XX' -- might as well put this\n -- condition into the ON clause\n ORDER BY id, date_of_service;\n\nThe last one assumes that there are no duplicates on code, id,\ndate_of_service in the desired result.\n\nServus\n Manfred\n",
"msg_date": "Thu, 18 Sep 2003 11:16:03 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rewrite in to exists?"
},
{
"msg_contents": "Alexander,\n\n> I am in the process of creating a database design in which LOTS of data\n> need to be modelled.\n>\n> For instance, I need to store data about products. Every product has LOTS\n> of properties, well over a hundred.\n<snip>\n> Do any of you know if and how PostgreSQL would prefer one approach over the\n> other?\n\nQueston 1: Do all products have all of these properties, or do some/many/most \nnot have some properties? If the answer is the former, then a single table, \nhowever broad, is the logical construct. If the latter, than several tables \nmakes more sense: why create NULL columns for stuff you could just leave out?\n\nQuestion 2: Is it true that some properties will be updated *much* (100x) more \nfrequently than others? If so, it would make sense from a \nperformance/postgresql standpoint to isolate those properties to related \ntable(s). Keep in mind that this recommendation is strictly performance \nrelated, and is not necessarily the best relational design.\n\nSuggestion 3: There was an issue in 7.3 with table rows which are overly broad \n-- some problems with PSQL, I believe. It would be worth searching for, as \nI cannot remember what the limit is where problems occurred.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 10:27:12 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many fields in one table or many tables?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Suggestion 3: There was an issue in 7.3 with table rows which are overly broad \n> -- some problems with PSQL, I believe.\n\nNot sure about PSQL, but I think there still are some performance issues\nin the backend with SELECTs involving more than a couple hundred\ntargetlist entries. These are probably fixable at not-very-large effort\nbut we haven't made any consistent push to find and fix the trouble\nspots. The issues that I recall are O(N^2) problems (doubly nested\nloops) so the performance with ~100 entries is no problem but it gets\nrapidly worse above that. You could hit this even with ~100-column\ntables if you try to select all columns from a join of two or more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Sep 2003 14:20:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many fields in one table or many tables? "
}
] |
[
{
"msg_contents": "3) using PG 7.3 or less, you will also need to REINDEX these\n tables+indexes often (daily?). This issue will go away\n in 7.4, which should make you an early adopter of 7.4.\n\nIs this true? Haven't heard of this before.\nIf so, how can this be managed in a cronjob?\nFor the hourly VACUUM there's vacuumdb, but is\nthere somehting similar like reindexdb ?\n\nregards,\nOliver Scheit\n",
"msg_date": "Thu, 18 Sep 2003 10:18:27 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "\n> 3) using PG 7.3 or less, you will also need to REINDEX these\n> tables+indexes often (daily?). This issue will go away\n> in 7.4, which should make you an early adopter of 7.4.\n\nTry monthly maybe.\n\n> Is this true? Haven't heard of this before.\n> If so, how can this be managed in a cronjob?\n> For the hourly VACUUM there's vacuumdb, but is\n> there somehting similar like reindexdb ?\n\nYes, there is reindexdb :)\n\nChris\n\n",
"msg_date": "Thu, 18 Sep 2003 16:24:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
}
] |
[
{
"msg_contents": "> Yes, there is reindexdb :)\n\nNot on my machine. (RH 7.3)\n\n#rpm -qa|grep postgres\npostgresql-server-7.2.3-5.73\npostgresql-libs-7.2.3-5.73\npostgresql-devel-7.2.3-5.73\npostgresql-7.2.3-5.73\n\nWhat package am I missing?\n\nregards,\nOliver Scheit\n",
"msg_date": "Thu, 18 Sep 2003 10:29:42 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "> #rpm -qa|grep postgres\n> postgresql-server-7.2.3-5.73\n> postgresql-libs-7.2.3-5.73\n> postgresql-devel-7.2.3-5.73\n> postgresql-7.2.3-5.73\n>\n> What package am I missing?\n\nIt's part of postgresql 7.3. Just get it from the 7.3 contrib dir - it\nworks fine with 7.2\n\nNote that this index growth problem has been basically solved as of\npostgresql 7.4 - so that is your other option.\n\nChris\n\n\n\n",
"msg_date": "Thu, 18 Sep 2003 16:48:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "Guys,\n\nI also wrote a perl script that reindexes all tables, if anyone can't get \nreindexdb working or find it for 7.2.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 10:30:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
}
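Josh's script is not reproduced here, but for anyone rolling their own, the core of such a script is usually nothing more than listing the ordinary user tables and issuing REINDEX TABLE for each name returned. A rough sketch of the listing step:

    -- list candidate tables; a driver script then runs REINDEX TABLE <name>
    -- for every row returned
    SELECT relname
      FROM pg_class
     WHERE relkind = 'r'
       AND substr(relname, 1, 3) <> 'pg_';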
] |
[
{
"msg_contents": ">> > It's part of postgresql 7.3. Just get it from the 7.3\n>> > contrib dir - it works fine with 7.2\n>> That's nice to hear. Thanx for that info.\n\n> That's alright - cron job it for once a month - that's what\n> I do. Basically the problem is that in certain cases\n> (monotonically increasing serial indexes) for instance,\n> PosgreSQL < 7.4 is unable to fully reclaim all the\n> space after a page split. This means that your indexes\n> just gradually grow really large.\n\nUhm, I'm unable to find reindexdb. I have postgres 7.3.4\non another server, but there's no reindexdb. Can you point\nme to the right direction?\n\nHere's what's installed on that machine:\n# rpm -qa|grep postgres\npostgresql-perl-7.2.3-5.73\npostgresql-libs-7.3.4-2PGDG\npostgresql-pl-7.3.4-2PGDG\npostgresql-7.3.4-2PGDG\npostgresql-contrib-7.3.4-2PGDG\npostgresql-server-7.3.4-2PGDG\n\n> Yeah - 7.4 beta3 will be out very shortly, you'll probably\n> have to wait a month or so for a final 7.4 release.\n\nOld version is rockstable and quite fast, so no problem with\nthat.\n\n> Even then, ugprading postgresql is always a pain in the neck.\n\nUpgrading to 7.3.4 was quite easy here. dumped the dbs,\nuninstalled 7.2, installed 7.3 and let it read the dump. done.\n\nregards,\nOli\n",
"msg_date": "Thu, 18 Sep 2003 11:29:23 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
},
{
"msg_contents": "(I've sent him reindexdb off-list)\n\nChris\n\nOn Thu, 18 Sep 2003, Oliver Scheit wrote:\n\n> >> > It's part of postgresql 7.3. Just get it from the 7.3\n> >> > contrib dir - it works fine with 7.2\n> >> That's nice to hear. Thanx for that info.\n>\n> > That's alright - cron job it for once a month - that's what\n> > I do. Basically the problem is that in certain cases\n> > (monotonically increasing serial indexes) for instance,\n> > PosgreSQL < 7.4 is unable to fully reclaim all the\n> > space after a page split. This means that your indexes\n> > just gradually grow really large.\n>\n> Uhm, I'm unable to find reindexdb. I have postgres 7.3.4\n> on another server, but there's no reindexdb. Can you point\n> me to the right direction?\n>\n> Here's what's installed on that machine:\n> # rpm -qa|grep postgres\n> postgresql-perl-7.2.3-5.73\n> postgresql-libs-7.3.4-2PGDG\n> postgresql-pl-7.3.4-2PGDG\n> postgresql-7.3.4-2PGDG\n> postgresql-contrib-7.3.4-2PGDG\n> postgresql-server-7.3.4-2PGDG\n>\n> > Yeah - 7.4 beta3 will be out very shortly, you'll probably\n> > have to wait a month or so for a final 7.4 release.\n>\n> Old version is rockstable and quite fast, so no problem with\n> that.\n>\n> > Even then, ugprading postgresql is always a pain in the neck.\n>\n> Upgrading to 7.3.4 was quite easy here. dumped the dbs,\n> uninstalled 7.2, installed 7.3 and let it read the dump. done.\n>\n> regards,\n> Oli\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n",
"msg_date": "Thu, 18 Sep 2003 22:47:03 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a reason _not_ to vacuum continuously?"
}
] |
[
{
"msg_contents": "\nOn Thu, 18 Sep 2003 13:23:37 +0800, \"Christopher Kings-Lynne\"\n<snip>\n\n>To the original poster: You did not provide a lot of information, but\n>the following suggestions might give you an idea ...\n<snip>\n>\n\nYes, sorry about that. But in my query for a set of dates returned from the subquery I would then like to get all records that match this set of dates (ordered).\n\nI believe this query will work and hopefully speed it up (the \"IN\" query is extremely slow)...I give this one a try:\n\n>SELECT t1.code, t1.id, t1.date_of_service\n> FROM tbl t1 INNER JOIN\n> (SELECT DISTINCT date_of_service\n> FROM tbl\n> WHERE xxx >= '29800' AND xxx <= '29909'\n> AND code = 'XX'\n> ) AS t2 ON (t1.date_of_service = t2.date_of_service)\n> WHERE t1.client_code = 'XX'\n> ORDER BY id, date_of_service;\n\nA question I have is is the \"DISTINCT\" really going to help or is it just going to throw another sort into the mix making it slower?\n\nThanks for the help!\n\nLaurette\n\n\n",
"msg_date": "Thu, 18 Sep 2003 07:59:54 -0700 (GMT-07:00)",
"msg_from": "LN Cisneros <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rewrite in to exists?"
},
{
"msg_contents": "Laurette,\n\n> >SELECT t1.code, t1.id, t1.date_of_service\n> > FROM tbl t1 INNER JOIN\n> > (SELECT DISTINCT date_of_service\n> > FROM tbl\n> > WHERE xxx >= '29800' AND xxx <= '29909'\n> > AND code = 'XX'\n> > ) AS t2 ON (t1.date_of_service = t2.date_of_service)\n> > WHERE t1.client_code = 'XX'\n> > ORDER BY id, date_of_service;\n>\n> A question I have is is the \"DISTINCT\" really going to help or is it just\n> going to throw another sort into the mix making it slower?\n\nIt's required if you expect the subquery to return multiple rows for each \ndate_of_service match. Of course, you can also put the DISTINCT in the main \nquery instead; it depends on how many results you expect the subquery to \nhave.\n\nStill, I'd suggest trying the EXISTS version first .... under most \ncircumstances, DISTINCT is pretty slow.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 10:34:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rewrite in to exists?"
},
{
"msg_contents": "Joseph,\n\n> I hope this to be a simple question. I have need to simply read the first\n> row in a given table. Right now, I have some legacy code that selects all\n> rows in a table just to see if the first row has a certain value.\n\nYour problem is conceptual: in SQL, there is no \"first\" row. \n\nIf you want to just pick a single row at random, do\nSELECT * FROM table LIMIT 1;\n\nOr if you have a primary key id, you could for example return the row with the \nlowest id:\n\nSELECT * FROM table ORDER BY id LIMIT 1;\n\n> The code is seeking to see if an update has been run or not. A hypothetical\n> scenario would be: has an update been run to populate data into a new\n> column in a table. Neither the data nor any of the rows are consistently\n> known. So the test selects all rows, tests the first row and then ends if\n> the column has a value.\n\nI'd write an ON UPDATE trigger, personally, to fire and write data somewhere \nelse whenever the table is updated. Much more reliable ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 18 Sep 2003 10:53:57 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Find one record"
},
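A rough sketch of the ON UPDATE trigger Josh mentions. Everything here (the flag table, the function, and watched_table) is hypothetical, and it assumes plpgsql is installed in the database:

    CREATE TABLE update_flags (
        table_name text PRIMARY KEY,
        updated_at timestamp NOT NULL
    );

    CREATE FUNCTION note_update() RETURNS trigger AS '
    BEGIN
        UPDATE update_flags SET updated_at = now() WHERE table_name = TG_RELNAME;
        IF NOT FOUND THEN
            INSERT INTO update_flags VALUES (TG_RELNAME, now());
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER watched_table_updated
        AFTER UPDATE ON watched_table
        FOR EACH ROW EXECUTE PROCEDURE note_update();

    -- the "has the update been run?" check then becomes a single-row lookup
    SELECT updated_at FROM update_flags WHERE table_name = 'watched_table';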
{
"msg_contents": "Dear list,\n\nI hope this to be a simple question. I have need to simply read the first \nrow in a given table. Right now, I have some legacy code that selects all \nrows in a table just to see if the first row has a certain value.\n\nThe code is seeking to see if an update has been run or not. A hypothetical \nscenario would be: has an update been run to populate data into a new \ncolumn in a table. Neither the data nor any of the rows are consistently \nknown. So the test selects all rows, tests the first row and then ends if \nthe column has a value.\n\nDoes anyone have a better way to do this?\n\nRegards\n\n",
"msg_date": "Thu, 18 Sep 2003 14:01:05 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": false,
"msg_subject": "Find one record"
}
] |
[
{
"msg_contents": "\nThanks Josh!\n\nBut, the EXISTS version doesn't really give me what I want...all rows in tbl that match the date of the subquery.\n\nBut, using the DISTINCT does make sense.\n\nThanks again to all who helped!\n\n-----Original Message-----\nFrom: Josh Berkus <[email protected]>\nSent: Sep 18, 2003 10:34 AM\nTo: LN Cisneros <[email protected]>, LN Cisneros <[email protected]>, \n\tManfred Koizar <[email protected]>, \n\tChristopher Kings-Lynne <[email protected]>\nCc: LN Cisneros <[email protected]>, [email protected]\nSubject: Re: [PERFORM] rewrite in to exists?\n\nLaurette,\n\n> >SELECT t1.code, t1.id, t1.date_of_service\n> > FROM tbl t1 INNER JOIN\n> > (SELECT DISTINCT date_of_service\n> > FROM tbl\n> > WHERE xxx >= '29800' AND xxx <= '29909'\n> > AND code = 'XX'\n> > ) AS t2 ON (t1.date_of_service = t2.date_of_service)\n> > WHERE t1.client_code = 'XX'\n> > ORDER BY id, date_of_service;\n>\n> A question I have is is the \"DISTINCT\" really going to help or is it just\n> going to throw another sort into the mix making it slower?\n\nIt's required if you expect the subquery to return multiple rows for each \ndate_of_service match. Of course, you can also put the DISTINCT in the main \nquery instead; it depends on how many results you expect the subquery to \nhave.\n\nStill, I'd suggest trying the EXISTS version first .... under most \ncircumstances, DISTINCT is pretty slow.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n\n\n",
"msg_date": "Thu, 18 Sep 2003 12:27:23 -0700 (GMT-07:00)",
"msg_from": "LN Cisneros <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rewrite in to exists?"
},
{
"msg_contents": "On Thu, 18 Sep 2003 12:27:23 -0700 (GMT-07:00), LN Cisneros\n<[email protected]> wrote:\n>But, the EXISTS version doesn't\n\nLaurette,\nlooking at that SELECT statement again I can't see what's wrong with\nit. One of us is missing something ;-)\n\n> really give me what I want...\n\nCan you elaborate?\n\nSELECT code, id, date_of_service\n FROM tbl\n WHERE EXISTS (SELECT *\n FROM tbl t2\n WHERE t2.xxx >= '29800' AND t2.xxx <= '29909'\n AND t2.code = 'XX'\n AND tbl.date_of_service = t2.date_of_service) -- (!)\n AND client_code = 'XX'\n ORDER BY id, date_of_service;\n\n>all rows in tbl that\n ^^^\nWell, all that have client_code = 'XX', as in your original query.\n\n> match the date of the subquery.\n\nThe matching is done by the line with the (!) comment.\n\nServus\n Manfred\n",
"msg_date": "Fri, 19 Sep 2003 10:57:10 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rewrite in to exists?"
}
] |
[
{
"msg_contents": "Our hardware/software configuration:\nkernel: 2.5.74\ndistro: RH7.2\npgsql: 7.3.3\nCPUS: 8\nMHz: 700.217\nmodel: Pentium III (Cascades)\nmemory: 8298888 kB\nshmmax: 3705032704\n\nWe did several sets of runs(repeating runs with the same database\nparameters) and have the following observation:\n\n1. With everything else the same, we did two run sets with\nsmall effective_cache_size (default=1000) and large (655360 i.e. 5GB\nor 60% of the system memory 8GB). It seems to me that small \neffective_cache_size favors the choice of nested loop joins (NLJ) \nwhile the big effective_cache_size is in favor of merge joins (MJ). \nWe thought the large effective_cache_size should lead us to better \nplans. But we found the opposite. \n\nThree plans out of 22 are different. Two of those plans are worse \nin execution time by 2 times and 8 times. For example, one plan, \nthat included NLJ ran in 4 seconds but the other, switching to an \nMJ, ran in 32 seconds. Please refer to the link at the end of \nthis mail for the query and plans. Did we miss something, or \nimprovements are needed for the optimizer?\n\n2. Thanks to all the response we got from this mailing list, we \ndecided to use SETSEED(0) default_statistics_target=1000 to reduce \nthe variation. We get now the exact the same execution plans \nand costs with repeated runs and that reduced the variation a lot.\nHowever, within the same run set consist of 6 runs, we see 2-3% \nstandard deviation for the run metrics associated with the multiple\nstream part of the test (as opposed to the single stream part).\n\nWe would like to reduce the variation to be less than 1% so that a \n2% change between two different kernels would be significant. \nIs there anything else we can do?\n\nquery: http://developer.osdl.org/~jenny/11.sql\nplan with small effective_cache_size: \nhttp://developer.osdl.org/~jenny/small_effective_cache_size_plan\nplan with large effective_cache_size: \nhttp://developer.osdl.org/~jenny/large_effective_cache_size_plan\n\nThanks,\nJenny\n\n",
"msg_date": "Thu, 18 Sep 2003 15:36:50 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "osdl-dbt3 run results - puzzled by the execution plans"
},
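For readers who want to reproduce the variance-reduction step Jenny mentions, the two knobs translate roughly into statements like these. How the DBT-3 kit actually applies them is not shown in the post, so this is just one plausible reading:

    SELECT setseed(0);                     -- pin random() so generated parameters repeat
    SET default_statistics_target = 1000;  -- far above the 7.3 default of 10
    ANALYZE;                               -- regather planner statistics at the new target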
{
"msg_contents": "> We thought the large effective_cache_size should lead us to better\n> plans. But we found the opposite.\n\nMaybe it's inappropriate for little old me to jump in here, but the plan\nisn't usually that important compared to the actual runtime. The links you\ngive show the output of 'explain' but not 'explain analyze', so it's not\nclear wich plan is actually _faster_.\n\nIf you really do have only 8MB of FS cache, then either plan will run\nslowly. If you really do have 5GB of FS cache then either plan will run a\nlot faster. Why would you deliberately give the planner false information\nabout this?\n\nPG obviously thinks plan 1 is 'better' when pages have to be fetched from\ndisk, and plan 2 is 'better' when they don't. Which is really better\ndepends on whether those pages do have to be fetched from disk or not, and\nPG can only know what you tell it about that, so changing ECS without\nactually removing the RAM from the system seems a little pointless to me...\n\nM\n\n\n\n\n",
"msg_date": "Fri, 19 Sep 2003 00:19:00 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution plans"
},
{
"msg_contents": "Thanks for your prompt reply.\n\nOn Thu, 2003-09-18 at 16:19, Matt Clark wrote:\n> > We thought the large effective_cache_size should lead us to better\n> > plans. But we found the opposite.\n> \n> Maybe it's inappropriate for little old me to jump in here, but the plan\n> isn't usually that important compared to the actual runtime. The links you\n> give show the output of 'explain' but not 'explain analyze', so it's not\n> clear wich plan is actually _faster_.\n> \nI put the EXPLAIN ANALYZE output at:\nhttp://developer.osdl.org/~jenny/large_explain_analyze\nhttp://developer.osdl.org/~jenny/small_explain_analyze\nThe actual execution time is 37 seconds(large) vs 5 seconds (small).\n\nI concluded the one with nested loop one is faster since we saw it\nconsistently faster than the merge join one in our runs.\n> If you really do have only 8MB of FS cache, then either plan will run\n> slowly. If you really do have 5GB of FS cache then either plan will run a\n> lot faster. Why would you deliberately give the planner false information\n> about this?\n> \nWe did not. A little history of our runs:\nWhen we first started, not knowing PG well, we just used the default ECS\nvalue(1000). \nThen we realized since we have 8G of RAM, we should set ECS to 655360. \nBut this leads the optimizer to pick a bad plan. This is the reason why\nwe post this message.\n> PG obviously thinks plan 1 is 'better' when pages have to be fetched from\n> disk, and plan 2 is 'better' when they don't. Which is really better\n> depends on whether those pages do have to be fetched from disk or not, and\n> PG can only know what you tell it about that, so changing ECS without\n> actually removing the RAM from the system seems a little pointless to me...\n> \n> M\n> \n> \n> \nRegards,\nJenny\n\n",
"msg_date": "Thu, 18 Sep 2003 17:52:41 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
{
"msg_contents": "Jenny Zhang <[email protected]> writes:\n> ... It seems to me that small \n> effective_cache_size favors the choice of nested loop joins (NLJ) \n> while the big effective_cache_size is in favor of merge joins (MJ). \n\nNo, I wouldn't think that, because a nestloop plan will involve repeated\nfetches of the same tuples whereas a merge join doesn't (at least not\nwhen it sorts its inner input, as this plan does). Larger cache\nimproves the odds of a repeated fetch not having to do I/O. In practice\na larger cache area would also have some effects on access costs for the\nsort's temp file, but I don't think the planner's cost model for sorting\ntakes that into account.\n\nAs Matt Clark points out nearby, the real question is whether these\nplanner estimates have anything to do with reality. EXPLAIN ANALYZE\nresults would be far more interesting than plain EXPLAIN.\n\n> However, within the same run set consist of 6 runs, we see 2-3% \n> standard deviation for the run metrics associated with the multiple\n> stream part of the test (as opposed to the single stream part).\n\n<python> Och, laddie, we useta *dream* of 2-3% variation </python>\n\n> We would like to reduce the variation to be less than 1% so that a \n> 2% change between two different kernels would be significant. \n\nI think this is a pipe dream. Variation in where the data gets laid\ndown on your disk drive would alone create more than that kind of delta.\nI'm frankly amazed you could get repeatability within 2-3%.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Sep 2003 23:20:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution plans "
},
{
"msg_contents": "> I put the EXPLAIN ANALYZE output at:\n> http://developer.osdl.org/~jenny/large_explain_analyze\n> http://developer.osdl.org/~jenny/small_explain_analyze\n> The actual execution time is 37 seconds(large) vs 5 seconds (small).\n> \n\nThere's an obvious row count misestimation in the 'large' plan:\n\n-> Sort (cost=519.60..520.60 rows=400 width=31) (actual time=106.88..143.49 rows=30321 loops=1)\n\nbut I'm not good enough at reading these things to tell if that's the cause of the problem, or if so how to fix it :-(\n\n\n\n",
"msg_date": "Fri, 19 Sep 2003 11:41:55 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> I think this is a pipe dream. Variation in where the data gets laid\n> down on your disk drive would alone create more than that kind of delta.\n> I'm frankly amazed you could get repeatability within 2-3%.\n\nI think the reason he gets good repeatability is because he's talking about\nthe aggregate results for a whole test run. Not individual queries. In theory\nyou could just run the whole test multiple times. The more times you run it\nthe lower the variation in the total run time would be.\n\nActually, the variation in run time is also a useful statistic, both for\npostgres and the kernel. It might be useful to do multiple complete runs and\nkeep track of the average standard deviation of the time required for each\nstep.\n\nHigher standard deviation implies queries can't be reliably depended on not to\ntake inordinately long, which can be a problem for some working models. For\nthe kernel it could mean latency issues or it could mean the swapper or buffer\ncache was overly aggressive.\n\n-- \ngreg\n\n",
"msg_date": "19 Sep 2003 09:12:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution plans"
},
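One way to keep the per-step statistics Greg suggests is a small bookkeeping table on the side; the table and column names below are illustrative and not part of the DBT-3 kit:

    CREATE TABLE query_times (run_id integer, query_id integer, seconds numeric);
    -- one row per query execution per run, then:
    SELECT query_id,
           avg(seconds)    AS avg_seconds,
           stddev(seconds) AS stddev_seconds
      FROM query_times
     GROUP BY query_id
     ORDER BY query_id;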
{
"msg_contents": "On Thu, 18 Sep 2003 15:36:50 -0700, Jenny Zhang <[email protected]>\nwrote:\n>We thought the large effective_cache_size should lead us to better \n>plans. But we found the opposite. \n\nThe common structure of your query plans is:\n\n Sort\n Sort Key: sum((partsupp.ps_supplycost * partsupp.ps_availqty))\n InitPlan\n -> Aggregate\n -> SubPlan\n -> Aggregate\n Filter: (sum((ps_supplycost * ps_availqty)) > $0)\n -> Group\n -> Sort\n Sort Key: partsupp.ps_partkey\n -> SubPlan (same as above)\n\nwhere the SubPlan is\n\n -> Merge Join (cost=519.60..99880.05 rows=32068 width=65)\n (actual time=114.78..17435.28 rows=30400 loops=1)\n ctr=5.73\n Merge Cond: (\"outer\".ps_suppkey = \"inner\".s_suppkey)\n -> Index Scan using i_ps_suppkey on partsupp\n (cost=0.00..96953.31 rows=801712 width=34)\n (actual time=0.42..14008.92 rows=799361 loops=1)\n ctr=6.92\n -> Sort (cost=519.60..520.60 rows=400 width=31)\n (actual time=106.88..143.49 rows=30321 loops=1)\n ctr=3.63\n Sort Key: supplier.s_suppkey\n -> SubSubPlan\n\nfor large effective_cache_size and\n\n -> Nested Loop (cost=0.00..130168.30 rows=32068 width=65)\n (actual time=0.56..1374.41 rows=30400 loops=1)\n ctr=94.71\n -> SubSubPlan\n -> Index Scan using i_ps_suppkey on partsupp\n (cost=0.00..323.16 rows=80 width=34)\n (actual time=0.16..2.98 rows=80 loops=380)\n ctr=108.44\n Index Cond: (partsupp.ps_suppkey = \"outer\".s_suppkey)\n\nfor small effective_cache_size. Both subplans have an almost\nidentical subsubplan:\n\n-> Nested Loop (cost=0.00..502.31 rows=400 width=31)\n (actual time=0.23..110.51 rows=380 loops=1)\n ctr=4.55\n Join Filter: (\"inner\".s_nationkey = \"outer\".n_nationkey)\n -> Seq Scan on nation (cost=0.00..1.31 rows=1 width=10)\n (actual time=0.08..0.14 rows=1 loops=1)\n ctr=9.36\n Filter: (n_name = 'ETHIOPIA'::bpchar)\n -> Seq Scan on supplier (cost=0.00..376.00 rows=10000 width=21)\n (actual time=0.10..70.72 rows=10000 loops=1)\n ctr=5.32\n\nI have added the ctr (cost:time ratio) for each plan node. These\nvalues are mostly between 5 and 10 with two notable exceptions:\n\n1) -> Sort (cost=519.60..520.60 rows=400 width=31)\n (actual time=106.88..143.49 rows=30321 loops=1)\n ctr=3.63\n\nIt has already been noticed by Matt Clark that this is the only plan\nnode where the row count estimation looks wrong. However, I don't\nbelieve that this has great influence on the total cost of the plan,\nbecause the ctr is not far from the usual range and if it were a bit\nhigher, it would only add a few hundred cost units to a branch costing\nalmost 100000 units. BTW I vaguely remember that there is something\nstrange with the way actual rows are counted inside a merge join.\nLook at the branch below this plan node: It shows an actual row count\nof 380.\n\n2) -> Index Scan using i_ps_suppkey on partsupp\n (cost=0.00..323.16 rows=80 width=34)\n (actual time=0.16..2.98 rows=80 loops=380)\n ctr=108.44\n\nHere we have the only plan node where loops > 1, and it is the only\none where the ctr is far off. The planner computes the cost for one\nloop and multiplies it by the number of loops (which it estimates\nquite accurately to be 400), thus getting a total cost of ca. 130000.\nWe have no reason to believe that the single loop cost is very far\nfrom reality (for a *single* index scan), but the planner does not\naccount for additional index scans hitting pages in the cache that\nhave been brought in by preceding scans. 
This is a known problem, Tom\nhas mentioned it several times, IIRC.\n\nNow I'm very interested in getting a better understanding of this\nproblem, so could you please report the results of\n\n. \\d i_ps_suppkey\n\n. VACUUM VERBOSE ANALYSE partsupp;\n VACUUM VERBOSE ANALYSE supplier;\n\n. SELECT attname, null_frac, avg_width, n_distinct, correlation\n FROM pg_stats\n WHERE tablename = 'partsupp' AND attname IN ('ps_suppkey', ...);\n\n Please insert other interesting column names for ..., especially\n those contained in i_ps_suppkey, if any.\n\n. SELECT relname, relpages, reltuples\n FROM pg_class\n WHERE relname IN ('partsupp', 'supplier', ...);\n ^^^\n Add relevant index names here.\n\n. EXPLAIN ANALYSE\n SELECT ps_partkey, ps_supplycost, ps_availqty\n FROM partsupp, supplier\n WHERE ps_suppkey = s_suppkey AND s_nationkey = '<youknowit>';\n\n The idea is to eliminate parts of the plan that are always the same.\n Omitting nation is possibly too much of a simplification. In this case\n please re-add it.\n Do this test for small and large effective_cache_size.\n Force the use of other join methods by setting enable_<joinmethod>\n to off. Post all results.\n\n\nJenny, I understand that this long message contains more questions\nthan answers and is not of much help for you. OTOH your tests might\nbe very helpful for Postgres development ...\n\nServus\n Manfred\n",
"msg_date": "Fri, 19 Sep 2003 17:08:26 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution plans"
},
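For readers skimming the annotated plans above: each ctr figure is simply the node's estimated total cost divided by its actual total time in milliseconds, so for the merge join node, for example:

    ctr = 99880.05 (estimated total cost) / 17435.28 ms (actual total time) ~= 5.73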
{
"msg_contents": "I posted more results as you requested:\n\nOn Fri, 2003-09-19 at 08:08, Manfred Koizar wrote:\n> On Thu, 18 Sep 2003 15:36:50 -0700, Jenny Zhang <[email protected]>\n> wrote:\n> >We thought the large effective_cache_size should lead us to better \n> >plans. But we found the opposite. \n> \n> The common structure of your query plans is:\n> \n> Sort\n> Sort Key: sum((partsupp.ps_supplycost * partsupp.ps_availqty))\n> InitPlan\n> -> Aggregate\n> -> SubPlan\n> -> Aggregate\n> Filter: (sum((ps_supplycost * ps_availqty)) > $0)\n> -> Group\n> -> Sort\n> Sort Key: partsupp.ps_partkey\n> -> SubPlan (same as above)\n> \n> where the SubPlan is\n> \n> -> Merge Join (cost=519.60..99880.05 rows=32068 width=65)\n> (actual time=114.78..17435.28 rows=30400 loops=1)\n> ctr=5.73\n> Merge Cond: (\"outer\".ps_suppkey = \"inner\".s_suppkey)\n> -> Index Scan using i_ps_suppkey on partsupp\n> (cost=0.00..96953.31 rows=801712 width=34)\n> (actual time=0.42..14008.92 rows=799361 loops=1)\n> ctr=6.92\n> -> Sort (cost=519.60..520.60 rows=400 width=31)\n> (actual time=106.88..143.49 rows=30321 loops=1)\n> ctr=3.63\n> Sort Key: supplier.s_suppkey\n> -> SubSubPlan\n> \n> for large effective_cache_size and\n> \n> -> Nested Loop (cost=0.00..130168.30 rows=32068 width=65)\n> (actual time=0.56..1374.41 rows=30400 loops=1)\n> ctr=94.71\n> -> SubSubPlan\n> -> Index Scan using i_ps_suppkey on partsupp\n> (cost=0.00..323.16 rows=80 width=34)\n> (actual time=0.16..2.98 rows=80 loops=380)\n> ctr=108.44\n> Index Cond: (partsupp.ps_suppkey = \"outer\".s_suppkey)\n> \n> for small effective_cache_size. Both subplans have an almost\n> identical subsubplan:\n> \n> -> Nested Loop (cost=0.00..502.31 rows=400 width=31)\n> (actual time=0.23..110.51 rows=380 loops=1)\n> ctr=4.55\n> Join Filter: (\"inner\".s_nationkey = \"outer\".n_nationkey)\n> -> Seq Scan on nation (cost=0.00..1.31 rows=1 width=10)\n> (actual time=0.08..0.14 rows=1 loops=1)\n> ctr=9.36\n> Filter: (n_name = 'ETHIOPIA'::bpchar)\n> -> Seq Scan on supplier (cost=0.00..376.00 rows=10000 width=21)\n> (actual time=0.10..70.72 rows=10000 loops=1)\n> ctr=5.32\n> \n> I have added the ctr (cost:time ratio) for each plan node. These\n> values are mostly between 5 and 10 with two notable exceptions:\n> \n> 1) -> Sort (cost=519.60..520.60 rows=400 width=31)\n> (actual time=106.88..143.49 rows=30321 loops=1)\n> ctr=3.63\n> \n> It has already been noticed by Matt Clark that this is the only plan\n> node where the row count estimation looks wrong. However, I don't\n> believe that this has great influence on the total cost of the plan,\n> because the ctr is not far from the usual range and if it were a bit\n> higher, it would only add a few hundred cost units to a branch costing\n> almost 100000 units. BTW I vaguely remember that there is something\n> strange with the way actual rows are counted inside a merge join.\n> Look at the branch below this plan node: It shows an actual row count\n> of 380.\n> \n> 2) -> Index Scan using i_ps_suppkey on partsupp\n> (cost=0.00..323.16 rows=80 width=34)\n> (actual time=0.16..2.98 rows=80 loops=380)\n> ctr=108.44\n> \n> Here we have the only plan node where loops > 1, and it is the only\n> one where the ctr is far off. The planner computes the cost for one\n> loop and multiplies it by the number of loops (which it estimates\n> quite accurately to be 400), thus getting a total cost of ca. 
130000.\n> We have no reason to believe that the single loop cost is very far\n> from reality (for a *single* index scan), but the planner does not\n> account for additional index scans hitting pages in the cache that\n> have been brought in by preceding scans. This is a known problem, Tom\n> has mentioned it several times, IIRC.\n> \n> Now I'm very interested in getting a better understanding of this\n> problem, so could you please report the results of\n> \n> . \\d i_ps_suppkey\n> \nhttp://developer.osdl.org/~jenny/pgsql-optimizer/disc_i_ps_suppkey\n> . VACUUM VERBOSE ANALYSE partsupp;\n> VACUUM VERBOSE ANALYSE supplier;\n> \nhttp://developer.osdl.org/~jenny/pgsql-optimizer/vacuum_verbose_analyze_partsupp\nhttp://developer.osdl.org/~jenny/pgsql-optimizer/vacuum_verbose_analyze_suppler\n> . SELECT attname, null_frac, avg_witdh, n_distinct, correlation\n> FROM pg_stats\n> WHERE tablename = 'partsupp' AND attname IN ('ps_suppkey', ...);\n> \n> Please insert other interesting column names for ..., especially\n> those contained in i_ps_suppkey, if any.\n> \nI put all the related columns\nhttp://developer.osdl.org/~jenny/pgsql-optimizer/info_partsupp_col\n\n> . SELECT relname, relpages, reltuples\n> FROM pg_class\n> WHERE relname IN ('partsupp', 'supplier', ...);\n> ^^^\n> Add relevant index names here.\n> \nI put all the related tables\nhttp://developer.osdl.org/~jenny/pgsql-optimizer/info_table\n\n> . EXPLAIN ANALYSE\n> SELECT ps_partkey, ps_supplycost, ps_availqty\n> FROM partsupp, supplier\n> WHERE ps_suppkey = s_suppkey AND s_nationkey = '<youknowit>';\n> \n> The idea is to eliminate parts of the plan that are always the same.\n> Omitting nation is possibly to much a simplification. In this case\n> please re-add it.\n> Do this test for small and large effective_cache_size.\n> Force the use of other join methods by setting enable_<joinmethod>\n> to off. Post all results.\n> \nhttp://developer.osdl.org/~jenny/pgsql-optimizer/explain_query_mk\n> \n> Jenny, I understand that this long message contains more questions\n> than answers and is not of much help for you. OTOH your tests might\n> be very helpful for Postgres development ...\nLet me know if you need anything else\n\nJenny\n\n",
"msg_date": "Fri, 19 Sep 2003 11:35:35 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
{
"msg_contents": "On Thu, 2003-09-18 at 20:20, Tom Lane wrote:\n> Jenny Zhang <[email protected]> writes:\n> > ... It seems to me that small \n> > effective_cache_size favors the choice of nested loop joins (NLJ) \n> > while the big effective_cache_size is in favor of merge joins (MJ). \n> \n> No, I wouldn't think that, because a nestloop plan will involve repeated\n> fetches of the same tuples whereas a merge join doesn't (at least not\n> when it sorts its inner input, as this plan does). Larger cache\n> improves the odds of a repeated fetch not having to do I/O. In practice\n> a larger cache area would also have some effects on access costs for the\n> sort's temp file, but I don't think the planner's cost model for sorting\n> takes that into account.\nI think there is some misunderstanding here. What I meant to say is:\n>From the plans we got, the optimizer favors the choice of nested loop\njoins (NLJ) while the big effective_cache_size is in favor of merge\njoins (MJ). Which we think is not appropriate. We verified that\nsort_mem has no impact on the plans. Though it would be nice to take\nthat into account.\n> \n> As Matt Clark points out nearby, the real question is whether these\n> planner estimates have anything to do with reality. EXPLAIN ANALYZE\n> results would be far more interesting than plain EXPLAIN.\n> \n> > However, within the same run set consist of 6 runs, we see 2-3% \n> > standard deviation for the run metrics associated with the multiple\n> > stream part of the test (as opposed to the single stream part).\n> \n> <python> Och, laddie, we useta *dream* of 2-3% variation </python>\n> \nBTW, I am a she :-)\n> > We would like to reduce the variation to be less than 1% so that a \n> > 2% change between two different kernels would be significant. \n> \n> I think this is a pipe dream. Variation in where the data gets laid\n> down on your disk drive would alone create more than that kind of delta.\n> I'm frankly amazed you could get repeatability within 2-3%.\n> \nGreg is right. The repeatability is due to the aggregate results for a\nwhole test run. As for individual query, the power test(single stream)\nis very consistent, and the throughput test(multiple streams), any given\nquery execution time varies up to 15% if no swapping. If we set\nsort_mem too high and swapping occurs, the variation is bigger.\n\nJenny\n\n",
"msg_date": "Fri, 19 Sep 2003 14:35:41 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
{
"msg_contents": "On Fri, 2003-09-19 at 06:12, Greg Stark wrote:\n> Tom Lane <[email protected]> writes:\n> \n> > I think this is a pipe dream. Variation in where the data gets laid\n> > down on your disk drive would alone create more than that kind of delta.\n> > I'm frankly amazed you could get repeatability within 2-3%.\n> \n> I think the reason he gets good repeatability is because he's talking about\n> the aggregate results for a whole test run. Not individual queries. In theory\n> you could just run the whole test multiple times. The more times you run it\n> the lower the variation in the total run time would be.\n> \nThat is right. The repeatability is due to the aggregate results for a\nwhole test run. As for individual query, the power test(single stream)\nis very consistent, and the throughput test(multiple streams), any given\nquery execution time varies up to 15% if no swapping. If we set\nsort_mem too high and swapping occurs, the variation is bigger.\n\n> Actually, the variation in run time is also a useful statistic, both for\n> postgres and the kernel. It might be useful to do multiple complete runs and\n> keep track of the average standard deviation of the time required for each\n> step.\n> \nI created a page with the execution time(in seconds), average, and\nstddev for each query and each steps. The data is collected from 6 dbt3\nruns. \nhttp://developer.osdl.org/~jenny/pgsql-optimizer/exetime.html\n\n> Higher standard deviation implies queries can't be reliably depended on not to\n> take inordinately long, which can be a problem for some working models. For\n> the kernel it could mean latency issues or it could mean the swapper or buffer\n> cache was overly aggressive.\nI agree. I can think of another reason why the performance varies even\nthe swapper and buffer cache is not overly aggressive. Since PG depends\non OS to manage the buffer cache(correct me if I am wrong), it is up to\nOS to decide what to keep in the cache. And OS can not anticipate what\nis likely needed next.\n\nThanks,\nJenny\n\n",
"msg_date": "Fri, 19 Sep 2003 16:26:54 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
{
"msg_contents": "On Fri, 19 Sep 2003 11:35:35 -0700, Jenny Zhang <[email protected]>\nwrote:\n>I posted more results as you requested:\n\nUnfortunately they only confirm what I suspected earlier:\n\n>> 2) -> Index Scan using i_ps_suppkey on partsupp\n>> (cost=0.00..323.16 rows=80 width=34)\n>> (actual time=0.16..2.98 rows=80 loops=380)\n>> ctr=108.44\n\n>> the planner does not\n>> account for additional index scans hitting pages in the cache that\n>> have been brought in by preceding scans. This is a known problem\n\nPF1 = estimated number of page fetches for one loop ~ 320\nL = estimated number of loops ~ 400\nP = number of pages in relation ~ 21000\n\nCutting down the number of heap page fetches if PF1 * L > P and P <\neffective_cache_size seems like an obvious improvement, but I was not\nable to figure out where to make this change. Maybe it belongs into\ncostsize.c near\n\n\trun_cost += outer_path_rows *\n\t\t(inner_path->total_cost - inner_path->startup_cost) *\n\t\tjoininfactor;\n\nin cost_nestloop() or it should be pushed into the index cost\nestimation functions. Hackers?\n\nFor now you have to keep lying about effective_cache_size to make the\nplanner overestimate merge joins to compensate for the planner's\noverestimation of nested loops. Sorry for having no better answer.\n\nServus\n Manfred\n",
"msg_date": "Wed, 24 Sep 2003 11:14:19 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: osdl-dbt3 run results - puzzled by the execution"
},
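To make the overcharge concrete with the numbers quoted above:

    PF1 * L = 320 * 400 = 128,000 heap page fetches charged across all loops
    P       =  21,000 pages actually in partsupp

So the planner bills roughly six times more partsupp I/O than the table even contains, although after the first few hundred loops most of those pages would already be sitting in a 5 GB cache; that is the nested-loop overestimation being described.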
{
"msg_contents": "Manfred Koizar <[email protected]> writes:\n> Cutting down the number of heap page fetches if PF1 * L > P and P <\n> effective_cache_size seems like an obvious improvement, but I was not\n> able to figure out where to make this change. Maybe it belongs into\n> costsize.c near\n> \trun_cost += outer_path_rows *\n> \t\t(inner_path->total_cost - inner_path->startup_cost) *\n> \t\tjoininfactor;\n\nI've been intending for some time to try to restructure the cost\nestimator so that repeated indexscans can be costed more accurately.\nWithin the context of the heap-fetch-estimating algorithm, I think\nthe entire execution of a nestloop-with-inner-index-scan could probably\nbe treated as a single scan. I'm not sure how we adjust the estimates\nfor the index-access part, though clearly those are too high as well.\n\nThis doesn't seem to be a localized change unfortunately. Certainly\ncostsize.c can't do it alone.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Sep 2003 10:01:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] osdl-dbt3 run results - puzzled by the execution "
}
] |
[
{
"msg_contents": "Hi-\n\nI have a table- called \"event\" with a field event_date_time that is indexed.\nThere are 1,700,000 rows in the table and 92,000 distinct values of\nevent_date_time with anywhere from 1 to 400 rows sharing the same value. (I\ndid a count grouped by event_date_time & scanned it to get this info.)\n\nWhen I look at the pg_stats on this table, I always see 15,000 or lower in\nthe n_distinct column for event_date_time. (I re-ran analyze several times &\nthen checked pg_stats to see if the numbers varied significantly.)\n\nSince this is off by about a factor of 6, I think the planner is missing the\nchance to use this table as the \"driver\" in a complex query plan that I'm\ntrying to optimize.\n\nSo the question is- how can I get a better estimate of n_distinct from\nanalyze?\n\nIf I alter the stats target as high as it will go, I get closer, but it\nstill shows the index to be about 1/2 as selective as it actually is:\n\nalpha=# alter table event alter column event_date_time set statistics 1000;\nALTER TABLE\nalpha=# analyze event;\nANALYZE\nalpha=# select n_distinct from pg_stats where tablename='event' and\nattname='event_date_time';\n n_distinct\n------------\n 51741\n(1 row)\n\nThis number seems to be consistently around 51,000 if I re-run analyze a few\ntimes.\n\nI guess my question is two-part:\n\n(1)Is there any tweak to make this estimate work better?\n\n(2)Since I'm getting numbers that are consistent but way off, is there a bug\nhere?\n\n(2-1/2) Or alternately, am I totally missing what n-distinct is supposed to\ndenote?\n\nThanks!\n -Nick\n\n---------------------------------------------------------------------\nNick Fankhauser\n\n [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\ndoxpop - Court records at your fingertips - http://www.doxpop.com/\n\n\n",
"msg_date": "Mon, 22 Sep 2003 15:42:27 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to make n_distinct more accurate. "
},
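For anyone wanting to reproduce the comparison Nick describes, the estimate and the true value can be put side by side like this (the table and column names are the ones from his post):

    SELECT count(DISTINCT event_date_time) AS actual_distinct FROM event;
    SELECT n_distinct FROM pg_stats
     WHERE tablename = 'event' AND attname = 'event_date_time';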
{
"msg_contents": "\nThe performance list seemed to be off-line for a while, so I posed the same\nquestion on the admin list and Tom Lane has been helping in that forum.\n\n-Nick\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Nick\n> Fankhauser\n> Sent: Monday, September 22, 2003 3:42 PM\n> To: Pgsql-Performance@Postgresql. Org\n> Subject: [PERFORM] How to make n_distinct more accurate.\n>\n>\n> Hi-\n>\n> I have a table- called \"event\" with a field event_date_time that\n> is indexed.\n> There are 1,700,000 rows in the table and 92,000 distinct values of\n> event_date_time with anywhere from 1 to 400 rows sharing the same\n> value. (I\n> did a count grouped by event_date_time & scanned it to get this info.)\n>\n> When I look at the pg_stats on this table, I always see 15,000 or lower in\n> the n_distinct column for event_date_time. (I re-ran analyze\n> several times &\n> then checked pg_stats to see if the numbers varied significantly.)\n>\n> Since this is off by about a factor of 6, I think the planner is\n> missing the\n> chance to use this table as the \"driver\" in a complex query plan that I'm\n> trying to optimize.\n>\n> So the question is- how can I get a better estimate of n_distinct from\n> analyze?\n>\n> If I alter the stats target as high as it will go, I get closer, but it\n> still shows the index to be about 1/2 as selective as it actually is:\n>\n> alpha=# alter table event alter column event_date_time set\n> statistics 1000;\n> ALTER TABLE\n> alpha=# analyze event;\n> ANALYZE\n> alpha=# select n_distinct from pg_stats where tablename='event' and\n> attname='event_date_time';\n> n_distinct\n> ------------\n> 51741\n> (1 row)\n>\n> This number seems to be consistently around 51,000 if I re-run\n> analyze a few\n> times.\n>\n> I guess my question is two-part:\n>\n> (1)Is there any tweak to make this estimate work better?\n>\n> (2)Since I'm getting numbers that are consistent but way off, is\n> there a bug\n> here?\n>\n> (2-1/2) Or alternately, am I totally missing what n-distinct is\n> supposed to\n> denote?\n>\n> Thanks!\n> -Nick\n>\n> ---------------------------------------------------------------------\n> Nick Fankhauser\n>\n> [email protected] Phone 1.765.965.7363 Fax 1.765.962.9788\n> doxpop - Court records at your fingertips - http://www.doxpop.com/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Tue, 23 Sep 2003 21:32:24 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to make n_distinct more accurate. "
}
] |
[
{
"msg_contents": "Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.\n\nAny thoughts on what might have happened?\n\n-Garrett Bladow\n\n",
"msg_date": "Tue, 23 Sep 2003 19:24:26 -0500 (CDT)",
"msg_from": "Garrett Bladow <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE query running slow"
},
{
"msg_contents": "Garrett,\n\n> Recently we upgraded the RAM in our server. After the install a LIKE query \nthat used to take 5 seconds now takes 5 minutes. We have tried the usual \nsuspects, VACUUM, ANALYZE and Re-indexing.\n> \n> Any thoughts on what might have happened?\n\nBad RAM? Have you tested it?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 23 Sep 2003 17:35:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
},
{
"msg_contents": "On Tue, 23 Sep 2003, Josh Berkus wrote:\n\n> Garrett,\n> \n> > Recently we upgraded the RAM in our server. After the install a LIKE query \n> that used to take 5 seconds now takes 5 minutes. We have tried the usual \n> suspects, VACUUM, ANALYZE and Re-indexing.\n> > \n> > Any thoughts on what might have happened?\n> \n> Bad RAM? Have you tested it?\n\nRAM was tested and is good.\n\n",
"msg_date": "Tue, 23 Sep 2003 19:41:13 -0500 (CDT)",
"msg_from": "Garrett Bladow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
},
{
"msg_contents": "On Tue, 2003-09-23 at 20:24, Garrett Bladow wrote:\n> Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.\n> \n> Any thoughts on what might have happened?\n\nWhat settings did you change at that time?\n\nCare to share an EXPLAIN ANALYZE with us?",
"msg_date": "Tue, 23 Sep 2003 20:53:57 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
},
{
"msg_contents": "On Tue, 23 Sep 2003, Garrett Bladow wrote:\n\n> Recently we upgraded the RAM in our server. After the install a LIKE\n> query that used to take 5 seconds now takes 5 minutes. We have tried the\n> usual suspects, VACUUM, ANALYZE and Re-indexing.\n\nIf you mean that you reinstalled postgresql then it's probably because you\nbefore run the database with the \"C\" locale but now you run it with\nsomething else.\n\nIf all you did was to install the extra memory then I don't see how that\ncan affect it at all (especially so if you have not altered\npostgresql.conf to make use of more memory).\n\n-- \n/Dennis\n\n",
"msg_date": "Wed, 24 Sep 2003 07:20:44 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
},
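A quick way to see the locale effect Dennis describes, using an illustrative table and column name rather than Garrett's real schema:

    EXPLAIN SELECT * FROM t WHERE textcol LIKE 'abc%';

In a cluster initialized with the "C" locale this prefix search can use a plain btree index on textcol; under most other locales it cannot, and the planner falls back to a sequential scan, which would be consistent with the 5 seconds versus 5 minutes symptom.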
{
"msg_contents": "Garrett Bladow wrote:\n\n> Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.\n> \n> Any thoughts on what might have happened?\n\nWhat all tuning you have done? Have you set effective cache size to take care of \nadditional RAM.\n\nJust check out.\n\n Shridhar\n\n",
"msg_date": "Wed, 24 Sep 2003 11:36:02 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
},
{
"msg_contents": "On Tue, 23 Sep 2003, Garrett Bladow wrote:\n\n> Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.\n>\n> Any thoughts on what might have happened?\n>\nDid you reload the db? If you did perhaps you didn't use the \"C\" locale?\nThat can cause a huge slowdown.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 24 Sep 2003 08:12:17 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE query running slow"
}
] |
[
{
"msg_contents": "Hello,\n\nI have been trying to get my Postgres database to do faster inserts.\n\nThe environment is basically a single user situation.\n\nThe part that I would like to speed up is when a User copys a Project.\nA Project consists of a number of Rooms(say 60). Each room contains a \nnumber of items.\nA project will contain say 20,000 records.\n\nAnyway the copying process gets slower and slower, as more projects are \nadded to the database.\n\nMy statistics(Athlon 1.8Ghz)\n----------------\n20,000 items Takes on average 0.078seconds/room\n385,000 items Takes on average .11seconds/room\n690,000 items takes on average .270seconds/room\n1,028,000 items Takes on average .475seconds/room\n\nAs can be seen the time taken to process each room increases. A commit \noccurs when a room has been copied.\nThe hard drive is not being driven very hard. The hard drive light \nonly flashes about twice a second when there are a million records in \nthe database.\n\nI thought that the problem could have been my plpgsql procedure because \nI assume the code is interpreted.\nHowever I have just rewriten the code using straight sql(with some temp \nfields),\nand the times turn out to be almost exactly the same as the plpgsql \nversion.\n\nThe read speed for the Application is fine. The sql planner seems to be \ndoing a good job. There has been only one problem\nthat I have found with one huge select, which was fixed by a cross join.\n\n I am running Red hat 8. Some of my conf entries that I have changed \nfollow\nshared_buffers = 3700\neffective_cache_size = 4000\nsort_mem = 32168\n\nAre the increasing times reasonable?\nThe times themselves might look slow, but thats because there are a \nnumber of tables involved in a Copy\n\nI can increase the shared buffer sizes above 32M, but would this really \nhelp?\n\nTIA\n\npeter Mcgregor\n\n",
"msg_date": "Wed, 24 Sep 2003 17:48:15 +1200",
"msg_from": "peter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue"
},
{
"msg_contents": "get rid of any unnecessary indexes?\ni've found that droping indexes and re-creating them isn't usually worth the \neffort\n\nmount the disk with the noatime option which saves you the time involved in \nupdating the last access time on files\n\nmake sure you're doing all the inserts in one transaction.. wrapping a bunch \nof INSERTS in BEGIN & COMMIT speeds them up loads.\n\n\n\n\n> At 05:48 PM 9/24/2003 +1200, peter wrote:\n> >Hello,\n> >\n> >I have been trying to get my Postgres database to do faster inserts.\n> >\n> >The environment is basically a single user situation.\n> >\n> >The part that I would like to speed up is when a User copys a Project.\n> >A Project consists of a number of Rooms(say 60). Each room contains a\n> >number of items.\n> >A project will contain say 20,000 records.\n> >\n> >Anyway the copying process gets slower and slower, as more projects are\n> >added to the database.\n> >\n> >My statistics(Athlon 1.8Ghz)\n> >----------------\n> >20,000 items Takes on average 0.078seconds/room\n> >385,000 items Takes on average .11seconds/room\n> >690,000 items takes on average .270seconds/room\n> >1,028,000 items Takes on average .475seconds/room\n> >\n> >As can be seen the time taken to process each room increases. A commit\n> >occurs when a room has been copied.\n> >The hard drive is not being driven very hard. The hard drive light only\n> >flashes about twice a second when there are a million records in the\n> > database.\n> >\n> >I thought that the problem could have been my plpgsql procedure because I\n> >assume the code is interpreted.\n> >However I have just rewriten the code using straight sql(with some temp\n> >fields),\n> >and the times turn out to be almost exactly the same as the plpgsql\n> > version.\n> >\n> >The read speed for the Application is fine. The sql planner seems to be\n> >doing a good job. There has been only one problem\n> >that I have found with one huge select, which was fixed by a cross join.\n> >\n> > I am running Red hat 8. Some of my conf entries that I have changed\n> > follow shared_buffers = 3700\n> >effective_cache_size = 4000\n> >sort_mem = 32168\n> >\n> >Are the increasing times reasonable?\n> >The times themselves might look slow, but thats because there are a number\n> >of tables involved in a Copy\n> >\n> >I can increase the shared buffer sizes above 32M, but would this really\n> > help?\n> >\n> >TIA\n> >\n> >peter Mcgregor\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 4: Don't 'kill -9' the postmaster\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Wed, 24 Sep 2003 18:05:23 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue"
},
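The transaction batching Richard recommends, in sketch form; the table and columns are placeholders rather than the poster's actual schema:

    BEGIN;
    INSERT INTO item (room_id, name) VALUES (1, 'chair');
    INSERT INTO item (room_id, name) VALUES (1, 'desk');
    -- ... the rest of the inserts for the room (or the whole project) ...
    COMMIT;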
{
"msg_contents": "Peter,\n\nOne possibility is to drop all the indexes, do the insert and re-add the \nindexes.\n\nThe more indexes that exist and the more rows that exist, the more costly \nthe insert.\n\nRegards,\n\nJoseph\n\nAt 05:48 PM 9/24/2003 +1200, peter wrote:\n>Hello,\n>\n>I have been trying to get my Postgres database to do faster inserts.\n>\n>The environment is basically a single user situation.\n>\n>The part that I would like to speed up is when a User copys a Project.\n>A Project consists of a number of Rooms(say 60). Each room contains a \n>number of items.\n>A project will contain say 20,000 records.\n>\n>Anyway the copying process gets slower and slower, as more projects are \n>added to the database.\n>\n>My statistics(Athlon 1.8Ghz)\n>----------------\n>20,000 items Takes on average 0.078seconds/room\n>385,000 items Takes on average .11seconds/room\n>690,000 items takes on average .270seconds/room\n>1,028,000 items Takes on average .475seconds/room\n>\n>As can be seen the time taken to process each room increases. A commit \n>occurs when a room has been copied.\n>The hard drive is not being driven very hard. The hard drive light only \n>flashes about twice a second when there are a million records in the database.\n>\n>I thought that the problem could have been my plpgsql procedure because I \n>assume the code is interpreted.\n>However I have just rewriten the code using straight sql(with some temp \n>fields),\n>and the times turn out to be almost exactly the same as the plpgsql version.\n>\n>The read speed for the Application is fine. The sql planner seems to be \n>doing a good job. There has been only one problem\n>that I have found with one huge select, which was fixed by a cross join.\n>\n> I am running Red hat 8. Some of my conf entries that I have changed follow\n>shared_buffers = 3700\n>effective_cache_size = 4000\n>sort_mem = 32168\n>\n>Are the increasing times reasonable?\n>The times themselves might look slow, but thats because there are a number \n>of tables involved in a Copy\n>\n>I can increase the shared buffer sizes above 32M, but would this really help?\n>\n>TIA\n>\n>peter Mcgregor\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Wed, 24 Sep 2003 13:09:34 -0400",
"msg_from": "Joseph Bove <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue"
},
{
"msg_contents": "> My statistics(Athlon 1.8Ghz)\n> ----------------\n> 20,000 items Takes on average 0.078seconds/room\n> 385,000 items Takes on average .11seconds/room\n> 690,000 items takes on average .270seconds/room\n> 1,028,000 items Takes on average .475seconds/room\n[snip]\n> I am running Red hat 8. Some of my conf entries that I have changed \n> follow\n> shared_buffers = 3700\n> effective_cache_size = 4000\n> sort_mem = 32168\n\nHave you twiddled with your wal_buffers or checkpoint_segments? Might\nbe something to look at.\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 24 Sep 2003 10:47:21 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue"
},
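The two knobs Sean mentions, written as postgresql.conf lines; the defaults in the comments are the shipped ones, while the raised values are only illustrative starting points rather than recommendations from the thread:

    wal_buffers = 16            # default is 8 (counted in 8 kB disk pages)
    checkpoint_segments = 10    # default is 3; each WAL segment file is 16 MB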
{
"msg_contents": "> 20,000 items Takes on average 0.078seconds/room\n> 385,000 items Takes on average .11seconds/room\n> 690,000 items takes on average .270seconds/room\n> 1,028,000 items Takes on average .475seconds/room\n> \n> As can be seen the time taken to process each room increases. A commit \n> occurs when a room has been copied.\n\nIt probably isn't the insert that is getting slower, but a select. \nForeign keys to growing tables will exhibit this behaviour.\n\nSince the time is doubling with the number of items, you might want to\ncheck for a SELECT working with a sequential scan rather than an index\nscan.",
"msg_date": "Wed, 24 Sep 2003 14:35:50 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue"
}
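A sketch of the kind of check Rod is suggesting; all names are placeholders, and whether an extra index helps depends on which lookup actually turns out to be the sequential scan:

    EXPLAIN SELECT 1 FROM item WHERE room_id = 42;    -- look for Seq Scan vs Index Scan
    CREATE INDEX item_room_id_idx ON item (room_id);  -- only if that probe column is unindexed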
] |
[
{
"msg_contents": "Hi,\n\nI have a table containing columns:\n\n \"END_DATE\" timestamptz NOT NULL\n \"REO_ID\" int4 NOT NULL\n\nand i am indexed \"REO_ID\" coulumn.\nI have a query:\n\nselect \"REO_ID\", \"END_DATE\" from \"PRIORITY_STATISTICS\" where \"REO_ID\" IN\n('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'112915'\n,'112924' ,'112939' ,'112947' ,'112960' ,'112984' ,'112999' ,'113013'\n,'113032' ,'113059' ,'113067' ,'113084' ,'113096' ,'113103' ,'113110'\n,'113117' ,'113125' ,'113132' ,'113139' ,'113146' ,'113153' ,'113160'\n,'113167' ,'113174' ,'113181' ,'113188' ,'113195' ,'113204' ,'113268'\n,'113279' ,'113294' ,'113302' ,'113317' ,'113340' ,'113358' ,'113385'\n,'113404' ,'113412' ,'113419' ,'113429' ,'113436' ,'113443' ,'113571'\n,'113636' ,'113649' ,'113689' ,'113705' ,'113744' ,'113755' ,'113724'\n,'113737' ,'113812' ,'113828' ,'113762' ,'113842' ,'113869' ,'113925'\n,'113976' ,'114035' ,'114044' ,'114057' ,'114070' ,'114084' ,'114094'\n,'114119' )\n\nand it is _not_ using that index\n\nBut following query (notice there are less id-s in WHERE clause, but rest is\nsame)\n\nselect \"REO_ID\", \"END_DATE\" from \"PRIORITY_STATISTICS\" where \"REO_ID\" IN\n('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'112915'\n,'112924' ,'112939' ,'112947' ,'112960' ,'112984' ,'112999' ,'113013'\n,'113032' ,'113059' ,'113067' ,'113084' ,'113096' ,'113103' ,'113110'\n,'113117' ,'113125' ,'113132' ,'113139' ,'113146' ,'113153' ,'113160'\n,'113167' ,'113174' ,'113181' ,'113188' ,'113195' ,'113204' ,'113268'\n,'113279' ,'113294' ,'113302' ,'113317' ,'113340' ,'113358' ,'113385'\n,'113404' ,'113412' ,'113419' ,'113429' ,'113436' ,'113443' ,'113571'\n,'113636' ,'113649' ,'113689' ,'113705' ,'113744' ,'113755' ,'113724'\n,'113737' )\n\nwill _is_ using index:\n\nIndex Scan using PRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id on PRIORITY_STATISTICS (cost=0.00..394.06\nrows=102 width=12)\n\nWhat causes this behaviour? 
is there any workaround? Suggestions?\n\nbest,\nRigmor Ukuhe\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.515 / Virus Database: 313 - Release Date: 01.09.2003\n\n",
"msg_date": "Wed, 24 Sep 2003 13:09:37 +0300",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index problem"
},
{
"msg_contents": "> Hi,\n> \n> I have a table containing columns:\n> \n> \"END_DATE\" timestamptz NOT NULL\n> \"REO_ID\" int4 NOT NULL\n> \n> and i am indexed \"REO_ID\" coulumn.\n> I have a query:\n> \n> select \"REO_ID\", \"END_DATE\" from \"PRIORITY_STATISTICS\" where \"REO_ID\" IN\n> ('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'112915'\n> ,'112924' ,'112939' ,'112947' ,'112960' ,'112984' ,'112999' ,'113013'\n> ,'113032' ,'113059' ,'113067' ,'113084' ,'113096' ,'113103' ,'113110'\n> ,'113117' ,'113125' ,'113132' ,'113139' ,'113146' ,'113153' ,'113160'\n> ,'113167' ,'113174' ,'113181' ,'113188' ,'113195' ,'113204' ,'113268'\n> ,'113279' ,'113294' ,'113302' ,'113317' ,'113340' ,'113358' ,'113385'\n> ,'113404' ,'113412' ,'113419' ,'113429' ,'113436' ,'113443' ,'113571'\n> ,'113636' ,'113649' ,'113689' ,'113705' ,'113744' ,'113755' ,'113724'\n> ,'113737' ,'113812' ,'113828' ,'113762' ,'113842' ,'113869' ,'113925'\n> ,'113976' ,'114035' ,'114044' ,'114057' ,'114070' ,'114084' ,'114094'\n> ,'114119' )\n> \n> and it is _not_ using that index\n> \n> But following query (notice there are less id-s in WHERE clause, but rest is\n> same)\n> \n> select \"REO_ID\", \"END_DATE\" from \"PRIORITY_STATISTICS\" where \"REO_ID\" IN\n> ('112851' ,'112859' ,'112871' ,'112883' ,'112891' ,'112904' ,'112915'\n> ,'112924' ,'112939' ,'112947' ,'112960' ,'112984' ,'112999' ,'113013'\n> ,'113032' ,'113059' ,'113067' ,'113084' ,'113096' ,'113103' ,'113110'\n> ,'113117' ,'113125' ,'113132' ,'113139' ,'113146' ,'113153' ,'113160'\n> ,'113167' ,'113174' ,'113181' ,'113188' ,'113195' ,'113204' ,'113268'\n> ,'113279' ,'113294' ,'113302' ,'113317' ,'113340' ,'113358' ,'113385'\n> ,'113404' ,'113412' ,'113419' ,'113429' ,'113436' ,'113443' ,'113571'\n> ,'113636' ,'113649' ,'113689' ,'113705' ,'113744' ,'113755' ,'113724'\n> ,'113737' )\n> \n> will _is_ using index:\n\nWhy not. It's just because the second query is more selective. Probably \nyou don't have too many rows in your table and Postgres thinks it's \nbetter (faster) to use sequential scan than index one.\n\nRegards,\nTomasz Myrta\n\n",
"msg_date": "Wed, 24 Sep 2003 19:03:06 +0200",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index problem"
},
{
"msg_contents": "\n> What causes this behaviour? is there any workaround? Suggestions?\n>\n\nHow many rows are there in the table, and can you post the 'explain analyze' for both queries after doing a 'vacuum verbose analyze\n[tablename]'?\n\nCheers\n\nMatt\n\n\n",
"msg_date": "Wed, 24 Sep 2003 18:35:34 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index problem"
},
{
"msg_contents": "> > What causes this behaviour? is there any workaround? Suggestions?\n> >\n>\n> How many rows are there in the table, and can you post the\n> 'explain analyze' for both queries after doing a 'vacuum verbose analyze\n> [tablename]'?\n\nThere are about 2500 rows in that table.\n\n1st query explain analyze: Seq Scan on PRIORITY_STATISTICS\n(cost=0.00..491.44 rows=127 width=12) (actual time=98.58..98.58 rows=0\nloops=1)\nTotal runtime: 98.74 msec\n\n2nd query explain analyze: NOTICE: QUERY PLAN:\n\nIndex Scan using PRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\nPRIORITY_STATISTICS_reo_id on PRIORITY_STATISTICS (cost=0.00..394.06\nrows=102 width=12) (actual time=20.93..20.93 rows=0 loops=1)\nTotal runtime: 21.59 msec\n\nAny help?\n\nRigmor\n\n\n>\n> Cheers\n>\n> Matt\n>\n>\n>\n> ---\n> Incoming mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.515 / Virus Database: 313 - Release Date: 01.09.2003\n>\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.515 / Virus Database: 313 - Release Date: 01.09.2003\n\n",
"msg_date": "Thu, 25 Sep 2003 13:22:40 +0300",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index problem"
},
{
"msg_contents": "> There are about 2500 rows in that table.\n>\n> 1st query explain analyze: Seq Scan on PRIORITY_STATISTICS\n> (cost=0.00..491.44 rows=127 width=12) (actual time=98.58..98.58 rows=0\n> loops=1)\n> Total runtime: 98.74 msec\n>\n> 2nd query explain analyze: NOTICE: QUERY PLAN:\n>\n> Index Scan using PRIORITY_STATISTICS_reo_id, PRIORITY_STATISTICS_reo_id,\n[snip]\n> PRIORITY_STATISTICS_reo_id on PRIORITY_STATISTICS (cost=0.00..394.06\n> rows=102 width=12) (actual time=20.93..20.93 rows=0 loops=1)\n> Total runtime: 21.59 msec\n\nWith only 2500 rows the planner could be deciding that it's going to have to read every disk block to do an index scan anyway, so it\nmight as well do a sequential scan. If the pages are in fact in the kernel cache then the compute time will dominate, not the IO\ntime, so it ends up looking like a bad plan, but it's probably not really such a bad plan...\n\nIs your effective_cache_size set to something sensibly large?\n\nYou could also try decreasing cpu_index_tuple_cost and cpu_tuple_cost. These will affect all your queries though, so what you gain\non one might be lost on another.\n\nMatt\n\n\n",
"msg_date": "Thu, 25 Sep 2003 13:13:32 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index problem"
},
{
"msg_contents": "\"Rigmor Ukuhe\" <[email protected]> writes:\n>>> What causes this behaviour? is there any workaround? Suggestions?\n\nAt some point the planner is going to decide that one seqscan is cheaper\nthan repeated indexscans. At some point it'll be right ... but in this\ncase it seems its relative cost estimates are off a bit. You might try\nreducing random_page_cost to bring them more into line with reality.\n(But keep in mind that the reality you are measuring appears to be\nsmall-table-already-fully-cached reality. On a large table you might\nfind that small random_page_cost isn't such a hot idea after all.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Sep 2003 09:38:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index problem "
}
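A per-session experiment along the lines Tom suggests; the value 2 is only an example (the shipped default is 4), and the IN list is abbreviated here:

    SET random_page_cost = 2;
    EXPLAIN ANALYZE
    SELECT "REO_ID", "END_DATE"
      FROM "PRIORITY_STATISTICS"
     WHERE "REO_ID" IN ('112851', '112859' /* ... the full list from above ... */);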
] |
[
{
"msg_contents": "All this talk of checkpoints got me wondering if I have them set at an\noptimum level on my production servers. I noticed the following in the\ndocs:\n\n \"There will be at least one 16 MB segment file, and will normally not\nbe more than 2 * checkpoint_segments + 1 files. You can use this to\nestimate space requirements for WAL. Ordinarily, when old log segment\nfiles are no longer needed, they are recycled (renamed to become the\nnext segments in the numbered sequence). If, due to a short-term peak of\nlog output rate, there are more than 2 * checkpoint_segments + 1 segment\nfiles, the unneeded segment files will be deleted instead of recycled\nuntil the system gets back under this limit.\" \n\nIn .conf file I have default checkpoints set to 3, but I noticed that in\nmy pg_xlog directory I always seem to have at least 8 log files. Since\nthis is more than the suggested 7, I'm wondering if this means I ought\nto bump my checkpoint segments up to 4? I don't really want to bump it\nup unnecessarily as quick recover time is important on this box, however\nif i would get an overall performance boost it seems like it would be\nworth it, and given that I seem to be using more than the default number\nanyways... I've always treated wal logs as self maintaining, am I over\nanalyzing this?\n\nAnother thought popped into my head, is it just coincidence that I\nalways seem to have 8 files and that wal_buffers defaults to 8? Seems\nlike it's not but I love a good conspiracy theory.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "24 Sep 2003 17:24:14 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": true,
"msg_subject": "upping checkpoints on production server"
},
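For reference, the arithmetic behind the observation above, using the 16 MB segment size from the quoted documentation:

    ceiling implied by checkpoint_segments = 3:  2*3 + 1 = 7 files, about 112 MB
    files actually seen in pg_xlog:              8 files, about 128 MB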
{
"msg_contents": "Robert Treat <[email protected]> writes:\n> In .conf file I have default checkpoints set to 3, but I noticed that in\n> my pg_xlog directory I always seem to have at least 8 log files. Since\n> this is more than the suggested 7, I'm wondering if this means I ought\n> to bump my checkpoint segments up to 4?\n\nHm. What is the typical delta in the mod times of the log files? It\nsounds like you are in a regime where checkpoints are always triggered\nby checkpoint_segments and never by checkpoint_timeout, in which case\nincreasing the former might be a good idea. Or decrease the latter,\nbut that could put a drag on performance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Sep 2003 17:57:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: upping checkpoints on production server "
},
{
"msg_contents": "On Wed, 2003-09-24 at 17:57, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > In .conf file I have default checkpoints set to 3, but I noticed that in\n> > my pg_xlog directory I always seem to have at least 8 log files. Since\n> > this is more than the suggested 7, I'm wondering if this means I ought\n> > to bump my checkpoint segments up to 4?\n> \n> Hm. What is the typical delta in the mod times of the log files? It\n> sounds like you are in a regime where checkpoints are always triggered\n> by checkpoint_segments and never by checkpoint_timeout, in which case\n> increasing the former might be a good idea. Or decrease the latter,\n> but that could put a drag on performance.\n> \n\n# ls -lht /var/lib/pgsql/data/pg_xlog/\ntotal 129M\n-rw------- 1 postgres postgres 16M Sep 25 11:12 0000006E00000059\n-rw------- 1 postgres postgres 16M Sep 25 11:12 0000006E0000005A\n-rw------- 1 postgres postgres 16M Sep 25 11:08 0000006E00000058\n-rw------- 1 postgres postgres 16M Sep 25 11:05 0000006E0000005F\n-rw------- 1 postgres postgres 16M Sep 25 11:02 0000006E0000005E\n-rw------- 1 postgres postgres 16M Sep 25 10:59 0000006E0000005D\n-rw------- 1 postgres postgres 16M Sep 25 10:55 0000006E0000005B\n-rw------- 1 postgres postgres 16M Sep 25 10:51 0000006E0000005C\n\n#ls -lht /var/lib/pgsql/data/pg_xlog/\ntotal 129M\n-rw------- 1 postgres postgres 16M Sep 25 10:52 0000006E00000054\n-rw------- 1 postgres postgres 16M Sep 25 10:51 0000006E00000053\n-rw------- 1 postgres postgres 16M Sep 25 10:49 0000006E00000052\n-rw------- 1 postgres postgres 16M Sep 25 10:45 0000006E00000059\n-rw------- 1 postgres postgres 16M Sep 25 10:40 0000006E00000057\n-rw------- 1 postgres postgres 16M Sep 25 10:37 0000006E00000058\n-rw------- 1 postgres postgres 16M Sep 25 10:33 0000006E00000056\n-rw------- 1 postgres postgres 16M Sep 25 10:29 0000006E00000055\n\n\n\n\n\n\nfrom the 7.4 docs:\n\n \"Checkpoints are fairly expensive because they force all dirty kernel\nbuffers to disk using the operating system sync() call. Busy servers may\nfill checkpoint segment files too quickly, causing excessive\ncheckpointing.\" \n\nit goes on to mention checkpoint_warning, which I don't have in 7.3, but\nI think this is a case where I'd likely see those warnings. The server\nin question has a fairly high write/read ratio and is fairly busy (over\n100 tps iirc). \n\nsince more often than not I don't make it to 5 minutes, seems like\nupping checkpoint segments is the way to go, right?\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "25 Sep 2003 11:23:12 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: upping checkpoints on production server"
}
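A quick way to see the two knobs Tom and Robert are trading off, from any psql session (the values in the comments are the 7.3 defaults; the suggestion to raise checkpoint_segments to 8 or 10 is only illustrative, not something the thread settled on):

    SHOW checkpoint_segments;   -- default 3; each WAL segment is a 16 MB file in pg_xlog
    SHOW checkpoint_timeout;    -- default 300 seconds
    -- Raising checkpoint_segments in postgresql.conf (e.g. checkpoint_segments = 10)
    -- means checkpoints are triggered less often by WAL volume, at the price of more
    -- pg_xlog disk space and a somewhat longer crash recovery.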
] |
[
{
"msg_contents": "Hi!\n\nA performance question:\n\nI have some tables:\n\n Tabell \"public.person\"\n Kolumn | Typ | Modifierare\n------------------+--------------------------+---------------\n userid | text | not null\n giver | text |\n first_name | text |\n last_name | text |\n email | text |\n default_language | text | default 'sv'\n created | timestamp with time zone | default now()\n created_by | text |\nIndex: person_pkey primärnyckel btree (userid),\n person_index unik btree (userid),\n person_giver_idx btree (giver)\nFrämmande nyckel-villkor: pp_fk9 FOREIGN KEY (giver) REFERENCES \nproviders(giver) ON UPDATE CASCADE ON DELETE CASCADE,\n pp_fk2 FOREIGN KEY (created_by) REFERENCES \nperson(userid) ON UPDATE CASCADE ON DELETE SET NULL\n\n\n Tabell \"public.wiol\"\n Kolumn | Typ | Modifierare\n-----------------+-----------------------------+---------------\n userid | text | not null\n course_id | integer |\n login_ts | timestamp without time zone | default now()\n latest_event_ts | timestamp without time zone | default now()\nFrämmande nyckel-villkor: pp_fk2 FOREIGN KEY (course_id) REFERENCES \ncourse(id) ON UPDATE CASCADE ON DELETE CASCADE,\n pp_fk1 FOREIGN KEY (userid) REFERENCES \nperson(userid) ON UPDATE CASCADE ON DELETE CASCADE\n\nand a view:\n\n Vy \"public.person_wiol_view\"\n Kolumn | Typ | Modifierare\n------------------+--------------------------+-------------\n userid | text |\n giver | text |\n first_name | text |\n last_name | text |\n email | text |\n default_language | text |\n created | timestamp with time zone |\n created_by | text |\n course_id | integer |\nVydefinition: SELECT p.userid, p.giver, p.first_name, p.last_name, p.email, \np.default_language, p.created, p.created_by, w.course_id FROM (person p \nLEFT JOIN wiol w ON ((p.userid = w.userid)));\n\n\nNow, with about 30000 tuples in person and about 40 in wiol, executing a \nleft outer join with the view gives horrible performance:\n\n explain analyze select p.pim_id, p.recipient, p.sender, p.message, p.ts, \np.type, case when sender.userid is not null then sender.first_name || ' ' \n|| sender.last_name else null end as sender_name, sender.course_id is not \nnull as is_online from pim p left outer join person_wiol_view sender on \n(sender.userid = p.sender) where p.recipient = 'axto6551' and p.type >= 0 \nlimit 1;\n QUERY PLAN\n---------------------------------------------------------------------------\n----------------------------------------------------------\n Limit (cost=0.00..1331.26 rows=1 width=180) (actual time=866.14..1135.65 \nrows=1 loops=1)\n -> Nested Loop (cost=0.00..1331.26 rows=1 width=180) (actual \ntime=866.13..1135.63 rows=2 loops=1)\n Join Filter: (\"inner\".userid = \"outer\".sender)\n -> Seq Scan on pim p (cost=0.00..0.00 rows=1 width=112) (actual \ntime=0.05..0.18 rows=2 loops=1)\n Filter: ((recipient = 'axto6551'::text) AND (\"type\" >= 0))\n -> Materialize (cost=956.15..956.15 rows=30009 width=68) (actual \ntime=369.33..437.86 rows=22045 loops=2)\n -> Hash Join (cost=0.00..956.15 rows=30009 width=68) \n(actual time=0.45..605.21 rows=30013 loops=1)\n Hash Cond: (\"outer\".userid = \"inner\".userid)\n -> Seq Scan on person p (cost=0.00..806.09 \nrows=30009 width=32) (actual time=0.16..279.28 rows=30009 loops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=36) (actual \ntime=0.13..0.13 rows=0 loops=1)\n -> Seq Scan on wiol w (cost=0.00..0.00 rows=1 \nwidth=36) (actual time=0.02..0.09 rows=8 loops=1)\n Total runtime: 1143.93 msec\n(12 rader)\n\n\nbut rewriting the question with an explicit join 
uses the indices, and runs \n*much* faster:\n\nexplain analyze select p.pim_id, p.recipient, p.sender, p.message, p.ts, \np.type, case when sender.userid is not null then sender.first_name || ' ' \n|| sender.last_name else null end as sender_name, w.course_id is not null \nas is_online from pim p left outer join person sender on (sender.userid = \np.sender) left join wiol w on (w.userid=sender.userid) where p.recipient = \n'axto6551' and p.type >= 0 limit 1;\n QUERY PLAN\n---------------------------------------------------------------------------\n-----------------------------------------------------------------\n Limit (cost=0.00..6.03 rows=1 width=180) (actual time=0.89..1.13 rows=1 \nloops=1)\n -> Hash Join (cost=0.00..6.03 rows=1 width=180) (actual \ntime=0.88..1.12 rows=2 loops=1)\n Hash Cond: (\"outer\".userid = \"inner\".userid)\n -> Nested Loop (cost=0.00..6.02 rows=1 width=144) (actual \ntime=0.48..0.69 rows=2 loops=1)\n -> Seq Scan on pim p (cost=0.00..0.00 rows=1 width=112) \n(actual time=0.04..0.16 rows=2 loops=1)\n Filter: ((recipient = 'axto6551'::text) AND (\"type\" >= \n0))\n -> Index Scan using person_pkey on person sender \n(cost=0.00..6.01 rows=1 width=32) (actual time=0.23..0.24 rows=1 loops=2)\n Index Cond: (sender.userid = \"outer\".sender)\n -> Hash (cost=0.00..0.00 rows=1 width=36) (actual \ntime=0.22..0.22 rows=0 loops=1)\n -> Seq Scan on wiol w (cost=0.00..0.00 rows=1 width=36) \n(actual time=0.12..0.17 rows=8 loops=1)\n Total runtime: 1.39 msec\n(11 rader)\n\n\n\nTests run on postgresql-7.3.4.\n\nMain question is, is it bad SQL to join with a view, or is it postgresql \nthat does something not quite optimal? If the latter, is it fixed in 7.4?\n\nThanks,\nPalle\n\n\n\n",
"msg_date": "Thu, 25 Sep 2003 14:36:53 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance hit when joining with a view?"
},
{
"msg_contents": "Palle Girgensohn <[email protected]> writes:\n> Vydefinition: SELECT p.userid, p.giver, p.first_name, p.last_name, p.email, \n> p.default_language, p.created, p.created_by, w.course_id FROM (person p \n> LEFT JOIN wiol w ON ((p.userid = w.userid)));\n\n> explain analyze select p.pim_id, p.recipient, p.sender, p.message, p.ts, \n> p.type, case when sender.userid is not null then sender.first_name || ' ' \n> || sender.last_name else null end as sender_name, sender.course_id is not \n> null as is_online from pim p left outer join person_wiol_view sender on \n> (sender.userid = p.sender) where p.recipient = 'axto6551' and p.type >= 0 \n> limit 1;\n\n> explain analyze select p.pim_id, p.recipient, p.sender, p.message, p.ts, \n> p.type, case when sender.userid is not null then sender.first_name || ' ' \n> || sender.last_name else null end as sender_name, w.course_id is not null \n> as is_online from pim p left outer join person sender on (sender.userid = \n> p.sender) left join wiol w on (w.userid=sender.userid) where p.recipient = \n> 'axto6551' and p.type >= 0 limit 1;\n\nThese are not actually the same query. In the former case the implicit\nparenthesization of the joins is\n\tpim left join (person left join wiol)\nwhereas in the latter case the implicit parenthesization is left-to-right:\n\t(pim left join person) left join wiol\nSince the only restriction conditions you have provided are on pim, the\nfirst parenthesization implies forming the entire join of person and\nwiol :-(.\n\nIf you were using plain joins then the two queries would be logically\nequivalent, but outer joins are in general not associative, so the\nplanner will not consider re-ordering them.\n\nThere is some work in 7.4 to make the planner smarter about outer joins,\nbut offhand I don't think any of it will improve results for this\nparticular example.\n\nI have seen some academic papers about how to prove that a particular\npair of outer join operators can safely be swapped (as I think is true\nin this example). Some knowledge of that sort may eventually get into\nthe planner, but it ain't there now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Sep 2003 09:55:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance hit when joining with a view? "
}
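The two parenthesizations Tom describes can be spelled out explicitly against the tables from the thread; this is only a sketch to make the difference visible, not a tested rewrite:

    -- what the view-based query effectively asks for:
    -- pim LEFT JOIN (person LEFT JOIN wiol), so the whole person/wiol join is built first
    SELECT count(*)
    FROM pim p
    LEFT JOIN (person s LEFT JOIN wiol w ON w.userid = s.userid)
           ON s.userid = p.sender
    WHERE p.recipient = 'axto6551' AND p.type >= 0;

    -- the hand-written form: (pim LEFT JOIN person) LEFT JOIN wiol,
    -- which lets the restriction on pim drive an index lookup into person
    SELECT count(*)
    FROM pim p
    LEFT JOIN person s ON s.userid = p.sender
    LEFT JOIN wiol   w ON w.userid = s.userid
    WHERE p.recipient = 'axto6551' AND p.type >= 0;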
] |
[
{
"msg_contents": "have the table \"numbercheck\"\n\t Attribute | Type | Modifier\n\t-----------+------------+----------\n\t svcnumber | integer | not null\n\t svcqual | varchar(9) |\n\t svcequip | char(1) |\n\t svctroub | varchar(6) |\n\t svcrate | varchar(4) |\n\t svcclass | char(1) |\n\t trailer | varchar(3) |\n\tIndex: numbercheck_pkey\n\nalso have a csv file\n\t7057211380,Y,,,3,B\n\t7057216800,Y,,,3,B\n\t7057265038,Y,,,3,B\n\t7057370261,Y,,,3,B\n\t7057374613,Y,,,3,B\n\t7057371832,Y,,,3,B\n\t4166336554,Y,,,3,B\n\t4166336863,Y,,,3,B\n\t7057201148,Y,,,3,B\n\naside from parsing the csv file through a PHP interface, what isthe easiest way\nto get that csv data importted into the postgres database. thoughts?\n\nthanks\n\nDave\n\n\n",
"msg_date": "Thu, 25 Sep 2003 12:38:07 -0400",
"msg_from": "\"Dave [Hawk-Systems]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "populate table with large csv file"
},
{
"msg_contents": "Dave [Hawk-Systems] wrote:\n> aside from parsing the csv file through a PHP interface, what isthe easiest way\n> to get that csv data importted into the postgres database. thoughts?\n> \n\nsee COPY:\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=sql-copy.html\n\nJoe\n\n",
"msg_date": "Thu, 25 Sep 2003 09:43:41 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: populate table with large csv file"
},
{
"msg_contents": "Dave [Hawk-Systems] wrote:\n\n> aside from parsing the csv file through a PHP interface, what isthe easiest way\n> to get that csv data importted into the postgres database. thoughts?\n\nAssuming the CSV file data is well formed, use psql and\nthe COPY command.\n\nIn psql, create the table. Then issue command:\n\ncopy <tablename> from 'filename' using delimiters ',';\n-- \nP. J. \"Josh\" Rovero Sonalysts, Inc.\nEmail: [email protected] www.sonalysts.com 215 Parkway North\nWork: (860)326-3671 or 442-4355 Waterford CT 06385\n***********************************************************************\n\n",
"msg_date": "Thu, 25 Sep 2003 12:50:56 -0400",
"msg_from": "\"P.J. \\\"Josh\\\" Rovero\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: populate table with large csv file"
},
{
"msg_contents": "On Thu, 2003-09-25 at 11:38, Dave [Hawk-Systems] wrote:\n> have the table \"numbercheck\"\n> \t Attribute | Type | Modifier\n> \t-----------+------------+----------\n> \t svcnumber | integer | not null\n> \t svcqual | varchar(9) |\n> \t svcequip | char(1) |\n> \t svctroub | varchar(6) |\n> \t svcrate | varchar(4) |\n> \t svcclass | char(1) |\n> \t trailer | varchar(3) |\n> \tIndex: numbercheck_pkey\n> \n> also have a csv file\n> \t7057211380,Y,,,3,B\n> \t7057216800,Y,,,3,B\n> \t7057265038,Y,,,3,B\n> \t7057370261,Y,,,3,B\n> \t7057374613,Y,,,3,B\n> \t7057371832,Y,,,3,B\n> \t4166336554,Y,,,3,B\n> \t4166336863,Y,,,3,B\n> \t7057201148,Y,,,3,B\n> \n> aside from parsing the csv file through a PHP interface, what isthe easiest way\n> to get that csv data importted into the postgres database. thoughts?\n\nNo matter what you do, it's going to barf: svcnumber is a 32-bit\ninteger, and 7,057,211,380 is significantly out of range.\n\nOnce you change svcnumber to bigint, the COPY command will easily\nsuck in the csv file.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Python is executable pseudocode; Perl is executable line noise\"\n\n",
"msg_date": "Thu, 25 Sep 2003 11:57:57 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: populate table with large csv file"
},
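Putting Ron's bigint observation together with the COPY suggestions, the load might look like the sketch below; the file path is hypothetical, and the column list lets the seventh column (trailer) default to NULL since the CSV only has six fields:

    CREATE TABLE numbercheck (
        svcnumber  bigint NOT NULL PRIMARY KEY,  -- bigint: 7057211380 overflows a 32-bit integer
        svcqual    varchar(9),
        svcequip   char(1),
        svctroub   varchar(6),
        svcrate    varchar(4),
        svcclass   char(1),
        trailer    varchar(3)
    );

    COPY numbercheck (svcnumber, svcqual, svcequip, svctroub, svcrate, svcclass)
        FROM '/tmp/numbercheck.csv' WITH DELIMITER ',';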
{
"msg_contents": ">> aside from parsing the csv file through a PHP interface, what isthe\n>easiest way\n>> to get that csv data importted into the postgres database. thoughts?\n>\n>Assuming the CSV file data is well formed, use psql and\n>the COPY command.\n>\n>In psql, create the table. Then issue command:\n>\n>copy <tablename> from 'filename' using delimiters ',';\n\nperfect solution that was overlooked.\n\nUnfortunately processing the 143mb file which would result in a database size of\napprox 500mb takes an eternity. As luck would have it we can get away with just\ndropping to an exec and doing a cat/grep for any data we need... takes 2-3\nseconds.\n\nthe copy command is definately a keeper as I am not looking at replacing code\nelsewhere with a simpler model using that.\n\nThanks\n\nDave\n\n\n",
"msg_date": "Fri, 26 Sep 2003 07:58:16 -0400",
"msg_from": "\"Dave [Hawk-Systems]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: populate table with large csv file"
},
{
"msg_contents": "On Fri, 2003-09-26 at 06:58, Dave [Hawk-Systems] wrote:\n[snip]\n> Unfortunately processing the 143mb file which would result in a database size of\n> approx 500mb takes an eternity. As luck would have it we can get away with just\n\nSomething's not right, then. I loaded 30GB in about 8 hours, on\na slow system, with non-optimized IO. Did you drop the indexes\nfirst?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"As the night fall does not come at once, neither does \noppression. It is in such twilight that we must all be aware of \nchange in the air - however slight - lest we become unwitting \nvictims of the darkness.\"\nJustice William O. Douglas\n\n",
"msg_date": "Fri, 26 Sep 2003 07:37:35 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: populate table with large csv file"
},
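If a load like this is slow mainly because of index maintenance, Ron's suggestion translates into a simple pattern; the index name and column here are hypothetical, since the thread only mentions the primary key:

    DROP INDEX numbercheck_svcqual_idx;      -- drop secondary indexes before the load
    -- ... run the COPY from the previous example ...
    CREATE INDEX numbercheck_svcqual_idx ON numbercheck (svcqual);
    ANALYZE numbercheck;                     -- refresh statistics after the bulk load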
{
"msg_contents": "\nWe are doing some performance testing among various databases (Oracle, MySQL\nand Postgres).\n\nOne of the queries is showing Postgres lagging quite a bit:\n\nSELECT count(*)\nFROM commercial_entity, country, user_account, address_list\nLEFT JOIN state_province ON address_list.state_province_id =\nstate_province.state_province_id\nLEFT JOIN contact_info ON address_list.contact_info_id =\ncontact_info.contact_info_id\nWHERE address_list.address_type_id = 101\nAND commercial_entity.commercial_entity_id =\naddress_list.commercial_entity_id\nAND address_list.country_id = country.country_id\nAND commercial_entity.user_account_id = user_account.user_account_id\nAND user_account.user_role_id IN (101, 101);\n\nI ran a \"vacuum analyze\" after realizing that I had loaded all the data into\nthe database without redoing the statistics; the query jumped from 19\nseconds to 41 seconds _after_ the analyze.\n\nI'd also like to make sure my query is performing correctly - I want all the\ncount of records where the commercial_entity matches user_account,\naddress_list, country, and a left-outer-join on address_list-province and\naddress_list-contact_info.\n\nFinally, I read some posts on the shared_buffers; they stated that the\nshared_buffers should be set to 1/4 to 1/5 of total memory available. Is\nthat correct? I give the MySQL/InnoDB buffers about 70% of the 2 gig on the\nmachine.\n\n\nHere's the explain (I'm not too familiar with reading a Postgres\nexplain...):\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------\n Aggregate (cost=52951.09..52951.09 rows=1 width=116)\n -> Merge Join (cost=52941.61..52950.83 rows=105 width=116)\n Merge Cond: (\"outer\".country_id = \"inner\".country_id)\n -> Index Scan using country_pkey on country (cost=0.00..7.54\nrows=231 width=11)\n -> Sort (cost=52941.61..52941.88 rows=105 width=105)\n Sort Key: address_list.country_id\n -> Merge Join (cost=52729.54..52938.07 rows=105 width=105)\n Merge Cond: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Sort (cost=8792.01..8792.52 rows=201 width=36)\n Sort Key: commercial_entity.commercial_entity_id\n -> Nested Loop (cost=0.00..8784.31 rows=201\nwidth=36)\n -> Index Scan using usr_acc_usr_role_id_i\non user_account (cost=0.00..2403.08 rows=1401 width=12)\n Index Cond: (user_role_id =\n101::numeric)\n -> Index Scan using comm_ent_usr_acc_id_i\non commercial_entity (cost=0.00..4.54 rows=1 width=24)\n Index Cond:\n(commercial_entity.user_account_id = \"outer\".user_account_id)\n -> Sort (cost=43937.53..44173.84 rows=94526 width=69)\n Sort Key: address_list.commercial_entity_id\n -> Merge Join (cost=29019.03..32585.73\nrows=94526 width=69)\n Merge Cond: (\"outer\".contact_info_id =\n\"inner\".contact_info_id)\n -> Index Scan using contact_info_pkey on\ncontact_info (cost=0.00..3366.76 rows=56435 width=12)\n -> Sort (cost=29019.03..29255.34\nrows=94526 width=57)\n Sort Key:\naddress_list.contact_info_id\n -> Merge Join\n(cost=16930.18..18354.55 rows=94526 width=57)\n Merge Cond:\n(\"outer\".state_province_id = \"inner\".state_province_id)\n -> Index Scan using\nstate_province_pkey on state_province (cost=0.00..3.81 rows=67 width=11)\n -> Sort\n(cost=16930.18..17166.50 rows=94526 width=46)\n Sort Key:\naddress_list.state_province_id\n -> Seq Scan on\naddress_list (cost=0.00..6882.52 rows=94526 width=46)\n Filter:\n(address_type_id = 101::numeric)\n\nWhat's the \"Sort (cost...)\"?\n\nI noticed that 
joining the address_list to country was slow; there was no\nindex on just country_id; there were composite indexes on multiple columns,\nso I added one and did a vacuum analyze on the table, and got:\n\n Aggregate (cost=54115.74..54115.74 rows=1 width=116)\n -> Merge Join (cost=54105.91..54115.46 rows=109 width=116)\n Merge Cond: (\"outer\".country_id = \"inner\".country_id)\n -> Index Scan using country_pkey on country (cost=0.00..7.54\nrows=231 width=11)\n -> Sort (cost=54105.91..54106.19 rows=110 width=105)\n Sort Key: address_list.country_id\n -> Merge Join (cost=53884.34..54102.18 rows=110 width=105)\n Merge Cond: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Sort (cost=8792.01..8792.52 rows=201 width=36)\n Sort Key: commercial_entity.commercial_entity_id\n -> Nested Loop (cost=0.00..8784.31 rows=201\nwidth=36)\n -> Index Scan using usr_acc_usr_role_id_i\non user_account (cost=0.00..2403.08 rows=1401 width=12)\n Index Cond: (user_role_id =\n101::numeric)\n -> Index Scan using comm_ent_usr_acc_id_i\non commercial_entity (cost=0.00..4.54 rows=1 width=24)\n Index Cond:\n(commercial_entity.user_account_id = \"outer\".user_account_id)\n -> Sort (cost=45092.32..45335.37 rows=97221 width=69)\n Sort Key: address_list.commercial_entity_id\n -> Merge Join (cost=29770.81..33338.09\nrows=97221 width=69)\n Merge Cond: (\"outer\".contact_info_id =\n\"inner\".contact_info_id)\n -> Index Scan using contact_info_pkey on\ncontact_info (cost=0.00..3366.76 rows=56435 width=12)\n -> Sort (cost=29770.81..30013.86\nrows=97221 width=57)\n Sort Key:\naddress_list.contact_info_id\n -> Merge Join\n(cost=17271.79..18731.55 rows=97221 width=57)\n Merge Cond:\n(\"outer\".state_province_id = \"inner\".state_province_id)\n -> Index Scan using\nstate_province_pkey on state_province (cost=0.00..3.81 rows=67 width=11)\n -> Sort\n(cost=17271.79..17514.84 rows=97221 width=46)\n Sort Key:\naddress_list.state_province_id\n -> Seq Scan on\naddress_list (cost=0.00..6882.52 rows=97221 width=46)\n Filter:\n(address_type_id = 101::numeric)\n\nNo difference. Note that all the keys that are used in the joins are\nnumeric(10)'s, so there shouldn't be any cast-issues.\n\nWhen you create a primary key on a table, is an index created (I seem to\nremember a message going by stating that an index would be added).\n\nFor comparison, our production Oracle database (running on nearly identical\nhardware - the Postgres machine has IDE-RAID-5 and the Oracle machine has\nRAID mirroring) takes between 1 and 2 seconds.\n\nI've got one last question, and I really hope responses don't get\nsidetracked by it; I see alot of negative comments towards MySQL, many of\nthem stating that it's a database layer overtop of the file system. Can\nsomeone explain why Postgres is better than MySQL 4.0.14 using InnoDB?\nMySQL, on the above query, with one less index (on address_list.country)\ntakes 0.20 seconds.\n\nDavid.\n\n",
"msg_date": "Sat, 27 Sep 2003 20:49:23 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": false,
"msg_subject": "Tuning/performance question."
},
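One of the side questions in the message, whether creating a primary key also creates an index, can be answered directly: PostgreSQL backs every PRIMARY KEY with a unique index, which is easy to confirm from the catalogs (table name taken from the query above):

    SELECT indexname, indexdef
      FROM pg_indexes
     WHERE tablename = 'commercial_entity';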
{
"msg_contents": "On Sunday 28 September 2003 09:19, David Griffiths wrote:\n> No difference. Note that all the keys that are used in the joins are\n> numeric(10)'s, so there shouldn't be any cast-issues.\n\nCan you make them bigint and see? It might make some difference perhaps.\n\nChecking the plan in the meantime.. BTW what tuning you did to postgresql?\n\nCheck http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html, assuming \nyou haven't seen earlier..\n\n HTH\n\n Shridhar\n\n",
"msg_date": "Sun, 28 Sep 2003 15:39:56 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question."
},
{
"msg_contents": "On Sat, 27 Sep 2003, David Griffiths wrote:\n\n>\n> We are doing some performance testing among various databases (Oracle, MySQL\n> and Postgres).\n>\n> One of the queries is showing Postgres lagging quite a bit:\n>\n> SELECT count(*)\n> FROM commercial_entity, country, user_account, address_list\n> LEFT JOIN state_province ON address_list.state_province_id =\n> state_province.state_province_id\n> LEFT JOIN contact_info ON address_list.contact_info_id =\n> contact_info.contact_info_id\n> WHERE address_list.address_type_id = 101\n> AND commercial_entity.commercial_entity_id =\n> address_list.commercial_entity_id\n> AND address_list.country_id = country.country_id\n> AND commercial_entity.user_account_id = user_account.user_account_id\n> AND user_account.user_role_id IN (101, 101);\n\nI guess that this question has been discussed very often - but I cannot\nremember why exactly. Is there a pointer to a technical explanation? Has\nit something to do with MVCC? But ist it one of MVCC's benefits that we\ncan make a consistent online backup without archiving redo locks (Oracle\ncan't, DB2 can). Is DB2 slower than Oracle in such cases (count(*)) as\nwell?\n\nWorkaround:\nWe can sometimes fake a bit to avoid such costly queries and set up a\ntrigger that calls a function that increases a counter in a separate\ncounter table. Then we are lightning-fast.\n\nBut many users compain about PostgreSQL's poor count(*) performance,\nthat's true and can be critical when someone wants to replace another\ndatabase product by PostgreSQL.\n",
"msg_date": "Sun, 28 Sep 2003 13:13:54 +0200 (CEST)",
"msg_from": "Holger Marzen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question."
},
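A minimal sketch of the trigger-maintained counter Holger describes; every name here (the counter table, the function, and a_table itself) is made up for illustration, it assumes plpgsql is installed, and note that funnelling all inserts and deletes through one counter row serializes concurrent writers:

    CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
    INSERT INTO row_counts VALUES ('a_table', 0);

    CREATE FUNCTION maintain_a_table_count() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE row_counts SET n = n + 1 WHERE table_name = ''a_table'';
        ELSE  -- DELETE
            UPDATE row_counts SET n = n - 1 WHERE table_name = ''a_table'';
        END IF;
        RETURN NULL;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER a_table_count AFTER INSERT OR DELETE ON a_table
        FOR EACH ROW EXECUTE PROCEDURE maintain_a_table_count();

    -- SELECT n FROM row_counts WHERE table_name = 'a_table';  -- instead of count(*)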
{
"msg_contents": "Holger Marzen <[email protected]> writes:\n> But many users compain about PostgreSQL's poor count(*) performance,\n\nI don't think that's relevant here. Some other DB's have shortcuts for\ndetermining the total number of rows in a single table, that is they can\ndo \"SELECT COUNT(*) FROM a_table\" quickly, but David's query is messy\nenough that I can't believe anyone can actually do it without forming\nthe join result.\n\nWhat I'd ask for is EXPLAIN ANALYZE output. Usually, if a complex query\nis slower than it should be, it's because the planner is picking a bad\nplan. So you need to look at how its estimates diverge from reality.\nBut plain EXPLAIN doesn't show the reality, only the estimates ...\n\nDavid, could we see EXPLAIN ANALYZE for the query, and also the table\nschemas (psql \\d displays would do)? Also, please take it to\npgsql-performance, it's not really on-topic for pgsql-general.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Sep 2003 12:13:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question. "
},
{
"msg_contents": "\n> I guess that this question has been discussed very often - but I cannot\n> remember why exactly. Is there a pointer to a technical explanation? Has\n> it something to do with MVCC? But ist it one of MVCC's benefits that we\n> can make a consistent online backup without archiving redo locks (Oracle\n> can't, DB2 can). Is DB2 slower than Oracle in such cases (count(*)) as\n> well?\n>\n> Workaround:\n> We can sometimes fake a bit to avoid such costly queries and set up a\n> trigger that calls a function that increases a counter in a separate\n> counter table. Then we are lightning-fast.\n>\n> But many users compain about PostgreSQL's poor count(*) performance,\n> that's true and can be critical when someone wants to replace another\n> database product by PostgreSQL.\n\nThis is but one of many tests we're doing. The count(*) performance is not\nthe deciding factor. This query was pulled from our production system, and\nI've\nextracted the exact tables and data from the production system to test.\n\nMySQL with MyISAM does in fact cheat on the count(*). InnoDB does not,\nhowever. The \"explain\" indicates that it's doing the work, and analyzing the\ntables dropped the cost of the query from .35 seconds to .20 seconds.\n\nHere's the same query, but selecting data (to test the databases ability to\nfind a single row quicky):\n\nSELECT current_timestamp;\nSELECT company_name, address_1, address_2, address_3, city,\naddress_list.state_province_id, state_province_short_desc, country_desc,\nzip_code, address_list.country_id,\ncontact_info.email, commercial_entity.user_account_id, phone_num_1,\nphone_num_fax, website, boats_website\nFROM commercial_entity, country, user_account,\naddress_list LEFT JOIN state_province ON address_list.state_province_id =\nstate_province.state_province_id\nLEFT JOIN contact_info ON address_list.contact_info_id =\ncontact_info.contact_info_id\nWHERE address_list.address_type_id = 101\nAND commercial_entity.commercial_entity_id=225528\nAND commercial_entity.commercial_entity_id =\naddress_list.commercial_entity_id\nAND address_list.country_id = country.country_id\nAND commercial_entity.user_account_id = user_account.user_account_id\nAND user_account.user_role_id IN (101, 101);\nSELECT current_timestamp;\n\nPostgres takes about 33 seconds to get the row back.\n\nHere's the \"EXPLAIN\":\n\n Nested Loop (cost=0.00..64570.33 rows=1 width=385)\n -> Nested Loop (cost=0.00..64567.30 rows=1 width=361)\n -> Nested Loop (cost=0.00..64563.97 rows=1 width=349)\n Join Filter: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Index Scan using commercial_entity_pkey on\ncommercial_entity (cost=0.00..5.05 rows=1 width=94)\n Index Cond: (commercial_entity_id = 225528::numeric)\n -> Materialize (cost=63343.66..63343.66 rows=97221\nwidth=255)\n -> Merge Join (cost=0.00..63343.66 rows=97221\nwidth=255)\n Merge Cond: (\"outer\".contact_info_id =\n\"inner\".contact_info_id)\n -> Nested Loop (cost=0.00..830457.52 rows=97221\nwidth=222)\n Join Filter: (\"outer\".state_province_id =\n\"inner\".state_province_id)\n -> Index Scan using addr_list_ci_id_i on\naddress_list (cost=0.00..586676.65 rows=97221 width=205)\n Filter: (address_type_id =\n101::numeric)\n -> Seq Scan on state_province\n(cost=0.00..1.67 rows=67 width=17)\n -> Index Scan using contact_info_pkey on\ncontact_info (cost=0.00..3366.76 rows=56435 width=33)\n -> Index Scan using user_account_pkey on user_account\n(cost=0.00..3.32 rows=1 width=12)\n Index Cond: (\"outer\".user_account_id 
=\nuser_account.user_account_id)\n Filter: (user_role_id = 101::numeric)\n -> Index Scan using country_pkey on country (cost=0.00..3.01 rows=1\nwidth=24)\n Index Cond: (\"outer\".country_id = country.country_id)\n(20 rows)\n\nDavid.\n",
"msg_date": "Sun, 28 Sep 2003 10:01:13 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question."
},
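Nothing in the thread has confirmed a fix yet, but the plan above builds and re-scans a roughly 97,000-row materialized join only to filter it on commercial_entity_id, so one hedged experiment is an index on that join column of address_list (the index name here is invented):

    CREATE INDEX addr_list_comm_ent_id_i ON address_list (commercial_entity_id);
    ANALYZE address_list;
    -- then re-run the SELECT above under EXPLAIN ANALYZE and compare the plans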
{
"msg_contents": "> David, could we see EXPLAIN ANALYZE for the query, and also the table\n> schemas (psql \\d displays would do)? Also, please take it to\n> pgsql-performance, it's not really on-topic for pgsql-general.\n> \n> regards, tom lane\n\nWill do.\n\nThanks,\nDavid.\n\n",
"msg_date": "Sun, 28 Sep 2003 10:02:13 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question. "
},
{
"msg_contents": "Holger Marzen wrote:\n\n>On Sat, 27 Sep 2003, David Griffiths wrote:\n>\n> \n>\n>>We are doing some performance testing among various databases (Oracle, MySQL\n>>and Postgres).\n>>\n>>One of the queries is showing Postgres lagging quite a bit:\n>>\n>>SELECT count(*)\n>>FROM commercial_entity, country, user_account, address_list\n>>LEFT JOIN state_province ON address_list.state_province_id =\n>>state_province.state_province_id\n>>LEFT JOIN contact_info ON address_list.contact_info_id =\n>>contact_info.contact_info_id\n>>WHERE address_list.address_type_id = 101\n>>AND commercial_entity.commercial_entity_id =\n>>address_list.commercial_entity_id\n>>AND address_list.country_id = country.country_id\n>>AND commercial_entity.user_account_id = user_account.user_account_id\n>>AND user_account.user_role_id IN (101, 101);\n>> \n>>\n>\n>I guess that this question has been discussed very often - but I cannot\n>remember why exactly. Is there a pointer to a technical explanation? Has\n>it something to do with MVCC? But ist it one of MVCC's benefits that we\n>can make a consistent online backup without archiving redo locks (Oracle\n>can't, DB2 can). Is DB2 slower than Oracle in such cases (count(*)) as\n>well?\n>\n>Workaround:\n>We can sometimes fake a bit to avoid such costly queries and set up a\n>trigger that calls a function that increases a counter in a separate\n>counter table. Then we are lightning-fast.\n>\n>But many users compain about PostgreSQL's poor count(*) performance,\n>that's true and can be critical when someone wants to replace another\n>database product by PostgreSQL.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n> \n>\nYup, it'd be nice to have faster count(*) performance.\n\n",
"msg_date": "Sun, 28 Sep 2003 11:21:00 -0700",
"msg_from": "Dennis Gearon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance question."
}
] |
[
{
"msg_contents": "I am running TPC-R benchmarks with a scale factor of 1, which correspond\nto approximately 1 GB database size on PostgreSQL 7.3.4 installed on\nCygWin on Windows XP. I dedicated 128 MB of shared memory to my postrges\ninstallation.\nMost of the queries were able to complete in a matter of minutes, but\nquery 17 was taking hours and hours. The query is show below. Is there\nany way to optimize it ?\n \nselect\n sum(l_extendedprice) / 7.0 as avg_yearly\nfrom\n lineitem,\n part\nwhere\n p_partkey = l_partkey\n and p_brand = 'Brand#11'\n and p_container = 'SM PKG'\n and l_quantity < (\n select\n 0.2 * avg(l_quantity)\n from\n lineitem\n where\n l_partkey = p_partkey\n );\n \nThanks.\n \nOleg\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n\nMessage\n\n\n\nI am running TPC-R \nbenchmarks with a scale factor of 1, which correspond to approximately 1 GB \ndatabase size on PostgreSQL 7.3.4 installed on CygWin on Windows XP. I dedicated \n128 MB of shared memory to my postrges installation.\nMost of the queries \nwere able to complete in a matter of minutes, but query 17 was taking hours and \nhours. The query is show below. Is there any way to optimize it \n?\n \nselect sum(l_extendedprice) / 7.0 as \navg_yearlyfrom lineitem, partwhere p_partkey \n= l_partkey and p_brand = 'Brand#11' and p_container = 'SM \nPKG' and l_quantity < \n( select 0.2 * \navg(l_quantity) from lineitem where l_partkey \n= p_partkey );\n \nThanks.\n \nOleg\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Thu, 25 Sep 2003 13:40:12 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "TPC-R benchmarks"
},
{
"msg_contents": "I am running TPC-H with scale factor of 1 on RedHat7.2 with the kernel\n2.5.74. Q17 can always finish in about 7 seconds on my system. The\nexecution plan is:\n----------------------------------------------------------------------------------------------------\n Aggregate (cost=780402.43..780402.43 rows=1 width=48)\n -> Nested Loop (cost=0.00..780397.50 rows=1973 width=48)\n Join Filter: (\"inner\".l_quantity < (subplan))\n -> Seq Scan on part (cost=0.00..8548.00 rows=197 width=12)\n Filter: ((p_brand = 'Brand#31'::bpchar) AND (p_container\n= 'LG CASE'::bpchar))\n -> Index Scan using i_l_partkey on lineitem \n(cost=0.00..124.32 rows=30 width=36)\n Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n SubPlan\n -> Aggregate (cost=124.40..124.40 rows=1 width=11)\n -> Index Scan using i_l_partkey on lineitem \n(cost=0.00..124.32 rows=30 width=11)\n Index Cond: (l_partkey = $0)\n(11 rows)\n\nHope this helps,\nJenny\nOn Thu, 2003-09-25 at 12:40, Oleg Lebedev wrote:\n> I am running TPC-R benchmarks with a scale factor of 1, which correspond\n> to approximately 1 GB database size on PostgreSQL 7.3.4 installed on\n> CygWin on Windows XP. I dedicated 128 MB of shared memory to my postrges\n> installation.\n> Most of the queries were able to complete in a matter of minutes, but\n> query 17 was taking hours and hours. The query is show below. Is there\n> any way to optimize it ?\n> \n> select\n> sum(l_extendedprice) / 7.0 as avg_yearly\n> from\n> lineitem,\n> part\n> where\n> p_partkey = l_partkey\n> and p_brand = 'Brand#11'\n> and p_container = 'SM PKG'\n> and l_quantity < (\n> select\n> 0.2 * avg(l_quantity)\n> from\n> lineitem\n> where\n> l_partkey = p_partkey\n> );\n> \n> Thanks.\n> \n> Oleg\n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended for the named recipient only.\n> If you are not the named recipient, delete this message and all attachments.\n> Unauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\n> We reserve the right to monitor e-mail sent through our network. \n> \n> *************************************\n\n",
"msg_date": "Thu, 25 Sep 2003 14:32:35 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "\nOn Thu, 2003-09-25 at 14:32, Jenny Zhang wrote:\n> I am running TPC-H with scale factor of 1 on RedHat7.2 with the kernel\n> 2.5.74. Q17 can always finish in about 7 seconds on my system. The\n> execution plan is:\n\n I just want to point out that we are the OSDL are not running\na TPC-X anything. We have fair use implementations of these \nbenchmarks but because of differences our performance tests can\nnot be compared with the TPCC's benchmark results.\n\n> ----------------------------------------------------------------------------------------------------\n> Aggregate (cost=780402.43..780402.43 rows=1 width=48)\n> -> Nested Loop (cost=0.00..780397.50 rows=1973 width=48)\n> Join Filter: (\"inner\".l_quantity < (subplan))\n> -> Seq Scan on part (cost=0.00..8548.00 rows=197 width=12)\n> Filter: ((p_brand = 'Brand#31'::bpchar) AND (p_container\n> = 'LG CASE'::bpchar))\n> -> Index Scan using i_l_partkey on lineitem \n> (cost=0.00..124.32 rows=30 width=36)\n> Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> SubPlan\n> -> Aggregate (cost=124.40..124.40 rows=1 width=11)\n> -> Index Scan using i_l_partkey on lineitem \n> (cost=0.00..124.32 rows=30 width=11)\n> Index Cond: (l_partkey = $0)\n> (11 rows)\n> \n> Hope this helps,\n> Jenny\n> On Thu, 2003-09-25 at 12:40, Oleg Lebedev wrote:\n> > I am running TPC-R benchmarks with a scale factor of 1, which correspond\n> > to approximately 1 GB database size on PostgreSQL 7.3.4 installed on\n> > CygWin on Windows XP. I dedicated 128 MB of shared memory to my postrges\n> > installation.\n> > Most of the queries were able to complete in a matter of minutes, but\n> > query 17 was taking hours and hours. The query is show below. Is there\n> > any way to optimize it ?\n> > \n> > select\n> > sum(l_extendedprice) / 7.0 as avg_yearly\n> > from\n> > lineitem,\n> > part\n> > where\n> > p_partkey = l_partkey\n> > and p_brand = 'Brand#11'\n> > and p_container = 'SM PKG'\n> > and l_quantity < (\n> > select\n> > 0.2 * avg(l_quantity)\n> > from\n> > lineitem\n> > where\n> > l_partkey = p_partkey\n> > );\n> > \n> > Thanks.\n> > \n> > Oleg\n> > \n> > *************************************\n> > \n> > This e-mail may contain privileged or confidential material intended for the named recipient only.\n> > If you are not the named recipient, delete this message and all attachments.\n> > Unauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\n> > We reserve the right to monitor e-mail sent through our network. \n> > \n> > *************************************\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n-- \nTimothy D. Witham - Lab Director - [email protected]\nOpen Source Development Lab Inc - A non-profit corporation\n12725 SW Millikan Way - Suite 400 - Beaverton OR, 97005\n(503)-626-2455 x11 (office) (503)-702-2871 (cell)\n(503)-626-2436 (fax)\n\n",
"msg_date": "Tue, 07 Oct 2003 11:34:33 -0700",
"msg_from": "\"Timothy D. Witham\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Seems like in your case postgres uses an i_l_partkey index on lineitem\ntable. I have a foreign key constraint defined between the lineitem and\npart table, but didn't create an special indexes. Here is my query plan:\n\n -> Aggregate (cost=1517604222.32..1517604222.32 rows=1 width=31)\n -> Hash Join (cost=8518.49..1517604217.39 rows=1969 width=31)\n Hash Cond: (\"outer\".l_partkey = \"inner\".p_partkey)\n Join Filter: (\"outer\".l_quantity < (subplan))\n -> Seq Scan on lineitem (cost=0.00..241889.15\nrows=6001215 widt\nh=27)\n -> Hash (cost=8518.00..8518.00 rows=197 width=4)\n -> Seq Scan on part (cost=0.00..8518.00 rows=197\nwidth=4)\n\n Filter: ((p_brand = 'Brand#11'::bpchar) AND\n(p_contai\nner = 'SM PKG'::bpchar))\n SubPlan\n -> Aggregate (cost=256892.28..256892.28 rows=1\nwidth=11)\n -> Seq Scan on lineitem (cost=0.00..256892.19\nrows=37 w\nidth=11)\n Filter: (l_partkey = $0)\n\n-----Original Message-----\nFrom: Jenny Zhang [mailto:[email protected]] \nSent: Thursday, September 25, 2003 3:33 PM\nTo: Oleg Lebedev\nCc: [email protected];\[email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nI am running TPC-H with scale factor of 1 on RedHat7.2 with the kernel\n2.5.74. Q17 can always finish in about 7 seconds on my system. The\nexecution plan is:\n------------------------------------------------------------------------\n----------------------------\n Aggregate (cost=780402.43..780402.43 rows=1 width=48)\n -> Nested Loop (cost=0.00..780397.50 rows=1973 width=48)\n Join Filter: (\"inner\".l_quantity < (subplan))\n -> Seq Scan on part (cost=0.00..8548.00 rows=197 width=12)\n Filter: ((p_brand = 'Brand#31'::bpchar) AND (p_container\n= 'LG CASE'::bpchar))\n -> Index Scan using i_l_partkey on lineitem \n(cost=0.00..124.32 rows=30 width=36)\n Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n SubPlan\n -> Aggregate (cost=124.40..124.40 rows=1 width=11)\n -> Index Scan using i_l_partkey on lineitem \n(cost=0.00..124.32 rows=30 width=11)\n Index Cond: (l_partkey = $0)\n(11 rows)\n\nHope this helps,\nJenny\nOn Thu, 2003-09-25 at 12:40, Oleg Lebedev wrote:\n> I am running TPC-R benchmarks with a scale factor of 1, which \n> correspond to approximately 1 GB database size on PostgreSQL 7.3.4 \n> installed on CygWin on Windows XP. I dedicated 128 MB of shared memory\n\n> to my postrges installation. Most of the queries were able to complete\n\n> in a matter of minutes, but query 17 was taking hours and hours. The \n> query is show below. Is there any way to optimize it ?\n> \n> select\n> sum(l_extendedprice) / 7.0 as avg_yearly\n> from\n> lineitem,\n> part\n> where\n> p_partkey = l_partkey\n> and p_brand = 'Brand#11'\n> and p_container = 'SM PKG'\n> and l_quantity < (\n> select\n> 0.2 * avg(l_quantity)\n> from\n> lineitem\n> where\n> l_partkey = p_partkey\n> );\n> \n> Thanks.\n> \n> Oleg\n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended \n> for the named recipient only. If you are not the named recipient, \n> delete this message and all attachments. Unauthorized reviewing, \n> copying, printing, disclosing, or otherwise using information in this \n> e-mail is prohibited. 
We reserve the right to monitor e-mail sent \n> through our network.\n> \n> *************************************\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n\n",
"msg_date": "Thu, 25 Sep 2003 15:39:51 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "The index is created by:\ncreate index i_l_partkey on lineitem (l_partkey);\nI do not have any foreign key defined. Does the spec require foreign\nkeys?\n\nWhen you create a foreign key reference, does PG create an index\nautomatically?\n\nCan you try with the index?\n\nJenny\nOn Thu, 2003-09-25 at 14:39, Oleg Lebedev wrote:\n> Seems like in your case postgres uses an i_l_partkey index on lineitem\n> table. I have a foreign key constraint defined between the lineitem and\n> part table, but didn't create an special indexes. Here is my query plan:\n> \n> -> Aggregate (cost=1517604222.32..1517604222.32 rows=1 width=31)\n> -> Hash Join (cost=8518.49..1517604217.39 rows=1969 width=31)\n> Hash Cond: (\"outer\".l_partkey = \"inner\".p_partkey)\n> Join Filter: (\"outer\".l_quantity < (subplan))\n> -> Seq Scan on lineitem (cost=0.00..241889.15\n> rows=6001215 widt\n> h=27)\n> -> Hash (cost=8518.00..8518.00 rows=197 width=4)\n> -> Seq Scan on part (cost=0.00..8518.00 rows=197\n> width=4)\n> \n> Filter: ((p_brand = 'Brand#11'::bpchar) AND\n> (p_contai\n> ner = 'SM PKG'::bpchar))\n> SubPlan\n> -> Aggregate (cost=256892.28..256892.28 rows=1\n> width=11)\n> -> Seq Scan on lineitem (cost=0.00..256892.19\n> rows=37 w\n> idth=11)\n> Filter: (l_partkey = $0)\n> \n> -----Original Message-----\n> From: Jenny Zhang [mailto:[email protected]] \n> Sent: Thursday, September 25, 2003 3:33 PM\n> To: Oleg Lebedev\n> Cc: [email protected];\n> [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> I am running TPC-H with scale factor of 1 on RedHat7.2 with the kernel\n> 2.5.74. Q17 can always finish in about 7 seconds on my system. The\n> execution plan is:\n> ------------------------------------------------------------------------\n> ----------------------------\n> Aggregate (cost=780402.43..780402.43 rows=1 width=48)\n> -> Nested Loop (cost=0.00..780397.50 rows=1973 width=48)\n> Join Filter: (\"inner\".l_quantity < (subplan))\n> -> Seq Scan on part (cost=0.00..8548.00 rows=197 width=12)\n> Filter: ((p_brand = 'Brand#31'::bpchar) AND (p_container\n> = 'LG CASE'::bpchar))\n> -> Index Scan using i_l_partkey on lineitem \n> (cost=0.00..124.32 rows=30 width=36)\n> Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> SubPlan\n> -> Aggregate (cost=124.40..124.40 rows=1 width=11)\n> -> Index Scan using i_l_partkey on lineitem \n> (cost=0.00..124.32 rows=30 width=11)\n> Index Cond: (l_partkey = $0)\n> (11 rows)\n> \n> Hope this helps,\n> Jenny\n> On Thu, 2003-09-25 at 12:40, Oleg Lebedev wrote:\n> > I am running TPC-R benchmarks with a scale factor of 1, which \n> > correspond to approximately 1 GB database size on PostgreSQL 7.3.4 \n> > installed on CygWin on Windows XP. I dedicated 128 MB of shared memory\n> \n> > to my postrges installation. Most of the queries were able to complete\n> \n> > in a matter of minutes, but query 17 was taking hours and hours. The \n> > query is show below. Is there any way to optimize it ?\n> > \n> > select\n> > sum(l_extendedprice) / 7.0 as avg_yearly\n> > from\n> > lineitem,\n> > part\n> > where\n> > p_partkey = l_partkey\n> > and p_brand = 'Brand#11'\n> > and p_container = 'SM PKG'\n> > and l_quantity < (\n> > select\n> > 0.2 * avg(l_quantity)\n> > from\n> > lineitem\n> > where\n> > l_partkey = p_partkey\n> > );\n> > \n> > Thanks.\n> > \n> > Oleg\n> > \n> > *************************************\n> > \n> > This e-mail may contain privileged or confidential material intended \n> > for the named recipient only. 
If you are not the named recipient, \n> > delete this message and all attachments. Unauthorized reviewing, \n> > copying, printing, disclosing, or otherwise using information in this \n> > e-mail is prohibited. We reserve the right to monitor e-mail sent \n> > through our network.\n> > \n> > *************************************\n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended for the named recipient only.\n> If you are not the named recipient, delete this message and all attachments.\n> Unauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\n> We reserve the right to monitor e-mail sent through our network. \n> \n> *************************************\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Thu, 25 Sep 2003 15:24:59 -0700",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Jenny,\n\n> create index i_l_partkey on lineitem (l_partkey);\n> I do not have any foreign key defined. Does the spec require foreign\n> keys?\n>\n> When you create a foreign key reference, does PG create an index\n> automatically?\n\nNo. A index is not required to enforce a foriegn key, and is sometimes not \nuseful (for example, FK fields with only 3 possible values).\n\nSo it may be that you need to create an index on that field.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 25 Sep 2003 20:41:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
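Josh's point in concrete terms, using the thread's tables: the foreign key only adds the constraint, and the index Jenny showed above still has to be created by hand (the constraint name below is invented):

    ALTER TABLE lineitem ADD CONSTRAINT lineitem_partkey_fk
        FOREIGN KEY (l_partkey) REFERENCES part (p_partkey);
    create index i_l_partkey on lineitem (l_partkey);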
{
"msg_contents": "Oleg Lebedev <[email protected]> writes:\n> Seems like in your case postgres uses an i_l_partkey index on lineitem\n> table. I have a foreign key constraint defined between the lineitem and\n> part table, but didn't create an special indexes. Here is my query plan:\n\nThe planner is obviously unhappy with this plan (note the large cost\nnumbers), but it can't find a way to do better. An index on\nlineitem.l_partkey would help, I think.\n\nThe whole query seems like it's written in a very inefficient fashion;\ncouldn't the estimation of '0.2 * avg(l_quantity)' be amortized across\nmultiple join rows? But I dunno whether the TPC rules allow for\nsignificant manual rewriting of the given query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Sep 2003 00:28:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks "
},
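The amortization Tom alludes to could be written as a join against a per-partkey aggregate instead of a correlated subquery; this is only an illustrative rewrite that should return the same result, since the average is computed per l_partkey either way (and, as the next message explains, hand rewrites of this kind are not admissible under the TPC rules themselves):

    SELECT sum(l.l_extendedprice) / 7.0 AS avg_yearly
    FROM lineitem l
    JOIN part p ON p.p_partkey = l.l_partkey
    JOIN (SELECT l_partkey, 0.2 * avg(l_quantity) AS qty_limit
            FROM lineitem
           GROUP BY l_partkey) lim
      ON lim.l_partkey = l.l_partkey
    WHERE p.p_brand = 'Brand#11'
      AND p.p_container = 'SM PKG'
      AND l.l_quantity < lim.qty_limit;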
{
"msg_contents": "The TPC-H/R rules allow only minor changes to the SQL that are necessary\ndue to SQL implementation differences. They do not allow changes made to\nimprove performance. It is their way to test optimizer's ability to\nrecognize an inefficient SQL statement and do the rewrite.\n\nThe rule makes sense for the TPC-H, which is supposed to represent\nad-Hoc query. One might argue that for TPC-R, which is suppose to\nrepresent \"Reporting\" with pre-knowledge of the query, that re-write\nshould be allowed. However, that is currently not the case. Since the\nRDBMS's represented on the TPC council are competing with TPC-H, their\noptimizers already do the re-write, so (IMHO) there is no motivation to\nrelax the rules for the TPC-R.\n\n\nOn Thu, 2003-09-25 at 21:28, Tom Lane wrote:\n> Oleg Lebedev <[email protected]> writes:\n> > Seems like in your case postgres uses an i_l_partkey index on lineitem\n> > table. I have a foreign key constraint defined between the lineitem and\n> > part table, but didn't create an special indexes. Here is my query plan:\n> \n> The planner is obviously unhappy with this plan (note the large cost\n> numbers), but it can't find a way to do better. An index on\n> lineitem.l_partkey would help, I think.\n> \n> The whole query seems like it's written in a very inefficient fashion;\n> couldn't the estimation of '0.2 * avg(l_quantity)' be amortized across\n> multiple join rows? But I dunno whether the TPC rules allow for\n> significant manual rewriting of the given query.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n",
"msg_date": "26 Sep 2003 09:11:37 -0700",
"msg_from": "Mary Edie Meredith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Hi guys \n\nIm running a Datawarehouse benchmark (APB-1) on PostgreSql. The objective is to\nchoose which of the to main db (PostgreSQL, MySQL) is fastest. I've run into a\nsmall problem which I hope could be resolved here.\n\nI'm trying to speed up this query:\n\nselect count(*) from actvars, prodlevel where\nactvars.product_level=prodlevel.code_level and\nprodlevel.division_level='OY3S5LAPALL6';\n\nACTVARS is a fact table of aproximatly 16 million rows, PRODLEVEL has 20700\nrows. Both have btree indices. \n\nI executed the query and it took me almost half an hour to execute! Running the\nsame query on MySQL the result came 6 seconds after. As you can see there is a\nlarge differences between execution times.\n\nAfter running an explain:\n\nAggregate (cost=3123459.62..3123459.62 rows=1 width=32)\n -> Merge Join (cost=3021564.79..3119827.17 rows=1452981 width=32)\n Merge Cond: (\"outer\".product_level = \"inner\".code_level)\n -> Sort (cost=3020875.00..3060938.81 rows=16025523 width=16)\n Sort Key: actvars.product_level\n -> Seq Scan on actvars (cost=0.00..365711.23 rows=16025523\nwidth=16)\n -> Sort (cost=689.79..694.48 rows=1877 width=16)\n Sort Key: prodlevel.code_level\n -> Seq Scan on prodlevel (cost=0.00..587.75 rows=1877 width=16)\n Filter: (division_level = 'OY3S5LAPALL6'::bpchar)\n\nI found that the indices werent being used. \n\nThe database has been vacuumed and analyze has been executed.\n\nI tried disabling the seqscan, so as to force index usage. The planner uses\nindex scans but the query stil takes a very long time to execute.\n\nAny suggestions on resolving this would would be appreciated.\n\nP.S: Im running PostgrSQL\n7.3.2\n\n---------------------------------------------\nThis message was sent using Endymion MailMan.\nhttp://www.endymion.com/products/mailman/\n\n\n",
"msg_date": "Thu, 25 Sep 2003 22:28:40 GMT",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Indices arent being used"
},
{
"msg_contents": "> Im running a Datawarehouse benchmark (APB-1) on PostgreSql. The objective is to\n> choose which of the to main db (PostgreSQL, MySQL) is fastest. I've run into a\n> small problem which I hope could be resolved here.\n> \n> I'm trying to speed up this query:\n> \n> select count(*) from actvars, prodlevel where\n> actvars.product_level=prodlevel.code_level and\n> prodlevel.division_level='OY3S5LAPALL6';\n\nHow about EXPLAIN ANALYZE output?\n\n> ACTVARS is a fact table of aproximatly 16 million rows, PRODLEVEL has 20700\n> rows. Both have btree indices. \n\n> The database has been vacuumed and analyze has been executed.\n\nThe usual postgresql.conf adjustments have also been made?",
"msg_date": "Thu, 25 Sep 2003 19:46:41 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indices arent being used"
},
{
"msg_contents": "[email protected] wrote:\n> Hi guys \n> \n> Im running a Datawarehouse benchmark (APB-1) on PostgreSql. The objective is to\n> choose which of the to main db (PostgreSQL, MySQL) is fastest. I've run into a\n> small problem which I hope could be resolved here.\n> \n> I'm trying to speed up this query:\n> \n> select count(*) from actvars, prodlevel where\n> actvars.product_level=prodlevel.code_level and\n> prodlevel.division_level='OY3S5LAPALL6';\n> \n> ACTVARS is a fact table of aproximatly 16 million rows, PRODLEVEL has 20700\n> rows. Both have btree indices. \n> \n> I executed the query and it took me almost half an hour to execute! Running the\n> same query on MySQL the result came 6 seconds after. As you can see there is a\n> large differences between execution times.\n> \n> After running an explain:\n> \n> Aggregate (cost=3123459.62..3123459.62 rows=1 width=32)\n> -> Merge Join (cost=3021564.79..3119827.17 rows=1452981 width=32)\n> Merge Cond: (\"outer\".product_level = \"inner\".code_level)\n> -> Sort (cost=3020875.00..3060938.81 rows=16025523 width=16)\n> Sort Key: actvars.product_level\n> -> Seq Scan on actvars (cost=0.00..365711.23 rows=16025523\n> width=16)\n\nDamn.. Seq. scan for actvars? I would say half an hour is a good throughput.\n\nAre there any indexes on both actvars.product_level and prodlevel.code_level? \nAre they exactly compatible type? int2 and int4 are not compatible in postgresql \nlingo.\n\nThat plan should go for index scan. Can you show us the table definitions?\n\nAnd yes, what tuning you did to postgresql?\n\n Shridhar\n\n",
"msg_date": "Fri, 26 Sep 2003 12:55:15 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indices arent being used"
}
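A quick way to double-check Shridhar's point that the join columns really are exactly the same type, straight from the catalogs:

    SELECT c.relname, a.attname, format_type(a.atttypid, a.atttypmod) AS coltype
      FROM pg_attribute a
      JOIN pg_class c ON c.oid = a.attrelid
     WHERE c.relname IN ('actvars', 'prodlevel')
       AND a.attname IN ('product_level', 'code_level');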
] |
[
{
"msg_contents": ">Damn.. Seq. scan for actvars? I would say half an hour is a good throughput.\n>\n>Are there any indexes on both actvars.product_level and prodlevel.code_level?\nAre >they exactly compatible type? int2 and int4 are not compatible in\npostgresql >lingo.\n>\n>That plan should go for index scan. Can you show us the table definitions?\n>\n>And yes, what tuning you did to postgresql?\n>\n>Shridhar\nThe alterations done upon postgresql.conf with 512 RAM were these:\n\nmax_connections = 3\nshared_buffers = 6000\nwal_buffers = 32\nsort_mem = 2048\nfsync = false\neffective_cache_size = 44800\nrandom_page_cost = 3\ndefault_statistics_target = 50\n\nYes I have an index on actvars.product_level and an index on\nprodlevel.code_level.Both indices have character(12) data types.\n\n\n\n---------------------------------------------\nThis message was sent using Endymion MailMan.\nhttp://www.endymion.com/products/mailman/\n\n\n",
"msg_date": "Fri, 26 Sep 2003 10:57:28 GMT",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Indices arent being used"
},
{
"msg_contents": "[email protected] writes:\n> sort_mem = 2048\n\n2 meg sort_mem seems on the small side.\n\n> Yes I have an index on actvars.product_level and an index on\n> prodlevel.code_level.Both indices have character(12) data types.\n\nCan you force an indexscan to be chosen by setting enable_seqscan off?\nIf so, what does the explain look like?\n\nBTW, it's always much more useful to show EXPLAIN ANALYZE output than\nplain EXPLAIN. The issue is generally \"why did the planner misestimate\"\nand so knowing how its estimates diverge from reality is always a\ncritical bit of information.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Sep 2003 10:35:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indices arent being used "
}
] |
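A hedged sketch of how the two suggestions above can be tried in a single psql session; the 64 MB sort_mem value is only an illustrative number, not a recommendation from the thread:

SET sort_mem = 65536;       -- sort_mem is in kB, so this is 64 MB for this session
SET enable_seqscan = off;   -- only to see whether an index plan is considered at all
-- re-run EXPLAIN ANALYZE on the query from this thread, then restore the default:
SET enable_seqscan = on;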
[
{
"msg_contents": "\n\nHi List;\n\n Where can I find a plan-readinf tutorial ?\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n",
"msg_date": "Fri, 26 Sep 2003 13:47:59 -0300",
"msg_from": "Rhaoni Chiu Pereira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plan-Reading"
},
{
"msg_contents": "Rhaoni,\n\n> Where can I find a plan-readinf tutorial ?\n\nIt's a little out of date, but is very well written and gives you the basics:\nhttp://www.argudo.org/postgresql/soft-tuning.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 26 Sep 2003 10:34:07 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Plan-Reading"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n> Where can I find a plan-readinf tutorial?\n\nThis covers explain plans in depth:\n\nhttp://www.gtsm.com/oscon2003/explain.html\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200309291123\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE/eE65vJuQZxSWSsgRAiJeAJ9YPEopowDJiRgn9sXnrF2G8ddVHACfRR3F\n3mwwf3V1P1XCAB6wy/LnoXc=\n=5El1\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Mon, 29 Sep 2003 15:23:49 -0000",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Plan-Reading"
}
] |
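For readers following the links above, the distinction the tutorials build on is roughly the following; the table and column names here are invented:

-- EXPLAIN shows only the planner's cost estimates:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- EXPLAIN ANALYZE also runs the query and reports actual times and row counts,
-- which is what you compare against the estimates when reading a plan:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;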
[
{
"msg_contents": "Here is the explain analyze of the query:\n\nexplain analyze select count(*) from actvars, prodlevel where\nactvars.product_level=prodlevel.code_level and\nprodlevel.division_level='OY3S5LAPALL6';\n\n Aggregate (cost=3123459.62..3123459.62 rows=1 width=32) (actual\ntime=1547173.60..1547173.60 rows=1 loops=1)\n -> Merge Join (cost=3021564.79..3119827.17 rows=1452981 width=32) (actual\ntime=1400269.29..1545793.13 rows=1918466 loops=1)\n Merge Cond: (\"outer\".product_level = \"inner\".code_level)\n -> Sort (cost=3020875.00..3060938.81 rows=16025523 width=16) (actual\ntime=1400117.06..1518059.84 rows=16020985 loops=1)\n Sort Key: actvars.product_level\n -> Seq Scan on actvars (cost=0.00..365711.23 rows=16025523\nwidth=16) (actual time=29.14..51259.82 rows=16025523 loops=1)\n -> Sort (cost=689.79..694.48 rows=1877 width=16) (actual\ntime=92.90..1217.15 rows=1917991 loops=1)\n Sort Key: prodlevel.code_level\n -> Seq Scan on prodlevel (cost=0.00..587.75 rows=1877 width=16)\n(actual time=16.48..82.72 rows=1802 loops=1)\n Filter: (division_level = 'OY3S5LAPALL6'::bpchar)\n Total runtime: 1547359.08 msec\n\nI have tried diabeling the seqscan:\n\nset enable_seqscan=false;\n\nexplain select count(*) from actvars, prodlevel where\nactvars.product_level=prodlevel.code_level and\nprodlevel.division_level='OY3S5LAPALL6';\n\nAggregate (cost=6587448.25..6587448.25 rows=1 width=32)\n -> Nested Loop (cost=0.00..6583815.80 rows=1452981 width=32)\n -> Index Scan using division_level_prodlevel_index on prodlevel \n(cost=0.00..999.13 rows=1877 width=16)\n Index Cond: (division_level = 'OY3S5LAPALL6'::bpchar)\n -> Index Scan using product_level_actvars_index on actvars \n(cost=0.00..3492.95 rows=1161 width=16)\n Index Cond: (actvars.product_level = \"outer\".code_level)\n\nThis method forces the indices to work but it looks like it takes a long to\nfinish executing, I had to cancel the query after 10 min. Using vmstat i found\nthat there were alot of swap outs and swap ins, affecting the overall performance. \n\nHow can i speed this\nup?\n\n---------------------------------------------\nThis message was sent using Endymion MailMan.\nhttp://www.endymion.com/products/mailman/\n\n\n",
"msg_date": "Sat, 27 Sep 2003 14:48:58 GMT",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Indices arent being used"
},
{
"msg_contents": "[email protected] writes:\n> Here is the explain analyze of the query:\n> explain analyze select count(*) from actvars, prodlevel where\n> actvars.product_level=prodlevel.code_level and\n> prodlevel.division_level='OY3S5LAPALL6';\n\n> [ slow merge join ]\n\nI wonder whether a hash join wouldn't work better. Can you force a hash\njoin? (Try \"enable_mergejoin = 0\" and if needed \"enable_nestloop = 0\";\ndon't disable seqscans though.) If you can get such a plan, please post\nthe explain analyze results for it.\n\n> This method forces the indices to work but it looks like it takes a long to\n> finish executing, I had to cancel the query after 10 min.\n\n\"Force use of the indexes\" is not always an answer to performance issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Sep 2003 13:14:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indices arent being used "
}
] |
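A sketch of the settings asked for above; they disable the other join methods for this session only, as a diagnostic rather than a permanent tuning change:

SET enable_mergejoin = off;
SET enable_nestloop = off;   -- only if a merge join is still chosen after the first SET
-- leave enable_seqscan on, then:
EXPLAIN ANALYZE
SELECT count(*)
FROM actvars, prodlevel
WHERE actvars.product_level = prodlevel.code_level
  AND prodlevel.division_level = 'OY3S5LAPALL6';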
[
{
"msg_contents": "Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\nI need it to build a db server using 4x ultra320 scsi disks\ni'm thinking raid 1+0 but will try with raid5 too and compare\n\nDoes anyone have any experience with this model, good or bad i'd like to \nknow.. thanks :)\n\nas seen:\nhttp://uk.azzurri.com/product/product.cgi?productId=188\n\nRegards,\nRichard.\n\nPS: whoever mentioned starting a site with raid controller reviews, excellent \nidea - its hard to find decent info on which card to buy.\n\n",
"msg_date": "Sat, 27 Sep 2003 18:24:33 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "advice on raid controller"
},
{
"msg_contents": "On 2003-09-27T18:24:33+0100, Richard Jones wrote:\n> i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n\nYou may want to check out the PCI-X version of this controller that\nLSILogic just released (MegaRAID SCSI 320-2X). PCI-X is backwards\ncompatible with PCI, but also gives you greater bandwidth if your\nmotherboard supports it (at least, that's the marketing fluff).\n\nAdaptec and Intel makes (PCI) controllers with similar specs to the one\nyou mentioned.\n\n> I need it to build a db server using 4x ultra320 scsi disks\n> i'm thinking raid 1+0 but will try with raid5 too and compare\n\nThe Fujitsu 15k drives look sweet :-)\n\n> PS: whoever mentioned starting a site with raid controller reviews, excellent \n> idea - its hard to find decent info on which card to buy.\n\nYou may want to check recent archives for RAID threads.\n\n\n/Allan\n-- \nAllan Wind\nP.O. Box 2022\nWoburn, MA 01888-0022\nUSA\n",
"msg_date": "Sat, 27 Sep 2003 15:31:30 -0400",
"msg_from": "[email protected] (Allan Wind)",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "RIchard,\n\n> Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n> I need it to build a db server using 4x ultra320 scsi disks\n> i'm thinking raid 1+0 but will try with raid5 too and compare\n\nDepends on your type of database. If you're doing web or OLAP (lots of \nread-only queries) RAID 5 will probably be better. If you're doing OLTP \n(lots of read-write) RAID 10 will almost certainly be better. But if you \nhave time, testing is always best.\n\n> as seen:\n> http://uk.azzurri.com/product/product.cgi?productId=188\n\nI haven'te used it personally, but what I don't see in the docs is a \nbattery-backed cache. Without battery backup on the write cache, IMHO you \nare better off with Linux of BSD software RAID, since you'll have to turn off \nthe card's write cache, lest your database get corrupted on power-out.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 27 Sep 2003 12:53:04 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "On Sat, 2003-09-27 at 12:24, Richard Jones wrote:\n> Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n> I need it to build a db server using 4x ultra320 scsi disks\n> i'm thinking raid 1+0 but will try with raid5 too and compare\n> \n> Does anyone have any experience with this model, good or bad i'd like to \n> know.. thanks :)\n> \n> as seen:\n> http://uk.azzurri.com/product/product.cgi?productId=188\n\nI don't see anything on that page regarding RAM cache. It's been\nmy experience that RAID 5 needs a *minimum* of 128MB cache to have\ngood performance.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\nWhy is cyber-crime not being effectively controlled? What is \nfuelling the rampancy?\n* Parental apathy & the public education system\nhttp://www.linuxsecurity.com/feature_stories/feature_story-150.html\n\n",
"msg_date": "Sat, 27 Sep 2003 14:53:51 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "As others have mentioned, you really ought to get battery-backed cache if\nyou're doing any volume of writes. The ability to do safe write-back\ncaching makes an *insane* difference to write performance.\n\nThe site you link to also has that for only 15% more money:\nhttp://uk.azzurri.com/product/product.cgi?productId=80\n\nNo experience with the card(s) I'm afraid.\n\nIn general though, U320 will only be faster than U160 for large sequential\nreads, or when you have silly numbers of disks on a channel (i.e. more than\n4/channel). If you have silly numbers of disks, then RAID5 will probably be\nbetter, if you have 4 disks total then RAID1+0 will probably be better. In\nbetween it depends on all sorts of other factors. Bear in mind though that\nif you *do* have silly numbers of disks then more channels and more cache\nwill count for more than anything else, so spend the money on that rather\nthan latest-and-greatest performance for a single channel.\n\nHTH\n\nMatt\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Richard\n> Jones\n> Sent: 27 September 2003 18:25\n> To: [email protected]\n> Subject: [PERFORM] advice on raid controller\n>\n>\n> Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n> I need it to build a db server using 4x ultra320 scsi disks\n> i'm thinking raid 1+0 but will try with raid5 too and compare\n>\n> Does anyone have any experience with this model, good or bad i'd like to\n> know.. thanks :)\n>\n> as seen:\n> http://uk.azzurri.com/product/product.cgi?productId=188\n>\n> Regards,\n> Richard.\n>\n> PS: whoever mentioned starting a site with raid controller\n> reviews, excellent\n> idea - its hard to find decent info on which card to buy.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n",
"msg_date": "Sun, 28 Sep 2003 13:07:57 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "I've used the megaraid / LSI cards in the past and they were pretty good \nin terms of reliability, but the last one I used was the 328 model, from 4 \nyears ago or so. that one had a battery backup option for the cache, and \ncould go to 128 Meg. We tested it with 4/16 and 128 meg ram, and it was \nabout the same with each, but we didn't do heavy parallel testing either.\n\nHere's the page on the megaraid cards at lsilogic.com:\n\nhttp://www.lsilogic.com/products/stor_prod/raid/ultra320products.html\n\nOn Sun, 28 Sep 2003, Matt Clark wrote:\n\n> As others have mentioned, you really ought to get battery-backed cache if\n> you're doing any volume of writes. The ability to do safe write-back\n> caching makes an *insane* difference to write performance.\n> \n> The site you link to also has that for only 15% more money:\n> http://uk.azzurri.com/product/product.cgi?productId=80\n> \n> No experience with the card(s) I'm afraid.\n> \n> In general though, U320 will only be faster than U160 for large sequential\n> reads, or when you have silly numbers of disks on a channel (i.e. more than\n> 4/channel). If you have silly numbers of disks, then RAID5 will probably be\n> better, if you have 4 disks total then RAID1+0 will probably be better. In\n> between it depends on all sorts of other factors. Bear in mind though that\n> if you *do* have silly numbers of disks then more channels and more cache\n> will count for more than anything else, so spend the money on that rather\n> than latest-and-greatest performance for a single channel.\n> \n> HTH\n> \n> Matt\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Richard\n> > Jones\n> > Sent: 27 September 2003 18:25\n> > To: [email protected]\n> > Subject: [PERFORM] advice on raid controller\n> >\n> >\n> > Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n> > I need it to build a db server using 4x ultra320 scsi disks\n> > i'm thinking raid 1+0 but will try with raid5 too and compare\n> >\n> > Does anyone have any experience with this model, good or bad i'd like to\n> > know.. thanks :)\n> >\n> > as seen:\n> > http://uk.azzurri.com/product/product.cgi?productId=188\n> >\n> > Regards,\n> > Richard.\n> >\n> > PS: whoever mentioned starting a site with raid controller\n> > reviews, excellent\n> > idea - its hard to find decent info on which card to buy.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n",
"msg_date": "Mon, 29 Sep 2003 07:48:35 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "On Mon, 2003-09-29 at 06:48, scott.marlowe wrote:\n> I've used the megaraid / LSI cards in the past and they were pretty good \n> in terms of reliability, but the last one I used was the 328 model, from 4 \n> years ago or so. that one had a battery backup option for the cache, and \n> could go to 128 Meg. We tested it with 4/16 and 128 meg ram, and it was \n> about the same with each, but we didn't do heavy parallel testing either.\n> \n> Here's the page on the megaraid cards at lsilogic.com:\n> \n> http://www.lsilogic.com/products/stor_prod/raid/ultra320products.html\n> \n> On Sun, 28 Sep 2003, Matt Clark wrote:\n> \n> > As others have mentioned, you really ought to get battery-backed cache if\n> > you're doing any volume of writes. The ability to do safe write-back\n> > caching makes an *insane* difference to write performance.\n> > \n> > The site you link to also has that for only 15% more money:\n> > http://uk.azzurri.com/product/product.cgi?productId=80\n> > \n> > No experience with the card(s) I'm afraid.\n> > \n> > In general though, U320 will only be faster than U160 for large sequential\n> > reads, or when you have silly numbers of disks on a channel (i.e. more than\n> > 4/channel). If you have silly numbers of disks, then RAID5 will probably be\n> > better, if you have 4 disks total then RAID1+0 will probably be better. In\n> > between it depends on all sorts of other factors. Bear in mind though that\n> > if you *do* have silly numbers of disks then more channels and more cache\n> > will count for more than anything else, so spend the money on that rather\n> > than latest-and-greatest performance for a single channel.\n\nJust to add my thoughts, we use the MegaRaid Elite 1650 in 3 servers\nhere that drive our core databases. We paired up the controllers with\nthe Seagate Cheetah 10k drives, we could have purchased the X15's which\nare Seagate's 15k version, but due to budget constraints and lack of\ntrue performance increase from the 10k to the 15k rpm drives we didn't\nopt for them.\n\nI have to say that I've been 100% pleased with the performance and\nreliability of the Megaraid controllers. They do everything a good\ncontroller should and they are very easy to manage. The driver is\nactively maintained by the guys at LSI and their tech support personnel\nare very good as well.\n\nIf you want any specific information or have any questions about our\nexperience or configuration please feel free to contact me.\n\nSincerely,\n\nWill LaShell\n\n\n\n> > HTH\n> > \n> > Matt\n> > \n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of Richard\n> > > Jones\n> > > Sent: 27 September 2003 18:25\n> > > To: [email protected]\n> > > Subject: [PERFORM] advice on raid controller\n> > >\n> > >\n> > > Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\n> > > I need it to build a db server using 4x ultra320 scsi disks\n> > > i'm thinking raid 1+0 but will try with raid5 too and compare\n> > >\n> > > Does anyone have any experience with this model, good or bad i'd like to\n> > > know.. thanks :)\n> > >\n> > > as seen:\n> > > http://uk.azzurri.com/product/product.cgi?productId=188\n> > >\n> > > Regards,\n> > > Richard.\n> > >\n> > > PS: whoever mentioned starting a site with raid controller\n> > > reviews, excellent\n> > > idea - its hard to find decent info on which card to buy.",
"msg_date": "29 Sep 2003 10:20:14 -0700",
"msg_from": "Will LaShell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "On 29 Sep 2003, Will LaShell wrote:\n\n> On Mon, 2003-09-29 at 06:48, scott.marlowe wrote:\n> > I've used the megaraid / LSI cards in the past and they were pretty good \n> > in terms of reliability, but the last one I used was the 328 model, from 4 \n> > years ago or so. that one had a battery backup option for the cache, and \n> > could go to 128 Meg. We tested it with 4/16 and 128 meg ram, and it was \n> > about the same with each, but we didn't do heavy parallel testing either.\n> > \n> > Here's the page on the megaraid cards at lsilogic.com:\n> > \n> > http://www.lsilogic.com/products/stor_prod/raid/ultra320products.html\n> > \n> > On Sun, 28 Sep 2003, Matt Clark wrote:\n> > \n> > > As others have mentioned, you really ought to get battery-backed cache if\n> > > you're doing any volume of writes. The ability to do safe write-back\n> > > caching makes an *insane* difference to write performance.\n> > > \n> > > The site you link to also has that for only 15% more money:\n> > > http://uk.azzurri.com/product/product.cgi?productId=80\n> > > \n> > > No experience with the card(s) I'm afraid.\n> > > \n> > > In general though, U320 will only be faster than U160 for large sequential\n> > > reads, or when you have silly numbers of disks on a channel (i.e. more than\n> > > 4/channel). If you have silly numbers of disks, then RAID5 will probably be\n> > > better, if you have 4 disks total then RAID1+0 will probably be better. In\n> > > between it depends on all sorts of other factors. Bear in mind though that\n> > > if you *do* have silly numbers of disks then more channels and more cache\n> > > will count for more than anything else, so spend the money on that rather\n> > > than latest-and-greatest performance for a single channel.\n> \n> Just to add my thoughts, we use the MegaRaid Elite 1650 in 3 servers\n> here that drive our core databases. We paired up the controllers with\n> the Seagate Cheetah 10k drives, we could have purchased the X15's which\n> are Seagate's 15k version, but due to budget constraints and lack of\n> true performance increase from the 10k to the 15k rpm drives we didn't\n> opt for them.\n> \n> I have to say that I've been 100% pleased with the performance and\n> reliability of the Megaraid controllers. They do everything a good\n> controller should and they are very easy to manage. The driver is\n> actively maintained by the guys at LSI and their tech support personnel\n> are very good as well.\n> \n> If you want any specific information or have any questions about our\n> experience or configuration please feel free to contact me.\n\nTo add one more feature the LSI/MegaRAIDs have that I find interesting, \nyou can put two in a machine, build a RAID0 or 5 on each card, then mirror \nthe two cards together, and should one card / RAID0 ot 5 chain die, the \nother card will keep working. I.e. the work like one big card with \nfailover.\n\n",
"msg_date": "Mon, 29 Sep 2003 13:40:16 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "Stupid question, perhaps, but would a battery-backed cache make it safe to \nset fsync=false in postgresql.conf?\n\n/Palle\n\n--On söndag, september 28, 2003 13.07.57 +0100 Matt Clark <[email protected]> \nwrote:\n\n> As others have mentioned, you really ought to get battery-backed cache if\n> you're doing any volume of writes. The ability to do safe write-back\n> caching makes an *insane* difference to write performance.\n\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 23:31:54 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "Come to think of it, I guess a battery-backed cache will make fsync as fast \nas no fsync, right? So, the q was kinda stoopid... :-/\n\n/Palle\n\n--On måndag, september 29, 2003 23.31.54 +0200 Palle Girgensohn \n<[email protected]> wrote:\n\n> Stupid question, perhaps, but would a battery-backed cache make it safe\n> to set fsync=false in postgresql.conf?\n>\n> /Palle\n>\n> --On söndag, september 28, 2003 13.07.57 +0100 Matt Clark\n> <[email protected]> wrote:\n>\n>> As others have mentioned, you really ought to get battery-backed cache if\n>> you're doing any volume of writes. The ability to do safe write-back\n>> caching makes an *insane* difference to write performance.\n>\n>\n>\n>\n\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 23:35:16 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "Not in general. Besides, with a write-back cache an fsync() is very nearly\n'free', as the controller will report the write as completed as soon as it's\nwritten to cache.\n\nI keep meaning to benchmark the difference, but I only have the facility on\na production box, so caution gets the better of me every time :-)\n\nAFAIK the fsync calls are used to guarantee the _ordering_ of writes to\npermanent storage (i.e. fsync() is called before doing something, rather\nthan after doing something. So PG can be sure that before it does B, A has\ndefinitely been written to disk).\n\nBut I could well be wrong. And there could well be strategies exploitable\nwith the knowledge that a write-back cache exists that aren't currently\nimplemented - though intuitively I doubt it.\n\nM\n\n\n\n\n> -----Original Message-----\n> From: Palle Girgensohn [mailto:[email protected]]\n> Sent: 29 September 2003 22:32\n> To: Matt Clark; [email protected]; [email protected]\n> Subject: Re: [PERFORM] advice on raid controller\n>\n>\n> Stupid question, perhaps, but would a battery-backed cache make\n> it safe to\n> set fsync=false in postgresql.conf?\n>\n> /Palle\n>\n> --On s�ndag, september 28, 2003 13.07.57 +0100 Matt Clark\n> <[email protected]>\n> wrote:\n>\n> > As others have mentioned, you really ought to get\n> battery-backed cache if\n> > you're doing any volume of writes. The ability to do safe write-back\n> > caching makes an *insane* difference to write performance.\n>\n>\n>\n>\n>\n\n",
"msg_date": "Mon, 29 Sep 2003 22:41:39 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": "\\Palle Girgensohn wrote:\n> Come to think of it, I guess a battery-backed cache will make fsync as fast \n> as no fsync, right? So, the q was kinda stoopid... :-/\n\nWith fsync off, the data might never get to the battery-backed RAM. :-(\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 29 Sep 2003 17:52:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": ">>>>> \"RJ\" == Richard Jones <[email protected]> writes:\n\nRJ> Hi, i'm on the verge of buying a \"MegaRAID SCSI 320-2\" raid controller.\nRJ> I need it to build a db server using 4x ultra320 scsi disks\nRJ> i'm thinking raid 1+0 but will try with raid5 too and compare\n\nNo specific tips on that particular RAID, but in general it seems that\nyou want to *disable* the read-ahead and enable the write-back cache.\nThis is from reading on the linux megaraid developers list.\n\nAlso, for 4 disks, go with RAID 1+0 for your best performance. I\nfound it faster. However, with my 14 disk system, RAID5 is fastest.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 30 Sep 2003 16:30:15 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
},
{
"msg_contents": ">>>>> \"PG\" == Palle Girgensohn <[email protected]> writes:\n\nPG> Come to think of it, I guess a battery-backed cache will make fsync as\nPG> fast as no fsync, right? So, the q was kinda stoopid... :-/\n\nIn my testing, yes, the battery cache makes fsync=true just about as\nfast as fsync=false. it was only about 2 seconds slower (out of 4\nhours) while doing a restore.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 30 Sep 2003 16:32:27 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: advice on raid controller"
}
] |
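The fsync part of this thread boils down to something like the following postgresql.conf sketch; the comment only summarizes the advice given above rather than adding anything new:

# Keep fsync on even with a battery-backed write-back cache: the controller
# acknowledges the write once it is in cache, so fsync costs little, while
# fsync = false risks losing data that never reached the controller at all.
fsync = true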
[
{
"msg_contents": "For the application that I'm working on, we want to\nuse data types that are database independent. (most\ndatabases has decimal, but not big int).\n\nAnyhow, we are planning on using decimal(19,0) for our\nprimary keys instead of a big int, would there be a\nperformance difference in using a bigint over using decimals?\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search\nhttp://shopping.yahoo.com\n",
"msg_date": "Sat, 27 Sep 2003 16:54:14 -0700 (PDT)",
"msg_from": "\"Yusuf W.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance: BigInt vs Decimal(19,0)"
},
{
"msg_contents": "\"Yusuf W.\" <[email protected]> writes:\n> For the application that I'm working on, we want to\n> use data types that are database independent. (most\n> databases has decimal, but not big int).\n\nMost databases have bigint, I think.\n\n> Anyhow, we are planning on using decimal(19,0) for our\n> primary keys instead of a big int, would there be a\n> performance difference in using a bigint over using decimals?\n\nYou'll be taking a very large performance hit, for very little benefit\nthat I can see. How hard could it be to change the column declarations\nif you ever move to a database without bigint? There's not normally\nmuch need for apps to be explicitly aware of the column type names.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Sep 2003 20:26:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0) "
},
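A minimal way to check the size of that hit on one's own data; the table names are invented and the data-loading step is left out:

CREATE TABLE keys_bigint  (id bigint);
CREATE TABLE keys_numeric (id numeric(19,0));
-- load the same key values into both tables, then compare, for example:
EXPLAIN ANALYZE SELECT count(*) FROM keys_bigint  a JOIN keys_bigint  b USING (id);
EXPLAIN ANALYZE SELECT count(*) FROM keys_numeric a JOIN keys_numeric b USING (id);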
{
"msg_contents": "Now, I've got to convince my project's software\narchitech, that a bigint would be better than a\ndecimal. \n\nDoes anyone know where I could get some documentation\non how the int and decimal are implemented so I could\nprove to him that ints are better? Can people suggest\ngood points to make in order to prove it? \n\nThanks in advance.\n\n--- Tom Lane <[email protected]> wrote:\n> \"Yusuf W.\" <[email protected]> writes:\n> > For the application that I'm working on, we want\n> to\n> > use data types that are database independent. \n> (most\n> > databases has decimal, but not big int).\n> \n> Most databases have bigint, I think.\n> \n> > Anyhow, we are planning on using decimal(19,0) for\n> our\n> > primary keys instead of a big int, would there be\n> a\n> > performance difference in using a bigint over\n> using decimals?\n> \n> You'll be taking a very large performance hit, for\n> very little benefit\n> that I can see. How hard could it be to change the\n> column declarations\n> if you ever move to a database without bigint? \n> There's not normally\n> much need for apps to be explicitly aware of the\n> column type names.\n> \n> \t\t\tregards, tom lane\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Shopping - with improved product search\nhttp://shopping.yahoo.com\n",
"msg_date": "Sat, 27 Sep 2003 19:39:36 -0700 (PDT)",
"msg_from": "\"Yusuf W.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0) "
},
{
"msg_contents": "Yusuf,\n\n> Does anyone know where I could get some documentation\n> on how the int and decimal are implemented so I could\n> prove to him that ints are better? Can people suggest\n> good points to make in order to prove it?\n\nRTFM:\nhttp://www.postgresql.org/docs/7.3/interactive/datatype.html#DATATYPE-NUMERIC\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 27 Sep 2003 20:06:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0)"
},
{
"msg_contents": "Wouldn't it be the most portable solution to work with a domain?\nCREATE DOMAIN BIG_NUMBER AS BIGINT;\n\nIf I use BIG_NUMBER everywhere I need it in my database, porting it to\nother database products should be easy... any SQL 92 compliant dbms\nshould support domains.\n\nOn Sun, 2003-09-28 at 00:06, Josh Berkus wrote:\n\n> Yusuf,\n> \n> > Does anyone know where I could get some documentation\n> > on how the int and decimal are implemented so I could\n> > prove to him that ints are better? Can people suggest\n> > good points to make in order to prove it?\n> \n> RTFM:\n> http://www.postgresql.org/docs/7.3/interactive/datatype.html#DATATYPE-NUMERIC",
"msg_date": "Mon, 29 Sep 2003 11:59:27 -0300",
"msg_from": "Franco Bruno Borghesi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0)"
},
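A short usage sketch of the domain suggestion above; the table definition is invented:

CREATE DOMAIN big_number AS bigint;

CREATE TABLE invoice (
    invoice_id big_number PRIMARY KEY,
    amount     numeric(12,2)
);
-- If a target database lacks bigint, only the domain definition needs to
-- change, not every table and column that uses it.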
{
"msg_contents": "Franco,\n\n> Wouldn't it be the most portable solution to work with a domain?\n> CREATE DOMAIN BIG_NUMBER AS BIGINT;\n>\n> If I use BIG_NUMBER everywhere I need it in my database, porting it to\n> other database products should be easy... any SQL 92 compliant dbms\n> should support domains.\n\nThis is a good idea, on general principles. Abstracted design is a good \nthing. \n\nRegrettably, though, a lot of commercial databases do not support DOMAIN. \nYou'll need to check which databases you are thinking of porting to first.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 29 Sep 2003 10:19:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0)"
},
{
"msg_contents": "\nOn Saturday, September 27, 2003, at 10:39 PM, Yusuf W. wrote:\n\n> Now, I've got to convince my project's software\n> architech, that a bigint would be better than a\n> decimal.\n>\n> Does anyone know where I could get some documentation\n> on how the int and decimal are implemented so I could\n> prove to him that ints are better? Can people suggest\n> good points to make in order to prove it?\n>\n\nPrint out Tom's reply and give it to him. Saying 'one of the people who \ndevelops the thing says so' ought to carry some weight. I would hope...\n\n\n> Thanks in advance.\n>\n> --- Tom Lane <[email protected]> wrote:\n>> \"Yusuf W.\" <[email protected]> writes:\n>>> For the application that I'm working on, we want\n>> to\n>>> use data types that are database independent.\n>> (most\n>>> databases has decimal, but not big int).\n>>\n>> Most databases have bigint, I think.\n>>\n>>> Anyhow, we are planning on using decimal(19,0) for\n>> our\n>>> primary keys instead of a big int, would there be\n>> a\n>>> performance difference in using a bigint over\n>> using decimals?\n>>\n>> You'll be taking a very large performance hit, for\n>> very little benefit\n>> that I can see. How hard could it be to change the\n>> column declarations\n>> if you ever move to a database without bigint?\n>> There's not normally\n>> much need for apps to be explicitly aware of the\n>> column type names.\n>>\n>> \t\t\tregards, tom lane\n>\n>\n> __________________________________\n> Do you Yahoo!?\n> The New Yahoo! Shopping - with improved product search\n> http://shopping.yahoo.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Mon, 29 Sep 2003 13:26:31 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance: BigInt vs Decimal(19,0) "
}
] |
[
{
"msg_contents": "Hi!\n\nI have a SQL statement that I cannot get to use the index. postgresql \ninsists on using a seqscan and performance is very poor. set enable_seqscan \n= true boost performance drastically, as you can see below. Since seqscan \nis not always bad, I'd rather not turn it off completely, but rather get \nthe planner to do the right thing here. Is there another way to do this, \napart from setting enable_seqscan=false?\n\nThanks,\nPalle\n\n\nthe tables are:\nperson with about 30000 tuples\ndyn_field_content_person, 331156 tuples\ndyn_field_person, just 15 tuples\ncourse about 700 tuples\npartitcipant with ~ 7800 tuples, where ~ 60 have course_id=707...\n\nuu=# explain analyze\nuu-# select lower(substr(p.last_name,1,1)) as letter, count(*)\nuu-# FROM course c join group_data gd on (c.active_group_id = \ngd.this_group_id)\nuu-# join person p on (gd.item_text = p.userid)\nuu-# join dyn_field_person dfp on (dfp.extern_item_id = 10 and \ndfp.giver=c.giver)\nuu-# join dyn_field_content_person dfcp on (dfp.id = \ndfcp.dyn_field_id and dfcp.userid=p.userid)\nuu-# left outer join participant pt on (pt.userid = p.userid and \npt.course_id = 707)\nuu-# WHERE c.id = 707\nuu-# group by 1\nuu-# ;\n \nQUERY PLAN \n\n\n---------------------------------------------------------------------------\n---------------------------------------------------------------------------\n---------------------------------------\n-------------------\n Aggregate (cost=10496.30..10498.35 rows=27 width=106) (actual \ntime=4166.01..4167.23 rows=19 loops=1)\n -> Group (cost=10496.30..10497.67 rows=273 width=106) (actual \ntime=4165.92..4166.80 rows=60 loops=1)\n -> Sort (cost=10496.30..10496.98 rows=273 width=106) (actual \ntime=4165.91..4166.10 rows=60 loops=1)\n Sort Key: lower(substr(p.last_name, 1, 1))\n -> Merge Join (cost=10443.75..10485.23 rows=273 width=106) \n(actual time=4094.42..4165.20 rows=60 loops=1)\n Merge Cond: (\"outer\".userid = \"inner\".userid)\n Join Filter: (\"inner\".course_id = 707)\n -> Sort (cost=9803.86..9804.54 rows=273 width=88) \n(actual time=3823.78..3823.97 rows=60 loops=1)\n Sort Key: dfcp.userid\n -> Hash Join (cost=2444.22..9792.79 rows=273 \nwidth=88) (actual time=1140.50..3822.60 rows=60 loops=1)\n Hash Cond: (\"outer\".userid = \n\"inner\".item_text)\n Join Filter: (\"inner\".id = \n\"outer\".dyn_field_id)\n -> Seq Scan on dyn_field_content_person \ndfcp (cost=0.00..5643.56 rows=331156 width=16) (actual time=0.01..2028.31 \nrows=331156 loops=1)\n -> Hash (cost=2443.54..2443.54 rows=272 \nwidth=72) (actual time=340.24..340.24 rows=0 loops=1)\n -> Nested Loop \n(cost=1401.84..2443.54 rows=272 width=72) (actual time=338.76..339.91 \nrows=60 loops=1)\n Join Filter: (\"outer\".giver = \n\"inner\".giver)\n -> Seq Scan on \ndyn_field_person dfp (cost=0.00..1.19 rows=1 width=16) (actual \ntime=0.06..0.09 rows=1 loops=1)\n Filter: (extern_item_id \n= 10)\n -> Materialize \n(cost=2437.67..2437.67 rows=374 width=56) (actual time=338.64..338.82 \nrows=60 loops=1)\n -> Hash Join \n(cost=1401.84..2437.67 rows=374 width=56) (actual time=7.74..338.36 rows=60 \nloops=1)\n Hash Cond: \n(\"outer\".userid = \"inner\".item_text)\n -> Seq Scan on \nperson p (cost=0.00..806.09 rows=30009 width=23) (actual time=0.01..203.67 \nrows=30009 loops=1)\n -> Hash \n(cost=1400.89..1400.89 rows=378 width=33) (actual time=1.60..1.60 rows=0 \nloops=1)\n -> Nested \nLoop (cost=0.00..1400.89 rows=378 width=33) (actual time=0.12..1.28 \nrows=60 loops=1)\n -> \nIndex Scan using course_pkey on course c 
(cost=0.00..5.08 rows=1 width=16) \n(actual time=0.06..0.06 rows=1 loops=1)\n \nIndex Cond: (id = 707)\n -> \nIndex Scan using group_data_this_idx on group_data gd (cost=0.00..1390.80 \nrows=402 width=17) (actual time=0.04..0.6\n6 rows=60 loops=1)\n \nIndex Cond: (\"outer\".active_group_id = gd.this_group_id)\n -> Sort (cost=639.90..659.42 rows=7808 width=18) \n(actual time=266.55..290.81 rows=7722 loops=1)\n Sort Key: pt.userid\n -> Seq Scan on participant pt \n(cost=0.00..135.08 rows=7808 width=18) (actual time=0.02..50.24 rows=7808 \nloops=1)\n Total runtime: 4170.16 msec\n(32 rader)\n\nTid: 4184,68 ms\nuu=# set enable_seqscan = false;\nSET\nTid: 1,20 ms\nuu=# explain analyze\nuu-# select lower(substr(p.last_name,1,1)) as letter, count(*)\nuu-# FROM course c join group_data gd on (c.active_group_id = \ngd.this_group_id)\nuu-# join person p on (gd.item_text = p.userid)\nuu-# join dyn_field_person dfp on (dfp.extern_item_id = 10 and \ndfp.giver=c.giver)\nuu-# join dyn_field_content_person dfcp on (dfp.id = \ndfcp.dyn_field_id and dfcp.userid=p.userid)\nuu-# left outer join participant pt on (pt.userid = p.userid and \npt.course_id = 707)\nuu-# WHERE c.id = 707\nuu-# group by 1\nuu-# ;\n \nQUERY PLAN \n\n\n---------------------------------------------------------------------------\n---------------------------------------------------------------------------\n---------------------------------------\n---------\n Aggregate (cost=17928.32..17930.37 rows=27 width=106) (actual \ntime=171.37..172.58 rows=19 loops=1)\n -> Group (cost=17928.32..17929.68 rows=273 width=106) (actual \ntime=171.27..172.14 rows=60 loops=1)\n -> Sort (cost=17928.32..17929.00 rows=273 width=106) (actual \ntime=171.26..171.45 rows=60 loops=1)\n Sort Key: lower(substr(p.last_name, 1, 1))\n -> Merge Join (cost=17545.53..17917.25 rows=273 width=106) \n(actual time=36.64..170.53 rows=60 loops=1)\n Merge Cond: (\"outer\".userid = \"inner\".userid)\n Join Filter: (\"inner\".course_id = 707)\n -> Sort (cost=17545.53..17546.22 rows=273 width=88) \n(actual time=28.62..28.84 rows=60 loops=1)\n Sort Key: dfcp.userid\n -> Nested Loop (cost=0.00..17534.46 rows=273 \nwidth=88) (actual time=7.99..27.49 rows=60 loops=1)\n Join Filter: (\"outer\".id = \n\"inner\".dyn_field_id)\n -> Nested Loop (cost=0.00..3685.31 \nrows=272 width=72) (actual time=7.67..8.95 rows=60 loops=1)\n Join Filter: (\"outer\".giver = \n\"inner\".giver)\n -> Index Scan using \ndf_person_giver_id_idx on dyn_field_person dfp (cost=0.00..6.20 rows=1 \nwidth=16) (actual time=0.14..0.17 rows=1 loops=1)\n Filter: (extern_item_id = 10)\n -> Materialize \n(cost=3674.43..3674.43 rows=374 width=56) (actual time=7.49..7.69 rows=60 \nloops=1)\n -> Nested Loop \n(cost=0.00..3674.43 rows=374 width=56) (actual time=0.24..7.22 rows=60 \nloops=1)\n -> Nested Loop \n(cost=0.00..1400.89 rows=378 width=33) (actual time=0.10..1.34 rows=60 \nloops=1)\n -> Index Scan \nusing course_pkey on course c (cost=0.00..5.08 rows=1 width=16) (actual \ntime=0.04..0.05 rows=1 loops=1)\n Index Cond: \n(id = 707)\n -> Index Scan \nusing group_data_this_idx on group_data gd (cost=0.00..1390.80 rows=402 \nwidth=17) (actual time=0.04..0.70 rows=60 lo\nops=1)\n Index Cond: \n(\"outer\".active_group_id = gd.this_group_id)\n -> Index Scan using \nperson_pkey on person p (cost=0.00..6.01 rows=1 width=23) (actual \ntime=0.07..0.08 rows=1 loops=60)\n Index Cond: \n(\"outer\".item_text = p.userid)\n -> Index Scan using \ndf_content_person_userid_id_idx on dyn_field_content_person dfcp \n(cost=0.00..50.75 rows=12 
width=16) (actual time=0.08..0.23 rows=11 l\noops=60)\n Index Cond: (dfcp.userid = \n\"outer\".item_text)\n -> Index Scan using participant_uid_cid_idx on \nparticipant pt (cost=0.00..349.76 rows=7808 width=18) (actual \ntime=0.07..84.34 rows=7722 loops=1)\n Total runtime: 173.37 msec\n(28 rader)\n\nTid: 183,37 ms\n\n",
"msg_date": "Sun, 28 Sep 2003 22:54:41 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "avoiding seqscan?"
},
{
"msg_contents": "Palle,\n\n> I have a SQL statement that I cannot get to use the index. postgresql\n> insists on using a seqscan and performance is very poor. set enable_seqscan\n> = true boost performance drastically, as you can see below. Since seqscan\n> is not always bad, I'd rather not turn it off completely, but rather get\n> the planner to do the right thing here. Is there another way to do this,\n> apart from setting enable_seqscan=false?\n\nIn your postgresql.conf, try setting effective_cache_size to something like \n50% of your system's RAM, and lovering random_page_cost to 2.0 or even 1.5. \nThen restart PostgreSQL and try your query again.\n\nWhat version, btw?\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 28 Sep 2003 14:34:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
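Both settings can be tried per session before touching postgresql.conf; the numbers below are simply the ones discussed in this thread, and effective_cache_size is counted in 8 kB pages:

SET effective_cache_size = 64000;   -- 64000 * 8 kB is roughly 500 MB
SET random_page_cost = 2;
-- then re-run EXPLAIN ANALYZE on the problem query and compare the plans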
{
"msg_contents": "Palle Girgensohn wrote:\n> uu=# explain analyze\n> uu-# select lower(substr(p.last_name,1,1)) as letter, count(*)\n> uu-# FROM course c join group_data gd on (c.active_group_id = \n> gd.this_group_id)\n> uu-# join person p on (gd.item_text = p.userid)\n> uu-# join dyn_field_person dfp on (dfp.extern_item_id = 10 and \n> dfp.giver=c.giver)\n> uu-# join dyn_field_content_person dfcp on (dfp.id = \n> dfcp.dyn_field_id and dfcp.userid=p.userid)\n> uu-# left outer join participant pt on (pt.userid = p.userid and \n> pt.course_id = 707)\n> uu-# WHERE c.id = 707\n> uu-# group by 1\n> uu-# ;\n\nWhy are you using this form of join ? When and if is not necessary use \nthe implicit form.\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 00:54:43 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "Hi,\n\nIndeed, setting random_page_cost does the trick. Thanks!\n\nIt seems to make sense to set random_page_cost to this value. Are there any \ndrawbacks?\n\npostgresql-7.3.4\n\npostgresql.conf:\n\ntcpip_socket = true\nmax_connections = 100\nsuperuser_reserved_connections = 2\n\n# Performance\n#\nshared_buffers = 12000\nsort_mem = 8192\nvacuum_mem = 32768\neffective_cache_size = 64000\nrandom_page_cost = 2\n\n...\n\n--On söndag, september 28, 2003 14.34.25 -0700 Josh Berkus \n<[email protected]> wrote:\n\n> Palle,\n>\n>> I have a SQL statement that I cannot get to use the index. postgresql\n>> insists on using a seqscan and performance is very poor. set\n>> enable_seqscan = true boost performance drastically, as you can see\n>> below. Since seqscan is not always bad, I'd rather not turn it off\n>> completely, but rather get the planner to do the right thing here. Is\n>> there another way to do this, apart from setting enable_seqscan=false?\n>\n> In your postgresql.conf, try setting effective_cache_size to something\n> like 50% of your system's RAM, and lovering random_page_cost to 2.0 or\n> even 1.5. Then restart PostgreSQL and try your query again.\n>\n> What version, btw?\n>\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 00:56:54 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "Palle,\n\n> Indeed, setting random_page_cost does the trick. Thanks!\n>\n> It seems to make sense to set random_page_cost to this value. Are there any\n> drawbacks?\n\nOnly if your server was heavily multi-tasking, and as a result had little \nRAM+CPU available. Then you'd want to raise the value again.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 28 Sep 2003 19:28:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "Will that make a difference? From what I've seen, it does not make much \ndifference, but I have seen queries speed up when rewritten explicit joins. \nI guess it depends on other things, but is it really so that the explicit \njoins are bad somehow? Do you have any pointers to documentation about it, \nif so?\n\nThanks,\nPalle\n\n--On måndag, september 29, 2003 00.54.43 +0200 Gaetano Mendola \n<[email protected]> wrote:\n\n> Palle Girgensohn wrote:\n>> uu=# explain analyze\n>> uu-# select lower(substr(p.last_name,1,1)) as letter, count(*)\n>> uu-# FROM course c join group_data gd on (c.active_group_id =\n>> gd.this_group_id)\n>> uu-# join person p on (gd.item_text = p.userid)\n>> uu-# join dyn_field_person dfp on (dfp.extern_item_id = 10 and\n>> dfp.giver=c.giver)\n>> uu-# join dyn_field_content_person dfcp on (dfp.id =\n>> dfcp.dyn_field_id and dfcp.userid=p.userid)\n>> uu-# left outer join participant pt on (pt.userid = p.userid and\n>> pt.course_id = 707)\n>> uu-# WHERE c.id = 707\n>> uu-# group by 1\n>> uu-# ;\n>\n> Why are you using this form of join ? When and if is not necessary use\n> the implicit form.\n>\n>\n> Regards\n> Gaetano Mendola\n>\n>\n>\n\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 15:25:36 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "Palle Girgensohn wrote:\n> Will that make a difference? From what I've seen, it does not make much \n> difference, but I have seen queries speed up when rewritten explicit \n> joins. I guess it depends on other things, but is it really so that the \n> explicit joins are bad somehow? Do you have any pointers to \n> documentation about it, if so?\n> \n> Thanks,\n> Palle\n\n\nAre not absolutelly bad but sometimes that path that you choose is not\nthe optimal, in postgres 7.4 the think will be better.\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Mon, 29 Sep 2003 15:31:31 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "Palle Girgensohn wrote:\n> Will that make a difference? From what I've seen, it does not make much \n> difference, but I have seen queries speed up when rewritten explicit \n> joins. I guess it depends on other things, but is it really so that the \n> explicit joins are bad somehow? Do you have any pointers to \n> documentation about it, if so?\n> \n> Thanks,\n> Palle\n\n\nAre not absolutelly bad but sometimes that path that you choose is not\nthe optimal, in postgres 7.4 use the explicit join will be less \nlimitative for the planner.\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Mon, 29 Sep 2003 15:32:31 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
{
"msg_contents": "[email protected] (Palle Girgensohn) writes:\n> Will that make a difference? From what I've seen, it does not make\n> much difference, but I have seen queries speed up when rewritten\n> explicit joins. I guess it depends on other things, but is it really\n> so that the explicit joins are bad somehow? Do you have any pointers\n> to documentation about it, if so?\n\nThe problem is that if you expressly specify the joins, the query\noptimizer can't choose its own paths. And while that may not be\nbetter at the moment, it is quite possible that when you upgrade to a\nnewer version, those queries, if \"not join-specified,\" could\nimmediately get faster.\n\nI would expect that the query that uses implicit joins will be clearer\nto read, which adds a little further merit to that direction.\n\nThat goes along with the usual way that it is preferable to optimize\nthings, namely that you should start by solving the problem as simply\nas you can, and only proceed to further optimization if that actually\nproves necessary. Optimization efforts commonly add complexity and\nmake code more difficult to maintain; that's not the place to start if\nyou don't even know the effort is necessary.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Mon, 29 Sep 2003 11:12:55 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: avoiding seqscan?"
},
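The two syntaxes under discussion, side by side; the table names are invented. In 7.3 the explicit form also pins the join order, which is the constraint described above, and the thread notes that 7.4 relaxes this:

-- Explicit join syntax: the planner (in 7.3) joins the tables in exactly this order.
SELECT *
FROM a
  JOIN b ON (a.id = b.a_id)
  JOIN c ON (b.id = c.b_id);

-- Implicit join syntax: the planner is free to choose the join order itself.
SELECT *
FROM a, b, c
WHERE a.id = b.a_id
  AND b.id = c.b_id;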
{
"msg_contents": "\n\n--On måndag, september 29, 2003 11.12.55 -0400 Christopher Browne \n<[email protected]> wrote:\n\n> [email protected] (Palle Girgensohn) writes:\n>> Will that make a difference? From what I've seen, it does not make\n>> much difference, but I have seen queries speed up when rewritten\n>> explicit joins. I guess it depends on other things, but is it really\n>> so that the explicit joins are bad somehow? Do you have any pointers\n>> to documentation about it, if so?\n>\n> The problem is that if you expressly specify the joins, the query\n> optimizer can't choose its own paths. And while that may not be\n> better at the moment, it is quite possible that when you upgrade to a\n> newer version, those queries, if \"not join-specified,\" could\n> immediately get faster.\n\nYou've got a point here. Still, with some queries, since the data is pretty \nstatic and we know much about its distribution over the tables, we had to \nexplicitally tell postgresql how to optimze the queries to get them fast \nenough. We cannot afford any queries to be more than fractions of seconds, \nreally.\n\n> I would expect that the query that uses implicit joins will be clearer\n> to read, which adds a little further merit to that direction.\n\nDepends, I actually don't agree on this, but I guess it depends on which \nsyntax you're used to.\n\n> That goes along with the usual way that it is preferable to optimize\n> things, namely that you should start by solving the problem as simply\n> as you can, and only proceed to further optimization if that actually\n> proves necessary. Optimization efforts commonly add complexity and\n> make code more difficult to maintain; that's not the place to start if\n> you don't even know the effort is necessary.\n\nOh, but of course. For the queries I refer to, optimization actually proved \nnecessary, believe me :-)\n\nCheers,\nPalle\n\n",
"msg_date": "Mon, 29 Sep 2003 23:27:48 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seqscan?"
}
] |
[
{
"msg_contents": "--On måndag, september 29, 2003 15.32.31 +0200 Gaetano Mendola \n<[email protected]> wrote:\n\n> Are not absolutelly bad but sometimes that path that you choose is not\n> the optimal, in postgres 7.4 use the explicit join will be less\n> limitative for the planner.\n>\n> Regards\n> Gaetano Mendola\n\nAh, OK. True! In this case though, the sql questions are crafted with great \ncare, since we have a lot of data in a few of the tables, other are almost \nempty, so we try to limit the amount of data as early as possible. Our \nexperience says that we often do a better job than the planner, since we \nknow which tables are \"fat\". Hence, we have actually moved to exlicit joins \nin questions and sometimes gained speed.\n\nBut, in the general case, implicit might be better, I guess.\n\nRegards,\nPalle\n\n\n\n",
"msg_date": "Mon, 29 Sep 2003 15:45:02 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: avoiding seqscan?"
}
] |
[
{
"msg_contents": "I left my TPC-R query #17 working over the weekend and it took 3988 mins\n~ 10 hours to complete. And this is considering that I am using a TPC-R\ndatabase created with a scale factor of 1, which corresponds to ~1 GB of\ndata. I am running RedHat 8.0 on a dual 1 GHz processor, 512 MB RAM.\n\nHere is an excerpt from my postgresql.conf file (the rest of the\nsettings are commented out):\n\n#\n#\tShared Memory Size\n#\nshared_buffers = 16384\t\t# 2*max_connections, min 16, typically\n8KB each\n\n#\n#\tNon-shared Memory Sizes\n#\nsort_mem = 32768\n\n#\n#\tOptimizer Parameters\n#\neffective_cache_size = 32000\t# typically 8KB each\n\nAny suggestions on how to optimize these settings?\n\nI agree with Jenny that declaring additional indexes on the TPC-R tables\nmay alter the validity of the benchmarks. Are there any official TPC\nbenchmarks submitted by PostgreSQL? \n\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: Mary Edie Meredith [mailto:[email protected]] \nSent: Friday, September 26, 2003 10:12 AM\nTo: Tom Lane\nCc: Oleg Lebedev; Jenny Zhang; pgsql-performance\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nThe TPC-H/R rules allow only minor changes to the SQL that are necessary\ndue to SQL implementation differences. They do not allow changes made to\nimprove performance. It is their way to test optimizer's ability to\nrecognize an inefficient SQL statement and do the rewrite.\n\nThe rule makes sense for the TPC-H, which is supposed to represent\nad-Hoc query. One might argue that for TPC-R, which is suppose to\nrepresent \"Reporting\" with pre-knowledge of the query, that re-write\nshould be allowed. However, that is currently not the case. Since the\nRDBMS's represented on the TPC council are competing with TPC-H, their\noptimizers already do the re-write, so (IMHO) there is no motivation to\nrelax the rules for the TPC-R.\n\n\nOn Thu, 2003-09-25 at 21:28, Tom Lane wrote:\n> Oleg Lebedev <[email protected]> writes:\n> > Seems like in your case postgres uses an i_l_partkey index on \n> > lineitem table. I have a foreign key constraint defined between the \n> > lineitem and part table, but didn't create an special indexes. Here \n> > is my query plan:\n> \n> The planner is obviously unhappy with this plan (note the large cost \n> numbers), but it can't find a way to do better. An index on \n> lineitem.l_partkey would help, I think.\n> \n> The whole query seems like it's written in a very inefficient fashion;\n\n> couldn't the estimation of '0.2 * avg(l_quantity)' be amortized across\n\n> multiple join rows? But I dunno whether the TPC rules allow for \n> significant manual rewriting of the given query.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that\nyour\n> message can get through to the mailing list cleanly\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n\n",
"msg_date": "Mon, 29 Sep 2003 08:35:51 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg Lebedev wrote:\n> effective_cache_size = 32000\t# typically 8KB each\n\nThat is 256MB. You can raise it to 350+MB if nothing else is running on the box. \nAlso if you have fast disk drives, you can reduce random page cost to 2 or 1.5.\n\nI don't know how much this will make any difference to benchmark results but \nusually this helps when queries are slow.\n\n HTH\n\n Shridhar\n\n",
"msg_date": "Mon, 29 Sep 2003 20:17:35 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
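The conversion behind that figure is plain pages-to-bytes arithmetic: effective_cache_size is counted in 8KB pages here, so the 32000 pages above come to roughly 250MB (the ~256MB mentioned), and a 350MB target is about 44800 pages. A trivial check of the arithmetic, runnable in any psql session; the only numbers used are the ones from this thread:

    -- 32000 pages of 8KB, expressed in MB
    SELECT 32000 * 8192 / (1024 * 1024) AS current_cache_mb;    -- 250

    -- 350MB expressed back in 8KB pages
    SELECT 350 * 1024 * 1024 / 8192 AS pages_for_350mb;         -- 44800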
{
"msg_contents": "On Mon, 2003-09-29 at 07:35, Oleg Lebedev wrote:\n> I left my TPC-R query #17 working over the weekend and it took 3988 mins\n> ~ 10 hours to complete. And this is considering that I am using a TPC-R\n> database created with a scale factor of 1, which corresponds to ~1 GB of\n> data. I am running RedHat 8.0 on a dual 1 GHz processor, 512 MB RAM.\n\nWas this run with or without the l_partkey index that Jenny suggested? \n\n> \n> Here is an excerpt from my postgresql.conf file (the rest of the\n> settings are commented out):\n> \n> #\n> #\tShared Memory Size\n> #\n> shared_buffers = 16384\t\t# 2*max_connections, min 16, typically\n> 8KB each\n> \n> #\n> #\tNon-shared Memory Sizes\n> #\n> sort_mem = 32768\n> \n> #\n> #\tOptimizer Parameters\n> #\n> effective_cache_size = 32000\t# typically 8KB each\n> \n> Any suggestions on how to optimize these settings?\n> \n> I agree with Jenny that declaring additional indexes on the TPC-R tables\n> may alter the validity of the benchmarks. Are there any official TPC\n> benchmarks submitted by PostgreSQL? \n\nActually, for the TPC-R you _are allowed to declare additional indexes. \nWith TPC-H you are restricted to a specific set listed in the spec (an\nindex on l_partkey is allowed for both).\n\nWhat you cannot do for either TPC-R or TPC-H is rewrite the SQL of the\nquery for the purposes of making the query run faster.\n\nSorry if I was unclear.\n\nValid TPC-R benchmark results are on the TPC web site:\nhttp://www.tpc.org/tpcr/default.asp \n\nI do not see one for PostgreSQL.\n\n\nRegards,\n\nMary \n\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n> \n> Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: Mary Edie Meredith [mailto:[email protected]] \n> Sent: Friday, September 26, 2003 10:12 AM\n> To: Tom Lane\n> Cc: Oleg Lebedev; Jenny Zhang; pgsql-performance\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> The TPC-H/R rules allow only minor changes to the SQL that are necessary\n> due to SQL implementation differences. They do not allow changes made to\n> improve performance. It is their way to test optimizer's ability to\n> recognize an inefficient SQL statement and do the rewrite.\n> \n> The rule makes sense for the TPC-H, which is supposed to represent\n> ad-Hoc query. One might argue that for TPC-R, which is suppose to\n> represent \"Reporting\" with pre-knowledge of the query, that re-write\n> should be allowed. However, that is currently not the case. Since the\n> RDBMS's represented on the TPC council are competing with TPC-H, their\n> optimizers already do the re-write, so (IMHO) there is no motivation to\n> relax the rules for the TPC-R.\n> \n> \n> On Thu, 2003-09-25 at 21:28, Tom Lane wrote:\n> > Oleg Lebedev <[email protected]> writes:\n> > > Seems like in your case postgres uses an i_l_partkey index on \n> > > lineitem table. I have a foreign key constraint defined between the \n> > > lineitem and part table, but didn't create an special indexes. Here \n> > > is my query plan:\n> > \n> > The planner is obviously unhappy with this plan (note the large cost \n> > numbers), but it can't find a way to do better. An index on \n> > lineitem.l_partkey would help, I think.\n> > \n> > The whole query seems like it's written in a very inefficient fashion;\n> \n> > couldn't the estimation of '0.2 * avg(l_quantity)' be amortized across\n> \n> > multiple join rows? 
But I dunno whether the TPC rules allow for \n> > significant manual rewriting of the given query.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> your\n> > message can get through to the mailing list cleanly\n\n",
"msg_date": "29 Sep 2003 09:04:09 -0700",
"msg_from": "Mary Edie Meredith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Mary Edie Meredith <[email protected]> writes:\n> Valid TPC-R benchmark results are on the TPC web site:\n> http://www.tpc.org/tpcr/default.asp \n> I do not see one for PostgreSQL.\n\nI'm pretty certain that there are no TPC-certified test results for\nPostgres, because to date no organization has cared to spend the money\nneeded to perform a certifiable test. From what I understand you need\na pretty significant commitment of people and hardware to jump through\nall the hoops involved...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Sep 2003 12:26:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks "
},
{
"msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> Also if you have fast disk drives, you can reduce random page cost to 2 or 1.5.\n\nNote however that most of the people who have found smaller\nrandom_page_cost to be helpful are in situations where most of their\ndata fits in RAM. Reducing the cost towards 1 simply reflects the fact\nthat there's no sequential-fetch advantage when grabbing data that's\nalready in RAM.\n\nWhen benchmarking with data sets considerably larger than available\nbuffer cache, I rather doubt that small random_page_cost would be a good\nidea. Still, you might as well experiment to see.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Sep 2003 12:33:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks "
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> I'm pretty certain that there are no TPC-certified test results for\n> Postgres, because to date no organization has cared to spend the money\n> needed to perform a certifiable test.\n\nAnyone have a rough idea of the costs involved?\n\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200309291344\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE/eG+avJuQZxSWSsgRApDFAJ4md34LacZhJbjnydjNGzqfLy2IzQCg5m/8\nXiD273M2ugzCWd7YF5zbkio=\n=jGkx\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Mon, 29 Sep 2003 17:43:26 -0000",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "On Mon, Sep 29, 2003 at 05:43:26PM -0000, [email protected] wrote:\n> \n> Anyone have a rough idea of the costs involved?\n\nI did a back-of-an-envelope calculation one day and stopped when I\ngot to $10,000.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 29 Sep 2003 13:58:49 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "\nIt took 10 hours to compute the query without the index on\nlineitem.l_partkey.\nOnce I created the index on lineitem.l_partkey, it took only 32 secs to\nrun the same query. \nAfter VACUUM ANALYZE it took 72 secs to run the query.\nAll the subsequent runs took under 3 seconds!\n\nThat's quite amazing!\n\nI just checked \n\n-----Original Message-----\nFrom: Mary Edie Meredith [mailto:[email protected]] \nSent: Monday, September 29, 2003 10:04 AM\nTo: Oleg Lebedev\nCc: Tom Lane; Jenny Zhang; pgsql-performance\nSubject: RE: [PERFORM] TPC-R benchmarks\n\n\nOn Mon, 2003-09-29 at 07:35, Oleg Lebedev wrote:\n> I left my TPC-R query #17 working over the weekend and it took 3988 \n> mins ~ 10 hours to complete. And this is considering that I am using a\n\n> TPC-R database created with a scale factor of 1, which corresponds to \n> ~1 GB of data. I am running RedHat 8.0 on a dual 1 GHz processor, 512 \n> MB RAM.\n\nWas this run with or without the l_partkey index that Jenny suggested? \n\n> \n> Here is an excerpt from my postgresql.conf file (the rest of the \n> settings are commented out):\n> \n> #\n> #\tShared Memory Size\n> #\n> shared_buffers = 16384\t\t# 2*max_connections, min 16,\ntypically\n> 8KB each\n> \n> #\n> #\tNon-shared Memory Sizes\n> #\n> sort_mem = 32768\n> \n> #\n> #\tOptimizer Parameters\n> #\n> effective_cache_size = 32000\t# typically 8KB each\n> \n> Any suggestions on how to optimize these settings?\n> \n> I agree with Jenny that declaring additional indexes on the TPC-R \n> tables may alter the validity of the benchmarks. Are there any \n> official TPC benchmarks submitted by PostgreSQL?\n\nActually, for the TPC-R you _are allowed to declare additional indexes. \nWith TPC-H you are restricted to a specific set listed in the spec (an\nindex on l_partkey is allowed for both).\n\nWhat you cannot do for either TPC-R or TPC-H is rewrite the SQL of the\nquery for the purposes of making the query run faster.\n\nSorry if I was unclear.\n\nValid TPC-R benchmark results are on the TPC web site:\nhttp://www.tpc.org/tpcr/default.asp \n\nI do not see one for PostgreSQL.\n\n\nRegards,\n\nMary \n\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n> \n> Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: Mary Edie Meredith [mailto:[email protected]]\n> Sent: Friday, September 26, 2003 10:12 AM\n> To: Tom Lane\n> Cc: Oleg Lebedev; Jenny Zhang; pgsql-performance\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> The TPC-H/R rules allow only minor changes to the SQL that are \n> necessary due to SQL implementation differences. They do not allow \n> changes made to improve performance. It is their way to test \n> optimizer's ability to recognize an inefficient SQL statement and do \n> the rewrite.\n> \n> The rule makes sense for the TPC-H, which is supposed to represent \n> ad-Hoc query. One might argue that for TPC-R, which is suppose to \n> represent \"Reporting\" with pre-knowledge of the query, that re-write \n> should be allowed. However, that is currently not the case. Since the \n> RDBMS's represented on the TPC council are competing with TPC-H, their\n\n> optimizers already do the re-write, so (IMHO) there is no motivation \n> to relax the rules for the TPC-R.\n> \n> \n> On Thu, 2003-09-25 at 21:28, Tom Lane wrote:\n> > Oleg Lebedev <[email protected]> writes:\n> > > Seems like in your case postgres uses an i_l_partkey index on\n> > > lineitem table. 
I have a foreign key constraint defined between\nthe \n> > > lineitem and part table, but didn't create an special indexes.\nHere \n> > > is my query plan:\n> > \n> > The planner is obviously unhappy with this plan (note the large cost\n> > numbers), but it can't find a way to do better. An index on \n> > lineitem.l_partkey would help, I think.\n> > \n> > The whole query seems like it's written in a very inefficient \n> > fashion;\n> \n> > couldn't the estimation of '0.2 * avg(l_quantity)' be amortized \n> > across\n> \n> > multiple join rows? But I dunno whether the TPC rules allow for\n> > significant manual rewriting of the given query.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> your\n> > message can get through to the mailing list cleanly\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Mon, 29 Sep 2003 11:23:29 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
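For anyone reproducing the result above, the change that took the query from roughly 10 hours to under a minute was a single index plus refreshed statistics. A sketch of the commands involved, assuming the stock TPC-R table and column names used in this thread; the index name simply follows the i_l_partkey naming mentioned earlier:

    CREATE INDEX i_l_partkey ON lineitem (l_partkey);

    -- refresh planner statistics so the new access path is costed sensibly
    VACUUM ANALYZE lineitem;

As reported above, the first run after VACUUM ANALYZE was slower than the run right after index creation (72s vs 32s), but subsequent runs settled under 3 seconds, presumably once the relevant pages were cached.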
[
{
"msg_contents": "\nOops, my previous message got cut off.\nHere is the end of it:\nI just checked the restrictions on the TPC-R and TPC-H schemas and it\nseems that all indexes are allowed in TPC-R and only those that index\nparts of primary or foreign keys are allowed in TPC-H.\nThanks.\n\nOleg \n\n-----Original Message-----\nFrom: Oleg Lebedev \nSent: Monday, September 29, 2003 11:23 AM\nTo: Mary Edie Meredith\nCc: Jenny Zhang; pgsql-performance\nSubject: Re: [PERFORM] TPC-R benchmarks\nImportance: Low\n\n\n\nIt took 10 hours to compute the query without the index on\nlineitem.l_partkey. Once I created the index on lineitem.l_partkey, it\ntook only 32 secs to run the same query. \nAfter VACUUM ANALYZE it took 72 secs to run the query.\nAll the subsequent runs took under 3 seconds!\n\nThat's quite amazing!\n\nI just checked \n\n-----Original Message-----\nFrom: Mary Edie Meredith [mailto:[email protected]] \nSent: Monday, September 29, 2003 10:04 AM\nTo: Oleg Lebedev\nCc: Tom Lane; Jenny Zhang; pgsql-performance\nSubject: RE: [PERFORM] TPC-R benchmarks\n\n\nOn Mon, 2003-09-29 at 07:35, Oleg Lebedev wrote:\n> I left my TPC-R query #17 working over the weekend and it took 3988\n> mins ~ 10 hours to complete. And this is considering that I am using a\n\n> TPC-R database created with a scale factor of 1, which corresponds to\n> ~1 GB of data. I am running RedHat 8.0 on a dual 1 GHz processor, 512 \n> MB RAM.\n\nWas this run with or without the l_partkey index that Jenny suggested? \n\n> \n> Here is an excerpt from my postgresql.conf file (the rest of the\n> settings are commented out):\n> \n> #\n> #\tShared Memory Size\n> #\n> shared_buffers = 16384\t\t# 2*max_connections, min 16,\ntypically\n> 8KB each\n> \n> #\n> #\tNon-shared Memory Sizes\n> #\n> sort_mem = 32768\n> \n> #\n> #\tOptimizer Parameters\n> #\n> effective_cache_size = 32000\t# typically 8KB each\n> \n> Any suggestions on how to optimize these settings?\n> \n> I agree with Jenny that declaring additional indexes on the TPC-R\n> tables may alter the validity of the benchmarks. Are there any \n> official TPC benchmarks submitted by PostgreSQL?\n\nActually, for the TPC-R you _are allowed to declare additional indexes. \nWith TPC-H you are restricted to a specific set listed in the spec (an\nindex on l_partkey is allowed for both).\n\nWhat you cannot do for either TPC-R or TPC-H is rewrite the SQL of the\nquery for the purposes of making the query run faster.\n\nSorry if I was unclear.\n\nValid TPC-R benchmark results are on the TPC web site:\nhttp://www.tpc.org/tpcr/default.asp \n\nI do not see one for PostgreSQL.\n\n\nRegards,\n\nMary \n\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n> \n> Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: Mary Edie Meredith [mailto:[email protected]]\n> Sent: Friday, September 26, 2003 10:12 AM\n> To: Tom Lane\n> Cc: Oleg Lebedev; Jenny Zhang; pgsql-performance\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> The TPC-H/R rules allow only minor changes to the SQL that are\n> necessary due to SQL implementation differences. They do not allow \n> changes made to improve performance. It is their way to test \n> optimizer's ability to recognize an inefficient SQL statement and do \n> the rewrite.\n> \n> The rule makes sense for the TPC-H, which is supposed to represent\n> ad-Hoc query. One might argue that for TPC-R, which is suppose to \n> represent \"Reporting\" with pre-knowledge of the query, that re-write \n> should be allowed. 
However, that is currently not the case. Since the \n> RDBMS's represented on the TPC council are competing with TPC-H, their\n\n> optimizers already do the re-write, so (IMHO) there is no motivation\n> to relax the rules for the TPC-R.\n> \n> \n> On Thu, 2003-09-25 at 21:28, Tom Lane wrote:\n> > Oleg Lebedev <[email protected]> writes:\n> > > Seems like in your case postgres uses an i_l_partkey index on \n> > > lineitem table. I have a foreign key constraint defined between\nthe \n> > > lineitem and part table, but didn't create an special indexes.\nHere \n> > > is my query plan:\n> > \n> > The planner is obviously unhappy with this plan (note the large cost\n\n> > numbers), but it can't find a way to do better. An index on \n> > lineitem.l_partkey would help, I think.\n> > \n> > The whole query seems like it's written in a very inefficient\n> > fashion;\n> \n> > couldn't the estimation of '0.2 * avg(l_quantity)' be amortized\n> > across\n> \n> > multiple join rows? But I dunno whether the TPC rules allow for \n> > significant manual rewriting of the given query.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> your\n> > message can get through to the mailing list cleanly\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for\nthe named recipient only. If you are not the named recipient, delete\nthis message and all attachments. Unauthorized reviewing, copying,\nprinting, disclosing, or otherwise using information in this e-mail is\nprohibited. We reserve the right to monitor e-mail sent through our\nnetwork. \n\n*************************************\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Mon, 29 Sep 2003 11:37:11 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> I just checked the restrictions on the TPC-R and TPC-H schemas and it\n> seems that all indexes are allowed in TPC-R and only those that index\n> parts of primary or foreign keys are allowed in TPC-H.\n\nThat would be appropriate for this case though, yes? That column is part of \na foriegn key, unless I've totally lost track.\n\nAs I remarked before, Postgres does *not* automatically create indexes for \nFKs. Many, but not all, other database products do, so comparing PostgreSQL \nagainst those products without the index is unfair.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 29 Sep 2003 11:11:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Yes Josh,\nL_partkey is a part of the foreign key on the Lineitem table, and it was\nok to create an index on it according to the TPC-R specs. I just created\nindices on the rest of the FK columns in the TPC-R database and will\ncontinue my evaluations.\nThanks.\n\nOleg \n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Monday, September 29, 2003 12:11 PM\nTo: Oleg Lebedev; Mary Edie Meredith\nCc: Jenny Zhang; pgsql-performance\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg,\n\n> I just checked the restrictions on the TPC-R and TPC-H schemas and it \n> seems that all indexes are allowed in TPC-R and only those that index \n> parts of primary or foreign keys are allowed in TPC-H.\n\nThat would be appropriate for this case though, yes? That column is\npart of \na foriegn key, unless I've totally lost track.\n\nAs I remarked before, Postgres does *not* automatically create indexes\nfor \nFKs. Many, but not all, other database products do, so comparing\nPostgreSQL \nagainst those products without the index is unfair.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Mon, 29 Sep 2003 12:24:06 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
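Because PostgreSQL does not build indexes on referencing columns automatically, "indices on the rest of the FK columns" comes down to statements along the following lines. This is only a sketch assuming the usual TPC-R foreign-key columns (l_orderkey, l_suppkey, ps_partkey, ps_suppkey, s_nationkey); the index names are invented and the list is not necessarily complete:

    CREATE INDEX i_l_orderkey  ON lineitem (l_orderkey);
    CREATE INDEX i_l_suppkey   ON lineitem (l_suppkey);
    CREATE INDEX i_ps_partkey  ON partsupp (ps_partkey);
    CREATE INDEX i_ps_suppkey  ON partsupp (ps_suppkey);
    CREATE INDEX i_s_nationkey ON supplier (s_nationkey);

    -- refresh planner statistics afterwards
    VACUUM ANALYZE;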
[
{
"msg_contents": "I've posted several emails, and have yet to see one show up (this one\nmight not either).\n \nIs there a size limit to an email (it had a big analyze, and schema\ninformation)??\n\nDavid\n\n\n\n\n\n\nI've posted several emails, and have yet to see one \nshow up (this one might not either).\n \nIs there a size limit to an email (it had a big \nanalyze, and schema information)??\nDavid",
"msg_date": "Mon, 29 Sep 2003 19:44:18 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test..."
},
{
"msg_contents": "David Griffiths <[email protected]> writes:\n> Is there a size limit to an email\n\nIIRC, the standard policy on the pgsql lists is that messages over 40K\nor so will be delayed for moderator approval. However, you should have\ngotten immediate replies from the majordomo 'bot telling you so. If you\ngot nothing, there's a configuration problem with the pg-perform mail\nlist or your subscription or something. Talk to Marc (scrappy at\nhub.org) about identifying and fixing the issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Sep 2003 00:48:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test... "
}
] |
[
{
"msg_contents": "I continue struggling with the TPC-R benchmarks and wonder if anyone\ncould help me optimize the query below. ANALYZE statistics indicate that\nthe query should run relatively fast, but it takes hours to complete. I\nattached the query plan to this posting.\nThanks.\n\nselect\n\tnation,\n\to_year,\n\tsum(amount) as sum_profit\nfrom\n\t(\n\t\tselect\n\t\t\tn_name as nation,\n\t\t\textract(year from o_orderdate) as o_year,\n\t\t\tl_extendedprice * (1 - l_discount) -\nps_supplycost * l_quantity as amount\n\t\tfrom\n\t\t\tpart,\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\tpartsupp,\n\t\t\torders,\n\t\t\tnation\n\t\twhere\n\t\t\ts_suppkey = l_suppkey\n\t\t\tand ps_suppkey = l_suppkey\n\t\t\tand ps_partkey = l_partkey\n\t\t\tand p_partkey = l_partkey\n\t\t\tand o_orderkey = l_orderkey\n\t\t\tand s_nationkey = n_nationkey\n\t\t\tand p_name like '%aquamarine%'\n\t) as profit\ngroup by\n\tnation,\n\to_year\norder by\n\tnation,\n\to_year desc;\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Tue, 30 Sep 2003 12:40:54 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "On Tue, 30 Sep 2003, Oleg Lebedev wrote:\n\n> I continue struggling with the TPC-R benchmarks and wonder if anyone\n> could help me optimize the query below. ANALYZE statistics indicate that\n> the query should run relatively fast, but it takes hours to complete. I\n> attached the query plan to this posting.\n> Thanks.\n\nWhat are the differences between estimated and real rows and such of an \nexplain analyze on that query? Are there any estimates that are just way \noff?\n\n",
"msg_date": "Wed, 1 Oct 2003 07:23:08 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> I continue struggling with the TPC-R benchmarks and wonder if anyone\n> could help me optimize the query below. ANALYZE statistics indicate that\n> the query should run relatively fast, but it takes hours to complete. I\n> attached the query plan to this posting.\n\nEven though it takes hours to complete, I think we need you to run EXPLAIN \nANALYZE instead of just EXPLAIN. Without the real-time statistics, we \nsimply can't see what's slowing the query down.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 1 Oct 2003 10:08:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
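Concretely, the request is to wrap the posted query in EXPLAIN ANALYZE; since the query runs for a very long time, it is also worth capturing the output to a file from psql so the plan is not lost. The statement below is the query from the first message of this thread, unchanged apart from the EXPLAIN ANALYZE prefix; the output file name is arbitrary:

    -- in psql: send query output (including the EXPLAIN ANALYZE rows) to a file
    \o q9_plan.txt

    EXPLAIN ANALYZE
    select
        nation,
        o_year,
        sum(amount) as sum_profit
    from
        (
            select
                n_name as nation,
                extract(year from o_orderdate) as o_year,
                l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
            from
                part,
                supplier,
                lineitem,
                partsupp,
                orders,
                nation
            where
                s_suppkey = l_suppkey
                and ps_suppkey = l_suppkey
                and ps_partkey = l_partkey
                and p_partkey = l_partkey
                and o_orderkey = l_orderkey
                and s_nationkey = n_nationkey
                and p_name like '%aquamarine%'
        ) as profit
    group by
        nation,
        o_year
    order by
        nation,
        o_year desc;

    -- back to normal output
    \o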
[
{
"msg_contents": "We're having a problem with a query during our investigation into\nPostgres (as an Oracle replacement). This query Postgres takes 20-40\nseconds (multiple runs). Tom Lan recommended I post it here, with an\nexplain-analyze.\n \nHere's the query:\n \nEXPLAIN ANALYZE SELECT company_name, address_1, address_2, address_3,\ncity,\naddress_list.state_province_id, state_province_short_desc, country_desc,\nzip_code, address_list.country_id,\ncontact_info.email, commercial_entity.user_account_id, phone_num_1,\nphone_num_fax, website, boats_website\nFROM commercial_entity, country, user_account,\naddress_list LEFT JOIN state_province ON address_list.state_province_id\n= state_province.state_province_id\nLEFT JOIN contact_info ON address_list.contact_info_id =\ncontact_info.contact_info_id\nWHERE address_list.address_type_id = 101\nAND commercial_entity.commercial_entity_id=225528\nAND commercial_entity.commercial_entity_id =\naddress_list.commercial_entity_id\nAND address_list.country_id = country.country_id\nAND commercial_entity.user_account_id = user_account.user_account_id\nAND user_account.user_role_id IN (101, 101);\n \nHere's the explain:\n \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------\n Nested Loop (cost=0.00..64570.33 rows=1 width=385) (actual\ntime=42141.08..42152.06 rows=1 loops=1)\n -> Nested Loop (cost=0.00..64567.30 rows=1 width=361) (actual\ntime=42140.80..42151.77 rows=1 loops=1)\n -> Nested Loop (cost=0.00..64563.97 rows=1 width=349) (actual\ntime=42140.31..42151.27 rows=1 loops=1)\n Join Filter: (\"outer\".commercial_entity_id =\n\"inner\".commercial_entity_id)\n -> Index Scan using commercial_entity_pkey on\ncommercial_entity (cost=0.00..5.05 rows=1 width=94) (actual\ntime=0.57..0.58 rows=1 loops=1)\n Index Cond: (commercial_entity_id =\n225528::numeric)\n -> Materialize (cost=63343.66..63343.66 rows=97221\nwidth=255) (actual time=41741.96..41901.17 rows=90527 loops=1)\n -> Merge Join (cost=0.00..63343.66 rows=97221\nwidth=255) (actual time=1.44..41387.68 rows=90527 loops=1)\n Merge Cond: (\"outer\".contact_info_id =\n\"inner\".contact_info_id)\n -> Nested Loop (cost=0.00..830457.52\nrows=97221 width=222) (actual time=0.95..39178.32 rows=90527 loops=1)\n Join Filter: (\"outer\".state_province_id\n= \"inner\".state_province_id)\n -> Index Scan using addr_list_ci_id_i\non address_list (cost=0.00..586676.65 rows=97221 width=205) (actual\ntime=0.49..2159.90 rows=90527 loops=1)\n Filter: (address_type_id =\n101::numeric)\n -> Seq Scan on state_province\n(cost=0.00..1.67 rows=67 width=17) (actual time=0.00..0.21 rows=67\nloops=90527)\n -> Index Scan using contact_info_pkey on\ncontact_info (cost=0.00..3366.76 rows=56435 width=33) (actual\ntime=0.44..395.75 rows=55916 loops=1)\n -> Index Scan using user_account_pkey on user_account\n(cost=0.00..3.32 rows=1 width=12) (actual time=0.46..0.46 rows=1\nloops=1)\n Index Cond: (\"outer\".user_account_id =\nuser_account.user_account_id)\n Filter: (user_role_id = 101::numeric)\n -> Index Scan using country_pkey on country (cost=0.00..3.01 rows=1\nwidth=24) (actual time=0.25..0.25 rows=1 loops=1)\n Index Cond: (\"outer\".country_id = country.country_id)\n Total runtime: 42165.44 msec\n(21 rows)\n \n \nI will post the schema in a seperate email - the list has rejected one\nbig email 3 times now.\n \nDavid\n\n\n\n\n\n\n\nWe're having a problem with a query during our \ninvestigation into 
Postgres (as an Oracle replacement). This query Postgres \ntakes 20-40 seconds (multiple runs). Tom Lan recommended I post it here, with an \nexplain-analyze.\n \nHere's the query:\n \nEXPLAIN ANALYZE SELECT company_name, address_1, \naddress_2, address_3, city,address_list.state_province_id, \nstate_province_short_desc, country_desc, zip_code, \naddress_list.country_id,contact_info.email, \ncommercial_entity.user_account_id, phone_num_1, phone_num_fax, website, \nboats_websiteFROM commercial_entity, country, user_account,address_list \nLEFT JOIN state_province ON address_list.state_province_id = \nstate_province.state_province_idLEFT JOIN contact_info ON \naddress_list.contact_info_id = contact_info.contact_info_idWHERE \naddress_list.address_type_id = 101AND \ncommercial_entity.commercial_entity_id=225528AND \ncommercial_entity.commercial_entity_id = \naddress_list.commercial_entity_idAND address_list.country_id = \ncountry.country_idAND commercial_entity.user_account_id = \nuser_account.user_account_idAND user_account.user_role_id IN (101, \n101);\n \nHere's the explain:\n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Nested \nLoop (cost=0.00..64570.33 rows=1 width=385) (actual \ntime=42141.08..42152.06 rows=1 loops=1) -> Nested \nLoop (cost=0.00..64567.30 rows=1 width=361) (actual \ntime=42140.80..42151.77 rows=1 \nloops=1) -> Nested \nLoop (cost=0.00..64563.97 rows=1 width=349) (actual \ntime=42140.31..42151.27 rows=1 \nloops=1) \nJoin Filter: (\"outer\".commercial_entity_id = \n\"inner\".commercial_entity_id) \n-> Index Scan using commercial_entity_pkey on commercial_entity \n(cost=0.00..5.05 rows=1 width=94) (actual time=0.57..0.58 rows=1 \nloops=1) \nIndex Cond: (commercial_entity_id = \n225528::numeric) \n-> Materialize (cost=63343.66..63343.66 rows=97221 width=255) \n(actual time=41741.96..41901.17 rows=90527 \nloops=1) \n-> Merge Join (cost=0.00..63343.66 rows=97221 width=255) (actual \ntime=1.44..41387.68 rows=90527 \nloops=1) \nMerge Cond: (\"outer\".contact_info_id = \n\"inner\".contact_info_id) \n-> Nested Loop (cost=0.00..830457.52 rows=97221 width=222) \n(actual time=0.95..39178.32 rows=90527 \nloops=1) \nJoin Filter: (\"outer\".state_province_id = \n\"inner\".state_province_id) \n-> Index Scan using addr_list_ci_id_i on address_list \n(cost=0.00..586676.65 rows=97221 width=205) (actual time=0.49..2159.90 \nrows=90527 \nloops=1) \nFilter: (address_type_id = \n101::numeric) \n-> Seq Scan on state_province (cost=0.00..1.67 rows=67 width=17) \n(actual time=0.00..0.21 rows=67 \nloops=90527) \n-> Index Scan using contact_info_pkey on contact_info \n(cost=0.00..3366.76 rows=56435 width=33) (actual time=0.44..395.75 rows=55916 \nloops=1) -> Index \nScan using user_account_pkey on user_account (cost=0.00..3.32 rows=1 \nwidth=12) (actual time=0.46..0.46 rows=1 \nloops=1) \nIndex Cond: (\"outer\".user_account_id = \nuser_account.user_account_id) \nFilter: (user_role_id = 101::numeric) -> Index Scan \nusing country_pkey on country (cost=0.00..3.01 rows=1 width=24) (actual \ntime=0.25..0.25 rows=1 \nloops=1) Index Cond: \n(\"outer\".country_id = country.country_id) Total runtime: 42165.44 \nmsec(21 rows)\n \n \nI will post the schema in a seperate email - the list has rejected one big \nemail 3 times now.\n \nDavid",
"msg_date": "Tue, 30 Sep 2003 13:24:24 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning/performance issue..."
},
{
"msg_contents": "David Griffiths <[email protected]> writes:\n> ... FROM commercial_entity, country, user_account,\n> address_list LEFT JOIN state_province ON address_list.state_province_id\n> = state_province.state_province_id\n> LEFT JOIN contact_info ON address_list.contact_info_id =\n> contact_info.contact_info_id\n> WHERE ...\n\nI believe what you're getting burnt by is that PG's planner interprets\nthis as forcing the address_list * state_province * contact_info join\nto be done before it joins those tables to commercial_entity, country,\nand user_account --- for discussion see\nhttp://www.postgresql.org/docs/7.3/static/explicit-joins.html\n\nUnfortunately your WHERE-clause restriction conditions are on\naddress_list, commercial_entity, and user_account; and it seems the\naddress_list constraint is very weak. So the plan ends up forming a\nlarge fraction of the address_list * state_province * contact_info join,\nonly to throw it away again when there's no matching rows selected from\ncommercial_entity and user_account. The actual runtime and actual row\ncounts from the EXPLAIN ANALYZE output show that this is what's\nhappening.\n\nThe most efficient way to handle this query would probably be to join\nthe three tables with restrictions first, and then join the other tables\nto those. You could force this with not too much rewriting using\nsomething like (untested, but I think it's right)\n\n... FROM commercial_entity CROSS JOIN user_account CROSS JOIN\naddress_list LEFT JOIN state_province ON address_list.state_province_id\n= state_province.state_province_id\nLEFT JOIN contact_info ON address_list.contact_info_id =\ncontact_info.contact_info_id\nCROSS JOIN country\nWHERE ...\n\nThe explicit JOINs associate left-to-right, so this gives the intended\njoin order. (In your original query, explicit JOIN binds more tightly\nthan commas do.)\n\nThe reason PG's planner doesn't discover this join order for itself\nis that it's written to not attempt to re-order outer joins from the\nsyntactically defined ordering. In general, such reordering would\nchange the results. It is possible to analyze the query and prove that\ncertain reorderings are valid (don't change the results), but we don't\ncurrently have code to do that.\n\n> As a reference, our production Oracle database (exactly the same\n> hardware, but RAID-mirroring) with way more load can handle the query in\n> 1-2 seconds. I have MySQL 4.0.14 with InnoDB on the same machine\n> (shutdown when I am testing Postgres, and visa versa) and it does the\n> query in 0.20 seconds.\n\nI'm prepared to believe that Oracle contains code that actually does the\nanalysis about which outer-join reorderings are valid, and is then able\nto find the right join order by deduction. The last I heard about\nMySQL, they have no join-order analysis at all; they unconditionally\ninterpret this type of query left-to-right, ie as\n\n... FROM ((((commercial_entity CROSS JOIN country) CROSS JOIN\n user_account) CROSS JOIN address_list)\n LEFT JOIN state_province ON ...)\n LEFT JOIN contact_info ON ...\nWHERE ...\n\nThis is clearly at odds with the SQL spec's syntactically defined join\norder semantics. It's possible that it always yields the same results\nas the spec requires, but I'm not at all sure about that. In any case\nthis strategy is certainly not \"better\" than ours, it just performs\npoorly on a different set of queries. Would I be out of line to\nspeculate that your query was previously tuned to work well in MySQL?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Oct 2003 00:28:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue... "
},
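Spelling Tom's suggestion out in full: the rewrite below simply combines his proposed FROM clause with the SELECT list and WHERE clause of the original query. Like the fragment above it is untested, a sketch of the forced join order rather than a guaranteed fix:

    EXPLAIN ANALYZE SELECT company_name, address_1, address_2, address_3, city,
           address_list.state_province_id, state_province_short_desc, country_desc,
           zip_code, address_list.country_id, contact_info.email,
           commercial_entity.user_account_id, phone_num_1, phone_num_fax,
           website, boats_website
    FROM commercial_entity
         CROSS JOIN user_account
         CROSS JOIN address_list
         LEFT JOIN state_province
                ON address_list.state_province_id = state_province.state_province_id
         LEFT JOIN contact_info
                ON address_list.contact_info_id = contact_info.contact_info_id
         CROSS JOIN country
    WHERE address_list.address_type_id = 101
      AND commercial_entity.commercial_entity_id = 225528
      AND commercial_entity.commercial_entity_id = address_list.commercial_entity_id
      AND address_list.country_id = country.country_id
      AND commercial_entity.user_account_id = user_account.user_account_id
      AND user_account.user_role_id IN (101, 101);

With this ordering the restriction on commercial_entity_id is applied before the two outer joins, which is the point of the exercise.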
{
"msg_contents": "> The most efficient way to handle this query would probably be to join\n> the three tables with restrictions first, and then join the other tables\n> to those. You could force this with not too much rewriting using\n> something like (untested, but I think it's right)\n>\n> ... FROM commercial_entity CROSS JOIN user_account CROSS JOIN\n> address_list LEFT JOIN state_province ON address_list.state_province_id\n> = state_province.state_province_id\n> LEFT JOIN contact_info ON address_list.contact_info_id =\n> contact_info.contact_info_id\n> CROSS JOIN country\n> WHERE ...\n>\n> The explicit JOINs associate left-to-right, so this gives the intended\n> join order. (In your original query, explicit JOIN binds more tightly\n> than commas do.)\n\nOk - that's interesting - I'll have to do some reading and more testing.\n\n> The reason PG's planner doesn't discover this join order for itself\n> is that it's written to not attempt to re-order outer joins from the\n> syntactically defined ordering. In general, such reordering would\n> change the results. It is possible to analyze the query and prove that\n> certain reorderings are valid (don't change the results), but we don't\n> currently have code to do that.\n\nNot sure I follow. Are you saying that, depending on when the outer-join is\napplied to the rows found at the time, you may end up with a different set\nof rows? I would have expected the optimizer to do the outer-joins last, as\nthe extra data received by the outer-joins is not mandatory, and won't\naffect\nthe rows that were retreived by joining user_account, address_list, and\ncommercial_entity.\n\nAn outer join would *never* be the most restrictive\njoin in a query. I thought (from my readings on Oracle query tuning) that\nfinding the most restrictive table/index was the first task of an optimizer.\nReduce the result set as quickly as possible. That query has the line,\n\n\"AND commercial_entity.commercial_entity_id=225528\",\n\nwhich uses an index (primary key) and uses an \"=\". I would have expected\nthat to be done first, then joined with the other inner-join tables, and\nfinally\nhave the outer-joins applied to the final result set to fill in the \"might\nbe there\" data.\n\nAnyway, if the optimizer does the outer-joins first (address_list with\nstate_province\nand contact_info), then it's picking the table with the most rows\n(address_list has\n200K+ rows, where the other 3 big tables have 70K-90K). Would re-ordering\nthe FROM clause (and LEFT JOIN portions) help?\n\nCould you give an example where applying an outer-join at a different time\ncould\nresult in different results? I think I can see at situation where you use\npart of the results\nin the outer-join in the where clause, but I am not sure.\n\n> I'm prepared to believe that Oracle contains code that actually does the\n> analysis about which outer-join reorderings are valid, and is then able\n> to find the right join order by deduction.\n\nI'm not sure about Oracle (other than what I stated above). In fact, about\nhalf\nthe time, updating table stats to try to get the Oracle optimizer to do a\nbetter\njob on a query results in even worse performance.\n\n> ... FROM ((((commercial_entity CROSS JOIN country) CROSS JOIN\n> user_account) CROSS JOIN address_list)\n> LEFT JOIN state_province ON ...)\n> LEFT JOIN contact_info ON ...\n> WHERE ...\n>\n> This is clearly at odds with the SQL spec's syntactically defined join\n> order semantics. 
It's possible that it always yields the same results\n> as the spec requires, but I'm not at all sure about that.\n\nAgain, I don't know. On the 3 queries based on these tables, Postgres\nand MySQL return the exact same data (they use the same data set).\n\nDo you have a link to the SQL spec's join-order requirements?\n\n> In any case\n> this strategy is certainly not \"better\" than ours, it just performs\n> poorly on a different set of queries. Would I be out of line to\n> speculate that your query was previously tuned to work well in MySQL?\n\nThe query was pulled from our codebase (written for Oracle). I added a bit\nto it\nto make it slower, and then ported to MySQL and tested there first (just\nre-wrote\nthe outer-join syntax). I found that re-ordering the tables in the\nfrom-clause on\nMySQL changed the time by 45-ish% (0.36 seconds to .20 seconds), but that's\nbecause I had forgotten to re-analyze the tables after refreshing the\ndataset.\nNow, table order doesn't make a difference in speed (or results).\n\nIf anything, I've done more tuning for Postgres - added some extra indexes\nto try to help\n(country.country_id had a composite index with another column, but not an\nindex for\njust it), etc.\n\nThe dataset and schema is pure-Oracle. I extracted it out of the database,\nremoved all\nOracle-specific extensions, changed the column types, and migrated the\nindexes and\nforeign keys to MySQL and Postgres. Nothing more (other than an extra index\nor two for Postgres - nada for MySQL).\n\nThis is all part of a \"migrate away from Oracle\" project. We are looking at\n3 databases -\nMySQL (InnoDB), Postgres and Matisse (object oriented). We have alot of\nqueries like this\nor worse, and I'm worried that many of them would need to be re-written. The\ndevelopers\nknow SQL, but nothing about tuning, etc.\n\nThanks for the quick response - I will try explicit joining, and I'm looking\nforward to\nyour comments on outer-joins and the optmizer (and anything else I've\nwritten).\n\nDavid.\n",
"msg_date": "Tue, 30 Sep 2003 22:48:54 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning/performance issue... "
},
{
"msg_contents": "On Tue, 30 Sep 2003, David Griffiths wrote:\n\n>\n> This is all part of a \"migrate away from Oracle\" project. We are looking at\n> 3 databases -\n> MySQL (InnoDB), Postgres and Matisse (object oriented). We have alot of\n> queries like this\n> or worse, and I'm worried that many of them would need to be re-written. The\n> developers\n> know SQL, but nothing about tuning, etc.\n>\n\nThere's a movement at my company to ditch several commercial db's in favor\nof a free one. I'm currently the big pg fan around here and I've actually\nwritten a rather lengthy presentation about pg features, why, tuning, etc.\nbut another part was some comparisons to other db's..\n\nI decided so I wouldn't be blinding flaming mysql to give it a whirl and\nloaded it up with the same dataset as pg. First thing I hit was lack of\nstored procedures. But I decided to code around that, giving mysql the\nbenefit of the doubt. What I found was interesting.\n\nFor 1-2 concurrent\n'beaters' it screamed. ultra-fast. But.. If you increase the concurrent\nbeaters up to say, 20 Mysql comes to a grinding halt.. Mysql and the\nmachine itself become fairly unresponsive. And if you do cache unfriendly\nqueries it becomes even worse. On PG - no problems at all. Scaled fine\nand dandy up. And with 40 concurrent beaters the machine was still\nresponsive. (The numbers for 20 client was 220 seconds (pg) and 650\nseconds (mysql))\n\nSo that is another test to try out - Given your configuration I expect you\nhave lots of concurrent activity.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 1 Oct 2003 08:23:10 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "David Griffiths <[email protected]> writes:\n>> The reason PG's planner doesn't discover this join order for itself\n>> is that it's written to not attempt to re-order outer joins from the\n>> syntactically defined ordering. In general, such reordering would\n>> change the results. It is possible to analyze the query and prove that\n>> certain reorderings are valid (don't change the results), but we don't\n>> currently have code to do that.\n\n> Not sure I follow. Are you saying that, depending on when the outer-join is\n> applied to the rows found at the time, you may end up with a different set\n> of rows?\n\nHere's an example showing that it's not always safe to rearrange join\norder in the presence of outer joins:\n\njtest=# create table a (f1 int);\nCREATE TABLE\njtest=# create table b (f1 int, f2 int);\nCREATE TABLE\njtest=# create table c(f1 int, f2 int);\nCREATE TABLE\njtest=# insert into a values (1);\nINSERT 431307 1\njtest=# insert into b values (10,10);\nINSERT 431308 1\njtest=# insert into b values (11,11);\nINSERT 431309 1\njtest=# insert into c values (1,10);\nINSERT 431310 1\njtest=# insert into c values (2,11);\nINSERT 431311 1\n\njtest=# SELECT * FROM a, b LEFT JOIN c ON b.f2 = c.f2 WHERE a.f1 = c.f1;\n f1 | f1 | f2 | f1 | f2\n----+----+----+----+----\n 1 | 10 | 10 | 1 | 10\n(1 row)\n\nPer spec the JOIN operator binds more tightly than comma, so this is\nequivalent to:\n\njtest=# SELECT * FROM a JOIN (b LEFT JOIN c ON b.f2 = c.f2) ON a.f1 = c.f1;\n f1 | f1 | f2 | f1 | f2\n----+----+----+----+----\n 1 | 10 | 10 | 1 | 10\n(1 row)\n\nNow suppose we try to join A and C before joining to B:\n\njtest=# SELECT * FROM b LEFT JOIN (a join c ON a.f1 = c.f1) ON b.f2 = c.f2;\n f1 | f2 | f1 | f1 | f2\n----+----+----+----+----\n 10 | 10 | 1 | 1 | 10\n 11 | 11 | | |\n(2 rows)\n\nWe get a different answer, because some C rows are eliminated before\nreaching the left join, causing null-extended B rows to be added.\n\n(I don't have a MySQL installation here to try, but if they still work\nthe way they used to, they get the wrong answer on the first query.)\n\nThe point of this example is just that there are cases where it'd be\nincorrect for the planner to change the ordering of joins from what\nis implied by the query syntax. It is always safe to change the join\norder when only inner joins are involved. There are cases where outer\njoin order is safe to change too, but you need analysis code that checks\nthe query conditions to prove that a particular rearrangement is safe.\nRight now, we don't have such code, and so we just follow the simple\nrule \"never rearrange any outer joins\".\n\n> I would have expected the optimizer to do the outer-joins last, as the\n> extra data received by the outer-joins is not mandatory, and won't\n> affect the rows that were retreived by joining user_account,\n> address_list, and commercial_entity.\n\nI think your example falls into the category of provably-safe\nrearrangements ... but as I said, the planner doesn't know that.\n\n> An outer join would *never* be the most restrictive\n> join in a query.\n\nSure it can, if the restriction conditions are mainly on the outer\njoin's tables. But that's not really the issue here. As best I can\ntell without seeing your data statistics, the most restrictive\nconditions in your query are the ones on\ncommercial_entity.commercial_entity_id and user_account.user_role_id.\nThe trick is to apply those before joining any other tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Oct 2003 10:14:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue... "
}
] |
[
{
"msg_contents": "Here's the schema:\n \n Table \"public.address_list\"\n Column | Type | Modifiers\n----------------------+------------------------+-----------\n address_list_id | numeric(10,0) | not null\n address_1 | character varying(100) |\n address_2 | character varying(100) |\n address_3 | character varying(100) |\n city | character varying(100) |\n zip_code | character varying(20) |\n phone_num_1 | character varying(100) |\n phone_num_2 | character varying(100) |\n phone_num_fax | character varying(100) |\n state_province_id | numeric(10,0) |\n user_account_id | numeric(10,0) |\n marina_id | numeric(10,0) |\n commercial_entity_id | numeric(10,0) |\n address_type_id | numeric(10,0) | not null\n distributor_id | numeric(10,0) |\n contact_info_id | numeric(10,0) |\n country_id | numeric(10,0) |\n lang_id | numeric(10,0) |\n boat_listing_id | numeric(10,0) |\nIndexes: address_list_pkey primary key btree (address_list_id),\n addr_list_addr_type_id_i btree (address_type_id),\n addr_list_bl_id_i btree (boat_listing_id),\n addr_list_bl_sp_count_i btree (boat_listing_id,\nstate_province_id, country_id),\n addr_list_ce_sp_c_at_c_i btree (commercial_entity_id,\nstate_province_id, country_id, address_type_id, city),\n addr_list_ce_sp_countr_addr_type_i btree (commercial_entity_id,\nstate_province_id, country_id, address_type_id),\n addr_list_ci_id_i btree (contact_info_id),\n addr_list_comm_ent_id_i btree (commercial_entity_id),\n addr_list_count_lang_i btree (country_id, lang_id),\n addr_list_country_id_i btree (country_id),\n addr_list_cty_bl_count_i btree (city, boat_listing_id,\ncountry_id),\n addr_list_cty_i btree (city),\n addr_list_distrib_id_i btree (distributor_id),\n addr_list_marina_id_i btree (marina_id),\n addr_list_sp_id_i btree (state_province_id),\n addr_list_ua_id_i btree (user_account_id)\nForeign Key constraints: $1 FOREIGN KEY (address_type_id) REFERENCES\naddress_type(address_type_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (commercial_entity_id)\nREFERENCES commercial_entity(commercial_entity_id) ON UPDATE NO ACTION\nON DELETE NO ACTION,\n $3 FOREIGN KEY (contact_info_id) REFERENCES\ncontact_info(contact_info_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $4 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $5 FOREIGN KEY (state_province_id) REFERENCES\nstate_province(state_province_id) ON UPDATE NO ACTION ON DELETE NO\nACTION\n\n \n Table\n\"public.commercial_entity\"\n Column | Type |\nModifiers\n---------------------------+-----------------------------+--------------\n-----------------------------------------------\n commercial_entity_id | numeric(10,0) | not null\n company_name | character varying(100) | not null\n website | character varying(200) |\n modify_date | timestamp without time zone |\n user_account_id | numeric(10,0) |\n source_id | numeric(10,0) | not null\n commercial_entity_type_id | numeric(10,0) |\n boats_website | character varying(200) |\n updated_on | timestamp without time zone | not null\ndefault ('now'::text)::timestamp(6) with time zone\n dealer_level_id | numeric(10,0) |\n lang_id | numeric(10,0) | default '100'\n yw_account_id | numeric(10,0) |\n keybank_dealer_code | numeric(10,0) |\n dnetaccess_id | numeric(10,0) | not null\ndefault 0\n interested_in_dns | numeric(10,0) | not null\ndefault 0\n parent_office_id | numeric(10,0) |\n marinesite_welcome_msg | character varying(500) |\n alt_marinesite_homepage | character varying(256) |\n comments | character 
varying(4000) |\n show_finance_yn | character varying(1) | not null\ndefault 'Y'\n show_insurance_yn | character varying(1) | not null\ndefault 'Y'\n show_shipping_yn | character varying(1) | not null\ndefault 'Y'\n yw_account_id_c | character varying(11) |\n sales_id | numeric(10,0) |\nIndexes: commercial_entity_pkey primary key btree\n(commercial_entity_id),\n comm_ent_boat_web_ui unique btree (boats_website),\n comm_ent_key_dlr_cd_ui unique btree (keybank_dealer_code),\n comm_ent_cny_name_i btree (company_name),\n comm_ent_dlr_lvl_id_i btree (dealer_level_id, lang_id),\n comm_ent_src_id_i btree (source_id),\n comm_ent_type_id_i btree (commercial_entity_type_id),\n comm_ent_upd_on btree (updated_on),\n comm_ent_usr_acc_id_i btree (user_account_id),\n comm_ent_yw_acc_id_i btree (yw_account_id)\nForeign Key constraints: $1 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n \n Table \"public.country\"\n Column | Type | Modifiers\n--------------+------------------------+-----------\n country_id | numeric(10,0) | not null\n lang_id | numeric(10,0) | not null\n country_desc | character varying(100) | not null\nIndexes: country_pkey primary key btree (country_id)\n\n \n Table \"public.user_account\"\n Column | Type |\nModifiers\n-------------------------------+-----------------------------+----------\n-------------------\n user_account_id | numeric(10,0) | not null\n first_name | character varying(100) |\n first_name_display_ind | numeric(1,0) | not null\n last_name | character varying(100) |\n last_name_display_ind | numeric(1,0) | not null\n profession | character varying(100) |\n profession_display_ind | numeric(1,0) | not null\n self_description | character varying(100) |\n self_description_display_ind | numeric(1,0) | not null\n activity_interest | character varying(100) |\n activity_interest_display_ind | numeric(1,0) | not null\n make_brand | character varying(100) |\n make_brand_display_ind | numeric(1,0) | not null\n birth_date | timestamp without time zone |\n birth_date_display_ind | numeric(1,0) | not null\n my_boat_picture_link | character varying(200) |\n user_account_name | character varying(100) | not null\n password | character varying(100) |\n password_ind | numeric(1,0) | not null\n age | numeric(10,0) |\n blacklisted_ind | numeric(1,0) | not null\n auto_login_ind | numeric(1,0) | not null\n email_addr | character varying(100) |\n create_date | timestamp without time zone | default\n('now'::text)::date\n lang_id | numeric(10,0) | not null\n user_role_id | numeric(10,0) | not null\n seller_type_id | numeric(10,0) |\n payment_method_id | numeric(10,0) |\n account_status_id | numeric(10,0) | not null\n source_id | numeric(10,0) | not null\ndefault 100\n ebay_user_id | character varying(80) |\n ebay_user_password | character varying(80) |\nIndexes: user_account_pkey primary key btree (user_account_id),\n usr_acc_acc_stat_id_i btree (account_status_id),\n usr_acc_an_pass_i btree (user_account_name, \"password\"),\n usr_acc_email_addr_i btree (email_addr),\n usr_acc_first_name_i btree (first_name),\n usr_acc_lang_id_i btree (lang_id),\n usr_acc_last_name_i btree (last_name),\n usr_acc_pay_meth_id_i btree (payment_method_id),\n usr_acc_sell_type_id_i btree (seller_type_id),\n usr_acc_usr_acc_name_i btree (user_account_name),\n usr_acc_usr_role_id_i btree (user_role_id)\nForeign Key constraints: $1 FOREIGN KEY (lang_id) 
REFERENCES\nlang(lang_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (source_id) REFERENCES\nsource(source_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (user_role_id) REFERENCES\nuser_role(user_role_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n \n Table \"public.contact_info\"\n Column | Type | Modifiers\n-----------------+------------------------+-----------\n contact_info_id | numeric(10,0) | not null\n first_name | character varying(100) |\n last_name | character varying(100) |\n email | character varying(100) |\n boat_listing_id | numeric(10,0) |\n user_account_id | numeric(10,0) |\nIndexes: contact_info_pkey primary key btree (contact_info_id),\n boat_listing_id_i btree (boat_listing_id),\n user_account_id_i btree (user_account_id)\nForeign Key constraints: $1 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (user_account_id) REFERENCES\nuser_account(user_account_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n \n Table \"public.state_province\"\n Column | Type | Modifiers\n---------------------------+------------------------+-----------\n state_province_id | numeric(10,0) | not null\n state_province_short_desc | character varying(2) |\n state_province_desc | character varying(100) | not null\n country_id | numeric(10,0) | not null\n lang_id | numeric(10,0) | not null\nIndexes: state_province_pkey primary key btree (state_province_id),\n state_prov_count_lang_i btree (country_id, lang_id)\n\n \n \nAll the join columns are the same type and width, and all are indexed. I\ngoogled for what looked like the expensive parts of the query to see if\nI could at least figure out where the time was being spent.\n \nPart 3 to follow.\n \nDavid",
"msg_date": "Tue, 30 Sep 2003 13:25:05 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning/performance issue (part 2)"
}
] |
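A hypothetical sanity check against the schema above: with every join key declared numeric(10,0), casting the literal to the column's declared type removes any doubt about cross-type comparisons when confirming that a single-key lookup really uses its index. Table, column and index names are taken from the listing above; the id value is arbitrary.

-- Sketch only: verify that a primary-key lookup on a numeric(10,0) key is indexed.
EXPLAIN
SELECT commercial_entity_id, company_name
  FROM commercial_entity
 WHERE commercial_entity_id = 225528::numeric(10,0);
-- An "Index Scan using commercial_entity_pkey" line would show the index is used;
-- a Seq Scan here would point at a type-mismatch or statistics problem.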
[
{
"msg_contents": "And finally,\n \nHere's the contents of the postgresql.conf file (I've been playing with\nthese setting the last couple of days, and using the guide @\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.htm\nl> to make sure I didn't have it mis-tuned):\n \ntcpip_socket = true\nmax_connections = 500 # We will need quite a few connections;\ncurrently only one connection to database, however\nport = 5432\nshared_buffers = 5000 # I've tried 5000 to 80,000 with no\napparent difference\nwal_buffers = 16\nsort_mem = 256 # decreased this due to the large # of\nconnectiosn\neffective_cache_size = 50000 # read that this can improve performance;\nhasn't done anything.\n \nThe machine is a dual-Pentium 3 933mhz, with 2 gigabytes of RAM and a\n3Ware RAID-5 card.\n \nAs a reference, our production Oracle database (exactly the same\nhardware, but RAID-mirroring) with way more load can handle the query in\n1-2 seconds. I have MySQL 4.0.14 with InnoDB on the same machine\n(shutdown when I am testing Postgres, and visa versa) and it does the\nquery in 0.20 seconds.\n \nThanks for any insight.\nDavid.\n\n\n\n\n\n\n\nAnd finally,\n \n\nHere's the contents of the postgresql.conf file \n(I've been playing with these setting the last couple of days, and using the \nguide @ http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html to \nmake sure I didn't have it mis-tuned):\n \ntcpip_socket = truemax_connections = 500 # We will need \nquite a few connections; currently only one connection to database, \nhowever\nport = 5432shared_buffers = \n5000 # I've tried \n5000 to 80,000 with no apparent differencewal_buffers = 16\nsort_mem = \n256 \n # decreased this due to the large # of \nconnectiosn\neffective_cache_size \n= 50000 # read that this can improve \nperformance; hasn't done anything.\n \nThe machine is a dual-Pentium 3 933mhz, with 2 \ngigabytes of RAM and a 3Ware RAID-5 card.\n \nAs a reference, our production Oracle database \n(exactly the same hardware, but RAID-mirroring) with way more load can handle \nthe query in 1-2 seconds. I have MySQL 4.0.14 with InnoDB on the same machine \n(shutdown when I am testing Postgres, and visa versa) and it does the query in \n0.20 seconds.\n \nThanks for any insight.\nDavid.",
"msg_date": "Tue, 30 Sep 2003 13:25:50 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning/performance issue...."
},
{
"msg_contents": "David Griffiths wrote:\n\n> And finally,\n> \n> Here's the contents of the postgresql.conf file (I've been playing with \n> these setting the last couple of days, and using the guide @ \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html to \n> make sure I didn't have it mis-tuned):\n> \n> tcpip_socket = true\n> max_connections = 500 # We will need quite a few connections; \n> currently only one connection to database, however\n> port = 5432\n> shared_buffers = 5000 # I've tried 5000 to 80,000 with no \n> apparent difference\n> wal_buffers = 16\n> sort_mem = 256 # decreased this due to the large # of \n> connectiosn\n> effective_cache_size = 50000 # read that this can improve performance; \n> hasn't done anything.\n\nReading this whole thread, I think most of the improvement you would get would \nbe from rethinking your schema from PG point of view and examine each query.\n\nAfter you changed your last query as Tom suggested for explicit join, how much \nimprovement did it make? I noticed that you put \n'commercial_entity.commercial_entity_id=225528' as a second codition. Does it \nmake any difference to put it ahead in where clause list?\n\n HTH\n\n Shridhar\n\n",
"msg_date": "Wed, 01 Oct 2003 12:35:30 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue...."
}
] |
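A minimal sketch of experimenting with the settings discussed in this thread without restarting for every change: sort_mem and effective_cache_size can be overridden per session, while shared_buffers only takes effect via postgresql.conf plus a restart. The values below are placeholders, not recommendations.

-- Per-session tuning experiment (values are illustrative only):
SHOW shared_buffers;                -- fixed at server start; change it in postgresql.conf
SET sort_mem = 8192;                -- kilobytes available to each sort, per backend
SET effective_cache_size = 100000;  -- in 8 kB pages the OS is assumed to be caching
SHOW effective_cache_size;
-- Re-run EXPLAIN ANALYZE on the slow query after each change and compare the plans.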
[
{
"msg_contents": "\n Hello,\n\n While writing web application I found that it would\nbe very nice for me to have \"null\" WHERE clause. Like\nWHERE 1=1. Then it is easy to concat additional\nconditions just using $query . \" AND col=false\" syntax.\n\n But which of the possible \"null\" clauses is the fastest\none?\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Wed, 1 Oct 2003 15:11:30 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is the fastest null WHERE"
},
{
"msg_contents": "Mindaugas Riauba wrote:\n\n> Hello,\n> \n> While writing web application I found that it would\n> be very nice for me to have \"null\" WHERE clause. Like\n> WHERE 1=1. Then it is easy to concat additional\n> conditions just using $query . \" AND col=false\" syntax.\n> \n> But which of the possible \"null\" clauses is the fastest\n> one?\n\nRather than this approach, keep a flag which tells you whether or not it is \nfirst where condition. If it is not first where condition, add a 'and'. That \nwould be simple, isn't it?\n\n Shridhar\n\n",
"msg_date": "Wed, 01 Oct 2003 18:02:14 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the fastest null WHERE"
},
{
"msg_contents": "On Wednesday 01 October 2003 13:11, Mindaugas Riauba wrote:\n> Hello,\n>\n> While writing web application I found that it would\n> be very nice for me to have \"null\" WHERE clause. Like\n> WHERE 1=1. Then it is easy to concat additional\n> conditions just using $query . \" AND col=false\" syntax.\n>\n> But which of the possible \"null\" clauses is the fastest\n> one?\n\nI suspect WHERE true, but is it really necessary.\n\nMost languages will have a join() operator that lets you do something like:\n\n$where_cond = join(' AND ', @list_of_tests)\n\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 1 Oct 2003 13:39:01 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the fastest null WHERE"
},
{
"msg_contents": "On Wed, 2003-10-01 at 08:11, Mindaugas Riauba wrote:\n> While writing web application I found that it would\n> be very nice for me to have \"null\" WHERE clause. Like\n> WHERE 1=1. Then it is easy to concat additional\n> conditions just using $query . \" AND col=false\" syntax.\n> \n> But which of the possible \"null\" clauses is the fastest\n> one?\n\nWHERE true AND ....",
"msg_date": "Wed, 01 Oct 2003 08:51:46 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the fastest null WHERE"
},
{
"msg_contents": "> > While writing web application I found that it would\n> > be very nice for me to have \"null\" WHERE clause. Like\n> > WHERE 1=1. Then it is easy to concat additional\n> > conditions just using $query . \" AND col=false\" syntax.\n> >\n> > But which of the possible \"null\" clauses is the fastest\n> > one?\n>\n> I suspect WHERE true, but is it really necessary.\n\n Thanks. I'll use \"WHERE true\" for now. And of course it is\nnot necessary it just simplifies code a bit.\n\n> Most languages will have a join() operator that lets you do something\nlike:\n>\n> $where_cond = join(' AND ', @list_of_tests)\n\n That's not the case. Test may or may not be performed based on\nweb form values.\n\n Mindaugas\n\n",
"msg_date": "Wed, 1 Oct 2003 16:05:41 +0300",
"msg_from": "\"Mindaugas Riauba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What is the fastest null WHERE"
}
] |
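A runnable sketch of why any of the proposed "null" clauses costs essentially nothing: the planner folds constant boolean terms away before choosing a plan. The temporary table name is made up.

-- The constant "true" term is folded away during planning:
CREATE TEMP TABLE t (col boolean);
EXPLAIN SELECT * FROM t WHERE true AND col = false;
-- The plan's Filter line mentions only the col test; the leading "true AND" is gone.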
[
{
"msg_contents": "We have an opportunity to purchase a new, top-notch database server. I am\nwondering what kind of hardware is recommended? We're on Linux platforms and\nkernels though. I remember a comment from Tom about how he was spending a\nlot of time debugging problems which turned out to be hardware-related. I of\ncourse would like to avoid that.\n\nIn terms of numbers, we expect have an average of 100 active connections\n(most of which are idle 9/10ths of the time), with about 85% reading\ntraffic. I expect the database with flow average 10-20kBps under moderate\nload. I hope to have one server host about 1000-2000 active databases, with\nthe largest being about 60 meg (no blobs). Inactive databases will only be\nfor reading (archival) purposes, and will seldom be accessed.\n\nDoes any of this represent a problem for Postgres? The datasets are\ntypically not that large, only a few queries on a few databases ever return\nover 1000 rows. I'm worried about being able to handle the times when there\nwill be spikes in the traffic.\n\nThe configuration that is going on in my head is:\nRAID 1, 200gig\n1 server, 4g ram\nLinux 2.6\n\nI was also wondering about storage units (IBM FAStT200) with giga-bit\nEthernet to rack mount computer(s)... But would I need more than 1 CPU? If I\ndid, how would I handle the file system? We only do a few joins, so I think\nmost of it would be I/O latency.\n\nThanks!\n\n\nJason Hihn\nPaytime Payroll\n\n\n",
"msg_date": "Wed, 01 Oct 2003 11:13:15 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ideal Hardware?"
},
{
"msg_contents": "Jason,\n\nYour question is really suited to the PERFORMANCE list, not NOVICE, so I have \ncross-posted it there. I reccomend that you subscribe to performance, and \ndrop novice from your replies. There are lots of hardware geeks on \nperformance, but few on novice.\n\n> We have an opportunity to purchase a new, top-notch database server. I am\n> wondering what kind of hardware is recommended? We're on Linux platforms\n> and kernels though. I remember a comment from Tom about how he was spending\n> a lot of time debugging problems which turned out to be hardware-related. I\n> of course would like to avoid that.\n>\n> In terms of numbers, we expect have an average of 100 active connections\n> (most of which are idle 9/10ths of the time), with about 85% reading\n> traffic. I expect the database with flow average 10-20kBps under moderate\n> load. I hope to have one server host about 1000-2000 active databases, with\n> the largest being about 60 meg (no blobs). Inactive databases will only be\n> for reading (archival) purposes, and will seldom be accessed.\n\nIs that 100 concurrent connections *total*, or per-database? If the \nconnections are idle 90% of the time, then are they open, or do they get \nre-established with each query? Have you considered connection pooling for \nthe read-only queries?\n\n> Does any of this represent a problem for Postgres? The datasets are\n> typically not that large, only a few queries on a few databases ever return\n> over 1000 rows. I'm worried about being able to handle the times when there\n> will be spikes in the traffic.\n\nIt's all possible, it just requires careful application design and lots of \nhardware. You should also cost things out; sometimes it's cheaper to have \nseveral good servers instead of one uber-server. The latter also helps with \nhardware replacement.\n\n> The configuration that is going on in my head is:\n> RAID 1, 200gig\n\nRAID 1+0 can be good for Postgres. However, if you have a budget, RAID 5 \nwith 6 or more disks can be better some of the time, particularly when read \nqueries are the vast majority of the load. There are, as yet, no difinitive \nstatistics, but OSDL is working on it!\n\nMore important than the RAID config is the RAID card; once again, with money, \nmulti-channel RAID cards with a battery-backed write cache are your best bet; \nsome cards even allow you to span RAID1 between cards of the same model. See \nthe discussion about LSI MegaRaid in the PERFORMANCE list archives over the \nlast 2 weeks.\n\n> 1 server, 4g ram\n> Linux 2.6\n\nYou're very brave. Me, I'm not adopting 2.6 in production until 2.6.03 is \nout, at least.\n\n> I was also wondering about storage units (IBM FAStT200) with giga-bit\n> Ethernet to rack mount computer(s)... But would I need more than 1 CPU? If\n> I did, how would I handle the file system? We only do a few joins, so I\n> think most of it would be I/O latency.\n\nPostgreSQL will make use of multiple processors. If you are worried about \npeak time loads, having 2-4 processors to distribute queries across would be \nvery useful.\n\nAlso, I'm concerned about the \"we only do a few joins\". What that says to \nme is \"we don't really know how to write complex queries, so we pull a lot of \nredundant data.\" Simple queries can be far less efficient than complex ones \nif they result in you pulling entire tables across to the client.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 1 Oct 2003 10:38:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ideal Hardware?"
},
{
"msg_contents": "On Wed, 1 Oct 2003, Jason Hihn wrote:\n\n> We have an opportunity to purchase a new, top-notch database server. I am\n> wondering what kind of hardware is recommended? We're on Linux platforms and\n> kernels though.\n[...]\n> The configuration that is going on in my head is:\n> RAID 1, 200gig\n> 1 server, 4g ram\n> Linux 2.6\n\nI vaguely remember someone (Tom?) mentioning that one of the log\nfiles probably might want to go on its own partition. Sometime in\nthe last 2 weeks. I am not pushing dbase stuff here, but my\nsystem is about your size. About 120 GB of my disk is RAID\non a promiseware card, using the kernel software RAID (apparently\nsoftware RAID on Linux is faster than the promisecard does it\nin hardware). I have a bunch of different things using software\nRAID:\n /tmp is a RAID 0 with ext2\n /home is a RAID 5 with ext3\n /usr, /var, /usr/local is RAID 10 with ext3\n /var/lib/postgres is on a real SCSI 10k, on ext3 with noatime\n\nSo, my postgres isn't on the RAID(s). I just got finished\nrebuilding my RAIDs for the second time (failed disk). I ended\nup rebuilding things in single user mode, so I can't set tasks in\nparallel. I don't know if you can do this in multi-user mode\nand/or in parallel. I'm being paranoid. Rebuilding RAID 5 is\nfast, rebuilding RAID 1 is a pain in the butt! My biggest RAID 10\nis about 10 GB, bundling the new partition from the new disk into\nthe RAID 0 is fast, rebuilding the mirror (RAID 1 part) takes 10\nhours! Dual athlon 1.6's and 1 GB of RAM, so I have lots of\nhorsepower. Maybe you are going with better RAID than I have,\nbut it seems to me that RAID 5 (with spares) is going to be better\nif you ever have to rebuild.\n\nGord\n\n",
"msg_date": "Wed, 1 Oct 2003 18:19:21 -0600 (MDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ideal Hardware?"
},
{
"msg_contents": "Gord,\n\n> I vaguely remember someone (Tom?) mentioning that one of the log\n> files probably might want to go on its own partition. \n\nThat's general knowledge, but not really applicable to a fast RAID system. \nIt's more imporant to regular-disk systems; with 4+ disk RAID, nobody has \nbeen able to demonstrate a gain from having the disk separation.\n\n> fast, rebuilding RAID 1 is a pain in the butt! My biggest RAID 10\n> is about 10 GB, bundling the new partition from the new disk into\n> the RAID 0 is fast, rebuilding the mirror (RAID 1 part) takes 10\n> hours! Dual athlon 1.6's and 1 GB of RAM, so I have lots of\n> horsepower. Maybe you are going with better RAID than I have,\n> but it seems to me that RAID 5 (with spares) is going to be better\n> if you ever have to rebuild.\n\nAlso depends on the number of disks, the controller, and the balance of read \nvs. write activity. I've found RAID 5 with no cache to be dog-slow for OLTP \n(heavy write transaction) databases, and use RAID 1 for that.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 1 Oct 2003 18:10:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ideal Hardware?"
},
{
"msg_contents": "On Wed, 2003-10-01 at 10:13, Jason Hihn wrote:\n> We have an opportunity to purchase a new, top-notch database server. I am\n> wondering what kind of hardware is recommended? We're on Linux platforms and\n> kernels though. I remember a comment from Tom about how he was spending a\n> lot of time debugging problems which turned out to be hardware-related. I of\n> course would like to avoid that.\n> \n> In terms of numbers, we expect have an average of 100 active connections\n> (most of which are idle 9/10ths of the time), with about 85% reading\n> traffic. I expect the database with flow average 10-20kBps under moderate\n> load. I hope to have one server host about 1000-2000 active databases, with\n> the largest being about 60 meg (no blobs). Inactive databases will only be\n> for reading (archival) purposes, and will seldom be accessed.\n\nWhoever mentioned using multiple servers instead of one uber-server\nis very right. You're putting all your eggs in one basket that way,\nand unless that \"basket\" has hot-swap CPUs, memory boards, etc, etc,\nthen if you have a hardware problem, your whole business goes down.\n\nBuy 3 or 4 smaller systems, and distribute any possible pain from\ndown time.\n\nIt seems like I'm going to contravene what I just said about eggs\nin a basket when I suggest that the disks could possibly be concen-\ntrated into a NAS, so that you could get 1 big, honkin fast *hot-\nswappable* (dual-redundant U320 storage controllers w/ 512MB battery-\nbacked cache each, for a total of 1GB cache are easily available) \ndisk subsystem for however many smaller CPU-boxes you get. (They \ncould be kept un-shared by making separate partitions, and each \nmachine only mounts one partition.)\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Adventure is a sign of incompetence\"\nStephanson, great polar explorer\n\n",
"msg_date": "Thu, 02 Oct 2003 04:28:43 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "basket, eggs & NAS (was eggs Re: Ideal Hardware?)"
}
] |
[
{
"msg_contents": "Hi,\n\tIn some situations, it looks like the optimizer does not chose \nefficient paths for joining inherited tables. For I created a \nrather trivial formulation to serve as an example. I created the table \n'numbers' comprising of the columns id (int) and value (text). I also \ncreated the table 'evens' and 'odds' that inherit numbers, with no \nadditional columns. Into 'evens' I placed 50000 entries, each one with \nan even (unique) id and random 'value'. Likewise, for 'odds', I created \n50000 odd (and unique) id fields id fields and random 'value', and \ncreated index on all ID fields of every table that has any rows (and \nanalyzed).\n\nso.. my tables look like this:\n\n Table \"public.numbers\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n value | text |\n\n Table \"public.evens\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n value | text | \nIndexes:\n \"ei\" btree (id)\nInherits: numbers\n\n Table \"public.odds\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n value | text | \nIndexes:\n \"oi\" btree (id)\nInherits: numbers\n\n\nAs per the above construction, 'evens' and 'odds' both have 50000 \nrows. 'numbers' contains none.\n\n\n\nNow, I created a trivial query that would use 'numbers' as an inheritor \ntable in a join (a very stupid one, but a join nevertheless) as \nfollows, which produces a terrible, but correct, plan:\n\nselect value from (SELECT 1::integer as id) as ids JOIN numbers on \n(numbers.id = ids.id);\n QUERY PLAN \n--------------------------------------------------------------------------- \nHash Join (cost=0.02..2195.79 rows=501 width=19)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=0.00..1690.50 rows=100051 width=23)\n -> Seq Scan on numbers (cost=0.00..0.00 rows=1 width=23)\n -> Seq Scan on evens numbers (cost=0.00..845.25 rows=50025 width=23)\n -> Seq Scan on odds numbers (cost=0.00..845.25 rows=50025 width=23)\n -> Hash (cost=0.02..0.02 rows=1 width=4)\n -> Subquery Scan ids (cost=0.00..0.02 rows=1 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n\n\n\n\n Now, I subsitute 'evens' for 'numbers', so I am now joining with a normal, \nnon-inherited table. The plan is much better:\n\nselect value from (SELECT 1::integer as id) as ids JOIN evens on \n(evens.id = ids.id);\n\n QUERY PLAN \n-----------------------------------------------------------------------\n Nested Loop (cost=0.00..3.05 rows=2 width=19)\n -> Subquery Scan ids (cost=0.00..0.02 rows=1 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Index Scan using ei on evens (cost=0.00..3.01 rows=1 width=23)\n Index Cond: (evens.id = \"outer\".id)\n\n\nI would think that the ideal plan for the first query should be a nested \nloop like the second that considers indexes where possible, and it would \nlook as follows:\n\n\t\t\tHYPOTHETICAL PLAN\n------------------------------------------------------------------------\n Nested Loop\n -> Subquery Scan ids\n -> Result\n -> Append\n -> Seq Scan on numbers\n -> Index Scan using ei on evens\n Index Cond: (evens.id = \"outer.id\")\n -> Index Scan using oi on odds\n Index Cond: (odds.id = \"outer.id\")\n \nSo.. Why wouldn't postgres use such a plan? 
I could think of three \nreasons:\n\t- The planner wasn't considering this plan due to some\n\t fault of its own\n\t- That plan makes no sense and would not be able to be run in the\n\t executor, and therefore it would be wasteful to consider it.\n\t- It truly is more expensive than a hash join\n\nI've pretty much ruled out the third and I suspect the second is also \nuntrue (though have not proven that to myself), leaving the first. If it \nis indeed the second, that the plan makes no sense, someone please let me \nknow!\n\nOK, so I took a look into the optimizer and over time got a better \nunderstanding of what's going on, though I still don't understand it \ncompletely. Here's my theory as to what's happening:\n\nFor this query, most of the path consideration takes place in \nmatch_unsorted_outer() in path/joinpath.c. For the 'good' query against \nthe non-inherited 'evens' table, the good plan is generated in the line:\n\n\tbestinnerjoin = best_inner_indexscan(...);\n\nSince an inherited table doesn't have one single index over all its \ninherited tables, this call produces a null bestinnerjoin. \n\nLater on in match_unsorted_inner(), various access paths are considered \nfor a nested loop. One is bestinnerjoin (when it exists), and that is \nhow the 'good' query gets its nested loop with an index scan. \n\nOther paths considered for inclusion in the nested loop are \n'inner_cheapest_total' and 'inner_cheapest_startup'; These plans, \npresumably, contain sequential scans, which are expensive enough in the \nnested loop that the nested loop plan is suboptimal compared to a hash \njoin, which is what happens in the 'bad' query that joins against the \ninheritor table.\n\nNow, it seems to me as if the solution would be one of the following:\n\t\n\t- create a new bestinnerjoin plan when best_inner_indexscan \n\t returned null, yet the current relation is an inheritor.\n This bestinnerjoin plan will use the optimal Append plan\n that comprises of the optimal plans for the inherited\n tables that are relevant for the joins (as in the\n\t case of the hypothetical query above where the append consists\n of a number of index and sequential scans)\n\n\t- Consider the optimal Append path relevant to the join\n later on when considering paths for use in a nested loop.\n\t i.e. inner_cheapest_total or inner_cheapest_startup\n\t will have to be this good append plan.\n\nFor 2, the problem seems to be that in creating the inner_cheapest_total, \njoininfo nodes for each child relation do not exist. I experimented\nby just copying the parents joininfo nodes to each child, and it looked \nlike it was considering index scans somewhere along the line, but the \nfinal plan chosen did not use index scans. I didn't see where it \nfailed. I'm kinda skeptical of 3 anyway, because the \ninner_cheapest_total in the 'good' query with out inheritance does not \nappear to involve index scans at all.\n\nSo.. does anybody have any advice? Am I totally off base with \nmy assertions that a better plan can exist when joining inherited \ntables? Does my analysis of the problem and possible solutions in the \noptimizer make any sense? I basically pored over the source code, read \nthe documentation, and did little tests here and there to arrive at my \nconclusions, but cannot claim to have a good global view of how \neverything works. \n\n\tThanks very much\n\n\t\t-Aaron\n\n",
"msg_date": "Wed, 1 Oct 2003 12:14:13 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Joins on inherited tables"
},
{
"msg_contents": "[email protected] writes:\n> So.. does anybody have any advice?\n\nLook at set_inherited_rel_pathlist() in allpaths.c --- it forms the best\nplan for fully scanning the inheritance-tree table. Currently that's\nthe *only* plan considered, and it does not make any use of join\nclauses. It's possible that something could be done with providing a\nsimilar routine for best inner indexscans taken across the whole\ninheritance tree. You'd have to figure out how to locate the applicable\njoinclauses though. I think they'd be attached to the inheritance-tree\nparent relation and not to the individual inheritance tree member\nrelations. Also, if you are wondering why best_inner_indexscan() is so\ntense about caching its results, that's because it gets called *a lot*\nin large join problems. If you don't achieve a similar level of\nefficiency then you'll be seeing some nasty performance problems in\nlarger queries.\n\nI think you'd find there is executor work to do as well; I'm not sure\nhow the outer-relation values would get propagated down into the\nindexscans when there's an Append node between. Maybe the existing code\nwould Just Work, but I bet not.\n\n<digression>\n\nNot sure if this will help you, but:\n\nOnce upon a time the planner did the APPEND for an inheritance tree at\nthe top of the plan not the bottom. (It still does when the tree is the\ntarget of an update/delete query.) In 7.0 for example I get a plan like\nthis:\n\ncreate table pt (f1 int primary key);\ncreate table ct1 (f2 int) inherits (pt);\ncreate table ct2 (f2 int) inherits (pt);\ncreate index ct1i on ct1(f1);\ncreate table bar(f1 int);\n\nexplain select * from pt*, bar where pt.f1 = bar.f1;\nNOTICE: QUERY PLAN:\n\nAppend (cost=69.83..474.33 rows=30000 width=8)\n -> Merge Join (cost=69.83..154.83 rows=10000 width=8)\n -> Index Scan using pt_pkey on pt (cost=0.00..60.00 rows=1000 width=4)\n -> Sort (cost=69.83..69.83 rows=1000 width=4)\n -> Seq Scan on bar (cost=0.00..20.00 rows=1000 width=4)\n -> Merge Join (cost=69.83..154.83 rows=10000 width=8)\n -> Index Scan using ct1i on ct1 pt (cost=0.00..60.00 rows=1000 width=4)\n -> Sort (cost=69.83..69.83 rows=1000 width=4)\n -> Seq Scan on bar (cost=0.00..20.00 rows=1000 width=4)\n -> Merge Join (cost=139.66..164.66 rows=10000 width=8)\n -> Sort (cost=69.83..69.83 rows=1000 width=4)\n -> Seq Scan on bar (cost=0.00..20.00 rows=1000 width=4)\n -> Sort (cost=69.83..69.83 rows=1000 width=4)\n -> Seq Scan on ct2 pt (cost=0.00..20.00 rows=1000 width=4)\n\nwhereas the same test in CVS tip produces\n\n QUERY PLAN\n----------------------------------------------------------------------------\n Merge Join (cost=303.09..353.09 rows=3000 width=8)\n Merge Cond: (\"outer\".f1 = \"inner\".f1)\n -> Sort (cost=69.83..72.33 rows=1000 width=4)\n Sort Key: bar.f1\n -> Seq Scan on bar (cost=0.00..20.00 rows=1000 width=4)\n -> Sort (cost=233.26..240.76 rows=3000 width=4)\n Sort Key: public.pt.f1\n -> Append (cost=0.00..60.00 rows=3000 width=4)\n -> Seq Scan on pt (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on ct1 pt (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on ct2 pt (cost=0.00..20.00 rows=1000 width=4)\n\nThe fact that 7.0 could actually adapt to different index sets for\ndifferent child tables was kinda cool, but the append-at-the-top\nstrategy failed completely for outer joins, so we had to abandon it.\nIn practice I think the generated plan was usually worse anyway (note\nthat bar gets scanned three times in 7.0's plan), but for the specific\ncase where the 
inheritance tree is on the inside of a nestloop that\ncould be indexed, the new approach is not as good. If you can come up\nwith a fix that doesn't break things in other respects, it'd be great.\n\n[digs in CVS logs] The patch that altered the APPEND-at-the-top\nbehavior was this one:\n\n2000-11-11 19:36 tgl\n\n\t* src/: backend/commands/command.c, backend/commands/copy.c,\n\tbackend/commands/explain.c, backend/executor/execMain.c,\n\tbackend/executor/execQual.c, backend/executor/execTuples.c,\n\tbackend/executor/execUtils.c, backend/executor/functions.c,\n\tbackend/executor/nodeAppend.c, backend/executor/nodeSeqscan.c,\n\tbackend/nodes/copyfuncs.c, backend/nodes/equalfuncs.c,\n\tbackend/nodes/outfuncs.c, backend/nodes/readfuncs.c,\n\tbackend/optimizer/path/allpaths.c,\n\tbackend/optimizer/path/pathkeys.c,\n\tbackend/optimizer/plan/createplan.c,\n\tbackend/optimizer/plan/planmain.c,\n\tbackend/optimizer/plan/planner.c,\n\tbackend/optimizer/prep/prepunion.c,\n\tbackend/optimizer/util/pathnode.c,\n\tbackend/optimizer/util/relnode.c, backend/parser/parse_clause.c,\n\tbackend/tcop/pquery.c, include/catalog/catversion.h,\n\tinclude/executor/executor.h, include/executor/tuptable.h,\n\tinclude/nodes/execnodes.h, include/nodes/nodes.h,\n\tinclude/nodes/parsenodes.h, include/nodes/plannodes.h,\n\tinclude/nodes/relation.h, include/optimizer/pathnode.h,\n\tinclude/optimizer/planmain.h, include/optimizer/planner.h,\n\tinclude/optimizer/prep.h, backend/optimizer/README: Restructure\n\thandling of inheritance queries so that they work with outer joins,\n\tand clean things up a good deal at the same time. Append plan node\n\tno longer hacks on rangetable at runtime --- instead, all child\n\ttables are given their own RT entries during planning.\tConcept of\n\tmultiple target tables pushed up into execMain, replacing bug-prone\n\timplementation within nodeAppend. Planner now supports generating\n\tAppend plans for inheritance sets either at the top of the plan\n\t(the old way) or at the bottom. Expanding at the bottom is\n\tappropriate for tables used as sources, since they may appear\n\tinside an outer join; but we must still expand at the top when the\n\ttarget of an UPDATE or DELETE is an inheritance set, because we\n\tactually need a different targetlist and junkfilter for each target\n\ttable in that case. Fortunately a target table can't be inside an\n\touter join... Bizarre mutual recursion between union_planner and\n\tprepunion.c is gone --- in fact, union_planner doesn't really have\n\tmuch to do with union queries anymore, so I renamed it\n\tgrouping_planner.\n\nNot sure if studying that diff would teach you anything useful, but\nthere it is...\n\n</digression>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Oct 2003 13:14:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joins on inherited tables "
},
{
"msg_contents": "OK, so I've had a bit of time to look things over, and appear to be \nmaking headway. Here's how things stand right now:\n\nI added a function called best_inner_scan used the same way as \nbest_inner_indexscan, but it's a bit more generalized in the sense that \nit can make append plans comprising of the best scans for each \nconstituent table (it makes calls to best_inner_indexscan for each \nchild), or just return the best simple index scan (or null) for plain \nrelations.\n\nIn order to make that work, I gave the child tables modified join \nclauses from the parent... modified in the sense that I had to make the \noperands match the inherited child table (they would match the parent \notherwise if I were to simply copy the Joininfo nodes, and thus fail in \nfinding an appropriate index in the child table). I'm not entirely \ncomfortable with that solution yet, as I'm not absolutely certain those \nadditonal modified join clauses wont' affect something else in the code \nthat I'm not aware of, but it appears to be having the desired effect.\n\nBasically, with optimizer debug enabled, I'm getting plans that look like \nthis (with the same queries as before) \n\nRELOPTINFO (1 2): rows=501 width=19\n\tcheapest total path:\n\tNestLoop(1 2) rows=501 cost=0.00..1253.67\n\t clauses: numbers.id = ids.id\n\t\tSeqScan(1) rows=1 cost=0.00..0.02\n\t\tAppend(2) rows=100051 cost=0.00..3.01\n\nAs opposed to this:\n\nRELOPTINFO (1 2): rows=501 width=19\n cheapest total path:\n HashJoin(1 2) rows=501 cost=0.00..2195.79\n clauses: numbers.id = ids.id\n SeqScan(1) rows=1 cost=0.00..0.02\n Append(2) rows=100051 cost=0.00..1690.50\n\nThe total cost seems a high for the nestloop.. its constituents are \ncertainly cheap. I need to look to see if I missed keeping track of \ncosts somewhere.\n\nWhen I EXPLAIN, though, I get an error from the executor:\n\"ERROR: both left and right operands are rel-vars\". I haven't looked \ninto that yet, but the results so far are encouraging enough to press on \nand get this completed. \n\nThere was one hairy part, though, which will have to be addressed at some \nlater point: Right now there is a boolean 'inh' in the RangeTblEntry \nstruct which indicates \"inheritance requested\". When the inheritance root \nis first expanded by expand_inherited_rtentry(), the rte->inh is \nnulled in order to prevent expansion of an UPDATE/DELETE target. This \npresented problems for me when I wanted to detect which relation was an \ninheritor one in order to expand it into the append path. For my testing \npurposes, I just commented out the line, but for a real \nsolution, that's not an acceptable solution and some struct might have to \nbe changed slightly in order to convey the inheritance knowledge.. \n\nSo, I guess the next step is to see what the executor is complaining \nabout and see if it's something that would need attention in the executor \nof if it's something I did wrong.. If everything appears to work after \nthat point, then I'll check for efficiency and use of cache in generating \nthe inner scan plans.\n\nThanks for the advice and historical perspective so far, Tom. It has been \nquite helpful. \n\n\t-Aaron\n\n",
"msg_date": "Fri, 3 Oct 2003 16:21:48 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Joins on inherited tables "
}
] |
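For anyone wanting to reproduce the plans discussed in this thread, here is a sketch of the test schema reconstructed from the first message; loading the 50,000 rows into each child table is omitted, and ANALYZE is only meaningful after that load.

-- Reconstruction of the test case described above (population step omitted):
CREATE TABLE numbers (id integer, value text);
CREATE TABLE evens () INHERITS (numbers);
CREATE TABLE odds  () INHERITS (numbers);
CREATE INDEX ei ON evens (id);
CREATE INDEX oi ON odds (id);
-- after loading evens and odds: ANALYZE evens; ANALYZE odds;

-- The join whose plan is under discussion:
SELECT value
  FROM (SELECT 1::integer AS id) AS ids
  JOIN numbers ON (numbers.id = ids.id);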
[
{
"msg_contents": "Jeff,\nI would really appreciate if you could send me that lengthy presentation\nthat you've written on pg/other dbs comparison.\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: Jeff [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 6:23 AM\nTo: David Griffiths\nCc: [email protected]\nSubject: Re: [PERFORM] Tuning/performance issue...\nImportance: Low\n\n\nOn Tue, 30 Sep 2003, David Griffiths wrote:\n\n>\n> This is all part of a \"migrate away from Oracle\" project. We are \n> looking at 3 databases - MySQL (InnoDB), Postgres and Matisse (object \n> oriented). We have alot of queries like this\n> or worse, and I'm worried that many of them would need to be\nre-written. The\n> developers\n> know SQL, but nothing about tuning, etc.\n>\n\nThere's a movement at my company to ditch several commercial db's in\nfavor of a free one. I'm currently the big pg fan around here and I've\nactually written a rather lengthy presentation about pg features, why,\ntuning, etc. but another part was some comparisons to other db's..\n\nI decided so I wouldn't be blinding flaming mysql to give it a whirl and\nloaded it up with the same dataset as pg. First thing I hit was lack of\nstored procedures. But I decided to code around that, giving mysql the\nbenefit of the doubt. What I found was interesting.\n\nFor 1-2 concurrent\n'beaters' it screamed. ultra-fast. But.. If you increase the concurrent\nbeaters up to say, 20 Mysql comes to a grinding halt.. Mysql and the\nmachine itself become fairly unresponsive. And if you do cache\nunfriendly\nqueries it becomes even worse. On PG - no problems at all. Scaled fine\nand dandy up. And with 40 concurrent beaters the machine was still\nresponsive. (The numbers for 20 client was 220 seconds (pg) and 650\nseconds (mysql))\n\nSo that is another test to try out - Given your configuration I expect\nyou have lots of concurrent activity.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if\nyour\n joining column's datatypes do not match\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Wed, 1 Oct 2003 10:51:52 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "On Wed, 1 Oct 2003, Oleg Lebedev wrote:\n\n> Jeff,\n> I would really appreciate if you could send me that lengthy presentation\n> that you've written on pg/other dbs comparison.\n> Thanks.\n>\n\nAfter I give the presentation at work and collect comments from my\ncoworkers (and remove some information you folks don't need to know :) I\nwill be very willing to post it for people to see.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 1 Oct 2003 13:42:15 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "\nI have updated the FAQ to be:\n\n In comparison to MySQL or leaner database systems, we are\n faster for multiple users, complex queries, and a read/write query\n load. MySQL is faster for SELECT queries done by a few users. \n\nIs this accurate? It seems so.\n\n---------------------------------------------------------------------------\n\nOleg Lebedev wrote:\n> Jeff,\n> I would really appreciate if you could send me that lengthy presentation\n> that you've written on pg/other dbs comparison.\n> Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: Jeff [mailto:[email protected]] \n> Sent: Wednesday, October 01, 2003 6:23 AM\n> To: David Griffiths\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Tuning/performance issue...\n> Importance: Low\n> \n> \n> On Tue, 30 Sep 2003, David Griffiths wrote:\n> \n> >\n> > This is all part of a \"migrate away from Oracle\" project. We are \n> > looking at 3 databases - MySQL (InnoDB), Postgres and Matisse (object \n> > oriented). We have alot of queries like this\n> > or worse, and I'm worried that many of them would need to be\n> re-written. The\n> > developers\n> > know SQL, but nothing about tuning, etc.\n> >\n> \n> There's a movement at my company to ditch several commercial db's in\n> favor of a free one. I'm currently the big pg fan around here and I've\n> actually written a rather lengthy presentation about pg features, why,\n> tuning, etc. but another part was some comparisons to other db's..\n> \n> I decided so I wouldn't be blinding flaming mysql to give it a whirl and\n> loaded it up with the same dataset as pg. First thing I hit was lack of\n> stored procedures. But I decided to code around that, giving mysql the\n> benefit of the doubt. What I found was interesting.\n> \n> For 1-2 concurrent\n> 'beaters' it screamed. ultra-fast. But.. If you increase the concurrent\n> beaters up to say, 20 Mysql comes to a grinding halt.. Mysql and the\n> machine itself become fairly unresponsive. And if you do cache\n> unfriendly\n> queries it becomes even worse. On PG - no problems at all. Scaled fine\n> and dandy up. And with 40 concurrent beaters the machine was still\n> responsive. (The numbers for 20 client was 220 seconds (pg) and 650\n> seconds (mysql))\n> \n> So that is another test to try out - Given your configuration I expect\n> you have lots of concurrent activity.\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\n> your\n> joining column's datatypes do not match\n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended for the named recipient only.\n> If you are not the named recipient, delete this message and all attachments.\n> Unauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\n> We reserve the right to monitor e-mail sent through our network. \n> \n> *************************************\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 3 Oct 2003 21:39:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "On Fri, 2003-10-03 at 21:39, Bruce Momjian wrote:\n> I have updated the FAQ to be:\n> \n> In comparison to MySQL or leaner database systems, we are\n> faster for multiple users, complex queries, and a read/write query\n> load. MySQL is faster for SELECT queries done by a few users. \n> \n> Is this accurate? It seems so.\n\nMay wish to say ... for simple SELECT queries ...\n\nSeveral left outer joins, subselects and a large number of joins are\nregularly performed faster in PostgreSQL due to a more mature optimizer.\n\nBut MySQL can pump out SELECT * FROM table WHERE key = value; queries in\na hurry.\n\n\nI've often wondered if they win on those because they have a lighter\nweight parser / optimizer with less \"lets try simplifying this query\"\nsteps or if the MYISAM storage mechanism is simply quicker at pulling\ndata off the disk.",
"msg_date": "Fri, 03 Oct 2003 22:22:00 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (Bruce Momjian) would write:\n> I have updated the FAQ to be:\n>\n> In comparison to MySQL or leaner database systems, we are\n> faster for multiple users, complex queries, and a read/write query\n> load. MySQL is faster for SELECT queries done by a few users. \n>\n> Is this accurate? It seems so.\n\nI would think it more accurate if you use the phrase \"faster for\nsimple SELECT queries.\"\n\nMySQL uses a rule-based optimizer which, when the data fits the rules\nwell, can pump queries through lickety-split without any appreciable\npause for evaluation (or reflection :-). That's _quite_ a successful\nstrategy when users are doing what loosely amounts to evaluating\nassociation tables.\n\nselect * from table where key = value;\n\nWhich is just like tying a Perl variable to a hash table, and doing\n $value = $TABLE{$key};\n\nIn web applications where they wanted something a _little_ more\nstructured than hash tables, that may 'hit the spot.'\n\nAnything hairier than that gets, of course, hairier. If you want\nsomething that's TRULY more structured, you may lose a lot of hair\n:-).\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/oses.html\n\"If you want to talk with some experts about something, go to the bar\nwhere they hang out, buy a round of beers, and they'll surely talk\nyour ear off, leaving you wiser than before.\n\nIf you, a stranger, show up at the bar, walk up to the table, and ask\nthem to fax you a position paper, they'll tell you to call their\noffice in the morning and ask for a rate sheet.\" -- Miguel Cruz\n",
"msg_date": "Fri, 03 Oct 2003 22:37:53 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "Rod Taylor wrote:\n-- Start of PGP signed section.\n> On Fri, 2003-10-03 at 21:39, Bruce Momjian wrote:\n> > I have updated the FAQ to be:\n> > \n> > In comparison to MySQL or leaner database systems, we are\n> > faster for multiple users, complex queries, and a read/write query\n> > load. MySQL is faster for SELECT queries done by a few users. \n> > \n> > Is this accurate? It seems so.\n> \n> May wish to say ... for simple SELECT queries ...\n\nUpdated.\n\n> Several left outer joins, subselects and a large number of joins are\n> regularly performed faster in PostgreSQL due to a more mature optimizer.\n> \n> But MySQL can pump out SELECT * FROM table WHERE key = value; queries in\n> a hurry.\n> \n> \n> I've often wondered if they win on those because they have a lighter\n> weight parser / optimizer with less \"lets try simplifying this query\"\n\nI think that is part of it.\n\n> steps or if the MYISAM storage mechanism is simply quicker at pulling\n> data off the disk.\n\nAnd their heap is indexed by myisam, right. I know with Ingres that Isam\nwas usually faster than btree because you didn't have all those leaves\nto traverse to get to the data.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 3 Oct 2003 22:38:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n> I've often wondered if they win on those because they have a lighter\n> weight parser / optimizer with less \"lets try simplifying this query\"\n> steps or if the MYISAM storage mechanism is simply quicker at pulling\n> data off the disk.\n\nComparing pre-PREPAREd queries would probably tell something about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Oct 2003 23:58:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue... "
},
{
"msg_contents": "On Fri, 3 Oct 2003, Bruce Momjian wrote:\n\n>\n> I have updated the FAQ to be:\n>\n> In comparison to MySQL or leaner database systems, we are\n> faster for multiple users, complex queries, and a read/write query\n> load. MySQL is faster for SELECT queries done by a few users.\n>\n> Is this accurate? It seems so.\n>\n>\n\nAnother thing I noticed - If you use a dataset that can live in mysql's\nquery cache / os cache it screams, until it has to hit the disk. then\nGRINDING HALT.\n\nIt would be nice if someone (I don't have the time now) did a comparison\nof say:\nselct value where name = XXX; [where xxx varies] with 1,10,20,50\nconnections\n\nthen make progressively more complex queries. And occasionally point out\nmysql silly omissions:\nselect * from myview where id = 1234\n[Oh wait! mysql doesn't have views. Ooopsy!]\n\nWrapping up - PG is not that slow for simple queries either. It can be\nrather zippy - and PREPARE can give HUGE gains - even for simple\nstatements. I've often wondered if YACC, etc is a bottleneck (You can\nonly go as fast as your parser can go).\n\nHurray for PG!\n\nAnd I'm giving my PG presentation monday. I hope to post it tuesday after\nI update with comments I receive and remove confidential information.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Sat, 4 Oct 2003 08:56:35 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning/performance issue..."
}
] |
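PREPARE comes up twice above as a way to strip parse/plan overhead from simple lookups; this is a self-contained sketch of what that looks like, with a made-up temporary table standing in for the key/value case being benchmarked.

-- Prepared-statement sketch for a simple key lookup (table and data are invented):
CREATE TEMP TABLE kv (id integer PRIMARY KEY, val text);
INSERT INTO kv VALUES (1234, 'example');

PREPARE lookup (integer) AS
    SELECT val FROM kv WHERE id = $1;

EXECUTE lookup(1234);   -- parsed and planned once, executed as often as needed
DEALLOCATE lookup;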
[
{
"msg_contents": "Tom Lane wrote:\n\n> When benchmarking with data sets considerably larger than available\n> buffer cache, I rather doubt that small random_page_cost would be a \n> good idea. Still, you might as well experiment to see.\n\n From experience, I know the difference in response time can be huge when postgres incorrectly\nchooses a sequential scan over an index scan. In practice, do people experience as great a\ndifference when postgres incorrectly chooses an index scan over a sequential scan? My intuition\nis that the speed difference is a lot less for incorrectly choosing an index scan. If this is the\ncase, it would be safer to chose a small value for random_page_cost. \n\nGeorge Essig\n",
"msg_date": "Wed, 1 Oct 2003 09:55:53 -0700 (PDT)",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
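George's question about how risky a small random_page_cost is can be explored per session before changing postgresql.conf; a sketch, with an illustrative value and a made-up table name:

    SHOW random_page_cost;          -- 4 is the shipped default
    SET random_page_cost = 2;       -- affects only this session
    EXPLAIN ANALYZE SELECT * FROM bigtable WHERE indexed_col = 12345;
    SET random_page_cost TO DEFAULT;

Comparing the plans and timings at a few settings, with both a warm and a cold cache, shows which side of the trade-off actually hurts more on a given box.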
[
{
"msg_contents": "The output of the query should contain about 200 rows. So, I guess the\nplaner is off assuming that the query should return 1 row.\n\nI will start EXPLAIN ANALYZE now.\n\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 7:23 AM\nTo: Oleg Lebedev\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOn Tue, 30 Sep 2003, Oleg Lebedev wrote:\n\n> I continue struggling with the TPC-R benchmarks and wonder if anyone \n> could help me optimize the query below. ANALYZE statistics indicate \n> that the query should run relatively fast, but it takes hours to \n> complete. I attached the query plan to this posting. Thanks.\n\nWhat are the differences between estimated and real rows and such of an \nexplain analyze on that query? Are there any estimates that are just\nway \noff?\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Wed, 1 Oct 2003 11:29:21 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> The output of the query should contain about 200 rows. So, I guess the\n> planer is off assuming that the query should return 1 row.\n\nOh, also did you post the query before? Can you re-post it with the planner \nresults?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 1 Oct 2003 10:41:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Hi all,\n\nI haven't found any official documentation about the postgres sql optimizer\non the web, so please forgive me if there is such a document and point me to\nthe right direction.\n\nI've got the following problem: I cannot make the postgres SQL Optimizer use\nan index on a date field to filter out a date range, e.g.\n\nselect * from mytable where mydate >= '2003-10-01';\n\n Seq Scan on mytable (cost=0.00..2138.11 rows=12203 width=543)\n Filter: (mydate >= '2003-09-01'::date)\n\n\nthe index is created as follows:\n\ncreate index query on mytable(mydate);\n\nTesting for equality gives me the index optimization:\n\nselect * from mytable where mydate = '2003-10-01';\n\nIndex Scan using query on mytable (cost=0.00..54.93 rows=44 width=543)\n Index Cond: (mydate = '2003-09-01'::date)\n\n\nI have run vacuum analyze on the table. Also the table contains 25.000\nrecords, so the index should be used in my opinion. Am I missing something ?\nThe\nsame seems to apply to integers. \n\nThank you very much in advance\nDimi\n\nPS The postgres version is as follows:\n\n PostgreSQL 7.3.2 on i386-redhat-linux-gnu, compiled by GCC\ni386-redhat-linux-gcc (GCC) 3.2.2 20030213 (Red Hat Linux 8.0 3.2.2-1)\n\n\n\n-- \nNEU F�R ALLE - GMX MediaCenter - f�r Fotos, Musik, Dateien...\nFotoalbum, File Sharing, MMS, Multimedia-Gru�, GMX FotoService\n\nJetzt kostenlos anmelden unter http://www.gmx.net\n\n+++ GMX - die erste Adresse f�r Mail, Message, More! +++\n\n",
"msg_date": "Wed, 1 Oct 2003 19:30:47 +0200 (MEST)",
"msg_from": "\"Dimitri Nagiev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "On Wed, 2003-10-01 at 13:30, Dimitri Nagiev wrote:\n> Hi all,\n> \n> I haven't found any official documentation about the postgres sql optimizer\n> on the web, so please forgive me if there is such a document and point me to\n> the right direction.\n> \n> I've got the following problem: I cannot make the postgres SQL Optimizer use\n> an index on a date field to filter out a date range, e.g.\n> \n> select * from mytable where mydate >= '2003-10-01';\n> \n> Seq Scan on mytable (cost=0.00..2138.11 rows=12203 width=543)\n> Filter: (mydate >= '2003-09-01'::date)\n\nEXPLAIN ANALYZE output please.",
"msg_date": "Wed, 01 Oct 2003 13:38:02 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "here goes the EXPLAIN ANALYZE output:\n\n\ntemplate1=# VACUUM analyze mytable;\nVACUUM\ntemplate1=# explain analyze select * from mytable where\nmydate>='2003-09-01';\n QUERY PLAN\n \n\n---------------------------------------------------------------------------------------------------------------\n Seq Scan on mytable (cost=0.00..2209.11 rows=22274 width=562) (actual\ntime=0.06..267.30 rows=22677 loops=1)\n Filter: (mydate >= '2003-09-01'::date)\n Total runtime: 307.71 msec\n(3 rows)\n\n\ntemplate1=# explain analyze select * from mytable where mydate='2003-09-01';\n QUERY PLAN\n \n\n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using mytable_query on mytable (cost=0.00..148.56 rows=43\nwidth=562) (actual time=41.22..41.27 rows=4 loops=1)\n Index Cond: (mydate = '2003-09-01'::date)\n Total runtime: 41.34 msec\n(3 rows)\n\n\n\n> On Wed, 2003-10-01 at 13:30, Dimitri Nagiev wrote:\n> > Hi all,\n> > \n> > I haven't found any official documentation about the postgres sql\n> optimiz\n> er\n> > on the web, so please forgive me if there is such a document and point\n> me\n> to\n> > the right direction.\n> > \n> > I've got the following problem: I cannot make the postgres SQL Optimizer\n> use\n> > an index on a date field to filter out a date range, e.g.\n> > \n> > select * from mytable where mydate >= '2003-10-01';\n> > \n> > Seq Scan on mytable (cost=0.00..2138.11 rows=12203 width=543)\n> > Filter: (mydate >= '2003-09-01'::date)\n> \n> EXPLAIN ANALYZE output please.\n> \n\n-- \nNEU F�R ALLE - GMX MediaCenter - f�r Fotos, Musik, Dateien...\nFotoalbum, File Sharing, MMS, Multimedia-Gru�, GMX FotoService\n\nJetzt kostenlos anmelden unter http://www.gmx.net\n\n+++ GMX - die erste Adresse f�r Mail, Message, More! +++\n\n",
"msg_date": "Wed, 1 Oct 2003 19:45:29 +0200 (MEST)",
"msg_from": "\"Dimitri Nagiev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "On Wed, 2003-10-01 at 13:45, Dimitri Nagiev wrote:\n> template1=# explain analyze select * from mytable where\n> mydate>='2003-09-01';\n> QUERY PLAN\n> \n> \n> ---------------------------------------------------------------------------------------------------------------\n> Seq Scan on mytable (cost=0.00..2209.11 rows=22274 width=562) (actual\n> time=0.06..267.30 rows=22677 loops=1)\n> Filter: (mydate >= '2003-09-01'::date)\n> Total runtime: 307.71 msec\n> (3 rows)\n\nIt may well be the case that a seqscan is faster than an index scan for\nthis query. Try disabling sequential scans (SET enable_seqscan = false)\nand re-running EXPLAIN ANALYZE: see if the total runtime is smaller or\nlarger.\n\n-Neil\n\n\n",
"msg_date": "Wed, 01 Oct 2003 14:35:55 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "On Wed, 1 Oct 2003, Dimitri Nagiev wrote:\n\n> here goes the EXPLAIN ANALYZE output:\n> \n> \n> template1=# VACUUM analyze mytable;\n> VACUUM\n> template1=# explain analyze select * from mytable where\n> mydate>='2003-09-01';\n> QUERY PLAN\n> \n> \n> ---------------------------------------------------------------------------------------------------------------\n> Seq Scan on mytable (cost=0.00..2209.11 rows=22274 width=562) (actual\n> time=0.06..267.30 rows=22677 loops=1)\n> Filter: (mydate >= '2003-09-01'::date)\n> Total runtime: 307.71 msec\n> (3 rows)\n\nHow many rows are there in this table? If the number is only two or three \ntimes as many as the number of rows returned (22677) then a seq scan is \npreferable.\n\nThe way to tune your random_page_cost is to keep making your range more \nselective until you get an index scan. Then, see what the difference is \nin speed between the two queries that sit on either side of that number, \ni.e. if a query that returns 1000 rows switches to index scan, and takes \n100 msec, while one that returns 1050 uses seq scan and takes 200 msec, \nthen you might want to lower your random page cost.\n\nIdeally, what should happen is that as the query returns more and more \nrows, the switch to seq scan should happen so that it's taking about the \nsame amount of time as the index scan, maybe just a little more.\n\n",
"msg_date": "Wed, 1 Oct 2003 13:03:21 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "\nOh, to followup on my previously sent post, make sure you've got \neffective_cache_size set right BEFORE you go trying to set \nrandom_page_cost, and you might wanna run a select * from table to load \nthe table into kernel buffer cache before testing, then also test it with \nthe cache cleared out (select * from a_different_really_huge_table will \nusually do that.)\n\n",
"msg_date": "Wed, 1 Oct 2003 13:04:56 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
},
{
"msg_contents": "On Wed, 1 Oct 2003 19:45:29 +0200 (MEST), \"Dimitri Nagiev\"\n<[email protected]> wrote:\n>template1=# explain analyze select * from mytable where\n>mydate>='2003-09-01';\n> Seq Scan on mytable (cost=0.00..2209.11 rows=22274 width=562) (actual time=0.06..267.30 rows=22677 loops=1)\n> Filter: (mydate >= '2003-09-01'::date)\n> Total runtime: 307.71 msec\n\nDidn't you say that there are 25000 rows in the table? I can't\nbelieve that for selecting 90% of all rows an index scan would be\nfaster. Try\n\n\tSET enable_seqscan = 0;\n\texplain analyze\n\t select * from mytable where mydate>='2003-09-01';\n\nIf you find the index scan to be faster, there might be lots of dead\ntuples in which case you should\n\n\tVACUUM FULL mytable;\n\nServus\n Manfred\n",
"msg_date": "Wed, 01 Oct 2003 21:06:17 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing >= and <= for numbers and dates"
}
] |
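Neil's and Manfred's advice from the thread above, spelled out against the table in question (same query, run both ways):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE mydate >= '2003-09-01';

    SET enable_seqscan = on;
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE mydate >= '2003-09-01';

If the forced index scan is not clearly faster, then the planner's choice of a sequential scan for a predicate matching most of the table was the right one, and no tuning is needed.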
[
{
"msg_contents": "That would be great! When do you think this would be ready for us to see\n;?)\n\n-----Original Message-----\nFrom: Jeff [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 11:42 AM\nTo: Oleg Lebedev\nCc: [email protected]\nSubject: RE: [PERFORM] Tuning/performance issue...\n\n\nOn Wed, 1 Oct 2003, Oleg Lebedev wrote:\n\n> Jeff,\n> I would really appreciate if you could send me that lengthy \n> presentation that you've written on pg/other dbs comparison. Thanks.\n>\n\nAfter I give the presentation at work and collect comments from my\ncoworkers (and remove some information you folks don't need to know :) I\nwill be very willing to post it for people to see.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Wed, 1 Oct 2003 11:43:12 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning/performance issue..."
}
] |
[
{
"msg_contents": "Sure, below is the query. I attached the plan to this posting.\n\nselect\n\tnation,\n\to_year,\n\tsum(amount) as sum_profit\nfrom\n\t(\n\t\tselect\n\t\t\tn_name as nation,\n\t\t\textract(year from o_orderdate) as o_year,\n\t\t\tl_extendedprice * (1 - l_discount) -\nps_supplycost * l_quantity as amount\n\t\tfrom\n\t\t\tpart,\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\tpartsupp,\n\t\t\torders,\n\t\t\tnation\n\t\twhere\n\t\t\ts_suppkey = l_suppkey\n\t\t\tand ps_suppkey = l_suppkey\n\t\t\tand ps_partkey = l_partkey\n\t\t\tand p_partkey = l_partkey\n\t\t\tand o_orderkey = l_orderkey\n\t\t\tand s_nationkey = n_nationkey\n\t\t\tand p_name like '%green%'\n\t) as profit\ngroup by\n\tnation,\n\to_year\norder by\n\tnation,\n\to_year desc;\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 11:42 AM\nTo: Oleg Lebedev; scott.marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg,\n\n> The output of the query should contain about 200 rows. So, I guess the\n\n> planer is off assuming that the query should return 1 row.\n\nOh, also did you post the query before? Can you re-post it with the\nplanner \nresults?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Wed, 1 Oct 2003 11:59:59 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "All right, my query just finished running with EXPLAIN ANALYZE.\nI show the plan below and also attached it as a file.\nAny ideas?\n\n -> Sort (cost=54597.49..54597.50 rows=1 width=121) (actual\ntime=6674562.03..6674562.15 rows=175 loops=1)\n Sort Key: nation.n_name, date_part('year'::text,\norders.o_orderdate)\n -> Aggregate (cost=54597.45..54597.48 rows=1 width=121)\n(actual time=6668919.41..6674522.48 rows=175 loops=1)\n -> Group (cost=54597.45..54597.47 rows=3 width=121)\n(actual time=6668872.68..6672136.96 rows=348760 loops=1)\n -> Sort (cost=54597.45..54597.46 rows=3\nwidth=121) (actual time=6668872.65..6669499.95 rows=348760 loops=1)\n Sort Key: nation.n_name,\ndate_part('year'::text, orders.o_orderdate)\n -> Hash Join (cost=54596.00..54597.42\nrows=3\nwidth=121) (actual time=6632768.89..6650192.67 rows=348760 loops=1)\n Hash Cond: (\"outer\".n_nationkey =\n\"inner\".s_nationkey)\n -> Seq Scan on nation\n(cost=0.00..1.25 rows=25 width=33) (actual time=6.75..7.13 rows=25\nloops=1)\n -> Hash (cost=54596.00..54596.00\nrows=3\nwidth=88) (actual time=6632671.96..6632671.96 rows=0 loops=1)\n -> Nested Loop\n(cost=0.00..54596.00 rows=3 width=88) (actual time=482.41..6630601.46\nrows=348760 loops=1)\n Join Filter:\n(\"inner\".s_suppkey = \"outer\".l_suppkey)\n -> Nested Loop\n(cost=0.00..54586.18 rows=3 width=80) (actual time=383.87..6594984.40\nrows=348760 loops=1)\n -> Nested Loop\n(cost=0.00..54575.47 rows=4 width=68) (actual time=199.95..3580882.07\nrows=348760 loops=1)\n Join Filter:\n(\"outer\".p_partkey = \"inner\".ps_partkey)\n -> Nested Loop\n(cost=0.00..22753.33 rows=9343 width=49) (actual time=146.85..3541433.10\nrows=348760 loops=1)\n -> Seq\nScan on part (cost=0.00..7868.00 rows=320 width=4) (actual\ntime=33.64..15651.90 rows=11637 loops=1)\n\nFilter: (p_name ~~ '%green%'::text)\n -> Index\nScan using i_l_partkey on lineitem (cost=0.00..46.15 rows=29 width=45)\n(actual time=10.71..302.67 rows=30 loops=11637)\n \nIndex\nCond: (\"outer\".p_partkey = lineitem.l_partkey)\n -> Index Scan\nusing pk_partsupp on partsupp (cost=0.00..3.39 rows=1 width=19) (actual\ntime=0.09..0.09 rows=1 loops=348760)\n Index\nCond: ((partsupp.ps_partkey = \"outer\".l_partkey) AND\n(partsupp.ps_suppkey =\n\"outer\".l_suppkey))\n -> Index Scan using\npk_orders on orders (cost=0.00..3.01 rows=1 width=12) (actual\ntime=8.62..8.62 rows=1 loops=348760)\n Index Cond:\n(orders.o_orderkey = \"outer\".l_orderkey)\n -> Index Scan using\npk_supplier on supplier (cost=0.00..3.01 rows=1 width=8) (actual\ntime=0.08..0.08 rows=1 loops=348760)\n Index Cond:\n(\"outer\".ps_suppkey = supplier.s_suppkey) Total runtime: 6674724.23\nmsec (28 rows)\n\n\n-----Original Message-----\nFrom: Oleg Lebedev \nSent: Wednesday, October 01, 2003 12:00 PM\nTo: Josh Berkus; scott.marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\nImportance: Low\n\n\nSure, below is the query. 
I attached the plan to this posting.\n\nselect\n\tnation,\n\to_year,\n\tsum(amount) as sum_profit\nfrom\n\t(\n\t\tselect\n\t\t\tn_name as nation,\n\t\t\textract(year from o_orderdate) as o_year,\n\t\t\tl_extendedprice * (1 - l_discount) -\nps_supplycost * l_quantity as amount\n\t\tfrom\n\t\t\tpart,\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\tpartsupp,\n\t\t\torders,\n\t\t\tnation\n\t\twhere\n\t\t\ts_suppkey = l_suppkey\n\t\t\tand ps_suppkey = l_suppkey\n\t\t\tand ps_partkey = l_partkey\n\t\t\tand p_partkey = l_partkey\n\t\t\tand o_orderkey = l_orderkey\n\t\t\tand s_nationkey = n_nationkey\n\t\t\tand p_name like '%green%'\n\t) as profit\ngroup by\n\tnation,\n\to_year\norder by\n\tnation,\n\to_year desc;\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 11:42 AM\nTo: Oleg Lebedev; scott.marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg,\n\n> The output of the query should contain about 200 rows. So, I guess the\n\n> planer is off assuming that the query should return 1 row.\n\nOh, also did you post the query before? Can you re-post it with the\nplanner \nresults?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for\nthe named recipient only. If you are not the named recipient, delete\nthis message and all attachments. Unauthorized reviewing, copying,\nprinting, disclosing, or otherwise using information in this e-mail is\nprohibited. We reserve the right to monitor e-mail sent through our\nnetwork. \n\n*************************************\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Wed, 1 Oct 2003 15:02:43 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "For troubleshooting, can you try it with \"set enable_nestloop = false\" and \nrerun the query and see how long it takes? \n\nIt looks like the estimates of rows returned is WAY off (estimate is too \nlow compared to what really comes back.)\n\nAlso, you might try to alter the table.column to have a higher target on \nthe rows p_partkey and ps_partkey and any others where the estimate is so \nfar off of the reality.\n\nOn Wed, 1 Oct 2003, Oleg Lebedev wrote:\n\n> All right, my query just finished running with EXPLAIN ANALYZE.\n> I show the plan below and also attached it as a file.\n> Any ideas?\n> \n> -> Sort (cost=54597.49..54597.50 rows=1 width=121) (actual\n> time=6674562.03..6674562.15 rows=175 loops=1)\n> Sort Key: nation.n_name, date_part('year'::text,\n> orders.o_orderdate)\n> -> Aggregate (cost=54597.45..54597.48 rows=1 width=121)\n> (actual time=6668919.41..6674522.48 rows=175 loops=1)\n> -> Group (cost=54597.45..54597.47 rows=3 width=121)\n> (actual time=6668872.68..6672136.96 rows=348760 loops=1)\n> -> Sort (cost=54597.45..54597.46 rows=3\n> width=121) (actual time=6668872.65..6669499.95 rows=348760 loops=1)\n> Sort Key: nation.n_name,\n> date_part('year'::text, orders.o_orderdate)\n> -> Hash Join (cost=54596.00..54597.42\n> rows=3\n> width=121) (actual time=6632768.89..6650192.67 rows=348760 loops=1)\n> Hash Cond: (\"outer\".n_nationkey =\n> \"inner\".s_nationkey)\n> -> Seq Scan on nation\n> (cost=0.00..1.25 rows=25 width=33) (actual time=6.75..7.13 rows=25\n> loops=1)\n> -> Hash (cost=54596.00..54596.00\n> rows=3\n> width=88) (actual time=6632671.96..6632671.96 rows=0 loops=1)\n> -> Nested Loop\n> (cost=0.00..54596.00 rows=3 width=88) (actual time=482.41..6630601.46\n> rows=348760 loops=1)\n> Join Filter:\n> (\"inner\".s_suppkey = \"outer\".l_suppkey)\n> -> Nested Loop\n> (cost=0.00..54586.18 rows=3 width=80) (actual time=383.87..6594984.40\n> rows=348760 loops=1)\n> -> Nested Loop\n> (cost=0.00..54575.47 rows=4 width=68) (actual time=199.95..3580882.07\n> rows=348760 loops=1)\n> Join Filter:\n> (\"outer\".p_partkey = \"inner\".ps_partkey)\n> -> Nested Loop\n> (cost=0.00..22753.33 rows=9343 width=49) (actual time=146.85..3541433.10\n> rows=348760 loops=1)\n> -> Seq\n> Scan on part (cost=0.00..7868.00 rows=320 width=4) (actual\n> time=33.64..15651.90 rows=11637 loops=1)\n> \n> Filter: (p_name ~~ '%green%'::text)\n> -> Index\n> Scan using i_l_partkey on lineitem (cost=0.00..46.15 rows=29 width=45)\n> (actual time=10.71..302.67 rows=30 loops=11637)\n> \n> Index\n> Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> -> Index Scan\n> using pk_partsupp on partsupp (cost=0.00..3.39 rows=1 width=19) (actual\n> time=0.09..0.09 rows=1 loops=348760)\n> Index\n> Cond: ((partsupp.ps_partkey = \"outer\".l_partkey) AND\n> (partsupp.ps_suppkey =\n> \"outer\".l_suppkey))\n> -> Index Scan using\n> pk_orders on orders (cost=0.00..3.01 rows=1 width=12) (actual\n> time=8.62..8.62 rows=1 loops=348760)\n> Index Cond:\n> (orders.o_orderkey = \"outer\".l_orderkey)\n> -> Index Scan using\n> pk_supplier on supplier (cost=0.00..3.01 rows=1 width=8) (actual\n> time=0.08..0.08 rows=1 loops=348760)\n> Index Cond:\n> (\"outer\".ps_suppkey = supplier.s_suppkey) Total runtime: 6674724.23\n> msec (28 rows)\n> \n> \n> -----Original Message-----\n> From: Oleg Lebedev \n> Sent: Wednesday, October 01, 2003 12:00 PM\n> To: Josh Berkus; scott.marlowe\n> Cc: [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> Importance: Low\n> \n> \n> Sure, below is the query. 
I attached the plan to this posting.\n> \n> select\n> \tnation,\n> \to_year,\n> \tsum(amount) as sum_profit\n> from\n> \t(\n> \t\tselect\n> \t\t\tn_name as nation,\n> \t\t\textract(year from o_orderdate) as o_year,\n> \t\t\tl_extendedprice * (1 - l_discount) -\n> ps_supplycost * l_quantity as amount\n> \t\tfrom\n> \t\t\tpart,\n> \t\t\tsupplier,\n> \t\t\tlineitem,\n> \t\t\tpartsupp,\n> \t\t\torders,\n> \t\t\tnation\n> \t\twhere\n> \t\t\ts_suppkey = l_suppkey\n> \t\t\tand ps_suppkey = l_suppkey\n> \t\t\tand ps_partkey = l_partkey\n> \t\t\tand p_partkey = l_partkey\n> \t\t\tand o_orderkey = l_orderkey\n> \t\t\tand s_nationkey = n_nationkey\n> \t\t\tand p_name like '%green%'\n> \t) as profit\n> group by\n> \tnation,\n> \to_year\n> order by\n> \tnation,\n> \to_year desc;\n> \n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]] \n> Sent: Wednesday, October 01, 2003 11:42 AM\n> To: Oleg Lebedev; scott.marlowe\n> Cc: [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> Oleg,\n> \n> > The output of the query should contain about 200 rows. So, I guess the\n> \n> > planer is off assuming that the query should return 1 row.\n> \n> Oh, also did you post the query before? Can you re-post it with the\n> planner \n> results?\n> \n> \n\n",
"msg_date": "Wed, 1 Oct 2003 15:59:41 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg Lebedev <[email protected]> writes:\n> All right, my query just finished running with EXPLAIN ANALYZE.\n> I show the plan below and also attached it as a file.\n> Any ideas?\n\nUh, have you done an ANALYZE (or VACUUM ANALYZE) on this database?\nIt sure looks like the planner thinks the tables are a couple of orders\nof magnitude smaller than they actually are. Certainly the estimated\nsizes of the joins are way off :-(\n\nIf you did analyze, it might help to increase the statistics target and\nre-analyze.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Oct 2003 18:19:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks "
},
{
"msg_contents": "Oleg,\n\n> All right, my query just finished running with EXPLAIN ANALYZE.\n> I show the plan below and also attached it as a file.\n> Any ideas?\n\nYes. Your problem appears to be right here:\n\n> -> Nested Loop\n> (cost=0.00..54596.00 rows=3 width=88) (actual time=482.41..6630601.46\n> rows=348760 loops=1)\n> Join Filter:\n> (\"inner\".s_suppkey = \"outer\".l_suppkey)\n> -> Nested Loop\n> (cost=0.00..54586.18 rows=3 width=80) (actual time=383.87..6594984.40\n> rows=348760 loops=1)\n> -> Nested Loop\n> (cost=0.00..54575.47 rows=4 width=68) (actual time=199.95..3580882.07\n> rows=348760 loops=1)\n> Join Filter:\n> (\"outer\".p_partkey = \"inner\".ps_partkey)\n> -> Nested Loop\n> (cost=0.00..22753.33 rows=9343 width=49) (actual time=146.85..3541433.10\n> rows=348760 loops=1)\n\nFor some reason, the row estimate on the supplier --> lineitem join is bad, as \nis the estimate on part --> partsupp. Let me first check two things:\n\n1) You have an index on l_suppkey and on ps_partkey.\n2) you have run ANALYZE on your whole database before the query\n\nIf both of those are true, I'd like to see the lines in pg_stats that apply to \nps_partkey and l_suppkey; please do a:\n\nSELECT * FROM pg_stats WHERE attname = 'l_suppkey' or attname = 'ps_partkey'\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 1 Oct 2003 15:23:12 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
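Tom's and Josh's suggestions above translate into something like this; the target of 100 is only an example (the default in that era was 10), and the columns are the ones whose estimates were far off:

    ALTER TABLE lineitem ALTER COLUMN l_suppkey SET STATISTICS 100;
    ALTER TABLE partsupp ALTER COLUMN ps_partkey SET STATISTICS 100;
    ANALYZE lineitem;
    ANALYZE partsupp;

After re-analyzing, re-run the EXPLAIN ANALYZE and check whether the estimated row counts move toward the actual ones.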
[
{
"msg_contents": "Hi folks.\n\nWhat's wrong with planner that executes my query in function?:\n(i mean no explanation but runtime)\n\n\ntele=# EXPLAIN analyze select calc_total(6916799, 1062363600, 1064955599);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=36919.37..36919.37 rows=1 loops=1)\n Total runtime: 36919.40 msec\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\ntele=# \\df+ calc_total\n...\ndeclare\n usr alias for $1;\n d1 alias for $2;\n d2 alias for $3;\n res integer;\nbegin\n select sum(cost) into res\n from bills where\n (parent(user_id) = usr or user_id = usr)\n and dat >= d1 and dat < d2;\n if res is not null then\n return res;\n else\n return 0;\n end if;\nend;\n\ntele=# EXPLAIN analyze select sum(cost) from bills where (parent(user_id) = 6916799 or user_id = 6916799) and dat >= 1062363600 and dat < 10649555\n99;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n------------------\n Aggregate (cost=17902.80..17902.80 rows=1 width=4) (actual time=101.04..101.04 rows=1 loops=1)\n -> Index Scan using bills_parent_user_id_idx, bills_userid_dat_idx on bills (cost=0.00..17901.11 rows=679 width=4) (actual time=101.03..101.0\n3 rows=0 loops=1)\n Index Cond: ((parent(user_id) = 6916799) OR ((user_id = 6916799) AND (dat >= 1062363600) AND (dat < 1064955599)))\n Filter: (((parent(user_id) = 6916799) OR (user_id = 6916799)) AND (dat >= 1062363600) AND (dat < 1064955599))\n Total runtime: 101.14 msec\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSo the query is the same as in calc_total(usr,d1,d2) function,\nbut execute time extremely differs.\n\nIs it normal?\n\nThanks,\n Andriy Tkachuk.\n\n",
"msg_date": "Thu, 2 Oct 2003 16:39:11 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "runtime of the same query in function differs on 2 degree!"
},
{
"msg_contents": "Andriy Tkachuk <[email protected]> writes:\n> What's wrong with planner that executes my query in function?:\n\n> tele=# EXPLAIN analyze select sum(cost) from bills where (parent(user_id) = 6916799 or user_id = 6916799) and dat >= 1062363600 and dat < 10649555\n> 99;\n\nIn the function case, the planner will not have access to the specific\nvalues that \"dat\" is being compared to --- it'll see something like\n\n\t... and dat >= $1 and dat < $2\n\nIn this case it has to fall back on a default estimate of how many rows\nwill be selected, and I suspect it's guessing that a seqscan will be\nfaster. The trouble is that for a sufficiently large range of d1/d2,\na seqscan *will* be faster.\n\nYou might find that the best solution is to use FOR ... EXECUTE and plug\nthe parameters into the query string so that the planner can see their\nvalues. This will mean re-planning on every function call, but the\nadvantage is the plan will adapt to the actual range of d1/d2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Oct 2003 11:30:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: runtime of the same query in function differs on 2 degree! "
},
{
"msg_contents": "Andriy Tkachuk wrote:\n\n> Hi folks.\n> \n> What's wrong with planner that executes my query in function?:\n> (i mean no explanation but runtime)\n> \n> \n> tele=# EXPLAIN analyze select calc_total(6916799, 1062363600, 1064955599);\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=36919.37..36919.37 rows=1 loops=1)\n> Total runtime: 36919.40 msec\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> tele=# \\df+ calc_total\n> ...\n> declare\n> usr alias for $1;\n> d1 alias for $2;\n> d2 alias for $3;\n> res integer;\n> begin\n> select sum(cost) into res\n> from bills where\n> (parent(user_id) = usr or user_id = usr)\n> and dat >= d1 and dat < d2;\n> if res is not null then\n> return res;\n> else\n> return 0;\n> end if;\n> end;\n\nYou didn't wrote the type of d1 and d2, I had your same problem:\n\ndeclare\n a_user alias for $1;\n res INTEGER;\nbegin\n select cost into res\n from my_table\n where login = a_user;\n\n\t......\nend;\n\nthe problem was that login was a VARCHAR and a_user was a TEXT so\nthe index was not used, was enough cast a_user::varchar;\n\n\nI believe that your dat, d1, d2 are not \"index\" comparable.\n\n\nGaetano\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 03 Oct 2003 01:52:46 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: runtime of the same query in function differs on 2 degree!"
},
{
"msg_contents": "No: the function is calc_total(int,int,int) and the table have the\nsame types.\n\nAs Tom said that my problem is because of planning in pl/pgsql. As\nis written in\nhttp://www.postgresql.org/docs/7.3/static/plpgsql.html#PLPGSQL-OVERVIEW\nplans for queries in pl/pgsql are made just once - when they are\nfirst used in function by backend. So AFAICS this planning do not\ntake into consideration table statistics because it don't see values\nof variables in queries (or if see than it must not take them into account,\nbecause they may be changed in future function callings).\n\nI rollback to my previous realization of calc_total() on pl/tcl. I\nuse there spi_exec - so the query always regards as dynamic - it\nalways parsed, rewritten, planned but executes fastest much more\n:)\n\nOn Fri, 3 Oct 2003, Gaetano Mendola wrote:\n\n> Andriy Tkachuk wrote:\n>\n> > Hi folks.\n> >\n> > What's wrong with planner that executes my query in function?:\n> > (i mean no explanation but runtime)\n> >\n> >\n> > tele=# EXPLAIN analyze select calc_total(6916799, 1062363600, 1064955599);\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------\n> > Result (cost=0.00..0.01 rows=1 width=0) (actual time=36919.37..36919.37 rows=1 loops=1)\n> > Total runtime: 36919.40 msec\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> >\n> > tele=# \\df+ calc_total\n> > ...\n> > declare\n> > usr alias for $1;\n> > d1 alias for $2;\n> > d2 alias for $3;\n> > res integer;\n> > begin\n> > select sum(cost) into res\n> > from bills where\n> > (parent(user_id) = usr or user_id = usr)\n> > and dat >= d1 and dat < d2;\n> > if res is not null then\n> > return res;\n> > else\n> > return 0;\n> > end if;\n> > end;\n>\n> You didn't wrote the type of d1 and d2, I had your same problem:\n>\n> declare\n> a_user alias for $1;\n> res INTEGER;\n> begin\n> select cost into res\n> from my_table\n> where login = a_user;\n>\n> \t......\n> end;\n>\n> the problem was that login was a VARCHAR and a_user was a TEXT so\n> the index was not used, was enough cast a_user::varchar;\n>\n>\n> I believe that your dat, d1, d2 are not \"index\" comparable.\n>\n>\n> Gaetano\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Fri, 3 Oct 2003 10:02:03 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: runtime of the same query in function differs on 2"
}
] |
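A sketch of Tom's FOR ... EXECUTE suggestion, written in the pre-dollar-quoting style the thread's version of plpgsql uses (untested; it builds the query string with the actual parameter values so each call is planned against them):

    create or replace function calc_total_dyn(integer, integer, integer) returns integer as '
    declare
        usr alias for $1;
        d1  alias for $2;
        d2  alias for $3;
        r   record;
    begin
        for r in execute ''select sum(cost) as total from bills where ''
            || ''(parent(user_id) = '' || usr || '' or user_id = '' || usr || '') ''
            || ''and dat >= '' || d1 || '' and dat < '' || d2
        loop
            return coalesce(r.total, 0);
        end loop;
        return 0;
    end;
    ' language 'plpgsql';

The price is a re-plan on every call, which is usually cheap next to the difference between a good and a bad plan for a wide date range.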
[
{
"msg_contents": "I ran VACUUM FULL ANALYZE yesterday and the re-ran the query with\nEXPLAIN ANALYZE.\nI got the same query plan and execution time. \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 4:20 PM\nTo: Oleg Lebedev\nCc: Josh Berkus; scott.marlowe; [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg Lebedev <[email protected]> writes:\n> All right, my query just finished running with EXPLAIN ANALYZE. I show\n\n> the plan below and also attached it as a file. Any ideas?\n\nUh, have you done an ANALYZE (or VACUUM ANALYZE) on this database? It\nsure looks like the planner thinks the tables are a couple of orders of\nmagnitude smaller than they actually are. Certainly the estimated sizes\nof the joins are way off :-(\n\nIf you did analyze, it might help to increase the statistics target and\nre-analyze.\n\n\t\t\tregards, tom lane\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Thu, 2 Oct 2003 08:29:52 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> I ran VACUUM FULL ANALYZE yesterday and the re-ran the query with\n> EXPLAIN ANALYZE.\n> I got the same query plan and execution time.\n\nHow about my question? Those rows from pg_stats would be really useful in \ndiagnosing the problem.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 2 Oct 2003 09:22:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
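While waiting for the full pg_stats rows Josh asked for, the columns that usually explain this kind of mis-estimate can be pulled out directly; a sketch:

    SELECT tablename, attname, n_distinct, correlation
    FROM pg_stats
    WHERE attname IN ('l_suppkey', 'ps_partkey');

An n_distinct value wildly different from a real SELECT count(DISTINCT ...) on the column is the usual hint that the statistics target for it needs to be raised.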
[
{
"msg_contents": "As Scott recommended, I did the following:\n# set enable_nestloop = false;\n# vacuum full analyze;\n\nAfter this I re-ran the query and its execution time went down from 2\nhours to 2 minutes. I attached the new query plan to this posting.\nIs there any way to optimize it even further?\nWhat should I do to make this query run fast without hurting the\nperformance of the other queries?\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]] \nSent: Wednesday, October 01, 2003 4:00 PM\nTo: Oleg Lebedev\nCc: Josh Berkus; [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nFor troubleshooting, can you try it with \"set enable_nestloop = false\"\nand \nrerun the query and see how long it takes? \n\nIt looks like the estimates of rows returned is WAY off (estimate is too\n\nlow compared to what really comes back.)\n\nAlso, you might try to alter the table.column to have a higher target on\n\nthe rows p_partkey and ps_partkey and any others where the estimate is\nso \nfar off of the reality.\n\nOn Wed, 1 Oct 2003, Oleg Lebedev wrote:\n\n> All right, my query just finished running with EXPLAIN ANALYZE. I show\n\n> the plan below and also attached it as a file. Any ideas?\n> \n> -> Sort (cost=54597.49..54597.50 rows=1 width=121) (actual \n> time=6674562.03..6674562.15 rows=175 loops=1)\n> Sort Key: nation.n_name, date_part('year'::text,\n> orders.o_orderdate)\n> -> Aggregate (cost=54597.45..54597.48 rows=1 width=121) \n> (actual time=6668919.41..6674522.48 rows=175 loops=1)\n> -> Group (cost=54597.45..54597.47 rows=3 width=121) \n> (actual time=6668872.68..6672136.96 rows=348760 loops=1)\n> -> Sort (cost=54597.45..54597.46 rows=3\n> width=121) (actual time=6668872.65..6669499.95 rows=348760 loops=1)\n> Sort Key: nation.n_name, \n> date_part('year'::text, orders.o_orderdate)\n> -> Hash Join (cost=54596.00..54597.42 \n> rows=3\n> width=121) (actual time=6632768.89..6650192.67 rows=348760 loops=1)\n> Hash Cond: (\"outer\".n_nationkey =\n> \"inner\".s_nationkey)\n> -> Seq Scan on nation \n> (cost=0.00..1.25 rows=25 width=33) (actual time=6.75..7.13 rows=25\n> loops=1)\n> -> Hash (cost=54596.00..54596.00 \n> rows=3\n> width=88) (actual time=6632671.96..6632671.96 rows=0 loops=1)\n> -> Nested Loop \n> (cost=0.00..54596.00 rows=3 width=88) (actual time=482.41..6630601.46 \n> rows=348760 loops=1)\n> Join Filter: \n> (\"inner\".s_suppkey = \"outer\".l_suppkey)\n> -> Nested Loop \n> (cost=0.00..54586.18 rows=3 width=80) (actual time=383.87..6594984.40 \n> rows=348760 loops=1)\n> -> Nested Loop \n> (cost=0.00..54575.47 rows=4 width=68) (actual time=199.95..3580882.07 \n> rows=348760 loops=1)\n> Join Filter: \n> (\"outer\".p_partkey = \"inner\".ps_partkey)\n> -> Nested \n> Loop (cost=0.00..22753.33 rows=9343 width=49) (actual \n> time=146.85..3541433.10 rows=348760 loops=1)\n> -> Seq\n\n> Scan on part (cost=0.00..7868.00 rows=320 width=4) (actual \n> time=33.64..15651.90 rows=11637 loops=1)\n> \n> Filter: (p_name ~~ '%green%'::text)\n> -> \n> Index Scan using i_l_partkey on lineitem (cost=0.00..46.15 rows=29 \n> width=45) (actual time=10.71..302.67 rows=30 loops=11637)\n> \n> Index\n> Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> -> Index \n> Scan using pk_partsupp on partsupp (cost=0.00..3.39 rows=1 width=19) \n> (actual time=0.09..0.09 rows=1 loops=348760)\n> Index\n> Cond: ((partsupp.ps_partkey = \"outer\".l_partkey) AND \n> (partsupp.ps_suppkey =\n> \"outer\".l_suppkey))\n> -> Index Scan \n> using pk_orders on orders (cost=0.00..3.01 
rows=1 width=12) (actual \n> time=8.62..8.62 rows=1 loops=348760)\n> Index Cond: \n> (orders.o_orderkey = \"outer\".l_orderkey)\n> -> Index Scan using \n> pk_supplier on supplier (cost=0.00..3.01 rows=1 width=8) (actual \n> time=0.08..0.08 rows=1 loops=348760)\n> Index Cond: \n> (\"outer\".ps_suppkey = supplier.s_suppkey) Total runtime: 6674724.23 \n> msec (28 rows)\n> \n> \n> -----Original Message-----\n> From: Oleg Lebedev\n> Sent: Wednesday, October 01, 2003 12:00 PM\n> To: Josh Berkus; scott.marlowe\n> Cc: [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> Importance: Low\n> \n> \n> Sure, below is the query. I attached the plan to this posting.\n> \n> select\n> \tnation,\n> \to_year,\n> \tsum(amount) as sum_profit\n> from\n> \t(\n> \t\tselect\n> \t\t\tn_name as nation,\n> \t\t\textract(year from o_orderdate) as o_year,\n> \t\t\tl_extendedprice * (1 - l_discount) -\n> ps_supplycost * l_quantity as amount\n> \t\tfrom\n> \t\t\tpart,\n> \t\t\tsupplier,\n> \t\t\tlineitem,\n> \t\t\tpartsupp,\n> \t\t\torders,\n> \t\t\tnation\n> \t\twhere\n> \t\t\ts_suppkey = l_suppkey\n> \t\t\tand ps_suppkey = l_suppkey\n> \t\t\tand ps_partkey = l_partkey\n> \t\t\tand p_partkey = l_partkey\n> \t\t\tand o_orderkey = l_orderkey\n> \t\t\tand s_nationkey = n_nationkey\n> \t\t\tand p_name like '%green%'\n> \t) as profit\n> group by\n> \tnation,\n> \to_year\n> order by\n> \tnation,\n> \to_year desc;\n> \n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Wednesday, October 01, 2003 11:42 AM\n> To: Oleg Lebedev; scott.marlowe\n> Cc: [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> Oleg,\n> \n> > The output of the query should contain about 200 rows. So, I guess \n> > the\n> \n> > planer is off assuming that the query should return 1 row.\n> \n> Oh, also did you post the query before? Can you re-post it with the\n> planner\n> results?\n> \n> \n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Thu, 2 Oct 2003 10:18:00 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Have you tried increasing the statistics target for those columns that are \ngetting bad estimates yet and then turning back on enable_nestloop and \nrerunning analyze and seeing how the query does? \n\nThe idea being to try and get a good enough estimate of your statistics so \nthe planner stops using nestloops on its own rather than forcing it to \nwith enable_nestloop = false.\n\nOn Thu, 2 Oct 2003, Oleg Lebedev wrote:\n\n> As Scott recommended, I did the following:\n> # set enable_nestloop = false;\n> # vacuum full analyze;\n> \n> After this I re-ran the query and its execution time went down from 2\n> hours to 2 minutes. I attached the new query plan to this posting.\n> Is there any way to optimize it even further?\n> What should I do to make this query run fast without hurting the\n> performance of the other queries?\n> Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: scott.marlowe [mailto:[email protected]] \n> Sent: Wednesday, October 01, 2003 4:00 PM\n> To: Oleg Lebedev\n> Cc: Josh Berkus; [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> For troubleshooting, can you try it with \"set enable_nestloop = false\"\n> and \n> rerun the query and see how long it takes? \n> \n> It looks like the estimates of rows returned is WAY off (estimate is too\n> \n> low compared to what really comes back.)\n> \n> Also, you might try to alter the table.column to have a higher target on\n> \n> the rows p_partkey and ps_partkey and any others where the estimate is\n> so \n> far off of the reality.\n> \n> On Wed, 1 Oct 2003, Oleg Lebedev wrote:\n> \n> > All right, my query just finished running with EXPLAIN ANALYZE. I show\n> \n> > the plan below and also attached it as a file. Any ideas?\n> > \n> > -> Sort (cost=54597.49..54597.50 rows=1 width=121) (actual \n> > time=6674562.03..6674562.15 rows=175 loops=1)\n> > Sort Key: nation.n_name, date_part('year'::text,\n> > orders.o_orderdate)\n> > -> Aggregate (cost=54597.45..54597.48 rows=1 width=121) \n> > (actual time=6668919.41..6674522.48 rows=175 loops=1)\n> > -> Group (cost=54597.45..54597.47 rows=3 width=121) \n> > (actual time=6668872.68..6672136.96 rows=348760 loops=1)\n> > -> Sort (cost=54597.45..54597.46 rows=3\n> > width=121) (actual time=6668872.65..6669499.95 rows=348760 loops=1)\n> > Sort Key: nation.n_name, \n> > date_part('year'::text, orders.o_orderdate)\n> > -> Hash Join (cost=54596.00..54597.42 \n> > rows=3\n> > width=121) (actual time=6632768.89..6650192.67 rows=348760 loops=1)\n> > Hash Cond: (\"outer\".n_nationkey =\n> > \"inner\".s_nationkey)\n> > -> Seq Scan on nation \n> > (cost=0.00..1.25 rows=25 width=33) (actual time=6.75..7.13 rows=25\n> > loops=1)\n> > -> Hash (cost=54596.00..54596.00 \n> > rows=3\n> > width=88) (actual time=6632671.96..6632671.96 rows=0 loops=1)\n> > -> Nested Loop \n> > (cost=0.00..54596.00 rows=3 width=88) (actual time=482.41..6630601.46 \n> > rows=348760 loops=1)\n> > Join Filter: \n> > (\"inner\".s_suppkey = \"outer\".l_suppkey)\n> > -> Nested Loop \n> > (cost=0.00..54586.18 rows=3 width=80) (actual time=383.87..6594984.40 \n> > rows=348760 loops=1)\n> > -> Nested Loop \n> > (cost=0.00..54575.47 rows=4 width=68) (actual time=199.95..3580882.07 \n> > rows=348760 loops=1)\n> > Join Filter: \n> > (\"outer\".p_partkey = \"inner\".ps_partkey)\n> > -> Nested \n> > Loop (cost=0.00..22753.33 rows=9343 width=49) (actual \n> > time=146.85..3541433.10 rows=348760 loops=1)\n> > -> Seq\n> \n> > Scan on part (cost=0.00..7868.00 rows=320 width=4) (actual \n> > 
time=33.64..15651.90 rows=11637 loops=1)\n> > \n> > Filter: (p_name ~~ '%green%'::text)\n> > -> \n> > Index Scan using i_l_partkey on lineitem (cost=0.00..46.15 rows=29 \n> > width=45) (actual time=10.71..302.67 rows=30 loops=11637)\n> > \n> > Index\n> > Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> > -> Index \n> > Scan using pk_partsupp on partsupp (cost=0.00..3.39 rows=1 width=19) \n> > (actual time=0.09..0.09 rows=1 loops=348760)\n> > Index\n> > Cond: ((partsupp.ps_partkey = \"outer\".l_partkey) AND \n> > (partsupp.ps_suppkey =\n> > \"outer\".l_suppkey))\n> > -> Index Scan \n> > using pk_orders on orders (cost=0.00..3.01 rows=1 width=12) (actual \n> > time=8.62..8.62 rows=1 loops=348760)\n> > Index Cond: \n> > (orders.o_orderkey = \"outer\".l_orderkey)\n> > -> Index Scan using \n> > pk_supplier on supplier (cost=0.00..3.01 rows=1 width=8) (actual \n> > time=0.08..0.08 rows=1 loops=348760)\n> > Index Cond: \n> > (\"outer\".ps_suppkey = supplier.s_suppkey) Total runtime: 6674724.23 \n> > msec (28 rows)\n> > \n> > \n> > -----Original Message-----\n> > From: Oleg Lebedev\n> > Sent: Wednesday, October 01, 2003 12:00 PM\n> > To: Josh Berkus; scott.marlowe\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] TPC-R benchmarks\n> > Importance: Low\n> > \n> > \n> > Sure, below is the query. I attached the plan to this posting.\n> > \n> > select\n> > \tnation,\n> > \to_year,\n> > \tsum(amount) as sum_profit\n> > from\n> > \t(\n> > \t\tselect\n> > \t\t\tn_name as nation,\n> > \t\t\textract(year from o_orderdate) as o_year,\n> > \t\t\tl_extendedprice * (1 - l_discount) -\n> > ps_supplycost * l_quantity as amount\n> > \t\tfrom\n> > \t\t\tpart,\n> > \t\t\tsupplier,\n> > \t\t\tlineitem,\n> > \t\t\tpartsupp,\n> > \t\t\torders,\n> > \t\t\tnation\n> > \t\twhere\n> > \t\t\ts_suppkey = l_suppkey\n> > \t\t\tand ps_suppkey = l_suppkey\n> > \t\t\tand ps_partkey = l_partkey\n> > \t\t\tand p_partkey = l_partkey\n> > \t\t\tand o_orderkey = l_orderkey\n> > \t\t\tand s_nationkey = n_nationkey\n> > \t\t\tand p_name like '%green%'\n> > \t) as profit\n> > group by\n> > \tnation,\n> > \to_year\n> > order by\n> > \tnation,\n> > \to_year desc;\n> > \n> > \n> > -----Original Message-----\n> > From: Josh Berkus [mailto:[email protected]]\n> > Sent: Wednesday, October 01, 2003 11:42 AM\n> > To: Oleg Lebedev; scott.marlowe\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] TPC-R benchmarks\n> > \n> > \n> > Oleg,\n> > \n> > > The output of the query should contain about 200 rows. So, I guess \n> > > the\n> > \n> > > planer is off assuming that the query should return 1 row.\n> > \n> > Oh, also did you post the query before? Can you re-post it with the\n> > planner\n> > results?\n> > \n> > \n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended for the named recipient only.\n> If you are not the named recipient, delete this message and all attachments.\n> Unauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\n> We reserve the right to monitor e-mail sent through our network. \n> \n> *************************************\n> \n\n",
"msg_date": "Thu, 2 Oct 2003 10:29:09 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
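Scott's concern about leaving enable_nestloop off for everything can also be addressed by scoping the override, if raising the statistics targets still doesn't fix the estimates; this is not from the thread, just one conservative option:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
    -- run only the problem query here
    COMMIT;

That keeps the planner hint away from every other query on the system while the root cause (the row estimates) is being chased down.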
[
{
"msg_contents": "Hi,\n\nI have a select like this:\n\nSELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;\n\nin the query:\ntransactionid is the primary key of cbntransaction table,\nBut transactiontypeid is a low cardinality column, there're over 100,000\nrecords has the same trnsactiontypeid.\nI was trying to create an index on (transactiontypeid, transactionid), but\nno luck on that, postgresql will still scan the table.\nI'm wondering if there's solution for this query:\nMaybe something like if I can partition the table using transactiontypeid,\nand do a local index on transactionid on each partition, but I couldnt'\nfind any doc on postgresql to do that.\n\nThanks in advance,\nrong :-)\n\n\n",
"msg_date": "Thu, 2 Oct 2003 14:30:01 -0400 (EDT)",
"msg_from": "\"Rong Wu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "low cardinality column"
},
{
"msg_contents": "Rong,\n\n> I have a select like this:\n> \n> SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;\n\nSimple workaround:\n\nCreate an mulit-column index on transactiontypeid, transactionid.\n\nSELECT transactionid FROM cbtransaction \nWHERE transactiontypeid=0\nORDER BY transactionid DESC LIMIT 1;\n\nThis approach will use the index.\n\nOf course, if the reason you are selecting the max id is to get the next id, \nthere are much better ways to do that.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 2 Oct 2003 11:37:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: low cardinality column"
},
{
"msg_contents": "On Thu, 2003-10-02 at 14:30, Rong Wu wrote:\n> Hi,\n> \n> I have a select like this:\n> \n> SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;\n\nFor various reasons (primarily MVCC and the ability to make custom\naggregates making it difficult) MAX() is not optimized in this fashion.\n\nTry:\n\n SELECT transactionid\n FROM ...\n WHERE ...\nORDER BY transactionid DESC\n LIMIT 1;",
"msg_date": "Thu, 02 Oct 2003 14:50:44 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: low cardinality column"
},
{
"msg_contents": "Rod Taylor wrote:\n> On Thu, 2003-10-02 at 14:30, Rong Wu wrote:\n> \n>>Hi,\n>>\n>>I have a select like this:\n>>\n>>SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;\n> \n> \n> For various reasons (primarily MVCC and the ability to make custom\n> aggregates making it difficult) MAX() is not optimized in this fashion.\n> \n> Try:\n> \n> SELECT transactionid\n> FROM ...\n> WHERE ...\n> ORDER BY transactionid DESC\n> LIMIT 1;\n\nDespite this good suggestion, if you're using this technique to generate\nthe next transaction ID, you're going to have errors as concurrency rises.\n\nUse a SERIAL, which guarantees that you won't have two processes generate\nthe same number.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 02 Oct 2003 15:00:19 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: low cardinality column"
},
{
"msg_contents": "Thanks, Rod, Josh and Bill, That' fantastic.\n\nhave a nice day,\nrong :-)\n\n> Rod Taylor wrote:\n>> On Thu, 2003-10-02 at 14:30, Rong Wu wrote:\n>>\n>>>Hi,\n>>>\n>>>I have a select like this:\n>>>\n>>>SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;\n>>\n>>\n>> For various reasons (primarily MVCC and the ability to make custom\n>> aggregates making it difficult) MAX() is not optimized in this fashion.\n>>\n>> Try:\n>>\n>> SELECT transactionid\n>> FROM ...\n>> WHERE ...\n>> ORDER BY transactionid DESC\n>> LIMIT 1;\n>\n> Despite this good suggestion, if you're using this technique to generate\n> the next transaction ID, you're going to have errors as concurrency rises.\n>\n> Use a SERIAL, which guarantees that you won't have two processes generate\n> the same number.\n>\n> --\n> Bill Moran\n> Potential Technologies\n> http://www.potentialtech.com\n>\n>\n",
"msg_date": "Thu, 2 Oct 2003 16:11:21 -0400 (EDT)",
"msg_from": "\"Rong Wu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thanks - Re: low cardinality column"
}
] |
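Putting Josh's index and Rod's rewrite from the thread above together (the index name here is made up); ordering by both indexed columns is a common way to make sure the scan direction matches the index:

    CREATE INDEX cbntransaction_type_txn_idx
        ON cbntransaction (transactiontypeid, transactionid);

    SELECT transactionid
    FROM cbntransaction
    WHERE transactiontypeid = 0
    ORDER BY transactiontypeid DESC, transactionid DESC
    LIMIT 1;

And, per Bill's warning, if the point of the MAX() is to hand out the next id, a sequence (SERIAL column plus nextval()) is the concurrency-safe route rather than any SELECT-based scheme.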
[
{
"msg_contents": "Hi,\n\nI have a somewhat large table, 3 million rows, 1 Gig on disk, and growing. Doing a\ncount(*) takes around 40 seconds.\n\nLooks like the count(*) fetches the table from disk and goes through it.\nMade me wonder, why the optimizer doesn't just choose the smallest index\nwhich in my case is around 60 Megs and goes through it, which it could\ndo in a fraction of the time.\n\nDror\n\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 2 Oct 2003 12:15:47 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "count(*) slow on large tables"
},
{
"msg_contents": "> Hi,\n> \n> I have a somewhat large table, 3 million rows, 1 Gig on disk, and growing. Doing a\n> count(*) takes around 40 seconds.\n> \n> Looks like the count(*) fetches the table from disk and goes through it.\n> Made me wonder, why the optimizer doesn't just choose the smallest index\n> which in my case is around 60 Megs and goes through it, which it could\n> do in a fraction of the time.\n> \n> Dror\n\nJust like other aggregate functions, count(*) won't use indexes when \ncounting whole table.\n\nRegards,\nTomasz Myrta\n\n",
"msg_date": "Thu, 02 Oct 2003 21:36:42 +0200",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "On Thu, Oct 02, 2003 at 12:15:47 -0700,\n Dror Matalon <[email protected]> wrote:\n> Hi,\n> \n> I have a somewhat large table, 3 million rows, 1 Gig on disk, and growing. Doing a\n> count(*) takes around 40 seconds.\n> \n> Looks like the count(*) fetches the table from disk and goes through it.\n> Made me wonder, why the optimizer doesn't just choose the smallest index\n> which in my case is around 60 Megs and goes through it, which it could\n> do in a fraction of the time.\n\nBecause it can't tell from the index if a tuple is visible to the current\ntransaction and would still have to hit the table to check this. So that\nperformance would be a lot worse instead of better.\n",
"msg_date": "Thu, 2 Oct 2003 14:39:05 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "On Thu, Oct 02, 2003 at 12:46:45 -0700,\n Dror Matalon <[email protected]> wrote:\n\nPlease keep replies copied to the list.\n\n> When would it happen that a tuple be invisible to the current\n> transaction? Are we talking about permissions?\n\nThey could be tuples that were changed by a transaction that hasn't committed\nor in the case of serializable isolation, a transaction that committed after\nthe current transaction started.\n\n> \n> On Thu, Oct 02, 2003 at 02:39:05PM -0500, Bruno Wolff III wrote:\n> > On Thu, Oct 02, 2003 at 12:15:47 -0700,\n> > Dror Matalon <[email protected]> wrote:\n> > > Hi,\n> > > \n> > > I have a somewhat large table, 3 million rows, 1 Gig on disk, and growing. Doing a\n> > > count(*) takes around 40 seconds.\n> > > \n> > > Looks like the count(*) fetches the table from disk and goes through it.\n> > > Made me wonder, why the optimizer doesn't just choose the smallest index\n> > > which in my case is around 60 Megs and goes through it, which it could\n> > > do in a fraction of the time.\n> > \n> > Because it can't tell from the index if a tuple is visible to the current\n> > transaction and would still have to hit the table to check this. So that\n> > performance would be a lot worse instead of better.\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> -- \n> Dror Matalon, President\n> Zapatec Inc \n> 1700 MLK Way\n> Berkeley, CA 94709\n> http://www.zapatec.com\n",
"msg_date": "Thu, 2 Oct 2003 14:58:43 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "That's one of the draw back of MVCC. \nI once suggested that the transaction number and other house keeping\ninfo be included in the index, but was told to forget it...\nIt would solve once and for all the issue of seq_scan vs index_scan.\nIt would simplify the aggregate problem.\n\n\nBruno Wolff III wrote:\n> \n> On Thu, Oct 02, 2003 at 12:15:47 -0700,\n> Dror Matalon <[email protected]> wrote:\n> > Hi,\n> >\n> > I have a somewhat large table, 3 million rows, 1 Gig on disk, and growing. Doing a\n> > count(*) takes around 40 seconds.\n> >\n> > Looks like the count(*) fetches the table from disk and goes through it.\n> > Made me wonder, why the optimizer doesn't just choose the smallest index\n> > which in my case is around 60 Megs and goes through it, which it could\n> > do in a fraction of the time.\n> \n> Because it can't tell from the index if a tuple is visible to the current\n> transaction and would still have to hit the table to check this. So that\n> performance would be a lot worse instead of better.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n",
"msg_date": "Thu, 02 Oct 2003 17:29:28 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "[email protected] (Jean-Luc Lachance) writes:\n> That's one of the draw back of MVCC. \n> I once suggested that the transaction number and other house keeping\n> info be included in the index, but was told to forget it...\n> It would solve once and for all the issue of seq_scan vs index_scan.\n> It would simplify the aggregate problem.\n\nIt would only simplify _one_ case, namely the case where someone cares\nabout the cardinality of a relation, and it would do that at\n_considerable_ cost.\n\nA while back I outlined how this would have to be done, and for it to\nbe done efficiently, it would be anything BUT simple. \n\nIt would be very hairy to implement it correctly, and all this would\ncover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n\nIf you had a single WHERE clause attached, you would have to revert to\nwalking through the tuples looking for the ones that are live and\ncommitted, which is true for any DBMS.\n\nAnd it still begs the same question, of why the result of this query\nwould be particularly meaningful to anyone. I don't see the\nusefulness; I don't see the value of going to the considerable effort\nof \"fixing\" this purported problem.\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Thu, 02 Oct 2003 17:57:30 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "\nI don't have an opinion on how hard it would be to implement the\ntracking in the indexes, but \"select count(*) from some table\" is, in my\nexperience, a query that people tend to run quite often. \nOne of the databases that I've used, I believe it was Informix, had that\ninfo cached so that it always new how many rows there were in any\ntable. It was quite useful.\n\n\nOn Thu, Oct 02, 2003 at 05:57:30PM -0400, Christopher Browne wrote:\n> [email protected] (Jean-Luc Lachance) writes:\n> > That's one of the draw back of MVCC. \n> > I once suggested that the transaction number and other house keeping\n> > info be included in the index, but was told to forget it...\n> > It would solve once and for all the issue of seq_scan vs index_scan.\n> > It would simplify the aggregate problem.\n> \n> It would only simplify _one_ case, namely the case where someone cares\n> about the cardinality of a relation, and it would do that at\n> _considerable_ cost.\n> \n> A while back I outlined how this would have to be done, and for it to\n> be done efficiently, it would be anything BUT simple. \n> \n> It would be very hairy to implement it correctly, and all this would\n> cover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n> \n> If you had a single WHERE clause attached, you would have to revert to\n> walking through the tuples looking for the ones that are live and\n> committed, which is true for any DBMS.\n> \n> And it still begs the same question, of why the result of this query\n> would be particularly meaningful to anyone. I don't see the\n> usefulness; I don't see the value of going to the considerable effort\n> of \"fixing\" this purported problem.\n> -- \n> let name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n> <http://dev6.int.libertyrms.com/>\n> Christopher Browne\n> (416) 646 3304 x124 (land)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon, President\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 2 Oct 2003 15:33:13 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "The world rejoiced as [email protected] (Dror Matalon) wrote:\n> I don't have an opinion on how hard it would be to implement the\n> tracking in the indexes, but \"select count(*) from some table\" is, in my\n> experience, a query that people tend to run quite often. \n> One of the databases that I've used, I believe it was Informix, had that\n> info cached so that it always new how many rows there were in any\n> table. It was quite useful.\n\nI can't imagine why the raw number of tuples in a relation would be\nexpected to necessarily be terribly useful.\n\nI'm involved with managing Internet domains, and it's only when people\nare being pretty clueless that anyone imagines that \"select count(*)\nfrom domains;\" would be of any use to anyone. There are enough \"test\ndomains\" and \"inactive domains\" and other such things that the raw\nnumber of \"things in the table\" aren't really of much use.\n\n- I _do_ care how many pages a table occupies, to some extent, as that\ndetermines whether it will fit in my disk space or not, but that's not\nCOUNT(*).\n\n- I might care about auditing the exact numbers of records in order to\nbe assured that a data conversion process was done correctly. But in\nthat case, I want to do something a whole *lot* more detailed than\nmere COUNT(*).\n\nI'm playing \"devil's advocate\" here, to some extent. But\nrealistically, there is good reason to be skeptical of the merits of\nusing SELECT COUNT(*) FROM TABLE for much of anything.\n\nFurthermore, the relation that you query mightn't be a physical\n\"table.\" It might be a more virtual VIEW, and if that's the case,\nbets are even MORE off. If you go with the common dictum of \"good\ndesign\" that users don't directly access tables, but go through VIEWs,\nusers may have no way to get at SELECT COUNT(*) FROM TABLE.\n-- \noutput = reverse(\"ac.notelrac.teneerf\" \"@\" \"454aa\")\nhttp://www.ntlug.org/~cbbrowne/finances.html\nRules of the Evil Overlord #74. \"When I create a multimedia\npresentation of my plan designed so that my five-year-old advisor can\neasily understand the details, I will not label the disk \"Project\nOverlord\" and leave it lying on top of my desk.\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Thu, 02 Oct 2003 22:08:18 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "\nI smell a religious war in the aii:-). \nCan you go several days in a row without doing select count(*) on any\nof your tables? \n\nI suspect that this is somewhat a domain specific issue. In some areas\nyou don't need to know the total number of rows in your tables, in\nothers you do. \n\nI also suspect that you're right, that end user applications don't use\nthis information as often as DBAs would. On the other hand, it seems\nwhenever you want to optimize your app (something relevant to this list),\none of the things you do need to know is the number of rows in your\ntable.\n\nDror\n\nOn Thu, Oct 02, 2003 at 10:08:18PM -0400, Christopher Browne wrote:\n> The world rejoiced as [email protected] (Dror Matalon) wrote:\n> > I don't have an opinion on how hard it would be to implement the\n> > tracking in the indexes, but \"select count(*) from some table\" is, in my\n> > experience, a query that people tend to run quite often. \n> > One of the databases that I've used, I believe it was Informix, had that\n> > info cached so that it always new how many rows there were in any\n> > table. It was quite useful.\n> \n> I can't imagine why the raw number of tuples in a relation would be\n> expected to necessarily be terribly useful.\n> \n> I'm involved with managing Internet domains, and it's only when people\n> are being pretty clueless that anyone imagines that \"select count(*)\n> from domains;\" would be of any use to anyone. There are enough \"test\n> domains\" and \"inactive domains\" and other such things that the raw\n> number of \"things in the table\" aren't really of much use.\n> \n> - I _do_ care how many pages a table occupies, to some extent, as that\n> determines whether it will fit in my disk space or not, but that's not\n> COUNT(*).\n> \n> - I might care about auditing the exact numbers of records in order to\n> be assured that a data conversion process was done correctly. But in\n> that case, I want to do something a whole *lot* more detailed than\n> mere COUNT(*).\n> \n> I'm playing \"devil's advocate\" here, to some extent. But\n> realistically, there is good reason to be skeptical of the merits of\n> using SELECT COUNT(*) FROM TABLE for much of anything.\n> \n> Furthermore, the relation that you query mightn't be a physical\n> \"table.\" It might be a more virtual VIEW, and if that's the case,\n> bets are even MORE off. If you go with the common dictum of \"good\n> design\" that users don't directly access tables, but go through VIEWs,\n> users may have no way to get at SELECT COUNT(*) FROM TABLE.\n> -- \n> output = reverse(\"ac.notelrac.teneerf\" \"@\" \"454aa\")\n> http://www.ntlug.org/~cbbrowne/finances.html\n> Rules of the Evil Overlord #74. \"When I create a multimedia\n> presentation of my plan designed so that my five-year-old advisor can\n> easily understand the details, I will not label the disk \"Project\n> Overlord\" and leave it lying on top of my desk.\"\n> <http://www.eviloverlord.com/>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon, President\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 2 Oct 2003 21:27:54 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "\nChristopher Browne <[email protected]> writes:\n\n> It would be very hairy to implement it correctly, and all this would\n> cover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n> \n> If you had a single WHERE clause attached, you would have to revert to\n> walking through the tuples looking for the ones that are live and\n> committed, which is true for any DBMS.\n\nWell it would be handy for a few other cases as well. \n\n1 It would be useful for the case where you have a partial index with a\n matching where clause. The optimizer already considers using such indexes\n but it has to pay the cost of the tuple lookup, which is substantial.\n\n2 It would be useful for the very common queries of the form \n WHERE x IN (select id from foo where some_indexed_expression)\n\n (Or the various equivalent forms including outer joins that test to see if\n the matching record was found and don't retrieve any other columns in the\n select list.)\n\n3 It would be useful for many-many relationships where the intermediate table\n has only the two primary key columns being joined. If you create a\n multi-column index on the two columns it shouldn't need to look up the\n tuple. This would be effectively be nearly equivalent to an \"index organized\n table\".\n\n\n4 It would be useful for just about all the referential integrity queries...\n\n\nI don't mean to say this is definitely a good thing. The tradeoff in\ncomplexity and time to maintain the index pages would be large. But don't\ndismiss it as purely a count(*) optimization hack.\n\nI know Oracle is capable of it and it can speed up your query a lot when you\nremove that last unnecessary column from a join table allowing oracle to skip\nthe step of reading the table.\n\n-- \ngreg\n\n",
"msg_date": "03 Oct 2003 01:13:08 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
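To make Greg's first case concrete, this is the sort of partial index and matching query he has in mind (a sketch only -- the table and column names are invented, and in PostgreSQL of this era the planner must still visit the heap for each index entry to check tuple visibility, which is exactly the cost he mentions):

CREATE TABLE foo (id integer, deleted boolean, payload text);
CREATE INDEX foo_live_idx ON foo (id) WHERE deleted = false;

-- The query's predicate matches the partial index's predicate, so the
-- planner can consider foo_live_idx, but each match still costs a heap fetch:
SELECT count(*) FROM foo WHERE deleted = false;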
{
"msg_contents": "Dror Matalon wrote:\n\n> I smell a religious war in the aii:-). \n> Can you go several days in a row without doing select count(*) on any\n> of your tables? \n> \n> I suspect that this is somewhat a domain specific issue. In some areas\n> you don't need to know the total number of rows in your tables, in\n> others you do. \n\nIf I were you, I would have an autovacuum daemon running and rather than doing \nselect count(*), I would look at stats generated by vacuums. They give \napproximate number of tuples and it should be good enough it is accurate within \na percent.\n\nJust another approach of achieving same thing.. Don't be religious about running \na qeury from SQL prompt. That's it..\n\n Shridhar\n\n",
"msg_date": "Fri, 03 Oct 2003 11:59:02 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Oops! [email protected] (Dror Matalon) was seen spray-painting on a wall:\n> I smell a religious war in the aii:-). \n> Can you go several days in a row without doing select count(*) on any\n> of your tables? \n\nI would be more likely, personally, to run \"VACUUM VERBOSE ANALYZE\",\nwhich has useful side-effects :-).\n\n> I suspect that this is somewhat a domain specific issue. In some\n> areas you don't need to know the total number of rows in your\n> tables, in others you do.\n\n\"Relationship tables,\" which don't contain data in their own right,\nbut which, instead, link together records in other tables, are likely\nto have particularly useless COUNT(*)'s.\n\n> I also suspect that you're right, that end user applications don't\n> use this information as often as DBAs would. On the other hand, it\n> seems whenever you want to optimize your app (something relevant to\n> this list), one of the things you do need to know is the number of\n> rows in your table.\n\nAh, but in the case of optimization, there's little need for\n\"transactionally safe, MVCC-managed, known-to-be-exact\" values.\nApproximations are plenty good enough to get the right plan.\n\nFurthermore, it's not the number of rows that is most important when\noptimizing queries; the number of pages are more relevant to the\nmatter, as that's what the database is slinging around.\n-- \n(reverse (concatenate 'string \"ac.notelrac.teneerf\" \"@\" \"454aa\"))\nhttp://www3.sympatico.ca/cbbrowne/multiplexor.html\nRules of the Evil Overlord #134. \"If I am escaping in a large truck\nand the hero is pursuing me in a small Italian sports car, I will not\nwait for the hero to pull up along side of me and try to force him off\nthe road as he attempts to climb aboard. Instead I will slam on the\nbrakes when he's directly behind me. (A rudimentary knowledge of\nphysics can prove quite useful.)\" <http://www.eviloverlord.com/>\n",
"msg_date": "Fri, 03 Oct 2003 07:37:07 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "On Thu, 2 Oct 2003, Christopher Browne wrote:\n\n> I can't imagine why the raw number of tuples in a relation would be\n> expected to necessarily be terribly useful.\n>\n\nWe use stuff like that for reporting queries.\n\nexample:\nOn our message boards each post is a row. The powers that be like to know\nhow many posts there are total (In addition to 'today')-\nselect count(*) from posts is how it has been\ndone on our informix db. With our port to PG I instead select reltuples\npg_class.\n\nI know when I login to a new db (or unknown to me db) the first thing I do\nis look at tables and see what sort of data there is.. but in code I'd\nrarely do that.\n\nI know some monitoring things around here also do a select count(*) on\nsometable to ensure it is growing, but like you said, this is easily done\nwith the number of pages as well.\n\nyes. Informix caches this data. I believe Oracle does too.\n\nMysql with InnoDB does the same thing PG does. (MyISAM caches it)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Fri, 3 Oct 2003 08:36:42 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
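For anyone who wants to try the reltuples trick Jeff describes, it is just a lookup in the system catalog; the figure is approximate and only as fresh as the last VACUUM or ANALYZE of the table ('posts' below stands in for whatever table you care about):

SELECT relname, reltuples, relpages
  FROM pg_class
 WHERE relname = 'posts';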
{
"msg_contents": "Well I can think of many more case where it would be usefull:\n\nSELECT COUNT(DISTINCT x) FROM ...\nSELECT COUNT(*) FROM ... WHERE x = ?\n\n\nAlso having transaction number (visibility) would tip the balance more\ntoward index_scan than seq_scan because you do not have to look up\nvisibility in the data file. We all know this has been an issue many\ntimes.\nHaving a different index file structure when the index is not UNIQUE\nwould help too.\nThe last page of a non unique index could hold more stats.\n\n\n\nChristopher Browne wrote:\n> \n> [email protected] (Jean-Luc Lachance) writes:\n> > That's one of the draw back of MVCC.\n> > I once suggested that the transaction number and other house keeping\n> > info be included in the index, but was told to forget it...\n> > It would solve once and for all the issue of seq_scan vs index_scan.\n> > It would simplify the aggregate problem.\n> \n> It would only simplify _one_ case, namely the case where someone cares\n> about the cardinality of a relation, and it would do that at\n> _considerable_ cost.\n> \n> A while back I outlined how this would have to be done, and for it to\n> be done efficiently, it would be anything BUT simple.\n> \n> It would be very hairy to implement it correctly, and all this would\n> cover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n> \n> If you had a single WHERE clause attached, you would have to revert to\n> walking through the tuples looking for the ones that are live and\n> committed, which is true for any DBMS.\n> \n> And it still begs the same question, of why the result of this query\n> would be particularly meaningful to anyone. I don't see the\n> usefulness; I don't see the value of going to the considerable effort\n> of \"fixing\" this purported problem.\n> --\n> let name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n> <http://dev6.int.libertyrms.com/>\n> Christopher Browne\n> (416) 646 3304 x124 (land)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Fri, 03 Oct 2003 11:48:39 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "In the last exciting episode, [email protected] (Jean-Luc Lachance) wrote:\n> Well I can think of many more case where it would be usefull:\n>\n> SELECT COUNT(DISTINCT x) FROM ...\n> SELECT COUNT(*) FROM ... WHERE x = ?\n\nThose are precisely the cases that the \"other databases\" ALSO fall\ndown on.\n\nMaintaining those sorts of statistics would lead [in _ANY_ database;\nPostgreSQL has no disadvantage in this] to needing for each and every\nupdate to update a whole host of statistic values.\n\nIt would be fairly reasonable to have a trigger, in PostgreSQL, to\nmanage this sort of information. It would not be outrageously\ndifficult to substantially improve performance of queries, at the\nconsiderable cost that each and every update would have to update a\nstatistics table.\n\nIf you're doing a whole lot of these sorts of queries, then it is a\nreasonable idea to create appropriate triggers for the (probably very\nfew) tables where you are doing these counts.\n\nBut the notion that this should automatically be applied to all tables\nalways is a dangerous one. It would make update performance Suck\nBadly, because the extra statistical updates would be quite expensive.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','cbbrowne.com').\nhttp://www3.sympatico.ca/cbbrowne/multiplexor.html\nI'm sorry Dave, I can't let you do that.\nWhy don't you lie down and take a stress pill?\n",
"msg_date": "Fri, 03 Oct 2003 17:55:25 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Christopher Browne kirjutas R, 03.10.2003 kell 00:57:\n> [email protected] (Jean-Luc Lachance) writes:\n> > That's one of the draw back of MVCC. \n> > I once suggested that the transaction number and other house keeping\n> > info be included in the index, but was told to forget it...\n> > It would solve once and for all the issue of seq_scan vs index_scan.\n> > It would simplify the aggregate problem.\n> \n> It would only simplify _one_ case, namely the case where someone cares\n> about the cardinality of a relation, and it would do that at\n> _considerable_ cost.\n> \n> A while back I outlined how this would have to be done, and for it to\n> be done efficiently, it would be anything BUT simple. \n\nCould this be made a TODO item, perhaps with your attack plan. \nOf course as strictly optional feature useful only for special situations\n(see below)\n\nI cross-post this to [HACKERS] as it seem relevant to a problem recently\ndiscussed there.\n\n> It would be very hairy to implement it correctly, and all this would\n> cover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n\nNot really. Just yesterday there was a discussion on [HACKERS] about\nimplementing btree-organized tables, which would be much less needed if\nthe visibility info were kept in indexes. \n\n> If you had a single WHERE clause attached, you would have to revert to\n> walking through the tuples looking for the ones that are live and\n> committed, which is true for any DBMS.\n\nIf the WHERE clause could use the same index (or any index with\nvisibility info) then there would be no need for \"walking through the\ntuples\" in data relation.\n\nthe typical usecase cited on [HACKERS] was time series data, where\ninserts are roughly in (timestamp,id)order but queries in (id,timestamp)\norder. Now if the index would include all relevant fields\n(id,timestamp,data1,data2,...,dataN) then the query could run on index\nonly touching just a few pages and thus vastly improving performance. I\nagree that this is not something everybody needs, but when it is needed\nit is needed bad.\n\n> And it still begs the same question, of why the result of this query\n> would be particularly meaningful to anyone. I don't see the\n> usefulness; I don't see the value of going to the considerable effort\n> of \"fixing\" this purported problem.\n\nBeing able to do fast count(*) is just a side benefit.\n\n----------------\nHannu\n\n",
"msg_date": "Sat, 04 Oct 2003 12:00:04 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Index/Function organized table layout (from Re:"
},
{
"msg_contents": "\n> On our message boards each post is a row. The powers that be like to know\n> how many posts there are total (In addition to 'today')-\n> select count(*) from posts is how it has been\n> done on our informix db. With our port to PG I instead select reltuples\n> pg_class.\n\nWe have exactly the same situation, except we just added a 'num_replies' \nfield to each thread and a 'num_posts' field to each forum, so that \ngetting that information out is a very fast operation. Because, of \ncourse, there are hundreds of times more reads of that information than \nwrites...\n\nChris\n\n",
"msg_date": "Sat, 04 Oct 2003 17:37:32 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
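The denormalized counters Chris mentions can be maintained by the application or by a small trigger along these lines (just a sketch -- the threads/posts tables and column names here are assumptions rather than his actual schema, an UPDATE that moves a post to another thread would need extra handling, and as noted elsewhere in the thread the extra UPDATE per write is not free):

CREATE FUNCTION maintain_num_replies() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE threads SET num_replies = num_replies + 1
         WHERE thread_id = NEW.thread_id;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE threads SET num_replies = num_replies - 1
         WHERE thread_id = OLD.thread_id;
    END IF;
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER posts_num_replies AFTER INSERT OR DELETE ON posts
    FOR EACH ROW EXECUTE PROCEDURE maintain_num_replies();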
{
"msg_contents": "Christopher Browne wrote:\n> [email protected] (Jean-Luc Lachance) writes:\n> > That's one of the draw back of MVCC. \n> > I once suggested that the transaction number and other house keeping\n> > info be included in the index, but was told to forget it...\n> > It would solve once and for all the issue of seq_scan vs index_scan.\n> > It would simplify the aggregate problem.\n> \n> It would only simplify _one_ case, namely the case where someone cares\n> about the cardinality of a relation, and it would do that at\n> _considerable_ cost.\n> \n> A while back I outlined how this would have to be done, and for it to\n> be done efficiently, it would be anything BUT simple. \n> \n> It would be very hairy to implement it correctly, and all this would\n> cover is the single case of \"SELECT COUNT(*) FROM SOME_TABLE;\"\n\nWe do have a TODO item:\n\n\t* Consider using MVCC to cache count(*) queries with no WHERE clause\n\nThe idea is to cache a recent count of the table, then have\ninsert/delete add +/- records to the count. A COUNT(*) would get the\nmain cached record plus any visible +/- records. This would allow the\ncount to return the proper value depending on the visibility of the\nrequesting transaction, and it would require _no_ heap or index scan.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 4 Oct 2003 11:56:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
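Until something like that TODO item exists inside the backend, the same +/- idea can be approximated today at the SQL level with an ordinary table and triggers. A rough sketch for a hypothetical table foo (names invented; all changes must go through plain INSERT/DELETE for the numbers to stay honest, seeding the initial value needs a little care if writes are happening concurrently, and the delta table has to be compacted back to a single row periodically -- much as Christopher describes later in the thread -- or the sum itself gets slow):

CREATE TABLE foo_count (delta bigint NOT NULL);

CREATE FUNCTION foo_count_delta() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        INSERT INTO foo_count VALUES (1);
    ELSE
        INSERT INTO foo_count VALUES (-1);
    END IF;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER foo_count_ins AFTER INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_count_delta();
CREATE TRIGGER foo_count_del AFTER DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_count_delta();

INSERT INTO foo_count SELECT count(*) FROM foo;  -- seed the base count once

-- Each transaction sees the base row plus whatever delta rows are visible
-- to it, so this comes out the same as SELECT count(*) FROM foo, without
-- scanning foo:
SELECT sum(delta) AS row_count FROM foo_count;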
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Christopher Browne kirjutas R, 03.10.2003 kell 00:57:\n>> A while back I outlined how this would have to be done, and for it to\n>> be done efficiently, it would be anything BUT simple. \n\n> Could this be made a TODO item, perhaps with your attack plan. \n\nIf I recall that discussion correctly, no one including Christopher\nthought the attack plan was actually reasonable.\n\nWhat this keeps coming down to is that an optimization that helps only\nCOUNT(*)-of-one-table-with-no-WHERE-clause would be too expensive in\ndevelopment and maintenance effort to justify its existence.\n\nAt least if you insist on an exact, MVCC-correct answer. So far as I've\nseen, the actual use cases for unqualified COUNT(*) could be handled\nequally well by an approximate answer. What we should be doing rather\nthan wasting large amounts of time trying to devise exact solutions is\ntelling people to look at pg_class.reltuples for approximate answers.\nWe could also be looking at beefing up support for that approach ---\nmaybe provide some syntactic sugar for the lookup, maybe see if we can\nupdate reltuples in more places than we do now, make sure that the\nautovacuum daemon includes \"keep reltuples accurate\" as one of its\ndesign goals, etc etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Oct 2003 12:07:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "COUNT(*) again (was Re: [HACKERS] Index/Function organized table\n\tlayout)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> We do have a TODO item:\n> \t* Consider using MVCC to cache count(*) queries with no WHERE clause\n\n> The idea is to cache a recent count of the table, then have\n> insert/delete add +/- records to the count. A COUNT(*) would get the\n> main cached record plus any visible +/- records. This would allow the\n> count to return the proper value depending on the visibility of the\n> requesting transaction, and it would require _no_ heap or index scan.\n\n... and it would give the wrong answers. Unless the cache is somehow\nsnapshot-aware, so that it can know which other transactions should be\nincluded in your count.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Oct 2003 12:49:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > We do have a TODO item:\n> > \t* Consider using MVCC to cache count(*) queries with no WHERE clause\n> \n> > The idea is to cache a recent count of the table, then have\n> > insert/delete add +/- records to the count. A COUNT(*) would get the\n> > main cached record plus any visible +/- records. This would allow the\n> > count to return the proper value depending on the visibility of the\n> > requesting transaction, and it would require _no_ heap or index scan.\n> \n> ... and it would give the wrong answers. Unless the cache is somehow\n> snapshot-aware, so that it can know which other transactions should be\n> included in your count.\n\nThe cache is an ordinary table, with xid's on every row. I meant it\nwould require no index/heap scans of the large table --- it would still\nrequire a scan of the \"count\" table.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 4 Oct 2003 13:48:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> ... and it would give the wrong answers. Unless the cache is somehow\n>> snapshot-aware, so that it can know which other transactions should be\n>> included in your count.\n\n> The cache is an ordinary table, with xid's on every row. I meant it\n> would require no index/heap scans of the large table --- it would still\n> require a scan of the \"count\" table.\n\nOh, that idea. Yeah, I think we had concluded it might work. You'd\nbetter make the TODO item link to that discussion, because there's sure\nbeen plenty of discussion of ideas that wouldn't work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Oct 2003 13:51:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> ... and it would give the wrong answers. Unless the cache is somehow\n> >> snapshot-aware, so that it can know which other transactions should be\n> >> included in your count.\n> \n> > The cache is an ordinary table, with xid's on every row. I meant it\n> > would require no index/heap scans of the large table --- it would still\n> > require a scan of the \"count\" table.\n> \n> Oh, that idea. Yeah, I think we had concluded it might work. You'd\n> better make the TODO item link to that discussion, because there's sure\n> been plenty of discussion of ideas that wouldn't work.\n\nOK, I beefed up the TODO:\n\n\t* Use a fixed row count and a +/- count with MVCC visibility rules\n\t to allow fast COUNT(*) queries with no WHERE clause(?)\n\nI can always give the details if someone asks. It doesn't seem complex\nenough for a separate TODO.detail item.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 4 Oct 2003 14:19:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> It doesn't seem complex enough for a separate TODO.detail item.\n\nI thought it was, if only because it is so easy to think of wrong\nimplementations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Oct 2003 14:34:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables "
},
{
"msg_contents": "Tom Lane kirjutas L, 04.10.2003 kell 19:07:\n> Hannu Krosing <[email protected]> writes:\n> > Christopher Browne kirjutas R, 03.10.2003 kell 00:57:\n> >> A while back I outlined how this would have to be done, and for it to\n> >> be done efficiently, it would be anything BUT simple. \n> \n> > Could this be made a TODO item, perhaps with your attack plan. \n> \n> If I recall that discussion correctly, no one including Christopher\n> thought the attack plan was actually reasonable.\n> \n> What this keeps coming down to is that an optimization that helps only\n> COUNT(*)-of-one-table-with-no-WHERE-clause would be too expensive in\n> development and maintenance effort to justify its existence.\n\nPlease read further in my email ;)\n\nThe point I was trying to make was that faster count(*)'s is just a side\neffect. If we could (conditionally) keep visibility info in indexes,\nthen this would also solve the problem fo much more tricky question of\nindex-structured tables.\n\nCount(*) is *not* the only query that could benefit from not needing to\ngo to actual data table for visibilty info, The much more needed case\nwould be the \"inveres time series\" type of queries, which would\notherways trash cache pages badly.\n\n----------------------------\nHannu\n\n",
"msg_date": "Sat, 04 Oct 2003 23:59:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT(*) again (was Re: [HACKERS] Index/Function"
},
{
"msg_contents": "On 10/4/03 2:00 AM, \"Hannu Krosing\" <[email protected]> wrote:\n> \n> If the WHERE clause could use the same index (or any index with\n> visibility info) then there would be no need for \"walking through the\n> tuples\" in data relation.\n> \n> the typical usecase cited on [HACKERS] was time series data, where\n> inserts are roughly in (timestamp,id)order but queries in (id,timestamp)\n> order. Now if the index would include all relevant fields\n> (id,timestamp,data1,data2,...,dataN) then the query could run on index\n> only touching just a few pages and thus vastly improving performance. I\n> agree that this is not something everybody needs, but when it is needed\n> it is needed bad.\n\n\n\nI would add that automatically index-organizing tuples isn't just useful for\ntime-series data (though it is a good example), but can be used to\nsubstantially improve the query performance of any really large table in a\nnumber of different and not always direct ways. Once working sets routinely\nexceed the size of physical RAM, buffer access/utilization efficiency often\nbecomes the center of performance tuning, but not one that many people know\nmuch about.\n\nOne of the less direct ways of using btree-organized tables for improving\nscalability is to \"materialize\" table indexes of tables that *shouldn't* be\nbtree-organized. Not only can you turn tables into indexes, but you can\nalso turn indexes into tables, which can have advantages in some cases.\n\n\nFor example, I did some scalability consulting at a well-known movie rental\ncompany with some very large Oracle databases running on big Sun boxen. One\nof the biggest problems was that their rental history table, which had a\ndetailed record of every movie ever rented by every customer, had grown so\nlarge that the performance was getting painfully slow. To make matters\nworse, it and a few related tables had high concurrent usage, a mixture of\nmany performance-sensitive queries grabbing windows of a customer's history\nplus a few broader OLAP queries which were not time sensitive. Everything\nwas technically optimized in a relational and basic configuration sense, and\nthe database guys at the company were at a loss on how to fix this problem.\nPerformance of all queries was essentially bound by how fast pages could be\nmoved between the disk and buffers.\n\nIssue #1: The history rows had quite a lot of columns and the OLAP\nprocesses used non-primary indexes, so the table was not particularly\nsuitable for btree-organizing.\n\nIssue #2: Partitioning was not an option because it would have exceeded\ncertain limits in Oracle (at that time, I don't know if that has changed).\n\nIssue #3: Although customer histories were being constantly queried, data\nneeded most was really an index view of the customers history, not the\ndetails of the history itself.\n\n\nThe solution I came up with was to use a synced btree-organized partial\nclone of the main history table that only contained a small number of key\ncolumns that mattered for generating customer history indexes in the\napplications that used them. While this substantially increased the disk\nspace footprint for the same data (since we were cloning it), it greatly\nreduced the total number of cache misses for the typical query, only\nfetching the full history row pages when actually needed. In other words,\nbasically concentrating more buffer traffic into a smaller number of page\nbuffers. 
What we had was an exceedingly active but relatively compact\nmaterialized index of the history table that could essentially stay resident\nin RAM, and a much less active history table+indexes that while less likely\nto be buffered than before, had pages accessed at such a reduced frequency\nthat there was a huge net performance gain because disk access plummeted.\n\nAverage performance improvement for the time sensitive queries: 50-70x\n\nSo btree-organized tables can do more than make tables behave like indexes.\nThey can also make indexes behave like tables. Both are very useful in some\ncases when your working set exceeds the physical buffer space. For smaller\ndatabases this has much less utility and users need to understand the\nlimitations, nonetheless when tables and databases get really big it becomes\nan important tool in the tool belt.\n\nCheers,\n\n-James Rogers\n [email protected]\n\n",
"msg_date": "Sat, 04 Oct 2003 14:15:12 -0700",
"msg_from": "James Rogers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Uses for Index/Function organizing"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> The point I was trying to make was that faster count(*)'s is just a side\n> effect. If we could (conditionally) keep visibility info in indexes,\n\nI think that's not happening, conditionally or otherwise. The atomicity\nproblems alone are sufficient reason why not, even before you look at\nthe performance issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Oct 2003 17:15:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT(*) again (was Re: [HACKERS] Index/Function organized table\n\tlayout)"
},
{
"msg_contents": "Quoth [email protected] (Tom Lane):\n> Bruce Momjian <[email protected]> writes:\n>> We do have a TODO item:\n>> \t* Consider using MVCC to cache count(*) queries with no WHERE clause\n>\n>> The idea is to cache a recent count of the table, then have\n>> insert/delete add +/- records to the count. A COUNT(*) would get the\n>> main cached record plus any visible +/- records. This would allow the\n>> count to return the proper value depending on the visibility of the\n>> requesting transaction, and it would require _no_ heap or index scan.\n>\n> ... and it would give the wrong answers. Unless the cache is somehow\n> snapshot-aware, so that it can know which other transactions should be\n> included in your count.\n\n[That's an excellent summary that Bruce did of what came out of the\nprevious discussion...]\n\nIf this \"cache\" was a table, itself, the visibility of its records\nshould be identical to that of the visibility of the \"real\" records.\n+/- records would become visible when the transaction COMMITed, at the\nvery same time the source records became visible.\n\nI thought, at one point, that it would be a slick idea for \"record\ncompression\" to take place automatically; when you do a COUNT(*), the\nprocess would include compressing multiple records down to one.\nUnfortunately, that turns out to be Tremendously Evil if the same\nCOUNT(*) were being concurrently processed in multiple transactions.\nBoth would repeat much the same work, and this would ultimately lead\nto one of the transactions aborting. [I recently saw this effect\noccur, um, a few times...]\n\nFor this not to have Evil Effects on unsuspecting transactions, we\nwould instead require some process analagous to VACUUM, where a single\ntransaction would be used to compress the \"counts table\" down to one\nrecord per table. Being independent of \"user transactions,\" it could\nsafely compress the data without injuring unsuspecting transactions.\n\nBut in most cases, the cost of this would be pretty prohibitive.\nEvery transaction that adds a record to a table leads to a record\nbeing added to table \"pg_exact_row_counts\". If transactions typically\ninvolve adding ONE row to any given table, this effectively doubles\nthe update traffic. Ouch. That means that in a _real_\nimplementation, it would make sense to pick and choose the tables that\nwould be so managed.\n\nIn my earlier arguing of \"You don't really want that!\", while I may\nhave been guilty of engaging in a _little_ hyperbole, I was certainly\n_not_ being facetious overall. At work, we tell the developers \"avoid\ndoing COUNT(*) inside ordinary transactions!\", and that is certainly\nNOT facetious comment. I recall a case a while back where system\nperformance was getting brutalized by a lurking COUNT(*). (Combined\nwith some other pathological behaviour, of course!) And note that\nthis wasn't a query that the TODO item could address; it was of the\nform \"SELECT COUNT(*) FROM SOME_TABLE WHERE OWNER = VALUE;\"\n\nAs you have commented elsewhere in the thread, much of the time, the\npoint of asking for COUNT(*) is often to get some idea of table size,\nwhere the precise number isn't terribly important in comparison with\ngetting general magnitude. Improving the ability to get approximate\nvalues would be of some value.\n\nI would further argue that \"SELECT COUNT(*) FROM TABLE\" isn't\nparticularly useful even when precision _is_ important. 
If I'm\nworking on reports that would be used to reconcile things, the queries\nI use are a whole lot more involved than that simple form. It is far\nmore likely that I'm using a GROUP BY.\n\nIt is legitimate to get wishful and imagine that it would be nice if\nwe could get the value of that query \"instantaneously.\" It is also\nlegitimate to think that the effort required to implement that might\nbe better used on improving other things.\n-- \n(reverse (concatenate 'string \"ac.notelrac.teneerf\" \"@\" \"454aa\"))\nhttp://www3.sympatico.ca/cbbrowne/\n\"very few people approach me in real life and insist on proving they\nare drooling idiots.\" -- Erik Naggum, comp.lang.lisp\n",
"msg_date": "Sat, 04 Oct 2003 19:33:46 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > The point I was trying to make was that faster count(*)'s is just a side\n> > effect. If we could (conditionally) keep visibility info in indexes,\n> \n> I think that's not happening, conditionally or otherwise. The atomicity\n> problems alone are sufficient reason why not, even before you look at\n> the performance issues.\n\nWhat are the atomicity problems of adding a create/expire xid to the\nindex tuples?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 5 Oct 2003 00:20:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT(*) again (was Re: [HACKERS] Index/Function organized"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> I think that's not happening, conditionally or otherwise. The atomicity\n>> problems alone are sufficient reason why not, even before you look at\n>> the performance issues.\n\n> What are the atomicity problems of adding a create/expire xid to the\n> index tuples?\n\nYou can't update a tuple's status in just one place ... you have to\nupdate the copies in the indexes too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Oct 2003 02:08:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT(*) again (was Re: [HACKERS] Index/Function organized table\n\tlayout)"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I think that's not happening, conditionally or otherwise. The atomicity\n> >> problems alone are sufficient reason why not, even before you look at\n> >> the performance issues.\n> \n> > What are the atomicity problems of adding a create/expire xid to the\n> > index tuples?\n> \n> You can't update a tuple's status in just one place ... you have to\n> update the copies in the indexes too.\n\nBut we don't update the tuple status for a commit, we just mark the xid\nas committed. We do have lazy status bits that prevent later lookups in\npg_clog, but we have those in the index already also.\n\nWhat am I missing?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 5 Oct 2003 09:36:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT(*) again (was Re: [HACKERS] Index/Function organized"
},
{
"msg_contents": "Bruce,\n\n> OK, I beefed up the TODO:\n>\n> \t* Use a fixed row count and a +/- count with MVCC visibility rules\n> \t to allow fast COUNT(*) queries with no WHERE clause(?)\n>\n> I can always give the details if someone asks. It doesn't seem complex\n> enough for a separate TODO.detail item.\n\nHmmm ... this doesn't seem effort-worthy to me. How often does anyone do \nCOUNT with no where clause, except GUIs that give you a record count? (of \ncourse, as always, if someone wants to code it, feel free ...)\n\nAnd for those GUIs, wouldn't it be 97% as good to run an ANALYZE and give the \napproximate record counts for large tables?\n\nAs for counts with a WHERE clause, this is obviously up to the user. Joe \nConway and I tested using a C trigger to track some COUNT ... GROUP BY values \nfor large tables based on additive numbers. It worked fairly well for \naccuracy, but the performance penalty on data writes was significant ... 8% \nto 25% penalty for UPDATES, depending on the frequency and batch size (> \nfrequency > batch size --> > penalty)\n\nIt's possible that this could be improved through some mechanism more tightly \nintegrated with the source code. However,the coding effort would be \nsignificant ( 12-20 hours ) and it's possible that there would be no \nimprovement, which is why we didn't do it.\n\nWe also discussed an asynchronous aggregates collector that would work \nsomething like the statistics collector, and keep pre-programmmed aggregate \ndata, updating during \"low-activity\" periods. This would significantly \nreduce the performance penalty, but at the cost of accuracy ... that is, a \n1%-5% variance on high-activity tables would be unavoidable, and all cached \naggregates would have to be recalculated on database restart, significantly \nslowing down startup. Again, we felt that the effort-result payoff was not \nworthwhile.\n\nOverall, I think the stuff we already have planned ... the hash aggregates in \n7.4 and Tom's suggestion of adding an indexable flag to pg_aggs ... are far \nmore likely to yeild useful fruit than any caching plan.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 5 Oct 2003 11:57:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
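For what it's worth, a plain plpgsql version of the additive COUNT ... GROUP BY tracking Josh refers to looks roughly like this (his tests used a C trigger, so this is only an approximation of the idea; the orders/status names are made up, UPDATEs of the grouping column are not handled, and two sessions inserting a brand-new group value at the same time can still collide):

CREATE TABLE orders_by_status (status text PRIMARY KEY, n bigint NOT NULL);

CREATE FUNCTION track_orders_by_status() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE orders_by_status SET n = n + 1 WHERE status = NEW.status;
        IF NOT FOUND THEN
            INSERT INTO orders_by_status VALUES (NEW.status, 1);
        END IF;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE orders_by_status SET n = n - 1 WHERE status = OLD.status;
    END IF;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER orders_counts AFTER INSERT OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE track_orders_by_status();

-- Instead of: SELECT status, count(*) FROM orders GROUP BY status;
SELECT status, n FROM orders_by_status;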
{
"msg_contents": "> And for those GUIs, wouldn't it be 97% as good to run an ANALYZE and give the \n> approximate record counts for large tables?\n\nInterfaces which run a COUNT(*) like that are broken by design. They\nfail to consider the table may really be a view which of course could\nnot be cached with results like that and may take days to load a full\nresult set (we had some pretty large views in an old billing system).",
"msg_date": "Sun, 05 Oct 2003 15:11:50 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "Bruce Momjian wrote:\n> OK, I beefed up the TODO:\n> \n> \t* Use a fixed row count and a +/- count with MVCC visibility rules\n> \t to allow fast COUNT(*) queries with no WHERE clause(?)\n> \n> I can always give the details if someone asks. It doesn't seem complex\n> enough for a separate TODO.detail item.\n\nMay I propose alternate approach for this optimisation?\n\n- Postgresql allows to maintain user defined variables in shared memory.\n- These variables obey transactions but do not get written to disk at all.\n- There should be a facility to detect whether such a variable is initialized or \nnot.\n\nHow it will help? This is in addition to trigger proposal that came up earlier. \nWith triggers it's not possible to make values visible across backends unless \ntrigger updates a table, which eventually leads to vacuum/dead tuples problem.\n\n1. User creates a trigger to check updates/inserts for certain conditions.\n2. It updates the count as and when required.\n3. If the trigger detects the count is not initialized, it would issue the same \nquery first time. There is no avoiding this issue.\n\nBesides providing facility of resident variables could be used imaginatively as \nwell.\n\nDoes this make sense? IMO this is more generalised approach over all.\n\nJust a thought.\n\n Shridhar\n\n\n\n",
"msg_date": "Mon, 06 Oct 2003 11:36:36 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "In article <[email protected]>,\nShridhar Daithankar <[email protected]> writes:\n\n> Dror Matalon wrote:\n>> I smell a religious war in the aii:-). Can you go several days in a\n>> row without doing select count(*) on any\n>> of your tables? I suspect that this is somewhat a domain specific\n>> issue. In some areas\n>> you don't need to know the total number of rows in your tables, in\n>> others you do.\n\n> If I were you, I would have an autovacuum daemon running and rather\n> than doing select count(*), I would look at stats generated by\n> vacuums. They give approximate number of tuples and it should be good\n> enough it is accurate within a percent.\n\nThe stats might indeed be a good estimate presumed there were not many\nchanges since the last VACUUM. But how about a variant of COUNT(*)\nusing an index? It would not be quite exact since it might contain\ntuples not visible in the current transaction, but it might be a much\nbetter estimate than the stats.\n\n",
"msg_date": "06 Oct 2003 17:08:36 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
},
{
"msg_contents": "> How it will help? This is in addition to trigger proposal that came\n> up earlier. With triggers it's not possible to make values visible\n> across backends unless trigger updates a table, which eventually\n> leads to vacuum/dead tuples problem.\n> \n> 1. User creates a trigger to check updates/inserts for certain conditions.\n> 2. It updates the count as and when required.\n> 3. If the trigger detects the count is not initialized, it would issue the \n> same query first time. There is no avoiding this issue.\n> \n> Besides providing facility of resident variables could be used\n> imaginatively as well.\n> \n> Does this make sense? IMO this is more generalised approach over all.\n\nI do this _VERY_ frequently in my databases, only I have my stored\nprocs do the aggregate in a predefined MVCC table that's always there.\nHere's a denormalized version for public consumption/thought:\n\nCREATE TABLE global.dba_aggregate_cache (\n dbl TEXT NOT NULL, -- The database location, doesn't need to be\n -- qualified (ex: schema.table.col)\n op TEXT NOT NULL, -- The operation, SUM, COUNT, etc.\n qual TEXT, -- Any kind of conditional, such as a where clause\n val_int INT, -- Whatever the value is, of type INT\n val_bigint BIGINT, -- Whatever the value is, of type BIGINT\n val_text TEXT, -- Whatever the value is, of type TEXT\n val_bytea BYTEA, -- Whatever the value is, of type BYTEA\n);\nCREATE UNIQUE INDEX dba_aggregate_cache_dbl_op_udx ON global.dba_aggregate_cache(dbl,op);\n\nThen, I use a function to retrieve this value instead of a SELECT\nCOUNT(*).\n\nSELECT public.cache_count('dbl','qual'); -- In this case, the op is COUNT\nSELECT public.cache_count('dbl'); -- Returns the COUNT for the table listed in the dbl\n\nThen, I create 4 or 5 functions (depends on the op I'm performing):\n\n1) A private function that _doesn't_ run as security definer, that\n populates the global.dba_aggregate_cache row if it's empty.\n2) A STABLE function for SELECTs, if the row doesn't exist, then it\n calls function #1 to populate its existence.\n3) A STABLE function for INSERTs, if the row doesn't exist, then it\n calls function #1 to populate its existence, then adds the\n necessary bits to make it accurate.\n4) A STABLE function for DELETEs, if the row doesn't exist, then it\n calls function #1 to populate its existence, then deletes the\n necessary bits to make it accurate.\n5) A STABLE function for UPDATEs, if the row doesn't exist, then it\n calls function #1 to populate its existence, then updates the\n necessary bits to make it accurate. It's not uncommon for me to\n not have an UPDATE function/trigger.\n\nCreate triggers for functions 2-5, and test away. It's MVCC,\nsearching through a table that's INDEX'ed for a single row is\nobviously vastly faster than a seqscan/aggregate. If I need any kind\nof an aggregate to be fast, I use this system with a derivation of the\nabove table. The problem with it being that I have to retrain others\nto use cache_count(), or some other function instead of using\nCOUNT(*).\n\nThat said, it'd be nice if there were a way to tell PostgreSQL to do\nthe above for you and teach COUNT(*), SUM(*), or other aggregates to\nuse an MVCC backed cache similar to the above. If people want their\nCOUNT's to be fast, then they have to live with the INSERT, UPDATE,\nDELETE cost. 
The above doesn't work with anything complex such as\njoin's, but it's certainly a start and I think satisfies everyone's\ngripes other than the tuple churn that _does_ happen (*nudge nudge*,\npg_autovacuum could be integrated into the backend to handle this).\nThose worried about performance, the pages that are constantly being\nrecycled would likely stay in disk cache (PG or the OS). There's\nstill some commit overhead, but still... no need to over optimize by\nrequiring the table to be stored in the out dated, slow, and over used\nshm (also, *nudge nudge*).\n\nAnyway, let me throw that out there as a solution that I use and it\nworks quite well. I didn't explain the use of the qual column, but I\nthink those who grasp the above way of handling things probably grok\nhow to use the qual column in a dynamically executed query.\n\nCREATE AGGREGATE CACHE anyone?\n\n-sc\n\n-- \nSean Chittenden\n",
"msg_date": "Mon, 6 Oct 2003 10:01:36 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) slow on large tables"
}
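To give a feel for the lookup half of Sean's scheme, his function #2 might look something like this (purely a guess at his implementation -- in the real version a cache miss would call the private function #1 to prime the row via dynamic SQL instead of raising an error):

CREATE FUNCTION public.cache_count(text) RETURNS bigint AS '
DECLARE
    v bigint;
BEGIN
    SELECT INTO v val_bigint
      FROM global.dba_aggregate_cache
     WHERE dbl = $1 AND op = ''COUNT'';
    IF NOT FOUND THEN
        RAISE EXCEPTION ''no cached COUNT for this dbl'';
    END IF;
    RETURN v;
END;
' LANGUAGE 'plpgsql' STABLE;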
] |
[
{
"msg_contents": "I was testing to get some idea of how to speed up the speed of pgbench \nwith IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n/dev/hdx).\n\nThe only parameter that seems to make a noticeable difference was setting \nwal_sync_method = open_sync. With it set to either fsync, or fdatasync, \nthe speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \nit jumped to the range of 45 to 52 tps. with write cache on I was getting \n280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \nabout 5 times slower, much better.\n\nNow I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \nand see if the data gets corrupted with write caching turned on, i.e. do \nmy hard drives have the ability to write at least some of their cache \nduring spin down.\n\n\n\n",
"msg_date": "Thu, 2 Oct 2003 13:34:16 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "further testing on IDE drives"
},
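For anyone repeating this test, a quick sanity check before each run is to confirm which settings the server is actually using (in this thread wal_sync_method is changed by editing postgresql.conf and restarting, so it is easy to benchmark the wrong configuration by mistake):

SHOW wal_sync_method;
SHOW fsync;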
{
"msg_contents": "On Thu, 2 Oct 2003, scott.marlowe wrote:\n\n> I was testing to get some idea of how to speed up the speed of pgbench \n> with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> /dev/hdx).\n> \n> The only parameter that seems to make a noticeable difference was setting \n> wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> about 5 times slower, much better.\n> \n> Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> and see if the data gets corrupted with write caching turned on, i.e. do \n> my hard drives have the ability to write at least some of their cache \n> during spin down.\n\nOK, back from testing.\n\nInformation: Dual PIV system with a pair of 80 gig IDE drives, model \nnumber: ST380023A (seagate). File system is ext3 and is on a seperate \ndrive from the OS.\n\nThese drives DO NOT write cache when they lose power. Testing was done by \nissuing a 'hdparm -W0/1 /dev/hdx' command where x is the real drive \nletter, and 0 or 1 was chosen in place of 0/1. Then I'd issue a 'pgbench \n-c 50 -t 100000000' command, wait for a few minutes, then pull the power \ncord.\n\nI'm running RH linux 9.0 stock install, kernel: 2.4.20-8smp.\n\nThree times pulling the plug with 'hdparm -W0 /dev/hdx' resulted in a \nmachine that would boot up, recover with journal, and a database that came \nup within about 30 seconds, with all the accounts still intact.\n\nSwitching the caching back on with 'hdparm -W1 /dev/hdx' and doing the \nsame 'pgbench -c 50 -t 100000000' resulted in a corrupted database each \ntime.\n\nAlso, I tried each of the following fsync methods: fsync, fdatasync, and\nopen_sync with write caching turned off. Each survived a power off test \nwith no corruption of the database. fsync and fdatasync result in 11 to \n17 tps with 'pgbench -c 5 -t 500' while open_sync resulted in 45 to 55 \ntps, as mentioned in the previous post.\n\nI'd be interested in hearing from other folks which sync method works \nfor them and whether or not there are any IDE drives out there that can \nwrite their cache to the platters on power off when caching is enabled.\n\n",
"msg_date": "Thu, 2 Oct 2003 17:35:39 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "scott.marlowe wrote:\n> I was testing to get some idea of how to speed up the speed of pgbench \n> with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> /dev/hdx).\n> \n> The only parameter that seems to make a noticeable difference was setting \n> wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> about 5 times slower, much better.\n> \n> Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> and see if the data gets corrupted with write caching turned on, i.e. do \n> my hard drives have the ability to write at least some of their cache \n> during spin down.\n\nIs this a reason we should switch to open_sync as a default, if it is\navailble, rather than fsync? I think we are doing a single write before\nfsync a lot more often than we are doing multiple writes before fsync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 20:16:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "\nHow did this drive come by default? Write-cache disabled?\n\n---------------------------------------------------------------------------\n\nscott.marlowe wrote:\n> On Thu, 2 Oct 2003, scott.marlowe wrote:\n> \n> > I was testing to get some idea of how to speed up the speed of pgbench \n> > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> > /dev/hdx).\n> > \n> > The only parameter that seems to make a noticeable difference was setting \n> > wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> > the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> > it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> > 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> > about 5 times slower, much better.\n> > \n> > Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> > and see if the data gets corrupted with write caching turned on, i.e. do \n> > my hard drives have the ability to write at least some of their cache \n> > during spin down.\n> \n> OK, back from testing.\n> \n> Information: Dual PIV system with a pair of 80 gig IDE drives, model \n> number: ST380023A (seagate). File system is ext3 and is on a seperate \n> drive from the OS.\n> \n> These drives DO NOT write cache when they lose power. Testing was done by \n> issuing a 'hdparm -W0/1 /dev/hdx' command where x is the real drive \n> letter, and 0 or 1 was chosen in place of 0/1. Then I'd issue a 'pgbench \n> -c 50 -t 100000000' command, wait for a few minutes, then pull the power \n> cord.\n> \n> I'm running RH linux 9.0 stock install, kernel: 2.4.20-8smp.\n> \n> Three times pulling the plug with 'hdparm -W0 /dev/hdx' resulted in a \n> machine that would boot up, recover with journal, and a database that came \n> up within about 30 seconds, with all the accounts still intact.\n> \n> Switching the caching back on with 'hdparm -W1 /dev/hdx' and doing the \n> same 'pgbench -c 50 -t 100000000' resulted in a corrupted database each \n> time.\n> \n> Also, I tried each of the following fsync methods: fsync, fdatasync, and\n> open_sync with write caching turned off. Each survived a power off test \n> with no corruption of the database. fsync and fdatasync result in 11 to \n> 17 tps with 'pgbench -c 5 -t 500' while open_sync resulted in 45 to 55 \n> tps, as mentioned in the previous post.\n> \n> I'd be interested in hearing from other folks which sync method works \n> for them and whether or not there are any IDE drives out there that can \n> write their cache to the platters on power off when caching is enabled.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 20:18:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "Nope, write-cache enabled by default.\n\nOn Thu, 9 Oct 2003, Bruce Momjian wrote:\n\n> \n> How did this drive come by default? Write-cache disabled?\n> \n> ---------------------------------------------------------------------------\n> \n> scott.marlowe wrote:\n> > On Thu, 2 Oct 2003, scott.marlowe wrote:\n> > \n> > > I was testing to get some idea of how to speed up the speed of pgbench \n> > > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> > > /dev/hdx).\n> > > \n> > > The only parameter that seems to make a noticeable difference was setting \n> > > wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> > > the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> > > it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> > > 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> > > about 5 times slower, much better.\n> > > \n> > > Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> > > and see if the data gets corrupted with write caching turned on, i.e. do \n> > > my hard drives have the ability to write at least some of their cache \n> > > during spin down.\n> > \n> > OK, back from testing.\n> > \n> > Information: Dual PIV system with a pair of 80 gig IDE drives, model \n> > number: ST380023A (seagate). File system is ext3 and is on a seperate \n> > drive from the OS.\n> > \n> > These drives DO NOT write cache when they lose power. Testing was done by \n> > issuing a 'hdparm -W0/1 /dev/hdx' command where x is the real drive \n> > letter, and 0 or 1 was chosen in place of 0/1. Then I'd issue a 'pgbench \n> > -c 50 -t 100000000' command, wait for a few minutes, then pull the power \n> > cord.\n> > \n> > I'm running RH linux 9.0 stock install, kernel: 2.4.20-8smp.\n> > \n> > Three times pulling the plug with 'hdparm -W0 /dev/hdx' resulted in a \n> > machine that would boot up, recover with journal, and a database that came \n> > up within about 30 seconds, with all the accounts still intact.\n> > \n> > Switching the caching back on with 'hdparm -W1 /dev/hdx' and doing the \n> > same 'pgbench -c 50 -t 100000000' resulted in a corrupted database each \n> > time.\n> > \n> > Also, I tried each of the following fsync methods: fsync, fdatasync, and\n> > open_sync with write caching turned off. Each survived a power off test \n> > with no corruption of the database. fsync and fdatasync result in 11 to \n> > 17 tps with 'pgbench -c 5 -t 500' while open_sync resulted in 45 to 55 \n> > tps, as mentioned in the previous post.\n> > \n> > I'd be interested in hearing from other folks which sync method works \n> > for them and whether or not there are any IDE drives out there that can \n> > write their cache to the platters on power off when caching is enabled.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > \n> \n> \n\n",
"msg_date": "Fri, 10 Oct 2003 09:26:24 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Bruce Momjian wrote:\n\n> scott.marlowe wrote:\n> > I was testing to get some idea of how to speed up the speed of pgbench \n> > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> > /dev/hdx).\n> > \n> > The only parameter that seems to make a noticeable difference was setting \n> > wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> > the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> > it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> > 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> > about 5 times slower, much better.\n> > \n> > Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> > and see if the data gets corrupted with write caching turned on, i.e. do \n> > my hard drives have the ability to write at least some of their cache \n> > during spin down.\n> \n> Is this a reason we should switch to open_sync as a default, if it is\n> availble, rather than fsync? I think we are doing a single write before\n> fsync a lot more often than we are doing multiple writes before fsync.\n\nSounds reasonable to me. Are there many / any scenarios where a plain \nfsync would be faster than open_sync?\n\n",
"msg_date": "Fri, 10 Oct 2003 09:27:19 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "scott.marlowe wrote:\n> On Thu, 9 Oct 2003, Bruce Momjian wrote:\n> \n> > scott.marlowe wrote:\n> > > I was testing to get some idea of how to speed up the speed of pgbench \n> > > with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0 \n> > > /dev/hdx).\n> > > \n> > > The only parameter that seems to make a noticeable difference was setting \n> > > wal_sync_method = open_sync. With it set to either fsync, or fdatasync, \n> > > the speed with pgbench -c 5 -t 1000 ran from 11 to 17 tps. With open_sync \n> > > it jumped to the range of 45 to 52 tps. with write cache on I was getting \n> > > 280 to 320 tps. so, not instead of being 20 to 30 times slower, I'm only \n> > > about 5 times slower, much better.\n> > > \n> > > Now I'm off to start a \"pgbench -c 10 -t 10000\" and pull the power cord \n> > > and see if the data gets corrupted with write caching turned on, i.e. do \n> > > my hard drives have the ability to write at least some of their cache \n> > > during spin down.\n> > \n> > Is this a reason we should switch to open_sync as a default, if it is\n> > availble, rather than fsync? I think we are doing a single write before\n> > fsync a lot more often than we are doing multiple writes before fsync.\n> \n> Sounds reasonable to me. Are there many / any scenarios where a plain \n> fsync would be faster than open_sync?\n\nYes. If you were doing multiple WAL writes before transaction fsync,\nyou would be fsyncing every write, rather than doing two writes and\nfsync'ing them both. I wonder if larger transactions would find\nopen_sync slower?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 10 Oct 2003 13:24:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "Bruce,\n\n> Yes. If you were doing multiple WAL writes before transaction fsync,\n> you would be fsyncing every write, rather than doing two writes and\n> fsync'ing them both. I wonder if larger transactions would find\n> open_sync slower?\n\nWant me to test? I've got an ide-based test machine here, and the TPCC \ndatabases.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 10 Oct 2003 10:44:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "On Fri, 10 Oct 2003, Josh Berkus wrote:\n\n> Bruce,\n> \n> > Yes. If you were doing multiple WAL writes before transaction fsync,\n> > you would be fsyncing every write, rather than doing two writes and\n> > fsync'ing them both. I wonder if larger transactions would find\n> > open_sync slower?\n> \n> Want me to test? I've got an ide-based test machine here, and the TPCC \n> databases.\n\nJust make sure the drive's write cache is disabled.\n\n",
"msg_date": "Fri, 10 Oct 2003 12:01:00 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bruce,\n> \n> > Yes. If you were doing multiple WAL writes before transaction fsync,\n> > you would be fsyncing every write, rather than doing two writes and\n> > fsync'ing them both. I wonder if larger transactions would find\n> > open_sync slower?\n> \n> Want me to test? I've got an ide-based test machine here, and the TPCC \n> databases.\n\nI would be interested to see if wal_sync_method = fsync is slower than\nwal_sync_method = open_sync. How often are we doing more then one write\nbefore a fsync anyway?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 10 Oct 2003 14:39:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "Bruce,\n\n> I would be interested to see if wal_sync_method = fsync is slower than\n> wal_sync_method = open_sync. How often are we doing more then one write\n> before a fsync anyway?\n\nOK. I'll see if I can get to it around my other stuff I have to do this \nweekend.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 10 Oct 2003 13:43:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\n>> Sounds reasonable to me. Are there many / any scenarios where a plain \n>> fsync would be faster than open_sync?\n\nBM> Yes. If you were doing multiple WAL writes before transaction fsync,\nBM> you would be fsyncing every write, rather than doing two writes and\nBM> fsync'ing them both. I wonder if larger transactions would find\nBM> open_sync slower?\n\nconsider loading a large database from a backup dump. one big\ntransaction during the COPY. I don't know the implications it has on\nthis scenario, though.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 10 Oct 2003 16:49:06 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "Vivek Khera wrote:\n> >>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n> \n> >> Sounds reasonable to me. Are there many / any scenarios where a plain \n> >> fsync would be faster than open_sync?\n> \n> BM> Yes. If you were doing multiple WAL writes before transaction fsync,\n> BM> you would be fsyncing every write, rather than doing two writes and\n> BM> fsync'ing them both. I wonder if larger transactions would find\n> BM> open_sync slower?\n> \n> consider loading a large database from a backup dump. one big\n> transaction during the COPY. I don't know the implications it has on\n> this scenario, though.\n\nCOPY only does fsync on COPY completion, so I am not sure there are\nenough fsync's there to make a difference.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 10 Oct 2003 17:22:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "On Fri, 10 Oct 2003, Josh Berkus wrote:\n\n> Bruce,\n> \n> > Yes. If you were doing multiple WAL writes before transaction fsync,\n> > you would be fsyncing every write, rather than doing two writes and\n> > fsync'ing them both. I wonder if larger transactions would find\n> > open_sync slower?\n> \n> Want me to test? I've got an ide-based test machine here, and the TPCC \n> databases.\n\nOK, I decided to do a quick dirty test of things that are big transactions \nin each mode my kernel supports. I did this:\n\ncreatedb dbname\ntime pg_dump -O -h otherserver dbname|psql dbname\n\nthen I would drop the db, edit postgresql.conf, and restart the server.\n\nopen_sync was WAY faster at this than the other two methods.\n\nopen_sync:\n\n1st run:\n\nreal 11m27.107s\nuser 0m26.570s\nsys 0m1.150s\n\n2nd run:\n\nreal 6m5.712s\nuser 0m26.700s\nsys 0m1.700s\n\nfsync:\n\n1st run:\n\nreal 15m8.127s\nuser 0m26.710s\nsys 0m0.990s\n\n2nd run:\n\nreal 15m8.396s\nuser 0m26.990s\nsys 0m1.870s\n\nfdatasync:\n\n1st run:\n\nreal 15m47.878s\nuser 0m26.570s\nsys 0m1.480s\n\n2nd run:\n\n\nreal 15m9.402s\nuser 0m27.000s\nsys 0m1.660s\n\nI did the first runs in order, then started over, i.e. opensync run1, \nfsync run1, fdatasync run1, opensync run2, etc...\n\nThe machine I was restoring to was under no other load. The machine I was \nreading from had little or no load, but is a production server, so it's \npossible the load there could have had a small effect, but probably not \nthis big of a one.\n\nThe machine this is one is setup so that the data partition is on a drive \nwith write cache enabled, but the pg_xlog and pg_clog directories are on a \ndrive with write cache disabled. Same drive models as listed before in my \nprevious test, Seagate generic 80gig IDE drives, model ST380023A.\n\n",
"msg_date": "Fri, 10 Oct 2003 16:52:59 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\nBM> COPY only does fsync on COPY completion, so I am not sure there are\nBM> enough fsync's there to make a difference.\n\n\nPerhaps then it is part of the indexing that takes so much time with\nthe WAL. When I applied Marc's WAL disabling patch, it shaved nearly\n50 minutes off of a 4-hour restore.\n\nI sent to Tom the logs from the restores since he was interested in\nfiguring out where the time was saved.\n",
"msg_date": "Mon, 13 Oct 2003 09:35:17 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> open_sync was WAY faster at this than the other two methods.\n\nDo you not have open_datasync? That's the preferred method if\navailable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Oct 2003 13:25:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives "
},
{
"msg_contents": "On Tue, 14 Oct 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > open_sync was WAY faster at this than the other two methods.\n> \n> Do you not have open_datasync? That's the preferred method if\n> available.\n\nNope, when I try to start postgresql with it set to that, I get this error \nmessage:\n\nFATAL: invalid value for \"wal_sync_method\": \"open_datasync\"\n\nThis is on RedHat 9, but I have the same problem on a RH 7.2 box as well.\n\n",
"msg_date": "Tue, 14 Oct 2003 11:29:40 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: further testing on IDE drives "
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Yes. If you were doing multiple WAL writes before transaction fsync,\n> you would be fsyncing every write, rather than doing two writes and\n> fsync'ing them both. I wonder if larger transactions would find\n> open_sync slower?\n\nNo hard numbers, but I remember testing fsync vs open_sync something ago \non 7.3.x.\n\nopen_sync was blazingly fast for pgbench, but for when we switched our \ndevelopment database over to open_sync, things slowed to a crawl.\n\nThis was some months ago, and I might be wrong, so take it with a grain \nof salt. It was on Red Hat 8's Linux kernel 2.4.18, I think. YMMV.\n\nWill be testing it real soon tonight, if possible.",
"msg_date": "Wed, 15 Oct 2003 18:25:19 +0800",
"msg_from": "Ang Chin Han <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: further testing on IDE drives"
},
{
"msg_contents": "I have updated my hardware performance documentation to reflect the\nfindings during the past few months on the performance list:\n\n\thttp://candle.pha.pa.us/main/writings/pgsql/hw_performance/index.html\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 14 Dec 2003 00:44:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Update performance doc"
}
] |
[
{
"msg_contents": "I was trying to get the pg_stats information to Josh and decided to\nrecreate the indexes on all my tables. After that I ran vacuum full\nanalyze, re-enabled nestloop and ran explain analyze on the query. It\nran in about 2 minutes.\nI attached the new query plan. I am not sure what did the trick, but 2\nminutes is much better than 2 hours. But then again, I can't take long\nlunches anymore :)\nIs there any way to make this query run even faster without increasing\nthe memory dedicated to postgres?\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]] \nSent: Thursday, October 02, 2003 10:29 AM\nTo: Oleg Lebedev\nCc: Josh Berkus; [email protected]\nSubject: RE: [PERFORM] TPC-R benchmarks\n\n\nHave you tried increasing the statistics target for those columns that\nare \ngetting bad estimates yet and then turning back on enable_nestloop and \nrerunning analyze and seeing how the query does? \n\nThe idea being to try and get a good enough estimate of your statistics\nso \nthe planner stops using nestloops on its own rather than forcing it to \nwith enable_nestloop = false.\n\nOn Thu, 2 Oct 2003, Oleg Lebedev wrote:\n\n> As Scott recommended, I did the following:\n> # set enable_nestloop = false;\n> # vacuum full analyze;\n> \n> After this I re-ran the query and its execution time went down from 2 \n> hours to 2 minutes. I attached the new query plan to this posting. Is \n> there any way to optimize it even further? What should I do to make \n> this query run fast without hurting the performance of the other \n> queries? Thanks.\n> \n> Oleg\n> \n> -----Original Message-----\n> From: scott.marlowe [mailto:[email protected]]\n> Sent: Wednesday, October 01, 2003 4:00 PM\n> To: Oleg Lebedev\n> Cc: Josh Berkus; [email protected]\n> Subject: Re: [PERFORM] TPC-R benchmarks\n> \n> \n> For troubleshooting, can you try it with \"set enable_nestloop = false\"\n\n> and rerun the query and see how long it takes?\n> \n> It looks like the estimates of rows returned is WAY off (estimate is \n> too\n> \n> low compared to what really comes back.)\n> \n> Also, you might try to alter the table.column to have a higher target \n> on\n> \n> the rows p_partkey and ps_partkey and any others where the estimate is\n\n> so far off of the reality.\n> \n> On Wed, 1 Oct 2003, Oleg Lebedev wrote:\n> \n> > All right, my query just finished running with EXPLAIN ANALYZE. I \n> > show\n> \n> > the plan below and also attached it as a file. 
Any ideas?\n> > \n> > -> Sort (cost=54597.49..54597.50 rows=1 width=121) (actual\n> > time=6674562.03..6674562.15 rows=175 loops=1)\n> > Sort Key: nation.n_name, date_part('year'::text,\n> > orders.o_orderdate)\n> > -> Aggregate (cost=54597.45..54597.48 rows=1 width=121) \n> > (actual time=6668919.41..6674522.48 rows=175 loops=1)\n> > -> Group (cost=54597.45..54597.47 rows=3 width=121)\n\n> > (actual time=6668872.68..6672136.96 rows=348760 loops=1)\n> > -> Sort (cost=54597.45..54597.46 rows=3\n> > width=121) (actual time=6668872.65..6669499.95 rows=348760 loops=1)\n> > Sort Key: nation.n_name, \n> > date_part('year'::text, orders.o_orderdate)\n> > -> Hash Join (cost=54596.00..54597.42 \n> > rows=3\n> > width=121) (actual time=6632768.89..6650192.67 rows=348760 loops=1)\n> > Hash Cond: (\"outer\".n_nationkey =\n> > \"inner\".s_nationkey)\n> > -> Seq Scan on nation \n> > (cost=0.00..1.25 rows=25 width=33) (actual time=6.75..7.13 rows=25\n> > loops=1)\n> > -> Hash (cost=54596.00..54596.00 \n> > rows=3\n> > width=88) (actual time=6632671.96..6632671.96 rows=0 loops=1)\n> > -> Nested Loop \n> > (cost=0.00..54596.00 rows=3 width=88) (actual\ntime=482.41..6630601.46 \n> > rows=348760 loops=1)\n> > Join Filter: \n> > (\"inner\".s_suppkey = \"outer\".l_suppkey)\n> > -> Nested Loop \n> > (cost=0.00..54586.18 rows=3 width=80) (actual\ntime=383.87..6594984.40 \n> > rows=348760 loops=1)\n> > -> Nested Loop \n> > (cost=0.00..54575.47 rows=4 width=68) (actual\ntime=199.95..3580882.07 \n> > rows=348760 loops=1)\n> > Join\nFilter: \n> > (\"outer\".p_partkey = \"inner\".ps_partkey)\n> > -> Nested \n> > Loop (cost=0.00..22753.33 rows=9343 width=49) (actual \n> > time=146.85..3541433.10 rows=348760 loops=1)\n> > ->\nSeq\n> \n> > Scan on part (cost=0.00..7868.00 rows=320 width=4) (actual\n> > time=33.64..15651.90 rows=11637 loops=1)\n> > \n> > Filter: (p_name ~~ '%green%'::text)\n> > ->\n> > Index Scan using i_l_partkey on lineitem (cost=0.00..46.15 rows=29 \n> > width=45) (actual time=10.71..302.67 rows=30 loops=11637)\n> > \n> > Index\n> > Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n> > -> Index\n> > Scan using pk_partsupp on partsupp (cost=0.00..3.39 rows=1\nwidth=19) \n> > (actual time=0.09..0.09 rows=1 loops=348760)\n> > Index\n> > Cond: ((partsupp.ps_partkey = \"outer\".l_partkey) AND \n> > (partsupp.ps_suppkey =\n> > \"outer\".l_suppkey))\n> > -> Index Scan \n> > using pk_orders on orders (cost=0.00..3.01 rows=1 width=12) (actual\n\n> > time=8.62..8.62 rows=1 loops=348760)\n> > Index Cond:\n\n> > (orders.o_orderkey = \"outer\".l_orderkey)\n> > -> Index Scan using \n> > pk_supplier on supplier (cost=0.00..3.01 rows=1 width=8) (actual \n> > time=0.08..0.08 rows=1 loops=348760)\n> > Index Cond: \n> > (\"outer\".ps_suppkey = supplier.s_suppkey) Total runtime: 6674724.23\n\n> > msec (28 rows)\n> > \n> > \n> > -----Original Message-----\n> > From: Oleg Lebedev\n> > Sent: Wednesday, October 01, 2003 12:00 PM\n> > To: Josh Berkus; scott.marlowe\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] TPC-R benchmarks\n> > Importance: Low\n> > \n> > \n> > Sure, below is the query. 
I attached the plan to this posting.\n> > \n> > select\n> > \tnation,\n> > \to_year,\n> > \tsum(amount) as sum_profit\n> > from\n> > \t(\n> > \t\tselect\n> > \t\t\tn_name as nation,\n> > \t\t\textract(year from o_orderdate) as o_year,\n> > \t\t\tl_extendedprice * (1 - l_discount) -\n> > ps_supplycost * l_quantity as amount\n> > \t\tfrom\n> > \t\t\tpart,\n> > \t\t\tsupplier,\n> > \t\t\tlineitem,\n> > \t\t\tpartsupp,\n> > \t\t\torders,\n> > \t\t\tnation\n> > \t\twhere\n> > \t\t\ts_suppkey = l_suppkey\n> > \t\t\tand ps_suppkey = l_suppkey\n> > \t\t\tand ps_partkey = l_partkey\n> > \t\t\tand p_partkey = l_partkey\n> > \t\t\tand o_orderkey = l_orderkey\n> > \t\t\tand s_nationkey = n_nationkey\n> > \t\t\tand p_name like '%green%'\n> > \t) as profit\n> > group by\n> > \tnation,\n> > \to_year\n> > order by\n> > \tnation,\n> > \to_year desc;\n> > \n> > \n> > -----Original Message-----\n> > From: Josh Berkus [mailto:[email protected]]\n> > Sent: Wednesday, October 01, 2003 11:42 AM\n> > To: Oleg Lebedev; scott.marlowe\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] TPC-R benchmarks\n> > \n> > \n> > Oleg,\n> > \n> > > The output of the query should contain about 200 rows. So, I guess\n> > > the\n> > \n> > > planer is off assuming that the query should return 1 row.\n> > \n> > Oh, also did you post the query before? Can you re-post it with\nthe\n> > planner\n> > results?\n> > \n> > \n> \n> *************************************\n> \n> This e-mail may contain privileged or confidential material intended \n> for the named recipient only. If you are not the named recipient, \n> delete this message and all attachments. Unauthorized reviewing, \n> copying, printing, disclosing, or otherwise using information in this \n> e-mail is prohibited. We reserve the right to monitor e-mail sent \n> through our network.\n> \n> *************************************\n> \n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************",
"msg_date": "Thu, 2 Oct 2003 13:39:55 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "On Thu, 2 Oct 2003, Oleg Lebedev wrote:\n\n> I was trying to get the pg_stats information to Josh and decided to\n> recreate the indexes on all my tables. After that I ran vacuum full\n> analyze, re-enabled nestloop and ran explain analyze on the query. It\n> ran in about 2 minutes.\n> I attached the new query plan. I am not sure what did the trick, but 2\n> minutes is much better than 2 hours. But then again, I can't take long\n> lunches anymore :)\n> Is there any way to make this query run even faster without increasing\n> the memory dedicated to postgres?\n> Thanks.\n\nAs long as the estimated row counts and real ones match up, and postgresql \nseems to be picking the right plan, there's probably not a lot to be done. \nYou might want to look at increasing sort_mem a bit, but don't go crazy, \nas being too high can result in swap storms under load, which are a very \nbad thing.\n\nI'd check for index growth. You may have been reloading your data over \nand over and had an index growth problem. Next time instead of recreating \nthe indexed completely, you might wanna try reindex indexname.\n\nAlso, 7.4 mostly fixes the index growth issue, especially as it applies to \ntruncating/reloading a table over and over, so moving to 7.4 beta3/4 and \ntesting might be a good idea (if you aren't there already).\n\nWhat you want to avoid is having postgresql switch back to that nestloop \njoin on you in the middle of the day, and to prevent that you might need \nto have higher statistics targets so the planner gets the right number \nall the time.\n\n",
"msg_date": "Thu, 2 Oct 2003 13:44:12 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
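A sketch of the two maintenance steps suggested above, using column and index names taken from the plan earlier in this thread. The target value of 100 is only an example (the 7.3 default is 10); the right value has to be confirmed by re-running ANALYZE and EXPLAIN ANALYZE:

ALTER TABLE part ALTER COLUMN p_partkey SET STATISTICS 100;
ALTER TABLE partsupp ALTER COLUMN ps_partkey SET STATISTICS 100;
ANALYZE part;
ANALYZE partsupp;

-- Rebuild a bloated index in place instead of dropping and recreating it.
REINDEX INDEX i_l_partkey;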
] |
[
{
"msg_contents": "Thanks everyone for the help.\n\nI have another question. How do I optimize my indexes for the query that\ncontains a lot of ORed blocks, each of which contains a bunch of ANDed\nexpressions? The structure of each ORed block is the same except the\nright-hand-side values vary. \nThe first expression of each AND-block is a join condition. However,\npostgres tries to use a sequential scan on both of the tables applying\nthe OR-ed blocks of ANDed expressions. So, the cost of the plan is\naround 700,000,000,000. \n\nHere is an example:\nselect\n\tsum(l_extendedprice* (1 - l_discount)) as revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#24'\n\t\tand p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM\nPKG')\n\t\tand l_quantity >= 4 and l_quantity <= 4 + 10\n\t\tand p_size between 1 and 5\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t)\n\tor\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#22'\n\t\tand p_container in ('MED BAG', 'MED BOX', 'MED PKG',\n'MED PACK')\n\t\tand l_quantity >= 18 and l_quantity <= 18 + 10\n\t\tand p_size between 1 and 10\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t)\n\tor\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#33'\n\t\tand p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG\nPKG')\n\t\tand l_quantity >= 24 and l_quantity <= 24 + 10\n\t\tand p_size between 1 and 15\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t);\n\n-----Original Message-----\nFrom: scott.marlowe [mailto:[email protected]] \nSent: Thursday, October 02, 2003 1:44 PM\nTo: Oleg Lebedev\nCc: Josh Berkus; [email protected]\nSubject: RE: [PERFORM] TPC-R benchmarks\n\n\nOn Thu, 2 Oct 2003, Oleg Lebedev wrote:\n\n> I was trying to get the pg_stats information to Josh and decided to \n> recreate the indexes on all my tables. After that I ran vacuum full \n> analyze, re-enabled nestloop and ran explain analyze on the query. It \n> ran in about 2 minutes. I attached the new query plan. I am not sure \n> what did the trick, but 2 minutes is much better than 2 hours. But \n> then again, I can't take long lunches anymore :)\n> Is there any way to make this query run even faster without increasing\n> the memory dedicated to postgres?\n> Thanks.\n\nAs long as the estimated row counts and real ones match up, and\npostgresql \nseems to be picking the right plan, there's probably not a lot to be\ndone. \nYou might want to look at increasing sort_mem a bit, but don't go crazy,\n\nas being too high can result in swap storms under load, which are a very\n\nbad thing.\n\nI'd check for index growth. You may have been reloading your data over \nand over and had an index growth problem. 
Next time instead of\nrecreating \nthe indexed completely, you might wanna try reindex indexname.\n\nAlso, 7.4 mostly fixes the index growth issue, especially as it applies\nto \ntruncating/reloading a table over and over, so moving to 7.4 beta3/4 and\n\ntesting might be a good idea (if you aren't there already).\n\nWhat you want to avoid is having postgresql switch back to that nestloop\n\njoin on you in the middle of the day, and to prevent that you might need\n\nto have higher statistics targets so the planner gets the right number \nall the time.\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Thu, 2 Oct 2003 16:27:29 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> I have another question. How do I optimize my indexes for the query that\n> contains a lot of ORed blocks, each of which contains a bunch of ANDed\n> expressions? The structure of each ORed block is the same except the\n> right-hand-side values vary.\n\nGiven the example, I'd do a multicolumn index on p_brand, p_container, p_size \nand a second multicolumn index on l_partkey, l_quantity, l_shipmode. Hmmm \n... or maybe seperate indexes, one on l_partkey and one on l_quantity, \nl_shipmode & l_instruct. Test both configurations.\n\nMind you, if this is also an OLTP table, then you'd want to test those \nmulti-column indexes to determine the least columns you need for the indexes \nstill to be used, since more columns = more index maintainence.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 2 Oct 2003 22:27:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
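Written out, the two layouts to compare might look like this; the index names are made up, and l_shipinstruct stands in for the l_instruct shorthand above. Time the original query with EXPLAIN ANALYZE under each layout:

-- Layout 1: one wide index per table.
CREATE INDEX part_brand_cont_size ON part (p_brand, p_container, p_size);
CREATE INDEX li_partkey_qty_ship ON lineitem (l_partkey, l_quantity, l_shipmode);

-- Layout 2: split the lineitem side into separate indexes.
-- CREATE INDEX li_partkey ON lineitem (l_partkey);
-- CREATE INDEX li_qty_ship_instr ON lineitem (l_quantity, l_shipmode, l_shipinstruct);

ANALYZE part;
ANALYZE lineitem;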
] |
[
{
"msg_contents": "Hi!\n\nIt's just my curiosity. I wonder if there is any way to break my speed\nlimit on AMD 450Mhz:\n\nBest Regards,\nCN\n-------------------\n--This table contains 1036 rows.\nCREATE TABLE table1 (\nc1 VARCHAR(20) PRIMARY KEY,\nc2 \"char\"\n)WITHOUT OIDS;\n---------------------\n--This table contains 9429 rows.\nCREATE TABLE table2 (\nc1 VARCHAR(20) PRIMARY KEY,\nc2 DATE,\nc3 INTEGER,\nc4 INTEGER\n)WITHOUT OIDS;\nCREATE INDEX i2c3c4 ON table2 (c3,c4);\n---------------------\n--This table contains 28482 rows.\nCREATE TABLE table3 (\nCONSTRAINT fk3c1 FOREIGN KEY (c1) REFERENCES table2 (c1) ON UPDATE\nCASCADE ON DELETE CASCADE,\nCONSTRAINT fk3c3 FOREIGN KEY (c3) REFERENCES table1 (c1),\nPRIMARY KEY (c1,c2),\nc1 VARCHAR(20),\nc2 INTEGER,\nc3 VARCHAR(20),\nc4 \"char\",\nc5 INTEGER\n)WITHOUT OIDS;\n---------------------\nEXPLAIN ANALYZE\nSELECT\n table2.c3 AS year\n ,table2.c4 AS month\n ,(SELECT CASE\n WHEN (table1.c2 = 'A' OR table1.c2 = 'E') AND table3.c4 = 'D'\n OR table1.c2 IN ('L','O','I') AND table3.c4 = 'C'\n THEN table3.c5 ELSE 0-table3.c5 END\n FROM table1\n WHERE table1.c1=table3.c3\n ) AS amount\nFROM table2,table3\nWHERE table3.c1=table2.c1\n AND table2.c3 > 2001;\n\n Hash Join (cost=189.79..1508.67 rows=11203 width=48) (actual\n time=129.20..1780.53 rows=9912 loops=1)\n Hash Cond: (\"outer\".c1 = \"inner\".c1)\n -> Seq Scan on table3 (cost=0.00..822.82 rows=28482 width=27)\n (actual time=14.01..403.78 rows=28482 \nloops=1)\n -> Hash (cost=180.69..180.69 rows=3640 width=21) (actual\n time=85.61..85.61 rows=0 loops=1)\n -> Seq Scan on table2 (cost=0.00..180.69 rows=3640 width=21)\n (actual time=0.28..64.62 rows=3599 \nloops=1)\n Filter: (c3 > 2001)\n SubPlan\n -> Index Scan using table1_pkey on table1 (cost=0.00..3.01 rows=1\n width=1) (actual time=0.06..0.06 \nrows=1 loops=9912)\n Index Cond: (c1 = $2)\n Total runtime: 1802.71 msec\n-------------------\nEXPLAIN ANALYZE\nSELECT\n table2.c3 AS year\n ,table2.c4 AS month\n ,CASE\n WHEN (table1.c2 = 'A' OR table1.c2 = 'E') AND table3.c4 = 'D'\n OR table1.c2 IN ('L','O','I') AND table3.c4 = 'C'\n THEN table3.c5 ELSE 0-table3.c5 END\n AS amount\nFROM table2,table3,table1\nWHERE table3.c1=table2.c1\n AND table1.c1=table3.c3\nAND table2.c3 > 2001;\n\n Hash Join (cost=208.74..1751.68 rows=11203 width=58) (actual\n time=135.87..1113.69 rows=9912 loops=1)\n Hash Cond: (\"outer\".c3 = \"inner\".c1)\n -> Hash Join (cost=189.79..1508.67 rows=11203 width=48) (actual\n time=123.81..899.29 rows=9912 loops=1)\n Hash Cond: (\"outer\".c1 = \"inner\".c1)\n -> Seq Scan on table3 (cost=0.00..822.82 rows=28482 width=27)\n (actual time=9.30..371.10 rows=28482 \nloops=1)\n -> Hash (cost=180.69..180.69 rows=3640 width=21) (actual\n time=85.62..85.62 rows=0 loops=1)\n -> Seq Scan on table2 (cost=0.00..180.69 rows=3640\n width=21) (actual time=0.31..64.33 \nrows=3599 loops=1)\n Filter: (c3 > 2001)\n -> Hash (cost=16.36..16.36 rows=1036 width=10) (actual\n time=11.91..11.91 rows=0 loops=1)\n -> Seq Scan on table1 (cost=0.00..16.36 rows=1036 width=10)\n (actual time=0.05..7.16 rows=1036 \nloops=1)\n Total runtime: 1133.95 msec\n\n-- \nhttp://www.fastmail.fm - Does exactly what it says on the tin\n",
"msg_date": "Thu, 02 Oct 2003 18:14:07 -0800",
"msg_from": "\"CN\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is This My Speed Limit?"
},
{
"msg_contents": "> Hi!\n> \n> It's just my curiosity. I wonder if there is any way to break my speed\n> limit on AMD 450Mhz:\n\n> Hash Join (cost=189.79..1508.67 rows=11203 width=48) (actual\n> time=129.20..1780.53 rows=9912 loops=1)\n\n> Hash Join (cost=208.74..1751.68 rows=11203 width=58) (actual\n> time=135.87..1113.69 rows=9912 loops=1)\n\nWell, it looks like a speed limit. I wouldn't expect better speed for\nqueries returning 10000 rows.\n\nRegards,\nTomasz Myrta\n\n",
"msg_date": "Fri, 03 Oct 2003 07:59:25 +0200",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is This My Speed Limit?"
},
{
"msg_contents": "On Thu, 2 Oct 2003, CN wrote:\n\n> Hi!\n> \n> It's just my curiosity. I wonder if there is any way to break my speed\n> limit on AMD 450Mhz:\n\nYou're most likely I/O bound, not CPU bound here. So, if you want better \nspeed, you'll likely need a better storage subsystem.\n\n",
"msg_date": "Fri, 3 Oct 2003 08:47:43 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is This My Speed Limit?"
}
] |
[
{
"msg_contents": "\nI can tell you that this is one of the first thing applications' programmers and IT managers notice. It can slightly tarnish postgres' image when it takes it many long seconds to do what other databases can do in a snap. The \"whys and wherefores\" can be hard to get across once they see the comparative numbers.\n\nWhen I use Informix \"dbaccess\" it has a \"status\" which will tell me the row count of a table virtually instantly -- it can be locked out by a user with an exclusive lock so its not entirely independant of the table (like a stored value in one of the system catalog tables).\n\nThis is not to say Informix is \"right\" and Postgres is \"wrong\" ... but it is something that virtually any newcomer will run into head long, with resulting bruises and contusions, not to mention confusion.\n\nAt the very least this needs to be VERY clearly explained right up front, along with some of the possible work-arounds, depending on what one is really after with this info.\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\tDror Matalon [mailto:[email protected]]\nSent:\tThu 10/2/2003 9:27 PM\nTo:\[email protected]\nCc:\t\nSubject:\tRe: [PERFORM] count(*) slow on large tables\n\n\nI smell a religious war in the aii:-). \nCan you go several days in a row without doing select count(*) on any\nof your tables? \n\nI suspect that this is somewhat a domain specific issue. In some areas\nyou don't need to know the total number of rows in your tables, in\nothers you do. \n\nI also suspect that you're right, that end user applications don't use\nthis information as often as DBAs would. On the other hand, it seems\nwhenever you want to optimize your app (something relevant to this list),\none of the things you do need to know is the number of rows in your\ntable.\n\nDror\n\nOn Thu, Oct 02, 2003 at 10:08:18PM -0400, Christopher Browne wrote:\n> The world rejoiced as [email protected] (Dror Matalon) wrote:\n> > I don't have an opinion on how hard it would be to implement the\n> > tracking in the indexes, but \"select count(*) from some table\" is, in my\n> > experience, a query that people tend to run quite often. \n> > One of the databases that I've used, I believe it was Informix, had that\n> > info cached so that it always new how many rows there were in any\n> > table. It was quite useful.\n> \n> I can't imagine why the raw number of tuples in a relation would be\n> expected to necessarily be terribly useful.\n> \n> I'm involved with managing Internet domains, and it's only when people\n> are being pretty clueless that anyone imagines that \"select count(*)\n> from domains;\" would be of any use to anyone. There are enough \"test\n> domains\" and \"inactive domains\" and other such things that the raw\n> number of \"things in the table\" aren't really of much use.\n> \n> - I _do_ care how many pages a table occupies, to some extent, as that\n> determines whether it will fit in my disk space or not, but that's not\n> COUNT(*).\n> \n> - I might care about auditing the exact numbers of records in order to\n> be assured that a data conversion process was done correctly. But in\n> that case, I want to do something a whole *lot* more detailed than\n> mere COUNT(*).\n> \n> I'm playing \"devil's advocate\" here, to some extent. 
But\n> realistically, there is good reason to be skeptical of the merits of\n> using SELECT COUNT(*) FROM TABLE for much of anything.\n> \n> Furthermore, the relation that you query mightn't be a physical\n> \"table.\" It might be a more virtual VIEW, and if that's the case,\n> bets are even MORE off. If you go with the common dictum of \"good\n> design\" that users don't directly access tables, but go through VIEWs,\n> users may have no way to get at SELECT COUNT(*) FROM TABLE.\n> -- \n> output = reverse(\"ac.notelrac.teneerf\" \"@\" \"454aa\")\n> http://www.ntlug.org/~cbbrowne/finances.html\n> Rules of the Evil Overlord #74. \"When I create a multimedia\n> presentation of my plan designed so that my five-year-old advisor can\n> easily understand the details, I will not label the disk \"Project\n> Overlord\" and leave it lying on top of my desk.\"\n> <http://www.eviloverlord.com/>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon, President\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Thu, 2 Oct 2003 23:22:46 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) slow on large tables"
}
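One of the usual work-arounds, when an exact figure is not required, is to read the planner's estimate out of the system catalog instead of counting. It is only as fresh as the last VACUUM or ANALYZE on the table, and 'mytable' below is a placeholder name:

SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';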
] |
[
{
"msg_contents": "12:28p\nDear All,\nThis question is regarding the performance of queries in general.\nThe performance of the queries wud varying depending on the no. Of tuples it is returning, and the sort of alogorithm that will be implemented or the retrieval. \nNow if the relation returns zero tuples.. (the seq, and the index scan is the best option) and if there are 1 or more then rest PG-supported scans will be the best. \nNow here is where I am having a bit of considerations. My relation works fast, when it returns more than on tuple. But get's slow when it returns zero tuple.\nNow how shud I got abt it.\n\n-----\nWarm Regards\nShÿam Peri\n\nII Floor, Punja Building,\nM.G.Road,\nBallalbagh,\nMangalore-575003 \nPh : 91-824-2451001/5\nFax : 91-824-2451050 \n\n\nDISCLAIMER: This message contains privileged and confidential information and is\nintended only for the individual named.If you are not the intended\nrecipient you should not disseminate,distribute,store,print, copy or\ndeliver this message.Please notify the sender immediately by e-mail if\nyou have received this e-mail by mistake and delete this e-mail from\nyour system.\n12:28pDear All,\nThis question is regarding the performance of queries in general.\nThe performance of the queries wud varying depending on the no. Of tuples it is returning, and the sort of alogorithm that will be implemented or the retrieval. Now if the relation returns zero tuples.. (the seq, and the index scan is the best option) and if there are 1 or more then rest PG-supported scans will be the best. \nNow here is where I am having a bit of considerations. My relation works fast, when it returns more than on tuple. But get's slow when it returns zero tuple.\nNow how shud I got abt it.-----\nWarm Regards\nSh�am Peri\n\nII Floor, Punja Building,\nM.G.Road,\nBallalbagh,\nMangalore-575003 \nPh : 91-824-2451001/5\nFax : 91-824-2451050 \n\n\n\n\nDISCLAIMER: This message contains privileged and confidential information and is\nintended only for the individual named.If you are not the intended\nrecipient you should not disseminate,distribute,store,print, copy or\ndeliver this message.Please notify the sender immediately by e-mail if\nyou have received this e-mail by mistake and delete this e-mail from\nyour system.",
"msg_date": "Fri, 3 Oct 2003 12:04:38 +0530 (IST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "A Basic Question"
},
{
"msg_contents": "On Friday 03 October 2003 07:34, [email protected] wrote:\n> 12:28p\n> Dear All,\n> This question is regarding the performance of queries in general.\n> The performance of the queries wud varying depending on the no. Of tuples\n> it is returning, and the sort of alogorithm that will be implemented or the\n> retrieval. Now if the relation returns zero tuples.. (the seq, and the\n> index scan is the best option) and if there are 1 or more then rest\n> PG-supported scans will be the best. Now here is where I am having a bit of\n> considerations. My relation works fast, when it returns more than on tuple.\n> But get's slow when it returns zero tuple. Now how shud I got abt it.\n\nIf PG has to examine a lot of tuples to rule them out, then returning no rows \ncan take longer.\n\nIf you post EXPLAIN ANALYSE output for both queries, someone will be able to \nexplain why in your case.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 3 Oct 2003 11:10:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A Basic Question"
}
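A minimal sketch of what to capture for such a report, with placeholder table and column names: run it once for a value that matches rows and once for a value that matches none, and post both outputs.

EXPLAIN ANALYZE SELECT * FROM mytable WHERE somecol = 'value_that_matches';
EXPLAIN ANALYZE SELECT * FROM mytable WHERE somecol = 'value_with_no_match';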
] |
[
{
"msg_contents": "Hi everyone,\n\nI've been trying to find out if some guidelines\nexist, somewhere, describing how postgres\ncan possibly run on less than 8MB of RAM.\n(Disk space not an issue).\n\nThe closest thread I could find in the list \narchives is :\nhttp://archives.postgresql.org/pgsql-general/2002-06/msg01343.php\n\nIs it possible to have a stripped-down version of \npostgres that will use an absolute minimal amount\nof memory? \n\nMaybe by switching off some features/options\nat compile time, and/or configuration tweaks?\n(Or anything else)\n\nThis will be on very low end i386 architecture.\nPerformance penalties are expected and\nwill be accepted. I will need the\nfunctionality of >= 7.3.4 , at least.\n\nAny help will be much appreciated.\n\nRegards\nStef",
"msg_date": "Fri, 3 Oct 2003 16:30:40 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres low end processing."
},
{
"msg_contents": "Stef <[email protected]> writes:\n> I've been trying to find out if some guidelines\n> exist, somewhere, describing how postgres\n> can possibly run on less than 8MB of RAM.\n\nAre you sure you want Postgres, and not something smaller? BDB,\nor SQL Lite, for example?\n\n\"Postgres is bloatware by design: it was built to house PhD theses.\"\n-- J. Hellerstein (who ought to know)\n\nBut having said that ... given virtual memory and cramped configuration\nsettings, Postgres would certainly run in an 8M machine. Maybe \"crawl\"\nwould be a more applicable verb than \"run\", but you could execute it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Oct 2003 11:42:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing. "
},
{
"msg_contents": "On Fri, 03 Oct 2003 11:42:54 -0400\nTom Lane <[email protected]> wrote:\n\n=> Are you sure you want Postgres, and not something smaller? BDB,\n=> or SQL Lite, for example?\nI have considered various options, including BDB and SQL Lite, but\nalas, it will have to be postgres if it's going to be a database. Otherwise\nit will be back to the original idea of flat .idx files :(\n \n=> \"Postgres is bloatware by design: it was built to house PhD theses.\"\n=> -- J. Hellerstein (who ought to know)\n :o) Believe me, I've been amazed since I encountered postgres v6.3.2\nin '98\n\n=> But having said that ... given virtual memory and cramped configuration\n=> settings, Postgres would certainly run in an 8M machine. Maybe \"crawl\"\n=> would be a more applicable verb than \"run\", but you could execute it.\n\nCrawling is ok. Won't differ much from normal operation on a machine like that.\nAny tips on how to achieve the most diminutive vmem an conf settings?\nI tried to figure this out from the docs, and played around with \nbackend/storage , but I'm not really winning.\n\nRegards\nStef",
"msg_date": "Fri, 3 Oct 2003 18:10:02 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "Stef <[email protected]> writes:\n> Crawling is ok. Won't differ much from normal operation on a machine\n> like that. Any tips on how to achieve the most diminutive vmem an\n> conf settings?\n\nThe out-of-the-box settings are already pretty diminutive on current\nreleases :-(. In 7.4 you'd likely want to knock back shared_buffers\nand max_connections, and maybe the fsm settings if the database is going\nto be tiny.\n\n> I tried to figure this out from the docs, and played\n> around with backend/storage , but I'm not really winning.\n\nWhat exactly is failing? And what's the platform, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Oct 2003 12:32:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing. "
},
{
"msg_contents": "On Fri, 03 Oct 2003 12:32:00 -0400\nTom Lane <[email protected]> wrote:\n\n=> What exactly is failing? And what's the platform, anyway?\n\nNothing is really failing atm, except the funds for better \nhardware. JBOSS and some other servers need to be \nrun on these machines, along with linux, which will be \na minimal RH >= 7.2 with kernel 2.4.21\n(Any better suggestions here?)\n\nIn this case, whatever is the least amount of memory\npostgres can run on, is what is needed. So this is still\na kind of feasibility study. Of course, it will still be thoroughly\ntested, if it turns out to be possible. (Which I know it is, but not how)\n\nRegards\nStef",
"msg_date": "Fri, 3 Oct 2003 19:52:38 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "Stef,\n\n> I've been trying to find out if some guidelines\n> exist, somewhere, describing how postgres\n> can possibly run on less than 8MB of RAM.\n> (Disk space not an issue).\n\nI can tell you from experience that you will get some odd behaviour, and even \nconnection failures, when Postgres is forced into swap by lack of memory. \nAlso, you will run into trouble with the default Linux kernel 2.4 VM manager, \nwhich allows applications to overcommit memory; either hack your own memory \nmanager, or designate swap space >= 200% of RAM.\n\nAlso, program your application to expect, and recover from, PostgreSQL \nfailures and connection failures.\n\n> Is it possible to have a stripped-down version of\n> postgres that will use an absolute minimal amount\n> of memory?\n\nI don;t know that there is anything you can remove safely from the postgres \ncore that would save you memory.\n\n> Maybe by switching off some features/options\n> at compile time, and/or configuration tweaks?\n> (Or anything else)\n\nYou're in luck; the default postgresql.conf file for 7.3 is actually cofigured \nfor a low-memory, slow machine setup (which most other people bitch about). \nHere's a few other things you can do:\n\n1. Make sure that the WAL files (pg_xlog) are on a seperate disk from the \ndatabase files, either through mounting or symlinking.\n\n2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very \nfrequently, like every 1-5 minutes. Spend some time tuning your \nfsm_max_pages to the ideal level so that you're not allocating any extra \nmemory to the FSM.\n\n3. If your concern is *average* CPU/RAM consumption, and not peak load \nactivity, increase wal_files and checkpoint_segments to do more efficient \nbatch processing of pending updates as the cost of some disk space. If peak \nload activity is a problem, don't do this.\n\n4. Tune all of your queries carefully to avoid anything requiring a \nRAM-intensive merge join or CPU-eating calculated expression hash join, or \nsimilar computation-or-RAM-intensive operations.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 3 Oct 2003 11:08:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Fri, 2003-10-03 at 12:52, Stef wrote:\n> On Fri, 03 Oct 2003 12:32:00 -0400\n> Tom Lane <[email protected]> wrote:\n> \n> => What exactly is failing? And what's the platform, anyway?\n> \n> Nothing is really failing atm, except the funds for better \n> hardware. JBOSS and some other servers need to be \n> run on these machines, along with linux, which will be \n> a minimal RH >= 7.2 with kernel 2.4.21\n> (Any better suggestions here?)\n> \n> In this case, whatever is the least amount of memory\n> postgres can run on, is what is needed. So this is still\n> a kind of feasibility study. Of course, it will still be thoroughly\n> tested, if it turns out to be possible. (Which I know it is, but not how)\n\nJBOSS, PostgreSQL & 2.4.21 all on a computer w/ 8MB RAM? A 486 or\n*very* low end Pentium?\n\nIt'll thrash (in the literal sense) the page files. *No* work \nwill get done.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"...always eager to extend a friendly claw\"\n\n",
"msg_date": "Fri, 03 Oct 2003 13:34:00 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Fri, 2003-10-03 at 14:08, Josh Berkus wrote:\n> I can tell you from experience that you will get some odd behaviour, and even \n> connection failures, when Postgres is forced into swap by lack of memory.\n\nWhy would you get a connection failure? And other than poor performance,\nwhy would you get \"odd behavior\" due to a lack of physical memory?\n\n-Neil\n\n\n",
"msg_date": "Fri, 03 Oct 2003 15:04:52 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Fri, 3 Oct 2003, Ron Johnson wrote:\n\n> On Fri, 2003-10-03 at 12:52, Stef wrote:\n> > On Fri, 03 Oct 2003 12:32:00 -0400\n> > Tom Lane <[email protected]> wrote:\n> > \n> > => What exactly is failing? And what's the platform, anyway?\n> > \n> > Nothing is really failing atm, except the funds for better \n> > hardware. JBOSS and some other servers need to be \n> > run on these machines, along with linux, which will be \n> > a minimal RH >= 7.2 with kernel 2.4.21\n> > (Any better suggestions here?)\n> > \n> > In this case, whatever is the least amount of memory\n> > postgres can run on, is what is needed. So this is still\n> > a kind of feasibility study. Of course, it will still be thoroughly\n> > tested, if it turns out to be possible. (Which I know it is, but not how)\n> \n> JBOSS, PostgreSQL & 2.4.21 all on a computer w/ 8MB RAM? A 486 or\n> *very* low end Pentium?\n> \n> It'll thrash (in the literal sense) the page files. *No* work \n> will get done.\n\nI built a test server four years ago on a P100 with 64 Megs of RAM and it \nwas already a pretty slow / old box at that time.\n\nConsidering that those kind of beasts sell by the pound nowadays, I can't \nimagine torturing yourself by using a 486 with 8 megs of ram. Even my \nancient 486DX50 Toshiba 4700 has 16 Megs of ram in it.\n\nIF ons has to develop in such a low end environment you're much better \noff either writing perl CGI or using PHP, which both use much less memory \nthan JBoss.\n\nI don't think I'd try to run JBoss / Postgresql on anything less than 64 \nor 128 Meg of RAM. Even then you're probably looking at having a fair bit \nof swapping going on.\n\n",
"msg_date": "Fri, 3 Oct 2003 13:13:51 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Fri, 03 Oct 2003 11:42:54 -0400 Tom Lane <[email protected]> wrote:\n> \"Postgres is bloatware by design: it was built to house PhD theses.\"\n> -- J. Hellerstein (who ought to know)\n\nif postgres is bloatware, what is oracle 9i?\n\n(after i downloaded a copy of oracle 8i a couple of months back, i swore i'd\nnever complain about the size of postgresql ever ever again.)\n\nrichard\n-- \nRichard Welty [email protected]\nAverill Park Networking 518-573-7592\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n\n\n",
"msg_date": "Fri, 3 Oct 2003 15:51:28 -0400 (EDT)",
"msg_from": "Richard Welty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Fri, 2003-10-03 at 14:04, Neil Conway wrote:\n> On Fri, 2003-10-03 at 14:08, Josh Berkus wrote:\n> > I can tell you from experience that you will get some odd behaviour, and even \n> > connection failures, when Postgres is forced into swap by lack of memory.\n> \n> Why would you get a connection failure? And other than poor performance,\n> why would you get \"odd behavior\" due to a lack of physical memory?\n\nIt would take so long for the \"server\" to respond that the client\nmight time out.\n\nOf course, back in the day, we supported 70 people on a mainframe\nw/ 1.6 MIPS and 8MB RAM. FEPs and 3270 terminals helped, of course.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"All machines, no matter how complex, are considered to be based\non 6 simple elements: the lever, the pulley, the wheel and axle,\nthe screw, the wedge and the inclined plane.\"\nMarilyn Vos Savant \n\n",
"msg_date": "Fri, 03 Oct 2003 15:42:32 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres low end processing."
},
{
"msg_contents": "Stef wrote:\n\n> On Fri, 03 Oct 2003 12:32:00 -0400\n> Tom Lane <[email protected]> wrote:\n> \n> => What exactly is failing? And what's the platform, anyway?\n> \n> Nothing is really failing atm, except the funds for better \n> hardware. JBOSS and some other servers need to be \n> run on these machines, along with linux, which will be \n> a minimal RH >= 7.2 with kernel 2.4.21\n> (Any better suggestions here?)\n> \n> In this case, whatever is the least amount of memory\n> postgres can run on, is what is needed. So this is still\n> a kind of feasibility study. Of course, it will still be thoroughly\n> tested, if it turns out to be possible. (Which I know it is, but not how)\n\nIf you mean to say that postgresql should use just 8 MB of RAM rather than \nrunning it on a 8MB machine, then that is impossible given how much postgresql \nrelies upon OS cache.\n\nYou may configure postgresql with 8MB shared memory or the old holy default of \n512K, but if your database is 100MB and OS is caching half of it on behalf of \npostgresql, your goal is already missed..\n\n Shridhar\n\n",
"msg_date": "Mon, 06 Oct 2003 11:41:34 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "Thanks for the replies,\n\nOn Fri, 3 Oct 2003 11:08:48 -0700\nJosh Berkus <[email protected]> wrote:\n=> 1. Make sure that the WAL files (pg_xlog) are on a seperate disk from the \n=> database files, either through mounting or symlinking.\n \nI'm not sure I understand how this helps?\n\n=> 2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very \n=> frequently, like every 1-5 minutes. Spend some time tuning your \n=> fsm_max_pages to the ideal level so that you're not allocating any extra \n=> memory to the FSM.\n=>\n=> 3. If your concern is *average* CPU/RAM consumption, and not peak load \n=> activity, increase wal_files and checkpoint_segments to do more efficient \n=> batch processing of pending updates as the cost of some disk space. If peak \n=> load activity is a problem, don't do this.\n=> \n=> 4. Tune all of your queries carefully to avoid anything requiring a \n=> RAM-intensive merge join or CPU-eating calculated expression hash join, or \n=> similar computation-or-RAM-intensive operations.\n\nThanks, I'll try some of these, and post the results.\nThe actual machines seem to be Pentium I machines,\nwith 32M RAM. I've gathered that it is theoretically \npossible, so no to go try it.\n\nRegards\nStef",
"msg_date": "Mon, 6 Oct 2003 09:55:51 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Mon, Oct 06, 2003 at 09:55:51 +0200,\n Stef <[email protected]> wrote:\n> \n> Thanks, I'll try some of these, and post the results.\n> The actual machines seem to be Pentium I machines,\n> with 32M RAM. I've gathered that it is theoretically \n> possible, so no to go try it.\n\nI am running 7.4beta2 on a Pentium I machine with 48 MB of memory.\nI was running an earlier version of Postgres (probably 7.1.x) on it\nwhen it only had 32 MB of memory. It doesn't run very fast, but it\nworks OK. I remember increase from 32MB to 48MB was very noticible in\nthe time to serve web pages using information from the DB, but I\ndon't remember the details since it was a couple of years ago.\n",
"msg_date": "Mon, 6 Oct 2003 08:29:05 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "Stef,\n\n> => 1. Make sure that the WAL files (pg_xlog) are on a seperate disk from the \n> => database files, either through mounting or symlinking.\n> \n> I'm not sure I understand how this helps?\n\nIt gives you better fsync write performance on a low-end disk setup. \nOtherwise, the disk is forced to do a hop-back-and-forth between the database \nand the xlog, resulting in much slower updates and thus the database tying up \nblocks of RAM longer -- particularly if your shared_buffers are set very low, \nwhich they will be.\n\nOn RAID setups, this is unnecessary becuase the RAID takes care of disk access \nmanagement. But on a low-end, 2-IDE-disk machine, you have to do it.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 6 Oct 2003 11:21:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "Hi again all,\n\nI've tested postgres 7.3.4 on Linux version 2.4.17 \nand this is what I found :\n\nThe initial instance took up 8372K and this fluctuated\nbetween +- 8372K and 10372K, plus +- 3500K for\nevery connection.\n\nI did quite a few transactions on both connections, plus\na few vacuums and a pg_dump and the total memory usage\ndidn't seem to go over 16M\n\nI set all the _buffers, _mem, _fsm settings to the minimum,\nrestarted every time, but this had absolutely no noticeable \nincrease or decrease in total memory usage.\n\n(I used a program called gmemusage to get these stats.)\n\nOn the same machine , I tested postgres 7.1.2 with basically\nthe same conf options (not _fsm) and got the following :\n\nThe initial instance was 1772K and fluctuated to +- 4000K,\nplus +- 3400K for every connection.\n\nDoing the same transactions, vacuum + pg_dump, total\nmemory usage didn't really go over 11M, \nwhich was exactly what I needed. \n\nAlthough I've lived through some of the shortcomings of\n7.1.2, it is still very stable, and works perfectly for\nwhat it is going to be used for.\n\nAgain, here, I was only able to restrict things a little\nby changing the configuration options, but no major\ndifference in memory usage.\n\nRegards\nStef\n\nOn Mon, 6 Oct 2003 09:55:51 +0200\nStef <[email protected]> wrote:\n\n=> Thanks for the replies,\n=> \n=> On Fri, 3 Oct 2003 11:08:48 -0700\n=> Josh Berkus <[email protected]> wrote:\n=> => 1. Make sure that the WAL files (pg_xlog) are on a seperate disk from the \n=> => database files, either through mounting or symlinking.\n=> \n=> I'm not sure I understand how this helps?\n=> \n=> => 2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very \n=> => frequently, like every 1-5 minutes. Spend some time tuning your \n=> => fsm_max_pages to the ideal level so that you're not allocating any extra \n=> => memory to the FSM.\n=> =>\n=> => 3. If your concern is *average* CPU/RAM consumption, and not peak load \n=> => activity, increase wal_files and checkpoint_segments to do more efficient \n=> => batch processing of pending updates as the cost of some disk space. If peak \n=> => load activity is a problem, don't do this.\n=> => \n=> => 4. Tune all of your queries carefully to avoid anything requiring a \n=> => RAM-intensive merge join or CPU-eating calculated expression hash join, or \n=> => similar computation-or-RAM-intensive operations.\n=> \n=> Thanks, I'll try some of these, and post the results.\n=> The actual machines seem to be Pentium I machines,\n=> with 32M RAM. I've gathered that it is theoretically \n=> possible, so no to go try it.\n=> \n=> Regards\n=> Stef\n=>",
"msg_date": "Tue, 7 Oct 2003 17:28:22 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres low end processing."
},
{
"msg_contents": "On Tue, 7 Oct 2003, Stef wrote:\n\n> The initial instance took up 8372K and this fluctuated\n> between +- 8372K and 10372K, plus +- 3500K for\n> every connection.\n>\n\nDoes that include/exlude the size of say, shared code & libraries?\nI know linux does copy-on-write forking.. so it may be less in reality...\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Tue, 7 Oct 2003 11:40:01 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres low end processing."
}
] |
[
{
"msg_contents": "We frequently need to know the number of tuples in a table although sometimes we do have WHERE status='X' for example but this often doesn't guarantee an indexed scan. And yes, my reasons are the same - reporting figures eg number of bookings made since the system was introduced. Have you tried doing\n\nSELECT count(pkey)\n\nrather than count(*)\n\nwhere pkey is the primary key (assuming you have a single field that is a primary key or a unique indexed key). This is MUCH faster in my experience. If you don't have such an animal, I'd seriously suggesting putting in a serial number and recreate the table with that as the primary key.\n\nThe vacuuming bit is not accurate enough for us in many instances. Also a count can be easily fed into other programs/web pages etc without having to parse the vacuum output.\n\nHilary\n\nAt 23:22 02/10/2003 -0700, you wrote:\n\n>I can tell you that this is one of the first thing applications' programmers and IT managers notice. It can slightly tarnish postgres' image when it takes it many long seconds to do what other databases can do in a snap. The \"whys and wherefores\" can be hard to get across once they see the comparative numbers.\n>\n>When I use Informix \"dbaccess\" it has a \"status\" which will tell me the row count of a table virtually instantly -- it can be locked out by a user with an exclusive lock so its not entirely independant of the table (like a stored value in one of the system catalog tables).\n>\n>This is not to say Informix is \"right\" and Postgres is \"wrong\" ... but it is something that virtually any newcomer will run into head long, with resulting bruises and contusions, not to mention confusion.\n>\n>At the very least this needs to be VERY clearly explained right up front, along with some of the possible work-arounds, depending on what one is really after with this info.\n>\n>Greg Williamson\n>DBA\n>GlobeXplorer LLC\n>\n>-----Original Message-----\n>From: Dror Matalon [mailto:[email protected]]\n>Sent: Thu 10/2/2003 9:27 PM\n>To: [email protected]\n>Cc: \n>Subject: Re: [PERFORM] count(*) slow on large tables\n>\n>\n>I smell a religious war in the aii:-). \n>Can you go several days in a row without doing select count(*) on any\n>of your tables? \n>\n>I suspect that this is somewhat a domain specific issue. In some areas\n>you don't need to know the total number of rows in your tables, in\n>others you do. \n>\n>I also suspect that you're right, that end user applications don't use\n>this information as often as DBAs would. On the other hand, it seems\n>whenever you want to optimize your app (something relevant to this list),\n>one of the things you do need to know is the number of rows in your\n>table.\n>\n>Dror\n>\n>On Thu, Oct 02, 2003 at 10:08:18PM -0400, Christopher Browne wrote:\n>> The world rejoiced as [email protected] (Dror Matalon) wrote:\n>> > I don't have an opinion on how hard it would be to implement the\n>> > tracking in the indexes, but \"select count(*) from some table\" is, in my\n>> > experience, a query that people tend to run quite often. \n>> > One of the databases that I've used, I believe it was Informix, had that\n>> > info cached so that it always new how many rows there were in any\n>> > table. 
It was quite useful.\n>> \n>> I can't imagine why the raw number of tuples in a relation would be\n>> expected to necessarily be terribly useful.\n>> \n>> I'm involved with managing Internet domains, and it's only when people\n>> are being pretty clueless that anyone imagines that \"select count(*)\n>> from domains;\" would be of any use to anyone. There are enough \"test\n>> domains\" and \"inactive domains\" and other such things that the raw\n>> number of \"things in the table\" aren't really of much use.\n>> \n>> - I _do_ care how many pages a table occupies, to some extent, as that\n>> determines whether it will fit in my disk space or not, but that's not\n>> COUNT(*).\n>> \n>> - I might care about auditing the exact numbers of records in order to\n>> be assured that a data conversion process was done correctly. But in\n>> that case, I want to do something a whole *lot* more detailed than\n>> mere COUNT(*).\n>> \n>> I'm playing \"devil's advocate\" here, to some extent. But\n>> realistically, there is good reason to be skeptical of the merits of\n>> using SELECT COUNT(*) FROM TABLE for much of anything.\n>> \n>> Furthermore, the relation that you query mightn't be a physical\n>> \"table.\" It might be a more virtual VIEW, and if that's the case,\n>> bets are even MORE off. If you go with the common dictum of \"good\n>> design\" that users don't directly access tables, but go through VIEWs,\n>> users may have no way to get at SELECT COUNT(*) FROM TABLE.\n>> -- \n>> output = reverse(\"ac.notelrac.teneerf\" \"@\" \"454aa\")\n>> http://www.ntlug.org/~cbbrowne/finances.html\n>> Rules of the Evil Overlord #74. \"When I create a multimedia\n>> presentation of my plan designed so that my five-year-old advisor can\n>> easily understand the details, I will not label the disk \"Project\n>> Overlord\" and leave it lying on top of my desk.\"\n>> <http://www.eviloverlord.com/>\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>\n>-- \n>Dror Matalon, President\n>Zapatec Inc \n>1700 MLK Way\n>Berkeley, CA 94709\n>http://www.zapatec.com\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster \n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n",
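To make the suggestion concrete, the two forms being compared look like this (table and column names are invented for illustration; note that count(column) skips NULLs, which is why it only substitutes for count(*) when the column is a primary key or otherwise NOT NULL):

    SELECT count(*) FROM bookings;
    SELECT count(booking_id) FROM bookings;    -- booking_id = single-column primary key

    SELECT count(booking_id)                   -- the "bookings made since ..." style of report,
      FROM bookings                            -- able to use an index on the date column
     WHERE booked_on >= '2003-01-01';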
"msg_date": "Fri, 03 Oct 2003 17:50:17 +0100",
"msg_from": "Hilary Forbes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) slow on large tables"
}
] |
[
{
"msg_contents": "Josh,\n\nI declared all the indexes that you suggested and ran vacuum full\nanalyze. The query plan has not changed and it's still trying to use\nseqscan. I tried to disable seqscan, but the plan didn't change. Any\nother suggestions?\nI started explain analyze on the query, but I doubt it will finish any\ntime soon.\nThanks.\n\nOleg\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Thursday, October 02, 2003 11:27 PM\nTo: Oleg Lebedev; scott.marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg,\n\n> I have another question. How do I optimize my indexes for the query \n> that contains a lot of ORed blocks, each of which contains a bunch of \n> ANDed expressions? The structure of each ORed block is the same except\n\n> the right-hand-side values vary.\n\nGiven the example, I'd do a multicolumn index on p_brand, p_container,\np_size \nand a second multicolumn index on l_partkey, l_quantity, l_shipmode.\nHmmm \n... or maybe seperate indexes, one on l_partkey and one on l_quantity, \nl_shipmode & l_instruct. Test both configurations.\n\nMind you, if this is also an OLTP table, then you'd want to test those \nmulti-column indexes to determine the least columns you need for the\nindexes \nstill to be used, since more columns = more index maintainence.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Fri, 3 Oct 2003 10:54:42 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
{
"msg_contents": "Oleg,\n\n> I declared all the indexes that you suggested and ran vacuum full\n> analyze. The query plan has not changed and it's still trying to use\n> seqscan. I tried to disable seqscan, but the plan didn't change. Any\n> other suggestions?\n> I started explain analyze on the query, but I doubt it will finish any\n> time soon.\n\nCan I get a copy of the database so that I can tinker? I'm curious now, plus \nI want our benchmarks to look good.\n\nI have a private FTP if that helps.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 3 Oct 2003 10:21:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Josh,\nMy data directory is 3.8 GB.\nI can send you flat data files and scripts to create indices, but still\nit would be about 1.3 GB of data. Do you still want me to transfer data\nto you? If yes, then just give me your FTP address.\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Friday, October 03, 2003 11:22 AM\nTo: Oleg Lebedev; scott.marlowe\nCc: [email protected]\nSubject: Re: [PERFORM] TPC-R benchmarks\n\n\nOleg,\n\n> I declared all the indexes that you suggested and ran vacuum full \n> analyze. The query plan has not changed and it's still trying to use \n> seqscan. I tried to disable seqscan, but the plan didn't change. Any \n> other suggestions? I started explain analyze on the query, but I doubt\n\n> it will finish any time soon.\n\nCan I get a copy of the database so that I can tinker? I'm curious\nnow, plus \nI want our benchmarks to look good.\n\nI have a private FTP if that helps.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n*************************************\n\nThis e-mail may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments.\nUnauthorized reviewing, copying, printing, disclosing, or otherwise using information in this e-mail is prohibited.\nWe reserve the right to monitor e-mail sent through our network. \n\n*************************************\n",
"msg_date": "Fri, 3 Oct 2003 12:04:23 -0600",
"msg_from": "Oleg Lebedev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query that ran quite well initially, but slowed down quite a\nbit once I introduced an aggregate into the equation. The average\nexecution time went up from around 15 msec to around 300 msec. \n\nThe original query fetches a bunch of articles:\n\nselect articlenumber, channel, description, title, link, dtstamp from\n\titems, my_channels where items.channel = '22222' and my_channels.id =\n\t'22222' and owner = 'drormata' and dtstamp > last_viewed and\n\tarticlenumber not in (select item from viewed_items where channel\n\t='22222' and owner = 'drormata');\n\n\nI then added a call to a function:\n\nand (dtstamp = item_max_date(22222, link))\n\n\nitem_max_date() looks like this:\n select max(dtstamp) from items where channel = $1 and link = $2;\n\nThis should eliminate duplicate articles and only show the most recent\none.\n\nresulting in the following query\n\nselect articlenumber, channel, description, title, link, dtstamp from\n\titems, my_channels where items.channel = '22222' and my_channels.id =\n\t'22222' and owner = 'drormata' and dtstamp > last_viewed and\n\tarticlenumber not in (select item from viewed_items where channel\n\t='22222' and owner = 'drormata') and (dtstamp = item_max_date(22222,\n\tlink));\n\n\n\nAny suggestions on optimizing the query/function? It makes sense that \nit slowed down, but I wonder if I can do better.\n\nI'm including index list as well as \"explain analyze\" of both versions.\n\nIndexes:\n \"item_channel_link\" btree (channel, link)\n \"item_created\" btree (dtstamp)\n \"item_signature\" btree (signature)\n \"items_channel_article\" btree (channel, articlenumber)\n\n\nexplain analyze select articlenumber, channel, description, title, link, dtstamp from items, my_channels where items.channel = '22222' and my_channels.id = '22222' and owner = 'drormata' and dtstamp > last_viewed and articlenumber not in (select item from viewed_items where channel ='22222' and owner = 'drormata'); QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=8.19..6982.58 rows=302 width=259) (actual time=16.95..17.16 rows=8 loops=1)\n Join Filter: (\"inner\".dtstamp > \"outer\".last_viewed)\n -> Seq Scan on my_channels (cost=0.00..3.23 rows=1 width=8) (actual time=0.36..0.38 rows=1 loops=1)\n Filter: ((id = 22222) AND ((\"owner\")::text = 'drormata'::text))\n -> Index Scan using items_channel_article on items (cost=8.19..6968.05 rows=904 width=259) (actual time=0.68..13.94 rows=899 loops=1)\n Index Cond: (channel = 22222)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on viewed_items (cost=0.00..8.19 rows=2 width=4) (actual time=0.48..0.48 rows=0 loops=1)\n Filter: ((channel = 22222) AND ((\"owner\")::text = 'drormata'::text))\n Total runtime: 17.42 msec\n(11 rows)\n\n\nexplain analyze select articlenumber, channel, description, title, link, dtstamp from items, my_channels where items.channel = '22222' and my_channels.id = '22222' and owner = 'drormata' and dtstamp > last_viewed and articlenumber not in (select item from viewed_items where channel ='22222' and owner = 'drormata') and (dtstamp = item_max_date(22222, link));\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=8.19..6980.33 rows=1 width=259) (actual time=262.94..265.14 rows=7 loops=1)\n Join Filter: (\"outer\".dtstamp > 
\"inner\".last_viewed)\n -> Index Scan using items_channel_article on items (cost=8.19..6977.08 rows=1 width=259) (actual time=1.94..150.55 rows=683 loops=1)\n Index Cond: (channel = 22222)\n Filter: ((dtstamp = item_max_date(22222, link)) AND (NOT (hashed subplan)))\n SubPlan\n -> Seq Scan on viewed_items (cost=0.00..8.19 rows=2 width=4) (actual time=0.43..0.43 rows=0 loops=1)\n Filter: ((channel = 22222) AND ((\"owner\")::text = 'drormata'::text))\n -> Seq Scan on my_channels (cost=0.00..3.23 rows=1 width=8) (actual time=0.14..0.15 rows=1 loops=683)\n Filter: ((id = 22222) AND ((\"owner\")::text = 'drormata'::text))\n Total runtime: 265.39 msec\n\n\n\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 3 Oct 2003 13:21:20 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding up Aggregates"
},
{
"msg_contents": "Dror,\n\n> select articlenumber, channel, description, title, link, dtstamp from\n> \titems, my_channels where items.channel = '22222' and my_channels.id =\n> \t'22222' and owner = 'drormata' and dtstamp > last_viewed and\n> \tarticlenumber not in (select item from viewed_items where channel\n> \t='22222' and owner = 'drormata');\n\nthe NOT IN is a bad idea unless the subselect never returns more than a \nhandful of rows. If viewed_items can grow to dozens of rows, wyou should \nuse WHERE NOT EXISTS instead. Unless you're using 7.4.\n\n> item_max_date() looks like this:\n> select max(dtstamp) from items where channel = $1 and link = $2;\n\nChange it to \n\nSELECT dtstamp from iterm where channel = $1 and link = $2\nORDER BY dtstamp DESC LIMIT 1\n\nand possibly build an index on channel, link, dtstamp\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 3 Oct 2003 14:07:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "\nHi Josh,\n\nOn Fri, Oct 03, 2003 at 02:07:10PM -0700, Josh Berkus wrote:\n> Dror,\n> \n> > select articlenumber, channel, description, title, link, dtstamp from\n> > \titems, my_channels where items.channel = '22222' and my_channels.id =\n> > \t'22222' and owner = 'drormata' and dtstamp > last_viewed and\n> > \tarticlenumber not in (select item from viewed_items where channel\n> > \t='22222' and owner = 'drormata');\n> \n> the NOT IN is a bad idea unless the subselect never returns more than a \n> handful of rows. If viewed_items can grow to dozens of rows, wyou should \n> use WHERE NOT EXISTS instead. Unless you're using 7.4.\n> \n\nI am using 7.4, and had tried NOT EXISTS and didn't see any\nimprovements.\n\n> > item_max_date() looks like this:\n> > select max(dtstamp) from items where channel = $1 and link = $2;\n> \n> Change it to \n> \n> SELECT dtstamp from iterm where channel = $1 and link = $2\n> ORDER BY dtstamp DESC LIMIT 1\n> \n\nDidn't make a difference. And plugging real values into this query as\nwell as into the original \n select max(dtstamp) from items where channel = $1 and link = $2;\n\nand doing an explain analyze shows that the cost is the same. The\nstrange things is that when I run the above queries by hand they take\nabout .5 msec. Yet on a resultset that fetches 5 rows, I go up from 15\nmsec to 300 msec. It would seem like it should be something like 15 +\n(0.5 * 5) + small overhead, = 30 msec or so rather than the 300 I'm\nseeing.\n\n> and possibly build an index on channel, link, dtstamp\n\nDidn't make a difference either. Explain analyze shows that it didn't\nuse it.\n\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 3 Oct 2003 14:28:48 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "Dror,\n\n> I am using 7.4, and had tried NOT EXISTS and didn't see any\n> improvements.\n\nIt wouldn't if you're using 7.4, which has improved IN performance immensely.\n\nWhat happens if you stop using a function and instead use a subselect?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 3 Oct 2003 14:35:46 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "> item_max_date() looks like this:\n> select max(dtstamp) from items where channel = $1 and link = $2;\n\nIt is too bad the (channel, link) index doesn't have dtstamp at the end\nof it, otherwise the below query would be a gain (might be a small one\nanyway).\n\n select dtstamp\n from items\n where channel = $1\n and link = $2\nORDER BY dtstamp DESC\n LIMIT 1;\n\n\nCould you show us the exact specification of the function? In\nparticular, did you mark it VOLATILE, IMMUTABLE, or STABLE?\n\nI hope it isn't the first or second one ;)",
"msg_date": "Fri, 03 Oct 2003 17:44:49 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:\n> > item_max_date() looks like this:\n> > select max(dtstamp) from items where channel = $1 and link = $2;\n> \n> It is too bad the (channel, link) index doesn't have dtstamp at the end\n> of it, otherwise the below query would be a gain (might be a small one\n> anyway).\n> \n> select dtstamp\n> from items\n> where channel = $1\n> and link = $2\n> ORDER BY dtstamp DESC\n> LIMIT 1;\n\nSimilar idea to what Josh suggested. I did create an additional index\nwith dtstamp at the end and it doesn't look like the planner used it.\nUsing the above query instead of max() didn't improve things either.\n\n> \n> \n> Could you show us the exact specification of the function? In\n> particular, did you mark it VOLATILE, IMMUTABLE, or STABLE?\n> \n> I hope it isn't the first or second one ;)\n\nCREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS\ntimestamptz AS '\nselect max(dtstamp) from items where channel = $1 and link = $2;\n' LANGUAGE 'sql';\n\n\n\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 3 Oct 2003 14:53:47 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 02:35:46PM -0700, Josh Berkus wrote:\n> Dror,\n> \n> > I am using 7.4, and had tried NOT EXISTS and didn't see any\n> > improvements.\n> \n> It wouldn't if you're using 7.4, which has improved IN performance immensely.\n> \n> What happens if you stop using a function and instead use a subselect?\n\nAn improvement. Now I'm getting in the 200 msec response time. \n\nAnd by the way, I tried \"not exists\" again and it actually runs slower\nthan \"not in.\"\n\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 3 Oct 2003 15:03:58 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:\n> On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:\n> > > item_max_date() looks like this:\n> > > select max(dtstamp) from items where channel = $1 and link = $2;\n> > \n> > It is too bad the (channel, link) index doesn't have dtstamp at the end\n> > of it, otherwise the below query would be a gain (might be a small one\n> > anyway).\n> > \n> > select dtstamp\n> > from items\n> > where channel = $1\n> > and link = $2\n> > ORDER BY dtstamp DESC\n> > LIMIT 1;\n\nIt didn't make a difference even with the 3 term index? I guess you\ndon't have very many common values for channel / link combination.\n\n\n\nHow about the below? Note the word STABLE on the end.\n\nCREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS\ntimestamptz AS '\nselect max(dtstamp) from items where channel = $1 and link = $2;\n' LANGUAGE 'sql' STABLE;",
"msg_date": "Fri, 03 Oct 2003 18:10:29 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 06:10:29PM -0400, Rod Taylor wrote:\n> On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:\n> > On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:\n> > > > item_max_date() looks like this:\n> > > > select max(dtstamp) from items where channel = $1 and link = $2;\n> > > \n> > > It is too bad the (channel, link) index doesn't have dtstamp at the end\n> > > of it, otherwise the below query would be a gain (might be a small one\n> > > anyway).\n> > > \n> > > select dtstamp\n> > > from items\n> > > where channel = $1\n> > > and link = $2\n> > > ORDER BY dtstamp DESC\n> > > LIMIT 1;\n> \n> It didn't make a difference even with the 3 term index? I guess you\n> don't have very many common values for channel / link combination.\n\nThere's no noticeable difference between two term and three term\nindexes.\n\n> \n> \n> \n> How about the below? Note the word STABLE on the end.\n> \n> CREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS\n> timestamptz AS '\n> select max(dtstamp) from items where channel = $1 and link = $2;\n> ' LANGUAGE 'sql' STABLE;\n\nMade no difference.\n\n\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 3 Oct 2003 15:16:58 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "> > I hope it isn't the first or second one ;)\n> \n> CREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS\n> timestamptz AS '\n> select max(dtstamp) from items where channel = $1 and link = $2;\n> ' LANGUAGE 'sql';\n\n\nHow about the below?\n\nCREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS\ntimestamptz AS '\nselect max(dtstamp) from items where channel = $1 and link = $2;\n' LANGUAGE 'sql' STABLE;",
"msg_date": "Fri, 03 Oct 2003 19:32:16 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n\n> On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:\n> > On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:\n> > >\n> > > It is too bad the (channel, link) index doesn't have dtstamp at the end\n> > > of it, otherwise the below query would be a gain (might be a small one\n> > > anyway).\n> > > \n> > > select dtstamp\n> > > from items\n> > > where channel = $1\n> > > and link = $2\n> > > ORDER BY dtstamp DESC\n> > > LIMIT 1;\n> \n> It didn't make a difference even with the 3 term index? I guess you\n> don't have very many common values for channel / link combination.\n\nYou need to do:\n\n ORDER BY channel DESC, link DESC, dtstamp DESC\n\nThis is an optimizer nit. It doesn't notice that since it selected on channel\nand link already the remaining tuples in the index will be ordered simply by\ndtstamp.\n\n(This is the thing i pointed out previously in\n<[email protected]> on Feb 13th 2003 on pgsql-general)\n\n\n-- \ngreg\n\n",
"msg_date": "08 Oct 2003 10:54:24 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "\nActually what finally sovled the problem is repeating the \ndtstamp > last_viewed\nin the sub select\n\nselect articlenumber, channel, description, title, link, dtstamp from items i1, my_channels where ((i1.channel = '22222' and\nmy_channels.id = '22222' and owner = 'drormata' and (dtstamp > last_viewed)) ) and (dtstamp = (select max (dtstamp) from items i2 \n\twhere channel = '22222' and i1.link = i2.link));\n\nto\nexplain analyze select articlenumber, channel, description, title, link, dtstamp from items i1, my_channels where ((i1.channel = '22222' and\nmy_channels.id = '22222' and owner = 'drormata' and (dtstamp > last_viewed)) ) and (dtstamp = (select max (dtstamp) from items i2 where\nchannel = '22222' and i1.link = i2.link and dtstamp > last_viewed));\n\nWhich in the stored procedure looks like this:\nCREATE or REPLACE FUNCTION item_max_date (int4, varchar, timestamptz)\nRETURNS\ntimestamptz AS '\nselect max(dtstamp) from items where channel = $1 and link = $2 and\ndtstamp > $3;\n' LANGUAGE 'sql';\n\n\nBasically I have hundreds or thousands of items but only a few that\nsatisfy \"dtstamp > last_viewed\". Obviously I want to run the max() only on\non a few items. Repeating \"dtstamp > last_viewed\" did the trick, but it\nseems like there should be a more elegant/clear way to tell the planner\nwhich constraint to apply first.\n\nDror\n\n\n\nOn Wed, Oct 08, 2003 at 10:54:24AM -0400, Greg Stark wrote:\n> Rod Taylor <[email protected]> writes:\n> \n> > On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:\n> > > On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:\n> > > >\n> > > > It is too bad the (channel, link) index doesn't have dtstamp at the end\n> > > > of it, otherwise the below query would be a gain (might be a small one\n> > > > anyway).\n> > > > \n> > > > select dtstamp\n> > > > from items\n> > > > where channel = $1\n> > > > and link = $2\n> > > > ORDER BY dtstamp DESC\n> > > > LIMIT 1;\n> > \n> > It didn't make a difference even with the 3 term index? I guess you\n> > don't have very many common values for channel / link combination.\n> \n> You need to do:\n> \n> ORDER BY channel DESC, link DESC, dtstamp DESC\n> \n> This is an optimizer nit. It doesn't notice that since it selected on channel\n> and link already the remaining tuples in the index will be ordered simply by\n> dtstamp.\n> \n> (This is the thing i pointed out previously in\n> <[email protected]> on Feb 13th 2003 on pgsql-general)\n> \n> \n> -- \n> greg\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Wed, 8 Oct 2003 11:18:19 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "Dror Matalon <[email protected]> writes:\n\n> Actually what finally sovled the problem is repeating the \n> dtstamp > last_viewed\n> in the sub select\n\nThat will at least convince the optimizer to use an index range lookup. But it\nstill will have to scan every record that matches channel==$1, link==$2, and\ndtstamp>$3.\n\nThe trick of using limit 1 will be faster still as it only has to retrieve a\nsingle record using the index. But you have to be sure to convince it to use\nthe index and the way to do that is to list exactly the same columns in the\nORDER BY as are in the index definition. \n\nEven if some of the leading columns are redundant because they'll be constant\nfor all of the records retrieved. The optimizer doesn't know to ignore those.\n\n> > (This is the thing i pointed out previously in\n> > <[email protected]> on Feb 13th 2003 on pgsql-general)\n\n-- \ngreg\n\n",
"msg_date": "09 Oct 2003 19:07:00 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:\n> Dror Matalon <[email protected]> writes:\n> \n> > Actually what finally sovled the problem is repeating the \n> > dtstamp > last_viewed\n> > in the sub select\n> \n> That will at least convince the optimizer to use an index range lookup. But it\n> still will have to scan every record that matches channel==$1, link==$2, and\n> dtstamp>$3.\n> \n> The trick of using limit 1 will be faster still as it only has to retrieve a\n> single record using the index. But you have to be sure to convince it to use\n\nHow is doing order by limit 1 faster than doing max()? Seems like the\noptimizer will need to sort or scan the data set either way. That part\ndidn't actually make a difference in my specific case.\n\n\n> the index and the way to do that is to list exactly the same columns in the\n> ORDER BY as are in the index definition. \n> \n> Even if some of the leading columns are redundant because they'll be constant\n> for all of the records retrieved. The optimizer doesn't know to ignore those.\n\nThe main problem in my case was that the optimizer was doing the max()\non all 700 rows, rather than the filtered rows. It's not until I put the\n\"dtstamp> last_viewed\" in the sub select as well as in the main query\nthat it realized that it can first filter the 696 rows out and then to\nthe max() on the 4 rows that satisfied this constraint. \n\nThat was the big saving.\n\nHope this all makes sense,\n\nDror\n> \n> > > (This is the thing i pointed out previously in\n> > > <[email protected]> on Feb 13th 2003 on pgsql-general)\n> \n> -- \n> greg\n> \n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 9 Oct 2003 17:44:46 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Thu, Oct 09, 2003 at 17:44:46 -0700,\n Dror Matalon <[email protected]> wrote:\n> \n> How is doing order by limit 1 faster than doing max()? Seems like the\n> optimizer will need to sort or scan the data set either way. That part\n> didn't actually make a difference in my specific case.\n\nmax() will never be evaluated by using an index to find the greatest value.\nSo in many cases using order by and limit 1 is faster.\n",
"msg_date": "Thu, 9 Oct 2003 20:35:22 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Thu, Oct 09, 2003 at 08:35:22PM -0500, Bruno Wolff III wrote:\n> On Thu, Oct 09, 2003 at 17:44:46 -0700,\n> Dror Matalon <[email protected]> wrote:\n> > \n> > How is doing order by limit 1 faster than doing max()? Seems like the\n> > optimizer will need to sort or scan the data set either way. That part\n> > didn't actually make a difference in my specific case.\n> \n> max() will never be evaluated by using an index to find the greatest value.\n> So in many cases using order by and limit 1 is faster.\n\nOuch. I just double checked and you're right. Is this considered a bug,\nor just an implementation issue? \n\nWhile I've seen this hint a few times in the lists, it seems like it's\none of those magic incantations that those in the know, know about, and\nthat people new to postgres are going to be surprised by the need to use\nthis idiom.\n\nRegards,\n\nDror\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 9 Oct 2003 20:55:26 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "\nDror Matalon <[email protected]> writes:\n\n> Ouch. I just double checked and you're right. Is this considered a bug,\n> or just an implementation issue? \n\nCall it a wishlist bug. The problem is it would be a hard feature to implement\nproperly. And none of the people paid to work on postgres by various companies\nseem to have this on their to-do lists. So don't expect it in the near future.\n\n> While I've seen this hint a few times in the lists, it seems like it's\n> one of those magic incantations that those in the know, know about, and\n> that people new to postgres are going to be surprised by the need to use\n> this idiom.\n\nYup. Though it's in the FAQ and comes up on the mailing list about once a week\nor so, so it's hard to see how to document it any better. Perhaps a warning\nspecifically on the min/max functions in the documentation?\n\n\nSay, what do people think about a comment board thing like php.net has\nattached to the documentation. People can add comments that show up directly\non the bottom of the documentation for each function. I find it's mostly full\nof junk but skimming the comments often turns up one or two relevant warnings,\nespecially when I'm wondering why something's not behaving the way I expect.\n\n-- \ngreg\n\n",
"msg_date": "10 Oct 2003 00:49:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "\n> Say, what do people think about a comment board thing like php.net has\n> attached to the documentation. People can add comments that show up directly\n> on the bottom of the documentation for each function. I find it's mostly full\n> of junk but skimming the comments often turns up one or two relevant warnings,\n> especially when I'm wondering why something's not behaving the way I expect.\n\nI thought we had that:\n\nhttp://www.postgresql.org/docs/7.3/interactive/functions-aggregate.html\n\n...and someone has already made the comment.\n\nChris\n\n\n",
"msg_date": "Fri, 10 Oct 2003 13:34:34 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Dror Matalon wrote:\n\n> On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:\n> > Dror Matalon <[email protected]> writes:\n> > \n> > > Actually what finally sovled the problem is repeating the \n> > > dtstamp > last_viewed\n> > > in the sub select\n> > \n> > That will at least convince the optimizer to use an index range lookup. But it\n> > still will have to scan every record that matches channel==$1, link==$2, and\n> > dtstamp>$3.\n> > \n> > The trick of using limit 1 will be faster still as it only has to retrieve a\n> > single record using the index. But you have to be sure to convince it to use\n> \n> How is doing order by limit 1 faster than doing max()? Seems like the\n> optimizer will need to sort or scan the data set either way. That part\n> didn't actually make a difference in my specific case.\n>\n\nmax(field) = sequential scan looking for the hightest.\n\norder by field desc limit 1 = index scan (if available), read first \nrecord. \n\telse (if no index) sequential scan for highest.\n\n\taggregates don't use indexes because its only appilicable for\nmax() and min() and can't be done for sum(), count(), etc writing an\nalogorithim to use the index would be complex as you would need to tell\nthe optimized from the inside a function (you can write aggrate functions\nyour self if you wish) to do somthing slighly differently.\n\nfor my large table....\nselect max(field) from table; (5264.21 msec)\nselect field from table order by field limit 1; (54.88 msec)\n\nPeter Childs\n\n",
"msg_date": "Fri, 10 Oct 2003 09:51:43 +0100 (BST)",
"msg_from": "Peter Childs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "Greg Stark writes:\n> Call it a wishlist bug. The problem is it would be a hard feature to\n> implement properly. And none of the people paid to work on postgres\n> by various companies seem to have this on their to-do lists. So\n> don't expect it in the near future.\n\nWe are using Postgres heavily, and we should be able to find some time\nand/or funding to help.\n\nWe're becoming more and more frustrated with the discontinuous\nbehaviour of the planner. It seems every complex query we have these\ndays needs some \"hint\" like \"ORDER BY foo DESC LIMIT 1\" to make it run\non the order of seconds, not minutes. We usually figure out a way to\nwrite the query so the planner does the right thing, and pushes\nthe discontinuity out far enough that the user doesn't see it.\nHowever, it takes a lot of work, and it seems to me that work would be\nput to better use improving the planner than improving our knowledge\nof how to get the planner to do the right thing by coding the SQL in\nsome unusual way.\n\nPlease allow me to go out on a limb here. I know that Tom is\nphilosophically opposed to planner hints. However, we do have a\nproblem that the planner is never going to be smart enough. This\nleaves the SQL coder the only option of collecting a bag of\n(non-portable) SQL tricks. I saw what the best SQL coders did at\nTandem, and frankly, it's scary. Being at Tandem, the SQL coders also\nhad the option (if they could argue their case strong enough) of\nadding a new rule to the optimizer. This doesn't solve the problem\nfor outsiders no matter how good they are at SQL.\n\nWould it be possible to extend the planner with a pattern matching\nlanguage? It would formalize what it is doing already, and would\nallow outsiders to teach the planner about idiosyncrasies without\nchanging the SQL. It would be like a style sheet in Latex (or Scribe :-)\nif you are familiar with these typesetting languages.\n\nComments?\n\nRob\n\n\n",
"msg_date": "Fri, 10 Oct 2003 09:46:37 -0600",
"msg_from": "Rob Nagler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "Dror,\n\n> Ouch. I just double checked and you're right. Is this considered a bug,\n> or just an implementation issue?\n\nIt's an implementation issue, which may be fixed by 7.5 but not sooner. \nBasically, the free ability of PostgreSQL users to define their own \naggregates limits our ability to define query planner optimization for \naggregates. Only recently has anyone suggested a feasable way around this.\n\n> While I've seen this hint a few times in the lists, it seems like it's\n> one of those magic incantations that those in the know, know about, and\n> that people new to postgres are going to be surprised by the need to use\n> this idiom.\n\nIt IS in the FAQ.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 10 Oct 2003 10:32:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up Aggregates"
},
{
"msg_contents": "On Fri, Oct 10, 2003 at 10:32:32AM -0700, Josh Berkus wrote:\n> Dror,\n> \n> > Ouch. I just double checked and you're right. Is this considered a bug,\n> > or just an implementation issue?\n> \n> It's an implementation issue, which may be fixed by 7.5 but not sooner. \n> Basically, the free ability of PostgreSQL users to define their own \n> aggregates limits our ability to define query planner optimization for \n> aggregates. Only recently has anyone suggested a feasable way around this.\n> \n> > While I've seen this hint a few times in the lists, it seems like it's\n> > one of those magic incantations that those in the know, know about, and\n> > that people new to postgres are going to be surprised by the need to use\n> > this idiom.\n> \n> It IS in the FAQ.\n\nMight be a good idea to put it in its own section rather than under \"My\nqueries are slow or don't make use of the indexes. Why?\"\n\nAlso, you might want to take out for 7.4\n\n4.22) Why are my subqueries using IN so slow?\n\n\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Fri, 10 Oct 2003 11:23:48 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up Aggregates"
}
] |
[
{
"msg_contents": "I've read some posts that says vacuum doesn't lock, but my experience\ntoday indicates the opposite. It seemed that \"vacuum full analyze\"\nwas locked waiting and so were other postmaster processes. It\nappeared to be deadlock, because all were in \"WAITING\" state according\nto ps. I let this go for about a 1/2 hour, and then killed the vacuum\nat which point all other processes completed normally.\n\nThe same thing seemed to be happening with reindex on a table. It\nseems that the reindex locks the table and some other resource which\nthen causes deadlock with other active processes.\n\nAnother issue seems to be performance. A reindex on some indexes is\ntaking 12 minutes or so. Vacuum seems to be slow, too. Way longer\nthan the time it takes to reimport the entire database (30 mins).\n\nIn summary, I suspect that it is better from a UI perspective to bring\ndown the app on Sat at 3 a.m and reimport with a fixed time period\nthan to live through reindexing/vacuuming which may deadlock. Am I\nmissing something?\n\nThanks,\nRob\n\n\n",
"msg_date": "Fri, 3 Oct 2003 14:24:42 -0600",
"msg_from": "Rob Nagler <[email protected]>",
"msg_from_op": true,
"msg_subject": "reindex/vacuum locking/performance?"
},
{
"msg_contents": "Rob Nagler <[email protected]> writes:\n> I've read some posts that says vacuum doesn't lock, but my experience\n> today indicates the opposite. It seemed that \"vacuum full analyze\"\n> was locked waiting and so were other postmaster processes.\n\nvacuum full does require exclusive lock, plain vacuum does not.\n\n> It\n> appeared to be deadlock, because all were in \"WAITING\" state according\n> to ps. I let this go for about a 1/2 hour, and then killed the vacuum\n> at which point all other processes completed normally.\n\nIt's considerably more likely that the vacuum was waiting for an open\nclient transaction (that had a read or write lock on some table) to\nfinish than that there was an undetected deadlock. I suggest looking at\nyour client code. Also, in 7.3 or later you could look at the pg_locks\nview to work out exactly who has the lock that's blocking vacuum.\n\n> Another issue seems to be performance. A reindex on some indexes is\n> taking 12 minutes or so. Vacuum seems to be slow, too. Way longer\n> than the time it takes to reimport the entire database (30 mins).\n\nvacuum full is indeed slow. That's why we do not recommend it as a\nroutine maintenance procedure. The better approach is to do plain\nvacuums often enough that you don't need vacuum full. In pre-7.4\nreleases you might need periodic reindexes too, depending on whether\nyour usage patterns tickle the index-bloat problem. But it is easily\ndemonstrable that reindexing is cheaper than rebuilding the database.\n\n> In summary, I suspect that it is better from a UI perspective to bring\n> down the app on Sat at 3 a.m and reimport with a fixed time period\n> than to live through reindexing/vacuuming which may deadlock. Am I\n> missing something?\n\nAlmost certainly, though you've not provided enough detail to determine\nwhat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Oct 2003 16:57:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
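For reference, a query along these lines (the table name is only a placeholder) shows who holds or is waiting for a lock on a given table in 7.3, by resolving pg_locks.relation against pg_class:

    SELECT c.relname, l.pid, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname = 'mytable';

Rows with granted = f are the waiters; matching the pid column against ps output or pg_stat_activity identifies the client session holding things up.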
{
"msg_contents": "[email protected] (Rob Nagler) writes:\n> I've read some posts that says vacuum doesn't lock, but my experience\n> today indicates the opposite. It seemed that \"vacuum full analyze\"\n> was locked waiting and so were other postmaster processes. It\n> appeared to be deadlock, because all were in \"WAITING\" state according\n> to ps. I let this go for about a 1/2 hour, and then killed the vacuum\n> at which point all other processes completed normally.\n\nVACUUM FULL certainly does lock.\n\nSee the man page:\n\n INPUTS\n FULL Selects ``full'' vacuum, which may reclaim more space, but takes\n much longer and exclusively locks the table.\n\nThe usual answer is that you probably _didn't_ want to VACUUM FULL.\n\nVACUUM ('no full') does NOT block updates.\n\n> The same thing seemed to be happening with reindex on a table. It\n> seems that the reindex locks the table and some other resource which\n> then causes deadlock with other active processes.\n\nNot surprising either. While the reindex takes place, updates to that\ntable have to be deferred.\n\n> Another issue seems to be performance. A reindex on some indexes is\n> taking 12 minutes or so. Vacuum seems to be slow, too. Way longer\n> than the time it takes to reimport the entire database (30 mins).\n\nThat seems a little surprising.\n\n> In summary, I suspect that it is better from a UI perspective to\n> bring down the app on Sat at 3 a.m and reimport with a fixed time\n> period than to live through reindexing/vacuuming which may deadlock.\n> Am I missing something?\n\nConsider running pg_autovacuum, and thereby do a little bit of\nvacuuming here and there all the time. It DOESN'T block, so unless\nyour system is really busy, it shouldn't slow things down to a major\ndegree.\n-- \n\"cbbrowne\",\"@\",\"libertyrms.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 03 Oct 2003 17:34:14 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> vacuum full does require exclusive lock, plain vacuum does not.\n\nI think I need full, because there are updates on the table. As I\nunderstand it, an update in pg is an insert/delete, so it needs\nto be garbage collected.\n\n> It's considerably more likely that the vacuum was waiting for an open\n> client transaction (that had a read or write lock on some table) to\n> finish than that there was an undetected deadlock. I suggest looking at\n> your client code. Also, in 7.3 or later you could look at the pg_locks\n> view to work out exactly who has the lock that's blocking vacuum.\n\nMy client code does a lot. I look at more often than I'd like to. :-) \n\nI don't understand why the client transaction would block if vacuum\nwas waiting. Does vacuum lock the table and then try to get some\nother \"open transaction\" resource? Free space? I guess I don't\nunderstand what other resources would be required of vacuum. The\nclient transactions are short (< 1s). They don't deadlock normally,\nonly with reindex and vacuum did I see this behavior.\n\n> vacuum full is indeed slow. That's why we do not recommend it as a\n> routine maintenance procedure. The better approach is to do plain\n> vacuums often enough that you don't need vacuum full.\n\nThe description of vacuum full implies that is required if the db\nis updated frequently. This db gets about 1 txn a second, possibly\nmore at peak load.\n\n> In pre-7.4\n> releases you might need periodic reindexes too, depending on whether\n> your usage patterns tickle the index-bloat problem.\n\n7.3, and yes, we have date indexes as well as sequences for primary\nkeys.\n \n> But it is easily\n> demonstrable that reindexing is cheaper than rebuilding the database.\n\nIOW, vacuum+reindex is faster than dump+restore? I didn't see this,\nthen again, I had this locking problem, so the stats are distorted.\n\nOne other question: The reindex seems to lock the table for the entire\nprocess as opposed to freeing the lock between index rebuilds. It was\nhard to see, but it seemed like the clients were locked for the entire\n\"reindex table bla\" command.\n\nSorry for lack of detail, but I didn't expect these issues so I wasn't\nkeeping track of the system state as closely as I should have. Next\ntime. :-)\n\nThanks,\nRob\n",
"msg_date": "Fri, 3 Oct 2003 15:47:01 -0600",
"msg_from": "Rob Nagler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 15:47:01 -0600,\n Rob Nagler <[email protected]> wrote:\n> > vacuum full does require exclusive lock, plain vacuum does not.\n> \n> I think I need full, because there are updates on the table. As I\n> understand it, an update in pg is an insert/delete, so it needs\n> to be garbage collected.\n\nPlain vacuum will mark the space used by deleted tuples as reusable.\nMost of the time this is good enough and you don't need to run vacuum full.\n",
"msg_date": "Fri, 3 Oct 2003 16:59:39 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Fri, 2003-10-03 at 17:47, Rob Nagler wrote:\n> They don't deadlock normally,\n> only with reindex and vacuum did I see this behavior.\n\nIf you can provide a reproducible example of a deadlock induced by\nREINDEX + VACUUM, that would be interesting.\n\n(FWIW, I remember noticing a potential deadlock in the REINDEX code and\nposting to -hackers about it, but I've never seen it occur in a\nreal-world situation...)\n\n> One other question: The reindex seems to lock the table for the entire\n> process as opposed to freeing the lock between index rebuilds.\n\nYeah, I wouldn't be surprised if there is some room for optimizing the\nlocks that are acquired by REINDEX.\n\n-Neil\n\n\n",
"msg_date": "Fri, 03 Oct 2003 18:11:45 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> > vacuum full does require exclusive lock, plain vacuum does not.\n>\n> I think I need full, because there are updates on the table. As I\n> understand it, an update in pg is an insert/delete, so it needs\n> to be garbage collected.\n\nYes and no. You only need a plain VACUUM that is run often enough to\nrecover space as fast as you need to grab it. For heavily updated tables\nrun it often - I run it every 5 minutes on some tables. A VACUUM FULL is\nonly needed if you haven't been running VACUUM often enough in the first\nplace.\n\n> The description of vacuum full implies that is required if the db\n> is updated frequently. This db gets about 1 txn a second, possibly\n> more at peak load.\n\nAssuming you mean 1 update/insert per second that is an absolutely _trivial_\nload on any reasonable hardware. You can do thousands of updates/second on\nhardware costing less than $2000. If you vacuum every hour then you will be\nfine.\n\n> IOW, vacuum+reindex is faster than dump+restore? I didn't see this,\n> then again, I had this locking problem, so the stats are distorted.\n\nREINDEX also locks tables like VACUUM FULL. Either is terribly slow, but\nunless you turn off fsync during the restore it's unlikely to be slower than\ndump & restore.\n\nMatt\n\n",
"msg_date": "Sat, 4 Oct 2003 00:21:32 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
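As a concrete sketch of the routine Matt describes (the table name is just a placeholder):

    -- frequent, non-blocking maintenance on heavily updated tables
    VACUUM ANALYZE hot_table;

    -- occasional repair only if you have fallen badly behind;
    -- this form takes an exclusive lock on the table
    VACUUM FULL hot_table;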
{
"msg_contents": "Rob,\n\n> > I think I need full, because there are updates on the table. As I\n> > understand it, an update in pg is an insert/delete, so it needs\n> > to be garbage collected.\n> \n> Yes and no. You only need a plain VACUUM that is run often enough to\n> recover space as fast as you need to grab it. For heavily updated tables\n> run it often - I run it every 5 minutes on some tables. A VACUUM FULL is\n> only needed if you haven't been running VACUUM often enough in the first\n> place.\n\nAlso, if you find that you need to run VACUUM FULL often, then you need to \nraise your max_fsm_pages.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 3 Oct 2003 16:24:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> > In summary, I suspect that it is better from a UI perspective to\n> > bring down the app on Sat at 3 a.m and reimport with a fixed time\n> > period than to live through reindexing/vacuuming which may deadlock.\n> > Am I missing something?\n>\n> Consider running pg_autovacuum, and thereby do a little bit of\n> vacuuming here and there all the time. It DOESN'T block, so unless\n> your system is really busy, it shouldn't slow things down to a major\n> degree.\n\nMy real world experience on a *very* heavily updated OLTP type DB, following\nadvice from this list (thanks guys!), is that there is essentially zero cost\nto going ahead and vacuuming as often as you feel like it. Go crazy, and\nspeed up your DB!\n\nOK, that's on a quad CPU box with goodish IO, so maybe there are issues on\nvery slow boxen, but in a heavy-update environment the advantages seem to\neasily wipe out the costs.\n\nMatt\n\np.s. Sorry to sound like a \"Shake'n'Vac\" advert.\n\n",
"msg_date": "Sat, 4 Oct 2003 00:29:55 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> Also, if you find that you need to run VACUUM FULL often, then\n> you need to\n> raise your max_fsm_pages.\n\nYes and no. If it's run often enough then the number of tracked pages\nshouldn't need to be raised, but then again...\n\n...max_fsm_pages should be raised anyway. I'm about to reclaim a Pentium\n166 w/ 64MB of RAM from a friend I lent it to _many_ years ago, and I\nsuspect PG would run happily on it as configured by default. Set it to at\nleast 50,000 I say. What do you have to lose, I mean if they're not free\nthen they're not tracked in the FSM right?\n\nOf course if anyone knows a reason _not_ to raise it then I'm all ears!\n\nMatt\n\n\n>\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n",
"msg_date": "Sat, 4 Oct 2003 00:44:33 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:\n> Not surprising either. While the reindex takes place, updates to that\n> table have to be deferred.\n\nRight, but that's no reason not to let SELECTs proceed, for example.\n(Whether that would actually be *useful* is another question...)\n\n-Neil\n\n\n",
"msg_date": "Fri, 03 Oct 2003 19:48:07 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> > Also, if you find that you need to run VACUUM FULL often, then\n> > you need to\n> > raise your max_fsm_pages.\n>\n> Yes and no. If it's run often enough then the number of tracked pages\n> shouldn't need to be raised, but then again...\n\nOops, sorry, didn't pay attention and missed the mention of FULL. My bad,\nignore my OT useless response.\n\n",
"msg_date": "Sat, 4 Oct 2003 01:14:50 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:\n>> Not surprising either. While the reindex takes place, updates to that\n>> table have to be deferred.\n\n> Right, but that's no reason not to let SELECTs proceed, for example.\n\nWhat if said SELECTs are using the index in question?\n\nI suspect it is true that REINDEX locks more than it needs to, but we\nshould tread carefully about loosening it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Oct 2003 23:49:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 02:24:42PM -0600, Rob Nagler wrote:\n> I've read some posts that says vacuum doesn't lock, but my experience\n> today indicates the opposite. It seemed that \"vacuum full analyze\"\n\nVACUUM doesn't. VACUUM FULL does.\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sat, 4 Oct 2003 11:18:54 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Sat, Oct 04, 2003 at 12:29:55AM +0100, Matt Clark wrote:\n> My real world experience on a *very* heavily updated OLTP type DB, following\n> advice from this list (thanks guys!), is that there is essentially zero cost\n> to going ahead and vacuuming as often as you feel like it. Go crazy, and\n> speed up your DB!\n\nThat's not quite true. If vacuums start running into each other, you\ncan very easily start eating up all your I/O bandwidth. Even if you\ngots lots of it.\n\nAlso, a vacuum pretty much destroys your shared buffers, so you have\nto be aware of that trade-off too. Vacuum is not free. It's _way_\ncheaper than it used to be, though.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sat, 4 Oct 2003 11:22:41 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Fri, Oct 03, 2003 at 11:49:03PM -0400, Tom Lane wrote:\n> \n> What if said SELECTs are using the index in question?\n\nThat's a good reason to build a new index and, when it's done, drop\nthe old one. It still prevents writes, of course.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sat, 4 Oct 2003 11:23:38 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
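A minimal sketch of the swap Andrew suggests (index, table, and column names are made up). Building the new index still blocks writes to the table, but reads can continue and the old index stays usable until it is dropped:

    CREATE INDEX orders_entry_date_new ON orders (entry_date);
    DROP INDEX orders_entry_date;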
{
"msg_contents": "> On Sat, Oct 04, 2003 at 12:29:55AM +0100, Matt Clark wrote:\n> > My real world experience on a *very* heavily updated OLTP type\n> DB, following\n> > advice from this list (thanks guys!), is that there is\n> essentially zero cost\n> > to going ahead and vacuuming as often as you feel like it. Go\n> crazy, and\n> > speed up your DB!\n>\n> That's not quite true. If vacuums start running into each other, you\n> can very easily start eating up all your I/O bandwidth. Even if you\n> gots lots of it.\n\nVery true, which is why all my scripts write a lockfile and delete it when\nthey're finished, to prevent that happening. I should have mentioned that.\n\n> Also, a vacuum pretty much destroys your shared buffers, so you have\n> to be aware of that trade-off too. Vacuum is not free. It's _way_\n> cheaper than it used to be, though.\n\nThat's _very_ interesting. I've never been quite clear what's in shared\nbuffers apart from scratch space for currently running transactions. Also\nthe docs imply that vacuum uses it's own space for working in. Do you have\nmore info on how it clobbers shared_buffers?\n\nM\n\n",
"msg_date": "Sun, 5 Oct 2003 12:14:24 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Sun, Oct 05, 2003 at 12:14:24PM +0100, Matt Clark wrote:\n> more info on how it clobbers shared_buffers?\n\nVacuum is like a seqscan. It touches everything on a table. So it\ndoesn't clobber them, but that's the latest data. It's unlikely your\nbuffers are big enough to hold your database, unless your database is\nsmall. So you'll end up expiring potentially useful data in the\nbuffer.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sun, 5 Oct 2003 10:34:31 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "> On Sun, Oct 05, 2003 at 12:14:24PM +0100, Matt Clark wrote:\n> > more info on how it clobbers shared_buffers?\n>\n> Vacuum is like a seqscan. It touches everything on a table. So it\n> doesn't clobber them, but that's the latest data. It's unlikely your\n> buffers are big enough to hold your database, unless your database is\n> small. So you'll end up expiring potentially useful data in the\n> buffer.\n\nOK I'm definitely missing something here. I thought that the FSM was there\nto keep track of potentially free pages, and that all VACUUM did was double\ncheck and then write that info out for all to see? The promise being that a\nVACUUM FULL will walk all pages on disk and do a soft-shoe-shuffle to\naggresively recover space, but a simple VACUUM won't (merely confirming\npages as available for reuse).\n\nAs for buffers, my understanding is that they are *not* meant to be big\nenough to hold the DB, as PG explicitly leaves caching up to the underlying\nOS. 'buffers' here meaning shared memory between PG processes, and 'cache'\nmeaning OS cache. 'buffers' only need to be big enough to hold the\nintermediate calcs and the results for any current transactions?\n\nM\n\n",
"msg_date": "Sun, 5 Oct 2003 17:46:21 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "\"Matt Clark\" <[email protected]> writes:\n> OK I'm definitely missing something here.\n\nThe point is that a big seqscan (either VACUUM or a plain table scan)\nhits a lot of pages, and thereby tends to fill your cache with pages\nthat aren't actually likely to get hit again soon, perhaps pushing out\npages that will be needed again soon. This happens at both the\nshared-buffer and kernel-disk-cache levels of caching.\n\nIt would be good to find some way to prevent big seqscans from\npopulating cache, but I don't know of any portable way to tell the OS\nthat we don't want it to cache a page we are reading.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Oct 2003 13:11:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "> The point is that a big seqscan (either VACUUM or a plain table scan)\n> hits a lot of pages, and thereby tends to fill your cache with pages\n> that aren't actually likely to get hit again soon, perhaps pushing out\n> pages that will be needed again soon. This happens at both the\n> shared-buffer and kernel-disk-cache levels of caching.\n\nOK, I had thought (wrongly it seems, as usual, but this is how we learn!)\nthat a plain VACUUM did not incur a read of all pages. I still don't\nunderstand *why* it does, but I'll take your word for it.\n\nClearly if it distorts the 'normal' balance of pages in any caches, PG's or\nthe OS's, that's a _bad thing_. I am currently in the nice position of\nhaving a DB that (just about) fits in RAM, so I pretty much don't care about\nread performance, but I will have to soon as it grows beyond 3GB :-( These\nconversations are invaluable in planning for that dread time...\n\n> It would be good to find some way to prevent big seqscans from\n> populating cache, but I don't know of any portable way to tell the OS\n> that we don't want it to cache a page we are reading.\n\nQuite. The only natural way would be to read those pages through some\nspecial device, but then you might as well do raw disk access from the\nget-go. Portability vs. Performance, the age old quandary. FWIW I and many\nothers stand back in pure amazement at the sheer _quality_ of PostgreSQL.\n\n\nRgds,\n\nMatt\n\n\n",
"msg_date": "Sun, 5 Oct 2003 18:59:14 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "\"Matt Clark\" <[email protected]> writes:\n> OK, I had thought (wrongly it seems, as usual, but this is how we learn!)\n> that a plain VACUUM did not incur a read of all pages. I still don't\n> understand *why* it does, but I'll take your word for it.\n\nMainly 'cause it doesn't know where the dead tuples are till it's\nlooked. Also, VACUUM is the data collector for the free space map,\nand so it is also charged with finding out how much free space exists\non every page.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Oct 2003 14:07:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "> Mainly 'cause it doesn't know where the dead tuples are till it's\n> looked.\n\nAt this point I feel very stupid...\n\n> Also, VACUUM is the data collector for the free space map,\n> and so it is also charged with finding out how much free space exists\n> on every page.\n\nAh, now I just feel enlightened! That makes perfect sense. I think I had\nbeen conflating free pages with free space, without understanding what the\ndifference was. Of course I still don't really understand, but at least I\nnow _know_ I don't.\n\nMany thanks\n\nMatt\n\n",
"msg_date": "Sun, 5 Oct 2003 19:43:17 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "After a long battle with technology,[email protected] (\"Matt Clark\"), an earthling, wrote:\n>> The point is that a big seqscan (either VACUUM or a plain table scan)\n>> hits a lot of pages, and thereby tends to fill your cache with pages\n>> that aren't actually likely to get hit again soon, perhaps pushing out\n>> pages that will be needed again soon. This happens at both the\n>> shared-buffer and kernel-disk-cache levels of caching.\n>\n> OK, I had thought (wrongly it seems, as usual, but this is how we learn!)\n> that a plain VACUUM did not incur a read of all pages. I still don't\n> understand *why* it does, but I'll take your word for it.\n\nHow does it know what to do on any given page if it does not read it\nin? It has to evaluate whether tuples can be thrown away or not, and\nthat requires looking at the tuples. It may only be looking at a\nsmall portion of the page, but that still requires reading each page.\n\nNo free lunch, unfortunately...\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','cbbrowne.com').\nhttp://www3.sympatico.ca/cbbrowne/sgml.html\n\"End users are just test loads for verifying that the system works,\nkind of like resistors in an electrical circuit.\"\n-- Kaz Kylheku in c.o.l.d.s\n",
"msg_date": "Sun, 05 Oct 2003 17:57:42 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Sat, 2003-10-04 at 11:22, Andrew Sullivan wrote:\n> Also, a vacuum pretty much destroys your shared buffers, so you have\n> to be aware of that trade-off too.\n\nTrue, although there is no reason that this necessary needs to be the\ncase (at least, as far as the PostgreSQL shared buffer goes). As has\nbeen pointed out numerous times on -hackers and in the literature, using\nLRU for a DBMS shared buffer cache is far from optimal, and better\nalgorithms have been proposed (e.g. LRU-K, ARC). We could even have the\nVACUUM command inform the bufmgr that the pages it is in the process of\nreading in are part of a seqscan, and so are unlikely to be needed in\nthe immediate future.\n\n-Neil\n\n\n",
"msg_date": "Sun, 05 Oct 2003 19:32:47 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> ... We could even have the\n> VACUUM command inform the bufmgr that the pages it is in the process of\n> reading in are part of a seqscan, and so are unlikely to be needed in\n> the immediate future.\n\nThis would be relatively easy to fix as far as our own buffering is\nconcerned, but the thing that's needed to make it really useful is\nto prevent caching of seqscan-read pages in the kernel disk buffers.\nI don't know any portable way to do that :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Oct 2003 19:43:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "On Sun, 2003-10-05 at 19:43, Tom Lane wrote:\n> This would be relatively easy to fix as far as our own buffering is\n> concerned, but the thing that's needed to make it really useful is\n> to prevent caching of seqscan-read pages in the kernel disk buffers.\n\nTrue.\n\n> I don't know any portable way to do that :-(\n\nFor the non-portable way of doing this, are you referring to O_DIRECT?\n\nEven if it isn't available everywhere, it might be worth considering\nthis at least for the platforms on which it is supported.\n\n-Neil\n\n\n",
"msg_date": "Sun, 05 Oct 2003 19:50:35 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Sun, Oct 05, 2003 at 07:32:47PM -0400, Neil Conway wrote:\n\n> been pointed out numerous times on -hackers and in the literature, using\n> LRU for a DBMS shared buffer cache is far from optimal, and better\n> algorithms have been proposed (e.g. LRU-K, ARC). We could even have the\n> VACUUM command inform the bufmgr that the pages it is in the process of\n> reading in are part of a seqscan, and so are unlikely to be needed in\n> the immediate future.\n\nHey, when that happens, you'll find me first in line to praise the\nimplementor; but until then, it's important that people not get the\nidea that vacuum is free.\n\nIt is _way_ imporved, and on moderately loaded boxes, it'salmost\nunnoticable. But under heavy load, you need to be _real_ careful\nabout calling vacuum. I think one of the biggest needs in the AVD is\nsome sort of intelligence about current load on the postmaster, but I\nhaven't the foggiest idea how to give it such intelligence.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Sun, 5 Oct 2003 22:01:03 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "\n> This would be relatively easy to fix as far as our own buffering is\n> concerned, but the thing that's needed to make it really useful is\n> to prevent caching of seqscan-read pages in the kernel disk buffers.\n> I don't know any portable way to do that :-(\n\nraw disc ? :-)\n\n\n\n",
"msg_date": "Mon, 6 Oct 2003 11:07:01 +0800",
"msg_from": "\"Ronald Khoo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Sun, 5 Oct 2003, Neil Conway wrote:\n\n>\n> > I don't know any portable way to do that :-(\n>\n> For the non-portable way of doing this, are you referring to O_DIRECT?\n>\n> Even if it isn't available everywhere, it might be worth considering\n> this at least for the platforms on which it is supported.\n>\n\nI strongly agree here only if we can prove there is a benefit.\nI think it would be silly of us if some OS supported SnazzyFeatureC that\nwas able to speed up PG by a large percentage (hopefully, in a rather\nnon-invasive way in the code). But, I do see the problem here with bloat\nand PG being radically different platform to platform. I suppose we could\ndictate that at least N os's had to have it.. or perhaps supply it has\ncontrib/ patches.... Something to think about.\n\nI'd be interested in tinkering with this, but I'm more interested at the\nmoment of why (with proof, not antecdotal) Solaris is so much slower than\nLinux and what we cna do about this. We're looking to move a rather large\nInformix db to PG and ops has reservations about ditching Sun hardware.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Mon, 6 Oct 2003 08:07:27 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Mon, Oct 06, 2003 at 08:07:27AM -0400, Jeff wrote:\n> I strongly agree here only if we can prove there is a benefit.\n\nThere's plenty of academic work which purports to show that LRU is\nfar from the best choice. Just in principle, it seems obvious that a\nsingle-case seqscan-type operation (such as vacuum does) is a good\nway to lose your cache for no real gain.\n\n> I'd be interested in tinkering with this, but I'm more interested at the\n> moment of why (with proof, not antecdotal) Solaris is so much slower than\n> Linux and what we cna do about this. We're looking to move a rather large\n> Informix db to PG and ops has reservations about ditching Sun hardware.\n\nInterestingly, we're contemplating ditching Solaris because of the\nterrible reliability we're getting from the hardware.\n\nYou can use truss to find some of the problems on Solaris. The\nopen() syscall takes forever when you don't hit the Postgres shared\nbuffers (even if you can be sure the page is in filesystem buffers --\nwe could demonstrate it on a 1 G database on a machine with 10 G of\nRAM). I've heard grumblings about spinlocks on Solaris which might\nexplain this problem. I certainly notice that performance gets\ngeometrically worse when you add a few hundred extra connections.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 6 Oct 2003 08:15:00 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "Jeff wrote:\n> I'd be interested in tinkering with this, but I'm more interested at the\n> moment of why (with proof, not antecdotal) Solaris is so much slower than\n> Linux and what we cna do about this. We're looking to move a rather large\n> Informix db to PG and ops has reservations about ditching Sun hardware.\n\nIs linux on sparc hardware is an option..:-)\n\n Shridhar\n\n",
"msg_date": "Mon, 06 Oct 2003 17:49:51 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "On Mon, 6 Oct 2003, Andrew Sullivan wrote:\n\n> There's plenty of academic work which purports to show that LRU is\n> far from the best choice. Just in principle, it seems obvious that a\n> single-case seqscan-type operation (such as vacuum does) is a good\n> way to lose your cache for no real gain.\n>\n\nLogically bypassing caches for a seq scan also makes sense.\n\n> Interestingly, we're contemplating ditching Solaris because of the\n> terrible reliability we're getting from the hardware.\n>\n\nThe reason ops likes solaris / sun is twofold. 1. we have a pile of big\nsun machines around. 2. Solaris / Sun is quite a bit more graceful in the\negvent of a hardware failure. We've burned out our fair share of cpu's\netc and solaris has been rather graceful about it.\n\nI've started profiling and running tests... currently it is leaning\ntowards the sysv semaphores. I see in src/backend/port/ that pg_sema.c is\nlinked to the sysv implementation. So what I did was create a\nsemaphore set, and then fired off 5 copies of a program that attaches\nto that semaphore and then locks/unlocks it 1M times.\n\n2xP2-450, Linux 2.4.18: 1 process: 221680 / sec, 5 process: 98039 / sec\n4xUltraSparc II-400Mhz, Solaris 2.6: 1 proc: 142857 / sec, 5 process:\n23809\n\nSo I'm guessing that is where a LOT of the suck is coming from.\n\nWhat I plan to do next is looking to see if there are other interprocess\nlocking mechanisms on solaris (perhaps pthread_mutex with that\ninter-process flag or something) to see if I can get those numbers a\nlittle closer.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Mon, 6 Oct 2003 09:33:57 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "locking/performance, Solaris performance discovery"
},
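A rough reconstruction of the kind of micro-benchmark Jeff describes (an assumption about its shape, not his actual program): run one copy with the argument init to create and initialize the semaphore, then start several plain copies concurrently and time them.

    /*
     * sembench.c -- lock/unlock a SysV semaphore in a tight loop.
     * Some platforms already define union semun in <sys/sem.h>; adjust if so.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    #define SEM_KEY 0x5ca1ab1e          /* arbitrary key shared by all copies */
    #define LOOPS   1000000

    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    int main(int argc, char **argv)
    {
        int semid = semget(SEM_KEY, 1, IPC_CREAT | 0600);
        struct sembuf lock   = { 0, -1, 0 };    /* P(): acquire */
        struct sembuf unlock = { 0,  1, 0 };    /* V(): release */
        long i;

        if (semid < 0) {
            perror("semget");
            return 1;
        }
        if (argc > 1 && strcmp(argv[1], "init") == 0) {
            union semun arg;
            arg.val = 1;                        /* run this once, before the timed copies */
            if (semctl(semid, 0, SETVAL, arg) < 0) {
                perror("semctl");
                return 1;
            }
            return 0;
        }
        for (i = 0; i < LOOPS; i++) {
            if (semop(semid, &lock, 1) < 0 || semop(semid, &unlock, 1) < 0) {
                perror("semop");
                return 1;
            }
        }
        return 0;
    }

Timing N concurrent copies of the loop is what yields lock/unlock-per-second figures like the ones quoted above.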
{
"msg_contents": "Jeff <[email protected]> writes:\n> I've started profiling and running tests... currently it is leaning\n> towards the sysv semaphores. I see in src/backend/port/ that pg_sema.c is\n> linked to the sysv implementation.\n\nDoes Solaris have Posix semaphores? You could try using those instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Oct 2003 10:30:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking/performance, Solaris performance discovery "
},
{
"msg_contents": "On Mon, 6 Oct 2003, Tom Lane wrote:\n\n>\n> Does Solaris have Posix semaphores? You could try using those instead.\n>\n> \t\t\tregards, tom lane\n\nYep. It does.\n\nI learned them quick enough (using posix_sema.c as a guide)\nand found out that at least on Sol 2.6 they are slower than sysv - with 5\nprocesses it went to about 16k lock/unlock a second.\n\nI'm going to try to find a box around here I can get sol(8|9) on that has\nsufficient disk space and see. I'm guessing sun has likely made\nimprovements...\n\n\nAnother odd thing I'm trying to work out is why my profiles come out so\nradically different on the linux box and the sun box.\n\nSun:\n 31.17 18.90 18.90 internal_mcount\n 19.10 30.48 11.58 8075381 0.00 0.00 _bt_checkkeys\n 5.66 33.91 3.43 24375253 0.00 0.00 FunctionCall2\n 4.82 36.83 2.92 8073010 0.00 0.00 _bt_step\n 3.51 38.96 2.13 14198 0.15 0.15 _read\n 2.77 40.64 1.68 8069040 0.00 0.00 varchareq\n 2.59 42.21 1.57 28454 0.06 0.23 _bt_next\n 2.29 43.60 1.39 1003 1.39 1.40 AtEOXact_Buffers\n 1.86 44.73 1.13 16281197 0.00 0.00 pg_detoast_datum\n 1.81 45.83 1.10 _mcount\n 1.68 46.85 1.02 2181 0.47 0.47 pglz_decompress\n\n\nLinux:\n 11.14 0.62 0.62 1879 0.00 0.00 pglz_decompress\n 6.71 0.99 0.37 1004 0.00 0.00 AtEOXact_Buffers\n 3.80 1.20 0.21 1103045 0.00 0.00 AllocSetAlloc\n 3.23 1.38 0.18 174871 0.00 0.00 nocachegetattr\n 2.92 1.54 0.16 1634957 0.00 0.00 AllocSetFreeIndex\n 2.50 1.68 0.14 20303 0.00 0.00 heapgettup\n 1.93 1.79 0.11 1003 0.00 0.00 AtEOXact_CatCache\n 1.76 1.89 0.10 128442 0.00 0.00 hash_any\n 1.72 1.98 0.10 90312 0.00 0.00 FunctionCall3\n 1.69 2.08 0.09 50632 0.00 0.00 ExecTargetList\n 1.60 2.17 0.09 51647 0.00 0.00 heap_formtuple\n 1.55 2.25 0.09 406162 0.00 0.00 newNode\n 1.46 2.33 0.08 133044 0.00 0.00 hash_search\n\nIt is the same query with slightly different data (The Sun has probably..\n20-40k more rows in the table the query hits).\n\nI'll be digging up more info later today.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Mon, 6 Oct 2003 11:16:54 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking/performance, Solaris performance discovery"
},
{
"msg_contents": "On Mon, 2003-10-06 at 05:15, Andrew Sullivan wrote:\n> There's plenty of academic work which purports to show that LRU is\n> far from the best choice. Just in principle, it seems obvious that a\n> single-case seqscan-type operation (such as vacuum does) is a good\n> way to lose your cache for no real gain.\n\n\nTraditionally, seqscan type operations are accommodated in LRU type\nmanagers by having multiple buffer promotion policies, primarily because\nit is simple to implement. For example, if you are doing a seqscan, a\nbuffer loaded from disk is never promoted to the top of the LRU. \nInstead it is only partially promoted (say, halfway for example) toward\nthe top of the buffer list. A page that is already in the buffer is\npromoted either to the halfway point or top depending on where it was\nfound. There are numerous variations on the idea, some being more\nclever and complex than others. \n\nThe point of this being that a pathological or rare sequential scan can\nnever trash more than a certain percentage of the cache, while not\nsignificantly impacting the performance of a sequential scan. The\nprimary nuisance is that it slightly increases the API complexity. I'll\nadd that I don't know what PostgreSQL actually does in this regard, but\nfrom the thread it appears as though seqscans are handled like the\ndefault case.\n\nCheers,\n\n-James Rogers\n [email protected]\n\n\n",
"msg_date": "06 Oct 2003 09:55:38 -0700",
"msg_from": "James Rogers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Seqscan buffer promotion (was: reindex/vacuum locking/performance?)"
},
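A toy illustration of the partial-promotion idea (this is not PostgreSQL's buffer manager; the array representation and fixed midpoint are simplifications): cache[0] is the MRU end, and accesses made on behalf of a sequential scan are never promoted past the midpoint, so a large scan can displace at most half of the cache.

    #include <stdio.h>
    #include <string.h>

    #define N 8

    static int cache[N];                    /* page numbers; 0 means an empty slot */

    static void touch_page(int page, int seqscan)
    {
        int target = seqscan ? N / 2 : 0;   /* how far this access may promote */
        int i, pos = -1;

        for (i = 0; i < N; i++)
            if (cache[i] == page) { pos = i; break; }
        if (pos < 0)
            pos = N - 1;                    /* miss: reuse the LRU slot */
        if (pos < target)
            return;                         /* already above the allowed spot */

        memmove(&cache[target + 1], &cache[target], (pos - target) * sizeof(int));
        cache[target] = page;
    }

    int main(void)
    {
        int p;

        for (p = 1; p <= 4; p++)    touch_page(p, 0);   /* ordinary traffic */
        for (p = 100; p < 120; p++) touch_page(p, 1);   /* big sequential scan */

        for (p = 0; p < N; p++)
            printf("%d ", cache[p]);        /* pages 1..4 still hold the upper half */
        printf("\n");
        return 0;
    }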
{
"msg_contents": "On Sun, 2003-10-05 at 19:50, Neil Conway wrote:\n> On Sun, 2003-10-05 at 19:43, Tom Lane wrote:\n> > This would be relatively easy to fix as far as our own buffering is\n> > concerned, but the thing that's needed to make it really useful is\n> > to prevent caching of seqscan-read pages in the kernel disk buffers.\n\n> For the non-portable way of doing this, are you referring to O_DIRECT?\n\nI was hoping you'd reply to this, Tom -- you were referring to O_DIRECT,\nright?\n\n(If you were referring to O_DIRECT, I wanted to add that I wouldn't be\nsurprised if using O_DIRECT on many kernels reduces or eliminates any\nreadahead the OS will be doing on the sequential read, so the net result\nmay actually be a loss for a typical seqscan.)\n\n-Neil\n\n\n",
"msg_date": "Mon, 06 Oct 2003 14:14:29 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> On Sun, 2003-10-05 at 19:50, Neil Conway wrote:\n> I was hoping you'd reply to this, Tom -- you were referring to O_DIRECT,\n> right?\n\nNot necessarily --- as you point out, it's not clear that O_DIRECT would\nhelp us. What would be way cool is something similar to what James\nRogers was talking about: a way to tell the kernel not to promote this\npage all the way to the top of its LRU list. I'm not sure that *any*\nUnixen have such an API, let alone one that's common across more than\none platform :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Oct 2003 14:26:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "--On Monday, October 06, 2003 14:26:10 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Neil Conway <[email protected]> writes:\n>> On Sun, 2003-10-05 at 19:50, Neil Conway wrote:\n>> I was hoping you'd reply to this, Tom -- you were referring to O_DIRECT,\n>> right?\n>\n> Not necessarily --- as you point out, it's not clear that O_DIRECT would\n> help us. What would be way cool is something similar to what James\n> Rogers was talking about: a way to tell the kernel not to promote this\n> page all the way to the top of its LRU list. I'm not sure that *any*\n> Unixen have such an API, let alone one that's common across more than\n> one platform :-(\nI think Verita's VxFS has this as an option/IOCTL.\n\nYou can read the Veritas doc on my\nhttp://www.lerctr.org:8458/\n\npages under filesystems.\n\nThat should work on UnixWare and Solaris sites that have VxFS installed.\n\nVxFS is standard on UW.\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Mon, 06 Oct 2003 13:39:10 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance? "
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <[email protected]> writes:\n> > On Sun, 2003-10-05 at 19:50, Neil Conway wrote:\n> > I was hoping you'd reply to this, Tom -- you were referring to O_DIRECT,\n> > right?\n> \n> Not necessarily --- as you point out, it's not clear that O_DIRECT would\n> help us. What would be way cool is something similar to what James\n> Rogers was talking about: a way to tell the kernel not to promote this\n> page all the way to the top of its LRU list. I'm not sure that *any*\n> Unixen have such an API, let alone one that's common across more than\n> one platform :-(\n\nSolaris has \"free-behind\", which prevents a large kernel sequential scan\nfrom blowing out the cache.\n\nI only read about it in the Mauro Solaris Internals book, and it seems\nto be done automatically. I guess most OS's don't do this optimization\nbecause they usually don't read files larger than their cache.\n\nI see BSD/OS madvise() has:\n\n #define MADV_NORMAL 0 /* no further special treatment */\n #define MADV_RANDOM 1 /* expect random page references */\n #define MADV_SEQUENTIAL 2 /* expect sequential references */\n #define MADV_WILLNEED 3 /* will need these pages */\n--> #define MADV_DONTNEED 4 /* don't need these pages */\n #define MADV_SPACEAVAIL 5 /* insure that resources are reserved */\n\nThe marked one seems to have the control we need. Of course, the kernel\nmadvise() code has:\n\n\t/* Not yet implemented */\n\nLooks like NetBSD implements it, but it also unmaps the page from the\naddress space, which might be more than we want. NetBSD alao has:\n\n #define MADV_FREE 6 /* pages are empty, free them */\n\nwhich frees the page. I am unclear on its us.\n\nFreeBSD has this comment:\n\n/*\n * vm_page_dontneed\n *\n * Cache, deactivate, or do nothing as appropriate. This routine\n * is typically used by madvise() MADV_DONTNEED.\n *\n * Generally speaking we want to move the page into the cache so\n * it gets reused quickly. However, this can result in a silly syndrome\n * due to the page recycling too quickly. Small objects will not be\n * fully cached. On the otherhand, if we move the page to the inactive\n * queue we wind up with a problem whereby very large objects\n * unnecessarily blow away our inactive and cache queues.\n *\n * The solution is to move the pages based on a fixed weighting. We\n * either leave them alone, deactivate them, or move them to the cache,\n * where moving them to the cache has the highest weighting.\n * By forcing some pages into other queues we eventually force the\n * system to balance the queues, potentially recovering other unrelated\n * space from active. The idea is to not force this to happen too\n * often.\n */\n\nThe Linux comment is:\n\n/*\n * Application no longer needs these pages. If the pages are dirty,\n * it's OK to just throw them away. The app will be more careful about\n * data it wants to keep. Be sure to free swap resources too. The\n * zap_page_range call sets things up for refill_inactive to actually free\n * these pages later if no one else has touched them in the meantime,\n * although we could add these pages to a global reuse list for\n * refill_inactive to pick up before reclaiming other pages.\n *\n * NB: This interface discards data rather than pushes it out to swap,\n * as some implementations do. This has performance implications for\n * applications like large transactional databases which want to discard\n * pages in anonymous maps after committing to backing store the data\n * that was kept in them. 
There is no reason to write this data out to\n * the swap area if the application is discarding it.\n *\n * An interface that causes the system to free clean pages and flush\n * dirty pages is already available as msync(MS_INVALIDATE).\n */\n\nIt seems mmap is more for controlling the memory mapping of files rather\nthan controlling the cache itself.\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 6 Oct 2003 14:56:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
},
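A small sketch of the madvise() calls under discussion, for illustration only: PostgreSQL reads heap files with plain read() rather than mmap(), and, as the kernel excerpts above show, whether a given platform honors these hints varies.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd;
        struct stat st;
        char *p;
        long i, sum = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(argv[1]);
            return 1;
        }
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        madvise(p, st.st_size, MADV_SEQUENTIAL);   /* one pass, in order */
        for (i = 0; i < st.st_size; i++)
            sum += p[i];                           /* stand-in for the real scan */
        madvise(p, st.st_size, MADV_DONTNEED);     /* hint: no need to keep these pages cached */

        printf("checksum: %ld\n", sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }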
{
"msg_contents": "Stepping out on a limb... (I'm not a disk kernel guy)\n\nI have long thought that as part of a cache descriptor, there should be a\nprocess-definable replacement-strategy (RS). Each cache entry would be\nassociated to each process's replacement-strategy variable and the\npage-replacement algorithm would then take into consideration the desired\npolicy. Imagine for simplicity sake, that each strategy gets its own cache\ntable. When it comes time to replace a page, the system scans the cache\ntables, picks the most likely page for replacement from each table, then\nselects the most likely page between all policies. This allows the 99% of\napps that can make excellent use of use LRU to use LRU among themselves\n(best for everyone), and the MRU (better for databases) (best for everyone\ntoo) to only sacrifice the best pages between MRU apps. Though, once you\nhave an MRU process, the final decision between taking the page should be\nuse MRU, and not LRU. Of course there are a number of questions: does each\nRS get its own table, to be managed independently, or can we combine them\nall into one table? What are the performance implications of the multiple\ntable management?\n\nOne day, I'd like to see function pointers and kernel modules used as ways\nfor apps to manage replacement policy. fantasyland# insmod MRU.o\nfantasyland# vi postgresql.conf { replacement_policy=MRU }\n{meanwhile in some postgre .c file:}\nset_cache_policy(get_cfg_replacement_policy());\nfantasyland# service postmaster restart\n\nAnyone want to throw this at the kernel developers?\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Monday, October 06, 2003 2:26 PM\n> To: Neil Conway\n> Cc: Andrew Sullivan; PostgreSQL Performance\n> Subject: Re: [PERFORM] reindex/vacuum locking/performance?\n>\n>\n> Neil Conway <[email protected]> writes:\n> > On Sun, 2003-10-05 at 19:50, Neil Conway wrote:\n> > I was hoping you'd reply to this, Tom -- you were referring to O_DIRECT,\n> > right?\n>\n> Not necessarily --- as you point out, it's not clear that O_DIRECT would\n> help us. What would be way cool is something similar to what James\n> Rogers was talking about: a way to tell the kernel not to promote this\n> page all the way to the top of its LRU list. I'm not sure that *any*\n> Unixen have such an API, let alone one that's common across more than\n> one platform :-(\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 06 Oct 2003 15:14:38 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex/vacuum locking/performance?"
}
] |
[
{
"msg_contents": "Ok, I asked this on [novice], but I was told it's be better to post it\nhere...\n\nI've got some money to spend on a new servers. The biggest concern is the\nPostgreSQL database server that will \"be the company.\" (*Everyone* uses the\ndatabase server in some form or another) I'm looking for hot-swappable RAID\n1 on a Linux platform at the least. Are there any vendors to avoid or\nprefer? What works best? Am I better off going with a DIY or getting\nsomething pre-packaged?\n\nIn terms of numbers, we expect have an average of 100 active connections\n(most of which are idle 9/10ths of the time), with about 85% reading\ntraffic. I hope to have one server host about 1000-2000 active databases,\nwith the largest being about 60 meg (no blobs). Inactive databases will only\nbe for reading (archival) purposes, and will seldom be accessed. (I could\nprobably move them off to another server with a r/o disk...)\n\nDoes any of this represent a problem for Postgres? The datasets are\ntypically not that large, only a few queries on a few databases ever return\nover 1000 rows.\n\nThe configuration that is going on in my head is:\nRAID 1, 200gig disks\n1 server, 4g ram\nLinux 2.4 or 2.6 (depends on when we deploy and 2.6's track record at that\ntime)\n\nI want something that can do hot-swaps and auto-mirroring after swap.\nUnfortunately, this is a new area for me. (I normally stick to S/W for\nnon-high end systems)\n\nThanks!\n\n\nJason Hihn\nPaytime Payroll\n\n\n",
"msg_date": "Mon, 06 Oct 2003 11:17:00 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shopping for hardware"
},
{
"msg_contents": "On Mon, 6 Oct 2003, Jason Hihn wrote:\n\n> Ok, I asked this on [novice], but I was told it's be better to post it\n> here...\n> \n> I've got some money to spend on a new servers. The biggest concern is the\n> PostgreSQL database server that will \"be the company.\" (*Everyone* uses the\n> database server in some form or another) I'm looking for hot-swappable RAID\n> 1 on a Linux platform at the least. Are there any vendors to avoid or\n> prefer? What works best? Am I better off going with a DIY or getting\n> something pre-packaged?\n\nDepends on your hardware expertise. You can do quite well either way. I \nprefer adding my own components to a pre-built vanilla server.\n\n> In terms of numbers, we expect have an average of 100 active connections\n> (most of which are idle 9/10ths of the time), with about 85% reading\n> traffic. I hope to have one server host about 1000-2000 active databases,\n> with the largest being about 60 meg (no blobs). Inactive databases will only\n> be for reading (archival) purposes, and will seldom be accessed. (I could\n> probably move them off to another server with a r/o disk...)\n\nThat's not a really big load, but I'm guessing the peaks will be big \nenough to notice.\n\n> Does any of this represent a problem for Postgres? The datasets are\n> typically not that large, only a few queries on a few databases ever return\n> over 1000 rows.\n\nNah, this is pretty normal stuff for Postgresql or any other database in \nits approximate class (Sybase, Oracle, Informix, DB2, MSSQL2k).\n\n> The configuration that is going on in my head is:\n> RAID 1, 200gig disks\n> 1 server, 4g ram\n> Linux 2.4 or 2.6 (depends on when we deploy and 2.6's track record at that\n> time)\n\nThat's a good starting point. I'd avoid 2.6 until it's had time for the \nbugs to drop out. The latest 2.4 kernels are pretty stable.\n\nList of things to include if you need more performance, in order of \npriority:\n\nproper tuning of the postgresql.conf file (see \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html)\nhardware RAID card with battery backed cache, the bigger the cache the \nbetter.\nmore drives for RAID 1+0\nfaster CPUs. \n\nsince you've already got 4 gigs of RAM slated, you're set there on linux, \nwhere having more won't likely help a lot unless you go to a 64 bit \nplatform.\n\n> I want something that can do hot-swaps and auto-mirroring after swap.\n> Unfortunately, this is a new area for me. (I normally stick to S/W for\n> non-high end systems)\n\nThe LSI/Megaraid cards can handle hot swaps quite well, make sure you get \nthe right kind of hot swap shoes so they isolate the drive from the buss \nwhen you turn it off and they don't lock up your scsi buss.\n\n",
"msg_date": "Mon, 6 Oct 2003 09:43:35 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shopping for hardware"
},
{
"msg_contents": ">>>>> \"JH\" == Jason Hihn <[email protected]> writes:\n\nJH> The configuration that is going on in my head is:\nJH> RAID 1, 200gig disks\nJH> 1 server, 4g ram\nJH> Linux 2.4 or 2.6 (depends on when we deploy and 2.6's track record at that\nJH> time)\n\nMy recommendation is to get more disks (smaller and faster) rather\nthan a few large ones. As for vendors, I always buy from Dell because\nthey actually honor their \"4-hour 24x7 replacement parts with\ntechnician to stick 'em\" in guarantee. That and their hardware is\nrock solid and non-funky (ie, I can run FreeBSD on it with no issues).\n\nHere's my latest setup I just got:\n\nDell PE 2650, dual Xeon processors (lowest speed they sell, as this is\nnot a bottleneck)\n4Gb RAM\nDell PERC3 RAID controller (rebranded AMI controller) dual channel\n2x 18Gb internal disks on RAID1 (RAID channel0)\n14x 18Gb external disks on RAID5 (RAID channel1, see my posts on this\nlist from a month or so ago on how I arrived at RAID5).\n\nAll the disks are SCSI 15kRPM U320 drives, tho the controller only\ndoes U160.\n\nI run FreeBSD, but it should run linux just fine, too.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Mon, 06 Oct 2003 12:21:57 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shopping for hardware"
},
{
"msg_contents": "Jason,\n\n> In terms of numbers, we expect have an average of 100 active connections\n> (most of which are idle 9/10ths of the time), with about 85% reading\n> traffic. I hope to have one server host about 1000-2000 active databases,\n> with the largest being about 60 meg (no blobs). Inactive databases will\n> only be for reading (archival) purposes, and will seldom be accessed. (I\n> could probably move them off to another server with a r/o disk...)\n\nHey, two people (one of them me) suggested that rather than putting all 2000 \ndatabases on one $15,000 server, that you buy 3 $5000 servers and split \nthings up. You may have considered this suggestion and rejected it, but \nI'mm wondering if you missed it ...\n\nIf you're lumping everything on one server, you'll need to remember to \nincrease max_fsm_relations to the total number of tables in all databases ... \nfor example, for 10 tables in 2000 databases you'll want a setting of 20000 \n(which sounds huge but it's really only about 1mb memory).\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 6 Oct 2003 09:33:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shopping for hardware"
}
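An illustrative postgresql.conf excerpt for the setup Josh describes; the numbers are only examples of the knobs involved, not tuned recommendations:

    max_fsm_relations = 20000      # roughly 10 tables per database x 2000 databases
    max_fsm_pages = 200000         # consider raising this too if many of those tables see heavy updates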
] |
[
{
"msg_contents": "Ran the test on another linux box - the one that generated the dump the\nsun loaded (which should have similar data...) and I got a profile plan\nsimilar to the Sun. Which makes me feel more comfortable.\n\nStill interesting why that other box gave me the different profile.\nNow off the fun and exciting world of seeing what I can do about it.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Mon, 6 Oct 2003 13:58:57 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "SOlaris updates"
}
] |
[
{
"msg_contents": "Tom,\n\nI've found the problem with TPC-R query #19. And it, unfortunately, appears \nto be a problem in the PostgreSQL query planner.\n\nTo sum up the below: it appears that whenever a set of WHERE conditions \nexceeds a certain level of complexity, the planner just ignores all \napplicable indexes and goes for a seq scan. While this may be unavoidable \nto some degree, it seems to me that we need to raise the threshold of \ncomplexity at which it does this.\n\ntpcr=# select version();\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3 20030226 \n(prerelease) (SuSE Linux)\n(1 row)\n\nI've tested a number of indexes on the query, and found the two most efficient \non subsets of the query. Thus:\n\nexplain analyze\nselect\n\tsum(l_extendedprice* (1 - l_discount)) as revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#33'\n\t\tand p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')\n\t\tand l_quantity >= 8 and l_quantity <= 8 + 10\n\t\tand p_size between 1 and 5\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t);\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=10380.70..10380.70 rows=1 width=30) (actual \ntime=161.61..161.61 rows=1 loops=1)\n -> Nested Loop (cost=0.00..10380.67 rows=13 width=30) (actual \ntime=81.54..161.47 rows=17 loops=1)\n -> Index Scan using idx_part_1 on part (cost=0.00..9466.33 rows=62 \nwidth=4) (actual time=81.21..137.24 rows=98 loops=1)\n Index Cond: (p_brand = 'Brand#33'::bpchar)\n Filter: (((p_container = 'SM CASE'::bpchar) OR (p_container = \n'SM BOX'::bpchar) OR (p_container = 'SM PACK'::bpchar) OR (p_container = 'SM \nPKG'::bpchar)) AND (p_size >= 1) AND (p_size <= 5))\n -> Index Scan using idx_lineitem_3 on lineitem (cost=0.00..14.84 \nrows=1 width=26) (actual time=0.22..0.24 rows=0 loops=98)\n Index Cond: ((\"outer\".p_partkey = lineitem.l_partkey) AND \n(lineitem.l_quantity >= 8::numeric) AND (lineitem.l_quantity <= 18::numeric))\n Filter: (((l_shipmode = 'AIR'::bpchar) OR (l_shipmode = 'AIR \nREG'::bpchar)) AND (l_shipinstruct = 'DELIVER IN PERSON'::bpchar))\n Total runtime: 161.71 msec\n\n\n\nThis also works for a similar query:\n\nexplain analyze\nselect\n\tsum(l_extendedprice* (1 - l_discount)) as revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#52'\n\t\tand p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')\n\t\tand l_quantity >= 14 and l_quantity <= 14 + 10\n\t\tand p_size between 1 and 10\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t);\n\n Aggregate (cost=11449.36..11449.36 rows=1 width=30) (actual \ntime=195.72..195.72 rows=1 loops=1)\n -> Nested Loop (cost=0.00..11449.29 rows=28 width=30) (actual \ntime=56.42..195.39 rows=48 loops=1)\n -> Index Scan using idx_part_1 on part (cost=0.00..9466.33 rows=139 \nwidth=4) (actual time=56.15..153.17 rows=166 loops=1)\n Index Cond: (p_brand = 'Brand#52'::bpchar)\n Filter: (((p_container = 'MED BAG'::bpchar) OR (p_container = \n'MED BOX'::bpchar) OR (p_container = 'MED PKG'::bpchar) OR (p_container = \n'MED PACK'::bpchar)) AND (p_size >= 1) AND (p_size <= 10))\n -> Index 
Scan using idx_lineitem_3 on lineitem (cost=0.00..14.29 \nrows=1 width=26) (actual time=0.23..0.25 rows=0 loops=166)\n Index Cond: ((\"outer\".p_partkey = lineitem.l_partkey) AND \n(lineitem.l_quantity >= 14::numeric) AND (lineitem.l_quantity <= \n24::numeric))\n Filter: (((l_shipmode = 'AIR'::bpchar) OR (l_shipmode = 'AIR \nREG'::bpchar)) AND (l_shipinstruct = 'DELIVER IN PERSON'::bpchar))\n Total runtime: 195.82 msec\n(9 rows)\n\n\nIf, however, I combine the two where clauses with an OR, the planner gets \nconfused and insists on loading the entire tables into memory (even though I \ndon't have that much memory):\n\nexplain\nselect\n\tsum(l_extendedprice* (1 - l_discount)) as revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#33'\n\t\tand p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')\n\t\tand l_quantity >= 8 and l_quantity <= 8 + 10\n\t\tand p_size between 1 and 5\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t)\n\tor\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#52'\n\t\tand p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')\n\t\tand l_quantity >= 14 and l_quantity <= 14 + 10\n\t\tand p_size between 1 and 10\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t);\n\n Aggregate (cost=488301096525.25..488301096525.25 rows=1 width=146)\n -> Nested Loop (cost=0.00..488301096525.15 rows=42 width=146)\n Join Filter: (((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'SM CASE'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#33'::bpchar) AND \n(\"outer\".l_quantity >= 8::numeric) AND (\"outer\".l_quantity <= 18::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 5) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'SM \nCASE'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#33'::bpchar) AND (\"outer\".l_quantity >= 8::numeric) \nAND (\"outer\".l_quantity <= 18::numeric) AND (\"inner\".p_size >= 1) AND \n(\"inner\".p_size <= 5) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'SM BOX'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#33'::bpchar) AND \n(\"outer\".l_quantity >= 8::numeric) AND (\"outer\".l_quantity <= 18::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 5) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'SM \nBOX'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#33'::bpchar) AND (\"outer\".l_quantity >= 8::numeric) \nAND (\"outer\".l_quantity <= 18::numeric) AND (\"inner\".p_size >= 1) AND \n(\"inner\".p_size <= 5) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'SM PACK'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#33'::bpchar) AND \n(\"outer\".l_quantity >= 8::numeric) AND (\"outer\".l_quantity <= 18::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 5) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND 
(\"inner\".p_container = 'SM \nPACK'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#33'::bpchar) AND (\"outer\".l_quantity >= 8::numeric) \nAND (\"outer\".l_quantity <= 18::numeric) AND (\"inner\".p_size >= 1) AND \n(\"inner\".p_size <= 5) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'SM PKG'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#33'::bpchar) AND \n(\"outer\".l_quantity >= 8::numeric) AND (\"outer\".l_quantity <= 18::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 5) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'SM \nPKG'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#33'::bpchar) AND (\"outer\".l_quantity >= 8::numeric) \nAND (\"outer\".l_quantity <= 18::numeric) AND (\"inner\".p_size >= 1) AND \n(\"inner\".p_size <= 5) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'MED BAG'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#52'::bpchar) AND \n(\"outer\".l_quantity >= 14::numeric) AND (\"outer\".l_quantity <= 24::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 10) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'MED \nBAG'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#52'::bpchar) AND (\"outer\".l_quantity >= \n14::numeric) AND (\"outer\".l_quantity <= 24::numeric) AND (\"inner\".p_size >= \n1) AND (\"inner\".p_size <= 10) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'MED BOX'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#52'::bpchar) AND \n(\"outer\".l_quantity >= 14::numeric) AND (\"outer\".l_quantity <= 24::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 10) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'MED \nBOX'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#52'::bpchar) AND (\"outer\".l_quantity >= \n14::numeric) AND (\"outer\".l_quantity <= 24::numeric) AND (\"inner\".p_size >= \n1) AND (\"inner\".p_size <= 10) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR ((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'MED PKG'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#52'::bpchar) AND \n(\"outer\".l_quantity >= 14::numeric) AND (\"outer\".l_quantity <= 24::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 10) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'MED \nPKG'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#52'::bpchar) AND (\"outer\".l_quantity >= \n14::numeric) AND (\"outer\".l_quantity <= 24::numeric) AND (\"inner\".p_size >= \n1) AND (\"inner\".p_size <= 10) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)) OR 
((\"outer\".l_shipmode = 'AIR'::bpchar) AND \n(\"inner\".p_container = 'MED PACK'::bpchar) AND (\"inner\".p_partkey = \n\"outer\".l_partkey) AND (\"inner\".p_brand = 'Brand#52'::bpchar) AND \n(\"outer\".l_quantity >= 14::numeric) AND (\"outer\".l_quantity <= 24::numeric) \nAND (\"inner\".p_size >= 1) AND (\"inner\".p_size <= 10) AND \n(\"outer\".l_shipinstruct = 'DELIVER IN PERSON'::bpchar)) OR \n((\"outer\".l_shipmode = 'AIR REG'::bpchar) AND (\"inner\".p_container = 'MED \nPACK'::bpchar) AND (\"inner\".p_partkey = \"outer\".l_partkey) AND \n(\"inner\".p_brand = 'Brand#52'::bpchar) AND (\"outer\".l_quantity >= \n14::numeric) AND (\"outer\".l_quantity <= 24::numeric) AND (\"inner\".p_size >= \n1) AND (\"inner\".p_size <= 10) AND (\"outer\".l_shipinstruct = 'DELIVER IN \nPERSON'::bpchar)))\n -> Seq Scan on lineitem (cost=0.00..235620.15 rows=6001215 \nwidth=95)\n -> Seq Scan on part (cost=0.00..7367.00 rows=200000 width=51)\n\n\nYou'll pardon me for not doing an \"ANALYZE\", but I didn't want to wait \novernight. Manually disabling Seqscan and Nestloop did nothing to affect \nthis query plan; neither did removing the aggregate.\n\nTommorrow I will test 7.4 Beta 4.\n\nHow can we fix this?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 7 Oct 2003 16:59:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPC-R benchmarks"
},
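A workaround worth noting here (a sketch only, not taken from the thread, and untested against this planner version): both branches of the OR share the join clause and the l_shipmode / l_shipinstruct filters, so they can be factored out, leaving only the branch-specific conditions inside the OR:

select
	sum(l_extendedprice * (1 - l_discount)) as revenue
from
	lineitem,
	part
where
	p_partkey = l_partkey
	and l_shipmode in ('AIR', 'AIR REG')
	and l_shipinstruct = 'DELIVER IN PERSON'
	and (
		(
			p_brand = 'Brand#33'
			and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
			and l_quantity between 8 and 18
			and p_size between 1 and 5
		)
		or
		(
			p_brand = 'Brand#52'
			and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
			and l_quantity between 14 and 24
			and p_size between 1 and 10
		)
	);

This is logically the same predicate ((A and X) or (A and Y) = A and (X or Y)), but it keeps the join condition as a plain top-level clause instead of burying a copy of it in every OR branch, which gives the planner a chance to use the same nested-loop-over-index plan as the single-branch queries above.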
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> To sum up the below: it appears that whenever a set of WHERE conditions \n> exceeds a certain level of complexity, the planner just ignores all \n> applicable indexes and goes for a seq scan.\n\nIt looks to me like the planner is coercing the WHERE clause into\ncanonical OR-of-ANDs form (DNF). Which is often a good heuristic\nbut it seems unhelpful for this query.\n\n> How can we fix this?\n\nFeel free to propose improvements to the heuristics in\nsrc/backend/optimizer/prep/prepqual.c ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Oct 2003 00:15:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-R benchmarks "
}
] |
[
{
"msg_contents": "Well, as you guys know I've been tinkering with sun-vs-linux postgres for\na while trying to come up with reasons for the HUGE performance\ndifferences. We've all had our anecdotal thoughts (fork sucks, ipc sucks,\nufs sucks, etc) and I've had a breakthrough.\n\nKnowing that GCC only produces good code on x86 (and powerpc with apple's\nmods, but it is doubtful that is as good as ibm's power compiler) I\ndecided to try out Sunsoft CC. I'd heard from more than one person/place\nthat gcc makes abysmal sparc code. Given that the performance profiles\nfor both the linux and sun boxes showed the same functions taking up most\nof the time I thought I'd see what a difference sunsoft could give me.\n\nSo - hardware -\nSun E450 4x400mhz ultrasparc IIi, 4GB ram, scsi soemthing disk. (not\nraid) solaris 2.6\n\nLinux - 2xP3 500mhz, 2GB, scsi disk of some flavor (not raid) linux 2.2.17\n(old I know!)\n\nSo here's the results using my load tester (single connection per beater,\nrepeats the query 1000 times with different input each time (we'll get\n~20k rows back), the query is a common query around here.\n\nI discounted the first run of the test as caches populated.\n\nLinux - 1x - 35 seconds, 20x - 180 seconds\n\nSun - gcc - 1x 60 seconds 20x 245 seconds\nSun - sunsoft defaults - 1x 52 seonds 20x [similar to gcc most likely]\nSun - sunsoft -fast - 1x 28 seconds 20x 164 seconds\n\nAs you math guru's can probably deduce - that is a rather large\nimprovement. And by rather large I mean hugely significant. With results\nlike this, I think it warrants mentioning in the FAQ_Solaris, and probably\nthe performance guide.\n\nConnecting will always be a bit slower. But I think most people realize\nthat connecting to a db is not cheap.\n\nI think update/etc will cause more locking, but I think IO will become the\nbottle neck much sooner than lock/unlock will. (This is mostly anecdotal\ngiven how fast solaris can lock/unlock a semaphore and how much IO I know\nI have)\n\nOh yes, with was with 7.3.4 and sunsoft cc Sun WorkShop 6 update 1 C\n5.2 2000/09/11 (which is old, perhaps newer ones make even better code?)\n\nI'm not sure of PG's policy of non-gcc things in configure, but perhaps if\nwe detect sunsoft we toss in the -fast flag and maybe make it the\npreferred one on sun? [btw, it compiled with no changes but it did spew\nout tons of warnings]\n\ncomments?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 08:36:56 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sun performance - Major discovery!"
},
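For anyone who wants to reproduce the Workshop build, a minimal sketch (the compiler path here is an assumption, not something given in the post): PostgreSQL's configure honors CC and CFLAGS from the environment, so a Sun cc build can be driven along the lines of

CC=/opt/SUNWspro/bin/cc CFLAGS='-fast' ./configure --prefix=/usr/local/pgsql
gmake && gmake check && gmake install

with the path adjusted to wherever Workshop is installed. Note that -fast ties the resulting binaries to the CPU family of the build host.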
{
"msg_contents": "On Wed, Oct 08, 2003 at 08:36:56AM -0400, Jeff wrote:\n> \n> So here's the results using my load tester (single connection per beater,\n> repeats the query 1000 times with different input each time (we'll get\n> ~20k rows back), the query is a common query around here.\n\nMy worry about this test is that it gives us precious little\nknowledge about concurrent connection slowness, which is where I find\nthe most significant problems. When we tried a Sunsoft cc vs gcc 2.95\non Sol 7 about 1 1/2 years ago, we found more or less no difference\nonce we added more than 5 connections (and we always have more than 5\nconnections). It might be worth trying again, though, since we moved\nto Sol 8.\n\nThanks for the result. \n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 8 Oct 2003 10:48:55 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 2003-10-08 at 08:36, Jeff wrote:\n> So here's the results using my load tester (single connection per beater,\n> repeats the query 1000 times with different input each time (we'll get\n> ~20k rows back), the query is a common query around here.\n\nWhat is the query?\n\n> Linux - 1x - 35 seconds, 20x - 180 seconds\n\n\"20x\" means 20 concurrent testing processes, right?\n\n> Sun - gcc - 1x 60 seconds 20x 245 seconds\n> Sun - sunsoft defaults - 1x 52 seonds 20x [similar to gcc most likely]\n> Sun - sunsoft -fast - 1x 28 seconds 20x 164 seconds\n\nInteresting (and surprising that the performance differential is that\nlarge, to me at least). Can you tell if the performance gain comes from\nan improvement in a particular subsystem? (i.e. could you get a profile\nof Sun/gcc and compare it with Sun/sunsoft).\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 10:52:39 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Andrew Sullivan wrote:\n\n> My worry about this test is that it gives us precious little\n> knowledge about concurrent connection slowness, which is where I find\n> the most significant problems. When we tried a Sunsoft cc vs gcc 2.95\n> on Sol 7 about 1 1/2 years ago, we found more or less no difference\n> once we added more than 5 connections (and we always have more than 5\n> connections). It might be worth trying again, though, since we moved\n> to Sol 8.\n>\n\nThe 20x column are the results when I fired up 20 beater concurrently.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 10:57:34 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n> What is the query?\n>\n\nIt retrieves an index listing for our boards. The boards are flat (not\nthreaded) and messages are numbered starting at 1 for each board.\n\nIf you pass in 0 for the start_from it assumes the latest 60.\n\nAnd it should be noted - in some cases some boards have nearly 2M posts.\nIndex on board_name, number.\n\nI cannot give out too too much stuff ;)\n\ncreate or replace function get_index2(integer, varchar, varchar)\n\treturns setof snippet\n\tas '\nDECLARE\n\tp_start alias for $1;\n\tp_board alias for $2;\n\tv_start integer;\n\tv_num integer;\n\tv_body text;\n\tv_sender varchar(35);\n\tv_time timestamptz;\n\tv_finish integer;\n\tv_row record;\n\tv_ret snippet;\nBEGIN\n\n\tv_start := p_start;\n\n\tif v_start = 0 then\n\t\tselect * into v_start from get_high_msg(p_board);\n\t\tv_start := v_start - 59;\n\tend if;\n\n\tv_finish := v_start + 60;\n\n\tfor v_row in\n\t\tselect number, substr(body, 0, 50) as snip, member_handle,\ntimestamp\n\t\t\tfrom posts\n\t\t\twhere board_name = p_board and\n\t\t\tnumber >= v_start and\n\t\t\tnumber < v_finish\n\t\t\torder by number desc\n\tLOOP\n\t\treturn next v_row;\n\tEND LOOP;\n\n\treturn;\nEND;\n' language 'plpgsql';\n\n\n> Interesting (and surprising that the performance differential is that\n> large, to me at least). Can you tell if the performance gain comes from\n> an improvement in a particular subsystem? (i.e. could you get a profile\n> of Sun/gcc and compare it with Sun/sunsoft).\n>\n\nI'll get these later today.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 11:00:30 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
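For context, a set-returning plpgsql function like this is called from the FROM clause; a hypothetical invocation (the board name is made up, and the meaning of the third argument isn't shown in the excerpt) looks like

-- latest 60 messages on a board (0 = start from the newest)
SELECT * FROM get_index2(0, 'some_board', '');

-- 60 messages starting at message number 100000
SELECT * FROM get_index2(100000, 'some_board', '');

so each run of the load tester is essentially exercising the (board_name, number) index scan inside the loop above.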
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n> Interesting (and surprising that the performance differential is that\n> large, to me at least). Can you tell if the performance gain comes from\n> an improvement in a particular subsystem? (i.e. could you get a profile\n> of Sun/gcc and compare it with Sun/sunsoft).\n>\n\nYeah - like I expected it was able to generate much better code for\n_bt_checkkeys which was the #1 function in gcc on both sun & linux.\n\nand as you can see, suncc was just able to generate much nicer code. I'd\nlook at the assembler output but that won't be useful since I am very\nunfamiliar with the [ultra]sparc instruction set..\n\n\nHere's the prof and gprof output for the latest run:\nGCC:\n % cumulative self self total\n time seconds seconds calls ms/call ms/call name\n 31.52 19.44 19.44 internal_mcount\n 20.28 31.95 12.51 8199466 0.00 0.00 _bt_checkkeys\n 5.61 35.41 3.46 8197422 0.00 0.00 _bt_step\n 5.01 38.50 3.09 24738620 0.00 0.00 FunctionCall2\n 3.00 40.35 1.85 8194186 0.00 0.00 varchareq\n 2.61 41.96 1.61 24309 0.07 0.28 _bt_next\n 2.42 43.45 1.49 1003 1.49 1.51 AtEOXact_Buffers\n 2.37 44.91 1.46 12642 0.12 0.12 _read\n 2.33 46.35 1.44 16517771 0.00 0.00 pg_detoast_datum\n 2.08 47.63 1.28 8193186 0.00 0.00 int4lt\n 1.35 48.46 0.83 8237204 0.00 0.00 BufferGetBlockNumber\n 1.35 49.29 0.83 8193888 0.00 0.00 int4ge\n 1.35 50.12 0.83 _mcount\n\n\nSunCC -pg -fast.\n %Time Seconds Cumsecs #Calls msec/call Name\n\n 23.2 4.27 4.27108922056 0.0000 _mcount\n 20.7 3.82 8.09 8304052 0.0005 _bt_checkkeys\n 13.7 2.53 10.6225054788 0.0001 FunctionCall2\n 5.1 0.94 11.56 24002 0.0392 _bt_next\n 4.4 0.81 12.37 8301867 0.0001 _bt_step\n 3.4 0.63 13.00 8298219 0.0001 varchareq\n 2.7 0.50 13.5016726855 0.0000 pg_detoast_datum\n 2.4 0.45 13.95 8342464 0.0001 BufferGetBlockNumber\n 2.4 0.44 14.39 8297941 0.0001 int4ge\n 2.2 0.41 14.80 1003 0.409 AtEOXact_Buffers\n 2.0 0.37 15.17 4220349 0.0001 lc_collate_is_c\n 2.0 0.37 15.54 8297219 0.0000 int4lt\n 1.6 0.29 15.83 26537 0.0109 AllocSetContextCreate\n 0.9 0.16 15.99 1887 0.085 pglz_decompress\n 0.7 0.13 16.12 159966 0.0008 nocachegetattr\n 0.7 0.13 16.25 4220349 0.0000 varstr_cmp\n 0.6 0.11 16.36 937576 0.0001 MemoryContextAlloc\n 0.5 0.09 16.45 150453 0.0006 hash_search\n\n\n\n\n\n> -Neil\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 11:46:09 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 2003-10-08 at 10:48, Andrew Sullivan wrote:\n> My worry about this test is that it gives us precious little\n> knowledge about concurrent connection slowness, which is where I find\n> the most significant problems.\n\nAs Jeff points out, the second set of results is for 20 concurrent\nconnections. Note that the advantage sunsoft cc has over gcc decreases\nas the number of connections increases (which makes sense, as the 20x\nworkload is likely to be more I/O bound).\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 12:41:56 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 2003-10-08 at 11:46, Jeff wrote:\n> Yeah - like I expected it was able to generate much better code for\n> _bt_checkkeys which was the #1 function in gcc on both sun & linux.\n> \n> and as you can see, suncc was just able to generate much nicer code.\n\nWhat CFLAGS does configure pick for gcc? From\nsrc/backend/template/solaris, I'd guess it's not enabling any\noptimization. Is that the case? If so, some gcc numbers with -O and -O2\nwould be useful.\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 13:43:31 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
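One quick way to answer that sort of question against a configured source tree (a sketch, not from the thread): the flags configure settles on are recorded in src/Makefile.global, so

grep '^CFLAGS' src/Makefile.global

run from the top of the tree shows exactly what the backend is being compiled with.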
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n>\n> What CFLAGS does configure pick for gcc? From\n> src/backend/template/solaris, I'd guess it's not enabling any\n> optimization. Is that the case? If so, some gcc numbers with -O and -O2\n> would be useful.\n>\n\nI can't believe I didn't think of this before! heh.\nTurns out gcc was getting nothing for flags.\n\nI added -O2 to CFLAGS and my 60 seconds went down to 21. A rather mild\nimprovment huh?\n\nI did a few more tests and suncc still beats it out - but not by too much\nnow (Not enought to justify buying a license just for compiling pg)\n\nI'll go run the regression test suite with my gcc -O2 pg and the suncc pg.\nSee if they pass the test.\n\nIf they do we should consider adding -O2 and -fast to the CFLAGS.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 14:11:23 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
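For anyone hitting this before the templates are fixed, a sketch of the obvious stopgap (not advice given in the thread): an explicit CFLAGS in the environment overrides whatever the platform template sets, so

CFLAGS='-O2' ./configure --prefix=/usr/local/pgsql
gmake clean && gmake

is enough to get an optimized build on Solaris today, and is presumably how the -O2 numbers above were produced.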
{
"msg_contents": "Jeff wrote:\n> On Wed, 8 Oct 2003, Neil Conway wrote:\n> \n> >\n> > What CFLAGS does configure pick for gcc? From\n> > src/backend/template/solaris, I'd guess it's not enabling any\n> > optimization. Is that the case? If so, some gcc numbers with -O and -O2\n> > would be useful.\n> >\n> \n> I can't believe I didn't think of this before! heh.\n> Turns out gcc was getting nothing for flags.\n> \n> I added -O2 to CFLAGS and my 60 seconds went down to 21. A rather mild\n> improvment huh?\n> \n> I did a few more tests and suncc still beats it out - but not by too much\n> now (Not enought to justify buying a license just for compiling pg)\n> \n> I'll go run the regression test suite with my gcc -O2 pg and the suncc pg.\n> See if they pass the test.\n> \n> If they do we should consider adding -O2 and -fast to the CFLAGS.\n\n[ CC added for hackers.]\n\nWell, this is really embarassing. I can't imagine why we would not set\nat least -O on all platforms. Looking at the template files, I see\nthese have no optimization set:\n\t\n\tdarwin\n\tdgux\n\tfreebsd (non-alpha)\n\tirix5\n\tnextstep\n\tosf (gcc)\n\tqnx4\n\tsolaris\n\tsunos4\n\tsvr4\n\tultrix4\n\nI thought we used to have code that did -O for any platforms that set no\ncflags, but I don't see that around anywhere. I recommend adding -O2,\nor at leaset -O to all these platforms --- we can then use platform\ntesting to make sure they are working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 14:31:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:\n> Well, this is really embarassing. I can't imagine why we would not set\n> at least -O on all platforms.\n\nISTM the most legitimate reason for not enabling compilater\noptimizations on a given compiler/OS/architecture combination is might\ncause compiler errors / bad code generation.\n\nCan we get these optimizations enabled in time for the next 7.4 beta? It\nmight also be good to add an item in the release notes about it.\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 14:37:31 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n> ISTM the most legitimate reason for not enabling compilater\n> optimizations on a given compiler/OS/architecture combination is might\n> cause compiler errors / bad code generation.\n>\n> Can we get these optimizations enabled in time for the next 7.4 beta? It\n> might also be good to add an item in the release notes about it.\n>\n> -Neil\n>\n\nI just ran make check for sun with gcc -O2 and suncc -fast and both\npassed.\n\nWe'll need other arguments to suncc to supress some warnings, etc. (-fast\ngenerates a warning for every file compiled telling you it will only\nrun on ultrasparc machines)\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 14:45:49 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>Jeff wrote:\n> \n>\n>>On Wed, 8 Oct 2003, Neil Conway wrote:\n>>\n>> \n>>\n>>>What CFLAGS does configure pick for gcc? From\n>>>src/backend/template/solaris, I'd guess it's not enabling any\n>>>optimization. Is that the case? If so, some gcc numbers with -O and -O2\n>>>would be useful.\n>>>\n>>> \n>>>\n>>I can't believe I didn't think of this before! heh.\n>>Turns out gcc was getting nothing for flags.\n>>\n>>I added -O2 to CFLAGS and my 60 seconds went down to 21. A rather mild\n>>improvment huh?\n>>\n>>I did a few more tests and suncc still beats it out - but not by too much\n>>now (Not enought to justify buying a license just for compiling pg)\n>>\n>>I'll go run the regression test suite with my gcc -O2 pg and the suncc pg.\n>>See if they pass the test.\n>>\n>>If they do we should consider adding -O2 and -fast to the CFLAGS.\n>> \n>>\n>\n>[ CC added for hackers.]\n>\n>Well, this is really embarassing. I can't imagine why we would not set\n>at least -O on all platforms. Looking at the template files, I see\n>these have no optimization set:\n>\t\n>\tdarwin\n>\tdgux\n>\tfreebsd (non-alpha)\n>\tirix5\n>\tnextstep\n>\tosf (gcc)\n>\tqnx4\n>\tsolaris\n>\tsunos4\n>\tsvr4\n>\tultrix4\n>\n>I thought we used to have code that did -O for any platforms that set no\n>cflags, but I don't see that around anywhere. I recommend adding -O2,\n>or at leaset -O to all these platforms --- we can then use platform\n>testing to make sure they are working.\n>\n> \n>\nActually, I would not be surprised to see gains on Solaris/SPARC from \n-O3 with gcc, which enables inlining and register-renaming, although \nthis does make debugging pretty much impossible.\n\nworth testing at least (but I no longer have access to a Solaris machine).\n\ncheers\n\nandrew\n\n",
"msg_date": "Wed, 08 Oct 2003 14:50:24 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Sun performance - Major discovery!"
},
{
"msg_contents": "\nIn message <[email protected]>, Jeff writes:\n\n I'll go run the regression test suite with my gcc -O2 pg and the suncc pg.\n See if they pass the test.\n\nMy default set of gcc optimization flags is:\n\n-O3 -funroll-loops -frerun-cse-after-loop -frerun-loop-opt -falign-functions -mcpu=i686 -march=i686\n\nObviously the last two flags product CPU specific code, so would have\nto differ...autoconf is always possible, but so is just lopping them off.\n\nI have found these flags to produce faster code that a simple -O2, but\nI understand the exact combination which is best for you is\ncode-dependent. Of course, if you are getting really excited, you can\nuse -fbranch-probabilities, but as you will see if you investigate\nthat requires some profiling information, so is not very easy to\nactually practically use.\n\n -Seth Robertson\n",
"msg_date": "Wed, 08 Oct 2003 15:22:08 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery! "
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:\n>> Well, this is really embarassing. I can't imagine why we would not set\n>> at least -O on all platforms.\n\nI believe that autoconf will automatically select -O2 (when CFLAGS isn't\nalready set) *if* it's chosen gcc. It won't select anything for vendor\nccs.\n\n> Can we get these optimizations enabled in time for the next 7.4 beta?\n\nI think it's too late in the beta cycle to add optimization flags except\nfor platforms we can get specific success results for. (Solaris is\nprobably okay for instance.) The risk of breaking things seems too\nhigh.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Oct 2003 20:06:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery! "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Well, this is really embarassing. I can't imagine why we would not set\n> at least -O on all platforms. Looking at the template files, I see\n> these have no optimization set:\n\n> \tfreebsd (non-alpha)\n\nI'm wondering what that had in mind:\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/template/freebsd.diff?r1=1.10&r2=1.11\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Thu, 9 Oct 2003 02:24:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <[email protected]> writes:\n> > On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:\n> >> Well, this is really embarassing. I can't imagine why we would not set\n> >> at least -O on all platforms.\n> \n> I believe that autoconf will automatically select -O2 (when CFLAGS isn't\n> already set) *if* it's chosen gcc. It won't select anything for vendor\n> ccs.\n\nI think the problem is that template/solaris overrides that with:\n\n CFLAGS=\n\n> > Can we get these optimizations enabled in time for the next 7.4 beta?\n> \n> I think it's too late in the beta cycle to add optimization flags except\n> for platforms we can get specific success results for. (Solaris is\n> probably okay for instance.) The risk of breaking things seems too\n> high.\n\nAgreed. Do we set them all to -O2, then remove it from the ones we\ndon't get successful reports on?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 21:44:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Well, this is really embarassing. I can't imagine why we would not set\n> > at least -O on all platforms. Looking at the template files, I see\n> > these have no optimization set:\n> \n> > \tfreebsd (non-alpha)\n> \n> I'm wondering what that had in mind:\n> \n> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/template/freebsd.diff?r1=1.10&r2=1.11\n\nI was wondering that myself. I think the idea was that we already do\n-O2 in configure if it is gcc, so why do it in the template files. What\nis killing us is the CFLAGS= lines in the configuration files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 8 Oct 2003 21:48:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": ">>Well, this is really embarassing. I can't imagine why we would not set\n>>at least -O on all platforms. Looking at the template files, I see\n>>these have no optimization set:\n> \n> \n>>\tfreebsd (non-alpha)\n> \n> \n> I'm wondering what that had in mind:\n> \n> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/template/freebsd.diff?r1=1.10&r2=1.11\n\nWhen I used to build pgsql on freebsd/alpha, I would get heaps of GCC \nwarnings saying 'optimisations for the alpha are broken'. I can't \nremember if that meant anything more than just -O or not though.\n\nChris\n\n\n",
"msg_date": "Thu, 09 Oct 2003 10:07:11 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <[email protected]> writes:\n> > On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:\n> >> Well, this is really embarassing. I can't imagine why we would not set\n> >> at least -O on all platforms.\n> \n> I believe that autoconf will automatically select -O2 (when CFLAGS isn't\n> already set) *if* it's chosen gcc. It won't select anything for vendor\n> ccs.\n> \n> > Can we get these optimizations enabled in time for the next 7.4 beta?\n> \n> I think it's too late in the beta cycle to add optimization flags except\n> for platforms we can get specific success results for. (Solaris is\n> probably okay for instance.) The risk of breaking things seems too\n> high.\n\nOK, patch attached and applied. It centralizes the optimization\ndefaults into configure.in, rather than having CFLAGS= in the template\nfiles.\n\nIt used -O2 for gcc (generated automatically by autoconf), and -O for\nnon-gcc, unless the template overrides it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: configure\n===================================================================\nRCS file: /cvsroot/pgsql-server/configure,v\nretrieving revision 1.302\ndiff -c -c -r1.302 configure\n*** configure\t3 Oct 2003 03:08:14 -0000\t1.302\n--- configure\t9 Oct 2003 03:16:44 -0000\n***************\n*** 2393,2398 ****\n--- 2393,2402 ----\n if test \"$ac_env_CFLAGS_set\" = set; then\n CFLAGS=$ac_env_CFLAGS_value\n fi\n+ # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n+ if test x\"$CFLAGS\" = x\"\"; then\n+ \tCFLAGS=\"-O\"\n+ fi\n if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n CFLAGS=\"$CFLAGS -g\"\n fi\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql-server/configure.in,v\nretrieving revision 1.293\ndiff -c -c -r1.293 configure.in\n*** configure.in\t3 Oct 2003 03:08:14 -0000\t1.293\n--- configure.in\t9 Oct 2003 03:16:46 -0000\n***************\n*** 238,243 ****\n--- 238,247 ----\n if test \"$ac_env_CFLAGS_set\" = set; then\n CFLAGS=$ac_env_CFLAGS_value\n fi\n+ # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n+ if test x\"$CFLAGS\" = x\"\"; then\t\n+ \tCFLAGS=\"-O\"\n+ fi\n if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n CFLAGS=\"$CFLAGS -g\"\n fi\nIndex: src/template/beos\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/beos,v\nretrieving revision 1.6\ndiff -c -c -r1.6 beos\n*** src/template/beos\t21 Oct 2000 22:36:13 -0000\t1.6\n--- src/template/beos\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS='-O2'\n--- 0 ----\nIndex: src/template/bsdi\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/bsdi,v\nretrieving revision 1.16\ndiff -c -c -r1.16 bsdi\n*** src/template/bsdi\t27 Sep 2003 16:24:44 -0000\t1.16\n--- src/template/bsdi\t9 Oct 2003 03:16:51 -0000\n***************\n*** 5,13 ****\n esac\n \n case $host_os in\n! bsdi2.0 | bsdi2.1 | bsdi3*)\n! CC=gcc2\n! ;;\n esac\n \n THREAD_SUPPORT=yes\n--- 5,11 ----\n esac\n \n case $host_os in\n! 
bsdi2.0 | bsdi2.1 | bsdi3*) CC=gcc2;;\n esac\n \n THREAD_SUPPORT=yes\nIndex: src/template/cygwin\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/cygwin,v\nretrieving revision 1.2\ndiff -c -c -r1.2 cygwin\n*** src/template/cygwin\t9 Oct 2003 02:37:09 -0000\t1.2\n--- src/template/cygwin\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,2 ****\n- CFLAGS='-O2'\n SRCH_LIB='/usr/local/lib'\n--- 1 ----\nIndex: src/template/dgux\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/dgux,v\nretrieving revision 1.10\ndiff -c -c -r1.10 dgux\n*** src/template/dgux\t21 Oct 2000 22:36:13 -0000\t1.10\n--- src/template/dgux\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS=\n--- 0 ----\nIndex: src/template/freebsd\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/freebsd,v\nretrieving revision 1.23\ndiff -c -c -r1.23 freebsd\n*** src/template/freebsd\t27 Sep 2003 16:24:44 -0000\t1.23\n--- src/template/freebsd\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,17 ****\n- CFLAGS='-pipe'\n- \n case $host_cpu in\n! alpha*) CFLAGS=\"$CFLAGS -O\" ;;\n esac\n \n THREAD_SUPPORT=yes\n NEED_REENTRANT_FUNCS=yes\n THREAD_CPPFLAGS=\"-D_THREAD_SAFE\"\n case $host_os in\n! \t\tfreebsd2*|freebsd3*|freebsd4*)\n! \t\t\tTHREAD_LIBS=\"-pthread\"\n! \t\t\t;;\n! \t\t*)\n! \t\t\tTHREAD_LIBS=\"-lc_r\"\n! \t\t\t;;\n esac\n--- 1,11 ----\n case $host_cpu in\n! alpha*) CFLAGS=\"-O\";;\n esac\n \n THREAD_SUPPORT=yes\n NEED_REENTRANT_FUNCS=yes\n THREAD_CPPFLAGS=\"-D_THREAD_SAFE\"\n case $host_os in\n! \tfreebsd2*|freebsd3*|freebsd4*) THREAD_LIBS=\"-pthread\";;\n! \t*) THREAD_LIBS=\"-lc_r\";;\n esac\nIndex: src/template/hpux\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/hpux,v\nretrieving revision 1.7\ndiff -c -c -r1.7 hpux\n*** src/template/hpux\t2 Apr 2003 00:49:28 -0000\t1.7\n--- src/template/hpux\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,8 ****\n! if test \"$GCC\" = yes ; then\n! CPPFLAGS=\"-D_XOPEN_SOURCE_EXTENDED\"\n! CFLAGS=\"-O2\"\n! else\n CC=\"$CC -Ae\"\n- CPPFLAGS=\"-D_XOPEN_SOURCE_EXTENDED\"\n CFLAGS=\"+O2\"\n fi\n--- 1,6 ----\n! CPPFLAGS=\"-D_XOPEN_SOURCE_EXTENDED\"\n! \n! 
if test \"$GCC\" != yes ; then\n CC=\"$CC -Ae\"\n CFLAGS=\"+O2\"\n fi\nIndex: src/template/irix5\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/irix5,v\nretrieving revision 1.9\ndiff -c -c -r1.9 irix5\n*** src/template/irix5\t21 Oct 2000 22:36:13 -0000\t1.9\n--- src/template/irix5\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS=\n--- 0 ----\nIndex: src/template/linux\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/linux,v\nretrieving revision 1.18\ndiff -c -c -r1.18 linux\n*** src/template/linux\t27 Sep 2003 22:23:35 -0000\t1.18\n--- src/template/linux\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,4 ****\n- CFLAGS=-O2\n # Force _GNU_SOURCE on; plperl is broken with Perl 5.8.0 otherwise\n CPPFLAGS=\"-D_GNU_SOURCE\"\n \n--- 1,3 ----\n***************\n*** 6,9 ****\n NEED_REENTRANT_FUNCS=yes\t# Debian kernel 2.2 2003-09-27\n THREAD_CPPFLAGS=\"-D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS\"\n THREAD_LIBS=\"-lpthread\"\n- \n--- 5,7 ----\nIndex: src/template/netbsd\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/netbsd,v\nretrieving revision 1.13\ndiff -c -c -r1.13 netbsd\n*** src/template/netbsd\t27 Sep 2003 16:24:44 -0000\t1.13\n--- src/template/netbsd\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,4 ****\n- CFLAGS='-O2 -pipe'\n- \n THREAD_SUPPORT=yes\n NEED_REENTRANT_FUNCS=yes\t# 1.6 2003-09-14\n--- 1,2 ----\nIndex: src/template/nextstep\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/nextstep,v\nretrieving revision 1.7\ndiff -c -c -r1.7 nextstep\n*** src/template/nextstep\t15 Jul 2000 15:54:52 -0000\t1.7\n--- src/template/nextstep\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,4 ****\n AROPT=rc\n- CFLAGS=\n SHARED_LIB=\n DLSUFFIX=.o\n--- 1,3 ----\nIndex: src/template/openbsd\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/openbsd,v\nretrieving revision 1.8\ndiff -c -c -r1.8 openbsd\n*** src/template/openbsd\t21 Oct 2000 22:36:14 -0000\t1.8\n--- src/template/openbsd\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS='-O2 -pipe'\n--- 0 ----\nIndex: src/template/osf\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/osf,v\nretrieving revision 1.10\ndiff -c -c -r1.10 osf\n*** src/template/osf\t27 Sep 2003 16:24:45 -0000\t1.10\n--- src/template/osf\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,6 ****\n! if test \"$GCC\" = yes ; then\n! CFLAGS=\n! else\n CC=\"$CC -std\"\n CFLAGS='-O4 -Olimit 2000'\n fi\n--- 1,4 ----\n! if test \"$GCC\" != yes ; then\n CC=\"$CC -std\"\n CFLAGS='-O4 -Olimit 2000'\n fi\nIndex: src/template/qnx4\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/qnx4,v\nretrieving revision 1.4\ndiff -c -c -r1.4 qnx4\n*** src/template/qnx4\t24 May 2001 22:33:18 -0000\t1.4\n--- src/template/qnx4\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,2 ****\n! CFLAGS=-I/usr/local/include\n! LIBS=-lunix\n--- 1,2 ----\n! CFLAGS=\"-O2 -I/usr/local/include\"\n! 
LIBS=\"-lunix\"\nIndex: src/template/sco\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/sco,v\nretrieving revision 1.10\ndiff -c -c -r1.10 sco\n*** src/template/sco\t11 Dec 2002 22:27:26 -0000\t1.10\n--- src/template/sco\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,7 ****\n- if test \"$GCC\" = yes; then\n- CFLAGS=-O2\n- else\n- CFLAGS=-O\n- fi\n CC=\"$CC -b elf\"\n \n--- 1,2 ----\nIndex: src/template/solaris\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/solaris,v\nretrieving revision 1.5\ndiff -c -c -r1.5 solaris\n*** src/template/solaris\t27 Sep 2003 16:24:45 -0000\t1.5\n--- src/template/solaris\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,8 ****\n! if test \"$GCC\" = yes ; then\n! CFLAGS=\n! else\n CC=\"$CC -Xa\"\t\t\t# relaxed ISO C mode\n! CFLAGS=-v\t\t\t# -v is like gcc -Wall\n fi\n \n THREAD_SUPPORT=yes\n--- 1,6 ----\n! if test \"$GCC\" != yes ; then\n CC=\"$CC -Xa\"\t\t\t# relaxed ISO C mode\n! CFLAGS=\"-O -v\"\t\t# -v is like gcc -Wall\n fi\n \n THREAD_SUPPORT=yes\nIndex: src/template/sunos4\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/sunos4,v\nretrieving revision 1.2\ndiff -c -c -r1.2 sunos4\n*** src/template/sunos4\t21 Oct 2000 22:36:14 -0000\t1.2\n--- src/template/sunos4\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS=\n--- 0 ----\nIndex: src/template/svr4\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/svr4,v\nretrieving revision 1.10\ndiff -c -c -r1.10 svr4\n*** src/template/svr4\t21 Oct 2000 22:36:14 -0000\t1.10\n--- src/template/svr4\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS=\n--- 0 ----\nIndex: src/template/ultrix4\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/ultrix4,v\nretrieving revision 1.10\ndiff -c -c -r1.10 ultrix4\n*** src/template/ultrix4\t21 Oct 2000 22:36:14 -0000\t1.10\n--- src/template/ultrix4\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1 ****\n- CFLAGS=\n--- 0 ----\nIndex: src/template/univel\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/univel,v\nretrieving revision 1.13\ndiff -c -c -r1.13 univel\n*** src/template/univel\t21 Oct 2000 22:36:14 -0000\t1.13\n--- src/template/univel\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,2 ****\n CFLAGS='-v -O -K i486,host,inline,loop_unroll -Dsvr4'\n! LIBS=-lc89 \n--- 1,2 ----\n CFLAGS='-v -O -K i486,host,inline,loop_unroll -Dsvr4'\n! LIBS=\"-lc89\"\nIndex: src/template/unixware\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/unixware,v\nretrieving revision 1.24\ndiff -c -c -r1.24 unixware\n*** src/template/unixware\t27 Sep 2003 16:24:45 -0000\t1.24\n--- src/template/unixware\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,5 ****\n if test \"$GCC\" = yes; then\n- CFLAGS=-O2\n THREAD_CPPFLAGS=\"-pthread\"\n else\n # the -Kno_host is temporary for a bug in the compiler. 
See -hackers\n--- 1,4 ----\nIndex: src/template/win\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/win,v\nretrieving revision 1.5\ndiff -c -c -r1.5 win\n*** src/template/win\t8 Oct 2003 18:23:08 -0000\t1.5\n--- src/template/win\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,3 ****\n- if test \"$GCC\" = yes; then\n- CFLAGS=\"-O2\"\n- fi\n--- 0 ----\nIndex: src/template/win32\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/template/win32,v\nretrieving revision 1.1\ndiff -c -c -r1.1 win32\n*** src/template/win32\t15 May 2003 16:35:30 -0000\t1.1\n--- src/template/win32\t9 Oct 2003 03:16:51 -0000\n***************\n*** 1,3 ****\n- if test \"$GCC\" = yes; then\n- CFLAGS=\"-O2\"\n- fi\n--- 0 ----",
"msg_date": "Wed, 8 Oct 2003 23:19:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
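A sketch of what the change means in practice (expected behaviour, not output captured from a patched tree): with nothing specified, gcc builds keep autoconf's default of -O2 and other compilers now get -O unless their template says otherwise, while an explicit setting still wins:

./configure                                  # gcc: -O2, other compilers: -O
CFLAGS='-O3 -funroll-loops' ./configure      # user-supplied flags take precedence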
{
"msg_contents": "",
"msg_date": "Thu, 09 Oct 2003 01:10:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery! "
},
{
"msg_contents": "On Wed, Oct 08, 2003 at 02:31:29PM -0400, Bruce Momjian wrote:\n> Well, this is really embarassing. I can't imagine why we would not set\n> at least -O on all platforms. Looking at the template files, I see\n> these have no optimization set:\n\nI think gcc _used_ to generate bad code on SPARC if you set any\noptimisation. We tested it on Sol7 with gcc 2.95 more than a year\nago, and tried various settings. -O2 worked, but other items were\nreally bad. Some of them would pass regression but cause strange\nbehaviour, random coredumps, &c. A little digging demonstrated that\nanything beyond -O2 just didn't work for gcc at the time.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 9 Oct 2003 10:27:05 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "On Wed, 2003-10-08 at 21:44, Bruce Momjian wrote:\n> Agreed. Do we set them all to -O2, then remove it from the ones we\n> don't get successful reports on?\n\nI took the time to compile CVS tip with a few different machines from\nHP's TestDrive program, to see if there were any regressions using the\nnew optimization flags:\n\n(1) (my usual dev machine)\n\n$ uname -a\nLinux tokyo 2.4.19-xfs #1 Mon Jan 20 19:12:29 EST 2003 i686 GNU/Linux\n$ gcc --version\ngcc (GCC) 3.3.2 20031005 (Debian prerelease)\n\n'make check' passes\n\n(2)\n\n$ uname -a\nLinux spe161 2.4.18-smp #1 SMP Sat Apr 6 21:42:22 EST 2002 alpha unknown\n$ gcc --version\ngcc (GCC) 3.3.1\n\n'make check' passes\n\n(3)\n\n$ uname -a\nLinux spe170 2.4.17-64 #1 Sat Mar 16 17:31:44 MST 2002 parisc64 unknown\n$ gcc --version\n3.0.4\n\n'make check' passes\n\nBTW, this platform doesn't have any code written for native spinlocks.\n\n(4)\n\n$ uname -a\nLinux spe156 2.4.18-mckinley-smp #1 SMP Thu Jul 11 12:51:02 MDT 2002\nia64 unknown\n$ gcc --version\n\nWhen you compile PostgreSQL without changing the CFLAGS configure picks,\nthe initdb required for 'make check' fails with:\n\n[...]\ninitializing pg_depend... ok\ncreating system views... ok\nloading pg_description... ok\ncreating conversions... ERROR: could not identify operator 679\n\nI tried to compile PostgreSQL with CFLAGS='-O0' to see if the above\nresulted from an optimization-induced compiler error, but I got the\nfollowing error:\n\n$ gcc -O0 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\n../../../../src/include/storage/s_lock.h: In function `tas':\n../../../../src/include/storage/s_lock.h:125: error: inconsistent\noperand constraints in an `asm'\n\nWhereas this works fine:\n\n$ gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\n$\n\nBTW, line 138 of s_lock.h is:\n\n#if defined(__arm__) || defined(__arm__)\n\nThat seems a little redundant.\n\nAnyway, I tried running initdb after compiling all of pgsql with \"-O0',\nexcept for the files that included s_lock.h, but make check still\nfailed:\n\ncreating information schema... 
ok\nvacuuming database template1...\n/house/neilc/pgsql/src/test/regress/./tmp_check/install//usr/local/pgsql/bin/initdb: line 882: 22035 Segmentation fault (core dumped) \"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\nANALYZE;\nVACUUM FULL FREEZE;\nEOF\n\nThe core file seems to indicate a stack overflow due to an infinitely\nrecursive function:\n\n(gdb) bt 25\n#0 0x4000000000645dc0 in hash_search ()\n#1 0x4000000000616930 in RelationSysNameCacheGetRelation ()\n#2 0x4000000000616db0 in RelationSysNameGetRelation ()\n#3 0x4000000000082e40 in relation_openr ()\n#4 0x4000000000083910 in heap_openr ()\n#5 0x400000000060e6b0 in ScanPgRelation ()\n#6 0x4000000000611d60 in RelationBuildDesc ()\n#7 0x4000000000616e70 in RelationSysNameGetRelation ()\n#8 0x4000000000082e40 in relation_openr ()\n#9 0x4000000000083910 in heap_openr ()\n#10 0x400000000060e6b0 in ScanPgRelation ()\n#11 0x4000000000611d60 in RelationBuildDesc ()\n#12 0x4000000000616e70 in RelationSysNameGetRelation ()\n#13 0x4000000000082e40 in relation_openr ()\n#14 0x4000000000083910 in heap_openr ()\n#15 0x400000000060e6b0 in ScanPgRelation ()\n#16 0x4000000000611d60 in RelationBuildDesc ()\n#17 0x4000000000616e70 in RelationSysNameGetRelation ()\n#18 0x4000000000082e40 in relation_openr ()\n#19 0x4000000000083910 in heap_openr ()\n#20 0x400000000060e6b0 in ScanPgRelation ()\n#21 0x4000000000611d60 in RelationBuildDesc ()\n#22 0x4000000000616e70 in RelationSysNameGetRelation ()\n#23 0x4000000000082e40 in relation_openr ()\n#24 0x4000000000083910 in heap_openr ()\n(More stack frames follow...)\n\n(It also dumps core in the same place during initdb if CFLAGS='-O' is\nspecified.)\n\nSo it looks like the Itanium port is a little broken. Does anyone have\nan idea what needs to be done to fix it?\n\n-Neil\n\n\n",
"msg_date": "Thu, 09 Oct 2003 23:54:53 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "\nIsn't it great how you have the same directory on every host so you can\ndownload once and run the same tests easily.\n\n\nNeil Conway wrote:\n> $ uname -a\n> Linux spe170 2.4.17-64 #1 Sat Mar 16 17:31:44 MST 2002 parisc64 unknown\n> $ gcc --version\n> 3.0.4\n> \n> 'make check' passes\n\nI didn't know there was a pa-risc-64 chip.\n\n> BTW, this platform doesn't have any code written for native spinlocks.\n> \n> (4)\n> \n> $ uname -a\n> Linux spe156 2.4.18-mckinley-smp #1 SMP Thu Jul 11 12:51:02 MDT 2002\n> ia64 unknown\n> $ gcc --version\n> \n> When you compile PostgreSQL without changing the CFLAGS configure picks,\n> the initdb required for 'make check' fails with:\n> \n> [...]\n> initializing pg_depend... ok\n> creating system views... ok\n> loading pg_description... ok\n> creating conversions... ERROR: could not identify operator 679\n> \n> I tried to compile PostgreSQL with CFLAGS='-O0' to see if the above\n> resulted from an optimization-induced compiler error, but I got the\n> following error:\n> \n> $ gcc -O0 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\n> ../../../../src/include/storage/s_lock.h: In function `tas':\n> ../../../../src/include/storage/s_lock.h:125: error: inconsistent\n> operand constraints in an `asm'\n> \n> Whereas this works fine:\n> \n> $ gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -D_GNU_SOURCE -c -o xlog.o xlog.c\n> $\n> \n> BTW, line 138 of s_lock.h is:\n> \n> #if defined(__arm__) || defined(__arm__)\n\nFix just committed. Thanks.\n\n> That seems a little redundant.\n> \n> Anyway, I tried running initdb after compiling all of pgsql with \"-O0',\n> except for the files that included s_lock.h, but make check still\n> failed:\n> \n> creating information schema... 
ok\n> vacuuming database template1...\n> /house/neilc/pgsql/src/test/regress/./tmp_check/install//usr/local/pgsql/bin/initdb: line 882: 22035 Segmentation fault (core dumped) \"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\n> ANALYZE;\n> VACUUM FULL FREEZE;\n> EOF\n> \n> The core file seems to indicate a stack overflow due to an infinitely\n> recursive function:\n> \n> (gdb) bt 25\n> #0 0x4000000000645dc0 in hash_search ()\n> #1 0x4000000000616930 in RelationSysNameCacheGetRelation ()\n> #2 0x4000000000616db0 in RelationSysNameGetRelation ()\n> #3 0x4000000000082e40 in relation_openr ()\n> #4 0x4000000000083910 in heap_openr ()\n> #5 0x400000000060e6b0 in ScanPgRelation ()\n> #6 0x4000000000611d60 in RelationBuildDesc ()\n> #7 0x4000000000616e70 in RelationSysNameGetRelation ()\n> #8 0x4000000000082e40 in relation_openr ()\n> #9 0x4000000000083910 in heap_openr ()\n> #10 0x400000000060e6b0 in ScanPgRelation ()\n> #11 0x4000000000611d60 in RelationBuildDesc ()\n> #12 0x4000000000616e70 in RelationSysNameGetRelation ()\n> #13 0x4000000000082e40 in relation_openr ()\n> #14 0x4000000000083910 in heap_openr ()\n> #15 0x400000000060e6b0 in ScanPgRelation ()\n> #16 0x4000000000611d60 in RelationBuildDesc ()\n> #17 0x4000000000616e70 in RelationSysNameGetRelation ()\n> #18 0x4000000000082e40 in relation_openr ()\n> #19 0x4000000000083910 in heap_openr ()\n> #20 0x400000000060e6b0 in ScanPgRelation ()\n> #21 0x4000000000611d60 in RelationBuildDesc ()\n> #22 0x4000000000616e70 in RelationSysNameGetRelation ()\n> #23 0x4000000000082e40 in relation_openr ()\n> #24 0x4000000000083910 in heap_openr ()\n> (More stack frames follow...)\n> \n> (It also dumps core in the same place during initdb if CFLAGS='-O' is\n> specified.)\n> \n> So it looks like the Itanium port is a little broken. Does anyone have\n> an idea what needs to be done to fix it?\n\nMy guess is that the compiler itself is broken --- what else could it\nbe?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 10 Oct 2003 00:01:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On 8.10.2003, at 21:31, Bruce Momjian wrote:\n> Well, this is really embarassing. I can't imagine why we would not set\n> at least -O on all platforms. Looking at the template files, I see\n> these have no optimization set:\n> \t\n> \tdarwin\n\nRegarding Darwin optimizations, Apple has introduced a \"-fast\" flag in \ntheir GCC 3.3 version that they recommend when compiling code for their \nnew G5 systems. Because of this, I foresee a lot of people defining \nCFLAGS=\"-fast\" on their systems.\n\nThis is problematic for PostgreSQL, however, since the -fast flag is \nthe equivalent of:\n\n-O3 -falign-loops-max-skip=15 -falign-jumps-max-skip=15 \n-falign-loops=16 -falign-jumps=16 -falign-functions=16 -malign-natural \n-ffast-math -fstrict-aliasing -frelax-aliasing -fgcse-mem-alias \n-funroll-loops -floop-transpose -floop-to-memset -finline-floor \n-mcpu=G5 -mpowerpc64 -mpowerpc-gpopt -mtune=G5 -fsched-interblock \n-fload-after-store --param max-gcse-passes=3 -fno-gcse-sm \n-fgcse-loop-depth -funit-at-a-time -fcallgraph-inlining \n-fdisable-typechecking-for-spec\n\nAt least the --fast-math part causes problems, seeing that PostgreSQL \nactually checks for the __FAST_MATH__ macro to make sure that it isn't \nturned on. There might be other problems with Apple's flags, but I \nthink that the __FAST_MATH__ check should be altered.\n\nAs you know, setting --fast-math in GCC is the equivalent of setting \n-fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, \n-ffinite-math-only and -fno-signaling-nans. What really should be done, \nI think, is adding the opposites of these flags (-fmath-errno, \n-fno-unsafe-math-optimizations, -ftrapping_math, -fno-finite-math-only \nand -fsignaling-nans) to the command line if __FAST_MATH__ is detected. \nThis would allow people to use CFLAGS=\"-fast\" on their G5s, beat some \nXeon speed records, and not worry about esoteric IEEE math standards. \nWhat do you guys think?\n\nGCC sets __FAST_MATH__ even if you counter a -ffast-math with the \nnegating flags above. This means that it is not currently possible to \nuse the -fast flag when compiling PostgreSQL at all. Instead, you have \nto go through all the flags Apple is setting and only pass on those \nthat don't break pg.\n\nmk\n\n",
"msg_date": "Sat, 11 Oct 2003 20:46:40 +0300",
"msg_from": "Marko Karppinen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, patch attached and applied. It centralizes the optimization\n> defaults into configure.in, rather than having CFLAGS= in the template\n> files.\n\nI think there's a problem here:\n\n> + # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n> + if test x\"$CFLAGS\" = x\"\"; then\n> + \tCFLAGS=\"-O\"\n> + fi\n> if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n> CFLAGS=\"$CFLAGS -g\"\n> fi\n\nsince this will cause \"configure --enable-debug\" to default to selecting\nCFLAGS=\"-O -g\" for non-gcc compilers. On a lot of compilers that\ncombination does not work, and will generate tons of useless warnings.\nI think it might be better to do\n\n if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n CFLAGS=\"$CFLAGS -g\"\n+ else\n+ # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n+ if test x\"$CFLAGS\" = x\"\"; then\n+ \tCFLAGS=\"-O\"\n+ fi\n fi\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Oct 2003 18:25:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery! "
},
{
"msg_contents": "\nDone as you suggested.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > OK, patch attached and applied. It centralizes the optimization\n> > defaults into configure.in, rather than having CFLAGS= in the template\n> > files.\n> \n> I think there's a problem here:\n> \n> > + # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n> > + if test x\"$CFLAGS\" = x\"\"; then\n> > + \tCFLAGS=\"-O\"\n> > + fi\n> > if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n> > CFLAGS=\"$CFLAGS -g\"\n> > fi\n> \n> since this will cause \"configure --enable-debug\" to default to selecting\n> CFLAGS=\"-O -g\" for non-gcc compilers. On a lot of compilers that\n> combination does not work, and will generate tons of useless warnings.\n> I think it might be better to do\n> \n> if test \"$enable_debug\" = yes && test \"$ac_cv_prog_cc_g\" = yes; then\n> CFLAGS=\"$CFLAGS -g\"\n> + else\n> + # configure sets CFLAGS to -O2 for gcc, so this is only for non-gcc\n> + if test x\"$CFLAGS\" = x\"\"; then\n> + \tCFLAGS=\"-O\"\n> + fi\n> fi\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 13 Oct 2003 20:48:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Marko Karppinen writes:\n\n> GCC sets __FAST_MATH__ even if you counter a -ffast-math with the\n> negating flags above. This means that it is not currently possible to\n> use the -fast flag when compiling PostgreSQL at all. Instead, you have\n> to go through all the flags Apple is setting and only pass on those\n> that don't break pg.\n\nThat sounds perfectly reasonable to me. Why should we develop elaborate\nworkarounds for compiler flags that are known to create broken code? I\nalso want to point out that I'm getting kind of tired of developing more\nand more workarounds for sloppy Apple engineering.\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Tue, 14 Oct 2003 17:13:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery!"
},
{
"msg_contents": "Marko Karppinen <[email protected]> writes:\n> At least the --fast-math part causes problems, seeing that PostgreSQL \n> actually checks for the __FAST_MATH__ macro to make sure that it isn't \n> turned on. There might be other problems with Apple's flags, but I \n> think that the __FAST_MATH__ check should be altered.\n\nRemoving the check is not acceptable --- we spent far too much time\nfighting bug reports that turned out to trace to -ffast-math.\nSee for example\nhttp://archives.postgresql.org/pgsql-bugs/2002-09/msg00169.php\n\n> As you know, setting --fast-math in GCC is the equivalent of setting \n> -fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, \n> -ffinite-math-only and -fno-signaling-nans.\n\nI suspect that -funsafe-math-optimizations is the only one of those that\nreally affects the datetime code, but I would be quite worried about the\nside-effects of any of them on the float8 arithmetic routines. Also I\nthink the behavior of -ffast-math has changed over time; in the gcc\n2.95.3 manual I see none of the above and only the description\n\n`-ffast-math'\n This option allows GCC to violate some ANSI or IEEE rules and/or\n specifications in the interest of optimizing code for speed. For\n example, it allows the compiler to assume arguments to the `sqrt'\n function are non-negative numbers and that no floating-point values\n are NaNs.\n\nSince we certainly do use NaNs, it would be very bad to allow -ffast-math\nin gcc 2.95.\n\ngcc 3.2 has some but not all of the sub-flags you list above, so\napparently the behavior changed again as of gcc 3.3.\n\nThis means that relaxing the check would require (a) finding out which\nof the sub-flags break our code and which don't; (b) finding out how the\nanswer to (a) has varied with gcc release; and (c) finding out how we\ncan test whether a given sub-flag is set --- are there #defines for each\nof them in gcc 3?\n\nThis does not sound real practical to me...\n\n> This would allow people to use CFLAGS=\"-fast\" on their G5s, beat some \n> Xeon speed records, and not worry about esoteric IEEE math standards. \n\nIn the words of the sage, \"I can make this code *arbitrarily* fast ...\nif it doesn't have to give the right answer.\" Those \"esoteric\"\nstandards make the difference between printing 5:00:00 and printing\n4:59:60.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Oct 2003 12:52:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sun performance - Major discovery! "
},
{
"msg_contents": "On 14.10.2003, at 19:52, Tom Lane wrote:\n> This means that relaxing the check would require (a) finding out which\n> of the sub-flags break our code and which don't; (b) finding out how \n> the\n> answer to (a) has varied with gcc release; and (c) finding out how we\n> can test whether a given sub-flag is set --- are there #defines for \n> each\n> of them in gcc 3?\n\nOkay, I can see how that makes this unpractical to implement. Thanks.\n\nThe current error message is \"do not put -ffast-math in CFLAGS\"; does\nsomeone have an idea for a better text that doesn't imply that you\nactually /have/ --ffast-math in CFLAGS? It'd be good to acknowledge\nthat it can be set implicitly, too.\n\nAnd on the same subject:\n\nOn 14.10.2003, at 18:13, Peter Eisentraut wrote:\n> That sounds perfectly reasonable to me. Why should we develop \n> elaborate\n> workarounds for compiler flags that are known to create broken code? I\n> also want to point out that I'm getting kind of tired of developing \n> more\n> and more workarounds for sloppy Apple engineering.\n\nPeter, you are free to consider your current environment to be the\npeak of perfection, but that doesn't mean that the only reason for\ndifferences between your system and others' is the sloppiness of\ntheir engineering.\n\nI'm not aware of any Darwin-specific \"workarounds\" in the tree\nright now; the only thing close to that is the support for Apple's\ntwo-level namespaces feature. And while you can argue the relative\nmerits of Apple's approach, the reason for its existence isn't\nsloppiness and the support for it that was implemented by Tom\nmost certainly isn't a workaround.\n\nThe fact of the matter is that Mac OS X has about ten million active\nusers, and when one of these people is looking for an RDBMS, he's\ngonna go for one that compiles and works great on his system, rather\nworrying if his platform is optimal for running PostgreSQL. Supporting\nthis platform well is absolutely crucial to the overall adoption of pg,\nand even if you consider yourself to be above such pedestrian\nconcerns, many people who have to make the business case for putting\nmoney into PostgreSQL development most definitely think otherwise.\n\nmk\n\n",
"msg_date": "Tue, 14 Oct 2003 21:02:45 +0300",
"msg_from": "Marko Karppinen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Sun performance - Major discovery! "
},
{
"msg_contents": "Marko Karppinen writes:\n\n> I'm not aware of any Darwin-specific \"workarounds\" in the tree\n> right now; the only thing close to that is the support for Apple's\n> two-level namespaces feature. And while you can argue the relative\n> merits of Apple's approach, the reason for its existence isn't\n> sloppiness and the support for it that was implemented by Tom\n> most certainly isn't a workaround.\n\nPostgreSQL is only part of the deal; in other projects, people have to\nfight with different kinds of problems. Let me just point out the broken\nprecompiler, the namespace level thing (which might be a fine feature, but\nthe way it was shoved in was not), using zsh as the default \"Bourne\"\nshell, using different file types for loadable modules and linkable shared\nlibraries, standard system paths with spaces in them, and there may be\nmore that I don't remember now. In my experience, the whole system just\nhas been very unpleasant to develop portable software for since the day it\nappeared. You're not at fault for that, but please understand that,\nconsidering all this, the last thing I want to spend time on is improving\nthe user response mechanics for a \"don't do that then\" problem.\n\n> The fact of the matter is that Mac OS X has about ten million active\n> users, and when one of these people is looking for an RDBMS, he's\n> gonna go for one that compiles and works great on his system, rather\n> worrying if his platform is optimal for running PostgreSQL. Supporting\n> this platform well is absolutely crucial to the overall adoption of pg,\n> and even if you consider yourself to be above such pedestrian\n> concerns, many people who have to make the business case for putting\n> money into PostgreSQL development most definitely think otherwise.\n\nEveryone shall be happy if they don't use compiler switches that are known\nto create broken code.\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Tue, 14 Oct 2003 22:41:41 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Sun performance - Major discovery! "
}
] |
[
{
"msg_contents": "Hi folks. I notice that immutable flag does nothing when i invoke\nmy plpgsql function within one session with same args.\n\n\ntele=# SELECT version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC 2.96\n\n\n\nAt first EXPLAIN ANALYZE shown strange runtime :)\n\n[15:41]/0:ant@monstr:~>time psql -c 'EXPLAIN ANALYZE SELECT calc_total(1466476, 1062363600, 1064955599)' tele\n QUERY PLAN\n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.00..0.00 rows=1 loops=1)\n Total runtime: 0.02 msec\n ^^^^^^^^^\n(2 rows)\n\nreal 0m19.282s\n ^^^^^^^^^\n\n\n\n\nAt second. calc_total() is immutable function:\n\ntele=# SELECT provolatile from pg_proc where proname = 'calc_total' and pronargs =3;\n provolatile\n-------------\n i\n\nbut it seems that it's not cached in one session:\n\n[15:38]/0:ant@monstr:~>psql tele\nWelcome to psql 7.3.4, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntele=# EXPLAIN ANALYZE SELECT calc_total(1466476, 1062363600, 1064955599);\n QUERY PLAN\n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.00..0.00 rows=1 loops=1)\n Total runtime: 0.02 msec\n(2 rows)\n\ntele=# EXPLAIN ANALYZE SELECT calc_total(1466476, 1062363600, 1064955599);\n QUERY PLAN\n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.00..0.00 rows=1 loops=1)\n Total runtime: 0.02 msec\n(2 rows)\n\n\nWhat i miss?\n\nThanks,\n Andriy Tkachuk\n\nhttp://www.imt.com.ua\n\n",
"msg_date": "Wed, 8 Oct 2003 16:16:52 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
},
{
"msg_contents": "Andriy Tkachuk <[email protected]> writes:\n> At second. calc_total() is immutable function:\n> but it seems that it's not cached in one session:\n\nIt's not supposed to be.\n\nThe reason the \"runtime\" is small in your example is that the planner\nexecutes the function call while preparing the plan, and this isn't\ncounted in EXPLAIN's runtime measurement. There's no claim anywhere\nthat the results of such an evaluation would be saved for other plans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Oct 2003 18:22:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql "
},
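A short, hedged sketch of how the per-call cost can be avoided explicitly, given that the constant folding Tom describes happens while a plan is built rather than once per session. The calc_total() call and its arguments are taken from the first post in this thread; that PREPARE folds the constant-argument call while building the plan is an assumption for this release, so a temp-table variant is shown as the unambiguous fallback.

    -- Plan the statement once; each EXECUTE then reuses that plan, so the
    -- expensive IMMUTABLE call is not repeated (assumption: the planner
    -- simplifies the constant-argument call at PREPARE time).
    PREPARE cached_total AS
        SELECT calc_total(1466476, 1062363600, 1064955599) AS total;
    EXECUTE cached_total;
    EXECUTE cached_total;

    -- Fallback sketch: materialize the value, so later reads never call the
    -- function at all.
    CREATE TEMP TABLE calc_cache AS
        SELECT calc_total(1466476, 1062363600, 1064955599) AS total;
    SELECT total FROM calc_cache;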
{
"msg_contents": "On Wed, 8 Oct 2003, Tom Lane wrote:\n\n> Andriy Tkachuk <[email protected]> writes:\n> > At second. calc_total() is immutable function:\n> > but it seems that it's not cached in one session:\n>\n> It's not supposed to be.\n\nbut it's written id doc:\n\n IMMUTABLE indicates that the function always returns the same\n result when given the same argument values; that is, it does not\n do database lookups or otherwise use information not directly\n present in its parameter list. If this option is given, any call\n of the function with all-constant arguments can be immediately\n replaced with the function value.\n\nI meant that the result of calc_total() is not \"immediately replaced with the function value\"\nas it's written in doc, but it takes as long time as the first function call\nin the session (with the same arguments).\n\nMaybe i misunderstand something?\n\nThank you,\n Andriy Tkachuk.\n\nhttp://www.imt.com.ua\n\n",
"msg_date": "Thu, 9 Oct 2003 10:25:33 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
},
{
"msg_contents": "Andriy Tkachuk wrote:\n> On Wed, 8 Oct 2003, Tom Lane wrote:\n> \n> \n>>Andriy Tkachuk <[email protected]> writes:\n>>\n>>>At second. calc_total() is immutable function:\n>>>but it seems that it's not cached in one session:\n>>\n>>It's not supposed to be.\n> \n> \n> but it's written id doc:\n> \n> IMMUTABLE indicates that the function always returns the same\n> result when given the same argument values; that is, it does not\n> do database lookups or otherwise use information not directly\n> present in its parameter list. If this option is given, any call\n> of the function with all-constant arguments can be immediately\n> replaced with the function value.\n\nThe doc say \"can be\" not must and will be.\n\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Thu, 09 Oct 2003 18:41:50 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Gaetano Mendola wrote:\n\n> Andriy Tkachuk wrote:\n> > On Wed, 8 Oct 2003, Tom Lane wrote:\n> >\n> >\n> >>Andriy Tkachuk <[email protected]> writes:\n> >>\n> >>>At second. calc_total() is immutable function:\n> >>>but it seems that it's not cached in one session:\n> >>\n> >>It's not supposed to be.\n> >\n> >\n> > but it's written id doc:\n> >\n> > IMMUTABLE indicates that the function always returns the same\n> > result when given the same argument values; that is, it does not\n> > do database lookups or otherwise use information not directly\n> > present in its parameter list. If this option is given, any call\n> > of the function with all-constant arguments can be immediately\n> > replaced with the function value.\n>\n> The doc say \"can be\" not must and will be.\n\nok, but on what it depends on?\n\nthanks,\n andriy\n\nhttp://www.imt.com.ua\n\n",
"msg_date": "Fri, 10 Oct 2003 10:15:47 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
},
{
"msg_contents": "Andriy Tkachuk wrote:\n\n> On Thu, 9 Oct 2003, Gaetano Mendola wrote:\n>>Andriy Tkachuk wrote:\n>>>On Wed, 8 Oct 2003, Tom Lane wrote:\n>>>>Andriy Tkachuk <[email protected]> writes:\n>>>>>At second. calc_total() is immutable function:\n>>>>>but it seems that it's not cached in one session:\n>>>>\n>>>>It's not supposed to be.\n>>>\n>>>\n>>>but it's written id doc:\n>>>\n>>> IMMUTABLE indicates that the function always returns the same\n>>> result when given the same argument values; that is, it does not\n>>> do database lookups or otherwise use information not directly\n>>> present in its parameter list. If this option is given, any call\n>>> of the function with all-constant arguments can be immediately\n>>> replaced with the function value.\n>>\n>>The doc say \"can be\" not must and will be.\n> \n> \n> ok, but on what it depends on?\n\nFor example in:\n\nselect * from T where f_immutable ( 4 ) = T.id;\n\n\nin this case f_immutable will be evaluated once.\n\n\n\nselect * from T where f_immutable ( T.id ) = X;\n\nhere f_immutable will be avaluated for each different T.id.\n\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Sun, 12 Oct 2003 23:28:56 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
},
{
"msg_contents": "Oh, Gaetano, didn't you see in my first letter of this topic\nthat args are the same in session (they are constant)?\n\nThe first paragraf of my first letter of this topic was:\n< Hi folks. I notice that immutable flag does nothing when i invoke\n< my plpgsql function within one session with same args.\n ^^^^^^^^^^^^^^\n... ok, mabe i should say \"constant args\" as in doc.\n\nAnyway, thank you for attention and willing to help.\n\nregards, andriy tkachuk (http://imt.com.ua)\n\nOn Sun, 12 Oct 2003, Gaetano Mendola wrote:\n\n> Andriy Tkachuk wrote:\n>\n> > On Thu, 9 Oct 2003, Gaetano Mendola wrote:\n> >>Andriy Tkachuk wrote:\n> >>>On Wed, 8 Oct 2003, Tom Lane wrote:\n> >>>>Andriy Tkachuk <[email protected]> writes:\n> >>>>>At second. calc_total() is immutable function:\n> >>>>>but it seems that it's not cached in one session:\n> >>>>\n> >>>>It's not supposed to be.\n> >>>\n> >>>\n> >>>but it's written id doc:\n> >>>\n> >>> IMMUTABLE indicates that the function always returns the same\n> >>> result when given the same argument values; that is, it does not\n> >>> do database lookups or otherwise use information not directly\n> >>> present in its parameter list. If this option is given, any call\n> >>> of the function with all-constant arguments can be immediately\n> >>> replaced with the function value.\n> >>\n> >>The doc say \"can be\" not must and will be.\n> >\n> >\n> > ok, but on what it depends on?\n>\n> For example in:\n>\n> select * from T where f_immutable ( 4 ) = T.id;\n>\n>\n> in this case f_immutable will be evaluated once.\n>\n>\n>\n> select * from T where f_immutable ( T.id ) = X;\n>\n> here f_immutable will be avaluated for each different T.id.\n>\n> Regards\n> Gaetano Mendola\n\n",
"msg_date": "Mon, 13 Oct 2003 10:32:03 +0300 (EEST)",
"msg_from": "Andriy Tkachuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IMMUTABLE function's flag do not work: 7.3.4, plpgsql"
}
] |
[
{
"msg_contents": "All,\n\nAnyone have any suggestions on how to efficiently compare\nrows in the same table? This table has 637 columns to be\ncompared and 642 total columns.\n\nTIA,\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 10:07:36 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compare rows"
},
{
"msg_contents": "Greg,\n\n> Anyone have any suggestions on how to efficiently compare\n> rows in the same table? This table has 637 columns to be\n> compared and 642 total columns.\n\n637 columns? Are you sure that's normalized? It's hard for me to conceive \nof a circumstance where that many columns would be necessary.\n\nIf this isn't a catastrophic normalization problem (which it sounds like), \nthen you will probably still need to work through procedureal normalization \ncode, as SQL simply doesn't offer any way around naming all the columns by \nhand. Perhaps you could describe the problem in more detail?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 09:01:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Josh Berkus wrote:\n> Greg,\n> \n> \n>>Anyone have any suggestions on how to efficiently compare\n>>rows in the same table? This table has 637 columns to be\n>>compared and 642 total columns.\n> \n> \n> 637 columns? Are you sure that's normalized? It's hard for me to conceive \n> of a circumstance where that many columns would be necessary.\n> \n> If this isn't a catastrophic normalization problem (which it sounds like), \n> then you will probably still need to work through procedureal normalization \n> code, as SQL simply doesn't offer any way around naming all the columns by \n> hand. Perhaps you could describe the problem in more detail?\n> \n\nThe data represents metrics at a point in time on a system for\nnetwork, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\nspeed, and whatever else can be gathered.\n\nWe arrived at this one 642 column table after testing the whole\nprocess from data gathering, methods of temporarily storing then\nloading to the database. Initially, 37+ tables were in use but\nthe one big-un has saved us over 3.4 minutes.\n\nThe reason for my initial question was this. We save changes only.\nIn other words, if system S has row T1 for day D1 and if on day D2\nwe have another row T1 (excluding our time column) we don't want\nto save it.\n\nThat said, if the 3.4 minutes gets burned during our comparison which\nsaves changes only we may look at reverting to separate tables. There\nare only 1,700 to 3,000 rows on average per load.\n\nOh, PostgreSQL 7.3.3, PHP 4.3.1, RedHat 7.3, kernel 2.4.20-18.7smp,\n2x1.4GHz PIII, 2GB memory, and 1Gbs SAN w/ Hitachi 9910 LUN's.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 12:27:41 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Greg Spiegelberg wrote:\n\n> The data represents metrics at a point in time on a system for\n> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n> speed, and whatever else can be gathered.\n> \n> We arrived at this one 642 column table after testing the whole\n> process from data gathering, methods of temporarily storing then\n> loading to the database. Initially, 37+ tables were in use but\n> the one big-un has saved us over 3.4 minutes.\n\nI am sure you changed the desing because those 3.4 minutes were significant to you.\n\n\nBut I suggest you go back to 37 table design and see where bottleneck is. \nProbably you can tune a join across 37 tables much better than optimizing a \ndifference between two 637 column rows.\n\nBesides such a large number of columns will cost heavily in terms of \ndefragmentation across pages. The wasted space and IO therof could be \nsignificant issue for large number of rows.\n\n642 column is a bad design. Theoretically and from implementation of postgresql \npoint of view. You did it because of speed problem. Now if we can resolve those \nspeed problems, perhaps you could go back to other design.\n\nIs it feasible for you right now or you are too much committed to the big table?\n\nAnd of course, then it is routing postgresql tuning exercise..:-)\n\n Shridhar\n\n\n",
"msg_date": "Wed, 08 Oct 2003 22:07:45 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Greg Spiegelberg wrote:\n> The reason for my initial question was this. We save changes only.\n> In other words, if system S has row T1 for day D1 and if on day D2\n> we have another row T1 (excluding our time column) we don't want\n> to save it.\n\nIt still isn't entirely clear to me what you are trying to do, but \nperhaps some sort of calculated checksum or hash would work to determine \nif the data has changed?\n\nJoe\n\n\n",
"msg_date": "Wed, 08 Oct 2003 09:46:43 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Greg,\n\n> The data represents metrics at a point in time on a system for\n> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n> speed, and whatever else can be gathered.\n>\n> We arrived at this one 642 column table after testing the whole\n> process from data gathering, methods of temporarily storing then\n> loading to the database. Initially, 37+ tables were in use but\n> the one big-un has saved us over 3.4 minutes.\n\nHmmm ... if few of those columns are NULL, then you are probably right ... \nthis is probably the most normalized design. If, however, many of columns \nare NULL the majority of the time, then the design you should be using is a \nvertial child table, of the form ( value_type | value ). \n\nSuch a vertical child table would also make your comparison between instances \n*much* easier, as it could be executed via a simple 4-table-outer-join and 3 \nwhere clauses. So even if you don't have a lot of NULLs, you probably want \nto consider this.\n\n> The reason for my initial question was this. We save changes only.\n> In other words, if system S has row T1 for day D1 and if on day D2\n> we have another row T1 (excluding our time column) we don't want\n> to save it.\n\nIf re-designing the table per the above is not a possibility, then I'd suggest \nthat you locate 3-5 columns that:\n1) are not NULL for any row;\n2) combined, serve to identify a tiny subset of rows, i.e. 3% or less of the \ntable.\n\nThen put a multi-column index on those columns, and do your comparison. \nHopefully the planner should pick up on the availablity of the index and scan \nonly the rows retrieved by the index. However, there is the distinct \npossibility that the presence of 637 WHERE criteria will confuse the planner, \ncausing it to resort to a full table seq scan; in that case, you will want to \nuse a subselect to force the issue.\n\nOr, as Joe Conway suggested, you could figure out some kind of value hash that \nuniquely identifies your rows.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 10:10:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
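A rough sketch of the multi-column-index suggestion above. The narrow table and its column names are hypothetical stand-ins for the real 642-column table and its seven always-populated columns; only the shape of the index and of the comparison is the point here.

    -- Hypothetical narrow stand-in for the wide metrics table.
    CREATE TABLE metrics_demo (
        system_id integer NOT NULL,
        day       date    NOT NULL,
        os        varchar(32),
        patch     varchar(32)
    );

    -- Multi-column index over columns that are never NULL and, combined,
    -- select only a small slice of the table.
    CREATE INDEX metrics_demo_ident ON metrics_demo (system_id, day);

    -- The change check then compares only the rows the index narrows down;
    -- NULL-able columns would need COALESCE (or IS DISTINCT FROM, where
    -- available) instead of a bare <>.
    SELECT t.system_id
    FROM   metrics_demo t, metrics_demo y
    WHERE  t.system_id = y.system_id
    AND    t.day = '2003-10-02'
    AND    y.day = '2003-10-01'
    AND    (t.os <> y.os OR t.patch <> y.patch);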
{
"msg_contents": "Comment interjected below.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Greg\n> Spiegelberg\n> Sent: Wednesday, October 08, 2003 12:28 PM\n> To: PgSQL Performance ML\n> Subject: Re: [PERFORM] Compare rows\n>\n>\n> Josh Berkus wrote:\n> > Greg,\n> >\n> >\n> >>Anyone have any suggestions on how to efficiently compare\n> >>rows in the same table? This table has 637 columns to be\n> >>compared and 642 total columns.\n> >\n> >\n> > 637 columns? Are you sure that's normalized? It's hard for\n> me to conceive\n> > of a circumstance where that many columns would be necessary.\n> >\n> > If this isn't a catastrophic normalization problem (which it\n> sounds like),\n> > then you will probably still need to work through procedureal\n> normalization\n> > code, as SQL simply doesn't offer any way around naming all the\n> columns by\n> > hand. Perhaps you could describe the problem in more detail?\n> >\n>\n> The data represents metrics at a point in time on a system for\n> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n> speed, and whatever else can be gathered.\n>\n> We arrived at this one 642 column table after testing the whole\n> process from data gathering, methods of temporarily storing then\n> loading to the database. Initially, 37+ tables were in use but\n> the one big-un has saved us over 3.4 minutes.\n>\n> The reason for my initial question was this. We save changes only.\n> In other words, if system S has row T1 for day D1 and if on day D2\n> we have another row T1 (excluding our time column) we don't want\n> to save it.\n\nUm, isn't this a purpose of a key? And I am confused. Do you want to UPDATE\nthe changed columns? or skip it all together?\nYou have: (System, Day, T1 | T2 |...Tn )\nBut should use:\nMaster: (System, Day, Table={T1, T2, .. Tn)) [Keys: sytem, day, table]\nT1 { System, Day, {other fields}} [foreign keys [system, day]\n\nThis should allow you to find your dupes very fast (indexes!) and save a lot\nof space (few/no null columns), and now you don't have to worry about\ncomparing fields, and moving huge result sets around.\n\n\n> That said, if the 3.4 minutes gets burned during our comparison which\n> saves changes only we may look at reverting to separate tables. There\n> are only 1,700 to 3,000 rows on average per load.\n>\n> Oh, PostgreSQL 7.3.3, PHP 4.3.1, RedHat 7.3, kernel 2.4.20-18.7smp,\n> 2x1.4GHz PIII, 2GB memory, and 1Gbs SAN w/ Hitachi 9910 LUN's.\n>\n> Greg\n>\n> --\n> Greg Spiegelberg\n> Sr. Product Development Engineer\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Cranel. Technology. Integrity. Focus.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n",
"msg_date": "Wed, 08 Oct 2003 13:13:26 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "See below.\n\n\nShridhar Daithankar wrote:\n> Greg Spiegelberg wrote:\n> \n>> The data represents metrics at a point in time on a system for\n>> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n>> speed, and whatever else can be gathered.\n>>\n>> We arrived at this one 642 column table after testing the whole\n>> process from data gathering, methods of temporarily storing then\n>> loading to the database. Initially, 37+ tables were in use but\n>> the one big-un has saved us over 3.4 minutes.\n> \n> \n> I am sure you changed the desing because those 3.4 minutes were \n> significant to you.\n> \n> \n> But I suggest you go back to 37 table design and see where bottleneck \n> is. Probably you can tune a join across 37 tables much better than \n> optimizing a difference between two 637 column rows.\n\nThe bottleneck is across the board.\n\nOn the data collection side I'd have to manage 37 different methods\nand output formats whereas now I have 1 standard associative array\nthat gets reset in memory for each \"row\" stored.\n\nOn the data validation side, I have one routine to check the incoming\ndata for errors, missing columns, data types and so on. Quick & easy.\n\nOn the data import it's easier and more efficient to do one COPY for\na standard format from one program instead of multiple programs or\nCOPY's. We were using 37 PHP scripts to handle the import and the\ntime it took to load, execute, exit, reload each script was killing\nus. Now, 1 PHP and 1 COPY.\n\n\n> Besides such a large number of columns will cost heavily in terms of \n> defragmentation across pages. The wasted space and IO therof could be \n> significant issue for large number of rows.\n\nNo arguement here.\n\n\n> 642 column is a bad design. Theoretically and from implementation of \n> postgresql point of view. You did it because of speed problem. Now if we \n> can resolve those speed problems, perhaps you could go back to other \n> design.\n> \n> Is it feasible for you right now or you are too much committed to the \n> big table?\n\nPretty commited though I do try to be open.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 14:05:34 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Joe Conway wrote:\n> Greg Spiegelberg wrote:\n> \n>> The reason for my initial question was this. We save changes only.\n>> In other words, if system S has row T1 for day D1 and if on day D2\n>> we have another row T1 (excluding our time column) we don't want\n>> to save it.\n> \n> \n> It still isn't entirely clear to me what you are trying to do, but \n> perhaps some sort of calculated checksum or hash would work to determine \n> if the data has changed?\n\nBest example I have is this.\n\nYou're running Solaris 5.8 with patch 108528-X and you're collecting\nthat data daily. Would you want option 1 or 2 below?\n\nOption 1 - Store it all\n Day | OS | Patch\n------+-------------+-----------\nOct 1 | Solaris 5.8 | 108528-12\nOct 2 | Solaris 5.8 | 108528-12\nOct 3 | Solaris 5.8 | 108528-13\nOct 4 | Solaris 5.8 | 108528-13\nOct 5 | Solaris 5.8 | 108528-13\nand so on...\n\nTo find what you're running:\nselect * from table order by day desc limit 1;\n\nTo find when it last changed however takes a join.\n\n\nOption 2 - Store only changes\n Day | OS | Patch\n------+-------------+-----------\nOct 1 | Solaris 5.8 | 108528-12\nOct 3 | Solaris 5.8 | 108528-13\n\nTo find what you're running:\nselect * from table order by day desc limit 1;\n\nTo find when it last changed:\nselect * from table order by day desc limit 1 offset 1;\n\nI selected Option 2 because I'm dealing with mounds of complicated and\nvarying data formats and didn't want to have to write complex queries\nfor everything.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 14:39:54 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
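For what it is worth, the "when did it last change" lookup that Option 1 requires can be written without touching every column. This is only a sketch: the table name is hypothetical, and it assumes a value never reverts to one it held earlier.

    -- Option 1 layout: one row per day, most of them identical.
    -- "When did the patch last change?" = the first day the current value
    -- appeared.
    SELECT min(day) AS changed_on
    FROM   sysinfo
    WHERE  patch = (SELECT patch FROM sysinfo ORDER BY day DESC LIMIT 1);

With the Option 2 layout no such join is needed, which is part of why it was chosen.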
{
"msg_contents": "It's still not quite clear what you're trying to do. Many people's gut\nreaction is that you're doing something strange with so many columns in\na table.\n\nUsing your example, a different approach might be to do this instead:\n\n Day | Name | Value\n ------+-------------+-----------\n Oct 1 | OS | Solaris 5.8 \n Oct 1 | Patch | 108528-12\n Oct 3 | Patch | 108528-13\n\n\nYou end up with lots more rows, fewer columns, but it might be\nharder to query the table. On the other hand, queries should run quite\nfast, since it's a much more \"normal\" table.\n\nBut without knowing more, and seeing what the other columns look like,\nit's hard to tell.\n\nDror\n\nOn Wed, Oct 08, 2003 at 02:39:54PM -0400, Greg Spiegelberg wrote:\n> Joe Conway wrote:\n> >Greg Spiegelberg wrote:\n> >\n> >>The reason for my initial question was this. We save changes only.\n> >>In other words, if system S has row T1 for day D1 and if on day D2\n> >>we have another row T1 (excluding our time column) we don't want\n> >>to save it.\n> >\n> >\n> >It still isn't entirely clear to me what you are trying to do, but \n> >perhaps some sort of calculated checksum or hash would work to determine \n> >if the data has changed?\n> \n> Best example I have is this.\n> \n> You're running Solaris 5.8 with patch 108528-X and you're collecting\n> that data daily. Would you want option 1 or 2 below?\n> \n> Option 1 - Store it all\n> Day | OS | Patch\n> ------+-------------+-----------\n> Oct 1 | Solaris 5.8 | 108528-12\n> Oct 2 | Solaris 5.8 | 108528-12\n> Oct 3 | Solaris 5.8 | 108528-13\n> Oct 4 | Solaris 5.8 | 108528-13\n> Oct 5 | Solaris 5.8 | 108528-13\n> and so on...\n> \n> To find what you're running:\n> select * from table order by day desc limit 1;\n> \n> To find when it last changed however takes a join.\n> \n> \n> Option 2 - Store only changes\n> Day | OS | Patch\n> ------+-------------+-----------\n> Oct 1 | Solaris 5.8 | 108528-12\n> Oct 3 | Solaris 5.8 | 108528-13\n> \n> To find what you're running:\n> select * from table order by day desc limit 1;\n> \n> To find when it last changed:\n> select * from table order by day desc limit 1 offset 1;\n> \n> I selected Option 2 because I'm dealing with mounds of complicated and\n> varying data formats and didn't want to have to write complex queries\n> for everything.\n> \n> Greg\n> \n> -- \n> Greg Spiegelberg\n> Sr. Product Development Engineer\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Cranel. Technology. Integrity. Focus.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Wed, 8 Oct 2003 11:55:27 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Dror,\n\nI gave this some serious thought at first. I only deal with\nint8, numeric(24,12) and varchar(32) columns which I could\nreduce to 3 different tables. Problem was going from 1700-3000\nrows to around 300,000-1,000,000 rows per system per day that\nis sending data to our database.\n\nBTW, the int8 and numeric(24,12) are for future expansion.\nI hate limits.\n\nGreg\n\n\nDror Matalon wrote:\n> It's still not quite clear what you're trying to do. Many people's gut\n> reaction is that you're doing something strange with so many columns in\n> a table.\n> \n> Using your example, a different approach might be to do this instead:\n> \n> Day | Name | Value\n> ------+-------------+-----------\n> Oct 1 | OS | Solaris 5.8 \n> Oct 1 | Patch | 108528-12\n> Oct 3 | Patch | 108528-13\n> \n> \n> You end up with lots more rows, fewer columns, but it might be\n> harder to query the table. On the other hand, queries should run quite\n> fast, since it's a much more \"normal\" table.\n> \n> But without knowing more, and seeing what the other columns look like,\n> it's hard to tell.\n> \n> Dror\n\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 15:07:30 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Josh Berkus wrote:\n> Greg,\n> \n> \n>>The data represents metrics at a point in time on a system for\n>>network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n>>speed, and whatever else can be gathered.\n>>\n>>We arrived at this one 642 column table after testing the whole\n>>process from data gathering, methods of temporarily storing then\n>>loading to the database. Initially, 37+ tables were in use but\n>>the one big-un has saved us over 3.4 minutes.\n> \n> \n> Hmmm ... if few of those columns are NULL, then you are probably right ... \n> this is probably the most normalized design. If, however, many of columns \n> are NULL the majority of the time, then the design you should be using is a \n> vertial child table, of the form ( value_type | value ). \n> \n> Such a vertical child table would also make your comparison between instances \n> *much* easier, as it could be executed via a simple 4-table-outer-join and 3 \n> where clauses. So even if you don't have a lot of NULLs, you probably want \n> to consider this.\n\nYou lost me on that one. What's a \"vertical child table\"?\n\nStatistically, about 6% of the rows use more than 200 of the columns,\n27% of the rows use 80-199 or more columns, 45% of the rows use 40-79\ncolumns and the remaining 22% of the rows use 39 or less of the columns.\nThat is a lot of NULLS. Never gave that much thought.\n\nTo ensure query efficiency, hide the NULLs and simulate the multiple\ntables I have a boatload of indexes, ensure that every query makees use\nof an index, and have created 37 views. It's worked pretty well so\nfar\n\n\n>>The reason for my initial question was this. We save changes only.\n>>In other words, if system S has row T1 for day D1 and if on day D2\n>>we have another row T1 (excluding our time column) we don't want\n>>to save it.\n> \n> \n> If re-designing the table per the above is not a possibility, then I'd suggest \n> that you locate 3-5 columns that:\n> 1) are not NULL for any row;\n> 2) combined, serve to identify a tiny subset of rows, i.e. 3% or less of the \n> table.\n\nThere are always, always, always 7 columns that contain data.\n\n\n> Then put a multi-column index on those columns, and do your comparison. \n> Hopefully the planner should pick up on the availablity of the index and scan \n> only the rows retrieved by the index. However, there is the distinct \n> possibility that the presence of 637 WHERE criteria will confuse the planner, \n> causing it to resort to a full table seq scan; in that case, you will want to \n> use a subselect to force the issue.\n\nThat's what I'm trying to avoid is a big WHERE (c1,c2,...,c637) <> \n(d1,d2,...,d637) clause. Ugly.\n\n\n> Or, as Joe Conway suggested, you could figure out some kind of value hash that \n> uniquely identifies your rows.\n\nI've given that some though and though appealing I don't think I'd care\nto spend the CPU cycles to do it. Best way I can figure to accomplish\nit would be to generate an MD5 on each row without the timestamp and\nstore it in another column, create an index on the MD5 column, generate\nMD5 on each line I want to insert. Makes for a simple WHERE...\n\nOkay. I'll give it a whirl. What's one more column, right?\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 08 Oct 2003 15:10:53 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
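A sketch of what that one extra column could look like. Table name, column names, and the hash literal are illustrative only; as described above, the hash would be computed in the loader (for example with PHP's md5()) over every column except the timestamp.

    CREATE TABLE metrics (
        system_id integer  NOT NULL,
        taken_at  date     NOT NULL,
        os        varchar(32),
        patch     varchar(32),
        row_md5   char(32) NOT NULL,
        PRIMARY KEY (system_id, taken_at)
    );

    CREATE INDEX metrics_md5_idx ON metrics (system_id, row_md5);

    -- Insert only when the most recent stored row for this system carries a
    -- different hash, i.e. something actually changed.
    INSERT INTO metrics (system_id, taken_at, os, patch, row_md5)
    SELECT 42, '2003-10-08', 'Solaris 5.8', '108528-13',
           'd41d8cd98f00b204e9800998ecf8427e'
    WHERE NOT EXISTS (
        SELECT 1
        FROM   metrics m
        WHERE  m.system_id = 42
        AND    m.row_md5   = 'd41d8cd98f00b204e9800998ecf8427e'
        AND    m.taken_at  = (SELECT max(taken_at)
                              FROM   metrics
                              WHERE  system_id = 42)
    );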
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Greg\n> Spiegelberg\n> Sent: Wednesday, October 08, 2003 3:11 PM\n> To: PgSQL Performance ML\n> Subject: Re: [PERFORM] Compare rows\n> \n> \n> Josh Berkus wrote:\n> > Greg,\n> > \n> > \n> >>The data represents metrics at a point in time on a system for\n> >>network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n> >>speed, and whatever else can be gathered.\n> >>\n> >>We arrived at this one 642 column table after testing the whole\n> >>process from data gathering, methods of temporarily storing then\n> >>loading to the database. Initially, 37+ tables were in use but\n> >>the one big-un has saved us over 3.4 minutes.\n> > \n> > \n> > Hmmm ... if few of those columns are NULL, then you are \n> probably right ... \n> > this is probably the most normalized design. If, however, \n> many of columns \n> > are NULL the majority of the time, then the design you should \n> be using is a \n> > vertial child table, of the form ( value_type | value ). \n> > \n> > Such a vertical child table would also make your comparison \n> between instances \n> > *much* easier, as it could be executed via a simple \n> 4-table-outer-join and 3 \n> > where clauses. So even if you don't have a lot of NULLs, you \n> probably want \n> > to consider this.\n> \n> You lost me on that one. What's a \"vertical child table\"?\n\nParent table Fkey | Option | Value\n------------------+--------+-------\n | OS | Solaris\n | DISK1 | 30g\n ^^^^^^^^ ^^^-- values \n fields are values in a column rather than 'fields'\n\n\n> Statistically, about 6% of the rows use more than 200 of the columns,\n> 27% of the rows use 80-199 or more columns, 45% of the rows use 40-79\n> columns and the remaining 22% of the rows use 39 or less of the columns.\n> That is a lot of NULLS. Never gave that much thought.\n> \n> To ensure query efficiency, hide the NULLs and simulate the multiple\n> tables I have a boatload of indexes, ensure that every query makees use\n> of an index, and have created 37 views. It's worked pretty well so\n> far\n> \n> \n> >>The reason for my initial question was this. We save changes only.\n> >>In other words, if system S has row T1 for day D1 and if on day D2\n> >>we have another row T1 (excluding our time column) we don't want\n> >>to save it.\n> > \n> > \n> > If re-designing the table per the above is not a possibility, \n> then I'd suggest \n> > that you locate 3-5 columns that:\n> > 1) are not NULL for any row;\n> > 2) combined, serve to identify a tiny subset of rows, i.e. 3% \n> or less of the \n> > table.\n> \n> There are always, always, always 7 columns that contain data.\n> \n> \n> > Then put a multi-column index on those columns, and do your \n> comparison. \n> > Hopefully the planner should pick up on the availablity of the \n> index and scan \n> > only the rows retrieved by the index. However, there is the distinct \n> > possibility that the presence of 637 WHERE criteria will \n> confuse the planner, \n> > causing it to resort to a full table seq scan; in that case, \n> you will want to \n> > use a subselect to force the issue.\n> \n> That's what I'm trying to avoid is a big WHERE (c1,c2,...,c637) <> \n> (d1,d2,...,d637) clause. Ugly.\n> \n> \n> > Or, as Joe Conway suggested, you could figure out some kind of \n> value hash that \n> > uniquely identifies your rows.\n> \n> I've given that some though and though appealing I don't think I'd care\n> to spend the CPU cycles to do it. 
Best way I can figure to accomplish\n> it would be to generate an MD5 on each row without the timestamp and\n> store it in another column, create an index on the MD5 column, generate\n> MD5 on each line I want to insert. Makes for a simple WHERE...\n> \n> Okay. I'll give it a whirl. What's one more column, right?\n> \n> Greg\n> \n> -- \n> Greg Spiegelberg\n> Sr. Product Development Engineer\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Cranel. Technology. Integrity. Focus.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n",
"msg_date": "Wed, 08 Oct 2003 15:24:56 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Greg,\n\nOn Wed, Oct 08, 2003 at 03:07:30PM -0400, Greg Spiegelberg wrote:\n> Dror,\n> \n> I gave this some serious thought at first. I only deal with\n> int8, numeric(24,12) and varchar(32) columns which I could\n> reduce to 3 different tables. Problem was going from 1700-3000\n\nI'm not sure how the data types come into play here. I was for the most\npart following your examples.\n\n> rows to around 300,000-1,000,000 rows per system per day that\n> is sending data to our database.\n> \n\nDepending on the distribution of your data you can end up with more,\nless or roughly the same amount of data in the end. It all depends on\nhow many of the 600+ columns change every time you insert a row. If only\na few of them do, then you'll clearly end up with less total data, since\nyou'll be writing several rows that are very short instead of one\nhuge row that contains all the information. In other words, you're\ntracking changes better.\n\nIt also sounds like you feel that having a few thousand rows in a very\n\"wide\" table is better than having 300,000 - 1,00,000 rows in a \"narrow\"\ntable. My gut feeling is that it's the other way around, but there are\nplenty of people on this list who can provide a more informed answer.\n\nUsing the above eample, assuming that both tables roughly have the same\nnumber of pages in them, would postgres deal better with a table with\n3-4 columns with 300,000 - 1,000,000 rows or with a table with several\nhundred columns with only 3000 or so rows?\n\nRegards,\n\nDror\n\n\n> BTW, the int8 and numeric(24,12) are for future expansion.\n> I hate limits.\n> \n> Greg\n> \n> \n> Dror Matalon wrote:\n> >It's still not quite clear what you're trying to do. Many people's gut\n> >reaction is that you're doing something strange with so many columns in\n> >a table.\n> >\n> >Using your example, a different approach might be to do this instead:\n> >\n> > Day | Name | Value\n> > ------+-------------+-----------\n> > Oct 1 | OS | Solaris 5.8 \n> > Oct 1 | Patch | 108528-12\n> > Oct 3 | Patch | 108528-13\n> >\n> >\n> >You end up with lots more rows, fewer columns, but it might be\n> >harder to query the table. On the other hand, queries should run quite\n> >fast, since it's a much more \"normal\" table.\n> >\n> >But without knowing more, and seeing what the other columns look like,\n> >it's hard to tell.\n> >\n> >Dror\n> \n> \n> -- \n> Greg Spiegelberg\n> Sr. Product Development Engineer\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Cranel. Technology. Integrity. Focus.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Wed, 8 Oct 2003 12:39:37 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Here is what i think you can use:\n\nOne master table with out duplicates and one anciliary table with\nduplicate for the day.\nInsert the result of the select from the anciliary table into the master\ntable, truncate the anciliary table.\n\n\nselect distinct on ( {all the fields except day}) * from table order by\n{all the fields except day}, day;\n\nAs in:\n\nselect distinct on ( OS, Patch) * from table order by OS, Patch, Day;\n\nJLL\n\nBTW, PG developper, since the distinct on list MUST be included in the\norder by clause why not make it implicitly part of the order by clause?\n\n\n\nGreg Spiegelberg wrote:\n> \n> Joe Conway wrote:\n> > Greg Spiegelberg wrote:\n> >\n> >> The reason for my initial question was this. We save changes only.\n> >> In other words, if system S has row T1 for day D1 and if on day D2\n> >> we have another row T1 (excluding our time column) we don't want\n> >> to save it.\n> >\n> >\n> > It still isn't entirely clear to me what you are trying to do, but\n> > perhaps some sort of calculated checksum or hash would work to determine\n> > if the data has changed?\n> \n> Best example I have is this.\n> \n> You're running Solaris 5.8 with patch 108528-X and you're collecting\n> that data daily. Would you want option 1 or 2 below?\n> \n> Option 1 - Store it all\n> Day | OS | Patch\n> ------+-------------+-----------\n> Oct 1 | Solaris 5.8 | 108528-12\n> Oct 2 | Solaris 5.8 | 108528-12\n> Oct 3 | Solaris 5.8 | 108528-13\n> Oct 4 | Solaris 5.8 | 108528-13\n> Oct 5 | Solaris 5.8 | 108528-13\n> and so on...\n> \n> To find what you're running:\n> select * from table order by day desc limit 1;\n> \n> To find when it last changed however takes a join.\n> \n> Option 2 - Store only changes\n> Day | OS | Patch\n> ------+-------------+-----------\n> Oct 1 | Solaris 5.8 | 108528-12\n> Oct 3 | Solaris 5.8 | 108528-13\n> \n> To find what you're running:\n> select * from table order by day desc limit 1;\n> \n> To find when it last changed:\n> select * from table order by day desc limit 1 offset 1;\n> \n> I selected Option 2 because I'm dealing with mounds of complicated and\n> varying data formats and didn't want to have to write complex queries\n> for everything.\n> \n> Greg\n> \n> --\n> Greg Spiegelberg\n> Sr. Product Development Engineer\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Cranel. Technology. Integrity. Focus.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n",
"msg_date": "Wed, 08 Oct 2003 15:47:24 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
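Spelled out, the master/ancillary flow described above might look like the following. This is only a sketch: "master" and "staging" are illustrative names, the column list mirrors the OS/Patch example, and a NOT EXISTS against master would still be needed if values already stored there are to be skipped as well.

    -- Keep one row per distinct combination from the day's staging data,
    -- then clear the staging table for the next load.
    INSERT INTO master (day, os, patch)
    SELECT DISTINCT ON (os, patch) day, os, patch
    FROM   staging
    ORDER  BY os, patch, day;

    TRUNCATE TABLE staging;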
{
"msg_contents": "Greg,\n\n> You lost me on that one. What's a \"vertical child table\"?\n\nCurrently, you store data like this:\n\nid\taddress\tuptime\tspeed\tmemory\ttty\n3\t67.92\t0.3\t\t11.2\t\t37\t\t6\n7\t69.5\t\t1.1\t\tNULL\t15\t\tNULL\n9\t65.5\t\t0.1\t\tNULL\t94\t\t2\n\nThe most efficient way for you to store data would be like this:\n\nmain table\nid\taddress\n3\t67.92\n7\t69.5\n9\t65.5\n\nchild table\nid\tvalue_type\tvalue\n3\tuptime\t\t0.3\n3\tspeed\t\t11.2\n3\tmemory\t\t37\n3\ttty\t\t\t6\n7\tuptime\t\t1.1\n7\tmemory\t\t15\n9\tuptime\t\t0.1\n9\tmemory\t\t94\n9\ttty\t\t\t2\n\nAs you can see, the NULLs are not stored, making this system much more \nefficient on storage space.\n\nTommorrow I'll (hopefully) write up how to query this for comparisons. It \nwould help if you gave a little more details about what specific comparison \nyou're doing, e.g. between tables or table to value, comparing just the last \nvalue or all rows, etc.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 8 Oct 2003 16:11:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
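In DDL form, the vertical child table above could be sketched like this. The names are illustrative, the value column is plain text purely for the sketch, and the query lists the attributes that differ between rows 3 and 7 of the example.

    CREATE TABLE metric_main (
        id      integer PRIMARY KEY,
        address text    NOT NULL
    );

    CREATE TABLE metric_value (
        id         integer NOT NULL REFERENCES metric_main (id),
        value_type text    NOT NULL,
        value      text,
        PRIMARY KEY (id, value_type)
    );

    -- Attributes present in row 7 that are missing from, or different in,
    -- row 3.  NULLs are simply absent rows, so they are never stored or
    -- compared.
    SELECT cur.value_type, cur.value, prev.value AS old_value
    FROM   metric_value cur
    LEFT JOIN metric_value prev
           ON  prev.id = 3
           AND prev.value_type = cur.value_type
    WHERE  cur.id = 7
    AND    (prev.value IS NULL OR prev.value <> cur.value);

Swapping the two ids in the query catches attributes that were dropped rather than changed.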
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Josh Berkus) transmitted:\n> child table\n> id\tvalue_type\tvalue\n> 3\tuptime\t\t0.3\n> 3\tspeed\t\t11.2\n> 3\tmemory\t\t37\n> 3\ttty\t\t\t6\n> 7\tuptime\t\t1.1\n> 7\tmemory\t\t15\n> 9\tuptime\t\t0.1\n> 9\tmemory\t\t94\n> 9\ttty\t\t\t2\n>\n> As you can see, the NULLs are not stored, making this system much more \n> efficient on storage space.\n\nWow, that takes me back to a paper I have been looking for for\n_years_.\n\nSome time in the late '80s, probably '88 or '89, there was a paper\npresented in Communications of the ACM that proposed using this sort\nof \"hypernormalized\" schema as a way of having _really_ narrow schemas\nthat would be exceedingly expressive. They illustrated an example of\nan address table that could hold full addresses with a schema with\nonly about half a dozen columns, the idea being that you'd have\nseveral rows linked together.\n\nThe methodology was _heavy_ on metadata, though not so much so that\nthere were no columns left over for \"real\" data.\n\nThe entertaining claim was that they felt they could model the\ncomplexities of the operations of any sort of company using not more\nthan 50 tables. It seemed somewhat interesting, at the time; it truly\nresonated as Really Interesting when I saw SAP R/3, with its bloat of\n1500-odd tables.\n\n(I seem to remember the authors being Boston-based, and they indicated\nthat they had implemented this \"on VMS,\" which would more than likely\nimply RDB; somehow I doubt that'll be the set of detail that makes\nsomeone remember it...)\n\nThe need to do a lot of joins would likely hurt performance somewhat,\nas well as the way that it greatly increases the number of rows.\nAlthough you could always split it into several tables, one for each\n\"value_type\", and UNION them into a view...\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\nhttp://cbbrowne.com/info/unix.html\nYou shouldn't anthropomorphize computers; they don't like it.\n",
"msg_date": "Wed, 08 Oct 2003 22:07:46 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Chris,\n\n> Some time in the late '80s, probably '88 or '89, there was a paper\n> presented in Communications of the ACM that proposed using this sort\n> of \"hypernormalized\" schema as a way of having _really_ narrow schemas\n> that would be exceedingly expressive. They illustrated an example of\n<snip>\n> The entertaining claim was that they felt they could model the\n> complexities of the operations of any sort of company using not more\n> than 50 tables. It seemed somewhat interesting, at the time; it truly\n> resonated as Really Interesting when I saw SAP R/3, with its bloat of\n> 1500-odd tables.\n\nOne can always take things too far. Trying to make everying 100% dynamic so \nthat you can cram your whole database into 4 tables is going too far; so is \nthe kind of bloat that produces systems like SAP, which is more based on \nlegacy than design (I analyzed a large commercial billing system once and was \nstartled to discover that 1/4 of its 400 tables and almost half of the 40,000 \ncollective columns were not used and present only for backward \ncompatibility).\n\nThe usefulness of the \"vertical values child table\" which I suggest is largely \ndependant on the number of values not represented. In Greg's case, fully \n75% of the fields in his huge table are NULL; this is incredibly inefficient, \nthe more so when you consider his task of calling each field by name in each \nquery.\n\nThe \"vertical values child table\" is also ideal for User Defined Fields or any \nother form of user-configurable add-on data which will be NULL more often \nthan not.\n\nThis is an old SQL concept, though; I'm sure it has an official name \nsomewhere.\n\n> The need to do a lot of joins would likely hurt performance somewhat,\n> as well as the way that it greatly increases the number of rows.\n> Although you could always split it into several tables, one for each\n> \"value_type\", and UNION them into a view...\n\nIt increases the number of rows, yes, but *decreases* the storage size of data \nby eliminating thousands ... or millions ... of NULL fields. How would \nsplitting the vertical values into dozens of seperate tables help things?\n\nPersonally, I'd rather have a table with 3 columns and 8 million rows than a \ntable with 642 columns and 100,000 rows. Much easier to deal with.\n\nAnd we are also assuming that Greg seldom needs to see all of the fields at \nonce. I'm pretty sure of this; if he did, he'd have run into the \"wide row\" \nbug in 7.3 and would be complaining about it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 22:36:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "The world rejoiced as [email protected] (Josh Berkus) wrote:\n> Chris,\n>> Some time in the late '80s, probably '88 or '89, there was a paper\n>> presented in Communications of the ACM that proposed using this sort\n>> of \"hypernormalized\" schema as a way of having _really_ narrow schemas\n>> that would be exceedingly expressive. They illustrated an example of\n> <snip>\n>> The entertaining claim was that they felt they could model the\n>> complexities of the operations of any sort of company using not\n>> more than 50 tables. It seemed somewhat interesting, at the time;\n>> it truly resonated as Really Interesting when I saw SAP R/3, with\n>> its bloat of 1500-odd tables.\n>\n> One can always take things too far. Trying to make everying 100%\n> dynamic so that you can cram your whole database into 4 tables is\n> going too far; so is the kind of bloat that produces systems like\n> SAP, which is more based on legacy than design (I analyzed a large\n> commercial billing system once and was startled to discover that 1/4\n> of its 400 tables and almost half of the 40,000 collective columns\n> were not used and present only for backward compatibility).\n\nWith R/3, the problem is that there are hundreds (now thousands) of\ndevelopers trying to coexist on the same code base, with the result\ntables containing nearly-the-same fields are strewn all over.\n\nIt's _possible_ that the design I saw amounted to nothing more than a\nclever hack for implementing LDAP atop a relational database, but they\nseemed to have something slightly more to say than that.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','ntlug.org').\nhttp://www3.sympatico.ca/cbbrowne/emacs.html\nWhy does the word \"lisp\" have an \"s\" in it? \n",
"msg_date": "Thu, 09 Oct 2003 07:41:28 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Christopher Browne wrote:\n> \n> Wow, that takes me back to a paper I have been looking for for\n> _years_.\n> \n> Some time in the late '80s, probably '88 or '89, there was a paper\n> presented in Communications of the ACM that proposed using this sort\n> of \"hypernormalized\" schema as a way of having _really_ narrow schemas\n> that would be exceedingly expressive. They illustrated an example of\n> an address table that could hold full addresses with a schema with\n> only about half a dozen columns, the idea being that you'd have\n> several rows linked together.\n\nI'd be interested in the title / author when you remember.\nI'm kinda sick. I like reading on most computer theory,\ndesigns, algorithms, database implementations, etc. Usually\nhow I get into trouble too with 642 column tables though. :)\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Thu, 09 Oct 2003 08:50:07 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Josh Berkus wrote:\n> Greg,\n> \n> \n>>You lost me on that one. What's a \"vertical child table\"?\n> \n> \n> Currently, you store data like this:\n> \n> id\taddress\tuptime\tspeed\tmemory\ttty\n> 3\t67.92\t0.3\t\t11.2\t\t37\t\t6\n> 7\t69.5\t\t1.1\t\tNULL\t15\t\tNULL\n> 9\t65.5\t\t0.1\t\tNULL\t94\t\t2\n> \n> The most efficient way for you to store data would be like this:\n> \n> main table\n> id\taddress\n> 3\t67.92\n> 7\t69.5\n> 9\t65.5\n> \n> child table\n> id\tvalue_type\tvalue\n> 3\tuptime\t\t0.3\n> 3\tspeed\t\t11.2\n> 3\tmemory\t\t37\n> 3\ttty\t\t\t6\n> 7\tuptime\t\t1.1\n> 7\tmemory\t\t15\n> 9\tuptime\t\t0.1\n> 9\tmemory\t\t94\n> 9\ttty\t\t\t2\n> \n> As you can see, the NULLs are not stored, making this system much more \n> efficient on storage space.\n> \n> Tommorrow I'll (hopefully) write up how to query this for comparisons. It \n> would help if you gave a little more details about what specific comparison \n> you're doing, e.g. between tables or table to value, comparing just the last \n> value or all rows, etc.\n> \n\nGot it. I can see how it would be more efficient in storing. At this\npoint it would require a lot of query and code rewrites to handle it.\nFortunately, we're looking for alternatives for the next revision and\nwe're leaving ourselves open for a rewrite much to the boss's chagrin.\n\nI will be spinning up a test server soon and may attempt a quick\nimplementation. I may make value_type a foreign key on a table that\nincludes a full and/or brief description of the key. Problem I'll have\nthen will be categorizing all those keys into disk, cpu, memory, user,\nand all the other data categories since it's in one big table rather\nthan specialized tables.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Thu, 09 Oct 2003 11:26:20 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Josh Berkus kirjutas N, 09.10.2003 kell 08:36:\n> Chris,\n\n> > The need to do a lot of joins would likely hurt performance somewhat,\n> > as well as the way that it greatly increases the number of rows.\n> > Although you could always split it into several tables, one for each\n> > \"value_type\", and UNION them into a view...\n> \n> It increases the number of rows, yes, but *decreases* the storage size of data \n> by eliminating thousands ... or millions ... of NULL fields. \n\nI'm not sure I buy that.\n\nNull fields take exactly 1 *bit* to store (or more exactly, if you have\nany null fields in tuple then one 32bit int for each 32 fields is used\nfor NULL bitmap), whereas the same fields in \"vertical\" table takes 4\nbytes for primary key and 1-4 bytes for category key + tuple header per\nvalue + neccessary indexes. So if you have more than one non-null field\nper tuple you will certainly lose in storage. \n\n> How would splitting the vertical values into dozens of seperate tables help things?\n\nIf you put each category in a separate table you save 1-4 bytes for\ncategory per value, but still store primary key and tuple header *per\nvalue*.\n\nJou may stii get better performance for single-column comparisons as\nfewer pages must be touched.\n\n> Personally, I'd rather have a table with 3 columns and 8 million rows than a \n> table with 642 columns and 100,000 rows. Much easier to deal with.\n\nSame here ;)\n\n------------------\nHannu\n\n",
"msg_date": "Thu, 09 Oct 2003 20:16:22 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Greg Spiegelberg wrote:\n\n> Josh Berkus wrote:\n> \n>>\n>> As you can see, the NULLs are not stored, making this system much more \n>> efficient on storage space.\n>>\n>> Tommorrow I'll (hopefully) write up how to query this for \n>> comparisons. It would help if you gave a little more details about \n>> what specific comparison you're doing, e.g. between tables or table to \n>> value, comparing just the last value or all rows, etc.\n>>\n> \n> Got it. I can see how it would be more efficient in storing. At this\n> point it would require a lot of query and code rewrites to handle it.\n> Fortunately, we're looking for alternatives for the next revision and\n> we're leaving ourselves open for a rewrite much to the boss's chagrin.\n\nI'm not sure about the save in storage. See the Hannu Krosing\narguments.\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Thu, 09 Oct 2003 20:32:21 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "Per Josh's recommendation to implement a Vertical Child Table I came\nup with 3 possible tables to handle the 3 possible value types: varchar,\nnumeric and bigint. Each table has 7 columns: 1 to denote the time the\ndata was collected, 4 which identify where the data came from, 1 to\ntell me the value name and the last being the value itself.\n\n\t\tOLD\t\tNEW\ntables\t\t1\t\t3\ncolumns\t\t642\t\t7 each\nindexes\t\t~1200\t\t39\nviews\t\t37\t\t?\nrows\t\t1700-3000\t30,000\nquery on table\t0.01 sec\t0.06 sec\nquery on view\t0.02 sec\t?\n\nNot too bad. Guess there were a few 0's and NULL's out there, eh?\n\n642 * 1,700 = 1,091,400 cells\n3 * 7 * 30,000 = 630,000 cells\n 461,400 NULL's and 0's using the big 'ol table\n\nI can get around in this setup, however, I would appreciate some help\nin recreating my views. The views use to be there simply as an initial\nfilter and to hide all the 0's and NULL's. If I can't do this I will\nbe revisiting and testing possibly hundreds of programs and scripts.\n\nAny takers?\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Thu, 09 Oct 2003 16:14:08 -0400",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compare rows, SEMI-SUMMARY"
},
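One way to recreate the old NULL-hiding views on top of the vertical layout is a manual crosstab: restrict to the value names a given view cares about and pivot them back into columns with CASE. The table, column and metric names below are hypothetical, since the actual schema isn't shown; rows for which a metric was never stored simply come back as NULL, much as the old wide views behaved:

    CREATE VIEW v_memory AS
    SELECT collected,                 -- the time column
           system_id,                 -- one of the four identifying columns (the others carry along the same way)
           max(CASE WHEN name = 'mem_total' THEN value END) AS mem_total,
           max(CASE WHEN name = 'mem_free'  THEN value END) AS mem_free
    FROM   vertical_numeric
    WHERE  name IN ('mem_total', 'mem_free')
    GROUP  BY collected, system_id;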
{
"msg_contents": "I took this approach with a former company in designing an dynamic \ne-commerce system. This kept the addition of new products from \nrequiring an alteration of the schema. With an ORB manager and cache \ncontrol the performance was not significantly, but the automatic \nextensibility and the ease of maintainabilty was greatly enhanced.\n\nThomas\n\n\nJason Hihn wrote:\n\n> \n>\n>>-----Original Message-----\n>>From: [email protected]\n>>[mailto:[email protected]]On Behalf Of Greg\n>>Spiegelberg\n>>Sent: Wednesday, October 08, 2003 3:11 PM\n>>To: PgSQL Performance ML\n>>Subject: Re: [PERFORM] Compare rows\n>>\n>>\n>>Josh Berkus wrote:\n>> \n>>\n>>>Greg,\n>>>\n>>>\n>>> \n>>>\n>>>>The data represents metrics at a point in time on a system for\n>>>>network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,\n>>>>speed, and whatever else can be gathered.\n>>>>\n>>>>We arrived at this one 642 column table after testing the whole\n>>>>process from data gathering, methods of temporarily storing then\n>>>>loading to the database. Initially, 37+ tables were in use but\n>>>>the one big-un has saved us over 3.4 minutes.\n>>>> \n>>>>\n>>>Hmmm ... if few of those columns are NULL, then you are \n>>> \n>>>\n>>probably right ... \n>> \n>>\n>>>this is probably the most normalized design. If, however, \n>>> \n>>>\n>>many of columns \n>> \n>>\n>>>are NULL the majority of the time, then the design you should \n>>> \n>>>\n>>be using is a \n>> \n>>\n>>>vertial child table, of the form ( value_type | value ). \n>>>\n>>>Such a vertical child table would also make your comparison \n>>> \n>>>\n>>between instances \n>> \n>>\n>>>*much* easier, as it could be executed via a simple \n>>> \n>>>\n>>4-table-outer-join and 3 \n>> \n>>\n>>>where clauses. So even if you don't have a lot of NULLs, you \n>>> \n>>>\n>>probably want \n>> \n>>\n>>>to consider this.\n>>> \n>>>\n>>You lost me on that one. What's a \"vertical child table\"?\n>> \n>>\n>\n>Parent table Fkey | Option | Value\n>------------------+--------+-------\n> | OS | Solaris\n> | DISK1 | 30g\n> ^^^^^^^^ ^^^-- values \n> fields are values in a column rather than 'fields'\n>\n>\n> \n>\n>>Statistically, about 6% of the rows use more than 200 of the columns,\n>>27% of the rows use 80-199 or more columns, 45% of the rows use 40-79\n>>columns and the remaining 22% of the rows use 39 or less of the columns.\n>>That is a lot of NULLS. Never gave that much thought.\n>>\n>>To ensure query efficiency, hide the NULLs and simulate the multiple\n>>tables I have a boatload of indexes, ensure that every query makees use\n>>of an index, and have created 37 views. It's worked pretty well so\n>>far\n>>\n>>\n>> \n>>\n>>>>The reason for my initial question was this. We save changes only.\n>>>>In other words, if system S has row T1 for day D1 and if on day D2\n>>>>we have another row T1 (excluding our time column) we don't want\n>>>>to save it.\n>>>> \n>>>>\n>>>If re-designing the table per the above is not a possibility, \n>>> \n>>>\n>>then I'd suggest \n>> \n>>\n>>>that you locate 3-5 columns that:\n>>>1) are not NULL for any row;\n>>>2) combined, serve to identify a tiny subset of rows, i.e. 3% \n>>> \n>>>\n>>or less of the \n>> \n>>\n>>>table.\n>>> \n>>>\n>>There are always, always, always 7 columns that contain data.\n>>\n>>\n>> \n>>\n>>>Then put a multi-column index on those columns, and do your \n>>> \n>>>\n>>comparison. 
\n>> \n>>\n>>>Hopefully the planner should pick up on the availablity of the \n>>> \n>>>\n>>index and scan \n>> \n>>\n>>>only the rows retrieved by the index. However, there is the distinct \n>>>possibility that the presence of 637 WHERE criteria will \n>>> \n>>>\n>>confuse the planner, \n>> \n>>\n>>>causing it to resort to a full table seq scan; in that case, \n>>> \n>>>\n>>you will want to \n>> \n>>\n>>>use a subselect to force the issue.\n>>> \n>>>\n>>That's what I'm trying to avoid is a big WHERE (c1,c2,...,c637) <> \n>>(d1,d2,...,d637) clause. Ugly.\n>>\n>>\n>> \n>>\n>>>Or, as Joe Conway suggested, you could figure out some kind of \n>>> \n>>>\n>>value hash that \n>> \n>>\n>>>uniquely identifies your rows.\n>>> \n>>>\n>>I've given that some though and though appealing I don't think I'd care\n>>to spend the CPU cycles to do it. Best way I can figure to accomplish\n>>it would be to generate an MD5 on each row without the timestamp and\n>>store it in another column, create an index on the MD5 column, generate\n>>MD5 on each line I want to insert. Makes for a simple WHERE...\n>>\n>>Okay. I'll give it a whirl. What's one more column, right?\n>>\n>>Greg\n>>\n>>-- \n>>Greg Spiegelberg\n>> Sr. Product Development Engineer\n>> Cranel, Incorporated.\n>> Phone: 614.318.4314\n>> Fax: 614.431.8388\n>> Email: [email protected]\n>>Cranel. Technology. Integrity. Focus.\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n",
"msg_date": "Fri, 10 Oct 2003 07:27:22 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> The most efficient way for you to store data would be like this:\n> main table\n> id address\n> 3 67.92\n> 7 69.5\n>\n> child table\n> id value_type value\n> 3 uptime 0.3\n> 3 memory 37\n> 7 uptime 1.1\n> 7 memory 15\n\nActually, a more efficient* way is this:\n\nvalue table\nvid value_name\n1 uptime\n2 memory\n\nchild table\nid vid value\n3 1 0.3\n3 2 37\n7 1 1.1\n7 2 15\n\n\n* Still not necessarily the *most* efficient, depending on how the \nvalues are distributed, but it sure beats storing \"uptime\" over \nand over again. :)\n\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200310101243\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE/huHxvJuQZxSWSsgRAiMNAKD4kQCwdv3fXyEFUu64mymtf567dwCcCKd5\nZzJaV7wjfs00DBT62bVpHhs=\n=32b8\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Fri, 10 Oct 2003 16:44:21 -0000",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Compare rows"
}
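A rough sketch of the lookup-table variant described above (names are hypothetical). Keying on a small integer instead of repeating the text label keeps both the child table and its indexes narrower:

    CREATE TABLE value_name (
        vid  serial PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    CREATE TABLE child (
        id    integer NOT NULL,                      -- parent row
        vid   integer NOT NULL REFERENCES value_name,
        value numeric NOT NULL,
        PRIMARY KEY (id, vid)
    );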
] |
[
{
"msg_contents": "We've a table with about 8 million rows, and we need to get rows by the value of two of its fields( the type of the fields are int2 and int4, the where condition is v.g. partido=99 and partida=123). We created a multicolumn index on that fields but the planner doesn't use it, it still use a seqscan. That fields are primary key of the table and we clusterded the table based on that index, but it still doesn't work. We also set the enviroment variable enable_seqscan to false and nathing happends. The only way the planner use it is in querys that order by the expression of the index.\nAny idea?\nthanks.\nAdri�n\n\n\n\n---------------------------------\nDo You Yahoo!?\nTodo lo que quieres saber de Estados Unidos, Am�rica Latina y el resto del Mundo.\nVis�ta Yahoo! Noticias.\n\nWe've a table with about 8 million rows, and we need to get rows by the value of two of its fields( the type of the fields are int2 and int4, the where condition is v.g. partido=99 and partida=123). We created a multicolumn index on that fields but the planner doesn't use it, it still use a seqscan. That fields are primary key of the table and we clusterded the table based on that index, but it still doesn't work. We also set the enviroment variable enable_seqscan to false and nathing happends. The only way the planner use it is in querys that order by the expression of the index.\nAny idea?\nthanks.\nAdri�nDo You Yahoo!?\n\nTodo lo que quieres saber de Estados Unidos, Am�rica Latina y el resto del Mundo.\nVis�ta Yahoo! Noticias.",
"msg_date": "Wed, 8 Oct 2003 09:08:59 -0500 (CDT)",
"msg_from": "=?iso-8859-1?q?Adrian=20Demaestri?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner doesn't use multicolumn index"
},
{
"msg_contents": "> We've a table with about 8 million rows, and we need to get rows by the \n> value of two of its fields( the type of the fields are int2 and int4, \n> the where condition is v.g. partido=99 and partida=123). We created a \n> multicolumn index on that fields but the planner doesn't use it, it \n> still use a seqscan. That fields are primary key of the table and we \n> clusterded the table based on that index, but it still doesn't work. We \n> also set the enviroment variable enable_seqscan to false and nathing \n> happends. The only way the planner use it is in querys that order by the \n> expression of the index.\n> Any idea?\n> thanks.\n> Adri�n\n\nwhere partido=99::int2 and partida=123;\n\nRegards,\nTomasz Myrta\n\n",
"msg_date": "Wed, 08 Oct 2003 16:18:23 +0200",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner doesn't use multicolumn index"
},
{
"msg_contents": "Adrian Demaestri wrote:\n\n> We've a table with about 8 million rows, and we need to get rows by the value \n >of two of its fields( the type of the fields are int2 and int4,\n>the where condition is v.g. partido=99 and partida=123). We created a\n >multicolumn index on that fields but the planner doesn't use it, it still use\n >a seqscan. That fields are primary key of the table and we clusterded the table\n >based on that index, but it still doesn't work. We also set the enviroment\n > variable enable_seqscan to false and nathing happends. The only way the\n >planner use it is in querys that order by the expression of the index.\n\nUse partido=99::int2 and partida=123::int4\n\nMatch the data types basically..\n\n Shridhar\n\n",
"msg_date": "Wed, 08 Oct 2003 19:57:38 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner doesn't use multicolumn index"
},
{
"msg_contents": "On Wed, 8 Oct 2003 09:08:59 -0500 (CDT), Adrian Demaestri\n<[email protected]> wrote:\n>the type of the fields are int2 and\n>int4, the where condition is v.g. partido=99 and partida=123).\n\nWrite your search condition as\n\n\tWHERE partido=99::int2 and partida=123\n\nServus\n Manfred\n",
"msg_date": "Wed, 08 Oct 2003 16:35:24 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner doesn't use multicolumn index"
}
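Spelling out the point of the replies above: with a 7.x planner an unadorned integer literal is treated as int4, so it will not be matched against an index on an int2 column unless the literal is cast explicitly. A sketch with a hypothetical table name:

    -- multicolumn index on (partido int2, partida int4)
    CREATE INDEX partidas_idx ON partidas (partido, partida);

    -- the cast on the int2 column lets the planner consider the index
    EXPLAIN
    SELECT *
    FROM   partidas
    WHERE  partido = 99::int2
      AND  partida = 123;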
] |
[
{
"msg_contents": "The boss cleared my de-company info-ified pg presentation.\nIt deals with PG features, crude comparison to other dbs, install, admin,\nand most importantly - optimization & quirks.\n\nIts avail in powerpoint and (ugg) powerpoint exported html.\n\nLet me know if there are blatant errors, etc in there.\nMaybe even slightly more subtle blatant errors :)\n\nThe people here thought it was good.\n\nhttp://postgres.jefftrout.com/\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 11:02:14 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Presentation"
},
{
"msg_contents": "Jeff wrote:\n> Let me know if there are blatant errors, etc in there.\n> Maybe even slightly more subtle blatant errors :)\n\nSome minor nitpicks,\n\n* Slide 5, postgresql already features 64 bit port. The sentence is slightly \nconfusing\n* Same slide. IIRC postgresql always compresses bytea/varchar. Not too much sure \nabout which but there is something that is compressed by default..:-)\n* Tablespaces has a patch floating somewhere. IIRC Gavin Sherry is the one who \nis most ahead of it. For all goodness, they will feature in 7.5 and design is \ndone. There aren't much issues there.\n* Mysql transaction breaks down if tables from different table types are involved.\n* Mysql transactions do not feature constant time commit/rollback like \npostgresql. The time to rollback depends upon size of transaction\n* Mysql does not split large files in segments the way postgresql do. Try \nstoring 60GB of data in single mysql table.\n* Slide on informix. It would be better if you mention what database you were \nusing on your pentium. Assuming postgresql is fine, but being specific helps.\n* Slide on caching. Postgresql can use 7000MB of caching. Important part is it \ndoes not lock that memory in it's own process space. OS can move around buffer \ncache but not memory space of an application.\n* Installation slide. We can do without 'yada' for being formal, right..:-) \n(Sorry if thats too nitpicky but couldn't help it..:-))\n* initdb could be coupled with configure/make install but again, it's a matter \nof choice.\n* Slide on configuration. 'Reliable DB corruption' is a confusing term. 'DB \ncorruption for sure' or something on that line would be more appropriate \nespecially if presentation is read in documentation form and not explained in a \nlive session. but you decide.\n* I doubt pg_autovacuum will be in core source but predicting that long is \nalways risky..:-)\n* Using trigger for maintening a row count would generate as much dead rows as \nyou wanted to avoid in first place..:-)\n\nAll of them are really minor. It's a very well done presentation but 45 slides \ncould be bit too much at a time. I suggest splitting the presentation in 3. \nIntro and comparison, features, administration, programming and tuning. Wow.. \nthey are 5..:-)\n\nCan you rip out informix migration? It could be a good guide by itself.\n\nThanks again for documentation. After you decide what license you want to \nrelease it under, the team can put it on techdocs.postgresql.org..\n\nAgain, thanks for a good presentation..\n\n Shridhar\n\n\n",
"msg_date": "Wed, 08 Oct 2003 21:21:16 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "> * Same slide. IIRC postgresql always compresses bytea/varchar. Not too much sure \n> about which but there is something that is compressed by default..:-)\n\nI'm not sure about that.\n\nEven toasted values are not always compressed, though they certainly can\nbe and usually are.",
"msg_date": "Wed, 08 Oct 2003 11:59:17 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "Jeff,\n\n> Its avail in powerpoint and (ugg) powerpoint exported html.\n\nI can probably convert it to OpenOffice.org and Flash. OK?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 09:02:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "Jeff,\n\nI'm jumping this thread to ADVOCACY, where it belongs, not PERFORMANCE.\n\n> The boss cleared my de-company info-ified pg presentation.\n> It deals with PG features, crude comparison to other dbs, install, admin,\n> and most importantly - optimization & quirks.\n>\n> Its avail in powerpoint and (ugg) powerpoint exported html.\n>\n> Let me know if there are blatant errors, etc in there.\n> Maybe even slightly more subtle blatant errors :)\n>\n> The people here thought it was good.\n>\n> http://postgres.jefftrout.com/\n\nI'll check it out later today. \n\nAs I said in my e-mail, I'm happy to convert the format to something Open \nSource. Can you release it under an OSS license?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 09:05:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Presentation"
},
{
"msg_contents": "Shridhar, \n\nI'm jumping your reply to the proper mailing list. I have one comment at the \nbottom; the rest of this quote is for the people on the Advocacy list.\n\n> > Let me know if there are blatant errors, etc in there.\n> > Maybe even slightly more subtle blatant errors :)\n>\n> Some minor nitpicks,\n>\n> * Slide 5, postgresql already features 64 bit port. The sentence is\n> slightly confusing\n> * Same slide. IIRC postgresql always compresses bytea/varchar. Not too much\n> sure about which but there is something that is compressed by default..:-)\n> * Tablespaces has a patch floating somewhere. IIRC Gavin Sherry is the one\n> who is most ahead of it. For all goodness, they will feature in 7.5 and\n> design is done. There aren't much issues there.\n> * Mysql transaction breaks down if tables from different table types are\n> involved. * Mysql transactions do not feature constant time commit/rollback\n> like postgresql. The time to rollback depends upon size of transaction\n> * Mysql does not split large files in segments the way postgresql do. Try\n> storing 60GB of data in single mysql table.\n> * Slide on informix. It would be better if you mention what database you\n> were using on your pentium. Assuming postgresql is fine, but being specific\n> helps. * Slide on caching. Postgresql can use 7000MB of caching. Important\n> part is it does not lock that memory in it's own process space. OS can move\n> around buffer cache but not memory space of an application.\n> * Installation slide. We can do without 'yada' for being formal, right..:-)\n> (Sorry if thats too nitpicky but couldn't help it..:-))\n> * initdb could be coupled with configure/make install but again, it's a\n> matter of choice.\n> * Slide on configuration. 'Reliable DB corruption' is a confusing term. 'DB\n> corruption for sure' or something on that line would be more appropriate\n> especially if presentation is read in documentation form and not explained\n> in a live session. but you decide.\n> * I doubt pg_autovacuum will be in core source but predicting that long is\n> always risky..:-)\n> * Using trigger for maintening a row count would generate as much dead rows\n> as you wanted to avoid in first place..:-)\n>\n> All of them are really minor. It's a very well done presentation but 45\n> slides could be bit too much at a time. I suggest splitting the\n> presentation in 3. Intro and comparison, features, administration,\n> programming and tuning. Wow.. they are 5..:-)\n>\n> Can you rip out informix migration? It could be a good guide by itself.\n>\n> Thanks again for documentation. After you decide what license you want to\n> release it under, the team can put it on techdocs.postgresql.org..\n\nI'd suggest just having him release it under an OSS license now. That way, \nyou can make the corrections yourself, and we can convert it to OOo and Flash \nand offer it in a variety of configurations.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 8 Oct 2003 09:08:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Presentation"
},
{
"msg_contents": "Josh Berkus wrote:\n\n> Shridhar, \n> \n> I'm jumping your reply to the proper mailing list. I have one comment at the \n> bottom; the rest of this quote is for the people on the Advocacy list.\n\n> I'd suggest just having him release it under an OSS license now. That way, \n> you can make the corrections yourself, and we can convert it to OOo and Flash \n> and offer it in a variety of configurations.\n\nYeah. I read the request. After he does that, I could take some time off to \nconevrt it. Long time since I have actually done on postgresql other than posts \non mailing list but.. anyways..\n\nSorry I should have put that on advocacy.. got left out..\n\n Shridhar\n\n",
"msg_date": "Wed, 08 Oct 2003 21:43:41 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Presentation"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Shridhar Daithankar wrote:\n\n\nThanks for the nitpicks :)\n\nI've taken some into consideration.\nI also signed onto the advocacy list so I can be in on discussions there.\n\nFeel free to convert to whatever format you'd like. I originally started\nworking on it in OpenOffice, but I got mad at it. So I switched to\npowerpoint and got mad at that too :)\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 12:27:31 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Shridhar Daithankar wrote:\n\n> * Same slide. IIRC postgresql always compresses bytea/varchar. Not too much sure\n> about which but there is something that is compressed by default..:-)\n\n> * Tablespaces has a patch floating somewhere. IIRC Gavin Sherry is the one who\n> is most ahead of it. For all goodness, they will feature in 7.5 and design is\n\nFor the sake of things, I didn't include any features a patch provides. I\ndid include things that may appear in contrib/.\n\n> * Mysql transaction breaks down if tables from different table types are involved.\n> * Mysql transactions do not feature constant time commit/rollback like\n> postgresql. The time to rollback depends upon size of transaction\n> * Mysql does not split large files in segments the way postgresql do. Try\n> storing 60GB of data in single mysql table.\n\nI didn't add these ones. The user can figure this one out.\nPerhaps when we/me expands this into multiple documents we can expand on\nthis.\n\n> * Slide on caching. Postgresql can use 7000MB of caching. Important part is it\n> does not lock that memory in it's own process space. OS can move around buffer\n> cache but not memory space of an application.\n\nI'm guilty of this myself - when I first started pg I was looking for a\nway to make it use a zillion megs of memory like we have informix do -\nPerhaps I'll reword that segment.. the point was to show PG relies on the\nOS to do a lot of caching and that it doesn't do it itself.\n\n> * Using trigger for maintening a row count would generate as much dead rows as\n> you wanted to avoid in first place..:-)\n\nWe all know this.. but it is a way to get a fast select count(*) from\ntable\n\n\n> All of them are really minor. It's a very well done presentation but 45 slides\n> could be bit too much at a time. I suggest splitting the presentation in 3.\n> Intro and comparison, features, administration, programming and tuning. Wow..\n> they are 5..:-)\n>\n\nYeah. What I'd really love to do is de-powerpointify it and make it a nice\nset of \"real\" web pages.\n\n\n> Can you rip out informix migration? It could be a good guide by itself.\n>\n\nI agree. It would be good to rip out. I think we have the oracle guide\nsomewhere..\n\n\nI've put this updated on up on hte postgres.jefftrout.com site\nalong with openoffice version.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 13:05:28 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "On Wed, 2003-10-08 at 11:02, Jeff wrote:\n> The boss cleared my de-company info-ified pg presentation.\n\nSlide 37: as far as I know, reordering of outer joins is not implemented\nin 7.4\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 14:46:44 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n> On Wed, 2003-10-08 at 11:02, Jeff wrote:\n> > The boss cleared my de-company info-ified pg presentation.\n>\n> Slide 37: as far as I know, reordering of outer joins is not implemented\n> in 7.4\n>\n\nHuh. I could have sworn Tom did something like that.\nPerhaps I am thinking of something else.\nYou had to enable some magic GUC.\n\nMaybe he did a test and it never made it in.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 8 Oct 2003 15:38:37 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "On Wed, 2003-10-08 at 15:38, Jeff wrote:\n> Huh. I could have sworn Tom did something like that.\n> Perhaps I am thinking of something else.\n> You had to enable some magic GUC.\n\nPerhaps you're thinking of the new GUC var join_collapse_limit, which is\nrelated, but doesn't effect the reordering of outer joins.\n\n-Neil\n\n\n",
"msg_date": "Wed, 08 Oct 2003 17:48:26 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation"
},
{
"msg_contents": "Jeff <[email protected]> writes:\n> On Wed, 8 Oct 2003, Neil Conway wrote:\n>> Slide 37: as far as I know, reordering of outer joins is not implemented\n>> in 7.4\n\n> Huh. I could have sworn Tom did something like that.\n\nNot yet. 7.4 can reorder *inner* joins that happen to be written\nwith JOIN syntax.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Oct 2003 19:28:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Presentation "
}
] |
[
{
"msg_contents": "Hi,\nI am trying to design a large text search database.\n\nIt will have upwards of 6 million documents, along with meta data on \neach.\n\nI am currently looking at tsearch2 to provide fast text searching and \nalso playing around with different hardware configurations.\n\n1. With tsearch2 I get very good query times up until I insert more \nrecords. For example with 100,000 records tsearch2 returns in around 6 \nseconds, with 200,000 records tsearch2 returns in just under a minute. \nIs this due to the indices fitting entirely in memory with 100,000 \nrecords?\n\n2. As well as whole word matching i also need to be able to do \nsubstring matching. Is the FTI module the way to approach this?\n\n3. I have just begun to look into distibuted queries. Is there an \nexisting solution for distibuting a postgresql database amongst \nmultiple servers, so each has the same schema but only a subset of the \ntotal data?\n\nAny other helpful comments or sugestions on how to improve query times \nusing different hardware or software techniques would be appreciated.\n\nThanks,\n\nMat\n",
"msg_date": "Wed, 08 Oct 2003 16:48:17 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Large Text Search Help"
},
{
"msg_contents": "Mat,\n\n> 1. With tsearch2 I get very good query times up until I insert more\n> records. For example with 100,000 records tsearch2 returns in around 6\n> seconds, with 200,000 records tsearch2 returns in just under a minute.\n> Is this due to the indices fitting entirely in memory with 100,000\n> records?\n\nMaybe, maybe not. If you want a difinitive answer, post your EXPLAIN ANALYZE \nresults with the original query. \n\nI assume that you have run VACUUM ANALYZE, first? Don't bother to respond \nuntil you have.\n\n> 2. As well as whole word matching i also need to be able to do\n> substring matching. Is the FTI module the way to approach this?\n\nYes.\n\n> 3. I have just begun to look into distibuted queries. Is there an\n> existing solution for distibuting a postgresql database amongst\n> multiple servers, so each has the same schema but only a subset of the\n> total data?\n\nNo, it would be ad-hoc. So far, Moore's law has prevented us from needing to \ndevote serious effort to the above approach.\n\n> Any other helpful comments or sugestions on how to improve query times\n> using different hardware or software techniques would be appreciated.\n\nRead the archives of this list.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 14 Oct 2003 10:12:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Text Search Help"
}
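For reference, the two things being asked for might look roughly like this against a tsearch2 setup; the table, tsvector column and configuration names are made up, only the shape of the commands matters:

    VACUUM ANALYZE documents;

    EXPLAIN ANALYZE
    SELECT doc_id
    FROM   documents
    WHERE  fti @@ to_tsquery('default', 'postgresql & performance');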
] |
[
{
"msg_contents": "Hi guys,\n\nI followed the discussion and here are my 0.2$:\n\nI think instead of thinking about where to put the\ninformation about tuning, someone should provide a\n\"pgsql-autotune\". Maybe even a shell script would do the\ntrick.\n\nIt's not so hard to find out, how much memory is installed,\nand IMHO SHARED_BUFFERS, SORT_MEM and EFFECTIVE_CACHE_SIZE\ndepend heavily on this. a \"cat /proc/sys/kernel/shmmax\"\nwould give some valuable information on linux boxes,\nthere is probably other stuff for different OSes.\n\nrandom_page_cost could be set after probing the harddisks,\nmaybe even do a hdparm -tT if they seem to be ATA, not SCSI.\n\nNow, let's pretend the script finds out there is 1 GB RAM,\nit could ask something like \"Do you want to optimize the\nsettings for postgres (other applications may suffer from\nhaving not enough RAM) or do you want to use moderate\nsettings?\"\n\nSomething like this, you get the idea.\n\nThis would give new users a much more usable start than\nthe current default settings and would still leave all\nthe options to do fine-tuning later.\n\nI guess my point is simply this:\ninstead of saying: \"okay we use default settings that will\nrun on _old_ hardware too\" we should go for a little script\nthat creates a \"still save but much better\" config file.\nThere's just no point in setting SHARED_BUFFERS to something\nlike 16 (what's the current default?) if the PC has >= 1 GB\nof RAM. Setting it to 8192 would still be save, but 512 times\nbetter... ;-) (IIRC 8192 would take 64 MB of RAM, which\nshould be save if you leave the default MAX_CONNECTIONS.)\n\nAs said before: just my $0.2\n\nMy opinion on this case is Open Source. Feel free to modify\nand add. :-)\n\nregards,\nOli\n",
"msg_date": "Thu, 9 Oct 2003 10:29:52 +0200",
"msg_from": "\"Oliver Scheit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "go for a script! / ex: PostgreSQL vs. MySQL"
},
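A toy illustration of the probing described above, Linux-only; the ratios are placeholders to show the idea rather than tuning advice, and a real script would need per-OS probes and sanity checks:

    #!/bin/sh
    # how much RAM (kB) and how much shared memory the kernel allows (bytes)
    mem_kb=`awk '/^MemTotal/ {print $2}' /proc/meminfo`
    shmmax=`cat /proc/sys/kernel/shmmax`

    # example policy: offer a quarter of RAM to shared_buffers (8 kB pages),
    # capped by what the kernel will actually let us allocate
    want=`expr $mem_kb / 4 \* 1024`
    if [ $want -gt $shmmax ]; then
        want=$shmmax
    fi
    echo "shared_buffers = `expr $want / 8192`"

    # assume roughly half of RAM ends up as OS disk cache (also 8 kB pages)
    echo "effective_cache_size = `expr $mem_kb / 16`"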
{
"msg_contents": "\nOn 09/10/2003 09:29 Oliver Scheit wrote:\n> Hi guys,\n> \n> I followed the discussion and here are my 0.2$:\n> \n> I think instead of thinking about where to put the\n> information about tuning, someone should provide a\n> \"pgsql-autotune\". Maybe even a shell script would do the\n> trick.\n> \n> It's not so hard to find out, how much memory is installed,\n> and IMHO SHARED_BUFFERS, SORT_MEM and EFFECTIVE_CACHE_SIZE\n> depend heavily on this. a \"cat /proc/sys/kernel/shmmax\"\n> would give some valuable information on linux boxes,\n> there is probably other stuff for different OSes.\n> \n> random_page_cost could be set after probing the harddisks,\n> maybe even do a hdparm -tT if they seem to be ATA, not SCSI.\n> \n> Now, let's pretend the script finds out there is 1 GB RAM,\n> it could ask something like \"Do you want to optimize the\n> settings for postgres (other applications may suffer from\n> having not enough RAM) or do you want to use moderate\n> settings?\"\n> \n> Something like this, you get the idea.\n\n\nISR reading that 7.4 will use a default of shared_beffers = 1000 if the \nmachine can support it (most can). This alone should make a big difference \nin out-of-the-box performance.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 9 Oct 2003 11:12:55 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL"
},
{
"msg_contents": "Oliver,\n\n> I think instead of thinking about where to put the\n> information about tuning, someone should provide a\n> \"pgsql-autotune\". Maybe even a shell script would do the\n> trick.\n\nWell, you see, there's the issue. \"I think someone.\" Lots of people have \nspoken in favor of an \"auto-conf\" script; nobody so far has stepped forward \nto get it done for 7.4, and I doubt we have time now.\n\nI'll probably create a Perl script in a month or so, but not before that ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 9 Oct 2003 09:56:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL"
},
{
"msg_contents": "\nYeah, I had similar thought to Oliver's and suspected that this would be\nthe answer. \nAlso, while it's not too hard to do this for a single platform, it gets\ncomplecated once you start looking at different ones.\n\nJosh, let me know when you're ready to do this. I'll try to help,\nalthough my perl's kind of rusty. Also, can you even assume perl for a\npostgres install? Does Solaris, for instance come with perl?\n\nDror\n\nOn Thu, Oct 09, 2003 at 09:56:11AM -0700, Josh Berkus wrote:\n> Oliver,\n> \n> > I think instead of thinking about where to put the\n> > information about tuning, someone should provide a\n> > \"pgsql-autotune\". Maybe even a shell script would do the\n> > trick.\n> \n> Well, you see, there's the issue. \"I think someone.\" Lots of people have \n> spoken in favor of an \"auto-conf\" script; nobody so far has stepped forward \n> to get it done for 7.4, and I doubt we have time now.\n> \n> I'll probably create a Perl script in a month or so, but not before that ....\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDror Matalon\nZapatec Inc \n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n",
"msg_date": "Thu, 9 Oct 2003 10:17:15 -0700",
"msg_from": "Dror Matalon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL"
},
{
"msg_contents": "> Yeah, I had similar thought to Oliver's and suspected that this\n> would be the answer. Also, while it's not too hard to do this for a\n> single platform, it gets complecated once you start looking at\n> different ones.\n> \n> Josh, let me know when you're ready to do this. I'll try to help,\n> although my perl's kind of rusty. Also, can you even assume perl for\n> a postgres install? Does Solaris, for instance come with perl?\n\nUm, why not wait until the C version of initdb is committed, then\nsteak out a section that'll allow us to submit patches to have initdb\nautotune to our hearts content? There's a tad bit of precedence with\nhaving shared buffer's automatically set in initdb, why not continue\nwith it? I know under FreeBSD initdb will have some #ifdef's to wrap\naround the syscall sysctl() to get info about kernel bits. Talking\nabout how to expand handle this gracefully for a gazillion different\nplatforms might be a more useful discussion at this point because I'm\nsure people from their native OS will be able to contrib the necessary\npatches to extract info from their OS so that initdb can make useful\ndecisions. Or, lastly, does anyone think that this should be in a\ndifferent, external program? -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Thu, 9 Oct 2003 11:11:12 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL"
},
{
"msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\nSC> patches to extract info from their OS so that initdb can make useful\nSC> decisions. Or, lastly, does anyone think that this should be in a\nSC> different, external program? -sc\n\nWell, there should definitely be a way to run a \"get current best\ntuning advice\" for those times when I go and do something like add a\nGig of RAM. ;-)\n\nAlso, I'm sure the tuning advice will change over time, so having to\ndo initdb to get that advice would be a bit onerous.\n\nAs long as initdb has an option for just getting the tuning info, I\nsee no reason to make it separate.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 10 Oct 2003 16:22:08 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: go for a script! / ex: PostgreSQL vs. MySQL"
}
] |
[
{
"msg_contents": "http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n\nShridhar\n\n\n",
"msg_date": "Thu, 09 Oct 2003 16:42:53 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux filesystem shootout"
},
{
"msg_contents": "...and on Thu, Oct 09, 2003 at 04:42:53PM +0530, Shridhar Daithankar used the keyboard:\n>\n> http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n> \n> Shridhar\n\nMy $0.1:\n\nI just stumbled across an interesting filesystem comparison table today,\ncomparing ext2/ext3/reiser/reiser4/jfs/xfs on a single UP P2/450 machine\nwith an old UDMA2 Seagate.\n\nNow however archaic this box may have been, I think that the tests still\nbear some objectivity, as it's a comparison test and not some \"how much\ncan we squeeze out of xyz\" type of bragging.\n\nThe tests were done using bonnie++ and IOZone and are essentially just a\ncouple of tables listing the average results achieved by each of those\ntests.\n\nAlso, ext3, reiser and reiser4 were tested in a couple of different\nconfigurations (reiser4 extents, reiser notail, ext3 journal, ordered and\nwriteback mode).\n\nOh, i shouldn't forget - the address is http://fsbench.netnation.com/ :)\n\nCheers,\n-- \n Grega Bremec\n Sistemska administracija in podpora\n grega.bremec-at-noviforum.si\n http://najdi.si/\n http://www.noviforum.si/",
"msg_date": "Thu, 9 Oct 2003 14:11:24 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux filesystem shootout"
},
{
"msg_contents": "I should at least read the URLs before re-posting info.\n\nMy bad, I'm utterly sorry about this... :-(\n\nCheers,\n-- \n Grega Bremec\n Sistemska administracija in podpora\n grega.bremec-at-noviforum.si\n http://najdi.si/\n http://www.noviforum.si/",
"msg_date": "Thu, 9 Oct 2003 14:13:16 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux filesystem shootout"
},
{
"msg_contents": "\n\n\n\n\n\n\n\n\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n\nShridhar\n \n\n\n\nI feel incompetent when it comes to file systems. Yet everybody would\nlike to have the best file system if given the choice...so do I :) Here\nI am looking at those tables seeing JFS having more green cells than\nothers. The more green the better right? So based on these tests JFS\nought to be the one?\n\nKaarel\n\n\n",
"msg_date": "Thu, 09 Oct 2003 17:52:44 +0300",
"msg_from": "Kaarel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux filesystem shootout"
},
{
"msg_contents": "Kaarel wrote:\n>>>http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n>>>\n>>>Shridhar\n>>> \n>>>\n> I feel incompetent when it comes to file systems. Yet everybody would like to \n> have the best file system if given the choice...so do I :) Here I am looking at \n> those tables seeing JFS having more green cells than others. The more green the \n> better right? So based on these tests JFS ought to be the one?\n\nYes and no. Yes for the results. No for the tests that weren't run.\n\nDatabase load is quite different. Its mixture of read and write load with a \ndynamics varying from one extreme to other, between these two.\n\nAll it says that if you want to choose a good file system for postgresql, look \nat JFS first..:-)\n\n Besides all the tests were done on files file bigger than 1GB. If single file \nsize is restricted to 1GB, it might produce a different result set. And \npostgresql does not exceed 1GB limit per file.\n\nSo still, quite a few unknowns there..\n\nBest thing could be repeat those benchmarks on $PGDATA with your live data \ninside it. It could mimmic the load pretty well..\n\n Shridhar\n\n",
"msg_date": "Thu, 09 Oct 2003 20:36:51 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux filesystem shootout"
},
{
"msg_contents": "Kaarel wrote:\n> \n>>>http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n>>>\n>>>Shridhar\n>>> \n> I feel incompetent when it comes to file systems. Yet everybody would \n> like to have the best file system if given the choice...so do I :) Here \n> I am looking at those tables seeing JFS having more green cells than \n> others. The more green the better right? So based on these tests JFS \n> ought to be the one?\n\nThose tests seem to align with the ones I did recently:\nhttp://www.potentialtech.com/wmoran/postgresql.php#results\n\nThere were less filesystems involved, and the data is less comprehensive,\nbut probably a little easier to understand (i.e. -> fastest filesystem\nat the top of the graph, slowest at the bottom).\n\nI've been telling people that JFS is fastest. This is definately\noversimplified, since the \"shoot out\" shows that it's not _always_\nfastest, but for people who just want to make a good initial choice,\nand won't do their own testing to find out what's fastest in their\nconfiguration (for whatever reason), I think JFS is the safest bet.\nSince it's a journalling filesystem as well, it should have good\nrecoverability in the even of catastrophy, but I haven't tested\nthat.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 09 Oct 2003 11:19:42 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux filesystem shootout"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Shridhar Daithankar wrote:\n\n> Kaarel wrote:\n> >>>http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html\n> >>>\n> >>>Shridhar\n> >>> \n> >>>\n> > I feel incompetent when it comes to file systems. Yet everybody would like to \n> > have the best file system if given the choice...so do I :) Here I am looking at \n> > those tables seeing JFS having more green cells than others. The more green the \n> > better right? So based on these tests JFS ought to be the one?\n> \n> Yes and no. Yes for the results. No for the tests that weren't run.\n> \n> Database load is quite different. Its mixture of read and write load with a \n> dynamics varying from one extreme to other, between these two.\n> \n> All it says that if you want to choose a good file system for postgresql, look \n> at JFS first..:-)\n> \n> Besides all the tests were done on files file bigger than 1GB. If single file \n> size is restricted to 1GB, it might produce a different result set. And \n> postgresql does not exceed 1GB limit per file.\n> \n> So still, quite a few unknowns there..\n\nAbsolutely. For instance, one file system may be faster on a RAID card \nwith battery backed cache, while another may be faster on an IDE drive \nwith write cache disabled, while another may be faster on software RAID1, \nwhile another might be faster on software RAID5.\n\nIf you haven't tested different file systems on your setup, you don't \nreally know which will be faster until you do.\n\n",
"msg_date": "Thu, 9 Oct 2003 09:30:36 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux filesystem shootout"
}
] |
[
{
"msg_contents": "On Wed, 8 Oct 2003, Neil Conway wrote:\n\n> Hey Jeff,\n>\n> On Wed, 2003-10-08 at 11:46, Jeff wrote:\n> > Yeah - like I expected it was able to generate much better code for\n> > _bt_checkkeys which was the #1 function in gcc on both sun & linux.\n>\n> If you get a minute, would it be possible to compare the performance of\n> your benchmark under linux/gcc and solaris/gcc when PostgreSQL is\n> compiled with \"-O3\"?\n>\nSun:\ngcc:\nnone: 60 seconds\n-O: 21 seconds\n-O2: 20 seconds\n-O3: 19 seconds\n\nsuncc:\nnone: 52 seconds\n-fast: 20 secondsish.\n\n-fast is actually a macro that expands to the \"best settings\" for the\nplatform that is doing the compilation.\n\n\nLinux:\n-O2: 35\n-O3: 40\nOdd.. I wonder why it took longer. Perhaps gcc built some bad code?\nI thought the results were odd there so I ran the test many times.. same\nresults! Swapped the binaries back (so -O2 was running) and boom. back to\n35.\n\nSun gcc -O2 and suncc -fast both pass make check.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 08:15:32 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "\nSo you want -fast added as default for non-gcc Solaris? You mentioned\nthere is a warning generated that we have to deal with?\n\n---------------------------------------------------------------------------\n\nJeff wrote:\n> On Wed, 8 Oct 2003, Neil Conway wrote:\n> \n> > Hey Jeff,\n> >\n> > On Wed, 2003-10-08 at 11:46, Jeff wrote:\n> > > Yeah - like I expected it was able to generate much better code for\n> > > _bt_checkkeys which was the #1 function in gcc on both sun & linux.\n> >\n> > If you get a minute, would it be possible to compare the performance of\n> > your benchmark under linux/gcc and solaris/gcc when PostgreSQL is\n> > compiled with \"-O3\"?\n> >\n> Sun:\n> gcc:\n> none: 60 seconds\n> -O: 21 seconds\n> -O2: 20 seconds\n> -O3: 19 seconds\n> \n> suncc:\n> none: 52 seconds\n> -fast: 20 secondsish.\n> \n> -fast is actually a macro that expands to the \"best settings\" for the\n> platform that is doing the compilation.\n> \n> \n> Linux:\n> -O2: 35\n> -O3: 40\n> Odd.. I wonder why it took longer. Perhaps gcc built some bad code?\n> I thought the results were odd there so I ran the test many times.. same\n> results! Swapped the binaries back (so -O2 was running) and boom. back to\n> 35.\n> \n> Sun gcc -O2 and suncc -fast both pass make check.\n> \n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 09:40:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Bruce Momjian wrote:\n\n>\n> So you want -fast added as default for non-gcc Solaris? You mentioned\n> there is a warning generated that we have to deal with?\n>\n\n Yeah, suncc generates a warning for _every_ file that says:\nWarning: -xarch=native has been explicitly specified, or implicitly\nspecified by a macro option, -xarch=native on this architecture implies\n-xarch=v8plusa which generates code that does not run on pre-UltraSPARC\nprocessors\n\nAnd then I get various warnings here and there...\n\nlots of \"statement not reached\" as in ecpg's type.c module\nThe offending code is a big switch statement like:\n case ECPGt_bool:\n return (\"ECPGt_bool\");\n break;\n\nAnd then any functiont aht uses PG_RETURN_NULL generates \" warning:\nend-of-loop code not reached\"\n\nand a bunch of \"constant promoted to unsigned long long\"\n\n\nAnd some places such as in fe-exec.c have code like this:\n buflen = strlen(strtext); /* will shrink, also we discover\nif\n\nwhere strtext is an unsigned char * which generates warning: argument #1\nis incompatible with prototype:\n\nand then various other type mismatches here and there.\n\nI skimmed through the manpage.. it doesn't look like we can supress\nthese..\n\n\nNot sure we want it to look like we have bad code if someone uses cc.\nperhaps issue a ./configure notice or something?\n\ngcc compiles things fine.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 10:51:29 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "\nWhat is the performance win for the -fast flag again?\n\n---------------------------------------------------------------------------\n\nJeff wrote:\n> On Thu, 9 Oct 2003, Bruce Momjian wrote:\n> \n> >\n> > So you want -fast added as default for non-gcc Solaris? You mentioned\n> > there is a warning generated that we have to deal with?\n> >\n> \n> Yeah, suncc generates a warning for _every_ file that says:\n> Warning: -xarch=native has been explicitly specified, or implicitly\n> specified by a macro option, -xarch=native on this architecture implies\n> -xarch=v8plusa which generates code that does not run on pre-UltraSPARC\n> processors\n> \n> And then I get various warnings here and there...\n> \n> lots of \"statement not reached\" as in ecpg's type.c module\n> The offending code is a big switch statement like:\n> case ECPGt_bool:\n> return (\"ECPGt_bool\");\n> break;\n> \n> And then any functiont aht uses PG_RETURN_NULL generates \" warning:\n> end-of-loop code not reached\"\n> \n> and a bunch of \"constant promoted to unsigned long long\"\n> \n> \n> And some places such as in fe-exec.c have code like this:\n> buflen = strlen(strtext); /* will shrink, also we discover\n> if\n> \n> where strtext is an unsigned char * which generates warning: argument #1\n> is incompatible with prototype:\n> \n> and then various other type mismatches here and there.\n> \n> I skimmed through the manpage.. it doesn't look like we can supress\n> these..\n> \n> \n> Not sure we want it to look like we have bad code if someone uses cc.\n> perhaps issue a ./configure notice or something?\n> \n> gcc compiles things fine.\n> \n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 11:28:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Bruce Momjian wrote:\n\n>\n> What is the performance win for the -fast flag again?\n>\n> ---------------------------------------------------------------------------\n>\n52 seconds to 19-20 seconds\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 11:45:24 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Jeff wrote:\n> On Thu, 9 Oct 2003, Bruce Momjian wrote:\n> \n> >\n> > What is the performance win for the -fast flag again?\n> >\n> > ---------------------------------------------------------------------------\n> >\n> 52 seconds to 19-20 seconds\n\nWow, that's dramatic. Do you want to propose some flags for non-gcc\nSolaris? Is -fast the only one? Is there one that suppresses those\nwarnings or are they OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 11:51:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Bruce Momjian wrote:\n\n> > 52 seconds to 19-20 seconds\n>\n> Wow, that's dramatic. Do you want to propose some flags for non-gcc\n> Solaris? Is -fast the only one? Is there one that suppresses those\n> warnings or are they OK?\n>\n\nWell. As I said, I didn't see an obvious way to hide those warnings.\nI'd love to make those warnings go away. That is why I suggested perhaps\nprinting a message to ensure the user knows that warnings may be printed\nwhen using sunsoft.\n\n-fast should be all you need - it picks the \"best settings\" to use for the\nplatform that is doing the compile.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 12:07:20 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Jeff,\n\nMy first concern with the -fast option is that it makes an executable\nthat is specific for the platform on which the compilation is run\nunless other flags are given. My second concern is the effect it has\non IEEE floating point behavior w.r.t. rounding, error handling, ....\nAnd my third concern is that if you use -fast, all other code must\nbe compiled and linked with the -fast option for correct operation,\nthis includes any functional languages such as perl, python, R,...\nThat is a pretty big requirement for a default compilation flag.\n\nKen Marshall\n\nOn Thu, Oct 09, 2003 at 12:07:20PM -0400, Jeff wrote:\n> On Thu, 9 Oct 2003, Bruce Momjian wrote:\n> \n> > > 52 seconds to 19-20 seconds\n> >\n> > Wow, that's dramatic. Do you want to propose some flags for non-gcc\n> > Solaris? Is -fast the only one? Is there one that suppresses those\n> > warnings or are they OK?\n> >\n> \n> Well. As I said, I didn't see an obvious way to hide those warnings.\n> I'd love to make those warnings go away. That is why I suggested perhaps\n> printing a message to ensure the user knows that warnings may be printed\n> when using sunsoft.\n> \n> -fast should be all you need - it picks the \"best settings\" to use for the\n> platform that is doing the compile.\n> \n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Thu, 9 Oct 2003 11:25:25 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Kenneth Marshall wrote:\n\n> Jeff,\n>\n> My first concern with the -fast option is that it makes an executable\n> that is specific for the platform on which the compilation is run\n> unless other flags are given. My second concern is the effect it has\n> on IEEE floating point behavior w.r.t. rounding, error handling, ....\n> And my third concern is that if you use -fast, all other code must\n> be compiled and linked with the -fast option for correct operation,\n> this includes any functional languages such as perl, python, R,...\n> That is a pretty big requirement for a default compilation flag.\n>\n> Ken Marshall\n>\n\nSo you think we should leave PG alone and let it run horrifically slowly?\nDo you have a better idea of how to do this?\n\nAnd do you have evidence apps compiled with -fast linked to non -fast\n(or gcc compiled) have problems?\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 13:04:23 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Jeff wrote:\n> On Thu, 9 Oct 2003, Kenneth Marshall wrote:\n> \n> > Jeff,\n> >\n> > My first concern with the -fast option is that it makes an executable\n> > that is specific for the platform on which the compilation is run\n> > unless other flags are given. My second concern is the effect it has\n> > on IEEE floating point behavior w.r.t. rounding, error handling, ....\n> > And my third concern is that if you use -fast, all other code must\n> > be compiled and linked with the -fast option for correct operation,\n> > this includes any functional languages such as perl, python, R,...\n> > That is a pretty big requirement for a default compilation flag.\n> >\n> > Ken Marshall\n> >\n> \n> So you think we should leave PG alone and let it run horrifically slowly?\n> Do you have a better idea of how to do this?\n> \n> And do you have evidence apps compiled with -fast linked to non -fast\n> (or gcc compiled) have problems?\n\nI have updated the Solaris FAQ:\n\n\n5) How can I compile for optimum performance?\n\nTry using the \"-fast\" compile flag. The binaries might not be portable to\nother Solaris systems, and you might need to compile everything that links\nto PostgreSQL with \"-fast\", but PostgreSQL will run significantly faster,\n50% faster on some tests.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 13:08:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "On Thu, Oct 09, 2003 at 01:04:23PM -0400, Jeff wrote:\n> \n> So you think we should leave PG alone and let it run horrifically slowly?\n> Do you have a better idea of how to do this?\n\nGiven the point in the release cycle, mightn't the FAQ_Solaris or\nsome other place be better for this for now? I agree with the\nconcern. I'd rather have slow'n'stable than fast-but-broken.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nAfilias Canada Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 9 Oct 2003 13:16:11 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Thu, Oct 09, 2003 at 01:04:23PM -0400, Jeff wrote:\n> > \n> > So you think we should leave PG alone and let it run horrifically slowly?\n> > Do you have a better idea of how to do this?\n> \n> Given the point in the release cycle, mightn't the FAQ_Solaris or\n> some other place be better for this for now? I agree with the\n> concern. I'd rather have slow'n'stable than fast-but-broken.\n\nFAQ added.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 13:57:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "We're keeping the -O2 for gcc in the template and moving the mention of\n-fast to the FAQ, correct?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 14:04:41 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Jeff wrote:\n> We're keeping the -O2 for gcc in the template and moving the mention of\n> -fast to the FAQ, correct?\n\ngcc gets -O2, non-gcc gets -O, and -fast is in the FAQ, yea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 14:10:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "[email protected] (Bruce Momjian) writes:\n> 5) How can I compile for optimum performance?\n>\n> Try using the \"-fast\" compile flag. The binaries might not be portable to\n> other Solaris systems, and you might need to compile everything that links\n> to PostgreSQL with \"-fast\", but PostgreSQL will run significantly faster,\n> 50% faster on some tests.\n\nYou might also mention something like the following:\n\n If you are compiling using GCC, you will quite likely want to add in\n the \"-O2\" compile flag.\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Thu, 09 Oct 2003 14:12:30 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "Christopher Browne wrote:\n> [email protected] (Bruce Momjian) writes:\n> > 5) How can I compile for optimum performance?\n> >\n> > Try using the \"-fast\" compile flag. The binaries might not be portable to\n> > other Solaris systems, and you might need to compile everything that links\n> > to PostgreSQL with \"-fast\", but PostgreSQL will run significantly faster,\n> > 50% faster on some tests.\n> \n> You might also mention something like the following:\n> \n> If you are compiling using GCC, you will quite likely want to add in\n> the \"-O2\" compile flag.\n\nWe already do that by default in current CVS for gcc, and -O for\nnon-gcc.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 15:27:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
},
{
"msg_contents": "I would use a simple -xO2 or -xO3 instead as the default with\nan -fsimple=2.\n\n--Ken\n\n-x02 -xbuiltin=%all\nOn Thu, Oct 09, 2003 at 01:04:23PM -0400, Jeff wrote:\n> On Thu, 9 Oct 2003, Kenneth Marshall wrote:\n> \n> > Jeff,\n> >\n> > My first concern with the -fast option is that it makes an executable\n> > that is specific for the platform on which the compilation is run\n> > unless other flags are given. My second concern is the effect it has\n> > on IEEE floating point behavior w.r.t. rounding, error handling, ....\n> > And my third concern is that if you use -fast, all other code must\n> > be compiled and linked with the -fast option for correct operation,\n> > this includes any functional languages such as perl, python, R,...\n> > That is a pretty big requirement for a default compilation flag.\n> >\n> > Ken Marshall\n> >\n> \n> So you think we should leave PG alone and let it run horrifically slowly?\n> Do you have a better idea of how to do this?\n> \n> And do you have evidence apps compiled with -fast linked to non -fast\n> (or gcc compiled) have problems?\n> \n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n",
"msg_date": "Thu, 9 Oct 2003 15:31:24 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun performance - Major discovery!"
}
] |
[
{
"msg_contents": "DISCLAIMER: This message contains privileged and confidential information and is\nintended only for the individual named.If you are not the intended\nrecipient you should not disseminate,distribute,store,print, copy or\ndeliver this message.Please notify the sender immediately by e-mail if\nyou have received this e-mail by mistake and delete this e-mail from\nyour system.\n18:15p\nDear all,\nThere is a problem I am facing while connecting to postgresqk database server, which is intalled on the remote machine. When I check the log's \nat database end PG_recv buf is reaching the EOF, and at my program level, java socket exception.\nI need some help regarding this... as this is not allowing my programs to execute..\nThanking You\n-----\nWarm Regards\nSh�am Peri\n\nII Floor, Punja Building,\nM.G.Road,\nBallalbagh,\nMangalore-575003 \nPh : 91-824-2451001/5\nFax : 91-824-2451050 \n\n\n\n\nDISCLAIMER: This message contains privileged and confidential information and is\nintended only for the individual named.If you are not the intended\nrecipient you should not disseminate,distribute,store,print, copy or\ndeliver this message.Please notify the sender immediately by e-mail if\nyou have received this e-mail by mistake and delete this e-mail from\nyour system.",
"msg_date": "Thu, 9 Oct 2003 17:46:16 +0530 (IST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Serious Problem with the windows and postgres configuration"
}
] |
[
{
"msg_contents": "This is a timely thread for myself, as I'm in the middle of testing both\ndatabases as an Oracle replacement.\n \nAs of this moment, I know more about MySQL (tuning, setup, features)\nthan I do about Postgres. Not because I like MySQL more, but because\n \n1) the MySQL docs are better (sorry - I found them easier to read, and\nmore comprehensive; I had an easier time finding the answers I needed)\n2) there are more web pages devoted to MySQL (probably because it has a\nbit more market share)\n3) there are more books on MySQL at the bookstore (I haven't had a\nchance to pick up Bruce's book yet; it might be all the book I'd ever\nneed)\n4) we looked at MySQL first (we needed replication, and eRServer had not\nbeen open-sourced when we started looking)\n \nWith regards to #1, I'd like to specifically mention tuning - the docs\nat http://www.postgresql.org/docs/7.3/static/runtime-config.html\n<http://www.postgresql.org/docs/7.3/static/runtime-config.html> give a\nbasic explanation of the different options, but much more is needed for\ntuning. I'm running into a problem with an update statement (that uses a\nselect in a sub-query) in Postgres - it's taking hours to run (the\nequiv, using a multi-table update statement in MySQL instead of a\nsub-query, takes all of 2 seconds). I'll be posting it later once I do\nmore reading to make sure I've done as much as I can to solve it myself.\n \nI really agree with this post:\n \n\"I guess my point is simply this: instead of saying: \"okay we use\ndefault settings that will run on _old_ hardware too\" we should go for a\nlittle script that creates a \"still save but much better\" config file.\nThere's just no point in setting SHARED_BUFFERS to something like 16\n(what's the current default?) if the PC has >= 1 GB of RAM. Setting it\nto 8192 would still be save, but 512 times better... ;-) (IIRC 8192\nwould take 64 MB of RAM, which should be save if you leave the default\nMAX_CONNECTIONS.)\" It provides examples, and some real numbers to help\nsomeone new to the database take an initial crack at tuning. Remember,\nyou're trying to compete with the big-guys (Oracle, etc), so providing\ninfo that an Oracle DBA needs is pretty critical. I'm currently at a\ncomplete loss for tuning Postgres (it seems to do things very\ndifferently than both Oracle and MySQL).\n \n \nI also have to admit a bit of irritation reading this thread; there is a\nfair number of incorrect statements on this thread that, while not\nwrong, definately aren't right:\n \n\"Speed depends on the nature of use and the complexity of queries. If\nyou are doing updates of related tables, ACID is of vital importance and\nMySQL doesn't provide it.\"\nMySQL has ACID in InnoDB. I've found that MySQL is actually very fast on\ncomplex queries w/InnoDB (six tables, 1 million rows, two of the joins\nare outer-joins. In fact, I can get InnoDB to be almost as fast as\nMyISAM. Complex updates are also very very fast. We have not tried\nflooding either database with dozens of complex statements from multiple\nclients; that's coming soon, and from what I've read, MySQL won't do too\nwell.\n \n\"using InnoDB tables (the only way to have foreign keys, transactions,\nand row level locking for MySQL) makes MySQL slower and adds complexity\nto tuning the database\"\nAdding this: \"innodb_flush_method=O_DSYNC\" to the my.cnf made InnoDB as\nfast as MyISAM in our tests. 
It doesn't turn off disk flushing; it's\njust a flush method that might work better with different kernels and\ndrives; it's one of those \"play with this and see if it helps\"\nparameters; there are lots of those in Postgres, it seems. There are 10\nvariables for tuning InnoDB (and you don't have to tune for MyISAM, so\nit's actually a six-of-one, half-dozen-of-the-other). Setup between the\ntwo seems to be about the same.\n \n\"PostgreSQL supports constraints. MySQL doesn't; programmers need to\ntake care of that from the client side\"\nAgain, InnoDB supports constraints.\n \n\"Transactions: We've been here before. Suffice to say, MySQL+InnoDB is\nalmost there. Plain ol' MySQL doesn't have it, which tells you something\nabout their philosophy towards database design.\"\nInnoDB supports transactions very nicely, has the equivalent of WAL, and\none thing I really like: a tablespace (comprised of data files that can\nbe spread around multiple hard drives), and in a month or so, InnoDB\nwill support multiple tablespaces.\n \n \nTo be fair, here are a few MySQL \"bad-things\" that weren't mentioned:\n \n1) InnoDB can't do a hot-backup with the basic backup tools. To\nhot-backup an InnoDB database, you need to pay $450 US per database per\nyear ($1150 per database perpetual) for a proprietary hot-backup tool\n2) InnoDB can't do full-text searching.\n3) I see alot more corrupt-database bugs on the MySQL lists (most are\nMyISAM, but a few InnoDB bugs pop up from time to time) - way more than\nI see on the Postgres lists.\n4) There are some really cranky people on the MySQL lists; the Postgres\nlists seem to be much more effective (esp. with people like Tom Lane).\nMaybe it's because they get alot of dumb questions, as people unfamiliar\nwith databases turn to MySQL first?\n \nMaybe the Postgres community needs an anti-FUD individual or two; people\nthat know both databases, and can provide the proper information for\nanswering questions like this. A section in the docs would help as well.\nYes, I know many of the people advocating Postgres do not want to\ncompare themselves to MySQL (but rather to Oracle, Sybase, DB2, etc) ,\nbut the volume of responses on a thread like this indicates that the\ncomparison is going to happen regardless. Better to nip it in the bud\nquickly than let it go on over 3-4 days.\n \nOne last observation: someone looking at both databases, reading those\nposts, might get a bad impression of Postgres based on the inconsistency\nand incorrectness of some of the statements made about MySQL. If a\nsalesperson provides misinformation about a competitors product and you\nfind out about it, that salesperson has most likely lost a customer.\n \nAnyway, I hope I haven't offended anyone - I'm not trying to troll or\nflame, but rather just give some constructive criticism from someone\noutside both the MySQL and Postgres camps.\n \nDavid",
"msg_date": "Thu, 9 Oct 2003 10:30:07 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL vs MySQL"
},
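A side note on the slow sub-query UPDATE mentioned in the message above: PostgreSQL
accepts an UPDATE ... FROM join form that is roughly the counterpart of MySQL's
multi-table UPDATE, and it can avoid re-running a correlated sub-select for every
row. The sketch below is illustrative only; the table and column names (orders,
shipments, status) are invented rather than taken from the poster's schema.

    -- correlated sub-select form: the inner SELECT may be evaluated once per row
    UPDATE orders
       SET status = (SELECT s.status
                       FROM shipments s
                      WHERE s.order_id = orders.id)
     WHERE EXISTS (SELECT 1 FROM shipments s WHERE s.order_id = orders.id);

    -- equivalent join form using PostgreSQL's UPDATE ... FROM extension
    UPDATE orders
       SET status = s.status
      FROM shipments s
     WHERE s.order_id = orders.id;

Whether the first form is really executed once per row depends on the planner and
the release in use, so treat this as a rewrite worth testing rather than a
guaranteed fix.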
{
"msg_contents": "David,\n\nThanks for being considerate, thourough, and honest about your opinions. \nParticulary that you didn't simple depart in a huff.\n\n> 1) the MySQL docs are better (sorry - I found them easier to read, and\n> more comprehensive; I had an easier time finding the answers I needed)\n\nI can believe that. MySQL AB has paid documentation writers; we don't. \n\n> 2) there are more web pages devoted to MySQL (probably because it has a\n> bit more market share)\n\nParticularly among web developers.\n\n> 3) there are more books on MySQL at the bookstore (I haven't had a\n> chance to pick up Bruce's book yet; it might be all the book I'd ever\n> need)\n\nBruce's book is out of date -- released in 1998. I recommend Korry Douglas' \nbook instead, just because of its up-to-date nature (printed late 2002 or \nearly 2003).\n\n> 4) we looked at MySQL first (we needed replication, and eRServer had not\n> been open-sourced when we started looking)\n\nI can't do anything about that, now can I?\n\n> With regards to #1, I'd like to specifically mention tuning - the docs\n> at http://www.postgresql.org/docs/7.3/static/runtime-config.html\n> <http://www.postgresql.org/docs/7.3/static/runtime-config.html> give a\n\nHave you checked these pages? They've been posted on this list numerous \ntimes:\nhttp://techdocs.postgresql.org\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nAlso, the runtime docs are being improved in 7.4:\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html\n... and I'm still working on more general \"how to\" text.\n\n> \"I guess my point is simply this: instead of saying: \"okay we use\n> default settings that will run on _old_ hardware too\" we should go for a\n> little script that creates a \"still save but much better\" config file.\n> There's just no point in setting SHARED_BUFFERS to something like 16\n> (what's the current default?) if the PC has >= 1 GB of RAM. Setting it\n> to 8192 would still be save, but 512 times better... ;-) (IIRC 8192\n> would take 64 MB of RAM, which should be save if you leave the default\n> MAX_CONNECTIONS.)\" \n\nYou'll be interested to know that SHARED_BUFFERS are actually addressed in the \ninitdb script in 7.4. However, may OSes have low limits on per-process \nmemory that requires the admin to modify the sysconf before postgresql.conf \ncan be adjusted properly. This makes writing a multi-platform tuning script \na significant effort, and to date nobody who is complaining about it the \nloudest has volunteered to do the work.\n\nTo reiterate my point above, PostgreSQL is a 100% volunteer Open Source \nproject. MySQL is a commercial company which distributes its products via \nOpen Source licensing. That makes some things easier for them than for us \n(and vice-versa, of course).\n\n> I also have to admit a bit of irritation reading this thread; there is a\n> fair number of incorrect statements on this thread that, while not\n> wrong, definately aren't right:\n\nWe've been working on this on the advocacy list .... that is, giving an \naccurate listing of PostgreSQL features not posessed by MySQL (same for \nOracle and DB2 as well, MySQL is just easier to start becuase we don't have \nto worry about being sued). 
I'd appreciate it if you'd take an interest in \nthat document and revise anything which is innaccurate or perjorative.\n\nAlso, keep in mind that many members of the PostgreSQL community have \"an axe \nto grind\" about MySQL. This is not only because of MySQL's eclipsing us in \nthe popular press as \"THE open source database\"; it is also because prominent \nindividuals at MySQL AB, particularly Monty and David Axmark, have in the \npast signaled their intent to rub out all other OSS databases, starting with \nPostgreSQL. While this says little about the MySQL community, it does make \nmembers of our communty very touchy when the \"M\" word comes up.\n\nI quote the rest of your debunking for the benefit of the readers on the \nAdvocacy list, with a couple of comments:\n\n> \"Speed depends on the nature of use and the complexity of queries. If\n> you are doing updates of related tables, ACID is of vital importance and\n> MySQL doesn't provide it.\"\n> MySQL has ACID in InnoDB. I've found that MySQL is actually very fast on\n> complex queries w/InnoDB (six tables, 1 million rows, two of the joins\n> are outer-joins. In fact, I can get InnoDB to be almost as fast as\n> MyISAM. Complex updates are also very very fast. We have not tried\n> flooding either database with dozens of complex statements from multiple\n> clients; that's coming soon, and from what I've read, MySQL won't do too\n> well.\n>\n> \"using InnoDB tables (the only way to have foreign keys, transactions,\n> and row level locking for MySQL) makes MySQL slower and adds complexity\n> to tuning the database\"\n> Adding this: \"innodb_flush_method=O_DSYNC\" to the my.cnf made InnoDB as\n> fast as MyISAM in our tests. It doesn't turn off disk flushing; it's\n> just a flush method that might work better with different kernels and\n> drives; it's one of those \"play with this and see if it helps\"\n> parameters; there are lots of those in Postgres, it seems. There are 10\n> variables for tuning InnoDB (and you don't have to tune for MyISAM, so\n> it's actually a six-of-one, half-dozen-of-the-other). Setup between the\n> two seems to be about the same.\n>\n> \"PostgreSQL supports constraints. MySQL doesn't; programmers need to\n> take care of that from the client side\"\n> Again, InnoDB supports constraints.\n\nReally? This is news. We did some tests on constraints on InnoDB, and found \nthat while they parsed, they were not actually enforced. Was our test in \nerror?\n\n> \"Transactions: We've been here before. Suffice to say, MySQL+InnoDB is\n> almost there. Plain ol' MySQL doesn't have it, which tells you something\n> about their philosophy towards database design.\"\n> InnoDB supports transactions very nicely, has the equivalent of WAL, and\n> one thing I really like: a tablespace (comprised of data files that can\n> be spread around multiple hard drives), and in a month or so, InnoDB\n> will support multiple tablespaces.\n\nWe'll have multiple tablespaces soon as well. They didn't quite make it for \n7.4, but will be in 7.5.\n\n> To be fair, here are a few MySQL \"bad-things\" that weren't mentioned:\n>\n> 1) InnoDB can't do a hot-backup with the basic backup tools. 
To\n> hot-backup an InnoDB database, you need to pay $450 US per database per\n> year ($1150 per database perpetual) for a proprietary hot-backup tool\n> 2) InnoDB can't do full-text searching.\n> 3) I see alot more corrupt-database bugs on the MySQL lists (most are\n> MyISAM, but a few InnoDB bugs pop up from time to time) - way more than\n> I see on the Postgres lists.\n\nThis is consistent with MySQL's emphasis on speed and ease-of-use over \nreliability; we have the opposite emphasis (see below).\n\n> 4) There are some really cranky people on the MySQL lists; the Postgres\n> lists seem to be much more effective (esp. with people like Tom Lane).\n> Maybe it's because they get alot of dumb questions, as people unfamiliar\n> with databases turn to MySQL first?\n\nPossibly. Also I think it's because of the poor organization of their mailing \nlists; ours are clearly divided into particular topics and experienced \nmembers politiely encourage toplicality. Further, the participation by \nmajor contributors on our lists is, from what I've heard, higher; this means \nthat complaintants have faith that their complaints will reach the eyes of \nthose actually responsible for the code.\n\n> Maybe the Postgres community needs an anti-FUD individual or two; people\n> that know both databases, and can provide the proper information for\n> answering questions like this. A section in the docs would help as well.\n> Yes, I know many of the people advocating Postgres do not want to\n> compare themselves to MySQL (but rather to Oracle, Sybase, DB2, etc) ,\n> but the volume of responses on a thread like this indicates that the\n> comparison is going to happen regardless. Better to nip it in the bud\n> quickly than let it go on over 3-4 days.\n\nWould you care to volunteer? We'd be glad to have you.\n\n> One last observation: someone looking at both databases, reading those\n> posts, might get a bad impression of Postgres based on the inconsistency\n> and incorrectness of some of the statements made about MySQL. If a\n> salesperson provides misinformation about a competitors product and you\n> find out about it, that salesperson has most likely lost a customer.\n\nMaybe. Not that I'm saying that inaccurate propaganda is a good thing, but \nthat it seems so pervasive in the industry that I think people expect it. We \ntrash MySQL; MySQL publishes 6-year-old PG vs. MySQL benchmarks; Oracle puts \ndown all Open Source databases based on MySQL's limitations; and MS SQL \nServer publishes benchmarks based on MSSQL on a cluster vs. other DBs on \nworkstations.\n\n> Anyway, I hope I haven't offended anyone - I'm not trying to troll or\n> flame, but rather just give some constructive criticism from someone\n> outside both the MySQL and Postgres camps.\n\nHmmm .... also, come to think about it, MySQL has done us a favor in some ways \nby making our project take advocacy and user-friendliness seriously, \nsomething we didn't always do.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 9 Oct 2003 10:59:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
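For readers wondering what such an enforcement test might look like, here is a
minimal sketch with hypothetical table names (not the actual test referred to
above). In PostgreSQL each statement below that violates a declared constraint
fails with an error instead of being accepted silently.

    CREATE TABLE parent (id integer PRIMARY KEY);
    CREATE TABLE child  (id        integer PRIMARY KEY,
                         parent_id integer REFERENCES parent(id),
                         qty       integer CHECK (qty > 0));

    INSERT INTO child VALUES (1, 42, 5);   -- fails: no parent row with id 42
    INSERT INTO child VALUES (2, NULL, 0); -- fails: CHECK (qty > 0) is violated
    DROP TABLE parent;                     -- fails while child references it; CASCADE is required

Running the same script against another database and noting which statements are
actually rejected is a simple way to separate constraints that are merely parsed
from constraints that are enforced.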
{
"msg_contents": "On Thu, 9 Oct 2003, David Griffiths wrote:\n\n> 1) the MySQL docs are better (sorry - I found them easier to read, and\n> more comprehensive; I had an easier time finding the answers I needed)\n\nHuh. I had the opposite experience. Each to his own.\nI think everybody agrees PG needs a better tuning doc (or pointers to it,\nor something).\n\n> \"Speed depends on the nature of use and the complexity of queries. If\n> you are doing updates of related tables, ACID is of vital importance and\n> MySQL doesn't provide it.\"\n\nI don't know if you looked at my presentation. But in preparation for it I\nchecked out MySQL 4.0.x[most recent stable]. I found that I violates the C\nin acid in some places. ie you can insert a date of 0000/00/00 and have it\nsit there and be fine. Perhaps this is the fault of mysql's timestamp\ntype.\n\n> MyISAM. Complex updates are also very very fast. We have not tried\n> flooding either database with dozens of complex statements from multiple\n> clients;\n\nYou don't need complex statements to topple mysql over in high\nconcurrency. I was doing fairly simple queries with 20 load generators -\nit didn't like it. Not at all (mysql: 650 seconds pg: 220)\n\n> 3) I see alot more corrupt-database bugs on the MySQL lists (most are\n> MyISAM, but a few InnoDB bugs pop up from time to time) - way more than\n> I see on the Postgres lists.\n\nI saw this as well. I was seeing things in the changelog as late as\nseptember (this year) about fixing bugs that cause horrific corruption.\nThat doesn't make me feel comfy. Remember - in reality InnoDB is still\nvery new. The PG stuff has been tinkered with for years. I like\ninnovation and new things, but in some cases, I prefer the old code\nthat has been looked at for years.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 14:02:10 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs MySQL"
},
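To make the consistency point above concrete, a tiny illustrative example with a
hypothetical table name: PostgreSQL refuses to store a date that does not exist
rather than keeping a zeroed placeholder.

    CREATE TABLE events (happened date);
    INSERT INTO events VALUES ('0000-00-00');  -- rejected as out of range; nothing is stored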
{
"msg_contents": "> Have you checked these pages? They've been posted on this list numerous\n> times:\n> http://techdocs.postgresql.org\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n>\n\nJosh- It would be great to have a link to those last two excellent resources\nfrom the techdocs area- perhaps from the \"optimizing\" section in\nhttp://techdocs.postgresql.org/oresources.php. Who should we suggest this\nto? (I submitted these using the form in that area, but you may have better\nconnections.)\n\n-Nick\n\n\n",
"msg_date": "Thu, 9 Oct 2003 13:14:41 -0500",
"msg_from": "\"Nick Fankhauser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "Nick,\n\n> Josh- It would be great to have a link to those last two excellent resources\n> from the techdocs area- perhaps from the \"optimizing\" section in\n> http://techdocs.postgresql.org/oresources.php. Who should we suggest this\n> to? (I submitted these using the form in that area, but you may have better\n> connections.)\n\nThis is my responsibility; I'll add it to the list.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 9 Oct 2003 12:01:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "On Thu, 2003-10-09 at 14:14, Nick Fankhauser wrote:\n> > Have you checked these pages? They've been posted on this list numerous\n> > times:\n> > http://techdocs.postgresql.org\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> >\n> \n> Josh- It would be great to have a link to those last two excellent resources\n> from the techdocs area- perhaps from the \"optimizing\" section in\n> http://techdocs.postgresql.org/oresources.php. Who should we suggest this\n> to? (I submitted these using the form in that area, but you may have better\n> connections.)\n> \n\nUnfortunately techdocs is becoming more and more a bastard child since\nno one can seem to agree on and actually implement a solution to it's\ncurrent woes. I've (quietly) complained about new articles getting\nwritten by the community and posted to sites other than techdocs since I\nthink it makes it harder for folks to find useful information... \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "09 Oct 2003 15:06:05 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Josh Berkus wrote:\n\n> David Griffiths wrote: \n> > With regards to #1, I'd like to specifically mention tuning - the docs\n> > at http://www.postgresql.org/docs/7.3/static/runtime-config.html\n> > <http://www.postgresql.org/docs/7.3/static/runtime-config.html> give a\n> \n> Have you checked these pages? They've been posted on this list numerous \n> times:\n> http://techdocs.postgresql.org\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> \n> Also, the runtime docs are being improved in 7.4:\n> http://developer.postgresql.org/docs/postgres/runtime-config.html\n> ... and I'm still working on more general \"how to\" text.\n\nany chance of getting the perf.html file from varlena folded into the main \ndocumentation tree somewhere? it's a great document, and it would \ndefinitely help if the tuning section of the main docs said \"For a more \nthorough examination of postgresql tuning see this:\" and pointed to it.\n\n",
"msg_date": "Thu, 9 Oct 2003 13:23:25 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Jeff wrote:\n\n> On Thu, 9 Oct 2003, David Griffiths wrote:\n> \n> > 1) the MySQL docs are better (sorry - I found them easier to read, and\n> > more comprehensive; I had an easier time finding the answers I needed)\n> \n> Huh. I had the opposite experience. Each to his own.\n> I think everybody agrees PG needs a better tuning doc (or pointers to it,\n> or something).\n\nI think the issue is that Postgresql documentation is oriented towards DBA \ntypes, who already understand databases in general, so they can find what \nthey want, while MySQL docs are oriented towards dbms newbies, who don't \nknow much, if anything, about databases.\n\n",
"msg_date": "Thu, 9 Oct 2003 13:25:47 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs MySQL"
},
{
"msg_contents": "Hello,\n\n One of the other problems with techdocs is that it is way to top level \nheavy. There is a ton of information that is on the front page that\nreally shouldn't be. IMHO:\n\nThere shouldn't be a separate box for JDBC... There should be one that \nis a link to an intefaces page. The interfaces page should have\nsubsequent information about JDBC/Perl/Python/etc...\n\nAs much as I personally appreciate the Online Books box, it should be \none link that say Documentation. That link should open\na page that has resources for the books, and other online docs as well \nas a link to an articles page.\n\netc...\n\nSincerely,\n\nJoshua Drake\n\n\nRobert Treat wrote:\n\n>On Thu, 2003-10-09 at 14:14, Nick Fankhauser wrote:\n> \n>\n>>>Have you checked these pages? They've been posted on this list numerous\n>>>times:\n>>>http://techdocs.postgresql.org\n>>>http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>>>http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n>>>\n>>> \n>>>\n>>Josh- It would be great to have a link to those last two excellent resources\n>>from the techdocs area- perhaps from the \"optimizing\" section in\n>>http://techdocs.postgresql.org/oresources.php. Who should we suggest this\n>>to? (I submitted these using the form in that area, but you may have better\n>>connections.)\n>>\n>> \n>>\n>\n>Unfortunately techdocs is becoming more and more a bastard child since\n>no one can seem to agree on and actually implement a solution to it's\n>current woes. I've (quietly) complained about new articles getting\n>written by the community and posted to sites other than techdocs since I\n>think it makes it harder for folks to find useful information... \n>\n>Robert Treat\n> \n>\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-222-2783 - [email protected] - http://www.commandprompt.com\nEditor-N-Chief - PostgreSQl.Org - http://www.postgresql.org\n\n\n",
"msg_date": "Thu, 09 Oct 2003 12:30:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "I concur 100%. PostgreSQL was big and scary and MySQL seemed cute and\ncuddly, warm and fuzzy. Then I took my undergrad CS RDBMS course (a course\nthat focused on designing the backend software), and only then was I ready\nto appreciate and wield the battle axe that is PostgreSQL.\n\nHe also let me use PostgreSQL for my final project (the standard was\nOracle). I got an A. :)\n\nI do have to admit that I prefer OSS (and docs) better than proprietary. I\nhad some Informix work and that was not fun at all. So even though the MySQL\nis pink fuzzy bunnies, PostgreSQL is at least a brown fuzzy bunny [to me\nanyway].\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> scott.marlowe\n> Sent: Thursday, October 09, 2003 3:26 PM\n> To: Jeff\n> Cc: David Griffiths; [email protected]\n> Subject: Re: [PERFORM] PostgreSQL vs MySQL\n>\n>\n> On Thu, 9 Oct 2003, Jeff wrote:\n>\n> > On Thu, 9 Oct 2003, David Griffiths wrote:\n> >\n> > > 1) the MySQL docs are better (sorry - I found them easier to read, and\n> > > more comprehensive; I had an easier time finding the answers I needed)\n> >\n> > Huh. I had the opposite experience. Each to his own.\n> > I think everybody agrees PG needs a better tuning doc (or\n> pointers to it,\n> > or something).\n>\n> I think the issue is that Postgresql documentation is oriented\n> towards DBA\n> types, who already understand databases in general, so they can find what\n> they want, while MySQL docs are oriented towards dbms newbies, who don't\n> know much, if anything, about databases.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n",
"msg_date": "Thu, 09 Oct 2003 15:56:33 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs MySQL"
},
{
"msg_contents": "On Thu, 2003-10-09 at 13:30, David Griffiths wrote:\n> I also have to admit a bit of irritation reading this thread; there is a\n> fair number of incorrect statements on this thread that, while not\n> wrong, definately aren't right:\n> \n> \"Speed depends on the nature of use and the complexity of queries. If\n> you are doing updates of related tables, ACID is of vital importance and\n> MySQL doesn't provide it.\"\n> MySQL has ACID in InnoDB. \n\nActually it only kinda sorta has acid. As Jeff mentioned, and it can be\nexpanded upon, mysql has a nasty habit of transforming invalid data into\nsomething that will insert into a table and not telling you about it. I\nthink Josh mentioned reports that it ignores some constraint\ndefinitions. And then theres the whole mixing MyISAM and InnoDB tables\ncompletely breaks the ability to rollback transactions...\n\n> \n> \"using InnoDB tables (the only way to have foreign keys, transactions,\n> and row level locking for MySQL) makes MySQL slower and adds complexity\n> to tuning the database\"\n> Adding this: \"innodb_flush_method=O_DSYNC\" to the my.cnf made InnoDB as\n> fast as MyISAM in our tests. It doesn't turn off disk flushing; it's\n> just a flush method that might work better with different kernels and\n> drives; it's one of those \"play with this and see if it helps\"\n> parameters; there are lots of those in Postgres, it seems. There are 10\n> variables for tuning InnoDB (and you don't have to tune for MyISAM, so\n> it's actually a six-of-one, half-dozen-of-the-other). Setup between the\n> two seems to be about the same.\n\nWell, I've yet to see MySQL benchmark themselves vs. the big boys using\nInnoDB tables, I'm only guessing that it's because those tables are\nslower. (Well, guessing and calling upon experience) Sure there may be\nwork arounds, but that does add a certain complexity. (Bonus for us,\nPostgreSQL is just complex from the get go :-P )\n\n> \n> \"PostgreSQL supports constraints. MySQL doesn't; programmers need to\n> take care of that from the client side\"\n> Again, InnoDB supports constraints.\n> \n\nWe've seen evidence it doesn't. If they've fixed this great. Of course\nI'll quote from the mysql docs \n\n\"InnoDB allows you to drop any table even though that would break the\nforeign key constraints which reference the table.\" \n\nlast I knew it did this silently and without warning. there are other\nissues as well, so it's support is relative...\n\n> \"Transactions: We've been here before. Suffice to say, MySQL+InnoDB is\n> almost there. Plain ol' MySQL doesn't have it, which tells you something\n> about their philosophy towards database design.\"\n> InnoDB supports transactions very nicely, has the equivalent of WAL, and\n> one thing I really like: a tablespace (comprised of data files that can\n> be spread around multiple hard drives), and in a month or so, InnoDB\n> will support multiple tablespaces.\n> \n\nJust don't mix InnoDB and MyISAM tables together or you could end up in\na world of trouble... its unfortunate that this breaks one of the main\nkeys to building a DBMS, namely hiding implementation details from the\nend users. \n\n> Maybe the Postgres community needs an anti-FUD individual or two; people\n> that know both databases, and can provide the proper information for\n> answering questions like this. \n\nWell, among the major advocacy folk we do have a mantra about no FUD,\nbut these are public lists so we cant really stop people from posting. 
\nOf course this overlooks the fact that different people interpret\ndifferent information differently. (heh) Take this quote I saw posted\nin a non postgresql forum a while back: \"MySQL doesn't fully support\nsubqueries\" which of course created a slew of posts about FUD and\npostgresql users being idiots. If course, when the posted responded back\nwith the question \"Can mysql do subselects in the SELECT, FROM, and\nWHERE clauses like postgresql, and nest subselects within those\nsubselects?\" it stopped everyone in their tracks...\n\n> A section in the docs would help as well.\n\nIn the docs no, on techdocs, maybe. \n\n> Yes, I know many of the people advocating Postgres do not want to\n> compare themselves to MySQL (but rather to Oracle, Sybase, DB2, etc) ,\n> but the volume of responses on a thread like this indicates that the\n> comparison is going to happen regardless. Better to nip it in the bud\n> quickly than let it go on over 3-4 days.\n> \n\nIt was due to the help of postgresql users that the following site has\nbecome available: http://sql-info.de/mysql/gotchas.html\nI'd suggest you look it over if your trying to evaluate a switch from\nOracle to MySQL.\n\nAnd anyone is welcome, actually encouraged, to correct erroneous\ninformation they see posted about any system on these lists. God bless\nif you're willing to try and follow every list every day to watch for\nthese types of posts. \n\n\n> One last observation: someone looking at both databases, reading those\n> posts, might get a bad impression of Postgres based on the inconsistency\n> and incorrectness of some of the statements made about MySQL. If a\n> salesperson provides misinformation about a competitors product and you\n> find out about it, that salesperson has most likely lost a customer.\n> \n\nUnfortunate that you'd attribute anyone who posts on these lists as a\nsales person for postgresql...\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "09 Oct 2003 15:58:17 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs MySQL"
},
{
"msg_contents": "Scott,\n\n> any chance of getting the perf.html file from varlena folded into the main \n> documentation tree somewhere? it's a great document, and it would \n> definitely help if the tuning section of the main docs said \"For a more \n> thorough examination of postgresql tuning see this:\" and pointed to it.\n\nActually, I'm working on that this weekend.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Thu, 9 Oct 2003 13:01:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "> Thanks for being considerate, thourough, and honest about your opinions.\n> Particulary that you didn't simple depart in a huff.\n\nWhy would I depart in a huff? I was just trying to make a few objective\nobservations.\n\nI really have no biases; I like what I've seen in MySQL, and I like alot of\nthe more Oracle-like\nfeatures in Postgres.\n\n> > 4) we looked at MySQL first (we needed replication, and eRServer had not\n> > been open-sourced when we started looking)\n>\n> I can't do anything about that, now can I?\n\nMy point was that it's since been open-sourced; it just means I've looked\nlonger at\nMySQL, as it had replication when we started looking.\n\n> Have you checked these pages? They've been posted on this list numerous\n> times:\n> http://techdocs.postgresql.org\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nThose are much more instructive; I'm curious - why aren't then in the\nadministrator's\nsection of the docs?\n\n> We've been working on this on the advocacy list .... that is, giving an\n> accurate listing of PostgreSQL features not posessed by MySQL (same for\n> Oracle and DB2 as well, MySQL is just easier to start becuase we don't\nhave\n> to worry about being sued). I'd appreciate it if you'd take an interest\nin\n> that document and revise anything which is innaccurate or perjorative.\n\nI might be able to provide some insight, but I've only been working with\nMySQL for a month\nor so (Oracle for about 8 years).\n\n> > \"PostgreSQL supports constraints. MySQL doesn't; programmers need to\n> > take care of that from the client side\"\n> > Again, InnoDB supports constraints.\n>\n> Really? This is news. We did some tests on constraints on InnoDB, and\nfound\n> that while they parsed, they were not actually enforced. Was our test\nin\n> error?\n\nYou may have turned them off to load data? I've run into constraints when my\ndata-load script missed some rows in address_type. When it went to do the\naddress_list table, all rows that had the missing address_type failed, as\nthey\nshould. I saw no weakness in the constraints.\n\n\n> > Maybe the Postgres community needs an anti-FUD individual or two; people\n> > that know both databases, and can provide the proper information for\n> > answering questions like this. A section in the docs would help as well.\n> > Yes, I know many of the people advocating Postgres do not want to\n> > compare themselves to MySQL (but rather to Oracle, Sybase, DB2, etc) ,\n> > but the volume of responses on a thread like this indicates that the\n> > comparison is going to happen regardless. Better to nip it in the bud\n> > quickly than let it go on over 3-4 days.\n>\n> Would you care to volunteer? We'd be glad to have you.\n\nMaybe once all this database testing is done; it's extra work on top of an\nalready\nheavy load (add a new baby, and free time goes right down the toilet).\n\nI need to figure out my performance issues with Postgres, finish my\nbenchmark\nsuite, test a bunch of databases, argue with the CTO, and then start\nmigrating.\n\nI'll be sure to post my results to the [email protected]\nalong with\nthe tests.\n\nDavid.\n",
"msg_date": "Thu, 9 Oct 2003 13:26:22 -0700",
"msg_from": "David Griffiths <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "Josh,\n\nThe plan is to re-design techdocs based on Bricolage, allowing writers to \ncontribute easier. However, we got as far as installing Bric on David \nFetter's test server, and haven't worked on setting up templates yet.\n\nToo many things.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 9 Oct 2003 17:29:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "On Thu, 9 Oct 2003, David Griffiths wrote:\n\n> > > \"PostgreSQL supports constraints. MySQL doesn't; programmers need to\n> > > take care of that from the client side\"\n> > > Again, InnoDB supports constraints.\n> >\n> > Really? This is news. We did some tests on constraints on InnoDB, and\n> > found that while they parsed, they were not actually enforced. Was \n> > our test in error?\n> \n> You may have turned them off to load data? I've run into constraints\n> when my data-load script missed some rows in address_type. When it went\n> to do the address_list table, all rows that had the missing address_type\n> failed, as they should. I saw no weakness in the constraints.\n\nIt sounds like you talk about foreign keys only, while the previous writer \ntalkes about other constraints also. For example, in postgresql you \ncan do:\n\nCREATE TABLE foo (\n x int,\n\n CONSTRAINT bar CHECK (x > 5)\n);\n\nand then\n\n# INSERT INTO foo VALUES (4);\nERROR: ExecInsert: rejected due to CHECK constraint \"bar\" on \"foo\"\n\n\nI don't know MySQL, but I've got the impression from other posts on the\nlists that innodb supports foreign keys only. I might be wrong though.\n\n-- \n/Dennis\n\n",
"msg_date": "Fri, 10 Oct 2003 05:21:24 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFTOPIC: PostgreSQL vs MySQL"
},
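One way to settle the "are constraints actually enforced?" question raised earlier in the thread is to run the CHECK example just above against both servers and compare. Below is a rough harness, not a definitive test: it assumes local psql and mysql command-line clients, a scratch database named test on each side, and the TYPE=InnoDB spelling that was current for MySQL at the time. If the final SELECT on either side returns the row with x = 4, that server parsed the CHECK constraint but did not enforce it.

#!/bin/sh
# Sketch only -- client flags are the stock ones; add -U/-h for psql or
# -u/-p for mysql as your own setup requires.

echo "--- PostgreSQL ---"
psql -d test -c "DROP TABLE foo;"   # ignore the error if foo does not exist yet
psql -d test -c "CREATE TABLE foo (x int, CONSTRAINT bar CHECK (x > 5));"
psql -d test -c "INSERT INTO foo VALUES (4);"   # should be rejected
psql -d test -c "SELECT * FROM foo;"            # should come back empty

echo "--- MySQL / InnoDB ---"
mysql test -e "DROP TABLE IF EXISTS foo;"
mysql test -e "CREATE TABLE foo (x int, CHECK (x > 5)) TYPE=InnoDB;"
mysql test -e "INSERT INTO foo VALUES (4);"     # rejected only if CHECK is enforced
mysql test -e "SELECT * FROM foo;"              # a row here means CHECK was ignored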
{
"msg_contents": "David Griffiths wrote:\n>>Have you checked these pages? They've been posted on this list numerous\n>>times:\n>>http://techdocs.postgresql.org\n>>http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>>http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> Those are much more instructive; I'm curious - why aren't then in the\n> administrator's\n> section of the docs?\n\nBecause they are much more recent. Not even 6 months old. And lot of people \ndiffer on how exactly these tips applies. What goes in postgresql documentation \nis fact and nothing but facts. Clearly such tips do not have any place in \npostgresql documentation(At least in my opinion)\n\nA pointer might get added to postgresql documentation. That's about it at the most.\n\n Shridhar\n\n",
"msg_date": "Fri, 10 Oct 2003 13:13:09 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFTOPIC: PostgreSQL vs MySQL"
},
{
"msg_contents": "David Griffiths wrote:\n\n> This is a timely thread for myself, as I'm in the middle of testing \n> both databases as an Oracle replacement.\n> \n> As of this moment, I know more about MySQL (tuning, setup, features) \n> than I do about Postgres. Not because I like MySQL more, but because\n> \n> 1) the MySQL docs are better (sorry - I found them easier to read, and \n> more comprehensive; I had an easier time finding the answers I needed)\n> 2) there are more web pages devoted to MySQL (probably because it has \n> a bit more market share)\n> 3) there are more books on MySQL at the bookstore (I haven't had a \n> chance to pick up Bruce's book yet; it might be all the book I'd ever \n> need)\n> 4) we looked at MySQL first (we needed replication, and eRServer had \n> not been open-sourced when we started looking)\n> \n> With regards to #1, I'd like to specifically mention tuning - the docs \n> at http://www.postgresql.org/docs/7.3/static/runtime-config.html give \n> a basic explanation of the different options, but much more is needed \n> for tuning. I'm running into a problem with an update statement (that \n> uses a select in a sub-query) in Postgres - it's taking hours to run \n> (the equiv, using a multi-table update statement in MySQL instead of a \n> sub-query, takes all of 2 seconds). I'll be posting it later once I do \n> more reading to make sure I've done as much as I can to solve it myself.\n\n\nDavid,\n\nI think you have valid observations. And the issue regarding \nreplication has been quite a hot topic on occasion in the developer \nlists. I'm hoping at some point it would become part of the standard \nPostgreSQL package; but point in time recovery, PITR, is needed as a \nstepping stone to providing that functionality.\n\nHave you attempted the multi table update inside of a transaction for \nPostgreSQL yet and thus assuring the all of your updates are only \nvisible after the commit? Depending on the design and the nature of \nthe updates, their could be a race condition if the updates on one table \nare utilized by another process before the rest of the updates have \ncompleted.\n\nSets of updates in a single transaction can improve performance as well.\n\n> \n> I really agree with this post:\n> \n> \"I guess my point is simply this: instead of saying: \"okay we use \n> default settings that will run on _old_ hardware too\" we should go for \n> a little script that creates a \"still save but much better\" config \n> file. There's just no point in setting SHARED_BUFFERS to something \n> like 16 (what's the current default?) if the PC has >= 1 GB of RAM. \n> Setting it to 8192 would still be save, but 512 times better... ;-) \n> (IIRC 8192 would take 64 MB of RAM, which should be save if you leave \n> the default MAX_CONNECTIONS.)\" It provides examples, and some real \n> numbers to help someone new to the database take an initial crack at \n> tuning. Remember, you're trying to compete with the big-guys (Oracle, \n> etc), so providing info that an Oracle DBA needs is pretty critical. \n> I'm currently at a complete loss for tuning Postgres (it seems to do \n> things very differently than both Oracle and MySQL).\n> \n> \n> I also have to admit a bit of irritation reading this thread; there is \n> a fair number of incorrect statements on this thread that, while not \n> wrong, definately aren't right:\n> \n> \"Speed depends on the nature of use and the complexity of queries. 
If \n> you are doing updates of related tables, ACID is of vital importance \n> and MySQL doesn't provide it.\"\n> MySQL has ACID in InnoDB. I've found that MySQL is actually very fast \n> on complex queries w/InnoDB (six tables, 1 million rows, two of the \n> joins are outer-joins. In fact, I can get InnoDB to be almost as fast \n> as MyISAM. Complex updates are also very very fast. We have not tried \n> flooding either database with dozens of complex statements from \n> multiple clients; that's coming soon, and from what I've read, MySQL \n> won't do too well.\n> \n> \"using InnoDB tables (the only way to have foreign keys, transactions, \n> and row level locking for MySQL) makes MySQL slower and adds \n> complexity to tuning the database\"\n> Adding this: \"innodb_flush_method=O_DSYNC\" to the my.cnf made InnoDB \n> as fast as MyISAM in our tests. It doesn't turn off disk flushing; \n> it's just a flush method that might work better with different kernels \n> and drives; it's one of those \"play with this and see if it helps\" \n> parameters; there are lots of those in Postgres, it seems. There are \n> 10 variables for tuning InnoDB (and you don't have to tune for MyISAM, \n> so it's actually a six-of-one, half-dozen-of-the-other). Setup between \n> the two seems to be about the same.\n> \n> \"PostgreSQL supports constraints. MySQL doesn't; programmers need to \n> take care of that from the client side\"\n> Again, InnoDB supports constraints.\n> \n> \"Transactions: We've been here before. Suffice to say, MySQL+InnoDB is \n> almost there. Plain ol' MySQL doesn't have it, which tells you \n> something about their philosophy towards database design.\"\n> InnoDB supports transactions very nicely, has the equivalent of WAL, \n> and one thing I really like: a tablespace (comprised of data files \n> that can be spread around multiple hard drives), and in a month or so, \n> InnoDB will support multiple tablespaces.\n> \n> \n> To be fair, here are a few MySQL \"bad-things\" that weren't mentioned:\n> \n> 1) InnoDB can't do a hot-backup with the basic backup tools. To \n> hot-backup an InnoDB database, you need to pay $450 US per database \n> per year ($1150 per database perpetual) for a proprietary hot-backup tool\n> 2) InnoDB can't do full-text searching.\n> 3) I see alot more corrupt-database bugs on the MySQL lists (most are \n> MyISAM, but a few InnoDB bugs pop up from time to time) - way more \n> than I see on the Postgres lists.\n> 4) There are some really cranky people on the MySQL lists; the \n> Postgres lists seem to be much more effective (esp. with people like \n> Tom Lane). Maybe it's because they get alot of dumb questions, as \n> people unfamiliar with databases turn to MySQL first?\n> \n> Maybe the Postgres community needs an anti-FUD individual or two; \n> people that know both databases, and can provide the proper \n> information for answering questions like this. A section in the docs \n> would help as well. Yes, I know many of the people advocating Postgres \n> do not want to compare themselves to MySQL (but rather to Oracle, \n> Sybase, DB2, etc) , but the volume of responses on a thread like this \n> indicates that the comparison is going to happen regardless. Better to \n> nip it in the bud quickly than let it go on over 3-4 days.\n> \n> One last observation: someone looking at both databases, reading those \n> posts, might get a bad impression of Postgres based on the \n> inconsistency and incorrectness of some of the statements made about \n> MySQL. 
If a salesperson provides misinformation about a competitors \n> product and you find out about it, that salesperson has most likely \n> lost a customer.\n> \n> Anyway, I hope I haven't offended anyone - I'm not trying to troll or \n> flame, but rather just give some constructive criticism from someone \n> outside both the MySQL and Postgres camps.\n> \n> David\n> \n\n\n\n",
"msg_date": "Fri, 10 Oct 2003 07:59:49 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL vs MySQL"
}
] |
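The "little script that creates a still safe but much better config file" wished for in the paragraph quoted in the last message above does not have to be much more than the sketch below. It only prints suggestions rather than editing postgresql.conf, it assumes Linux's /proc/meminfo, and the ratios are illustrative rules of thumb rather than numbers from the official documentation.

#!/bin/sh
# Sketch only -- the ratios are starting points, not recommendations;
# always verify against your real workload.
PAGE_KB=8    # PostgreSQL block size in KB

mem_kb=`grep MemTotal /proc/meminfo | awk '{print $2}'`

# ~6% of RAM for shared_buffers, and assume roughly half of RAM ends up
# as OS disk cache, which is what effective_cache_size describes.
shared_buffers=`expr $mem_kb / 16 / $PAGE_KB`
effective_cache_size=`expr $mem_kb / 2 / $PAGE_KB`

echo "# suggested starting points for ${mem_kb} kB of RAM"
echo "shared_buffers = $shared_buffers"
echo "effective_cache_size = $effective_cache_size"

On a machine reporting a full 1 GB (1048576 kB) this works out to shared_buffers = 8192, i.e. the 64 MB figure used as an example in the quoted text.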
[
{
"msg_contents": "I am very interested in the non-Cygwin windows port. Looking over the 7.4\nbeta release, it looks like the code made it in. I read through the win32\nrelated docs, to find out that they are out-of date instructions (11/2002).\nI do hope these get updated with the native windows stuff.\n\nBut I came here to ask more about the performance of pg-w32. Did it take a\nhit? Is it faster (than Cygwin, than Unix)? Stability? I saw there were some\nmailings about file-moving race conditions, links and such.\n\nThanks.\n\nJason Hihn\nPaytime Payroll\n\n\n",
"msg_date": "Thu, 09 Oct 2003 14:37:41 -0400",
"msg_from": "Jason Hihn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any 7.4 w32 numbers in yet?"
},
{
"msg_contents": "Jason Hihn wrote:\n> I am very interested in the non-Cygwin windows port. Looking over the 7.4\n> beta release, it looks like the code made it in. I read through the win32\n> related docs, to find out that they are out-of date instructions (11/2002).\n> I do hope these get updated with the native windows stuff.\n> \n> But I came here to ask more about the performance of pg-w32. Did it take a\n> hit? Is it faster (than Cygwin, than Unix)? Stability? I saw there were some\n> mailings about file-moving race conditions, links and such.\n\nSee:\n\n\thttp://momjian.postgresql.org/main/writings/pgsql/win32.html\n\nWe don't have it running yet. It will be running in 7.5.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Oct 2003 14:40:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any 7.4 w32 numbers in yet?"
}
] |
[
{
"msg_contents": "Boy, I must be getting annoying by now huh?\n\nAnyway, after the joys of Solaris being fast I'm moving onto another area\n- backup & restore. I've been checking the archives and haven't seen any\n\"good\" tips for backing up big databases (and more importantly,\nrestoring).\n\nI've noticed while doing a backup (with both -Fc and regular recipe) that\nmy IO is no where near being stressed. According to vmstat, it sits\naround reading about 512kB/sec (with occasional spikes) and every 5-6\nseconds it writes out a 3MB hunk.\n\nSo as a test I decided to cp a 1GB file and got a constant read speed of\n20MB/sec and the writes. well. were more sporatic (buffering most likely)\nand it would write out 60MB every 3 seconds.\n\nAnd. then.. on the restore I notice similar things - IO hardly being\nstressed at all... reading in at ~512kB/sec and every now and then writing\nout a few MB.\n\n\nSo, I've been thinking of various backup/restore strategies... some I'm\nsure some people do, some need code written and may be controvertial..\n\nIdea #1:\nUse an LVM and take a snapshop - archive that.\n>From the way I see it. the downside is the LVM will use a lot of space\nuntil the snapshot is removed. Also PG may be in a slightly inconsistant\nstate - but this should \"appear\" to PG the same as if the power went out.\n\nFor restore, simply unarchive this snapshot and point postgres at it. Let\nit recover and you are good to go.\n\nLittle overhead from what I see...\nI'm leaning towards this method the more I think of it.\n\nIdea #2:\n\na new program/internal \"system\". Lets call it pg_backup. It would generate\na very fast backup (that restores very fast) at the expense of disk space.\nPretty much what we would do is write out new copies of all the pages in\nthe db - both indexes and tables.\n\nthe pro's to this is it does not depend on an LVM and therefore is\naccessable to all platforms. it also has the other benfets mentioned\nabove, except speed.\n\nFor a restore PG would need something like a 'restore mode' where we can\njust have it pump pages into it somehow.. It would not have to build\nindex, check constraints, and all that because by definition the backup\nwould contain valid data.\n\nThe downside for both of these are that the backup is only good for that\nversion of PG on that architecture. Speaking in Informix world this is\nhow it is - it has a fast backup & fast restore that does essentially #2\nand then it has export/import options (works like our current pg_dump and\nrestore).\n\nand oh yeah -I've tried disabling fsync on load and while it did go faster\nit was only 2 minutes faster (9m vs 11m).\n\nAny thoughts on this? What do you ther folk with big db's do?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 9 Oct 2003 15:34:03 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "backup/restore - another area. "
},
{
"msg_contents": "\n\nJeff <[email protected]> writes:\n\n> Idea #1:\n> Use an LVM and take a snapshop - archive that.\n> From the way I see it. the downside is the LVM will use a lot of space\n> until the snapshot is removed. Also PG may be in a slightly inconsistant\n> state - but this should \"appear\" to PG the same as if the power went out.\n> \n> For restore, simply unarchive this snapshot and point postgres at it. Let\n> it recover and you are good to go.\n> \n> Little overhead from what I see...\n> I'm leaning towards this method the more I think of it.\n\nI don't quite follow your #2 so I can only comment on the above idea of using\nan LVM snapshot. If you have the hardware and the LVM-fu to be able to do this\nproperly I would recommend it.\n\nWe actually used to do this with veritas even on Oracle which has full online\nbackup support simply because it was much much faster and the snapshot could\nbe backed up during peak times without any significant performance impact.\nThat's partly because Veritas and Hitachi storage systems kick butt though.\nDepending on the systems you're considering you may or may not have nearly the\nsame success.\n\nNote, you should *test* this backup. You're depending on some subtle semantics\nwith this. If you do it slightly wrong or the LVM does something slightly\nwrong and you end up with an inconsistent snapshot or missing some critical\nfile the whole backup could be useless.\n\nAlso, I wouldn't consider this a replacement for having a pg_dump export. In a\ncrisis when you want to restore everything *exactly* the way things were you\nwant the complete filesystem snapshot. But if you just want to load a table\nthe way it was the day before to compare, or if you want to load a test box to\ndo some performance testing, or whatever, you'll need the logical export.\n\n-- \ngreg\n\n",
"msg_date": "09 Oct 2003 19:32:22 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backup/restore - another area."
},
{
"msg_contents": "On 9 Oct 2003, Greg Stark wrote:\n\n> I don't quite follow your #2 so I can only comment on the above idea of using\n> an LVM snapshot. If you have the hardware and the LVM-fu to be able to do this\n> properly I would recommend it.\n>\nJust to be a bit clearer incase it was my wording:\n\nMethod #2 is nearly identical to method #1, except that no logical volume\nmanager is needed. We cannot just cp $PGDATA because it is (or could be)\nchanging and we need to take data from a constitant point. So what we do\nis write code that understands xids and all that and simply \"dumps\" out\nthe pages of data in a raw form that can be quickly reloaded. The key is\nthat the data be in a somewhat consistant state. Method #2 requires a ton\nmore work but it would be able to run on platforms without an lvm (or\nrequiring the use of an lvm). Does that make more sense?\n\nThe idea here is to backup & restore as fast as possible, throwing away\nsome things like inter-version compat and whatnot. Being able to add\n\"fast backup / restore\" is a good thing in the list of enterprise\nfeatures.\n\n> Also, I wouldn't consider this a replacement for having a pg_dump export. In a\n> crisis when you want to restore everything *exactly* the way things were you\n> want the complete filesystem snapshot. But if you just want to load a table\n> the way it was the day before to compare, or if you want to load a test box to\n> do some performance testing, or whatever, you'll need the logical export.\n>\n\nYeah, a pg_dump now and then would be useful (and safe).\nIf you wanted to get fancy schmancy you could take the snapshot, archive\nit, transfer it and unarchive it on machine B. (We actually used to do\nthat here until machine B no longer had the capacity to hold all our data\n:)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Fri, 10 Oct 2003 08:08:37 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: backup/restore - another area."
},
{
"msg_contents": "Jeff,\n\nI'm curious to what kind of testing you've done with LVM. I'm not\ncurrently trying any backup/restore stuff, but I'm running our DBT-2\nworkload using LVM. I've started collecting vmstat, iostat, and\nreadprofile data, initially running disktest to gauge the performance.\n\nFor anyone curious, I have some data on a 14-disk volume here:\n\thttp://developer.osdl.org/markw/lvm/results.4/log/\n\t\nand a 52-disk volume here:\n\thttp://developer.osdl.org/markw/lvm/results.5/data/\n\nMark\n\n>Jeff <[email protected]> writes:\n>\n> Idea #1:\n> Use an LVM and take a snapshop - archive that.\n> From the way I see it. the downside is the LVM will use a lot of space\n> until the snapshot is removed. Also PG may be in a slightly inconsistant\n> state - but this should \"appear\" to PG the same as if the power went out.\n> \n> For restore, simply unarchive this snapshot and point postgres at it. Let\n> it recover and you are good to go.\n> \n> Little overhead from what I see...\n> I'm leaning towards this method the more I think of it.\n",
"msg_date": "Tue, 14 Oct 2003 15:18:29 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: backup/restore - another area."
},
{
"msg_contents": "On Tue, 14 Oct 2003, [email protected] wrote:\n\n> I'm curious to what kind of testing you've done with LVM. I'm not\n> currently trying any backup/restore stuff, but I'm running our DBT-2\n> workload using LVM. I've started collecting vmstat, iostat, and\n> readprofile data, initially running disktest to gauge the performance.\n>\n[added -admin to this, since this is very relevant there]\n\nI was going to post this data yesterday, but I had severe inet issues.\n\nSo, I tried this out with lvm2 on 2.4.21 on a 2xp2-450 with 2 disks.\n(I just looked at your 14 and 52 disk data. drool.)\n\nSo I have a db which is about 3.2GB on disk.\nAll backups were done to an nfs mount, but I ran a network monitor to\ncheck bandwidth usage. I note where things were io bound.\n\nbacking up:\n\npg_dump: 18m [cpu bound]\npg_dump | gzip -1: 18m [cpu bound]\n\nsnapshot, then tar: 4m [io bound]\nsnapshot, then tar | gzip: 21m [cpu bound]\n\nThe times for a compressed backup are a bit slower for snapshots, but this\nis where the snapshot method wins tacos - restore.\n\nrestore:\n\npsql: 158m\nsnapshot: 8m\n\nYes folks, 8m.\nWhen I started PG back up it checked the WAL and got itself back online.\n\nThe benefits of the pg_dump backup afaict are that the data is in a format\nreadable to anything and is [mostly] cross-pg compatible. The downside is\nit seems to be quite slow and restoring it can be long and tiresome.\n\nThe benefits of the snapshot are that backups are very, very quick and\nrestore is very, very quick (It won't need to re-enable foriegn keys, no\nneed to rebuild indexes, no need to re-vacuum analyze). The downside is\nthis method will only work on that specific version of PG and it isn't the\n\"cleanest\" thing in the world since you are essentially simulating a power\nfailure to PG. Luckly the WAL works like a champ. Also, these backups can\nbe much larger since it has to include the indexes as well. but this is a\nprice you have to pay.\n\nI did have some initial problems with snapshots & corruption but it turned\nout to be user-error on my part.\n\nCOOL HUH?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Thu, 16 Oct 2003 07:50:27 -0400 (EDT)",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
{
"msg_contents": "Jeff,\n\n> The downside is\n> this method will only work on that specific version of PG and it isn't the\n> \"cleanest\" thing in the world since you are essentially simulating a power\n> failure to PG. Luckly the WAL works like a champ. Also, these backups can\n> be much larger since it has to include the indexes as well. but this is a\n> price you have to pay.\n\nThe other downside is, of course, that the database needs to be shut down.\n\n> COOL HUH?\n\nCertainly very useful in the DBA's arsenal of backup tools.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Oct 2003 09:49:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
{
"msg_contents": "On Thu, 16 Oct 2003 09:49:59 -0700\nJosh Berkus <[email protected]> wrote:\n\n> Jeff,\n> \n> > The downside is\n> > this method will only work on that specific version of PG and it\n> > isn't the\"cleanest\" thing in the world since you are essentially\n> > simulating a power failure to PG. Luckly the WAL works like a champ.\n> > Also, these backups can be much larger since it has to include the\n> > indexes as well. but this is a price you have to pay.\n> \n> The other downside is, of course, that the database needs to be shut\n> down.\n> \n\nI left the DB up while doing this.\n\nEven had a program sitting around committing data to try and corrupt\nthings. (Which is how I discovered I was doing the snapshot wrong)\n\nYou could do pg_ctl stop; snapshot; pg_ctls tart for a \"clean\" image. \n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Thu, 16 Oct 2003 13:06:37 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
{
"msg_contents": "Jeff,\n\n> I left the DB up while doing this.\n>\n> Even had a program sitting around committing data to try and corrupt\n> things. (Which is how I discovered I was doing the snapshot wrong)\n\nReally? I'm unclear on the method you're using to take the snapshot, then; I \nseem to have missed a couple posts on this thread. Want to refresh me?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Oct 2003 10:09:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
{
"msg_contents": "On Thu, 16 Oct 2003 10:09:27 -0700\nJosh Berkus <[email protected]> wrote:\n\n> Jeff,\n> \n> > I left the DB up while doing this.\n> >\n> > Even had a program sitting around committing data to try and corrupt\n> > things. (Which is how I discovered I was doing the snapshot wrong)\n> \n> Really? I'm unclear on the method you're using to take the snapshot,\n> then; I seem to have missed a couple posts on this thread. Want to\n> refresh me?\n> \n\nI have a 2 disk stripe LVM on /dev/postgres/pgdata/\n\nlvcreate -L4000M -s -n pg_backup /dev/postgres/pgdata\nmount /dev/postgres/pg_backup /pg_backup \ntar cf - /pg_backup | gzip -1 > /squeegit/mb.backup \numount /pg_backup;\nlvremove -f /dev/postgres/pg_backup;\n\nIn a nutshell an LVM snapshot is an atomic operation that takes, well, a\nsnapshot of hte FS as it was at that instant. It does not make a 2nd\ncopy of the data. This way you can simply tar up the pgdata directory\nand be happy as the snapshot will not be changing due to db activity.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Thu, 16 Oct 2003 13:37:27 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
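The commands above, compressed into a small script for reference. This is a sketch, not a tested recipe: the volume group, snapshot size, mount point and archive path are the ones from this thread and will need adjusting, and the -L size must be large enough to absorb every write made to pgdata while the snapshot exists.

#!/bin/sh
# Sketch only -- names and sizes copied from the thread above.
set -e

VG=/dev/postgres           # volume group holding the pgdata logical volume
SNAP=pg_backup             # name of the throwaway snapshot volume
MNT=/pg_backup             # where the snapshot gets mounted
OUT=/squeegit/mb.backup    # archive destination (an NFS mount in the thread)

lvcreate -L4000M -s -n $SNAP $VG/pgdata
mkdir -p $MNT
mount $VG/$SNAP $MNT

# gzip -1 trades compression ratio for speed; drop the gzip entirely for
# the fastest (but largest) backup.
tar cf - $MNT | gzip -1 > $OUT

umount $MNT
lvremove -f $VG/$SNAP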
{
"msg_contents": "\n> > Jeff,\n> >\n> > > The downside is\n> > > this method will only work on that specific version of PG and it\n> > > isn't the\"cleanest\" thing in the world since you are essentially\n> > > simulating a power failure to PG. Luckly the WAL works like a champ.\n> > > Also, these backups can be much larger since it has to include the\n> > > indexes as well. but this is a price you have to pay.\n> >\n> > The other downside is, of course, that the database needs to be shut\n> > down.\n> >\n>\n> I left the DB up while doing this.\n>\n> Even had a program sitting around committing data to try and corrupt\n> things. (Which is how I discovered I was doing the snapshot wrong)\n>\n> You could do pg_ctl stop; snapshot; pg_ctls tart for a \"clean\" image.\n>\n\nSince this seems to work for you,\nwould you be kind enough to post the shell script for doing the snapshot with\nLVM.\n\nRegards\nDonald Fraser\n\n",
"msg_date": "Thu, 16 Oct 2003 23:35:48 +0100",
"msg_from": "\"Donald Fraser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
},
{
"msg_contents": "On Thu, 16 Oct 2003 23:35:48 +0100\n\"Donald Fraser\" <[email protected]> wrote:\n\n> \n> Since this seems to work for you,\n> would you be kind enough to post the shell script for doing the\n> snapshot with LVM.\n> \n\nAhh, I posted it to -perform. Guess it didn't make it here.\nI have a 2 disk striped LVM as /dev/postgresql/pgdata\n\nHere's what I do:\nlvcreate -L4000M -s -n pg_backup /dev/postgres/pgdata \nmount /dev/postgres/pg_backup /pg_backup\ntar cf - /pg_backup | gzip -1 > /squeegit/mb.backup \numount /pg_backup;\nlvremove -f/dev/postgres/pg_backup;\n\nThe key is that -L that tells it how big to make htings. If your -L is\nsmaller than the actual size of the volume you'll get corruption (as I\nfound out). \n\nThe restore is to simply take pg down, rm $PGDATA and untar mb.backup\ninto $PGDATA, start up PG and thats it. \n\nGodo luck - be sure to test it out first!\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Fri, 17 Oct 2003 07:45:53 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] backup/restore - another area."
}
] |
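The matching restore path described in the thread (take PG down, clear $PGDATA, untar, start PG and let the WAL replay), written out as a sketch. The PGDATA path, the pg_backup/ prefix inside the archive and the permission fix-up are assumptions about a typical setup; run the pg_ctl steps as the postgres user, and remember this only restores onto the same PostgreSQL version and architecture that produced the snapshot.

#!/bin/sh
# Sketch only -- paths are assumptions; adjust before use.
set -e

PGDATA=/var/lib/pgsql/data      # the cluster's data directory
OUT=/squeegit/mb.backup         # archive written by the snapshot backup

pg_ctl stop -D $PGDATA -m fast

rm -rf $PGDATA
mkdir -p $PGDATA
cd $PGDATA

# The archive was rooted at the /pg_backup mount point, so its members
# unpack under a pg_backup/ prefix; move them into place afterwards.
gzip -dc $OUT | tar xf -
mv pg_backup/* .
rmdir pg_backup
chmod 700 $PGDATA

# On startup PostgreSQL replays the WAL, exactly as after a power failure.
pg_ctl start -D $PGDATA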
[
{
"msg_contents": "Hello,\n\nI am running 7.3.2 RPMs on RH9, on a celeron 1.7 w/ 1gig ram.\n\nI have a table that has 6.9 million rows, 2 columns, and an index on \neach column. When I run:\n\nSELECT DISTINCT column1 FROM table\n\nIt is very, very slow (10-15 min to complete). An EXPLAIN shows no \nindexes are being used.\n\nIs there any way to speed this up, or is that DISTINCT going to keep \nhounding me?\n\nI checked the mailing list, and didn't see anything like this.\n\nAny tips or hints would be greatly appreciated. Thanks for your help!\nSeth\n\n",
"msg_date": "Thu, 9 Oct 2003 23:41:36 -1000",
"msg_from": "Seth Ladd <[email protected]>",
"msg_from_op": true,
"msg_subject": "way to speed up a SELECT DISTINCT?"
},
{
"msg_contents": "On Thu, 9 Oct 2003, Seth Ladd wrote:\n\n> Hello,\n> \n> I am running 7.3.2 RPMs on RH9, on a celeron 1.7 w/ 1gig ram.\n> \n> I have a table that has 6.9 million rows, 2 columns, and an index on \n> each column. When I run:\n> \n> SELECT DISTINCT column1 FROM table\n> \n> It is very, very slow (10-15 min to complete). An EXPLAIN shows no \n> indexes are being used.\n> \n> Is there any way to speed this up, or is that DISTINCT going to keep \n> hounding me?\n> \n> I checked the mailing list, and didn't see anything like this.\n> \n> Any tips or hints would be greatly appreciated. Thanks for your help!\n> Seth\n> \n> \n\tTry group by instead. I think this is an old bug its fixed in \n7.3.2 which I'm using.\n\nPeter Childs\n`\n\n\npeter@bernardo:express=# explain select distinct region from region;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Unique (cost=0.00..4326.95 rows=9518 width=14)\n -> Index Scan using regionview_region on region (cost=0.00..4089.00 \nrows=95183 width=14)\n(2 rows)\n\npeter@bernardo:express=# explain select distinct region from region group \nby region;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Unique (cost=0.00..4350.75 rows=952 width=14)\n -> Group (cost=0.00..4326.95 rows=9518 width=14)\n -> Index Scan using regionview_region on region \n(cost=0.00..4089.00 rows=95183 width=14)\n(3 rows)\n\n\n\n",
"msg_date": "Fri, 10 Oct 2003 11:07:23 +0100 (BST)",
"msg_from": "Peter Childs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: way to speed up a SELECT DISTINCT?"
},
{
"msg_contents": ">> Is there any way to speed this up, or is that DISTINCT going to keep\n>> hounding me?\n>>\n>> I checked the mailing list, and didn't see anything like this.\n>>\n>> Any tips or hints would be greatly appreciated. Thanks for your help!\n>> Seth\n>>\n>>\n> \tTry group by instead. I think this is an old bug its fixed in\n> 7.3.2 which I'm using.\n>\n> Peter Childs\n> `\n>\n>\n> peter@bernardo:express=# explain select distinct region from region;\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -----------------------\n> Unique (cost=0.00..4326.95 rows=9518 width=14)\n> -> Index Scan using regionview_region on region \n> (cost=0.00..4089.00\n> rows=95183 width=14)\n> (2 rows)\n\nThanks for the tip, I'll give this a shot soon. I am curious, your \nexample above does not use GROUP BY yet you have an INDEX SCAN. I am \nusing a similar query, yet I get a full table scan. I wonder how they \nare different?\n\nI'll try the group by anyway.\n\nThanks,\nSeth\n\n",
"msg_date": "Fri, 10 Oct 2003 00:50:48 -1000",
"msg_from": "Seth Ladd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: way to speed up a SELECT DISTINCT?"
},
{
"msg_contents": "Seth Ladd wrote:\n\n>> peter@bernardo:express=# explain select distinct region from region;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------- \n>> -----------------------\n>> Unique (cost=0.00..4326.95 rows=9518 width=14)\n>> -> Index Scan using regionview_region on region (cost=0.00..4089.00\n>> rows=95183 width=14)\n>> (2 rows)\n> \n> \n> Thanks for the tip, I'll give this a shot soon. I am curious, your \n> example above does not use GROUP BY yet you have an INDEX SCAN. I am \n> using a similar query, yet I get a full table scan. I wonder how they \n> are different?\n\nHave you tuned your shared buffers and effective cache correctly?\n\n Shridhar\n\n",
"msg_date": "Fri, 10 Oct 2003 16:30:30 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: way to speed up a SELECT DISTINCT?"
},
{
"msg_contents": "> Thanks for the tip, I'll give this a shot soon. I am curious, your\n> example above does not use GROUP BY yet you have an INDEX SCAN. I am\n> using a similar query, yet I get a full table scan. I wonder how they\n> are different?\n\nPlease send us the results of EXPLAIN ANALYZE the query. The EXPLAIN\nresults usually aren't too interesting for degenerate queries.\n\nAlso, make sure you have run ANALYZE on your database.\n\nChris\n\n\n",
"msg_date": "Fri, 10 Oct 2003 19:05:27 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: way to speed up a SELECT DISTINCT?"
},
{
"msg_contents": "On Fri, 10 Oct 2003, Seth Ladd wrote:\n\n> >> Is there any way to speed this up, or is that DISTINCT going to keep\n> >> hounding me?\n> >>\n> >> I checked the mailing list, and didn't see anything like this.\n> >>\n> >> Any tips or hints would be greatly appreciated. Thanks for your help!\n> >> Seth\n> >>\n> >>\n> > \tTry group by instead. I think this is an old bug its fixed in\n> > 7.3.2 which I'm using.\n> >\n> > Peter Childs\n> > `\n> >\n> >\n> > peter@bernardo:express=# explain select distinct region from region;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------- \n> > -----------------------\n> > Unique (cost=0.00..4326.95 rows=9518 width=14)\n> > -> Index Scan using regionview_region on region \n> > (cost=0.00..4089.00\n> > rows=95183 width=14)\n> > (2 rows)\n> \n> Thanks for the tip, I'll give this a shot soon. I am curious, your \n> example above does not use GROUP BY yet you have an INDEX SCAN. I am \n> using a similar query, yet I get a full table scan. I wonder how they \n> are different?\n> \n> I'll try the group by anyway.\n> \n\tIts a guess but ANALYSE might help. `\n\nPeter Childs\n\n",
"msg_date": "Fri, 10 Oct 2003 12:17:10 +0100 (BST)",
"msg_from": "Peter Childs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: way to speed up a SELECT DISTINCT?"
}
] |
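To act on the advice in this thread (Christopher's request for EXPLAIN ANALYZE output and the reminders to run ANALYZE), something like the sketch below compares the two formulations from the command line. The database, table and column names are placeholders standing in for the ones in the original post.

#!/bin/sh
# Sketch only -- substitute your own database, table and column names.
DB=test
TABLE=bigtable
COL=column1

# Refresh planner statistics first; stale or missing statistics are a
# common reason for a full table scan where an index scan was expected.
psql -d $DB -c "ANALYZE $TABLE;"

# The original query.
psql -d $DB -c "EXPLAIN ANALYZE SELECT DISTINCT $COL FROM $TABLE;"

# The GROUP BY rewrite suggested earlier in the thread.
psql -d $DB -c "EXPLAIN ANALYZE SELECT $COL FROM $TABLE GROUP BY $COL;"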